Message ID | 20201105143431.1874789-19-npiggin@gmail.com (mailing list archive) |
---|---|
State | Superseded |
Headers | show |
Series | powerpc: interrupt wrappers | expand |
Context | Check | Description |
---|---|---|
snowpatch_ozlabs/apply_patch | success | Successfully applied on branch powerpc/merge (58b9041431fcf9f3f87c8def87ca82dd54cf80dd) |
snowpatch_ozlabs/build-ppc64le | success | Build succeeded and removed 5 sparse warnings |
snowpatch_ozlabs/build-ppc64be | fail | Build failed! |
snowpatch_ozlabs/build-ppc64e | success | Build succeeded |
snowpatch_ozlabs/build-pmac32 | fail | Build failed! |
snowpatch_ozlabs/checkpatch | warning | total: 2 errors, 11 warnings, 3 checks, 42 lines checked |
snowpatch_ozlabs/needsstable | success | Patch has no Fixes tags |
Hi Nicholas,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on v5.10-rc2 next-20201105]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Nicholas-Piggin/powerpc-interrupt-wrappers/20201105-231909
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-allyesconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/561aae13dd3400b772b91d60681d55937e046cbc
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Nicholas-Piggin/powerpc-interrupt-wrappers/20201105-231909
git checkout 561aae13dd3400b772b91d60681d55937e046cbc
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=powerpc
If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
{standard input}: Assembler messages:
>> {standard input}:662: Error: unsupported relocation against r13
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
On 05/11/2020 at 15:34, Nicholas Piggin wrote:
> Christophe asked about doing this, most of the code is still in
> asm but maybe it's slightly nicer? I don't know if it's worthwhile.

Er... I don't think I was asking for that, but why not, see later comments.

At first I was just asking to write the following in C:

+
+	.globl power4_idle_nap_return
+power4_idle_nap_return:
+	blr

In other words, instead of the above do somewhere something like:

void power4_idle_nap_return(void)
{
}

> ---
>  arch/powerpc/kernel/idle.c        | 25 ++++++++++++++++++++-----
>  arch/powerpc/kernel/idle_book3s.S | 22 ----------------------
>  2 files changed, 20 insertions(+), 27 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
> index ae0e2632393d..849e77a45915 100644
> --- a/arch/powerpc/kernel/idle.c
> +++ b/arch/powerpc/kernel/idle.c
> @@ -72,6 +72,9 @@ int powersave_nap;
>  #ifdef CONFIG_PPC_970_NAP
>  void power4_idle(void)
>  {
> +	unsigned long msr_idle = MSR_KERNEL|MSR_EE|MSR_POW;
> +	unsigned long tmp1, tmp2;
> +
>  	if (!cpu_has_feature(CPU_FTR_CAN_NAP))
>  		return;
> 
> @@ -84,13 +87,25 @@ void power4_idle(void)
>  	if (cpu_has_feature(CPU_FTR_ALTIVEC))
>  		asm volatile("DSSALL ; sync" ::: "memory");
> 
> -	power4_idle_nap();
> -
> +	asm volatile(
> +"	ld	%0,PACA_THREAD_INFO(r13)	\n"
> +"	ld	%1,TI_LOCAL_FLAGS(%0)		\n"
> +"	ori	%1,%1,_TLF_NAPPING		\n"
> +"	std	%1,TI_LOCAL_FLAGS(%0)		\n"

Can't this just be:

	current_thread_info()->local_flags |= _TLF_NAPPING;

>  	/*
> -	 * power4_idle_nap returns with interrupts enabled (soft and hard).
> -	 * to our caller with interrupts enabled (soft and hard). Our caller
> -	 * can cope with either interrupts disabled or enabled upon return.
> +	 * NAPPING bit is set, from this point onward nap_adjust_return()
> +	 * will cause interrupts to return to power4_idle_nap_return.
>  	 */
> +"1:	sync					\n"
> +"	isync					\n"
> +"	mtmsrd	%2				\n"
> +"	isync					\n"
> +"	b	1b				\n"

And this:

	for (;;) {
		mb();
		isync();
		mtmsr(MSR_KERNEL|MSR_EE|MSR_POW);
		isync();
	}

> +"	.globl power4_idle_nap_return		\n"
> +"power4_idle_nap_return:			\n"
> +	: "=r"(tmp1), "=r"(tmp2)
> +	: "r"(msr_idle)
> +	);
>  }
>  #endif

Christophe
Excerpts from Christophe Leroy's message of November 7, 2020 7:43 pm:
> 
> On 05/11/2020 at 15:34, Nicholas Piggin wrote:
>> Christophe asked about doing this, most of the code is still in
>> asm but maybe it's slightly nicer? I don't know if it's worthwhile.
> 
> Er... I don't think I was asking for that, but why not, see later comments.
> 
> At first I was just asking to write the following in C:
> 
> +
> +	.globl power4_idle_nap_return
> +power4_idle_nap_return:
> +	blr
> 
> In other words, instead of the above do somewhere something like:
> 
> void power4_idle_nap_return(void)
> {
> }

Ah! Well, either way it was a good question. I don't mind attempting it :)

>> ---
>>  arch/powerpc/kernel/idle.c        | 25 ++++++++++++++++++++-----
>>  arch/powerpc/kernel/idle_book3s.S | 22 ----------------------
>>  2 files changed, 20 insertions(+), 27 deletions(-)
>> 
>> diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
>> index ae0e2632393d..849e77a45915 100644
>> --- a/arch/powerpc/kernel/idle.c
>> +++ b/arch/powerpc/kernel/idle.c
>> @@ -72,6 +72,9 @@ int powersave_nap;
>>  #ifdef CONFIG_PPC_970_NAP
>>  void power4_idle(void)
>>  {
>> +	unsigned long msr_idle = MSR_KERNEL|MSR_EE|MSR_POW;
>> +	unsigned long tmp1, tmp2;
>> +
>>  	if (!cpu_has_feature(CPU_FTR_CAN_NAP))
>>  		return;
>> 
>> @@ -84,13 +87,25 @@ void power4_idle(void)
>>  	if (cpu_has_feature(CPU_FTR_ALTIVEC))
>>  		asm volatile("DSSALL ; sync" ::: "memory");
>> 
>> -	power4_idle_nap();
>> -
>> +	asm volatile(
>> +"	ld	%0,PACA_THREAD_INFO(r13)	\n"
>> +"	ld	%1,TI_LOCAL_FLAGS(%0)		\n"
>> +"	ori	%1,%1,_TLF_NAPPING		\n"
>> +"	std	%1,TI_LOCAL_FLAGS(%0)		\n"
> 
> Can't this just be:
> 
> 	current_thread_info()->local_flags |= _TLF_NAPPING;
> 
>>  	/*
>> -	 * power4_idle_nap returns with interrupts enabled (soft and hard).
>> -	 * to our caller with interrupts enabled (soft and hard). Our caller
>> -	 * can cope with either interrupts disabled or enabled upon return.
>> +	 * NAPPING bit is set, from this point onward nap_adjust_return()
>> +	 * will cause interrupts to return to power4_idle_nap_return.
>>  	 */
>> +"1:	sync					\n"
>> +"	isync					\n"
>> +"	mtmsrd	%2				\n"
>> +"	isync					\n"
>> +"	b	1b				\n"
> 
> And this:
> 
> 	for (;;) {
> 		mb();
> 		isync();
> 		mtmsr(MSR_KERNEL|MSR_EE|MSR_POW);
> 		isync();
> 	}

I was hoping for something nicer like this, but I think not, because as soon
as we set _TLF_NAPPING we might take an interrupt which returns somewhere
else, and you aren't allowed to do that in C code (mainly because the stack
and register state would be unknown). Even going immediately to blr or the
end of the function might miss restoring NVGPRs etc.

There might be some tricks we could play with soft-masking interrupts, using
MSR[EE]=0, and then doing all this and returning to right after the mtmsr
POW with a flag set... But it's a bit of tricky churn for an old CPU that
works okay.

Thanks,
Nick

> 
>> +"	.globl power4_idle_nap_return		\n"
>> +"power4_idle_nap_return:			\n"
>> +	: "=r"(tmp1), "=r"(tmp2)
>> +	: "r"(msr_idle)
>> +	);
>>  }
>>  #endif
>> 
> 
> Christophe
> 
diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
index ae0e2632393d..849e77a45915 100644
--- a/arch/powerpc/kernel/idle.c
+++ b/arch/powerpc/kernel/idle.c
@@ -72,6 +72,9 @@ int powersave_nap;
 #ifdef CONFIG_PPC_970_NAP
 void power4_idle(void)
 {
+	unsigned long msr_idle = MSR_KERNEL|MSR_EE|MSR_POW;
+	unsigned long tmp1, tmp2;
+
 	if (!cpu_has_feature(CPU_FTR_CAN_NAP))
 		return;
 
@@ -84,13 +87,25 @@ void power4_idle(void)
 	if (cpu_has_feature(CPU_FTR_ALTIVEC))
 		asm volatile("DSSALL ; sync" ::: "memory");
 
-	power4_idle_nap();
-
+	asm volatile(
+"	ld	%0,PACA_THREAD_INFO(r13)	\n"
+"	ld	%1,TI_LOCAL_FLAGS(%0)		\n"
+"	ori	%1,%1,_TLF_NAPPING		\n"
+"	std	%1,TI_LOCAL_FLAGS(%0)		\n"
 	/*
-	 * power4_idle_nap returns with interrupts enabled (soft and hard).
-	 * to our caller with interrupts enabled (soft and hard). Our caller
-	 * can cope with either interrupts disabled or enabled upon return.
+	 * NAPPING bit is set, from this point onward nap_adjust_return()
+	 * will cause interrupts to return to power4_idle_nap_return.
 	 */
+"1:	sync					\n"
+"	isync					\n"
+"	mtmsrd	%2				\n"
+"	isync					\n"
+"	b	1b				\n"
+"	.globl power4_idle_nap_return		\n"
+"power4_idle_nap_return:			\n"
+	: "=r"(tmp1), "=r"(tmp2)
+	: "r"(msr_idle)
+	);
 }
 #endif
diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
index 27d2e6a72ec9..d4047f3c672e 100644
--- a/arch/powerpc/kernel/idle_book3s.S
+++ b/arch/powerpc/kernel/idle_book3s.S
@@ -184,25 +184,3 @@ _GLOBAL(isa206_idle_insn_mayloss)
 	IDLE_STATE_ENTER_SEQ_NORET(PPC_SLEEP)
 2:	IDLE_STATE_ENTER_SEQ_NORET(PPC_WINKLE)
 #endif
-
-#ifdef CONFIG_PPC_970_NAP
-_GLOBAL(power4_idle_nap)
-	LOAD_REG_IMMEDIATE(r7, MSR_KERNEL|MSR_EE|MSR_POW)
-	ld	r9,PACA_THREAD_INFO(r13)
-	ld	r8,TI_LOCAL_FLAGS(r9)
-	ori	r8,r8,_TLF_NAPPING
-	std	r8,TI_LOCAL_FLAGS(r9)
-	/*
-	 * NAPPING bit is set, from this point onward power4_fixup_nap
-	 * will cause exceptions to return to power4_idle_nap_return.
-	 */
-1:	sync
-	isync
-	mtmsrd	r7
-	isync
-	b	1b
-
-	.globl power4_idle_nap_return
-power4_idle_nap_return:
-	blr
-#endif