Message ID | 1426221564-15086-1-git-send-email-mpe@ellerman.id.au (mailing list archive) |
---|---|
State | Superseded |
On Fri, 2015-03-13 at 15:39 +1100, Michael Ellerman wrote:
> We currently have a "special" syscall for switching endianness. This is
> syscall number 0x1ebe, which is handled explicitly in the 64-bit syscall
> exception entry.
>
> diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h
> index 91062eef582f..c3ee21a1d9cf 100644
> --- a/arch/powerpc/include/asm/systbl.h
> +++ b/arch/powerpc/include/asm/systbl.h
> @@ -367,3 +367,4 @@ SYSCALL_SPU(getrandom)
>  SYSCALL_SPU(memfd_create)
>  SYSCALL_SPU(bpf)
>  COMPAT_SYS(execveat)
> +PPC_SYS(switch_endian)

And of course I forgot about 32-bit.

According to Paul there are no working implementations of LE on 32-bit cpus, so
the syscall doesn't really make sense there.

Scott does that sound right to you for FSL stuff?

cheers
On Fri, 2015-03-13 at 17:38 +1100, Michael Ellerman wrote:
> On Fri, 2015-03-13 at 15:39 +1100, Michael Ellerman wrote:
> > We currently have a "special" syscall for switching endianness. This is
> > syscall number 0x1ebe, which is handled explicitly in the 64-bit syscall
> > exception entry.
> >
> > diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h
> > index 91062eef582f..c3ee21a1d9cf 100644
> > --- a/arch/powerpc/include/asm/systbl.h
> > +++ b/arch/powerpc/include/asm/systbl.h
> > @@ -367,3 +367,4 @@ SYSCALL_SPU(getrandom)
> >  SYSCALL_SPU(memfd_create)
> >  SYSCALL_SPU(bpf)
> >  COMPAT_SYS(execveat)
> > +PPC_SYS(switch_endian)
>
> And of course I forgot about 32-bit.
>
> According to Paul there are no working implementations of LE on 32-bit cpus, so
> the syscall doesn't really make sense there.
>
> Scott does that sound right to you for FSL stuff?

We don't support LE on FSL chips.

-Scott
On Fri, Mar 13, 2015 at 05:38:46PM +1100, Michael Ellerman wrote:
> According to Paul there are no working implementations of LE on 32-bit cpus, so
> the syscall doesn't really make sense there.

Ummm that doesn't sound right. I don't think there is an LE linux userspace
but I'm pretty sure we had 32-bit working on 44x. Check where Ian did the
initial LE patchset.

Yours Tony.
On Mon, 2015-03-16 at 09:59 +1100, Tony Breeds wrote:
> On Fri, Mar 13, 2015 at 05:38:46PM +1100, Michael Ellerman wrote:
> > According to Paul there are no working implementations of LE on 32-bit cpus, so
> > the syscall doesn't really make sense there.
>
> Ummm that doesn't sound right. I don't think there is an LE linux userspace
> but I'm pretty sure we had 32-bit working on 44x. Check where Ian did the
> initial LE patchset.

Yes but that's done by using a per-page endian flag, not a global MSR
bit, so we never supported a syscall to switch there and never will.

Cheers,
Ben.
On Mon, 2015-03-16 at 11:07 +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2015-03-16 at 09:59 +1100, Tony Breeds wrote:
> > On Fri, Mar 13, 2015 at 05:38:46PM +1100, Michael Ellerman wrote:
> > > According to Paul there are no working implementations of LE on 32-bit cpus, so
> > > the syscall doesn't really make sense there.
> >
> > Ummm that doesn't sound right. I don't think there is an LE linux userspace
> > but I'm pretty sure we had 32-bit working on 44x. Check where Ian did the
> > initial LE patchset.
>
> Yes but that's done by using a per-page endian flag, not a global MSR
> bit, so we never supported a syscall to switch there and never will.

Yeah sorry, I should have said "implementations of MSR_LE on 32-bit cpus".

We can always add a 32-bit version in future if we need to, but we can't
remove it once it's there, so for now we won't do it on 32-bit.

cheers
```diff
diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h
index 91062eef582f..c3ee21a1d9cf 100644
--- a/arch/powerpc/include/asm/systbl.h
+++ b/arch/powerpc/include/asm/systbl.h
@@ -367,3 +367,4 @@ SYSCALL_SPU(getrandom)
 SYSCALL_SPU(memfd_create)
 SYSCALL_SPU(bpf)
 COMPAT_SYS(execveat)
+PPC_SYS(switch_endian)
diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h
index 36b79c31eedd..f4f8b667d75b 100644
--- a/arch/powerpc/include/asm/unistd.h
+++ b/arch/powerpc/include/asm/unistd.h
@@ -12,7 +12,7 @@
 
 #include <uapi/asm/unistd.h>
 
-#define __NR_syscalls		363
+#define __NR_syscalls		364
 
 #define __NR__exit __NR_exit
 #define NR_syscalls	__NR_syscalls
diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h
index ef5b5b1f3123..e4aa173dae62 100644
--- a/arch/powerpc/include/uapi/asm/unistd.h
+++ b/arch/powerpc/include/uapi/asm/unistd.h
@@ -385,5 +385,6 @@
 #define __NR_memfd_create	360
 #define __NR_bpf		361
 #define __NR_execveat		362
+#define __NR_switch_endian	363
 
 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index d180caf2d6de..afbc20019c2e 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -356,6 +356,11 @@ _GLOBAL(ppc64_swapcontext)
 	bl	sys_swapcontext
 	b	.Lsyscall_exit
 
+_GLOBAL(ppc_switch_endian)
+	bl	save_nvgprs
+	bl	sys_switch_endian
+	b	.Lsyscall_exit
+
 _GLOBAL(ret_from_fork)
 	bl	schedule_tail
 	REST_NVGPRS(r1)
diff --git a/arch/powerpc/kernel/syscalls.c b/arch/powerpc/kernel/syscalls.c
index b2702e87db0d..5fa92706444b 100644
--- a/arch/powerpc/kernel/syscalls.c
+++ b/arch/powerpc/kernel/syscalls.c
@@ -121,3 +121,20 @@ long ppc_fadvise64_64(int fd, int advice, u32 offset_high, u32 offset_low,
 	return sys_fadvise64(fd, (u64)offset_high << 32 | offset_low,
 			     (u64)len_high << 32 | len_low, advice);
 }
+
+long sys_switch_endian(void)
+{
+	struct thread_info *ti;
+
+	current->thread.regs->msr ^= MSR_LE;
+
+	/*
+	 * Set TIF_RESTOREALL so that r3 isn't clobbered on return to
+	 * userspace. That also has the effect of restoring the non-volatile
+	 * GPRs, so we saved them on the way in here.
+	 */
+	ti = current_thread_info();
+	ti->flags |= _TIF_RESTOREALL;
+
+	return 0;
+}
```
We currently have a "special" syscall for switching endianness. This is
syscall number 0x1ebe, which is handled explicitly in the 64-bit syscall
exception entry.

That has a few problems. Firstly, the syscall number is outside the usual
range, which confuses various tools. For example strace doesn't recognise
the syscall at all. Secondly, it's handled explicitly as a special case in
the syscall exception entry, which is complicated enough without it.

As a first step toward removing the special syscall, we need to add a
regular syscall that implements the same functionality.

The logic is simple: it just toggles the MSR_LE bit in the userspace MSR.
This is the same as the special syscall, with the caveat that the special
syscall clobbers fewer registers. This version clobbers r9-r12, XER, CTR,
and CR0-1,5-7.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/systbl.h      | 1 +
 arch/powerpc/include/asm/unistd.h      | 2 +-
 arch/powerpc/include/uapi/asm/unistd.h | 1 +
 arch/powerpc/kernel/entry_64.S         | 5 +++++
 arch/powerpc/kernel/syscalls.c         | 17 +++++++++++++++++
 5 files changed, 25 insertions(+), 1 deletion(-)