From patchwork Fri Mar 20 10:20:12 2020
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1258792
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v12 1/8] powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro
Date: Fri, 20 Mar 2020 11:20:12 +0100
References: <20200225173541.1549955-1-npiggin@gmail.com>

This partially reverts commit caf6f9c8a326 ("asm-generic: Remove
unneeded __ARCH_WANT_SYS_LLSEEK macro").

When CONFIG_COMPAT is disabled on ppc64 the kernel does not build.
There is resistance to both removing the llseek syscall from the 64bit
syscall tables and building the llseek interface unconditionally.
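As background (not part of the patch): llseek exists so that 32bit userspace,
whose plain lseek takes only a 32bit offset, can still seek beyond 4 GiB. The
kernel receives the offset as two 32bit halves and recombines them, roughly as
in the stand-alone illustration below (the helper name and values are made up
for the example; this is not the kernel's implementation).

	#include <stdio.h>

	/*
	 * Illustration only: llseek passes a 64-bit file offset as two
	 * 32-bit halves; the kernel recombines them before running the
	 * usual lseek logic.
	 */
	static unsigned long long combine_llseek_offset(unsigned long high,
							unsigned long low)
	{
		return ((unsigned long long)high << 32) | low;
	}

	int main(void)
	{
		/* A 6 GiB offset does not fit in a single 32-bit argument. */
		printf("0x%llx\n", combine_llseek_offset(0x1UL, 0x80000000UL));
		return 0;
	}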
Link: https://lore.kernel.org/lkml/20190828151552.GA16855@infradead.org/
Link: https://lore.kernel.org/lkml/20190829214319.498c7de2@naga/
Signed-off-by: Michal Suchanek
Reviewed-by: Arnd Bergmann
---
v7: new patch
---
 arch/powerpc/include/asm/unistd.h | 1 +
 fs/read_write.c                   | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h
index b0720c7c3fcf..700fcdac2e3c 100644
--- a/arch/powerpc/include/asm/unistd.h
+++ b/arch/powerpc/include/asm/unistd.h
@@ -31,6 +31,7 @@
 #define __ARCH_WANT_SYS_SOCKETCALL
 #define __ARCH_WANT_SYS_FADVISE64
 #define __ARCH_WANT_SYS_GETPGRP
+#define __ARCH_WANT_SYS_LLSEEK
 #define __ARCH_WANT_SYS_NICE
 #define __ARCH_WANT_SYS_OLD_GETRLIMIT
 #define __ARCH_WANT_SYS_OLD_UNAME
diff --git a/fs/read_write.c b/fs/read_write.c
index 59d819c5b92e..bbfa9b12b15e 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -331,7 +331,8 @@ COMPAT_SYSCALL_DEFINE3(lseek, unsigned int, fd, compat_off_t, offset, unsigned i
 }
 #endif
 
-#if !defined(CONFIG_64BIT) || defined(CONFIG_COMPAT)
+#if !defined(CONFIG_64BIT) || defined(CONFIG_COMPAT) || \
+    defined(__ARCH_WANT_SYS_LLSEEK)
 SYSCALL_DEFINE5(llseek, unsigned int, fd, unsigned long, offset_high,
 		unsigned long, offset_low, loff_t __user *, result,
 		unsigned int, whence)

From patchwork Fri Mar 20 10:20:13 2020
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1258797
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v12 2/8] powerpc: move common register copy functions from signal_32.c to signal.c
Date: Fri, 20 Mar 2020 11:20:13 +0100
Message-Id: <9fd6d9b7c5e91fab21159fe23534a2f16b4962d3.1584699455.git.msuchanek@suse.de>
References: <20200225173541.1549955-1-npiggin@gmail.com>
List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Gustavo Luiz Duarte , Peter Zijlstra , Sebastian Andrzej Siewior , linux-kernel@vger.kernel.org, Paul Mackerras , Jiri Olsa , Rob Herring , Michael Neuling , Mauro Carvalho Chehab , Masahiro Yamada , Nayna Jain , Alexander Shishkin , Ingo Molnar , Allison Randal , Jordan Niethe , Michal Suchanek , Valentin Schneider , Arnd Bergmann , Arnaldo Carvalho de Melo , Alexander Viro , Jonathan Cameron , Namhyung Kim , Thomas Gleixner , Andy Shevchenko , Hari Bathini , Greg Kroah-Hartman , Nicholas Piggin , Claudio Carvalho , Eric Richter , "Eric W. Biederman" , linux-fsdevel@vger.kernel.org, "David S. Miller" , Thiago Jung Bauermann Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" These functions are required for 64bit as well. Signed-off-by: Michal Suchanek Reviewed-by: Christophe Leroy --- arch/powerpc/kernel/signal.c | 141 ++++++++++++++++++++++++++++++++ arch/powerpc/kernel/signal_32.c | 140 ------------------------------- 2 files changed, 141 insertions(+), 140 deletions(-) diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c index d215f9554553..4b0152108f61 100644 --- a/arch/powerpc/kernel/signal.c +++ b/arch/powerpc/kernel/signal.c @@ -18,12 +18,153 @@ #include #include #include +#include #include #include #include #include "signal.h" +#ifdef CONFIG_VSX +unsigned long copy_fpr_to_user(void __user *to, + struct task_struct *task) +{ + u64 buf[ELF_NFPREG]; + int i; + + /* save FPR copy to local buffer then write to the thread_struct */ + for (i = 0; i < (ELF_NFPREG - 1) ; i++) + buf[i] = task->thread.TS_FPR(i); + buf[i] = task->thread.fp_state.fpscr; + return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double)); +} + +unsigned long copy_fpr_from_user(struct task_struct *task, + void __user *from) +{ + u64 buf[ELF_NFPREG]; + int i; + + if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double))) + return 1; + for (i = 0; i < (ELF_NFPREG - 1) ; i++) + task->thread.TS_FPR(i) = buf[i]; + task->thread.fp_state.fpscr = buf[i]; + + return 0; +} + +unsigned long copy_vsx_to_user(void __user *to, + struct task_struct *task) +{ + u64 buf[ELF_NVSRHALFREG]; + int i; + + /* save FPR copy to local buffer then write to the thread_struct */ + for (i = 0; i < ELF_NVSRHALFREG; i++) + buf[i] = task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET]; + return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double)); +} + +unsigned long copy_vsx_from_user(struct task_struct *task, + void __user *from) +{ + u64 buf[ELF_NVSRHALFREG]; + int i; + + if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double))) + return 1; + for (i = 0; i < ELF_NVSRHALFREG ; i++) + task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i]; + return 0; +} + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM +unsigned long copy_ckfpr_to_user(void __user *to, + struct task_struct *task) +{ + u64 buf[ELF_NFPREG]; + int i; + + /* save FPR copy to local buffer then write to the thread_struct */ + for (i = 0; i < (ELF_NFPREG - 1) ; i++) + buf[i] = task->thread.TS_CKFPR(i); + buf[i] = task->thread.ckfp_state.fpscr; + return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double)); +} + +unsigned long copy_ckfpr_from_user(struct task_struct *task, + void __user *from) +{ + u64 buf[ELF_NFPREG]; + int i; + + if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double))) + return 1; + for (i = 0; i < (ELF_NFPREG - 1) ; i++) + task->thread.TS_CKFPR(i) = buf[i]; + 
task->thread.ckfp_state.fpscr = buf[i]; + + return 0; +} + +unsigned long copy_ckvsx_to_user(void __user *to, + struct task_struct *task) +{ + u64 buf[ELF_NVSRHALFREG]; + int i; + + /* save FPR copy to local buffer then write to the thread_struct */ + for (i = 0; i < ELF_NVSRHALFREG; i++) + buf[i] = task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET]; + return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double)); +} + +unsigned long copy_ckvsx_from_user(struct task_struct *task, + void __user *from) +{ + u64 buf[ELF_NVSRHALFREG]; + int i; + + if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double))) + return 1; + for (i = 0; i < ELF_NVSRHALFREG ; i++) + task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i]; + return 0; +} +#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ +#else +inline unsigned long copy_fpr_to_user(void __user *to, + struct task_struct *task) +{ + return __copy_to_user(to, task->thread.fp_state.fpr, + ELF_NFPREG * sizeof(double)); +} + +inline unsigned long copy_fpr_from_user(struct task_struct *task, + void __user *from) +{ + return __copy_from_user(task->thread.fp_state.fpr, from, + ELF_NFPREG * sizeof(double)); +} + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM +inline unsigned long copy_ckfpr_to_user(void __user *to, + struct task_struct *task) +{ + return __copy_to_user(to, task->thread.ckfp_state.fpr, + ELF_NFPREG * sizeof(double)); +} + +inline unsigned long copy_ckfpr_from_user(struct task_struct *task, + void __user *from) +{ + return __copy_from_user(task->thread.ckfp_state.fpr, from, + ELF_NFPREG * sizeof(double)); +} +#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ +#endif + /* Log an error when sending an unhandled signal to a process. Controlled * through debug.exception-trace sysctl. */ diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index 1b090a76b444..4f96d29a22bf 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -235,146 +235,6 @@ struct rt_sigframe { int abigap[56]; }; -#ifdef CONFIG_VSX -unsigned long copy_fpr_to_user(void __user *to, - struct task_struct *task) -{ - u64 buf[ELF_NFPREG]; - int i; - - /* save FPR copy to local buffer then write to the thread_struct */ - for (i = 0; i < (ELF_NFPREG - 1) ; i++) - buf[i] = task->thread.TS_FPR(i); - buf[i] = task->thread.fp_state.fpscr; - return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double)); -} - -unsigned long copy_fpr_from_user(struct task_struct *task, - void __user *from) -{ - u64 buf[ELF_NFPREG]; - int i; - - if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double))) - return 1; - for (i = 0; i < (ELF_NFPREG - 1) ; i++) - task->thread.TS_FPR(i) = buf[i]; - task->thread.fp_state.fpscr = buf[i]; - - return 0; -} - -unsigned long copy_vsx_to_user(void __user *to, - struct task_struct *task) -{ - u64 buf[ELF_NVSRHALFREG]; - int i; - - /* save FPR copy to local buffer then write to the thread_struct */ - for (i = 0; i < ELF_NVSRHALFREG; i++) - buf[i] = task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET]; - return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double)); -} - -unsigned long copy_vsx_from_user(struct task_struct *task, - void __user *from) -{ - u64 buf[ELF_NVSRHALFREG]; - int i; - - if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double))) - return 1; - for (i = 0; i < ELF_NVSRHALFREG ; i++) - task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i]; - return 0; -} - -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM -unsigned long copy_ckfpr_to_user(void __user *to, - struct task_struct *task) -{ - u64 buf[ELF_NFPREG]; - int i; - - /* 
save FPR copy to local buffer then write to the thread_struct */ - for (i = 0; i < (ELF_NFPREG - 1) ; i++) - buf[i] = task->thread.TS_CKFPR(i); - buf[i] = task->thread.ckfp_state.fpscr; - return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double)); -} - -unsigned long copy_ckfpr_from_user(struct task_struct *task, - void __user *from) -{ - u64 buf[ELF_NFPREG]; - int i; - - if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double))) - return 1; - for (i = 0; i < (ELF_NFPREG - 1) ; i++) - task->thread.TS_CKFPR(i) = buf[i]; - task->thread.ckfp_state.fpscr = buf[i]; - - return 0; -} - -unsigned long copy_ckvsx_to_user(void __user *to, - struct task_struct *task) -{ - u64 buf[ELF_NVSRHALFREG]; - int i; - - /* save FPR copy to local buffer then write to the thread_struct */ - for (i = 0; i < ELF_NVSRHALFREG; i++) - buf[i] = task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET]; - return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double)); -} - -unsigned long copy_ckvsx_from_user(struct task_struct *task, - void __user *from) -{ - u64 buf[ELF_NVSRHALFREG]; - int i; - - if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double))) - return 1; - for (i = 0; i < ELF_NVSRHALFREG ; i++) - task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i]; - return 0; -} -#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ -#else -inline unsigned long copy_fpr_to_user(void __user *to, - struct task_struct *task) -{ - return __copy_to_user(to, task->thread.fp_state.fpr, - ELF_NFPREG * sizeof(double)); -} - -inline unsigned long copy_fpr_from_user(struct task_struct *task, - void __user *from) -{ - return __copy_from_user(task->thread.fp_state.fpr, from, - ELF_NFPREG * sizeof(double)); -} - -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM -inline unsigned long copy_ckfpr_to_user(void __user *to, - struct task_struct *task) -{ - return __copy_to_user(to, task->thread.ckfp_state.fpr, - ELF_NFPREG * sizeof(double)); -} - -inline unsigned long copy_ckfpr_from_user(struct task_struct *task, - void __user *from) -{ - return __copy_from_user(task->thread.ckfp_state.fpr, from, - ELF_NFPREG * sizeof(double)); -} -#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ -#endif - /* * Save the current user registers on the user stack. 
 * We only save the altivec/spe registers if the process has used

From patchwork Fri Mar 20 10:20:14 2020
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1258801
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v12 3/8] powerpc/perf: consolidate read_user_stack_32
Date: Fri, 20 Mar 2020 11:20:14 +0100
Message-Id: <0c21c919ed1296420199c78f7c3cfd29d3c7e909.1584699455.git.msuchanek@suse.de>
References: <20200225173541.1549955-1-npiggin@gmail.com>

There are two almost identical copies of read_user_stack_32 for 32bit
and 64bit. The function is used only in 32bit code, which will be split
out in the next patch, so consolidate it into one function.
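The consolidated helper (in the diff below) relies on IS_ENABLED(CONFIG_PPC64)
being an ordinary constant expression: the compiler folds it to 0 or 1, so the
fallback branch is still type-checked in both configurations but compiled out
where it does not apply. A tiny stand-alone illustration of that pattern
follows; the HAVE_SLOW_FALLBACK constant and the helper names are stand-ins
invented for the example, not kernel symbols.

	#include <stdio.h>

	/* Stand-in for IS_ENABLED(CONFIG_PPC64): a compile-time 0/1 constant. */
	#define HAVE_SLOW_FALLBACK 1

	static int slow_fallback(int *val)
	{
		*val = 42;	/* pretend we walked the page tables */
		return 0;
	}

	static int read_value(int *val)
	{
		int rc = -1;	/* pretend the fast path failed */

		/*
		 * Because HAVE_SLOW_FALLBACK is a constant, the compiler drops
		 * this branch entirely when it is 0, yet the call is still
		 * type-checked, unlike code hidden behind an #ifdef.
		 */
		if (HAVE_SLOW_FALLBACK && rc)
			return slow_fallback(val);

		return rc;
	}

	int main(void)
	{
		int v = 0;
		int rc = read_value(&v);

		printf("rc=%d v=%d\n", rc, v);
		return 0;
	}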
Signed-off-by: Michal Suchanek
Reviewed-by: Christophe Leroy
---
v6: new patch
v8: move the consolidated function out of the ifdef block.
v11: rebase on top of def0bfdbd603
---
 arch/powerpc/perf/callchain.c | 48 +++++++++++++++++------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index cbc251981209..c9a78c6e4361 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -161,18 +161,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
 	return read_user_stack_slow(ptr, ret, 8);
 }
 
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
-{
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	if (!probe_user_read(ret, ptr, sizeof(*ret)))
-		return 0;
-
-	return read_user_stack_slow(ptr, ret, 4);
-}
-
 static inline int valid_user_sp(unsigned long sp, int is_64)
 {
 	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
@@ -277,19 +265,9 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
 }
 
 #else /* CONFIG_PPC64 */
-/*
- * On 32-bit we just access the address and let hash_page create a
- * HPTE if necessary, so there is no need to fall back to reading
- * the page tables. Since this is called at interrupt level,
- * do_page_fault() won't treat a DSI as a page fault.
- */
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 {
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	return probe_user_read(ret, ptr, sizeof(*ret));
+	return 0;
 }
 
 static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
@@ -312,6 +290,28 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
 
 #endif /* CONFIG_PPC64 */
 
+/*
+ * On 32-bit we just access the address and let hash_page create a
+ * HPTE if necessary, so there is no need to fall back to reading
+ * the page tables. Since this is called at interrupt level,
+ * do_page_fault() won't treat a DSI as a page fault.
+ */
+static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+{
+	int rc;
+
+	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
+	    ((unsigned long)ptr & 3))
+		return -EFAULT;
+
+	rc = probe_user_read(ret, ptr, sizeof(*ret));
+
+	if (IS_ENABLED(CONFIG_PPC64) && rc)
+		return read_user_stack_slow(ptr, ret, 4);
+
+	return rc;
+}
+
 /*
  * Layout for non-RT signal frames
  */
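Before the next patch, a note on what these helpers are for: the perf code
reconstructs a user backtrace by following the PowerPC back-chain convention,
where the word at the bottom of each stack frame points to the caller's frame
and the saved link register sits at a small fixed offset (fp[1] on 32bit,
fp[2] on 64bit, as the perf_callchain_user_{32,64} loops in this series show).
The stand-alone sketch below walks such a chain with plain pointers instead of
the kernel's guarded user accessors; the frame layout is simplified for
illustration only.

	#include <stdio.h>

	/* A toy 32-bit-style frame: back chain at slot 0, saved LR at slot 1. */
	struct frame {
		struct frame *back_chain;
		unsigned long saved_lr;
	};

	static void walk(struct frame *sp)
	{
		/* Mirrors the shape of the perf loop: stop at an empty frame. */
		while (sp) {
			printf("return address: 0x%lx\n", sp->saved_lr);
			sp = sp->back_chain;
		}
	}

	int main(void)
	{
		struct frame outer = { NULL, 0x10001234UL };
		struct frame inner = { &outer, 0x10005678UL };

		walk(&inner);
		return 0;
	}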
From patchwork Fri Mar 20 10:20:15 2020
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1258803
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v12 4/8] powerpc/perf: consolidate valid_user_sp -> invalid_user_sp
Date: Fri, 20 Mar 2020 11:20:15 +0100
References: <20200225173541.1549955-1-npiggin@gmail.com>

Merge the 32bit and 64bit versions. Halve the check constants on 32bit.
Use STACK_TOP since it is defined.

Passing is_64 is now redundant since is_32bit_task() is used to
determine which callchain variant should be used. Use STACK_TOP and
is_32bit_task() directly.

This removes a page from the valid 32bit area on 64bit:

 #define TASK_SIZE_USER32 (0x0000000100000000UL - (1 * PAGE_SIZE))
 #define STACK_TOP_USER32 TASK_SIZE_USER32

Change the return value to bool, since it is inverted by users anyway,
and rename to invalid_user_sp to avoid inverting the return value twice.

Signed-off-by: Michal Suchanek
---
v8: new patch
v11: simplify by using is_32bit_task()
v12:
 - simplify by precalculating subexpressions
 - change return value to bool
 - remove double inversion
---
 arch/powerpc/perf/callchain.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index c9a78c6e4361..001d0473a61f 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -102,6 +102,14 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 	}
 }
 
+static inline bool invalid_user_sp(unsigned long sp)
+{
+	unsigned long mask = is_32bit_task() ? 3 : 7;
+	unsigned long top = STACK_TOP - (is_32bit_task() ? 16 : 32);
+
+	return (!sp || (sp & mask) || (sp > top));
+}
+
 #ifdef CONFIG_PPC64
 /*
  * On 64-bit we don't want to invoke hash_page on user addresses from
@@ -161,13 +169,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
 	return read_user_stack_slow(ptr, ret, 8);
 }
 
-static inline int valid_user_sp(unsigned long sp, int is_64)
-{
-	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
-		return 0;
-	return 1;
-}
-
 /*
  * 64-bit user processes use the same stack frame for RT and non-RT signals.
  */
@@ -226,7 +227,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
 
 	while (entry->nr < entry->max_stack) {
 		fp = (unsigned long __user *) sp;
-		if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
+		if (invalid_user_sp(sp) || read_user_stack_64(fp, &next_sp))
 			return;
 		if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
 			return;
@@ -275,13 +276,6 @@ static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry
 {
 }
 
-static inline int valid_user_sp(unsigned long sp, int is_64)
-{
-	if (!sp || (sp & 7) || sp > TASK_SIZE - 32)
-		return 0;
-	return 1;
-}
-
 #define __SIGNAL_FRAMESIZE32 __SIGNAL_FRAMESIZE
 #define sigcontext32 sigcontext
 #define mcontext32 mcontext
@@ -423,7 +417,7 @@ static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
 
 	while (entry->nr < entry->max_stack) {
 		fp = (unsigned int __user *) (unsigned long) sp;
-		if (!valid_user_sp(sp, 0) || read_user_stack_32(fp, &next_sp))
+		if (invalid_user_sp(sp) || read_user_stack_32(fp, &next_sp))
 			return;
 		if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
 			return;
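Concretely, the merged check requires a non-NULL stack pointer, 4-byte (32bit)
or 8-byte (64bit) alignment, and room for at least a minimal frame (16 or 32
bytes) below the top of the stack. A stand-alone illustration of the same
arithmetic, with the stack top and task width hard-coded as example values
rather than taken from STACK_TOP and is_32bit_task():

	#include <stdbool.h>
	#include <stdio.h>

	/* Example value only, roughly the 32-bit case (4 GiB). */
	#define EXAMPLE_STACK_TOP 0x100000000ULL

	static bool example_invalid_user_sp(unsigned long long sp, bool is_32bit)
	{
		unsigned long long mask = is_32bit ? 3 : 7;
		unsigned long long top = EXAMPLE_STACK_TOP - (is_32bit ? 16 : 32);

		return !sp || (sp & mask) || sp > top;
	}

	int main(void)
	{
		printf("%d\n", example_invalid_user_sp(0xffffffb0ULL, true)); /* 0: aligned, in range */
		printf("%d\n", example_invalid_user_sp(0xffffffb2ULL, true)); /* 1: not 4-byte aligned */
		printf("%d\n", example_invalid_user_sp(0xfffffff8ULL, true)); /* 1: no room for a frame */
		return 0;
	}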
From patchwork Fri Mar 20 10:20:16 2020
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1258804
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v12 5/8] powerpc/64: make buildable without CONFIG_COMPAT
Date: Fri, 20 Mar 2020 11:20:16 +0100
References: <20200225173541.1549955-1-npiggin@gmail.com>

There are numerous references to 32bit functions in generic and 64bit
code, so ifdef them out.
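The central runtime distinction the series relies on is is_32bit_task(): on a
COMPAT kernel a 32bit task is routed to the compat syscall table and its
register arguments are truncated to 32 bits, while without COMPAT that whole
path compiles away. A simplified stand-alone sketch of the dispatch follows;
the tables, types and example syscall here are invented for illustration, and
the real change is in the syscall_64.c hunk below.

	#include <stdio.h>

	typedef long (*syscall_fn)(unsigned long, unsigned long);

	static long sys_example(unsigned long a, unsigned long b)
	{
		return (long)(a + b);
	}

	static long compat_sys_example(unsigned long a, unsigned long b)
	{
		return (long)((a + b) & 0xffffffffUL);
	}

	/* Toy tables standing in for sys_call_table/compat_sys_call_table. */
	static syscall_fn sys_call_table[]        = { sys_example };
	static syscall_fn compat_sys_call_table[] = { compat_sys_example };

	static long dispatch(unsigned long nr, unsigned long a, unsigned long b,
			     int task_is_32bit)
	{
		syscall_fn f;

		if (task_is_32bit) {
			/* 32-bit task: compat entry, arguments zero-extended. */
			f = compat_sys_call_table[nr];
			a &= 0xffffffffUL;
			b &= 0xffffffffUL;
		} else {
			f = sys_call_table[nr];
		}
		return f(a, b);
	}

	int main(void)
	{
		printf("%ld\n", dispatch(0, 2, 3, 0));
		printf("%ld\n", dispatch(0, 2, 3, 1));
		return 0;
	}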
Signed-off-by: Michal Suchanek --- v2: - fix 32bit ifdef condition in signal.c - simplify the compat ifdef condition in vdso.c - 64bit is redundant - simplify the compat ifdef condition in callchain.c - 64bit is redundant v3: - use IS_ENABLED and maybe_unused where possible - do not ifdef declarations - clean up Makefile v4: - further makefile cleanup - simplify is_32bit_task conditions - avoid ifdef in condition by using return v5: - avoid unreachable code on 32bit - make is_current_64bit constant on !COMPAT - add stub perf_callchain_user_32 to avoid some ifdefs v6: - consolidate current_is_64bit v7: - remove leftover perf_callchain_user_32 stub from previous series version v8: - fix build again - too trigger-happy with stub removal - remove a vdso.c hunk that causes warning according to kbuild test robot v9: - removed current_is_64bit in previous patch v10: - rebase on top of 70ed86f4de5bd --- arch/powerpc/include/asm/thread_info.h | 4 ++-- arch/powerpc/kernel/Makefile | 6 +++--- arch/powerpc/kernel/entry_64.S | 2 ++ arch/powerpc/kernel/signal.c | 3 +-- arch/powerpc/kernel/syscall_64.c | 6 ++---- arch/powerpc/kernel/vdso.c | 3 ++- arch/powerpc/perf/callchain.c | 8 +++++++- 7 files changed, 19 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h index a2270749b282..ca6c97025704 100644 --- a/arch/powerpc/include/asm/thread_info.h +++ b/arch/powerpc/include/asm/thread_info.h @@ -162,10 +162,10 @@ static inline bool test_thread_local_flags(unsigned int flags) return (ti->local_flags & flags) != 0; } -#ifdef CONFIG_PPC64 +#ifdef CONFIG_COMPAT #define is_32bit_task() (test_thread_flag(TIF_32BIT)) #else -#define is_32bit_task() (1) +#define is_32bit_task() (IS_ENABLED(CONFIG_PPC32)) #endif #if defined(CONFIG_PPC64) diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile index 5700231a8988..98a1c143b613 100644 --- a/arch/powerpc/kernel/Makefile +++ b/arch/powerpc/kernel/Makefile @@ -42,16 +42,16 @@ CFLAGS_btext.o += -DDISABLE_BRANCH_PROFILING endif obj-y := cputable.o ptrace.o syscalls.o \ - irq.o align.o signal_32.o pmc.o vdso.o \ + irq.o align.o signal_$(BITS).o pmc.o vdso.o \ process.o systbl.o idle.o \ signal.o sysfs.o cacheinfo.o time.o \ prom.o traps.o setup-common.o \ udbg.o misc.o io.o misc_$(BITS).o \ of_platform.o prom_parse.o -obj-$(CONFIG_PPC64) += setup_64.o sys_ppc32.o \ - signal_64.o ptrace32.o \ +obj-$(CONFIG_PPC64) += setup_64.o \ paca.o nvram_64.o firmware.o note.o \ syscall_64.o +obj-$(CONFIG_COMPAT) += sys_ppc32.o ptrace32.o signal_32.o obj-$(CONFIG_VDSO32) += vdso32/ obj-$(CONFIG_PPC_WATCHDOG) += watchdog.o obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index 4c0d0400e93d..fe1421e08f09 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S @@ -52,8 +52,10 @@ SYS_CALL_TABLE: .tc sys_call_table[TC],sys_call_table +#ifdef CONFIG_COMPAT COMPAT_SYS_CALL_TABLE: .tc compat_sys_call_table[TC],compat_sys_call_table +#endif /* This value is used to mark exception frames on the stack. 
*/ exception_marker: diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c index 4b0152108f61..a264989626fd 100644 --- a/arch/powerpc/kernel/signal.c +++ b/arch/powerpc/kernel/signal.c @@ -247,7 +247,6 @@ static void do_signal(struct task_struct *tsk) sigset_t *oldset = sigmask_to_save(); struct ksignal ksig = { .sig = 0 }; int ret; - int is32 = is_32bit_task(); BUG_ON(tsk != current); @@ -277,7 +276,7 @@ static void do_signal(struct task_struct *tsk) rseq_signal_deliver(&ksig, tsk->thread.regs); - if (is32) { + if (is_32bit_task()) { if (ksig.ka.sa.sa_flags & SA_SIGINFO) ret = handle_rt_signal32(&ksig, oldset, tsk); else diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c index 87d95b455b83..2dcbfe38f5ac 100644 --- a/arch/powerpc/kernel/syscall_64.c +++ b/arch/powerpc/kernel/syscall_64.c @@ -24,7 +24,6 @@ notrace long system_call_exception(long r3, long r4, long r5, long r6, long r7, long r8, unsigned long r0, struct pt_regs *regs) { - unsigned long ti_flags; syscall_fn f; if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) @@ -68,8 +67,7 @@ notrace long system_call_exception(long r3, long r4, long r5, local_irq_enable(); - ti_flags = current_thread_info()->flags; - if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) { + if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) { /* * We use the return value of do_syscall_trace_enter() as the * syscall number. If the syscall was rejected for any reason @@ -94,7 +92,7 @@ notrace long system_call_exception(long r3, long r4, long r5, /* May be faster to do array_index_nospec? */ barrier_nospec(); - if (unlikely(ti_flags & _TIF_32BIT)) { + if (unlikely(is_32bit_task())) { f = (void *)compat_sys_call_table[r0]; r3 &= 0x00000000ffffffffULL; diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c index b9a108411c0d..77da3b7d304d 100644 --- a/arch/powerpc/kernel/vdso.c +++ b/arch/powerpc/kernel/vdso.c @@ -656,7 +656,8 @@ static void __init vdso_setup_syscall_map(void) if (sys_call_table[i] != sys_ni_syscall) vdso_data->syscall_map_64[i >> 5] |= 0x80000000UL >> (i & 0x1f); - if (compat_sys_call_table[i] != sys_ni_syscall) + if (IS_ENABLED(CONFIG_COMPAT) && + compat_sys_call_table[i] != sys_ni_syscall) vdso_data->syscall_map_32[i >> 5] |= 0x80000000UL >> (i & 0x1f); #else /* CONFIG_PPC64 */ diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c index 001d0473a61f..b5afd0bec4f8 100644 --- a/arch/powerpc/perf/callchain.c +++ b/arch/powerpc/perf/callchain.c @@ -15,7 +15,7 @@ #include #include #include -#ifdef CONFIG_PPC64 +#ifdef CONFIG_COMPAT #include "../kernel/ppc32.h" #endif #include @@ -284,6 +284,7 @@ static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry #endif /* CONFIG_PPC64 */ +#if defined(CONFIG_PPC32) || defined(CONFIG_COMPAT) /* * On 32-bit we just access the address and let hash_page create a * HPTE if necessary, so there is no need to fall back to reading @@ -447,6 +448,11 @@ static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, sp = next_sp; } } +#else /* 32bit */ +static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs) +{} +#endif /* 32bit */ void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs) From patchwork Fri Mar 20 10:20:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1258808 Return-Path: X-Original-To: 
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v12 6/8] powerpc/64: Make COMPAT user-selectable disabled on littleendian by default.
Date: Fri, 20 Mar 2020 11:20:17 +0100
Message-Id: <41393d6e895b0d3a47ee62f8f51e1cf888ad6226.1584699455.git.msuchanek@suse.de>
References: <20200225173541.1549955-1-npiggin@gmail.com>

On bigendian ppc64 it is common to have 32bit legacy binaries, but much
less so on littleendian.
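The user-visible effect of turning COMPAT off is that 32bit binaries can no
longer be executed, because no binary-format handler accepts 32bit ELF any
more; execve() then typically fails with ENOEXEC. A small illustration of how
that surfaces to a caller (the path is a placeholder, and the errno shown is
the usual case rather than a guarantee):

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		/* Placeholder path: a 32-bit binary on a 64-bit-only kernel. */
		char *const argv[] = { (char *)"/usr/local/bin/legacy32-tool", NULL };

		execv(argv[0], argv);

		/* execv() only returns on failure. */
		printf("exec failed: %s\n", strerror(errno));
		return errno == ENOEXEC ? 1 : 2;
	}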
Signed-off-by: Michal Suchanek
Reviewed-by: Christophe Leroy
---
v3: make configurable
---
 arch/powerpc/Kconfig | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 497b7d0b2d7e..29d00b3959b9 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -264,8 +264,9 @@ config PANIC_TIMEOUT
 	default 180
 
 config COMPAT
-	bool
-	default y if PPC64
+	bool "Enable support for 32bit binaries"
+	depends on PPC64
+	default y if !CPU_LITTLE_ENDIAN
 	select COMPAT_BINFMT_ELF
 	select ARCH_WANT_OLD_COMPAT_IPC
 	select COMPAT_OLD_SIGACTION

From patchwork Fri Mar 20 10:20:18 2020
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1258811
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v12 7/8] powerpc/perf: split callchain.c by bitness
Date: Fri, 20 Mar 2020 11:20:18 +0100
References: <20200225173541.1549955-1-npiggin@gmail.com>
Miller" , Thiago Jung Bauermann Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Building callchain.c with !COMPAT proved quite ugly with all the defines. Splitting out the 32bit and 64bit parts looks better. No code change intended. Signed-off-by: Michal Suchanek --- v6: - move current_is_64bit consolidetaion to earlier patch - move defines to the top of callchain_32.c - Makefile cleanup v8: - fix valid_user_sp v11: - rebase on top of def0bfdbd603 --- arch/powerpc/perf/Makefile | 5 +- arch/powerpc/perf/callchain.c | 356 +------------------------------ arch/powerpc/perf/callchain.h | 19 ++ arch/powerpc/perf/callchain_32.c | 196 +++++++++++++++++ arch/powerpc/perf/callchain_64.c | 174 +++++++++++++++ 5 files changed, 394 insertions(+), 356 deletions(-) create mode 100644 arch/powerpc/perf/callchain.h create mode 100644 arch/powerpc/perf/callchain_32.c create mode 100644 arch/powerpc/perf/callchain_64.c diff --git a/arch/powerpc/perf/Makefile b/arch/powerpc/perf/Makefile index c155dcbb8691..53d614e98537 100644 --- a/arch/powerpc/perf/Makefile +++ b/arch/powerpc/perf/Makefile @@ -1,6 +1,9 @@ # SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_PERF_EVENTS) += callchain.o perf_regs.o +obj-$(CONFIG_PERF_EVENTS) += callchain.o callchain_$(BITS).o perf_regs.o +ifdef CONFIG_COMPAT +obj-$(CONFIG_PERF_EVENTS) += callchain_32.o +endif obj-$(CONFIG_PPC_PERF_CTRS) += core-book3s.o bhrb.o obj64-$(CONFIG_PPC_PERF_CTRS) += ppc970-pmu.o power5-pmu.o \ diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c index b5afd0bec4f8..dd5051015008 100644 --- a/arch/powerpc/perf/callchain.c +++ b/arch/powerpc/perf/callchain.c @@ -15,11 +15,9 @@ #include #include #include -#ifdef CONFIG_COMPAT -#include "../kernel/ppc32.h" -#endif #include +#include "callchain.h" /* * Is sp valid as the address of the next kernel stack frame after prev_sp? @@ -102,358 +100,6 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re } } -static inline bool invalid_user_sp(unsigned long sp) -{ - unsigned long mask = is_32bit_task() ? 3 : 7; - unsigned long top = STACK_TOP - (is_32bit_task() ? 16 : 32); - - return (!sp || (sp & mask) || (sp > top)); -} - -#ifdef CONFIG_PPC64 -/* - * On 64-bit we don't want to invoke hash_page on user addresses from - * interrupt context, so if the access faults, we read the page tables - * to find which page (if any) is mapped and access it directly. 
- */ -static int read_user_stack_slow(void __user *ptr, void *buf, int nb) -{ - int ret = -EFAULT; - pgd_t *pgdir; - pte_t *ptep, pte; - unsigned shift; - unsigned long addr = (unsigned long) ptr; - unsigned long offset; - unsigned long pfn, flags; - void *kaddr; - - pgdir = current->mm->pgd; - if (!pgdir) - return -EFAULT; - - local_irq_save(flags); - ptep = find_current_mm_pte(pgdir, addr, NULL, &shift); - if (!ptep) - goto err_out; - if (!shift) - shift = PAGE_SHIFT; - - /* align address to page boundary */ - offset = addr & ((1UL << shift) - 1); - - pte = READ_ONCE(*ptep); - if (!pte_present(pte) || !pte_user(pte)) - goto err_out; - pfn = pte_pfn(pte); - if (!page_is_ram(pfn)) - goto err_out; - - /* no highmem to worry about here */ - kaddr = pfn_to_kaddr(pfn); - memcpy(buf, kaddr + offset, nb); - ret = 0; -err_out: - local_irq_restore(flags); - return ret; -} - -static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret) -{ - if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) || - ((unsigned long)ptr & 7)) - return -EFAULT; - - if (!probe_user_read(ret, ptr, sizeof(*ret))) - return 0; - - return read_user_stack_slow(ptr, ret, 8); -} - -/* - * 64-bit user processes use the same stack frame for RT and non-RT signals. - */ -struct signal_frame_64 { - char dummy[__SIGNAL_FRAMESIZE]; - struct ucontext uc; - unsigned long unused[2]; - unsigned int tramp[6]; - struct siginfo *pinfo; - void *puc; - struct siginfo info; - char abigap[288]; -}; - -static int is_sigreturn_64_address(unsigned long nip, unsigned long fp) -{ - if (nip == fp + offsetof(struct signal_frame_64, tramp)) - return 1; - if (vdso64_rt_sigtramp && current->mm->context.vdso_base && - nip == current->mm->context.vdso_base + vdso64_rt_sigtramp) - return 1; - return 0; -} - -/* - * Do some sanity checking on the signal frame pointed to by sp. - * We check the pinfo and puc pointers in the frame. - */ -static int sane_signal_64_frame(unsigned long sp) -{ - struct signal_frame_64 __user *sf; - unsigned long pinfo, puc; - - sf = (struct signal_frame_64 __user *) sp; - if (read_user_stack_64((unsigned long __user *) &sf->pinfo, &pinfo) || - read_user_stack_64((unsigned long __user *) &sf->puc, &puc)) - return 0; - return pinfo == (unsigned long) &sf->info && - puc == (unsigned long) &sf->uc; -} - -static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry, - struct pt_regs *regs) -{ - unsigned long sp, next_sp; - unsigned long next_ip; - unsigned long lr; - long level = 0; - struct signal_frame_64 __user *sigframe; - unsigned long __user *fp, *uregs; - - next_ip = perf_instruction_pointer(regs); - lr = regs->link; - sp = regs->gpr[1]; - perf_callchain_store(entry, next_ip); - - while (entry->nr < entry->max_stack) { - fp = (unsigned long __user *) sp; - if (invalid_user_sp(sp) || read_user_stack_64(fp, &next_sp)) - return; - if (level > 0 && read_user_stack_64(&fp[2], &next_ip)) - return; - - /* - * Note: the next_sp - sp >= signal frame size check - * is true when next_sp < sp, which can happen when - * transitioning from an alternate signal stack to the - * normal stack. 
- */ - if (next_sp - sp >= sizeof(struct signal_frame_64) && - (is_sigreturn_64_address(next_ip, sp) || - (level <= 1 && is_sigreturn_64_address(lr, sp))) && - sane_signal_64_frame(sp)) { - /* - * This looks like an signal frame - */ - sigframe = (struct signal_frame_64 __user *) sp; - uregs = sigframe->uc.uc_mcontext.gp_regs; - if (read_user_stack_64(&uregs[PT_NIP], &next_ip) || - read_user_stack_64(&uregs[PT_LNK], &lr) || - read_user_stack_64(&uregs[PT_R1], &sp)) - return; - level = 0; - perf_callchain_store_context(entry, PERF_CONTEXT_USER); - perf_callchain_store(entry, next_ip); - continue; - } - - if (level == 0) - next_ip = lr; - perf_callchain_store(entry, next_ip); - ++level; - sp = next_sp; - } -} - -#else /* CONFIG_PPC64 */ -static int read_user_stack_slow(void __user *ptr, void *buf, int nb) -{ - return 0; -} - -static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry, - struct pt_regs *regs) -{ -} - -#define __SIGNAL_FRAMESIZE32 __SIGNAL_FRAMESIZE -#define sigcontext32 sigcontext -#define mcontext32 mcontext -#define ucontext32 ucontext -#define compat_siginfo_t struct siginfo - -#endif /* CONFIG_PPC64 */ - -#if defined(CONFIG_PPC32) || defined(CONFIG_COMPAT) -/* - * On 32-bit we just access the address and let hash_page create a - * HPTE if necessary, so there is no need to fall back to reading - * the page tables. Since this is called at interrupt level, - * do_page_fault() won't treat a DSI as a page fault. - */ -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret) -{ - int rc; - - if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) || - ((unsigned long)ptr & 3)) - return -EFAULT; - - rc = probe_user_read(ret, ptr, sizeof(*ret)); - - if (IS_ENABLED(CONFIG_PPC64) && rc) - return read_user_stack_slow(ptr, ret, 4); - - return rc; -} - -/* - * Layout for non-RT signal frames - */ -struct signal_frame_32 { - char dummy[__SIGNAL_FRAMESIZE32]; - struct sigcontext32 sctx; - struct mcontext32 mctx; - int abigap[56]; -}; - -/* - * Layout for RT signal frames - */ -struct rt_signal_frame_32 { - char dummy[__SIGNAL_FRAMESIZE32 + 16]; - compat_siginfo_t info; - struct ucontext32 uc; - int abigap[56]; -}; - -static int is_sigreturn_32_address(unsigned int nip, unsigned int fp) -{ - if (nip == fp + offsetof(struct signal_frame_32, mctx.mc_pad)) - return 1; - if (vdso32_sigtramp && current->mm->context.vdso_base && - nip == current->mm->context.vdso_base + vdso32_sigtramp) - return 1; - return 0; -} - -static int is_rt_sigreturn_32_address(unsigned int nip, unsigned int fp) -{ - if (nip == fp + offsetof(struct rt_signal_frame_32, - uc.uc_mcontext.mc_pad)) - return 1; - if (vdso32_rt_sigtramp && current->mm->context.vdso_base && - nip == current->mm->context.vdso_base + vdso32_rt_sigtramp) - return 1; - return 0; -} - -static int sane_signal_32_frame(unsigned int sp) -{ - struct signal_frame_32 __user *sf; - unsigned int regs; - - sf = (struct signal_frame_32 __user *) (unsigned long) sp; - if (read_user_stack_32((unsigned int __user *) &sf->sctx.regs, ®s)) - return 0; - return regs == (unsigned long) &sf->mctx; -} - -static int sane_rt_signal_32_frame(unsigned int sp) -{ - struct rt_signal_frame_32 __user *sf; - unsigned int regs; - - sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp; - if (read_user_stack_32((unsigned int __user *) &sf->uc.uc_regs, ®s)) - return 0; - return regs == (unsigned long) &sf->uc.uc_mcontext; -} - -static unsigned int __user *signal_frame_32_regs(unsigned int sp, - unsigned int next_sp, unsigned int 
next_ip) -{ - struct mcontext32 __user *mctx = NULL; - struct signal_frame_32 __user *sf; - struct rt_signal_frame_32 __user *rt_sf; - - /* - * Note: the next_sp - sp >= signal frame size check - * is true when next_sp < sp, for example, when - * transitioning from an alternate signal stack to the - * normal stack. - */ - if (next_sp - sp >= sizeof(struct signal_frame_32) && - is_sigreturn_32_address(next_ip, sp) && - sane_signal_32_frame(sp)) { - sf = (struct signal_frame_32 __user *) (unsigned long) sp; - mctx = &sf->mctx; - } - - if (!mctx && next_sp - sp >= sizeof(struct rt_signal_frame_32) && - is_rt_sigreturn_32_address(next_ip, sp) && - sane_rt_signal_32_frame(sp)) { - rt_sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp; - mctx = &rt_sf->uc.uc_mcontext; - } - - if (!mctx) - return NULL; - return mctx->mc_gregs; -} - -static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, - struct pt_regs *regs) -{ - unsigned int sp, next_sp; - unsigned int next_ip; - unsigned int lr; - long level = 0; - unsigned int __user *fp, *uregs; - - next_ip = perf_instruction_pointer(regs); - lr = regs->link; - sp = regs->gpr[1]; - perf_callchain_store(entry, next_ip); - - while (entry->nr < entry->max_stack) { - fp = (unsigned int __user *) (unsigned long) sp; - if (invalid_user_sp(sp) || read_user_stack_32(fp, &next_sp)) - return; - if (level > 0 && read_user_stack_32(&fp[1], &next_ip)) - return; - - uregs = signal_frame_32_regs(sp, next_sp, next_ip); - if (!uregs && level <= 1) - uregs = signal_frame_32_regs(sp, next_sp, lr); - if (uregs) { - /* - * This looks like an signal frame, so restart - * the stack trace with the values in it. - */ - if (read_user_stack_32(&uregs[PT_NIP], &next_ip) || - read_user_stack_32(&uregs[PT_LNK], &lr) || - read_user_stack_32(&uregs[PT_R1], &sp)) - return; - level = 0; - perf_callchain_store_context(entry, PERF_CONTEXT_USER); - perf_callchain_store(entry, next_ip); - continue; - } - - if (level == 0) - next_ip = lr; - perf_callchain_store(entry, next_ip); - ++level; - sp = next_sp; - } -} -#else /* 32bit */ -static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, - struct pt_regs *regs) -{} -#endif /* 32bit */ - void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs) { diff --git a/arch/powerpc/perf/callchain.h b/arch/powerpc/perf/callchain.h new file mode 100644 index 000000000000..7a2cb9e1181a --- /dev/null +++ b/arch/powerpc/perf/callchain.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _POWERPC_PERF_CALLCHAIN_H +#define _POWERPC_PERF_CALLCHAIN_H + +int read_user_stack_slow(void __user *ptr, void *buf, int nb); +void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs); +void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs); + +static inline bool invalid_user_sp(unsigned long sp) +{ + unsigned long mask = is_32bit_task() ? 3 : 7; + unsigned long top = STACK_TOP - (is_32bit_task() ? 16 : 32); + + return (!sp || (sp & mask) || (sp > top)); +} + +#endif /* _POWERPC_PERF_CALLCHAIN_H */ diff --git a/arch/powerpc/perf/callchain_32.c b/arch/powerpc/perf/callchain_32.c new file mode 100644 index 000000000000..8aa951003141 --- /dev/null +++ b/arch/powerpc/perf/callchain_32.c @@ -0,0 +1,196 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Performance counter callchain support - powerpc architecture code + * + * Copyright © 2009 Paul Mackerras, IBM Corporation. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "callchain.h" + +#ifdef CONFIG_PPC64 +#include "../kernel/ppc32.h" +#else /* CONFIG_PPC64 */ + +#define __SIGNAL_FRAMESIZE32 __SIGNAL_FRAMESIZE +#define sigcontext32 sigcontext +#define mcontext32 mcontext +#define ucontext32 ucontext +#define compat_siginfo_t struct siginfo + +#endif /* CONFIG_PPC64 */ + +/* + * On 32-bit we just access the address and let hash_page create a + * HPTE if necessary, so there is no need to fall back to reading + * the page tables. Since this is called at interrupt level, + * do_page_fault() won't treat a DSI as a page fault. + */ +static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret) +{ + int rc; + + if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) || + ((unsigned long)ptr & 3)) + return -EFAULT; + + rc = probe_user_read(ret, ptr, sizeof(*ret)); + + if (IS_ENABLED(CONFIG_PPC64) && rc) + return read_user_stack_slow(ptr, ret, 4); + + return rc; +} + +/* + * Layout for non-RT signal frames + */ +struct signal_frame_32 { + char dummy[__SIGNAL_FRAMESIZE32]; + struct sigcontext32 sctx; + struct mcontext32 mctx; + int abigap[56]; +}; + +/* + * Layout for RT signal frames + */ +struct rt_signal_frame_32 { + char dummy[__SIGNAL_FRAMESIZE32 + 16]; + compat_siginfo_t info; + struct ucontext32 uc; + int abigap[56]; +}; + +static int is_sigreturn_32_address(unsigned int nip, unsigned int fp) +{ + if (nip == fp + offsetof(struct signal_frame_32, mctx.mc_pad)) + return 1; + if (vdso32_sigtramp && current->mm->context.vdso_base && + nip == current->mm->context.vdso_base + vdso32_sigtramp) + return 1; + return 0; +} + +static int is_rt_sigreturn_32_address(unsigned int nip, unsigned int fp) +{ + if (nip == fp + offsetof(struct rt_signal_frame_32, + uc.uc_mcontext.mc_pad)) + return 1; + if (vdso32_rt_sigtramp && current->mm->context.vdso_base && + nip == current->mm->context.vdso_base + vdso32_rt_sigtramp) + return 1; + return 0; +} + +static int sane_signal_32_frame(unsigned int sp) +{ + struct signal_frame_32 __user *sf; + unsigned int regs; + + sf = (struct signal_frame_32 __user *) (unsigned long) sp; + if (read_user_stack_32((unsigned int __user *) &sf->sctx.regs, ®s)) + return 0; + return regs == (unsigned long) &sf->mctx; +} + +static int sane_rt_signal_32_frame(unsigned int sp) +{ + struct rt_signal_frame_32 __user *sf; + unsigned int regs; + + sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp; + if (read_user_stack_32((unsigned int __user *) &sf->uc.uc_regs, ®s)) + return 0; + return regs == (unsigned long) &sf->uc.uc_mcontext; +} + +static unsigned int __user *signal_frame_32_regs(unsigned int sp, + unsigned int next_sp, unsigned int next_ip) +{ + struct mcontext32 __user *mctx = NULL; + struct signal_frame_32 __user *sf; + struct rt_signal_frame_32 __user *rt_sf; + + /* + * Note: the next_sp - sp >= signal frame size check + * is true when next_sp < sp, for example, when + * transitioning from an alternate signal stack to the + * normal stack. 
+	 */
+	if (next_sp - sp >= sizeof(struct signal_frame_32) &&
+	    is_sigreturn_32_address(next_ip, sp) &&
+	    sane_signal_32_frame(sp)) {
+		sf = (struct signal_frame_32 __user *) (unsigned long) sp;
+		mctx = &sf->mctx;
+	}
+
+	if (!mctx && next_sp - sp >= sizeof(struct rt_signal_frame_32) &&
+	    is_rt_sigreturn_32_address(next_ip, sp) &&
+	    sane_rt_signal_32_frame(sp)) {
+		rt_sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
+		mctx = &rt_sf->uc.uc_mcontext;
+	}
+
+	if (!mctx)
+		return NULL;
+	return mctx->mc_gregs;
+}
+
+void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs)
+{
+	unsigned int sp, next_sp;
+	unsigned int next_ip;
+	unsigned int lr;
+	long level = 0;
+	unsigned int __user *fp, *uregs;
+
+	next_ip = perf_instruction_pointer(regs);
+	lr = regs->link;
+	sp = regs->gpr[1];
+	perf_callchain_store(entry, next_ip);
+
+	while (entry->nr < entry->max_stack) {
+		fp = (unsigned int __user *) (unsigned long) sp;
+		if (invalid_user_sp(sp) || read_user_stack_32(fp, &next_sp))
+			return;
+		if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
+			return;
+
+		uregs = signal_frame_32_regs(sp, next_sp, next_ip);
+		if (!uregs && level <= 1)
+			uregs = signal_frame_32_regs(sp, next_sp, lr);
+		if (uregs) {
+			/*
+			 * This looks like a signal frame, so restart
+			 * the stack trace with the values in it.
+			 */
+			if (read_user_stack_32(&uregs[PT_NIP], &next_ip) ||
+			    read_user_stack_32(&uregs[PT_LNK], &lr) ||
+			    read_user_stack_32(&uregs[PT_R1], &sp))
+				return;
+			level = 0;
+			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
+			perf_callchain_store(entry, next_ip);
+			continue;
+		}
+
+		if (level == 0)
+			next_ip = lr;
+		perf_callchain_store(entry, next_ip);
+		++level;
+		sp = next_sp;
+	}
+}
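For readers unfamiliar with the powerpc user stack layout, the walk in perf_callchain_user_32() above boils down to following a chain of back pointers: the first return address comes from regs->link, later ones from the LR save slot at fp[1]. The stand-alone user-space sketch below mimics that loop against a fake stack buffer; the buffer contents, the addresses and the read_word() helper are invented for illustration and are not part of the patch. The signal-frame detour (signal_frame_32_regs() and friends) is deliberately left out; it only changes where next_ip, lr and sp are reloaded from.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy frame layout: word 0 is the back chain, word 1 is the saved LR slot. */
static int read_word(const uint8_t *stack, size_t size, uint32_t addr,
                     uint32_t *ret)
{
        if (addr > size - sizeof(uint32_t) || (addr & 3))
                return -1;              /* mirrors the -EFAULT path above */
        memcpy(ret, stack + addr, sizeof(*ret));
        return 0;
}

int main(void)
{
        uint8_t stack[64] = { 0 };
        uint32_t frames[3][2] = {
                { 24, 0 },              /* frame at 8: back chain only */
                { 48, 0x10000300 },     /* frame at 24: caller's saved LR */
                { 0,  0x10000400 },     /* frame at 48: outermost frame */
        };
        uint32_t sp = 8, lr = 0x10000200, next_sp, next_ip;
        unsigned int level = 0;

        memcpy(stack + 8,  frames[0], sizeof(frames[0]));
        memcpy(stack + 24, frames[1], sizeof(frames[1]));
        memcpy(stack + 48, frames[2], sizeof(frames[2]));

        printf("pc  0x%08x\n", 0x10000100); /* stands in for perf_instruction_pointer() */
        while (sp && sp < sizeof(stack)) {
                if (read_word(stack, sizeof(stack), sp, &next_sp))
                        break;
                if (level > 0 && read_word(stack, sizeof(stack), sp + 4, &next_ip))
                        break;
                if (level == 0)
                        next_ip = lr;   /* first caller comes from regs->link */
                printf("ret 0x%08x\n", next_ip);
                level++;
                sp = next_sp;
        }
        return 0;
}

Running it prints the program counter followed by the three fabricated return addresses, which is exactly the order in which the kernel walker feeds entries to perf_callchain_store().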
diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
new file mode 100644
index 000000000000..df1ffd8b20f2
--- /dev/null
+++ b/arch/powerpc/perf/callchain_64.c
@@ -0,0 +1,174 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Performance counter callchain support - powerpc architecture code
+ *
+ * Copyright © 2009 Paul Mackerras, IBM Corporation.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "callchain.h"
+
+/*
+ * On 64-bit we don't want to invoke hash_page on user addresses from
+ * interrupt context, so if the access faults, we read the page tables
+ * to find which page (if any) is mapped and access it directly.
+ */
+int read_user_stack_slow(void __user *ptr, void *buf, int nb)
+{
+	int ret = -EFAULT;
+	pgd_t *pgdir;
+	pte_t *ptep, pte;
+	unsigned int shift;
+	unsigned long addr = (unsigned long) ptr;
+	unsigned long offset;
+	unsigned long pfn, flags;
+	void *kaddr;
+
+	pgdir = current->mm->pgd;
+	if (!pgdir)
+		return -EFAULT;
+
+	local_irq_save(flags);
+	ptep = find_current_mm_pte(pgdir, addr, NULL, &shift);
+	if (!ptep)
+		goto err_out;
+	if (!shift)
+		shift = PAGE_SHIFT;
+
+	/* align address to page boundary */
+	offset = addr & ((1UL << shift) - 1);
+
+	pte = READ_ONCE(*ptep);
+	if (!pte_present(pte) || !pte_user(pte))
+		goto err_out;
+	pfn = pte_pfn(pte);
+	if (!page_is_ram(pfn))
+		goto err_out;
+
+	/* no highmem to worry about here */
+	kaddr = pfn_to_kaddr(pfn);
+	memcpy(buf, kaddr + offset, nb);
+	ret = 0;
+err_out:
+	local_irq_restore(flags);
+	return ret;
+}
+
+static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
+{
+	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
+	    ((unsigned long)ptr & 7))
+		return -EFAULT;
+
+	if (!probe_user_read(ret, ptr, sizeof(*ret)))
+		return 0;
+
+	return read_user_stack_slow(ptr, ret, 8);
+}
+
+/*
+ * 64-bit user processes use the same stack frame for RT and non-RT signals.
+ */
+struct signal_frame_64 {
+	char		dummy[__SIGNAL_FRAMESIZE];
+	struct ucontext	uc;
+	unsigned long	unused[2];
+	unsigned int	tramp[6];
+	struct siginfo	*pinfo;
+	void		*puc;
+	struct siginfo	info;
+	char		abigap[288];
+};
+
+static int is_sigreturn_64_address(unsigned long nip, unsigned long fp)
+{
+	if (nip == fp + offsetof(struct signal_frame_64, tramp))
+		return 1;
+	if (vdso64_rt_sigtramp && current->mm->context.vdso_base &&
+	    nip == current->mm->context.vdso_base + vdso64_rt_sigtramp)
+		return 1;
+	return 0;
+}
+
+/*
+ * Do some sanity checking on the signal frame pointed to by sp.
+ * We check the pinfo and puc pointers in the frame.
+ */
+static int sane_signal_64_frame(unsigned long sp)
+{
+	struct signal_frame_64 __user *sf;
+	unsigned long pinfo, puc;
+
+	sf = (struct signal_frame_64 __user *) sp;
+	if (read_user_stack_64((unsigned long __user *) &sf->pinfo, &pinfo) ||
+	    read_user_stack_64((unsigned long __user *) &sf->puc, &puc))
+		return 0;
+	return pinfo == (unsigned long) &sf->info &&
+		puc == (unsigned long) &sf->uc;
+}
+
+void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs)
+{
+	unsigned long sp, next_sp;
+	unsigned long next_ip;
+	unsigned long lr;
+	long level = 0;
+	struct signal_frame_64 __user *sigframe;
+	unsigned long __user *fp, *uregs;
+
+	next_ip = perf_instruction_pointer(regs);
+	lr = regs->link;
+	sp = regs->gpr[1];
+	perf_callchain_store(entry, next_ip);
+
+	while (entry->nr < entry->max_stack) {
+		fp = (unsigned long __user *) sp;
+		if (invalid_user_sp(sp) || read_user_stack_64(fp, &next_sp))
+			return;
+		if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
+			return;
+
+		/*
+		 * Note: the next_sp - sp >= signal frame size check
+		 * is true when next_sp < sp, which can happen when
+		 * transitioning from an alternate signal stack to the
+		 * normal stack.
+		 */
+		if (next_sp - sp >= sizeof(struct signal_frame_64) &&
+		    (is_sigreturn_64_address(next_ip, sp) ||
+		     (level <= 1 && is_sigreturn_64_address(lr, sp))) &&
+		    sane_signal_64_frame(sp)) {
+			/*
+			 * This looks like a signal frame
+			 */
+			sigframe = (struct signal_frame_64 __user *) sp;
+			uregs = sigframe->uc.uc_mcontext.gp_regs;
+			if (read_user_stack_64(&uregs[PT_NIP], &next_ip) ||
+			    read_user_stack_64(&uregs[PT_LNK], &lr) ||
+			    read_user_stack_64(&uregs[PT_R1], &sp))
+				return;
+			level = 0;
+			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
+			perf_callchain_store(entry, next_ip);
+			continue;
+		}
+
+		if (level == 0)
+			next_ip = lr;
+		perf_callchain_store(entry, next_ip);
+		++level;
+		sp = next_sp;
+	}
+}
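Both walkers lean on the small invalid_user_sp() helper added to callchain.h earlier in this series. As a quick way to see what it accepts and rejects, here is a user-space transcription; STACK_TOP is not visible from userspace, so the two constants below are placeholder values chosen only for this illustration.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder stand-ins for STACK_TOP of 64-bit and 32-bit tasks. */
#define FAKE_STACK_TOP_64  0x0001000000000000ULL
#define FAKE_STACK_TOP_32  0x00000000c0000000ULL

static bool invalid_user_sp(uint64_t sp, bool is_32bit_task)
{
        uint64_t mask = is_32bit_task ? 3 : 7;  /* minimum stack alignment - 1 */
        uint64_t top  = (is_32bit_task ? FAKE_STACK_TOP_32 : FAKE_STACK_TOP_64) -
                        (is_32bit_task ? 16 : 32); /* leave room for a minimal frame */

        return !sp || (sp & mask) || sp > top;
}

int main(void)
{
        printf("%d\n", invalid_user_sp(0, false));          /* 1: NULL stack pointer */
        printf("%d\n", invalid_user_sp(0x7ffe0004, false)); /* 1: misaligned for 64-bit */
        printf("%d\n", invalid_user_sp(0x7ffe0008, false)); /* 0: plausible 64-bit sp */
        printf("%d\n", invalid_user_sp(0xbfffffe0, true));  /* 0: plausible 32-bit sp */
        return 0;
}

The three conditions are exactly those in the header: a NULL pointer, a pointer that breaks the ABI's stack alignment, or a pointer too close to the top of the address space to hold even a minimal frame.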
From patchwork Fri Mar 20 10:20:19 2020
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1258816
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
Date: Fri, 20 Mar 2020 11:20:19 +0100
Message-Id: <4b150d01c60bd37705789200d9adee9f1c9b50ce.1584699455.git.msuchanek@suse.de>
X-Mailer: git-send-email 2.23.0
In-Reply-To:
References: <20200225173541.1549955-1-npiggin@gmail.com>

While at it also simplify the existing perf patterns.

Signed-off-by: Michal Suchanek
---
v10: new patch
V12: remove redundant entries
---
 MAINTAINERS | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index e1a99197fb34..578429d22220 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13080,7 +13080,7 @@ R:	Namhyung Kim
 L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
 S:	Supported
-F:	kernel/events/*
+F:	kernel/events/
 F:	include/linux/perf_event.h
 F:	include/uapi/linux/perf_event.h
 F:	arch/*/kernel/perf_event*.c
@@ -13088,8 +13088,8 @@ F:	arch/*/kernel/*/perf_event*.c
 F:	arch/*/kernel/*/*/perf_event*.c
 F:	arch/*/include/asm/perf_event.h
 F:	arch/*/kernel/perf_callchain.c
-F:	arch/*/events/*
-F:	arch/*/events/*/*
+F:	arch/*/events/
+F:	arch/*/perf/
 F:	tools/perf/

 PERFORMANCE EVENTS SUBSYSTEM ARM64 PMU EVENTS
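The effect of the new F: lines is easiest to see by matching the paths this series introduces against the old and new patterns. get_maintainer.pl has its own matcher; the snippet below only approximates a trailing-slash F: entry with fnmatch(3) (appending '*' and letting '*' cross directory separators), which is enough to show why arch/*/perf/ now picks up the split callchain files while the existing entries did not.

#include <fnmatch.h>
#include <stdio.h>
#include <string.h>

/* Crude approximation of a MAINTAINERS "F:" entry: a pattern ending in '/'
 * is treated as "everything below this directory". */
static int f_entry_matches(const char *pattern, const char *path)
{
        char glob[256];
        size_t len = strlen(pattern);

        snprintf(glob, sizeof(glob), "%s%s", pattern,
                 len && pattern[len - 1] == '/' ? "*" : "");
        return fnmatch(glob, path, 0) == 0;
}

int main(void)
{
        const char *path = "arch/powerpc/perf/callchain_32.c";

        printf("%d\n", f_entry_matches("arch/*/kernel/perf_callchain.c", path)); /* 0 */
        printf("%d\n", f_entry_matches("arch/*/events/", path));                 /* 0 */
        printf("%d\n", f_entry_matches("arch/*/perf/", path));                   /* 1 */
        return 0;
}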