From patchwork Tue Nov 12 16:51:59 2019
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1193692
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 01/33] powerpc/64s/exception: Introduce INT_DEFINE parameter block for code generation
Date: Tue, 12 Nov 2019 17:51:59 +0100
Message-Id: <854059454c690c6f63957a2130aa04e2cd501af2.1573576649.git.msuchanek@suse.de>

From: Nicholas Piggin

The code generation macro arguments are difficult to read, and defaults
can't easily be used.

This introduces a block where parameters can be set for interrupt
handler code generation by the subsequent macros, and adds the first
generation macro for interrupt entry.

One interrupt handler is converted to the new macros to demonstrate the
change; the rest will be converted all at once.

No generated code change.
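As an illustration only (not part of the patch): a minimal sketch of how
such a parameter block is written and consumed. The "example_handler"
name and the 0x999 vector are invented for this sketch; any parameter
left unset falls back to the defaults filled in by do_define_int.

	INT_DEFINE_BEGIN(example_handler)
		IVEC=0x999	/* vector number; the only parameter that must be set */
		IDAR=1		/* entry code reads and saves DAR */
		IKVM_REAL=1	/* generate the real-mode KVM guest test */
		/* IHSRR, IAREA, ISET_RI, IMASK, ... take their defaults */
	INT_DEFINE_END(example_handler)

		/* entry code is then generated from the named parameters: */
		GEN_INT_ENTRY example_handler, virt=0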
Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 77 ++++++++++++++++++++++++++-- 1 file changed, 73 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index d0018dd17e0a..e6ad6e6cf65e 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -193,6 +193,61 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) mtctr reg; \ bctr +/* + * Interrupt code generation macros + */ +#define IVEC .L_IVEC_\name\() +#define IHSRR .L_IHSRR_\name\() +#define IAREA .L_IAREA_\name\() +#define IDAR .L_IDAR_\name\() +#define IDSISR .L_IDSISR_\name\() +#define ISET_RI .L_ISET_RI_\name\() +#define IEARLY .L_IEARLY_\name\() +#define IMASK .L_IMASK_\name\() +#define IKVM_REAL .L_IKVM_REAL_\name\() +#define IKVM_VIRT .L_IKVM_VIRT_\name\() + +#define INT_DEFINE_BEGIN(n) \ +.macro int_define_ ## n name + +#define INT_DEFINE_END(n) \ +.endm ; \ +int_define_ ## n n ; \ +do_define_int n + +.macro do_define_int name + .ifndef IVEC + .error "IVEC not defined" + .endif + .ifndef IHSRR + IHSRR=EXC_STD + .endif + .ifndef IAREA + IAREA=PACA_EXGEN + .endif + .ifndef IDAR + IDAR=0 + .endif + .ifndef IDSISR + IDSISR=0 + .endif + .ifndef ISET_RI + ISET_RI=1 + .endif + .ifndef IEARLY + IEARLY=0 + .endif + .ifndef IMASK + IMASK=0 + .endif + .ifndef IKVM_REAL + IKVM_REAL=0 + .endif + .ifndef IKVM_VIRT + IKVM_VIRT=0 + .endif +.endm + .macro INT_KVM_HANDLER name, vec, hsrr, area, skip TRAMP_KVM_BEGIN(\name\()_kvm) KVM_HANDLER \vec, \hsrr, \area, \skip @@ -474,7 +529,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) */ GET_SCRATCH0(r10) std r10,\area\()+EX_R13(r13) - .if \dar + .if \dar == 1 .if \hsrr mfspr r10,SPRN_HDAR .else @@ -482,7 +537,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .endif std r10,\area\()+EX_DAR(r13) .endif - .if \dsisr + .if \dsisr == 1 .if \hsrr mfspr r10,SPRN_HDSISR .else @@ -506,6 +561,14 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .endif .endm +.macro GEN_INT_ENTRY name, virt, ool=0 + .if ! \virt + INT_HANDLER \name, IVEC, \ool, IEARLY, \virt, IHSRR, IAREA, ISET_RI, IDAR, IDSISR, IMASK, IKVM_REAL + .else + INT_HANDLER \name, IVEC, \ool, IEARLY, \virt, IHSRR, IAREA, ISET_RI, IDAR, IDSISR, IMASK, IKVM_VIRT + .endif +.endm + /* * On entry r13 points to the paca, r9-r13 are saved in the paca, * r9 contains the saved CR, r11 and r12 contain the saved SRR0 and @@ -1143,12 +1206,18 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) bl unrecoverable_exception b . 
+INT_DEFINE_BEGIN(data_access)
+	IVEC=0x300
+	IDAR=1
+	IDSISR=1
+	IKVM_REAL=1
+INT_DEFINE_END(data_access)
 EXC_REAL_BEGIN(data_access, 0x300, 0x80)
-	INT_HANDLER data_access, 0x300, ool=1, dar=1, dsisr=1, kvm=1
+	GEN_INT_ENTRY data_access, virt=0, ool=1
 EXC_REAL_END(data_access, 0x300, 0x80)
 EXC_VIRT_BEGIN(data_access, 0x4300, 0x80)
-	INT_HANDLER data_access, 0x300, virt=1, dar=1, dsisr=1
+	GEN_INT_ENTRY data_access, virt=1
 EXC_VIRT_END(data_access, 0x4300, 0x80)
 INT_KVM_HANDLER data_access, 0x300, EXC_STD, PACA_EXGEN, 1
 EXC_COMMON_BEGIN(data_access_common)

From patchwork Tue Nov 12 16:52:00 2019
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1193693
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 02/33] powerpc/64s/exception: Add GEN_COMMON macro that uses INT_DEFINE parameters
Date: Tue, 12 Nov 2019 17:52:00 +0100

From: Nicholas Piggin

No generated code change.

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kernel/exceptions-64s.S | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index e6ad6e6cf65e..591ae2a73e18 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -206,6 +206,9 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define IMASK		.L_IMASK_\name\()
 #define IKVM_REAL	.L_IKVM_REAL_\name\()
 #define IKVM_VIRT	.L_IKVM_VIRT_\name\()
+#define ISTACK		.L_ISTACK_\name\()
+#define IRECONCILE	.L_IRECONCILE_\name\()
+#define IKUAP		.L_IKUAP_\name\()
 
 #define INT_DEFINE_BEGIN(n)						\
 .macro int_define_ ## n name
@@ -246,6 +249,15 @@ do_define_int n
 	.ifndef IKVM_VIRT
 		IKVM_VIRT=0
 	.endif
+	.ifndef ISTACK
+		ISTACK=1
+	.endif
+	.ifndef IRECONCILE
+		IRECONCILE=1
+	.endif
+	.ifndef IKUAP
+		IKUAP=1
+	.endif
 .endm
 
 .macro INT_KVM_HANDLER name, vec, hsrr, area, skip
@@ -670,6 +682,10 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66)
 	.endif
 .endm
 
+.macro GEN_COMMON name
+	INT_COMMON IVEC, IAREA, ISTACK, IKUAP, IRECONCILE, IDAR, IDSISR
+.endm
+
 /*
  * Restore all registers including H/SRR0/1 saved in a stack frame of a
  * standard exception.
@@ -1221,13 +1237,7 @@ EXC_VIRT_BEGIN(data_access, 0x4300, 0x80)
 EXC_VIRT_END(data_access, 0x4300, 0x80)
 INT_KVM_HANDLER data_access, 0x300, EXC_STD, PACA_EXGEN, 1
 EXC_COMMON_BEGIN(data_access_common)
-	/*
-	 * Here r13 points to the paca, r9 contains the saved CR,
-	 * SRR0 and SRR1 are saved in r11 and r12,
-	 * r9 - r13 are saved in paca->exgen.
-	 * EX_DAR and EX_DSISR have saved DAR/DSISR
-	 */
-	INT_COMMON 0x300, PACA_EXGEN, 1, 1, 1, 1, 1
+	GEN_COMMON data_access
 	ld	r4,_DAR(r1)
 	ld	r5,_DSISR(r1)
 BEGIN_MMU_FTR_SECTION

From patchwork Tue Nov 12 16:52:01 2019
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1193694
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 03/33] powerpc/64s/exception: Add GEN_KVM macro that uses INT_DEFINE parameters
Date: Tue, 12 Nov 2019 17:52:01 +0100

From: Nicholas Piggin

No generated code change.
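For review purposes only (not part of the patch), the shape of the
conversion this introduces for the KVM trampolines: GEN_KVM pulls the
vector, HSRR type, save area and skip flag from the handler's
INT_DEFINE block instead of repeating them at each use site.

	/* old style: every argument spelled out at the use site */
	INT_KVM_HANDLER data_access, 0x300, EXC_STD, PACA_EXGEN, 1

	/*
	 * new style: GEN_KVM expands to
	 *   KVM_HANDLER IVEC, IHSRR, IAREA, IKVM_SKIP
	 * using the values set in the data_access parameter block
	 */
	TRAMP_KVM_BEGIN(data_access_kvm)
		GEN_KVM data_access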
Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 591ae2a73e18..0e39e98ef719 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -204,6 +204,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) #define ISET_RI .L_ISET_RI_\name\() #define IEARLY .L_IEARLY_\name\() #define IMASK .L_IMASK_\name\() +#define IKVM_SKIP .L_IKVM_SKIP_\name\() #define IKVM_REAL .L_IKVM_REAL_\name\() #define IKVM_VIRT .L_IKVM_VIRT_\name\() #define ISTACK .L_ISTACK_\name\() @@ -243,6 +244,9 @@ do_define_int n .ifndef IMASK IMASK=0 .endif + .ifndef IKVM_SKIP + IKVM_SKIP=0 + .endif .ifndef IKVM_REAL IKVM_REAL=0 .endif @@ -265,6 +269,10 @@ do_define_int n KVM_HANDLER \vec, \hsrr, \area, \skip .endm +.macro GEN_KVM name + KVM_HANDLER IVEC, IHSRR, IAREA, IKVM_SKIP +.endm + #ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE /* @@ -1226,6 +1234,7 @@ INT_DEFINE_BEGIN(data_access) IVEC=0x300 IDAR=1 IDSISR=1 + IKVM_SKIP=1 IKVM_REAL=1 INT_DEFINE_END(data_access) @@ -1235,7 +1244,8 @@ EXC_REAL_END(data_access, 0x300, 0x80) EXC_VIRT_BEGIN(data_access, 0x4300, 0x80) GEN_INT_ENTRY data_access, virt=1 EXC_VIRT_END(data_access, 0x4300, 0x80) -INT_KVM_HANDLER data_access, 0x300, EXC_STD, PACA_EXGEN, 1 +TRAMP_KVM_BEGIN(data_access_kvm) + GEN_KVM data_access EXC_COMMON_BEGIN(data_access_common) GEN_COMMON data_access ld r4,_DAR(r1) From patchwork Tue Nov 12 16:52:02 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193695 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CDmj6cv9z9s7T for ; Wed, 13 Nov 2019 04:11:33 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CDmj4fFdzF5Dx for ; Wed, 13 Nov 2019 04:11:33 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDMl3bkgzF0jn for ; Wed, 13 Nov 2019 03:53:23 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 3ECCFB2EF; Tue, 12 Nov 2019 16:53:20 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 04/33] powerpc/64s/exception: Expand EXC_COMMON and EXC_COMMON_ASYNC macros Date: Tue, 12 Nov 2019 17:52:02 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 
1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin These don't provide a large amount of code sharing. Removing them makes code easier to shuffle around. For example, some of the common instructions will be moved into the common code gen macro. No generated code change. Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 160 ++++++++++++++++++++------- 1 file changed, 117 insertions(+), 43 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 0e39e98ef719..828fa4df15cf 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -757,28 +757,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP) #define FINISH_NAP #endif -#define EXC_COMMON(name, realvec, hdlr) \ - EXC_COMMON_BEGIN(name); \ - INT_COMMON realvec, PACA_EXGEN, 1, 1, 1, 0, 0 ; \ - bl save_nvgprs; \ - addi r3,r1,STACK_FRAME_OVERHEAD; \ - bl hdlr; \ - b ret_from_except - -/* - * Like EXC_COMMON, but for exceptions that can occur in the idle task and - * therefore need the special idle handling (finish nap and runlatch) - */ -#define EXC_COMMON_ASYNC(name, realvec, hdlr) \ - EXC_COMMON_BEGIN(name); \ - INT_COMMON realvec, PACA_EXGEN, 1, 1, 1, 0, 0 ; \ - FINISH_NAP; \ - RUNLATCH_ON; \ - addi r3,r1,STACK_FRAME_OVERHEAD; \ - bl hdlr; \ - b ret_from_except_lite - - /* * There are a few constraints to be concerned with. * - Real mode exceptions code/data must be located at their physical location. 
@@ -1349,7 +1327,13 @@ EXC_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x100) INT_HANDLER hardware_interrupt, 0x500, virt=1, hsrr=EXC_HV_OR_STD, bitmask=IRQS_DISABLED, kvm=1 EXC_VIRT_END(hardware_interrupt, 0x4500, 0x100) INT_KVM_HANDLER hardware_interrupt, 0x500, EXC_HV_OR_STD, PACA_EXGEN, 0 -EXC_COMMON_ASYNC(hardware_interrupt_common, 0x500, do_IRQ) +EXC_COMMON_BEGIN(hardware_interrupt_common) + INT_COMMON 0x500, PACA_EXGEN, 1, 1, 1, 0, 0 + FINISH_NAP + RUNLATCH_ON + addi r3,r1,STACK_FRAME_OVERHEAD + bl do_IRQ + b ret_from_except_lite EXC_REAL_BEGIN(alignment, 0x600, 0x100) @@ -1455,7 +1439,13 @@ EXC_VIRT_BEGIN(decrementer, 0x4900, 0x80) INT_HANDLER decrementer, 0x900, virt=1, bitmask=IRQS_DISABLED EXC_VIRT_END(decrementer, 0x4900, 0x80) INT_KVM_HANDLER decrementer, 0x900, EXC_STD, PACA_EXGEN, 0 -EXC_COMMON_ASYNC(decrementer_common, 0x900, timer_interrupt) +EXC_COMMON_BEGIN(decrementer_common) + INT_COMMON 0x900, PACA_EXGEN, 1, 1, 1, 0, 0 + FINISH_NAP + RUNLATCH_ON + addi r3,r1,STACK_FRAME_OVERHEAD + bl timer_interrupt + b ret_from_except_lite EXC_REAL_BEGIN(hdecrementer, 0x980, 0x80) @@ -1465,7 +1455,12 @@ EXC_VIRT_BEGIN(hdecrementer, 0x4980, 0x80) INT_HANDLER hdecrementer, 0x980, virt=1, hsrr=EXC_HV, kvm=1 EXC_VIRT_END(hdecrementer, 0x4980, 0x80) INT_KVM_HANDLER hdecrementer, 0x980, EXC_HV, PACA_EXGEN, 0 -EXC_COMMON(hdecrementer_common, 0x980, hdec_interrupt) +EXC_COMMON_BEGIN(hdecrementer_common) + INT_COMMON 0x980, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl hdec_interrupt + b ret_from_except EXC_REAL_BEGIN(doorbell_super, 0xa00, 0x100) @@ -1475,11 +1470,17 @@ EXC_VIRT_BEGIN(doorbell_super, 0x4a00, 0x100) INT_HANDLER doorbell_super, 0xa00, virt=1, bitmask=IRQS_DISABLED EXC_VIRT_END(doorbell_super, 0x4a00, 0x100) INT_KVM_HANDLER doorbell_super, 0xa00, EXC_STD, PACA_EXGEN, 0 +EXC_COMMON_BEGIN(doorbell_super_common) + INT_COMMON 0xa00, PACA_EXGEN, 1, 1, 1, 0, 0 + FINISH_NAP + RUNLATCH_ON + addi r3,r1,STACK_FRAME_OVERHEAD #ifdef CONFIG_PPC_DOORBELL -EXC_COMMON_ASYNC(doorbell_super_common, 0xa00, doorbell_exception) + bl doorbell_exception #else -EXC_COMMON_ASYNC(doorbell_super_common, 0xa00, unknown_exception) + bl unknown_exception #endif + b ret_from_except_lite EXC_REAL_NONE(0xb00, 0x100) @@ -1623,7 +1624,12 @@ EXC_VIRT_BEGIN(single_step, 0x4d00, 0x100) INT_HANDLER single_step, 0xd00, virt=1 EXC_VIRT_END(single_step, 0x4d00, 0x100) INT_KVM_HANDLER single_step, 0xd00, EXC_STD, PACA_EXGEN, 0 -EXC_COMMON(single_step_common, 0xd00, single_step_exception) +EXC_COMMON_BEGIN(single_step_common) + INT_COMMON 0xd00, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl single_step_exception + b ret_from_except EXC_REAL_BEGIN(h_data_storage, 0xe00, 0x20) @@ -1654,7 +1660,12 @@ EXC_VIRT_BEGIN(h_instr_storage, 0x4e20, 0x20) INT_HANDLER h_instr_storage, 0xe20, ool=1, virt=1, hsrr=EXC_HV, kvm=1 EXC_VIRT_END(h_instr_storage, 0x4e20, 0x20) INT_KVM_HANDLER h_instr_storage, 0xe20, EXC_HV, PACA_EXGEN, 0 -EXC_COMMON(h_instr_storage_common, 0xe20, unknown_exception) +EXC_COMMON_BEGIN(h_instr_storage_common) + INT_COMMON 0xe20, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl unknown_exception + b ret_from_except EXC_REAL_BEGIN(emulation_assist, 0xe40, 0x20) @@ -1664,7 +1675,12 @@ EXC_VIRT_BEGIN(emulation_assist, 0x4e40, 0x20) INT_HANDLER emulation_assist, 0xe40, ool=1, virt=1, hsrr=EXC_HV, kvm=1 EXC_VIRT_END(emulation_assist, 0x4e40, 0x20) INT_KVM_HANDLER emulation_assist, 0xe40, EXC_HV, PACA_EXGEN, 0 
-EXC_COMMON(emulation_assist_common, 0xe40, emulation_assist_interrupt) +EXC_COMMON_BEGIN(emulation_assist_common) + INT_COMMON 0xe40, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl emulation_assist_interrupt + b ret_from_except /* @@ -1721,11 +1737,17 @@ EXC_VIRT_BEGIN(h_doorbell, 0x4e80, 0x20) INT_HANDLER h_doorbell, 0xe80, ool=1, virt=1, hsrr=EXC_HV, bitmask=IRQS_DISABLED, kvm=1 EXC_VIRT_END(h_doorbell, 0x4e80, 0x20) INT_KVM_HANDLER h_doorbell, 0xe80, EXC_HV, PACA_EXGEN, 0 +EXC_COMMON_BEGIN(h_doorbell_common) + INT_COMMON 0xe80, PACA_EXGEN, 1, 1, 1, 0, 0 + FINISH_NAP + RUNLATCH_ON + addi r3,r1,STACK_FRAME_OVERHEAD #ifdef CONFIG_PPC_DOORBELL -EXC_COMMON_ASYNC(h_doorbell_common, 0xe80, doorbell_exception) + bl doorbell_exception #else -EXC_COMMON_ASYNC(h_doorbell_common, 0xe80, unknown_exception) + bl unknown_exception #endif + b ret_from_except_lite EXC_REAL_BEGIN(h_virt_irq, 0xea0, 0x20) @@ -1735,7 +1757,13 @@ EXC_VIRT_BEGIN(h_virt_irq, 0x4ea0, 0x20) INT_HANDLER h_virt_irq, 0xea0, ool=1, virt=1, hsrr=EXC_HV, bitmask=IRQS_DISABLED, kvm=1 EXC_VIRT_END(h_virt_irq, 0x4ea0, 0x20) INT_KVM_HANDLER h_virt_irq, 0xea0, EXC_HV, PACA_EXGEN, 0 -EXC_COMMON_ASYNC(h_virt_irq_common, 0xea0, do_IRQ) +EXC_COMMON_BEGIN(h_virt_irq_common) + INT_COMMON 0xea0, PACA_EXGEN, 1, 1, 1, 0, 0 + FINISH_NAP + RUNLATCH_ON + addi r3,r1,STACK_FRAME_OVERHEAD + bl do_IRQ + b ret_from_except_lite EXC_REAL_NONE(0xec0, 0x20) @@ -1751,7 +1779,13 @@ EXC_VIRT_BEGIN(performance_monitor, 0x4f00, 0x20) INT_HANDLER performance_monitor, 0xf00, ool=1, virt=1, bitmask=IRQS_PMI_DISABLED EXC_VIRT_END(performance_monitor, 0x4f00, 0x20) INT_KVM_HANDLER performance_monitor, 0xf00, EXC_STD, PACA_EXGEN, 0 -EXC_COMMON_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception) +EXC_COMMON_BEGIN(performance_monitor_common) + INT_COMMON 0xf00, PACA_EXGEN, 1, 1, 1, 0, 0 + FINISH_NAP + RUNLATCH_ON + addi r3,r1,STACK_FRAME_OVERHEAD + bl performance_monitor_exception + b ret_from_except_lite EXC_REAL_BEGIN(altivec_unavailable, 0xf20, 0x20) @@ -1842,7 +1876,12 @@ EXC_VIRT_BEGIN(facility_unavailable, 0x4f60, 0x20) INT_HANDLER facility_unavailable, 0xf60, ool=1, virt=1 EXC_VIRT_END(facility_unavailable, 0x4f60, 0x20) INT_KVM_HANDLER facility_unavailable, 0xf60, EXC_STD, PACA_EXGEN, 0 -EXC_COMMON(facility_unavailable_common, 0xf60, facility_unavailable_exception) +EXC_COMMON_BEGIN(facility_unavailable_common) + INT_COMMON 0xf60, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl facility_unavailable_exception + b ret_from_except EXC_REAL_BEGIN(h_facility_unavailable, 0xf80, 0x20) @@ -1852,7 +1891,12 @@ EXC_VIRT_BEGIN(h_facility_unavailable, 0x4f80, 0x20) INT_HANDLER h_facility_unavailable, 0xf80, ool=1, virt=1, hsrr=EXC_HV, kvm=1 EXC_VIRT_END(h_facility_unavailable, 0x4f80, 0x20) INT_KVM_HANDLER h_facility_unavailable, 0xf80, EXC_HV, PACA_EXGEN, 0 -EXC_COMMON(h_facility_unavailable_common, 0xf80, facility_unavailable_exception) +EXC_COMMON_BEGIN(h_facility_unavailable_common) + INT_COMMON 0xf80, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl facility_unavailable_exception + b ret_from_except EXC_REAL_NONE(0xfa0, 0x20) @@ -1873,7 +1917,12 @@ EXC_REAL_BEGIN(cbe_system_error, 0x1200, 0x100) EXC_REAL_END(cbe_system_error, 0x1200, 0x100) EXC_VIRT_NONE(0x5200, 0x100) INT_KVM_HANDLER cbe_system_error, 0x1200, EXC_HV, PACA_EXGEN, 1 -EXC_COMMON(cbe_system_error_common, 0x1200, cbe_system_error_exception) +EXC_COMMON_BEGIN(cbe_system_error_common) + 
INT_COMMON 0x1200, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl cbe_system_error_exception + b ret_from_except #else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1200, 0x100) EXC_VIRT_NONE(0x5200, 0x100) @@ -1887,7 +1936,12 @@ EXC_VIRT_BEGIN(instruction_breakpoint, 0x5300, 0x100) INT_HANDLER instruction_breakpoint, 0x1300, virt=1 EXC_VIRT_END(instruction_breakpoint, 0x5300, 0x100) INT_KVM_HANDLER instruction_breakpoint, 0x1300, EXC_STD, PACA_EXGEN, 1 -EXC_COMMON(instruction_breakpoint_common, 0x1300, instruction_breakpoint_exception) +EXC_COMMON_BEGIN(instruction_breakpoint_common) + INT_COMMON 0x1300, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl instruction_breakpoint_exception + b ret_from_except EXC_REAL_NONE(0x1400, 0x100) @@ -1987,7 +2041,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR) b . #endif -EXC_COMMON(denorm_common, 0x1500, unknown_exception) +EXC_COMMON_BEGIN(denorm_common) + INT_COMMON 0x1500, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl unknown_exception + b ret_from_except #ifdef CONFIG_CBE_RAS @@ -1996,7 +2055,12 @@ EXC_REAL_BEGIN(cbe_maintenance, 0x1600, 0x100) EXC_REAL_END(cbe_maintenance, 0x1600, 0x100) EXC_VIRT_NONE(0x5600, 0x100) INT_KVM_HANDLER cbe_maintenance, 0x1600, EXC_HV, PACA_EXGEN, 1 -EXC_COMMON(cbe_maintenance_common, 0x1600, cbe_maintenance_exception) +EXC_COMMON_BEGIN(cbe_maintenance_common) + INT_COMMON 0x1600, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl cbe_maintenance_exception + b ret_from_except #else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1600, 0x100) EXC_VIRT_NONE(0x5600, 0x100) @@ -2010,11 +2074,16 @@ EXC_VIRT_BEGIN(altivec_assist, 0x5700, 0x100) INT_HANDLER altivec_assist, 0x1700, virt=1 EXC_VIRT_END(altivec_assist, 0x5700, 0x100) INT_KVM_HANDLER altivec_assist, 0x1700, EXC_STD, PACA_EXGEN, 0 +EXC_COMMON_BEGIN(altivec_assist_common) + INT_COMMON 0x1700, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD #ifdef CONFIG_ALTIVEC -EXC_COMMON(altivec_assist_common, 0x1700, altivec_assist_exception) + bl altivec_assist_exception #else -EXC_COMMON(altivec_assist_common, 0x1700, unknown_exception) + bl unknown_exception #endif + b ret_from_except #ifdef CONFIG_CBE_RAS @@ -2023,7 +2092,12 @@ EXC_REAL_BEGIN(cbe_thermal, 0x1800, 0x100) EXC_REAL_END(cbe_thermal, 0x1800, 0x100) EXC_VIRT_NONE(0x5800, 0x100) INT_KVM_HANDLER cbe_thermal, 0x1800, EXC_HV, PACA_EXGEN, 1 -EXC_COMMON(cbe_thermal_common, 0x1800, cbe_thermal_exception) +EXC_COMMON_BEGIN(cbe_thermal_common) + INT_COMMON 0x1800, PACA_EXGEN, 1, 1, 1, 0, 0 + bl save_nvgprs + addi r3,r1,STACK_FRAME_OVERHEAD + bl cbe_thermal_exception + b ret_from_except #else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1800, 0x100) EXC_VIRT_NONE(0x5800, 0x100) From patchwork Tue Nov 12 16:52:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193699 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CDqW6f3Bz9s7T for ; Wed, 13 Nov 2019 04:13:59 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: 
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 05/33] powerpc/64s/exception: Move all interrupt handlers to new style code gen macros
Date: Tue, 12 Nov 2019 17:52:03 +0100
Message-Id: <2af9ddf8f0302cc49ba6bac126c882c1e23a1609.1573576649.git.msuchanek@suse.de>

From: Nicholas Piggin

Aside from label names and BUG line numbers, the generated code change
is an additional HMI KVM handler added for the "late" KVM handler,
because early and late HMI generation is achieved by defining two
different interrupt types.
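To make that one intentional difference concrete, these are the two
parameter blocks (copied from the hunk below) that now describe the HMI
path; both share vector 0xe60, and because each sets IKVM_REAL=1 a KVM
trampoline is generated for each of them, which is where the additional
"late" handler comes from:

	INT_DEFINE_BEGIN(hmi_exception_early)
		IVEC=0xe60
		IHSRR=EXC_HV
		IEARLY=1
		ISTACK=0
		IRECONCILE=0
		IKUAP=0		/* We don't touch AMR here, we never go to virtual mode */
		IKVM_REAL=1
	INT_DEFINE_END(hmi_exception_early)

	INT_DEFINE_BEGIN(hmi_exception)
		IVEC=0xe60
		IHSRR=EXC_HV
		IMASK=IRQS_DISABLED
		IKVM_REAL=1
	INT_DEFINE_END(hmi_exception)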
Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 556 ++++++++++++++++++++------- 1 file changed, 418 insertions(+), 138 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 828fa4df15cf..b5decc9a0cbf 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -206,8 +206,10 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) #define IMASK .L_IMASK_\name\() #define IKVM_SKIP .L_IKVM_SKIP_\name\() #define IKVM_REAL .L_IKVM_REAL_\name\() +#define __IKVM_REAL(name) .L_IKVM_REAL_ ## name #define IKVM_VIRT .L_IKVM_VIRT_\name\() #define ISTACK .L_ISTACK_\name\() +#define __ISTACK(name) .L_ISTACK_ ## name #define IRECONCILE .L_IRECONCILE_\name\() #define IKUAP .L_IKUAP_\name\() @@ -570,7 +572,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) /* nothing more */ .elseif \early mfctr r10 /* save ctr, even for !RELOCATABLE */ - BRANCH_TO_C000(r11, \name\()_early_common) + BRANCH_TO_C000(r11, \name\()_common) .elseif !\virt INT_SAVE_SRR_AND_JUMP \name\()_common, \hsrr, \ri .else @@ -843,6 +845,19 @@ __start_interrupts: EXC_VIRT_NONE(0x4000, 0x100) +INT_DEFINE_BEGIN(system_reset) + IVEC=0x100 + IAREA=PACA_EXNMI + /* + * MSR_RI is not enabled, because PACA_EXNMI and nmi stack is + * being used, so a nested NMI exception would corrupt it. + */ + ISET_RI=0 + ISTACK=0 + IRECONCILE=0 + IKVM_REAL=1 +INT_DEFINE_END(system_reset) + EXC_REAL_BEGIN(system_reset, 0x100, 0x100) #ifdef CONFIG_PPC_P7_NAP /* @@ -880,11 +895,8 @@ BEGIN_FTR_SECTION END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) #endif - INT_HANDLER system_reset, 0x100, area=PACA_EXNMI, ri=0, kvm=1 + GEN_INT_ENTRY system_reset, virt=0 /* - * MSR_RI is not enabled, because PACA_EXNMI and nmi stack is - * being used, so a nested NMI exception would corrupt it. - * * In theory, we should not enable relocation here if it was disabled * in SRR1, because the MMU may not be configured to support it (e.g., * SLB may have been cleared). In practice, there should only be a few @@ -893,7 +905,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) */ EXC_REAL_END(system_reset, 0x100, 0x100) EXC_VIRT_NONE(0x4100, 0x100) -INT_KVM_HANDLER system_reset 0x100, EXC_STD, PACA_EXNMI, 0 +TRAMP_KVM_BEGIN(system_reset_kvm) + GEN_KVM system_reset #ifdef CONFIG_PPC_P7_NAP TRAMP_REAL_BEGIN(system_reset_idle_wake) @@ -908,8 +921,8 @@ TRAMP_REAL_BEGIN(system_reset_idle_wake) * Vectors for the FWNMI option. Share common code. */ TRAMP_REAL_BEGIN(system_reset_fwnmi) - /* See comment at system_reset exception, don't turn on RI */ - INT_HANDLER system_reset, 0x100, area=PACA_EXNMI, ri=0 + __IKVM_REAL(system_reset)=0 + GEN_INT_ENTRY system_reset, virt=0 #endif /* CONFIG_PPC_PSERIES */ @@ -929,7 +942,7 @@ EXC_COMMON_BEGIN(system_reset_common) mr r10,r1 ld r1,PACA_NMI_EMERG_SP(r13) subi r1,r1,INT_FRAME_SIZE - INT_COMMON 0x100, PACA_EXNMI, 0, 1, 0, 0, 0 + GEN_COMMON system_reset bl save_nvgprs /* * Set IRQS_ALL_DISABLED unconditionally so arch_irqs_disabled does @@ -971,23 +984,46 @@ EXC_COMMON_BEGIN(system_reset_common) RFI_TO_USER_OR_KERNEL -EXC_REAL_BEGIN(machine_check, 0x200, 0x100) - INT_HANDLER machine_check, 0x200, early=1, area=PACA_EXMC, dar=1, dsisr=1 +INT_DEFINE_BEGIN(machine_check_early) + IVEC=0x200 + IAREA=PACA_EXMC /* * MSR_RI is not enabled, because PACA_EXMC is being used, so a * nested machine check corrupts it. machine_check_common enables * MSR_RI. 
*/ + ISET_RI=0 + ISTACK=0 + IEARLY=1 + IDAR=1 + IDSISR=1 + IRECONCILE=0 + IKUAP=0 /* We don't touch AMR here, we never go to virtual mode */ +INT_DEFINE_END(machine_check_early) + +INT_DEFINE_BEGIN(machine_check) + IVEC=0x200 + IAREA=PACA_EXMC + ISET_RI=0 + IDAR=1 + IDSISR=1 + IKVM_SKIP=1 + IKVM_REAL=1 +INT_DEFINE_END(machine_check) + +EXC_REAL_BEGIN(machine_check, 0x200, 0x100) + GEN_INT_ENTRY machine_check_early, virt=0 EXC_REAL_END(machine_check, 0x200, 0x100) EXC_VIRT_NONE(0x4200, 0x100) #ifdef CONFIG_PPC_PSERIES TRAMP_REAL_BEGIN(machine_check_fwnmi) /* See comment at machine_check exception, don't turn on RI */ - INT_HANDLER machine_check, 0x200, early=1, area=PACA_EXMC, dar=1, dsisr=1 + GEN_INT_ENTRY machine_check_early, virt=0 #endif -INT_KVM_HANDLER machine_check 0x200, EXC_STD, PACA_EXMC, 1 +TRAMP_KVM_BEGIN(machine_check_kvm) + GEN_KVM machine_check #define MACHINE_CHECK_HANDLER_WINDUP \ /* Clear MSR_RI before setting SRR0 and SRR1. */\ @@ -1039,8 +1075,7 @@ EXC_COMMON_BEGIN(machine_check_early_common) bgt cr1,unrecoverable_mce /* Check if we hit limit of 4 */ subi r1,r1,INT_FRAME_SIZE /* alloc stack frame */ - /* We don't touch AMR here, we never go to virtual mode */ - INT_COMMON 0x200, PACA_EXMC, 0, 0, 0, 1, 1 + GEN_COMMON machine_check_early BEGIN_FTR_SECTION bl enable_machine_check @@ -1128,15 +1163,15 @@ BEGIN_FTR_SECTION mtspr SPRN_CFAR,r10 END_FTR_SECTION_IFSET(CPU_FTR_CFAR) MACHINE_CHECK_HANDLER_WINDUP - /* See comment at machine_check exception, don't turn on RI */ - INT_HANDLER machine_check, 0x200, area=PACA_EXMC, ri=0, dar=1, dsisr=1, kvm=1 + GEN_INT_ENTRY machine_check, virt=0 EXC_COMMON_BEGIN(machine_check_common) /* * Machine check is different because we use a different * save area: PACA_EXMC instead of PACA_EXGEN. */ - INT_COMMON 0x200, PACA_EXMC, 1, 1, 1, 1, 1 + GEN_COMMON machine_check + FINISH_NAP /* Enable MSR_RI when finished with PACA_EXMC */ li r10,MSR_RI @@ -1208,6 +1243,22 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) bl unrecoverable_exception b . + +/** + * 0x300 - Data Storage Interrupt (DSI) + * This interrupt is generated due to a data access which does not have a valid + * page table entry with permissions to allow the data access to be performed. + * DAWR matches also fault here, as do RC updates, and minor misc errors e.g., + * copy/paste, AMO, certain invalid CI accesses, etc. + * + * This interrupt is delivered to the guest (HV bit unchanged). + * + * Linux HPT responds by first attempting to refill the hash table from the + * Linux page table, then going to a full page fault if the Linux page table + * entry was insufficient. RPT goes straight to full page fault. + * + * PR KVM ...? 
+ */ INT_DEFINE_BEGIN(data_access) IVEC=0x300 IDAR=1 @@ -1237,15 +1288,25 @@ MMU_FTR_SECTION_ELSE ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) +INT_DEFINE_BEGIN(data_access_slb) + IVEC=0x380 + IAREA=PACA_EXSLB + IRECONCILE=0 + IDAR=1 + IKVM_SKIP=1 + IKVM_REAL=1 +INT_DEFINE_END(data_access_slb) + EXC_REAL_BEGIN(data_access_slb, 0x380, 0x80) - INT_HANDLER data_access_slb, 0x380, ool=1, area=PACA_EXSLB, dar=1, kvm=1 + GEN_INT_ENTRY data_access_slb, virt=0, ool=1 EXC_REAL_END(data_access_slb, 0x380, 0x80) EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80) - INT_HANDLER data_access_slb, 0x380, virt=1, area=PACA_EXSLB, dar=1 + GEN_INT_ENTRY data_access_slb, virt=1 EXC_VIRT_END(data_access_slb, 0x4380, 0x80) -INT_KVM_HANDLER data_access_slb, 0x380, EXC_STD, PACA_EXSLB, 1 +TRAMP_KVM_BEGIN(data_access_slb_kvm) + GEN_KVM data_access_slb EXC_COMMON_BEGIN(data_access_slb_common) - INT_COMMON 0x380, PACA_EXSLB, 1, 1, 0, 1, 0 + GEN_COMMON data_access_slb ld r4,_DAR(r1) addi r3,r1,STACK_FRAME_OVERHEAD BEGIN_MMU_FTR_SECTION @@ -1269,15 +1330,23 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) b ret_from_except +INT_DEFINE_BEGIN(instruction_access) + IVEC=0x400 + IDAR=2 + IDSISR=2 + IKVM_REAL=1 +INT_DEFINE_END(instruction_access) + EXC_REAL_BEGIN(instruction_access, 0x400, 0x80) - INT_HANDLER instruction_access, 0x400, kvm=1 + GEN_INT_ENTRY instruction_access, virt=0 EXC_REAL_END(instruction_access, 0x400, 0x80) EXC_VIRT_BEGIN(instruction_access, 0x4400, 0x80) - INT_HANDLER instruction_access, 0x400, virt=1 + GEN_INT_ENTRY instruction_access, virt=1 EXC_VIRT_END(instruction_access, 0x4400, 0x80) -INT_KVM_HANDLER instruction_access, 0x400, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(instruction_access_kvm) + GEN_KVM instruction_access EXC_COMMON_BEGIN(instruction_access_common) - INT_COMMON 0x400, PACA_EXGEN, 1, 1, 1, 2, 2 + GEN_COMMON instruction_access ld r4,_DAR(r1) ld r5,_DSISR(r1) BEGIN_MMU_FTR_SECTION @@ -1289,15 +1358,24 @@ MMU_FTR_SECTION_ELSE ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) +INT_DEFINE_BEGIN(instruction_access_slb) + IVEC=0x480 + IAREA=PACA_EXSLB + IRECONCILE=0 + IDAR=2 + IKVM_REAL=1 +INT_DEFINE_END(instruction_access_slb) + EXC_REAL_BEGIN(instruction_access_slb, 0x480, 0x80) - INT_HANDLER instruction_access_slb, 0x480, area=PACA_EXSLB, kvm=1 + GEN_INT_ENTRY instruction_access_slb, virt=0 EXC_REAL_END(instruction_access_slb, 0x480, 0x80) EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80) - INT_HANDLER instruction_access_slb, 0x480, virt=1, area=PACA_EXSLB + GEN_INT_ENTRY instruction_access_slb, virt=1 EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80) -INT_KVM_HANDLER instruction_access_slb, 0x480, EXC_STD, PACA_EXSLB, 0 +TRAMP_KVM_BEGIN(instruction_access_slb_kvm) + GEN_KVM instruction_access_slb EXC_COMMON_BEGIN(instruction_access_slb_common) - INT_COMMON 0x480, PACA_EXSLB, 1, 1, 0, 2, 0 + GEN_COMMON instruction_access_slb ld r4,_DAR(r1) addi r3,r1,STACK_FRAME_OVERHEAD BEGIN_MMU_FTR_SECTION @@ -1320,15 +1398,24 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) bl do_bad_slb_fault b ret_from_except +INT_DEFINE_BEGIN(hardware_interrupt) + IVEC=0x500 + IHSRR=EXC_HV_OR_STD + IMASK=IRQS_DISABLED + IKVM_REAL=1 + IKVM_VIRT=1 +INT_DEFINE_END(hardware_interrupt) + EXC_REAL_BEGIN(hardware_interrupt, 0x500, 0x100) - INT_HANDLER hardware_interrupt, 0x500, hsrr=EXC_HV_OR_STD, bitmask=IRQS_DISABLED, kvm=1 + GEN_INT_ENTRY hardware_interrupt, virt=0 EXC_REAL_END(hardware_interrupt, 0x500, 0x100) EXC_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x100) - INT_HANDLER hardware_interrupt, 0x500, 
virt=1, hsrr=EXC_HV_OR_STD, bitmask=IRQS_DISABLED, kvm=1 + GEN_INT_ENTRY hardware_interrupt, virt=1 EXC_VIRT_END(hardware_interrupt, 0x4500, 0x100) -INT_KVM_HANDLER hardware_interrupt, 0x500, EXC_HV_OR_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(hardware_interrupt_kvm) + GEN_KVM hardware_interrupt EXC_COMMON_BEGIN(hardware_interrupt_common) - INT_COMMON 0x500, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON hardware_interrupt FINISH_NAP RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD @@ -1336,28 +1423,42 @@ EXC_COMMON_BEGIN(hardware_interrupt_common) b ret_from_except_lite +INT_DEFINE_BEGIN(alignment) + IVEC=0x600 + IDAR=1 + IDSISR=1 + IKVM_REAL=1 +INT_DEFINE_END(alignment) + EXC_REAL_BEGIN(alignment, 0x600, 0x100) - INT_HANDLER alignment, 0x600, dar=1, dsisr=1, kvm=1 + GEN_INT_ENTRY alignment, virt=0 EXC_REAL_END(alignment, 0x600, 0x100) EXC_VIRT_BEGIN(alignment, 0x4600, 0x100) - INT_HANDLER alignment, 0x600, virt=1, dar=1, dsisr=1 + GEN_INT_ENTRY alignment, virt=1 EXC_VIRT_END(alignment, 0x4600, 0x100) -INT_KVM_HANDLER alignment, 0x600, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(alignment_kvm) + GEN_KVM alignment EXC_COMMON_BEGIN(alignment_common) - INT_COMMON 0x600, PACA_EXGEN, 1, 1, 1, 1, 1 + GEN_COMMON alignment bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl alignment_exception b ret_from_except +INT_DEFINE_BEGIN(program_check) + IVEC=0x700 + IKVM_REAL=1 +INT_DEFINE_END(program_check) + EXC_REAL_BEGIN(program_check, 0x700, 0x100) - INT_HANDLER program_check, 0x700, kvm=1 + GEN_INT_ENTRY program_check, virt=0 EXC_REAL_END(program_check, 0x700, 0x100) EXC_VIRT_BEGIN(program_check, 0x4700, 0x100) - INT_HANDLER program_check, 0x700, virt=1 + GEN_INT_ENTRY program_check, virt=1 EXC_VIRT_END(program_check, 0x4700, 0x100) -INT_KVM_HANDLER program_check, 0x700, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(program_check_kvm) + GEN_KVM program_check EXC_COMMON_BEGIN(program_check_common) /* * It's possible to receive a TM Bad Thing type program check with @@ -1383,10 +1484,12 @@ EXC_COMMON_BEGIN(program_check_common) mr r10,r1 /* Save r1 */ ld r1,PACAEMERGSP(r13) /* Use emergency stack */ subi r1,r1,INT_FRAME_SIZE /* alloc stack frame */ - INT_COMMON 0x700, PACA_EXGEN, 0, 1, 1, 0, 0 + __ISTACK(program_check)=0 + GEN_COMMON program_check b 3f 2: - INT_COMMON 0x700, PACA_EXGEN, 1, 1, 1, 0, 0 + __ISTACK(program_check)=1 + GEN_COMMON program_check 3: bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD @@ -1394,15 +1497,22 @@ EXC_COMMON_BEGIN(program_check_common) b ret_from_except +INT_DEFINE_BEGIN(fp_unavailable) + IVEC=0x800 + IRECONCILE=0 + IKVM_REAL=1 +INT_DEFINE_END(fp_unavailable) + EXC_REAL_BEGIN(fp_unavailable, 0x800, 0x100) - INT_HANDLER fp_unavailable, 0x800, kvm=1 + GEN_INT_ENTRY fp_unavailable, virt=0 EXC_REAL_END(fp_unavailable, 0x800, 0x100) EXC_VIRT_BEGIN(fp_unavailable, 0x4800, 0x100) - INT_HANDLER fp_unavailable, 0x800, virt=1 + GEN_INT_ENTRY fp_unavailable, virt=1 EXC_VIRT_END(fp_unavailable, 0x4800, 0x100) -INT_KVM_HANDLER fp_unavailable, 0x800, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(fp_unavailable_kvm) + GEN_KVM fp_unavailable EXC_COMMON_BEGIN(fp_unavailable_common) - INT_COMMON 0x800, PACA_EXGEN, 1, 1, 0, 0, 0 + GEN_COMMON fp_unavailable bne 1f /* if from user, just load it up */ bl save_nvgprs RECONCILE_IRQ_STATE(r10, r11) @@ -1432,15 +1542,22 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM) #endif +INT_DEFINE_BEGIN(decrementer) + IVEC=0x900 + IMASK=IRQS_DISABLED + IKVM_REAL=1 +INT_DEFINE_END(decrementer) + EXC_REAL_BEGIN(decrementer, 0x900, 0x80) - INT_HANDLER decrementer, 0x900, ool=1, bitmask=IRQS_DISABLED, kvm=1 + 
GEN_INT_ENTRY decrementer, virt=0, ool=1 EXC_REAL_END(decrementer, 0x900, 0x80) EXC_VIRT_BEGIN(decrementer, 0x4900, 0x80) - INT_HANDLER decrementer, 0x900, virt=1, bitmask=IRQS_DISABLED + GEN_INT_ENTRY decrementer, virt=1 EXC_VIRT_END(decrementer, 0x4900, 0x80) -INT_KVM_HANDLER decrementer, 0x900, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(decrementer_kvm) + GEN_KVM decrementer EXC_COMMON_BEGIN(decrementer_common) - INT_COMMON 0x900, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON decrementer FINISH_NAP RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD @@ -1448,30 +1565,45 @@ EXC_COMMON_BEGIN(decrementer_common) b ret_from_except_lite +INT_DEFINE_BEGIN(hdecrementer) + IVEC=0x980 + IHSRR=EXC_HV + IKVM_REAL=1 + IKVM_VIRT=1 +INT_DEFINE_END(hdecrementer) + EXC_REAL_BEGIN(hdecrementer, 0x980, 0x80) - INT_HANDLER hdecrementer, 0x980, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY hdecrementer, virt=0 EXC_REAL_END(hdecrementer, 0x980, 0x80) EXC_VIRT_BEGIN(hdecrementer, 0x4980, 0x80) - INT_HANDLER hdecrementer, 0x980, virt=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY hdecrementer, virt=1 EXC_VIRT_END(hdecrementer, 0x4980, 0x80) -INT_KVM_HANDLER hdecrementer, 0x980, EXC_HV, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(hdecrementer_kvm) + GEN_KVM hdecrementer EXC_COMMON_BEGIN(hdecrementer_common) - INT_COMMON 0x980, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON hdecrementer bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl hdec_interrupt b ret_from_except +INT_DEFINE_BEGIN(doorbell_super) + IVEC=0xa00 + IMASK=IRQS_DISABLED + IKVM_REAL=1 +INT_DEFINE_END(doorbell_super) + EXC_REAL_BEGIN(doorbell_super, 0xa00, 0x100) - INT_HANDLER doorbell_super, 0xa00, bitmask=IRQS_DISABLED, kvm=1 + GEN_INT_ENTRY doorbell_super, virt=0 EXC_REAL_END(doorbell_super, 0xa00, 0x100) EXC_VIRT_BEGIN(doorbell_super, 0x4a00, 0x100) - INT_HANDLER doorbell_super, 0xa00, virt=1, bitmask=IRQS_DISABLED + GEN_INT_ENTRY doorbell_super, virt=1 EXC_VIRT_END(doorbell_super, 0x4a00, 0x100) -INT_KVM_HANDLER doorbell_super, 0xa00, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(doorbell_super_kvm) + GEN_KVM doorbell_super EXC_COMMON_BEGIN(doorbell_super_common) - INT_COMMON 0xa00, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON doorbell_super FINISH_NAP RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD @@ -1617,30 +1749,47 @@ TRAMP_KVM_BEGIN(system_call_kvm) #endif +INT_DEFINE_BEGIN(single_step) + IVEC=0xd00 + IKVM_REAL=1 +INT_DEFINE_END(single_step) + EXC_REAL_BEGIN(single_step, 0xd00, 0x100) - INT_HANDLER single_step, 0xd00, kvm=1 + GEN_INT_ENTRY single_step, virt=0 EXC_REAL_END(single_step, 0xd00, 0x100) EXC_VIRT_BEGIN(single_step, 0x4d00, 0x100) - INT_HANDLER single_step, 0xd00, virt=1 + GEN_INT_ENTRY single_step, virt=1 EXC_VIRT_END(single_step, 0x4d00, 0x100) -INT_KVM_HANDLER single_step, 0xd00, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(single_step_kvm) + GEN_KVM single_step EXC_COMMON_BEGIN(single_step_common) - INT_COMMON 0xd00, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON single_step bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl single_step_exception b ret_from_except +INT_DEFINE_BEGIN(h_data_storage) + IVEC=0xe00 + IHSRR=EXC_HV + IDAR=1 + IDSISR=1 + IKVM_SKIP=1 + IKVM_REAL=1 + IKVM_VIRT=1 +INT_DEFINE_END(h_data_storage) + EXC_REAL_BEGIN(h_data_storage, 0xe00, 0x20) - INT_HANDLER h_data_storage, 0xe00, ool=1, hsrr=EXC_HV, dar=1, dsisr=1, kvm=1 + GEN_INT_ENTRY h_data_storage, virt=0, ool=1 EXC_REAL_END(h_data_storage, 0xe00, 0x20) EXC_VIRT_BEGIN(h_data_storage, 0x4e00, 0x20) - INT_HANDLER h_data_storage, 0xe00, ool=1, virt=1, hsrr=EXC_HV, dar=1, dsisr=1, kvm=1 + GEN_INT_ENTRY h_data_storage, virt=1, ool=1 
EXC_VIRT_END(h_data_storage, 0x4e00, 0x20) -INT_KVM_HANDLER h_data_storage, 0xe00, EXC_HV, PACA_EXGEN, 1 +TRAMP_KVM_BEGIN(h_data_storage_kvm) + GEN_KVM h_data_storage EXC_COMMON_BEGIN(h_data_storage_common) - INT_COMMON 0xe00, PACA_EXGEN, 1, 1, 1, 1, 1 + GEN_COMMON h_data_storage bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD BEGIN_MMU_FTR_SECTION @@ -1653,30 +1802,46 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_TYPE_RADIX) b ret_from_except +INT_DEFINE_BEGIN(h_instr_storage) + IVEC=0xe20 + IHSRR=EXC_HV + IKVM_REAL=1 + IKVM_VIRT=1 +INT_DEFINE_END(h_instr_storage) + EXC_REAL_BEGIN(h_instr_storage, 0xe20, 0x20) - INT_HANDLER h_instr_storage, 0xe20, ool=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY h_instr_storage, virt=0, ool=1 EXC_REAL_END(h_instr_storage, 0xe20, 0x20) EXC_VIRT_BEGIN(h_instr_storage, 0x4e20, 0x20) - INT_HANDLER h_instr_storage, 0xe20, ool=1, virt=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY h_instr_storage, virt=1, ool=1 EXC_VIRT_END(h_instr_storage, 0x4e20, 0x20) -INT_KVM_HANDLER h_instr_storage, 0xe20, EXC_HV, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(h_instr_storage_kvm) + GEN_KVM h_instr_storage EXC_COMMON_BEGIN(h_instr_storage_common) - INT_COMMON 0xe20, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON h_instr_storage bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl unknown_exception b ret_from_except +INT_DEFINE_BEGIN(emulation_assist) + IVEC=0xe40 + IHSRR=EXC_HV + IKVM_REAL=1 + IKVM_VIRT=1 +INT_DEFINE_END(emulation_assist) + EXC_REAL_BEGIN(emulation_assist, 0xe40, 0x20) - INT_HANDLER emulation_assist, 0xe40, ool=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY emulation_assist, virt=0, ool=1 EXC_REAL_END(emulation_assist, 0xe40, 0x20) EXC_VIRT_BEGIN(emulation_assist, 0x4e40, 0x20) - INT_HANDLER emulation_assist, 0xe40, ool=1, virt=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY emulation_assist, virt=1, ool=1 EXC_VIRT_END(emulation_assist, 0x4e40, 0x20) -INT_KVM_HANDLER emulation_assist, 0xe40, EXC_HV, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(emulation_assist_kvm) + GEN_KVM emulation_assist EXC_COMMON_BEGIN(emulation_assist_common) - INT_COMMON 0xe40, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON emulation_assist bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl emulation_assist_interrupt @@ -1688,11 +1853,32 @@ EXC_COMMON_BEGIN(emulation_assist_common) * first, and then eventaully from there to the trampoline to get into virtual * mode. 
*/ +INT_DEFINE_BEGIN(hmi_exception_early) + IVEC=0xe60 + IHSRR=EXC_HV + IEARLY=1 + ISTACK=0 + IRECONCILE=0 + IKUAP=0 /* We don't touch AMR here, we never go to virtual mode */ + IKVM_REAL=1 +INT_DEFINE_END(hmi_exception_early) + +INT_DEFINE_BEGIN(hmi_exception) + IVEC=0xe60 + IHSRR=EXC_HV + IMASK=IRQS_DISABLED + IKVM_REAL=1 +INT_DEFINE_END(hmi_exception) + EXC_REAL_BEGIN(hmi_exception, 0xe60, 0x20) - INT_HANDLER hmi_exception, 0xe60, ool=1, early=1, hsrr=EXC_HV, ri=0, kvm=1 + GEN_INT_ENTRY hmi_exception_early, virt=0, ool=1 EXC_REAL_END(hmi_exception, 0xe60, 0x20) EXC_VIRT_NONE(0x4e60, 0x20) -INT_KVM_HANDLER hmi_exception, 0xe60, EXC_HV, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(hmi_exception_early_kvm) + GEN_KVM hmi_exception_early +TRAMP_KVM_BEGIN(hmi_exception_kvm) + GEN_KVM hmi_exception + EXC_COMMON_BEGIN(hmi_exception_early_common) mtctr r10 /* Restore ctr */ mfspr r11,SPRN_HSRR0 /* Save HSRR0 */ @@ -1701,8 +1887,7 @@ EXC_COMMON_BEGIN(hmi_exception_early_common) ld r1,PACAEMERGSP(r13) /* Use emergency stack for realmode */ subi r1,r1,INT_FRAME_SIZE /* alloc stack frame */ - /* We don't touch AMR here, we never go to virtual mode */ - INT_COMMON 0xe60, PACA_EXGEN, 0, 0, 0, 0, 0 + GEN_COMMON hmi_exception_early addi r3,r1,STACK_FRAME_OVERHEAD bl hmi_exception_realmode @@ -1718,10 +1903,10 @@ EXC_COMMON_BEGIN(hmi_exception_early_common) * firmware. */ EXCEPTION_RESTORE_REGS EXC_HV - INT_HANDLER hmi_exception, 0xe60, hsrr=EXC_HV, bitmask=IRQS_DISABLED, kvm=1 + GEN_INT_ENTRY hmi_exception, virt=0 EXC_COMMON_BEGIN(hmi_exception_common) - INT_COMMON 0xe60, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON hmi_exception FINISH_NAP RUNLATCH_ON bl save_nvgprs @@ -1730,15 +1915,24 @@ EXC_COMMON_BEGIN(hmi_exception_common) b ret_from_except +INT_DEFINE_BEGIN(h_doorbell) + IVEC=0xe80 + IHSRR=EXC_HV + IMASK=IRQS_DISABLED + IKVM_REAL=1 + IKVM_VIRT=1 +INT_DEFINE_END(h_doorbell) + EXC_REAL_BEGIN(h_doorbell, 0xe80, 0x20) - INT_HANDLER h_doorbell, 0xe80, ool=1, hsrr=EXC_HV, bitmask=IRQS_DISABLED, kvm=1 + GEN_INT_ENTRY h_doorbell, virt=0, ool=1 EXC_REAL_END(h_doorbell, 0xe80, 0x20) EXC_VIRT_BEGIN(h_doorbell, 0x4e80, 0x20) - INT_HANDLER h_doorbell, 0xe80, ool=1, virt=1, hsrr=EXC_HV, bitmask=IRQS_DISABLED, kvm=1 + GEN_INT_ENTRY h_doorbell, virt=1, ool=1 EXC_VIRT_END(h_doorbell, 0x4e80, 0x20) -INT_KVM_HANDLER h_doorbell, 0xe80, EXC_HV, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(h_doorbell_kvm) + GEN_KVM h_doorbell EXC_COMMON_BEGIN(h_doorbell_common) - INT_COMMON 0xe80, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON h_doorbell FINISH_NAP RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD @@ -1750,15 +1944,24 @@ EXC_COMMON_BEGIN(h_doorbell_common) b ret_from_except_lite +INT_DEFINE_BEGIN(h_virt_irq) + IVEC=0xea0 + IHSRR=EXC_HV + IMASK=IRQS_DISABLED + IKVM_REAL=1 + IKVM_VIRT=1 +INT_DEFINE_END(h_virt_irq) + EXC_REAL_BEGIN(h_virt_irq, 0xea0, 0x20) - INT_HANDLER h_virt_irq, 0xea0, ool=1, hsrr=EXC_HV, bitmask=IRQS_DISABLED, kvm=1 + GEN_INT_ENTRY h_virt_irq, virt=0, ool=1 EXC_REAL_END(h_virt_irq, 0xea0, 0x20) EXC_VIRT_BEGIN(h_virt_irq, 0x4ea0, 0x20) - INT_HANDLER h_virt_irq, 0xea0, ool=1, virt=1, hsrr=EXC_HV, bitmask=IRQS_DISABLED, kvm=1 + GEN_INT_ENTRY h_virt_irq, virt=1, ool=1 EXC_VIRT_END(h_virt_irq, 0x4ea0, 0x20) -INT_KVM_HANDLER h_virt_irq, 0xea0, EXC_HV, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(h_virt_irq_kvm) + GEN_KVM h_virt_irq EXC_COMMON_BEGIN(h_virt_irq_common) - INT_COMMON 0xea0, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON h_virt_irq FINISH_NAP RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD @@ -1772,15 +1975,22 @@ EXC_REAL_NONE(0xee0, 0x20) EXC_VIRT_NONE(0x4ee0, 0x20) 
+INT_DEFINE_BEGIN(performance_monitor) + IVEC=0xf00 + IMASK=IRQS_PMI_DISABLED + IKVM_REAL=1 +INT_DEFINE_END(performance_monitor) + EXC_REAL_BEGIN(performance_monitor, 0xf00, 0x20) - INT_HANDLER performance_monitor, 0xf00, ool=1, bitmask=IRQS_PMI_DISABLED, kvm=1 + GEN_INT_ENTRY performance_monitor, virt=0, ool=1 EXC_REAL_END(performance_monitor, 0xf00, 0x20) EXC_VIRT_BEGIN(performance_monitor, 0x4f00, 0x20) - INT_HANDLER performance_monitor, 0xf00, ool=1, virt=1, bitmask=IRQS_PMI_DISABLED + GEN_INT_ENTRY performance_monitor, virt=1, ool=1 EXC_VIRT_END(performance_monitor, 0x4f00, 0x20) -INT_KVM_HANDLER performance_monitor, 0xf00, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(performance_monitor_kvm) + GEN_KVM performance_monitor EXC_COMMON_BEGIN(performance_monitor_common) - INT_COMMON 0xf00, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON performance_monitor FINISH_NAP RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD @@ -1788,15 +1998,22 @@ EXC_COMMON_BEGIN(performance_monitor_common) b ret_from_except_lite +INT_DEFINE_BEGIN(altivec_unavailable) + IVEC=0xf20 + IRECONCILE=0 + IKVM_REAL=1 +INT_DEFINE_END(altivec_unavailable) + EXC_REAL_BEGIN(altivec_unavailable, 0xf20, 0x20) - INT_HANDLER altivec_unavailable, 0xf20, ool=1, kvm=1 + GEN_INT_ENTRY altivec_unavailable, virt=0, ool=1 EXC_REAL_END(altivec_unavailable, 0xf20, 0x20) EXC_VIRT_BEGIN(altivec_unavailable, 0x4f20, 0x20) - INT_HANDLER altivec_unavailable, 0xf20, ool=1, virt=1 + GEN_INT_ENTRY altivec_unavailable, virt=1, ool=1 EXC_VIRT_END(altivec_unavailable, 0x4f20, 0x20) -INT_KVM_HANDLER altivec_unavailable, 0xf20, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(altivec_unavailable_kvm) + GEN_KVM altivec_unavailable EXC_COMMON_BEGIN(altivec_unavailable_common) - INT_COMMON 0xf20, PACA_EXGEN, 1, 1, 0, 0, 0 + GEN_COMMON altivec_unavailable #ifdef CONFIG_ALTIVEC BEGIN_FTR_SECTION beq 1f @@ -1829,15 +2046,22 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) b ret_from_except +INT_DEFINE_BEGIN(vsx_unavailable) + IVEC=0xf40 + IRECONCILE=0 + IKVM_REAL=1 +INT_DEFINE_END(vsx_unavailable) + EXC_REAL_BEGIN(vsx_unavailable, 0xf40, 0x20) - INT_HANDLER vsx_unavailable, 0xf40, ool=1, kvm=1 + GEN_INT_ENTRY vsx_unavailable, virt=0, ool=1 EXC_REAL_END(vsx_unavailable, 0xf40, 0x20) EXC_VIRT_BEGIN(vsx_unavailable, 0x4f40, 0x20) - INT_HANDLER vsx_unavailable, 0xf40, ool=1, virt=1 + GEN_INT_ENTRY vsx_unavailable, virt=1, ool=1 EXC_VIRT_END(vsx_unavailable, 0x4f40, 0x20) -INT_KVM_HANDLER vsx_unavailable, 0xf40, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(vsx_unavailable_kvm) + GEN_KVM vsx_unavailable EXC_COMMON_BEGIN(vsx_unavailable_common) - INT_COMMON 0xf40, PACA_EXGEN, 1, 1, 0, 0, 0 + GEN_COMMON vsx_unavailable #ifdef CONFIG_VSX BEGIN_FTR_SECTION beq 1f @@ -1869,30 +2093,44 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX) b ret_from_except +INT_DEFINE_BEGIN(facility_unavailable) + IVEC=0xf60 + IKVM_REAL=1 +INT_DEFINE_END(facility_unavailable) + EXC_REAL_BEGIN(facility_unavailable, 0xf60, 0x20) - INT_HANDLER facility_unavailable, 0xf60, ool=1, kvm=1 + GEN_INT_ENTRY facility_unavailable, virt=0, ool=1 EXC_REAL_END(facility_unavailable, 0xf60, 0x20) EXC_VIRT_BEGIN(facility_unavailable, 0x4f60, 0x20) - INT_HANDLER facility_unavailable, 0xf60, ool=1, virt=1 + GEN_INT_ENTRY facility_unavailable, virt=1, ool=1 EXC_VIRT_END(facility_unavailable, 0x4f60, 0x20) -INT_KVM_HANDLER facility_unavailable, 0xf60, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(facility_unavailable_kvm) + GEN_KVM facility_unavailable EXC_COMMON_BEGIN(facility_unavailable_common) - INT_COMMON 0xf60, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON 
facility_unavailable bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl facility_unavailable_exception b ret_from_except +INT_DEFINE_BEGIN(h_facility_unavailable) + IVEC=0xf80 + IHSRR=EXC_HV + IKVM_REAL=1 + IKVM_VIRT=1 +INT_DEFINE_END(h_facility_unavailable) + EXC_REAL_BEGIN(h_facility_unavailable, 0xf80, 0x20) - INT_HANDLER h_facility_unavailable, 0xf80, ool=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY h_facility_unavailable, virt=0, ool=1 EXC_REAL_END(h_facility_unavailable, 0xf80, 0x20) EXC_VIRT_BEGIN(h_facility_unavailable, 0x4f80, 0x20) - INT_HANDLER h_facility_unavailable, 0xf80, ool=1, virt=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY h_facility_unavailable, virt=1, ool=1 EXC_VIRT_END(h_facility_unavailable, 0x4f80, 0x20) -INT_KVM_HANDLER h_facility_unavailable, 0xf80, EXC_HV, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(h_facility_unavailable_kvm) + GEN_KVM h_facility_unavailable EXC_COMMON_BEGIN(h_facility_unavailable_common) - INT_COMMON 0xf80, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON h_facility_unavailable bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl facility_unavailable_exception @@ -1912,13 +2150,21 @@ EXC_REAL_NONE(0x1100, 0x100) EXC_VIRT_NONE(0x5100, 0x100) #ifdef CONFIG_CBE_RAS +INT_DEFINE_BEGIN(cbe_system_error) + IVEC=0x1200 + IHSRR=EXC_HV + IKVM_SKIP=1 + IKVM_REAL=1 +INT_DEFINE_END(cbe_system_error) + EXC_REAL_BEGIN(cbe_system_error, 0x1200, 0x100) - INT_HANDLER cbe_system_error, 0x1200, ool=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY cbe_system_error, virt=0 EXC_REAL_END(cbe_system_error, 0x1200, 0x100) EXC_VIRT_NONE(0x5200, 0x100) -INT_KVM_HANDLER cbe_system_error, 0x1200, EXC_HV, PACA_EXGEN, 1 +TRAMP_KVM_BEGIN(cbe_system_error_kvm) + GEN_KVM cbe_system_error EXC_COMMON_BEGIN(cbe_system_error_common) - INT_COMMON 0x1200, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON cbe_system_error bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl cbe_system_error_exception @@ -1929,15 +2175,22 @@ EXC_VIRT_NONE(0x5200, 0x100) #endif +INT_DEFINE_BEGIN(instruction_breakpoint) + IVEC=0x1300 + IKVM_SKIP=1 + IKVM_REAL=1 +INT_DEFINE_END(instruction_breakpoint) + EXC_REAL_BEGIN(instruction_breakpoint, 0x1300, 0x100) - INT_HANDLER instruction_breakpoint, 0x1300, kvm=1 + GEN_INT_ENTRY instruction_breakpoint, virt=0 EXC_REAL_END(instruction_breakpoint, 0x1300, 0x100) EXC_VIRT_BEGIN(instruction_breakpoint, 0x5300, 0x100) - INT_HANDLER instruction_breakpoint, 0x1300, virt=1 + GEN_INT_ENTRY instruction_breakpoint, virt=1 EXC_VIRT_END(instruction_breakpoint, 0x5300, 0x100) -INT_KVM_HANDLER instruction_breakpoint, 0x1300, EXC_STD, PACA_EXGEN, 1 +TRAMP_KVM_BEGIN(instruction_breakpoint_kvm) + GEN_KVM instruction_breakpoint EXC_COMMON_BEGIN(instruction_breakpoint_common) - INT_COMMON 0x1300, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON instruction_breakpoint bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl instruction_breakpoint_exception @@ -1947,30 +2200,35 @@ EXC_COMMON_BEGIN(instruction_breakpoint_common) EXC_REAL_NONE(0x1400, 0x100) EXC_VIRT_NONE(0x5400, 0x100) -EXC_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x100) - INT_HANDLER denorm_exception_hv, 0x1500, early=2, hsrr=EXC_HV +INT_DEFINE_BEGIN(denorm_exception) + IVEC=0x1500 + IHSRR=EXC_HV + IEARLY=2 +INT_DEFINE_END(denorm_exception) + +EXC_REAL_BEGIN(denorm_exception, 0x1500, 0x100) + GEN_INT_ENTRY denorm_exception, virt=0 #ifdef CONFIG_PPC_DENORMALISATION mfspr r10,SPRN_HSRR1 andis. r10,r10,(HSRR1_DENORM)@h /* denorm? 
*/ bne+ denorm_assist #endif - KVMTEST denorm_exception_hv, EXC_HV 0x1500 - INT_SAVE_SRR_AND_JUMP denorm_common, EXC_HV, 1 -EXC_REAL_END(denorm_exception_hv, 0x1500, 0x100) - + KVMTEST denorm_exception, EXC_HV, 0x1500 + INT_SAVE_SRR_AND_JUMP denorm_exception_common, EXC_HV, 1 +EXC_REAL_END(denorm_exception, 0x1500, 0x100) #ifdef CONFIG_PPC_DENORMALISATION EXC_VIRT_BEGIN(denorm_exception, 0x5500, 0x100) - INT_HANDLER denorm_exception, 0x1500, 0, 2, 1, EXC_HV, PACA_EXGEN, 1, 0, 0, 0, 0 + GEN_INT_ENTRY denorm_exception, virt=1 mfspr r10,SPRN_HSRR1 andis. r10,r10,(HSRR1_DENORM)@h /* denorm? */ bne+ denorm_assist - INT_VIRT_SAVE_SRR_AND_JUMP denorm_common, EXC_HV + INT_VIRT_SAVE_SRR_AND_JUMP denorm_exception_common, EXC_HV EXC_VIRT_END(denorm_exception, 0x5500, 0x100) #else EXC_VIRT_NONE(0x5500, 0x100) #endif - -INT_KVM_HANDLER denorm_exception_hv, 0x1500, EXC_HV, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(denorm_exception_kvm) + GEN_KVM denorm_exception #ifdef CONFIG_PPC_DENORMALISATION TRAMP_REAL_BEGIN(denorm_assist) @@ -2041,8 +2299,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR) b . #endif -EXC_COMMON_BEGIN(denorm_common) - INT_COMMON 0x1500, PACA_EXGEN, 1, 1, 1, 0, 0 +EXC_COMMON_BEGIN(denorm_exception_common) + GEN_COMMON denorm_exception bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl unknown_exception @@ -2050,13 +2308,21 @@ EXC_COMMON_BEGIN(denorm_common) #ifdef CONFIG_CBE_RAS +INT_DEFINE_BEGIN(cbe_maintenance) + IVEC=0x1600 + IHSRR=EXC_HV + IKVM_SKIP=1 + IKVM_REAL=1 +INT_DEFINE_END(cbe_maintenance) + EXC_REAL_BEGIN(cbe_maintenance, 0x1600, 0x100) - INT_HANDLER cbe_maintenance, 0x1600, ool=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY cbe_maintenance, virt=0 EXC_REAL_END(cbe_maintenance, 0x1600, 0x100) EXC_VIRT_NONE(0x5600, 0x100) -INT_KVM_HANDLER cbe_maintenance, 0x1600, EXC_HV, PACA_EXGEN, 1 +TRAMP_KVM_BEGIN(cbe_maintenance_kvm) + GEN_KVM cbe_maintenance EXC_COMMON_BEGIN(cbe_maintenance_common) - INT_COMMON 0x1600, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON cbe_maintenance bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl cbe_maintenance_exception @@ -2067,15 +2333,21 @@ EXC_VIRT_NONE(0x5600, 0x100) #endif +INT_DEFINE_BEGIN(altivec_assist) + IVEC=0x1700 + IKVM_REAL=1 +INT_DEFINE_END(altivec_assist) + EXC_REAL_BEGIN(altivec_assist, 0x1700, 0x100) - INT_HANDLER altivec_assist, 0x1700, kvm=1 + GEN_INT_ENTRY altivec_assist, virt=0 EXC_REAL_END(altivec_assist, 0x1700, 0x100) EXC_VIRT_BEGIN(altivec_assist, 0x5700, 0x100) - INT_HANDLER altivec_assist, 0x1700, virt=1 + GEN_INT_ENTRY altivec_assist, virt=1 EXC_VIRT_END(altivec_assist, 0x5700, 0x100) -INT_KVM_HANDLER altivec_assist, 0x1700, EXC_STD, PACA_EXGEN, 0 +TRAMP_KVM_BEGIN(altivec_assist_kvm) + GEN_KVM altivec_assist EXC_COMMON_BEGIN(altivec_assist_common) - INT_COMMON 0x1700, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON altivec_assist bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD #ifdef CONFIG_ALTIVEC @@ -2087,13 +2359,21 @@ EXC_COMMON_BEGIN(altivec_assist_common) #ifdef CONFIG_CBE_RAS +INT_DEFINE_BEGIN(cbe_thermal) + IVEC=0x1800 + IHSRR=EXC_HV + IKVM_SKIP=1 + IKVM_REAL=1 +INT_DEFINE_END(cbe_thermal) + EXC_REAL_BEGIN(cbe_thermal, 0x1800, 0x100) - INT_HANDLER cbe_thermal, 0x1800, ool=1, hsrr=EXC_HV, kvm=1 + GEN_INT_ENTRY cbe_thermal, virt=0 EXC_REAL_END(cbe_thermal, 0x1800, 0x100) EXC_VIRT_NONE(0x5800, 0x100) -INT_KVM_HANDLER cbe_thermal, 0x1800, EXC_HV, PACA_EXGEN, 1 +TRAMP_KVM_BEGIN(cbe_thermal_kvm) + GEN_KVM cbe_thermal EXC_COMMON_BEGIN(cbe_thermal_common) - INT_COMMON 0x1800, PACA_EXGEN, 1, 1, 1, 0, 0 + GEN_COMMON cbe_thermal bl save_nvgprs addi 
r3,r1,STACK_FRAME_OVERHEAD bl cbe_thermal_exception From patchwork Tue Nov 12 16:52:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193706 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CDtQ6ztnz9s7T for ; Wed, 13 Nov 2019 04:16:30 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CDtQ2CtHzF5Fq for ; Wed, 13 Nov 2019 04:16:30 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDMp2mDkzF18B for ; Wed, 13 Nov 2019 03:53:26 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 4D72FB312; Tue, 12 Nov 2019 16:53:23 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 06/33] powerpc/64s/exception: Remove old INT_ENTRY macro Date: Tue, 12 Nov 2019 17:52:04 +0100 Message-Id: <419bffb38de391d451c9395d10ceeb906e2578e1.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. 
Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 68 ++++++++++++---------------- 1 file changed, 30 insertions(+), 38 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index b5decc9a0cbf..ba2dcd91aaaf 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -482,13 +482,13 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) * - Fall through and continue executing in real, unrelocated mode. * This is done if early=2. */ -.macro INT_HANDLER name, vec, ool=0, early=0, virt=0, hsrr=0, area=PACA_EXGEN, ri=1, dar=0, dsisr=0, bitmask=0, kvm=0 +.macro GEN_INT_ENTRY name, virt, ool=0 SET_SCRATCH0(r13) /* save r13 */ GET_PACA(r13) - std r9,\area\()+EX_R9(r13) /* save r9 */ + std r9,IAREA+EX_R9(r13) /* save r9 */ OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR) HMT_MEDIUM - std r10,\area\()+EX_R10(r13) /* save r10 - r12 */ + std r10,IAREA+EX_R10(r13) /* save r10 - r12 */ OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR) .if \ool .if !\virt @@ -502,47 +502,47 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .endif .endif - OPT_SAVE_REG_TO_PACA(\area\()+EX_PPR, r9, CPU_FTR_HAS_PPR) - OPT_SAVE_REG_TO_PACA(\area\()+EX_CFAR, r10, CPU_FTR_CFAR) + OPT_SAVE_REG_TO_PACA(IAREA+EX_PPR, r9, CPU_FTR_HAS_PPR) + OPT_SAVE_REG_TO_PACA(IAREA+EX_CFAR, r10, CPU_FTR_CFAR) INTERRUPT_TO_KERNEL - SAVE_CTR(r10, \area\()) + SAVE_CTR(r10, IAREA) mfcr r9 - .if \kvm - KVMTEST \name \hsrr \vec + .if (!\virt && IKVM_REAL) || (\virt && IKVM_VIRT) + KVMTEST \name IHSRR IVEC .endif - .if \bitmask + .if IMASK lbz r10,PACAIRQSOFTMASK(r13) - andi. r10,r10,\bitmask + andi. r10,r10,IMASK /* Associate vector numbers with bits in paca->irq_happened */ - .if \vec == 0x500 || \vec == 0xea0 + .if IVEC == 0x500 || IVEC == 0xea0 li r10,PACA_IRQ_EE - .elseif \vec == 0x900 + .elseif IVEC == 0x900 li r10,PACA_IRQ_DEC - .elseif \vec == 0xa00 || \vec == 0xe80 + .elseif IVEC == 0xa00 || IVEC == 0xe80 li r10,PACA_IRQ_DBELL - .elseif \vec == 0xe60 + .elseif IVEC == 0xe60 li r10,PACA_IRQ_HMI - .elseif \vec == 0xf00 + .elseif IVEC == 0xf00 li r10,PACA_IRQ_PMI .else .abort "Bad maskable vector" .endif - .if \hsrr == EXC_HV_OR_STD + .if IHSRR == EXC_HV_OR_STD BEGIN_FTR_SECTION bne masked_Hinterrupt FTR_SECTION_ELSE bne masked_interrupt ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif \hsrr + .elseif IHSRR bne masked_Hinterrupt .else bne masked_interrupt .endif .endif - std r11,\area\()+EX_R11(r13) - std r12,\area\()+EX_R12(r13) + std r11,IAREA+EX_R11(r13) + std r12,IAREA+EX_R12(r13) /* * DAR/DSISR, SCRATCH0 must be read before setting MSR[RI], @@ -550,47 +550,39 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) * not recoverable if they are live. 
*/ GET_SCRATCH0(r10) - std r10,\area\()+EX_R13(r13) - .if \dar == 1 - .if \hsrr + std r10,IAREA+EX_R13(r13) + .if IDAR == 1 + .if IHSRR mfspr r10,SPRN_HDAR .else mfspr r10,SPRN_DAR .endif - std r10,\area\()+EX_DAR(r13) + std r10,IAREA+EX_DAR(r13) .endif - .if \dsisr == 1 - .if \hsrr + .if IDSISR == 1 + .if IHSRR mfspr r10,SPRN_HDSISR .else mfspr r10,SPRN_DSISR .endif - stw r10,\area\()+EX_DSISR(r13) + stw r10,IAREA+EX_DSISR(r13) .endif - .if \early == 2 + .if IEARLY == 2 /* nothing more */ - .elseif \early + .elseif IEARLY mfctr r10 /* save ctr, even for !RELOCATABLE */ BRANCH_TO_C000(r11, \name\()_common) .elseif !\virt - INT_SAVE_SRR_AND_JUMP \name\()_common, \hsrr, \ri + INT_SAVE_SRR_AND_JUMP \name\()_common, IHSRR, ISET_RI .else - INT_VIRT_SAVE_SRR_AND_JUMP \name\()_common, \hsrr + INT_VIRT_SAVE_SRR_AND_JUMP \name\()_common, IHSRR .endif .if \ool .popsection .endif .endm -.macro GEN_INT_ENTRY name, virt, ool=0 - .if ! \virt - INT_HANDLER \name, IVEC, \ool, IEARLY, \virt, IHSRR, IAREA, ISET_RI, IDAR, IDSISR, IMASK, IKVM_REAL - .else - INT_HANDLER \name, IVEC, \ool, IEARLY, \virt, IHSRR, IAREA, ISET_RI, IDAR, IDSISR, IMASK, IKVM_VIRT - .endif -.endm - /* * On entry r13 points to the paca, r9-r13 are saved in the paca, * r9 contains the saved CR, r11 and r12 contain the saved SRR0 and From patchwork Tue Nov 12 16:52:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193708 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CF1K6klVz9s7T for ; Wed, 13 Nov 2019 04:22:29 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CF1K5GLYzF5JS for ; Wed, 13 Nov 2019 04:22:29 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDMq6XsYzF1Hp for ; Wed, 13 Nov 2019 03:53:27 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id CD9A9B326; Tue, 12 Nov 2019 16:53:24 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 07/33] powerpc/64s/exception: Remove old INT_COMMON macro Date: Tue, 12 Nov 2019 17:52:05 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko 
Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 51 +++++++++++++--------------- 1 file changed, 24 insertions(+), 27 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index ba2dcd91aaaf..f318869607db 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -591,8 +591,8 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) * If stack=0, then the stack is already set in r1, and r1 is saved in r10. * PPR save and CPU accounting is not done for the !stack case (XXX why not?) */ -.macro INT_COMMON vec, area, stack, kaup, reconcile, dar, dsisr - .if \stack +.macro GEN_COMMON name + .if ISTACK andi. r10,r12,MSR_PR /* See if coming from user */ mr r10,r1 /* Save r1 */ subi r1,r1,INT_FRAME_SIZE /* alloc frame on kernel stack */ @@ -609,54 +609,54 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) std r0,GPR0(r1) /* save r0 in stackframe */ std r10,GPR1(r1) /* save r1 in stackframe */ - .if \stack - .if \kaup + .if ISTACK + .if IKUAP kuap_save_amr_and_lock r9, r10, cr1, cr0 .endif beq 101f /* if from kernel mode */ ACCOUNT_CPU_USER_ENTRY(r13, r9, r10) - SAVE_PPR(\area, r9) + SAVE_PPR(IAREA, r9) 101: .else - .if \kaup + .if IKUAP kuap_save_amr_and_lock r9, r10, cr1 .endif .endif /* Save original regs values from save area to stack frame. 
*/ - ld r9,\area+EX_R9(r13) /* move r9, r10 to stackframe */ - ld r10,\area+EX_R10(r13) + ld r9,IAREA+EX_R9(r13) /* move r9, r10 to stackframe */ + ld r10,IAREA+EX_R10(r13) std r9,GPR9(r1) std r10,GPR10(r1) - ld r9,\area+EX_R11(r13) /* move r11 - r13 to stackframe */ - ld r10,\area+EX_R12(r13) - ld r11,\area+EX_R13(r13) + ld r9,IAREA+EX_R11(r13) /* move r11 - r13 to stackframe */ + ld r10,IAREA+EX_R12(r13) + ld r11,IAREA+EX_R13(r13) std r9,GPR11(r1) std r10,GPR12(r1) std r11,GPR13(r1) - .if \dar - .if \dar == 2 + .if IDAR + .if IDAR == 2 ld r10,_NIP(r1) .else - ld r10,\area+EX_DAR(r13) + ld r10,IAREA+EX_DAR(r13) .endif std r10,_DAR(r1) .endif - .if \dsisr - .if \dsisr == 2 + .if IDSISR + .if IDSISR == 2 ld r10,_MSR(r1) lis r11,DSISR_SRR1_MATCH_64S@h and r10,r10,r11 .else - lwz r10,\area+EX_DSISR(r13) + lwz r10,IAREA+EX_DSISR(r13) .endif std r10,_DSISR(r1) .endif BEGIN_FTR_SECTION_NESTED(66) - ld r10,\area+EX_CFAR(r13) + ld r10,IAREA+EX_CFAR(r13) std r10,ORIG_GPR3(r1) END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66) - GET_CTR(r10, \area) + GET_CTR(r10, IAREA) std r10,_CTR(r1) std r2,GPR2(r1) /* save r2 in stackframe */ SAVE_4GPRS(3, r1) /* save r3 - r6 in stackframe */ @@ -668,26 +668,22 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66) mfspr r11,SPRN_XER /* save XER in stackframe */ std r10,SOFTE(r1) std r11,_XER(r1) - li r9,(\vec)+1 + li r9,(IVEC)+1 std r9,_TRAP(r1) /* set trap number */ li r10,0 ld r11,exception_marker@toc(r2) std r10,RESULT(r1) /* clear regs->result */ std r11,STACK_FRAME_OVERHEAD-16(r1) /* mark the frame */ - .if \stack + .if ISTACK ACCOUNT_STOLEN_TIME .endif - .if \reconcile + .if IRECONCILE RECONCILE_IRQ_STATE(r10, r11) .endif .endm -.macro GEN_COMMON name - INT_COMMON IVEC, IAREA, ISTACK, IKUAP, IRECONCILE, IDAR, IDSISR -.endm - /* * Restore all registers including H/SRR0/1 saved in a stack frame of a * standard exception. 
@@ -2400,7 +2396,8 @@ EXC_COMMON_BEGIN(soft_nmi_common) mr r10,r1 ld r1,PACAEMERGSP(r13) subi r1,r1,INT_FRAME_SIZE - INT_COMMON 0x900, PACA_EXGEN, 0, 1, 1, 0, 0 + __ISTACK(decrementer)=0 + GEN_COMMON decrementer bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl soft_nmi_interrupt From patchwork Tue Nov 12 16:52:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193707 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CDy55SPbz9sP4 for ; Wed, 13 Nov 2019 04:19:41 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CDy46srDzF4pG for ; Wed, 13 Nov 2019 04:19:40 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDMs2kjmzF1Nv for ; Wed, 13 Nov 2019 03:53:29 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 57C1AB090; Tue, 12 Nov 2019 16:53:26 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 08/33] powerpc/64s/exception: Remove old INT_KVM_HANDLER Date: Tue, 12 Nov 2019 17:52:06 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. 
Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 55 +++++++++++++--------------- 1 file changed, 26 insertions(+), 29 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index f318869607db..bef0c2eee7dc 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -266,15 +266,6 @@ do_define_int n .endif .endm -.macro INT_KVM_HANDLER name, vec, hsrr, area, skip - TRAMP_KVM_BEGIN(\name\()_kvm) - KVM_HANDLER \vec, \hsrr, \area, \skip -.endm - -.macro GEN_KVM name - KVM_HANDLER IVEC, IHSRR, IAREA, IKVM_SKIP -.endm - #ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE /* @@ -293,35 +284,35 @@ do_define_int n bne \name\()_kvm .endm -.macro KVM_HANDLER vec, hsrr, area, skip - .if \skip +.macro GEN_KVM name + .if IKVM_SKIP cmpwi r10,KVM_GUEST_MODE_SKIP beq 89f .else BEGIN_FTR_SECTION_NESTED(947) - ld r10,\area+EX_CFAR(r13) + ld r10,IAREA+EX_CFAR(r13) std r10,HSTATE_CFAR(r13) END_FTR_SECTION_NESTED(CPU_FTR_CFAR,CPU_FTR_CFAR,947) .endif BEGIN_FTR_SECTION_NESTED(948) - ld r10,\area+EX_PPR(r13) + ld r10,IAREA+EX_PPR(r13) std r10,HSTATE_PPR(r13) END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) - ld r10,\area+EX_R10(r13) + ld r10,IAREA+EX_R10(r13) std r12,HSTATE_SCRATCH0(r13) sldi r12,r9,32 /* HSRR variants have the 0x2 bit added to their trap number */ - .if \hsrr == EXC_HV_OR_STD + .if IHSRR == EXC_HV_OR_STD BEGIN_FTR_SECTION - ori r12,r12,(\vec + 0x2) + ori r12,r12,(IVEC + 0x2) FTR_SECTION_ELSE - ori r12,r12,(\vec) + ori r12,r12,(IVEC) ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif \hsrr - ori r12,r12,(\vec + 0x2) + .elseif IHSRR + ori r12,r12,(IVEC+ 0x2) .else - ori r12,r12,(\vec) + ori r12,r12,(IVEC) .endif #ifdef CONFIG_RELOCATABLE @@ -334,25 +325,25 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) std r9,HSTATE_SCRATCH1(r13) __LOAD_FAR_HANDLER(r9, kvmppc_interrupt) mtctr r9 - ld r9,\area+EX_R9(r13) + ld r9,IAREA+EX_R9(r13) bctr #else - ld r9,\area+EX_R9(r13) + ld r9,IAREA+EX_R9(r13) b kvmppc_interrupt #endif - .if \skip + .if IKVM_SKIP 89: mtocrf 0x80,r9 - ld r9,\area+EX_R9(r13) - ld r10,\area+EX_R10(r13) - .if \hsrr == EXC_HV_OR_STD + ld r9,IAREA+EX_R9(r13) + ld r10,IAREA+EX_R10(r13) + .if IHSRR == EXC_HV_OR_STD BEGIN_FTR_SECTION b kvmppc_skip_Hinterrupt FTR_SECTION_ELSE b kvmppc_skip_interrupt ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif \hsrr + .elseif IHSRR b kvmppc_skip_Hinterrupt .else b kvmppc_skip_interrupt @@ -363,7 +354,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) #else .macro KVMTEST name, hsrr, n .endm -.macro KVM_HANDLER name, vec, hsrr, area, skip +.macro GEN_KVM name .endm #endif @@ -1640,6 +1631,12 @@ EXC_VIRT_NONE(0x4b00, 0x100) * without saving, though xer is not a good idea to use, as hardware may * interpret some bits so it may be costly to change them. 
*/ +INT_DEFINE_BEGIN(system_call) + IVEC=0xc00 + IKVM_REAL=1 + IKVM_VIRT=1 +INT_DEFINE_END(system_call) + .macro SYSTEM_CALL virt #ifdef CONFIG_KVM_BOOK3S_64_HANDLER /* @@ -1733,7 +1730,7 @@ TRAMP_KVM_BEGIN(system_call_kvm) SET_SCRATCH0(r10) std r9,PACA_EXGEN+EX_R9(r13) mfcr r9 - KVM_HANDLER 0xc00, EXC_STD, PACA_EXGEN, 0 + GEN_KVM system_call #endif From patchwork Tue Nov 12 16:52:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193710 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CF4Q704vz9sP4 for ; Wed, 13 Nov 2019 04:25:10 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CF4Q5q0qzF4rv for ; Wed, 13 Nov 2019 04:25:10 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDMt6cRbzF1Rb for ; Wed, 13 Nov 2019 03:53:30 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id D442DAEE1; Tue, 12 Nov 2019 16:53:27 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 09/33] powerpc/64s/exception: Add ISIDE option Date: Tue, 12 Nov 2019 17:52:07 +0100 Message-Id: <4936975fdc328f95a57c5db632588f17c8544fd5.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin Rather than using DAR=2 to select the i-side registers, add an explicit option. 
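A hand-resolved sketch (illustrative only, not generated output): for an i-side interrupt such as instruction_access, which the diff below redefines with IISIDE=1, IDAR=1, IDSISR=1, the common code now fills _DAR/_DSISR from the saved SRR0/SRR1 values rather than reading SPRN_DAR/SPRN_DSISR. With the .if IISIDE conditionals resolved by hand (comments added here), that path is roughly:

	ld	r10,_NIP(r1)			/* i-side: fault address is the saved SRR0 */
	std	r10,_DAR(r1)
	ld	r10,_MSR(r1)			/* i-side: status bits come from the saved SRR1 */
	lis	r11,DSISR_SRR1_MATCH_64S@h
	and	r10,r10,r11
	std	r10,_DSISR(r1)

The previous encoding spelled the same choice as IDAR=2/IDSISR=2, overloading the register-save flags with a source selector.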
Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 23 ++++++++++++++++------- 1 file changed, 16 insertions(+), 7 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index bef0c2eee7dc..b8588618cdc3 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -199,6 +199,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) #define IVEC .L_IVEC_\name\() #define IHSRR .L_IHSRR_\name\() #define IAREA .L_IAREA_\name\() +#define IISIDE .L_IISIDE_\name\() #define IDAR .L_IDAR_\name\() #define IDSISR .L_IDSISR_\name\() #define ISET_RI .L_ISET_RI_\name\() @@ -231,6 +232,9 @@ do_define_int n .ifndef IAREA IAREA=PACA_EXGEN .endif + .ifndef IISIDE + IISIDE=0 + .endif .ifndef IDAR IDAR=0 .endif @@ -542,7 +546,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) */ GET_SCRATCH0(r10) std r10,IAREA+EX_R13(r13) - .if IDAR == 1 + .if IDAR && !IISIDE .if IHSRR mfspr r10,SPRN_HDAR .else @@ -550,7 +554,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .endif std r10,IAREA+EX_DAR(r13) .endif - .if IDSISR == 1 + .if IDSISR && !IISIDE .if IHSRR mfspr r10,SPRN_HDSISR .else @@ -625,16 +629,18 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) std r9,GPR11(r1) std r10,GPR12(r1) std r11,GPR13(r1) + .if IDAR - .if IDAR == 2 + .if IISIDE ld r10,_NIP(r1) .else ld r10,IAREA+EX_DAR(r13) .endif std r10,_DAR(r1) .endif + .if IDSISR - .if IDSISR == 2 + .if IISIDE ld r10,_MSR(r1) lis r11,DSISR_SRR1_MATCH_64S@h and r10,r10,r11 @@ -643,6 +649,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .endif std r10,_DSISR(r1) .endif + BEGIN_FTR_SECTION_NESTED(66) ld r10,IAREA+EX_CFAR(r13) std r10,ORIG_GPR3(r1) @@ -1311,8 +1318,9 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) INT_DEFINE_BEGIN(instruction_access) IVEC=0x400 - IDAR=2 - IDSISR=2 + IISIDE=1 + IDAR=1 + IDSISR=1 IKVM_REAL=1 INT_DEFINE_END(instruction_access) @@ -1341,7 +1349,8 @@ INT_DEFINE_BEGIN(instruction_access_slb) IVEC=0x480 IAREA=PACA_EXSLB IRECONCILE=0 - IDAR=2 + IISIDE=1 + IDAR=1 IKVM_REAL=1 INT_DEFINE_END(instruction_access_slb) From patchwork Tue Nov 12 16:52:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193711 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CF7T57Vkz9sNx for ; Wed, 13 Nov 2019 04:27:49 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CF7S665tzF5LS for ; Wed, 13 Nov 2019 04:27:48 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client 
certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDMw2wn7zF1Hp for ; Wed, 13 Nov 2019 03:53:32 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 580B3B365; Tue, 12 Nov 2019 16:53:29 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 10/33] powerpc/64s/exception: move real->virt switch into the common handler Date: Tue, 12 Nov 2019 17:52:08 +0100 Message-Id: <7bc75059055afe1b10aaf5c3c06af4f4a4b60eb4.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin The real mode interrupt entry points currently use rfid to branch to the common handler in virtual mode. This is a significant amount of code, and forces other code (notably the KVM test) to live in the real mode handler. In the interest of minimising the amount of code that runs unrelocated move the switch to virt mode into the common code, and do it with mtmsrd, which avoids clobbering SRRs (although the post-KVMTEST performance of real-mode interrupt handlers is not a big concern these days). This requires CTR to always be saved (real-mode needs to reach 0xc...) but that's not a huge impact these days. It could be optimized away in future. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/exception-64s.h | 4 - arch/powerpc/kernel/exceptions-64s.S | 251 ++++++++++------------- 2 files changed, 109 insertions(+), 146 deletions(-) diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h index 33f4f72eb035..47bd4ea0837d 100644 --- a/arch/powerpc/include/asm/exception-64s.h +++ b/arch/powerpc/include/asm/exception-64s.h @@ -33,11 +33,7 @@ #include /* PACA save area size in u64 units (exgen, exmc, etc) */ -#if defined(CONFIG_RELOCATABLE) #define EX_SIZE 10 -#else -#define EX_SIZE 9 -#endif /* * maximum recursive depth of MCE exceptions diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index b8588618cdc3..5803ce3b9404 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -32,16 +32,10 @@ #define EX_CCR 52 #define EX_CFAR 56 #define EX_PPR 64 -#if defined(CONFIG_RELOCATABLE) #define EX_CTR 72 .if EX_SIZE != 10 .error "EX_SIZE is wrong" .endif -#else -.if EX_SIZE != 9 - .error "EX_SIZE is wrong" -.endif -#endif /* * Following are fixed section helper macros. 
@@ -124,22 +118,6 @@ name: #define EXC_HV 1 #define EXC_STD 0 -#if defined(CONFIG_RELOCATABLE) -/* - * If we support interrupts with relocation on AND we're a relocatable kernel, - * we need to use CTR to get to the 2nd level handler. So, save/restore it - * when required. - */ -#define SAVE_CTR(reg, area) mfctr reg ; std reg,area+EX_CTR(r13) -#define GET_CTR(reg, area) ld reg,area+EX_CTR(r13) -#define RESTORE_CTR(reg, area) ld reg,area+EX_CTR(r13) ; mtctr reg -#else -/* ...else CTR is unused and in register. */ -#define SAVE_CTR(reg, area) -#define GET_CTR(reg, area) mfctr reg -#define RESTORE_CTR(reg, area) -#endif - /* * PPR save/restore macros used in exceptions-64s.S * Used for P7 or later processors @@ -199,6 +177,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) #define IVEC .L_IVEC_\name\() #define IHSRR .L_IHSRR_\name\() #define IAREA .L_IAREA_\name\() +#define IVIRT .L_IVIRT_\name\() #define IISIDE .L_IISIDE_\name\() #define IDAR .L_IDAR_\name\() #define IDSISR .L_IDSISR_\name\() @@ -232,6 +211,9 @@ do_define_int n .ifndef IAREA IAREA=PACA_EXGEN .endif + .ifndef IVIRT + IVIRT=1 + .endif .ifndef IISIDE IISIDE=0 .endif @@ -325,7 +307,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) * outside the head section. CONFIG_RELOCATABLE KVM expects CTR * to be saved in HSTATE_SCRATCH1. */ - mfctr r9 + ld r9,IAREA+EX_CTR(r13) std r9,HSTATE_SCRATCH1(r13) __LOAD_FAR_HANDLER(r9, kvmppc_interrupt) mtctr r9 @@ -362,101 +344,6 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .endm #endif -.macro INT_SAVE_SRR_AND_JUMP label, hsrr, set_ri - ld r10,PACAKMSR(r13) /* get MSR value for kernel */ - .if ! \set_ri - xori r10,r10,MSR_RI /* Clear MSR_RI */ - .endif - .if \hsrr == EXC_HV_OR_STD - BEGIN_FTR_SECTION - mfspr r11,SPRN_HSRR0 /* save HSRR0 */ - mfspr r12,SPRN_HSRR1 /* and HSRR1 */ - mtspr SPRN_HSRR1,r10 - FTR_SECTION_ELSE - mfspr r11,SPRN_SRR0 /* save SRR0 */ - mfspr r12,SPRN_SRR1 /* and SRR1 */ - mtspr SPRN_SRR1,r10 - ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif \hsrr - mfspr r11,SPRN_HSRR0 /* save HSRR0 */ - mfspr r12,SPRN_HSRR1 /* and HSRR1 */ - mtspr SPRN_HSRR1,r10 - .else - mfspr r11,SPRN_SRR0 /* save SRR0 */ - mfspr r12,SPRN_SRR1 /* and SRR1 */ - mtspr SPRN_SRR1,r10 - .endif - LOAD_HANDLER(r10, \label\()) - .if \hsrr == EXC_HV_OR_STD - BEGIN_FTR_SECTION - mtspr SPRN_HSRR0,r10 - HRFI_TO_KERNEL - FTR_SECTION_ELSE - mtspr SPRN_SRR0,r10 - RFI_TO_KERNEL - ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif \hsrr - mtspr SPRN_HSRR0,r10 - HRFI_TO_KERNEL - .else - mtspr SPRN_SRR0,r10 - RFI_TO_KERNEL - .endif - b . 
/* prevent speculative execution */ -.endm - -/* INT_SAVE_SRR_AND_JUMP works for real or virt, this is faster but virt only */ -.macro INT_VIRT_SAVE_SRR_AND_JUMP label, hsrr -#ifdef CONFIG_RELOCATABLE - .if \hsrr == EXC_HV_OR_STD - BEGIN_FTR_SECTION - mfspr r11,SPRN_HSRR0 /* save HSRR0 */ - FTR_SECTION_ELSE - mfspr r11,SPRN_SRR0 /* save SRR0 */ - ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif \hsrr - mfspr r11,SPRN_HSRR0 /* save HSRR0 */ - .else - mfspr r11,SPRN_SRR0 /* save SRR0 */ - .endif - LOAD_HANDLER(r12, \label\()) - mtctr r12 - .if \hsrr == EXC_HV_OR_STD - BEGIN_FTR_SECTION - mfspr r12,SPRN_HSRR1 /* and HSRR1 */ - FTR_SECTION_ELSE - mfspr r12,SPRN_SRR1 /* and HSRR1 */ - ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif \hsrr - mfspr r12,SPRN_HSRR1 /* and HSRR1 */ - .else - mfspr r12,SPRN_SRR1 /* and HSRR1 */ - .endif - li r10,MSR_RI - mtmsrd r10,1 /* Set RI (EE=0) */ - bctr -#else - .if \hsrr == EXC_HV_OR_STD - BEGIN_FTR_SECTION - mfspr r11,SPRN_HSRR0 /* save HSRR0 */ - mfspr r12,SPRN_HSRR1 /* and HSRR1 */ - FTR_SECTION_ELSE - mfspr r11,SPRN_SRR0 /* save SRR0 */ - mfspr r12,SPRN_SRR1 /* and SRR1 */ - ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif \hsrr - mfspr r11,SPRN_HSRR0 /* save HSRR0 */ - mfspr r12,SPRN_HSRR1 /* and HSRR1 */ - .else - mfspr r11,SPRN_SRR0 /* save SRR0 */ - mfspr r12,SPRN_SRR1 /* and SRR1 */ - .endif - li r10,MSR_RI - mtmsrd r10,1 /* Set RI (EE=0) */ - b \label -#endif -.endm - /* * This is the BOOK3S interrupt entry code macro. * @@ -477,6 +364,23 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) * - Fall through and continue executing in real, unrelocated mode. * This is done if early=2. */ + +.macro GEN_BRANCH_TO_COMMON name, virt + .if \virt +#ifndef CONFIG_RELOCATABLE + b \name\()_common_virt +#else + LOAD_HANDLER(r10, \name\()_common_virt) + mtctr r10 + bctr +#endif + .else + LOAD_HANDLER(r10, \name\()_common_real) + mtctr r10 + bctr + .endif +.endm + .macro GEN_INT_ENTRY name, virt, ool=0 SET_SCRATCH0(r13) /* save r13 */ GET_PACA(r13) @@ -500,8 +404,10 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) OPT_SAVE_REG_TO_PACA(IAREA+EX_PPR, r9, CPU_FTR_HAS_PPR) OPT_SAVE_REG_TO_PACA(IAREA+EX_CFAR, r10, CPU_FTR_CFAR) INTERRUPT_TO_KERNEL - SAVE_CTR(r10, IAREA) + mfctr r10 + std r10,IAREA+EX_CTR(r13) mfcr r9 + .if (!\virt && IKVM_REAL) || (\virt && IKVM_VIRT) KVMTEST \name IHSRR IVEC .endif @@ -566,27 +472,58 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .if IEARLY == 2 /* nothing more */ .elseif IEARLY - mfctr r10 /* save ctr, even for !RELOCATABLE */ BRANCH_TO_C000(r11, \name\()_common) - .elseif !\virt - INT_SAVE_SRR_AND_JUMP \name\()_common, IHSRR, ISET_RI .else - INT_VIRT_SAVE_SRR_AND_JUMP \name\()_common, IHSRR + .if IHSRR == EXC_HV_OR_STD + BEGIN_FTR_SECTION + mfspr r11,SPRN_HSRR0 /* save HSRR0 */ + mfspr r12,SPRN_HSRR1 /* and HSRR1 */ + FTR_SECTION_ELSE + mfspr r11,SPRN_SRR0 /* save SRR0 */ + mfspr r12,SPRN_SRR1 /* and SRR1 */ + ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) + .elseif IHSRR + mfspr r11,SPRN_HSRR0 /* save HSRR0 */ + mfspr r12,SPRN_HSRR1 /* and HSRR1 */ + .else + mfspr r11,SPRN_SRR0 /* save SRR0 */ + mfspr r12,SPRN_SRR1 /* and SRR1 */ .endif + GEN_BRANCH_TO_COMMON \name \virt + .endif + .if \ool .popsection .endif .endm /* - * On entry r13 points to the paca, r9-r13 are saved in the paca, - * r9 contains the saved CR, r11 and r12 contain the saved SRR0 and - * SRR1, and relocation is on. 
- * - * If stack=0, then the stack is already set in r1, and r1 is saved in r10. - * PPR save and CPU accounting is not done for the !stack case (XXX why not?) + * __GEN_COMMON_ENTRY is required to receive the branch from interrupt + * entry, except in the case of the IEARLY handlers. + * This switches to virtual mode and sets MSR[RI]. */ -.macro GEN_COMMON name +.macro __GEN_COMMON_ENTRY name +DEFINE_FIXED_SYMBOL(\name\()_common_real) +\name\()_common_real: + ld r10,PACAKMSR(r13) /* get MSR value for kernel */ + /* MSR[RI] is clear iff using SRR regs */ + .if IHSRR == EXC_HV_OR_STD + BEGIN_FTR_SECTION + xori r10,r10,MSR_RI + END_FTR_SECTION_IFCLR(CPU_FTR_HVMODE) + .elseif ! IHSRR + xori r10,r10,MSR_RI + .endif + mtmsrd r10 + + .if IVIRT + .balign IFETCH_ALIGN_BYTES +DEFINE_FIXED_SYMBOL(\name\()_common_virt) +\name\()_common_virt: + .endif /* IVIRT */ +.endm + +.macro __GEN_COMMON_BODY name .if ISTACK andi. r10,r12,MSR_PR /* See if coming from user */ mr r10,r1 /* Save r1 */ @@ -604,6 +541,11 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) std r0,GPR0(r1) /* save r0 in stackframe */ std r10,GPR1(r1) /* save r1 in stackframe */ + .if ISET_RI + li r10,MSR_RI + mtmsrd r10,1 /* Set MSR_RI */ + .endif + .if ISTACK .if IKUAP kuap_save_amr_and_lock r9, r10, cr1, cr0 @@ -654,7 +596,7 @@ BEGIN_FTR_SECTION_NESTED(66) ld r10,IAREA+EX_CFAR(r13) std r10,ORIG_GPR3(r1) END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66) - GET_CTR(r10, IAREA) + ld r10,IAREA+EX_CTR(r13) std r10,_CTR(r1) std r2,GPR2(r1) /* save r2 in stackframe */ SAVE_4GPRS(3, r1) /* save r3 - r6 in stackframe */ @@ -682,6 +624,19 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66) .endif .endm +/* + * On entry r13 points to the paca, r9-r13 are saved in the paca, + * r9 contains the saved CR, r11 and r12 contain the saved SRR0 and + * SRR1, and relocation is on. + * + * If stack=0, then the stack is already set in r1, and r1 is saved in r10. + * PPR save and CPU accounting is not done for the !stack case (XXX why not?) + */ +.macro GEN_COMMON name + __GEN_COMMON_ENTRY \name + __GEN_COMMON_BODY \name +.endm + /* * Restore all registers including H/SRR0/1 saved in a stack frame of a * standard exception. @@ -834,6 +789,7 @@ EXC_VIRT_NONE(0x4000, 0x100) INT_DEFINE_BEGIN(system_reset) IVEC=0x100 IAREA=PACA_EXNMI + IVIRT=0 /* no virt entry point */ /* * MSR_RI is not enabled, because PACA_EXNMI and nmi stack is * being used, so a nested NMI exception would corrupt it. @@ -913,6 +869,7 @@ TRAMP_REAL_BEGIN(system_reset_fwnmi) #endif /* CONFIG_PPC_PSERIES */ EXC_COMMON_BEGIN(system_reset_common) + __GEN_COMMON_ENTRY system_reset /* * Increment paca->in_nmi then enable MSR_RI. SLB or MCE will be able * to recover, but nested NMI will notice in_nmi and not recover @@ -928,7 +885,7 @@ EXC_COMMON_BEGIN(system_reset_common) mr r10,r1 ld r1,PACA_NMI_EMERG_SP(r13) subi r1,r1,INT_FRAME_SIZE - GEN_COMMON system_reset + __GEN_COMMON_BODY system_reset bl save_nvgprs /* * Set IRQS_ALL_DISABLED unconditionally so arch_irqs_disabled does @@ -973,6 +930,7 @@ EXC_COMMON_BEGIN(system_reset_common) INT_DEFINE_BEGIN(machine_check_early) IVEC=0x200 IAREA=PACA_EXMC + IVIRT=0 /* no virt entry point */ /* * MSR_RI is not enabled, because PACA_EXMC is being used, so a * nested machine check corrupts it. 
machine_check_common enables @@ -990,6 +948,7 @@ INT_DEFINE_END(machine_check_early) INT_DEFINE_BEGIN(machine_check) IVEC=0x200 IAREA=PACA_EXMC + IVIRT=0 /* no virt entry point */ ISET_RI=0 IDAR=1 IDSISR=1 @@ -1022,7 +981,6 @@ TRAMP_KVM_BEGIN(machine_check_kvm) EXCEPTION_RESTORE_REGS EXC_STD EXC_COMMON_BEGIN(machine_check_early_common) - mtctr r10 /* Restore ctr */ mfspr r11,SPRN_SRR0 mfspr r12,SPRN_SRR1 @@ -1061,7 +1019,7 @@ EXC_COMMON_BEGIN(machine_check_early_common) bgt cr1,unrecoverable_mce /* Check if we hit limit of 4 */ subi r1,r1,INT_FRAME_SIZE /* alloc stack frame */ - GEN_COMMON machine_check_early + __GEN_COMMON_BODY machine_check_early BEGIN_FTR_SECTION bl enable_machine_check @@ -1448,6 +1406,8 @@ EXC_VIRT_END(program_check, 0x4700, 0x100) TRAMP_KVM_BEGIN(program_check_kvm) GEN_KVM program_check EXC_COMMON_BEGIN(program_check_common) + __GEN_COMMON_ENTRY program_check + /* * It's possible to receive a TM Bad Thing type program check with * userspace register values (in particular r1), but with SRR1 reporting @@ -1473,11 +1433,11 @@ EXC_COMMON_BEGIN(program_check_common) ld r1,PACAEMERGSP(r13) /* Use emergency stack */ subi r1,r1,INT_FRAME_SIZE /* alloc stack frame */ __ISTACK(program_check)=0 - GEN_COMMON program_check + __GEN_COMMON_BODY program_check b 3f 2: __ISTACK(program_check)=1 - GEN_COMMON program_check + __GEN_COMMON_BODY program_check 3: bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD @@ -1874,14 +1834,13 @@ TRAMP_KVM_BEGIN(hmi_exception_kvm) GEN_KVM hmi_exception EXC_COMMON_BEGIN(hmi_exception_early_common) - mtctr r10 /* Restore ctr */ mfspr r11,SPRN_HSRR0 /* Save HSRR0 */ mfspr r12,SPRN_HSRR1 /* Save HSRR1 */ mr r10,r1 /* Save r1 */ ld r1,PACAEMERGSP(r13) /* Use emergency stack for realmode */ subi r1,r1,INT_FRAME_SIZE /* alloc stack frame */ - GEN_COMMON hmi_exception_early + __GEN_COMMON_BODY hmi_exception_early addi r3,r1,STACK_FRAME_OVERHEAD bl hmi_exception_realmode @@ -2208,7 +2167,9 @@ EXC_REAL_BEGIN(denorm_exception, 0x1500, 0x100) bne+ denorm_assist #endif KVMTEST denorm_exception, EXC_HV, 0x1500 - INT_SAVE_SRR_AND_JUMP denorm_exception_common, EXC_HV, 1 + mfspr r11,SPRN_HSRR0 + mfspr r12,SPRN_HSRR1 + GEN_BRANCH_TO_COMMON denorm_exception, virt=0 EXC_REAL_END(denorm_exception, 0x1500, 0x100) #ifdef CONFIG_PPC_DENORMALISATION EXC_VIRT_BEGIN(denorm_exception, 0x5500, 0x100) @@ -2216,7 +2177,9 @@ EXC_VIRT_BEGIN(denorm_exception, 0x5500, 0x100) mfspr r10,SPRN_HSRR1 andis. r10,r10,(HSRR1_DENORM)@h /* denorm? */ bne+ denorm_assist - INT_VIRT_SAVE_SRR_AND_JUMP denorm_exception_common, EXC_HV + mfspr r11,SPRN_HSRR0 + mfspr r12,SPRN_HSRR1 + GEN_BRANCH_TO_COMMON denorm_exception, virt=1 EXC_VIRT_END(denorm_exception, 0x5500, 0x100) #else EXC_VIRT_NONE(0x5500, 0x100) @@ -2387,7 +2350,11 @@ EXC_VIRT_NONE(0x5800, 0x100) std r12,PACA_EXGEN+EX_R12(r13); \ GET_SCRATCH0(r10); \ std r10,PACA_EXGEN+EX_R13(r13); \ - INT_SAVE_SRR_AND_JUMP soft_nmi_common, _H, 1 + mfspr r11,SPRN_SRR0; /* save SRR0 */ \ + mfspr r12,SPRN_SRR1; /* and SRR1 */ \ + LOAD_HANDLER(r10, soft_nmi_common); \ + mtctr r10; \ + bctr /* * Branch to soft_nmi_interrupt using the emergency stack. 
The emergency @@ -2403,7 +2370,7 @@ EXC_COMMON_BEGIN(soft_nmi_common) ld r1,PACAEMERGSP(r13) subi r1,r1,INT_FRAME_SIZE __ISTACK(decrementer)=0 - GEN_COMMON decrementer + __GEN_COMMON_BODY decrementer bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl soft_nmi_interrupt From patchwork Tue Nov 12 16:52:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193712 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFB52KWgz9sNx for ; Wed, 13 Nov 2019 04:30:05 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFB503jFzF4lp for ; Wed, 13 Nov 2019 04:30:05 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDMy2kBCzF1lZ for ; Wed, 13 Nov 2019 03:53:33 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id D1940B089; Tue, 12 Nov 2019 16:53:30 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 11/33] powerpc/64s/exception: move soft-mask test to common code Date: Tue, 12 Nov 2019 17:52:09 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin As well as moving code out of the unrelocated vectors, this allows the masked handlers to be moved to common code, and allows the soft_nmi handler to be generated more like a regular handler. 
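For reference, a hand-resolved sketch (illustrative only, comments added here) of the soft-mask test that now sits at the top of __GEN_COMMON_BODY, using an interrupt defined with IVEC=0xea0, IHSRR=EXC_HV and IMASK=IRQS_DISABLED (h_virt_irq, as converted earlier in the series):

	lbz	r10,PACAIRQSOFTMASK(r13)	/* current soft-mask state */
	andi.	r10,r10,IRQS_DISABLED		/* is this class currently masked? */
	li	r10,PACA_IRQ_EE			/* irq_happened bit for 0xea0; li leaves CR0 intact */
	bne	masked_Hinterrupt		/* record as pending and return */

Because this test now runs from common code after the switch to virtual mode, the masked handler bodies no longer need to be reachable from the real-mode vectors, which appears to be why the MASKED_INTERRUPT expansions can move from the virt_trampolines fixed section to USE_TEXT_SECTION() at the end of the diff.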
Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 106 +++++++++++++-------------- 1 file changed, 49 insertions(+), 57 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 5803ce3b9404..fbc3fbb293f7 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -411,36 +411,6 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .if (!\virt && IKVM_REAL) || (\virt && IKVM_VIRT) KVMTEST \name IHSRR IVEC .endif - .if IMASK - lbz r10,PACAIRQSOFTMASK(r13) - andi. r10,r10,IMASK - /* Associate vector numbers with bits in paca->irq_happened */ - .if IVEC == 0x500 || IVEC == 0xea0 - li r10,PACA_IRQ_EE - .elseif IVEC == 0x900 - li r10,PACA_IRQ_DEC - .elseif IVEC == 0xa00 || IVEC == 0xe80 - li r10,PACA_IRQ_DBELL - .elseif IVEC == 0xe60 - li r10,PACA_IRQ_HMI - .elseif IVEC == 0xf00 - li r10,PACA_IRQ_PMI - .else - .abort "Bad maskable vector" - .endif - - .if IHSRR == EXC_HV_OR_STD - BEGIN_FTR_SECTION - bne masked_Hinterrupt - FTR_SECTION_ELSE - bne masked_interrupt - ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif IHSRR - bne masked_Hinterrupt - .else - bne masked_interrupt - .endif - .endif std r11,IAREA+EX_R11(r13) std r12,IAREA+EX_R12(r13) @@ -524,6 +494,37 @@ DEFINE_FIXED_SYMBOL(\name\()_common_virt) .endm .macro __GEN_COMMON_BODY name + .if IMASK + lbz r10,PACAIRQSOFTMASK(r13) + andi. r10,r10,IMASK + /* Associate vector numbers with bits in paca->irq_happened */ + .if IVEC == 0x500 || IVEC == 0xea0 + li r10,PACA_IRQ_EE + .elseif IVEC == 0x900 + li r10,PACA_IRQ_DEC + .elseif IVEC == 0xa00 || IVEC == 0xe80 + li r10,PACA_IRQ_DBELL + .elseif IVEC == 0xe60 + li r10,PACA_IRQ_HMI + .elseif IVEC == 0xf00 + li r10,PACA_IRQ_PMI + .else + .abort "Bad maskable vector" + .endif + + .if IHSRR == EXC_HV_OR_STD + BEGIN_FTR_SECTION + bne masked_Hinterrupt + FTR_SECTION_ELSE + bne masked_interrupt + ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) + .elseif IHSRR + bne masked_Hinterrupt + .else + bne masked_interrupt + .endif + .endif + .if ISTACK andi. r10,r12,MSR_PR /* See if coming from user */ mr r10,r1 /* Save r1 */ @@ -2343,18 +2344,10 @@ EXC_VIRT_NONE(0x5800, 0x100) #ifdef CONFIG_PPC_WATCHDOG -#define MASKED_DEC_HANDLER_LABEL 3f - -#define MASKED_DEC_HANDLER(_H) \ -3: /* soft-nmi */ \ - std r12,PACA_EXGEN+EX_R12(r13); \ - GET_SCRATCH0(r10); \ - std r10,PACA_EXGEN+EX_R13(r13); \ - mfspr r11,SPRN_SRR0; /* save SRR0 */ \ - mfspr r12,SPRN_SRR1; /* and SRR1 */ \ - LOAD_HANDLER(r10, soft_nmi_common); \ - mtctr r10; \ - bctr +INT_DEFINE_BEGIN(soft_nmi) + IVEC=0x900 + ISTACK=0 +INT_DEFINE_END(soft_nmi) /* * Branch to soft_nmi_interrupt using the emergency stack. The emergency @@ -2366,19 +2359,16 @@ EXC_VIRT_NONE(0x5800, 0x100) * and run it entirely with interrupts hard disabled. 
*/ EXC_COMMON_BEGIN(soft_nmi_common) + mfspr r11,SPRN_SRR0 mr r10,r1 ld r1,PACAEMERGSP(r13) subi r1,r1,INT_FRAME_SIZE - __ISTACK(decrementer)=0 - __GEN_COMMON_BODY decrementer + __GEN_COMMON_BODY soft_nmi bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl soft_nmi_interrupt b ret_from_except -#else /* CONFIG_PPC_WATCHDOG */ -#define MASKED_DEC_HANDLER_LABEL 2f /* normal return */ -#define MASKED_DEC_HANDLER(_H) #endif /* CONFIG_PPC_WATCHDOG */ /* @@ -2397,7 +2387,6 @@ masked_Hinterrupt: .else masked_interrupt: .endif - std r11,PACA_EXGEN+EX_R11(r13) lbz r11,PACAIRQHAPPENED(r13) or r11,r11,r10 stb r11,PACAIRQHAPPENED(r13) @@ -2406,26 +2395,30 @@ masked_interrupt: lis r10,0x7fff ori r10,r10,0xffff mtspr SPRN_DEC,r10 - b MASKED_DEC_HANDLER_LABEL +#ifdef CONFIG_PPC_WATCHDOG + b soft_nmi_common +#else + b 2f +#endif 1: andi. r10,r10,PACA_IRQ_MUST_HARD_MASK beq 2f + xori r12,r12,MSR_EE /* clear MSR_EE */ .if \hsrr - mfspr r10,SPRN_HSRR1 - xori r10,r10,MSR_EE /* clear MSR_EE */ - mtspr SPRN_HSRR1,r10 + mtspr SPRN_HSRR1,r12 .else - mfspr r10,SPRN_SRR1 - xori r10,r10,MSR_EE /* clear MSR_EE */ - mtspr SPRN_SRR1,r10 + mtspr SPRN_SRR1,r12 .endif ori r11,r11,PACA_IRQ_HARD_DIS stb r11,PACAIRQHAPPENED(r13) 2: /* done */ + ld r10,PACA_EXGEN+EX_CTR(r13) + mtctr r10 mtcrf 0x80,r9 std r1,PACAR1(r13) ld r9,PACA_EXGEN+EX_R9(r13) ld r10,PACA_EXGEN+EX_R10(r13) ld r11,PACA_EXGEN+EX_R11(r13) + ld r12,PACA_EXGEN+EX_R12(r13) /* returns to kernel where r13 must be set up, so don't restore it */ .if \hsrr HRFI_TO_KERNEL @@ -2433,7 +2426,6 @@ masked_interrupt: RFI_TO_KERNEL .endif b . - MASKED_DEC_HANDLER(\hsrr\()) .endm TRAMP_REAL_BEGIN(stf_barrier_fallback) @@ -2540,7 +2532,7 @@ TRAMP_REAL_BEGIN(hrfi_flush_fallback) * instruction code patches (which end up in the common .text area) * cannot reach these if they are put there. 
*/ -USE_FIXED_SECTION(virt_trampolines) +USE_TEXT_SECTION() MASKED_INTERRUPT EXC_STD MASKED_INTERRUPT EXC_HV From patchwork Tue Nov 12 16:52:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193713 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFDm48Tkz9sNx for ; Wed, 13 Nov 2019 04:32:24 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFDl6s3dzF5Lt for ; Wed, 13 Nov 2019 04:32:23 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDMz3SjYzF1Hp for ; Wed, 13 Nov 2019 03:53:35 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 52FB3B127; Tue, 12 Nov 2019 16:53:32 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 12/33] powerpc/64s/exception: move KVM test to common code Date: Tue, 12 Nov 2019 17:52:10 +0100 Message-Id: <2f3a10c0e5538097ed4285e44ba8be0c9ae305cc.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin This allows more code to be moved out of unrelocated regions. The system call KVMTEST is changed to be open-coded and remain in the tramp area to avoid having to move it to entry_64.S. The custom nature of the system call entry code means the hcall case can be made more streamlined than regular interrupt handlers. 
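For illustration, the decision the KVM test makes can be modelled in a few lines of C (a rough sketch, not kernel code; the in_guest flag below is a stand-in for the paca's HSTATE_IN_GUEST byte). The patch does not change this decision, only where it is taken: in the common handler entry rather than the unrelocated real-mode stub.

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the per-CPU paca state consulted by KVMTEST. */
    struct cpu_state {
        bool in_guest;              /* models HSTATE_IN_GUEST != 0 */
    };

    /*
     * Model of the dispatch decision: an interrupt taken while a guest was
     * running must be routed to the KVM exit path, otherwise the regular
     * Linux handler runs.  Moving this test out of the real-mode entry stub
     * shrinks the code that has to live in the unrelocated region.
     */
    static void dispatch(const struct cpu_state *cpu, unsigned int vec)
    {
        if (cpu->in_guest)
            printf("0x%x: branch to kvmppc_interrupt (guest exit)\n", vec);
        else
            printf("0x%x: run the normal kernel handler\n", vec);
    }

    int main(void)
    {
        struct cpu_state cpu = { .in_guest = true };
        dispatch(&cpu, 0xc00);      /* hcall arriving from a guest */
        cpu.in_guest = false;
        dispatch(&cpu, 0x300);      /* ordinary data storage interrupt */
        return 0;
    }

The system call path keeps an open-coded version of this test in the trampoline area for the same reason described above: the hcall case wants to divert before the regular entry machinery runs.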
Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 239 ++++++++++++------------ arch/powerpc/kvm/book3s_hv_rmhandlers.S | 11 -- arch/powerpc/kvm/book3s_segment.S | 7 - 3 files changed, 119 insertions(+), 138 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index fbc3fbb293f7..7db76e7be0aa 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -44,7 +44,6 @@ * EXC_VIRT_BEGIN/END - virt (AIL), unrelocated exception vectors * TRAMP_REAL_BEGIN - real, unrelocated helpers (virt may call these) * TRAMP_VIRT_BEGIN - virt, unreloc helpers (in practice, real can use) - * TRAMP_KVM_BEGIN - KVM handlers, these are put into real, unrelocated * EXC_COMMON - After switching to virtual, relocated mode. */ @@ -74,13 +73,6 @@ name: #define TRAMP_VIRT_BEGIN(name) \ FIXED_SECTION_ENTRY_BEGIN(virt_trampolines, name) -#ifdef CONFIG_KVM_BOOK3S_64_HANDLER -#define TRAMP_KVM_BEGIN(name) \ - TRAMP_VIRT_BEGIN(name) -#else -#define TRAMP_KVM_BEGIN(name) -#endif - #define EXC_REAL_NONE(start, size) \ FIXED_SECTION_ENTRY_BEGIN_LOCATION(real_vectors, exc_real_##start##_##unused, start, size); \ FIXED_SECTION_ENTRY_END_LOCATION(real_vectors, exc_real_##start##_##unused, start, size) @@ -271,6 +263,9 @@ do_define_int n .endm .macro GEN_KVM name + .balign IFETCH_ALIGN_BYTES +\name\()_kvm: + .if IKVM_SKIP cmpwi r10,KVM_GUEST_MODE_SKIP beq 89f @@ -281,13 +276,18 @@ BEGIN_FTR_SECTION_NESTED(947) END_FTR_SECTION_NESTED(CPU_FTR_CFAR,CPU_FTR_CFAR,947) .endif + ld r10,PACA_EXGEN+EX_CTR(r13) + mtctr r10 BEGIN_FTR_SECTION_NESTED(948) ld r10,IAREA+EX_PPR(r13) std r10,HSTATE_PPR(r13) END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) - ld r10,IAREA+EX_R10(r13) + ld r11,IAREA+EX_R11(r13) + ld r12,IAREA+EX_R12(r13) std r12,HSTATE_SCRATCH0(r13) sldi r12,r9,32 + ld r9,IAREA+EX_R9(r13) + ld r10,IAREA+EX_R10(r13) /* HSRR variants have the 0x2 bit added to their trap number */ .if IHSRR == EXC_HV_OR_STD BEGIN_FTR_SECTION @@ -300,29 +300,16 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .else ori r12,r12,(IVEC) .endif - -#ifdef CONFIG_RELOCATABLE - /* - * KVM requires __LOAD_FAR_HANDLER beause kvmppc_interrupt lives - * outside the head section. CONFIG_RELOCATABLE KVM expects CTR - * to be saved in HSTATE_SCRATCH1. 
- */ - ld r9,IAREA+EX_CTR(r13) - std r9,HSTATE_SCRATCH1(r13) - __LOAD_FAR_HANDLER(r9, kvmppc_interrupt) - mtctr r9 - ld r9,IAREA+EX_R9(r13) - bctr -#else - ld r9,IAREA+EX_R9(r13) b kvmppc_interrupt -#endif - .if IKVM_SKIP 89: mtocrf 0x80,r9 + ld r10,PACA_EXGEN+EX_CTR(r13) + mtctr r10 ld r9,IAREA+EX_R9(r13) ld r10,IAREA+EX_R10(r13) + ld r11,IAREA+EX_R11(r13) + ld r12,IAREA+EX_R12(r13) .if IHSRR == EXC_HV_OR_STD BEGIN_FTR_SECTION b kvmppc_skip_Hinterrupt @@ -407,11 +394,6 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) mfctr r10 std r10,IAREA+EX_CTR(r13) mfcr r9 - - .if (!\virt && IKVM_REAL) || (\virt && IKVM_VIRT) - KVMTEST \name IHSRR IVEC - .endif - std r11,IAREA+EX_R11(r13) std r12,IAREA+EX_R12(r13) @@ -475,6 +457,10 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .macro __GEN_COMMON_ENTRY name DEFINE_FIXED_SYMBOL(\name\()_common_real) \name\()_common_real: + .if IKVM_REAL + KVMTEST \name IHSRR IVEC + .endif + ld r10,PACAKMSR(r13) /* get MSR value for kernel */ /* MSR[RI] is clear iff using SRR regs */ .if IHSRR == EXC_HV_OR_STD @@ -487,9 +473,17 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real) mtmsrd r10 .if IVIRT + .if IKVM_VIRT + b 1f /* skip the virt test coming from real */ + .endif + .balign IFETCH_ALIGN_BYTES DEFINE_FIXED_SYMBOL(\name\()_common_virt) \name\()_common_virt: + .if IKVM_VIRT + KVMTEST \name IHSRR IVEC +1: + .endif .endif /* IVIRT */ .endm @@ -848,8 +842,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) */ EXC_REAL_END(system_reset, 0x100, 0x100) EXC_VIRT_NONE(0x4100, 0x100) -TRAMP_KVM_BEGIN(system_reset_kvm) - GEN_KVM system_reset #ifdef CONFIG_PPC_P7_NAP TRAMP_REAL_BEGIN(system_reset_idle_wake) @@ -927,6 +919,8 @@ EXC_COMMON_BEGIN(system_reset_common) EXCEPTION_RESTORE_REGS EXC_STD RFI_TO_USER_OR_KERNEL + GEN_KVM system_reset + INT_DEFINE_BEGIN(machine_check_early) IVEC=0x200 @@ -968,9 +962,6 @@ TRAMP_REAL_BEGIN(machine_check_fwnmi) GEN_INT_ENTRY machine_check_early, virt=0 #endif -TRAMP_KVM_BEGIN(machine_check_kvm) - GEN_KVM machine_check - #define MACHINE_CHECK_HANDLER_WINDUP \ /* Clear MSR_RI before setting SRR0 and SRR1. */\ li r9,0; \ @@ -1126,6 +1117,9 @@ EXC_COMMON_BEGIN(machine_check_common) bl machine_check_exception b ret_from_except + GEN_KVM machine_check + + #ifdef CONFIG_PPC_P7_NAP /* * This is an idle wakeup. 
Low level machine check has already been @@ -1218,8 +1212,6 @@ EXC_REAL_END(data_access, 0x300, 0x80) EXC_VIRT_BEGIN(data_access, 0x4300, 0x80) GEN_INT_ENTRY data_access, virt=1 EXC_VIRT_END(data_access, 0x4300, 0x80) -TRAMP_KVM_BEGIN(data_access_kvm) - GEN_KVM data_access EXC_COMMON_BEGIN(data_access_common) GEN_COMMON data_access ld r4,_DAR(r1) @@ -1232,6 +1224,8 @@ MMU_FTR_SECTION_ELSE b handle_page_fault ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) + GEN_KVM data_access + INT_DEFINE_BEGIN(data_access_slb) IVEC=0x380 @@ -1248,8 +1242,6 @@ EXC_REAL_END(data_access_slb, 0x380, 0x80) EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80) GEN_INT_ENTRY data_access_slb, virt=1 EXC_VIRT_END(data_access_slb, 0x4380, 0x80) -TRAMP_KVM_BEGIN(data_access_slb_kvm) - GEN_KVM data_access_slb EXC_COMMON_BEGIN(data_access_slb_common) GEN_COMMON data_access_slb ld r4,_DAR(r1) @@ -1274,6 +1266,8 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) bl do_bad_slb_fault b ret_from_except + GEN_KVM data_access_slb + INT_DEFINE_BEGIN(instruction_access) IVEC=0x400 @@ -1289,8 +1283,6 @@ EXC_REAL_END(instruction_access, 0x400, 0x80) EXC_VIRT_BEGIN(instruction_access, 0x4400, 0x80) GEN_INT_ENTRY instruction_access, virt=1 EXC_VIRT_END(instruction_access, 0x4400, 0x80) -TRAMP_KVM_BEGIN(instruction_access_kvm) - GEN_KVM instruction_access EXC_COMMON_BEGIN(instruction_access_common) GEN_COMMON instruction_access ld r4,_DAR(r1) @@ -1303,6 +1295,8 @@ MMU_FTR_SECTION_ELSE b handle_page_fault ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) + GEN_KVM instruction_access + INT_DEFINE_BEGIN(instruction_access_slb) IVEC=0x480 @@ -1319,8 +1313,6 @@ EXC_REAL_END(instruction_access_slb, 0x480, 0x80) EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80) GEN_INT_ENTRY instruction_access_slb, virt=1 EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80) -TRAMP_KVM_BEGIN(instruction_access_slb_kvm) - GEN_KVM instruction_access_slb EXC_COMMON_BEGIN(instruction_access_slb_common) GEN_COMMON instruction_access_slb ld r4,_DAR(r1) @@ -1345,6 +1337,9 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) bl do_bad_slb_fault b ret_from_except + GEN_KVM instruction_access_slb + + INT_DEFINE_BEGIN(hardware_interrupt) IVEC=0x500 IHSRR=EXC_HV_OR_STD @@ -1359,8 +1354,6 @@ EXC_REAL_END(hardware_interrupt, 0x500, 0x100) EXC_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x100) GEN_INT_ENTRY hardware_interrupt, virt=1 EXC_VIRT_END(hardware_interrupt, 0x4500, 0x100) -TRAMP_KVM_BEGIN(hardware_interrupt_kvm) - GEN_KVM hardware_interrupt EXC_COMMON_BEGIN(hardware_interrupt_common) GEN_COMMON hardware_interrupt FINISH_NAP @@ -1369,6 +1362,8 @@ EXC_COMMON_BEGIN(hardware_interrupt_common) bl do_IRQ b ret_from_except_lite + GEN_KVM hardware_interrupt + INT_DEFINE_BEGIN(alignment) IVEC=0x600 @@ -1383,8 +1378,6 @@ EXC_REAL_END(alignment, 0x600, 0x100) EXC_VIRT_BEGIN(alignment, 0x4600, 0x100) GEN_INT_ENTRY alignment, virt=1 EXC_VIRT_END(alignment, 0x4600, 0x100) -TRAMP_KVM_BEGIN(alignment_kvm) - GEN_KVM alignment EXC_COMMON_BEGIN(alignment_common) GEN_COMMON alignment bl save_nvgprs @@ -1392,6 +1385,8 @@ EXC_COMMON_BEGIN(alignment_common) bl alignment_exception b ret_from_except + GEN_KVM alignment + INT_DEFINE_BEGIN(program_check) IVEC=0x700 @@ -1404,8 +1399,6 @@ EXC_REAL_END(program_check, 0x700, 0x100) EXC_VIRT_BEGIN(program_check, 0x4700, 0x100) GEN_INT_ENTRY program_check, virt=1 EXC_VIRT_END(program_check, 0x4700, 0x100) -TRAMP_KVM_BEGIN(program_check_kvm) - GEN_KVM program_check EXC_COMMON_BEGIN(program_check_common) __GEN_COMMON_ENTRY program_check @@ -1445,6 
+1438,8 @@ EXC_COMMON_BEGIN(program_check_common) bl program_check_exception b ret_from_except + GEN_KVM program_check + INT_DEFINE_BEGIN(fp_unavailable) IVEC=0x800 @@ -1458,8 +1453,6 @@ EXC_REAL_END(fp_unavailable, 0x800, 0x100) EXC_VIRT_BEGIN(fp_unavailable, 0x4800, 0x100) GEN_INT_ENTRY fp_unavailable, virt=1 EXC_VIRT_END(fp_unavailable, 0x4800, 0x100) -TRAMP_KVM_BEGIN(fp_unavailable_kvm) - GEN_KVM fp_unavailable EXC_COMMON_BEGIN(fp_unavailable_common) GEN_COMMON fp_unavailable bne 1f /* if from user, just load it up */ @@ -1490,6 +1483,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM) b ret_from_except #endif + GEN_KVM fp_unavailable + INT_DEFINE_BEGIN(decrementer) IVEC=0x900 @@ -1503,8 +1498,6 @@ EXC_REAL_END(decrementer, 0x900, 0x80) EXC_VIRT_BEGIN(decrementer, 0x4900, 0x80) GEN_INT_ENTRY decrementer, virt=1 EXC_VIRT_END(decrementer, 0x4900, 0x80) -TRAMP_KVM_BEGIN(decrementer_kvm) - GEN_KVM decrementer EXC_COMMON_BEGIN(decrementer_common) GEN_COMMON decrementer FINISH_NAP @@ -1513,6 +1506,8 @@ EXC_COMMON_BEGIN(decrementer_common) bl timer_interrupt b ret_from_except_lite + GEN_KVM decrementer + INT_DEFINE_BEGIN(hdecrementer) IVEC=0x980 @@ -1527,8 +1522,6 @@ EXC_REAL_END(hdecrementer, 0x980, 0x80) EXC_VIRT_BEGIN(hdecrementer, 0x4980, 0x80) GEN_INT_ENTRY hdecrementer, virt=1 EXC_VIRT_END(hdecrementer, 0x4980, 0x80) -TRAMP_KVM_BEGIN(hdecrementer_kvm) - GEN_KVM hdecrementer EXC_COMMON_BEGIN(hdecrementer_common) GEN_COMMON hdecrementer bl save_nvgprs @@ -1536,6 +1529,8 @@ EXC_COMMON_BEGIN(hdecrementer_common) bl hdec_interrupt b ret_from_except + GEN_KVM hdecrementer + INT_DEFINE_BEGIN(doorbell_super) IVEC=0xa00 @@ -1549,8 +1544,6 @@ EXC_REAL_END(doorbell_super, 0xa00, 0x100) EXC_VIRT_BEGIN(doorbell_super, 0x4a00, 0x100) GEN_INT_ENTRY doorbell_super, virt=1 EXC_VIRT_END(doorbell_super, 0x4a00, 0x100) -TRAMP_KVM_BEGIN(doorbell_super_kvm) - GEN_KVM doorbell_super EXC_COMMON_BEGIN(doorbell_super_common) GEN_COMMON doorbell_super FINISH_NAP @@ -1563,6 +1556,8 @@ EXC_COMMON_BEGIN(doorbell_super_common) #endif b ret_from_except_lite + GEN_KVM doorbell_super + EXC_REAL_NONE(0xb00, 0x100) EXC_VIRT_NONE(0x4b00, 0x100) @@ -1680,6 +1675,7 @@ EXC_VIRT_BEGIN(system_call, 0x4c00, 0x100) EXC_VIRT_END(system_call, 0x4c00, 0x100) #ifdef CONFIG_KVM_BOOK3S_64_HANDLER +TRAMP_REAL_BEGIN(system_call_kvm) /* * This is a hcall, so register convention is as above, with these * differences: @@ -1687,20 +1683,35 @@ EXC_VIRT_END(system_call, 0x4c00, 0x100) * ctr = orig r13 * orig r10 saved in PACA */ -TRAMP_KVM_BEGIN(system_call_kvm) /* * Save the PPR (on systems that support it) before changing to * HMT_MEDIUM. That allows the KVM code to save that value into the * guest state (it is the guest's PPR value). */ - OPT_GET_SPR(r10, SPRN_PPR, CPU_FTR_HAS_PPR) +BEGIN_FTR_SECTION_NESTED(948) + mfspr r10,SPRN_PPR + std r10,HSTATE_PPR(r13) +END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) HMT_MEDIUM - OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r10, CPU_FTR_HAS_PPR) mfctr r10 SET_SCRATCH0(r10) - std r9,PACA_EXGEN+EX_R9(r13) - mfcr r9 - GEN_KVM system_call + mfcr r10 + std r12,HSTATE_SCRATCH0(r13) + sldi r12,r10,32 + ori r12,r12,0xc00 +#ifdef CONFIG_RELOCATABLE + /* + * Requires __LOAD_FAR_HANDLER beause kvmppc_interrupt lives + * outside the head section. 
+ */ + __LOAD_FAR_HANDLER(r10, kvmppc_interrupt) + mtctr r10 + ld r10,PACA_EXGEN+EX_R10(r13) + bctr +#else + ld r10,PACA_EXGEN+EX_R10(r13) + b kvmppc_interrupt +#endif #endif @@ -1715,8 +1726,6 @@ EXC_REAL_END(single_step, 0xd00, 0x100) EXC_VIRT_BEGIN(single_step, 0x4d00, 0x100) GEN_INT_ENTRY single_step, virt=1 EXC_VIRT_END(single_step, 0x4d00, 0x100) -TRAMP_KVM_BEGIN(single_step_kvm) - GEN_KVM single_step EXC_COMMON_BEGIN(single_step_common) GEN_COMMON single_step bl save_nvgprs @@ -1724,6 +1733,8 @@ EXC_COMMON_BEGIN(single_step_common) bl single_step_exception b ret_from_except + GEN_KVM single_step + INT_DEFINE_BEGIN(h_data_storage) IVEC=0xe00 @@ -1741,8 +1752,6 @@ EXC_REAL_END(h_data_storage, 0xe00, 0x20) EXC_VIRT_BEGIN(h_data_storage, 0x4e00, 0x20) GEN_INT_ENTRY h_data_storage, virt=1, ool=1 EXC_VIRT_END(h_data_storage, 0x4e00, 0x20) -TRAMP_KVM_BEGIN(h_data_storage_kvm) - GEN_KVM h_data_storage EXC_COMMON_BEGIN(h_data_storage_common) GEN_COMMON h_data_storage bl save_nvgprs @@ -1756,6 +1765,8 @@ MMU_FTR_SECTION_ELSE ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_TYPE_RADIX) b ret_from_except + GEN_KVM h_data_storage + INT_DEFINE_BEGIN(h_instr_storage) IVEC=0xe20 @@ -1770,8 +1781,6 @@ EXC_REAL_END(h_instr_storage, 0xe20, 0x20) EXC_VIRT_BEGIN(h_instr_storage, 0x4e20, 0x20) GEN_INT_ENTRY h_instr_storage, virt=1, ool=1 EXC_VIRT_END(h_instr_storage, 0x4e20, 0x20) -TRAMP_KVM_BEGIN(h_instr_storage_kvm) - GEN_KVM h_instr_storage EXC_COMMON_BEGIN(h_instr_storage_common) GEN_COMMON h_instr_storage bl save_nvgprs @@ -1779,6 +1788,8 @@ EXC_COMMON_BEGIN(h_instr_storage_common) bl unknown_exception b ret_from_except + GEN_KVM h_instr_storage + INT_DEFINE_BEGIN(emulation_assist) IVEC=0xe40 @@ -1793,8 +1804,6 @@ EXC_REAL_END(emulation_assist, 0xe40, 0x20) EXC_VIRT_BEGIN(emulation_assist, 0x4e40, 0x20) GEN_INT_ENTRY emulation_assist, virt=1, ool=1 EXC_VIRT_END(emulation_assist, 0x4e40, 0x20) -TRAMP_KVM_BEGIN(emulation_assist_kvm) - GEN_KVM emulation_assist EXC_COMMON_BEGIN(emulation_assist_common) GEN_COMMON emulation_assist bl save_nvgprs @@ -1802,6 +1811,8 @@ EXC_COMMON_BEGIN(emulation_assist_common) bl emulation_assist_interrupt b ret_from_except + GEN_KVM emulation_assist + /* * hmi_exception trampoline is a special case. 
It jumps to hmi_exception_early @@ -1829,10 +1840,6 @@ EXC_REAL_BEGIN(hmi_exception, 0xe60, 0x20) GEN_INT_ENTRY hmi_exception_early, virt=0, ool=1 EXC_REAL_END(hmi_exception, 0xe60, 0x20) EXC_VIRT_NONE(0x4e60, 0x20) -TRAMP_KVM_BEGIN(hmi_exception_early_kvm) - GEN_KVM hmi_exception_early -TRAMP_KVM_BEGIN(hmi_exception_kvm) - GEN_KVM hmi_exception EXC_COMMON_BEGIN(hmi_exception_early_common) mfspr r11,SPRN_HSRR0 /* Save HSRR0 */ @@ -1859,6 +1866,8 @@ EXC_COMMON_BEGIN(hmi_exception_early_common) EXCEPTION_RESTORE_REGS EXC_HV GEN_INT_ENTRY hmi_exception, virt=0 + GEN_KVM hmi_exception_early + EXC_COMMON_BEGIN(hmi_exception_common) GEN_COMMON hmi_exception FINISH_NAP @@ -1868,6 +1877,8 @@ EXC_COMMON_BEGIN(hmi_exception_common) bl handle_hmi_exception b ret_from_except + GEN_KVM hmi_exception + INT_DEFINE_BEGIN(h_doorbell) IVEC=0xe80 @@ -1883,8 +1894,6 @@ EXC_REAL_END(h_doorbell, 0xe80, 0x20) EXC_VIRT_BEGIN(h_doorbell, 0x4e80, 0x20) GEN_INT_ENTRY h_doorbell, virt=1, ool=1 EXC_VIRT_END(h_doorbell, 0x4e80, 0x20) -TRAMP_KVM_BEGIN(h_doorbell_kvm) - GEN_KVM h_doorbell EXC_COMMON_BEGIN(h_doorbell_common) GEN_COMMON h_doorbell FINISH_NAP @@ -1897,6 +1906,8 @@ EXC_COMMON_BEGIN(h_doorbell_common) #endif b ret_from_except_lite + GEN_KVM h_doorbell + INT_DEFINE_BEGIN(h_virt_irq) IVEC=0xea0 @@ -1912,8 +1923,6 @@ EXC_REAL_END(h_virt_irq, 0xea0, 0x20) EXC_VIRT_BEGIN(h_virt_irq, 0x4ea0, 0x20) GEN_INT_ENTRY h_virt_irq, virt=1, ool=1 EXC_VIRT_END(h_virt_irq, 0x4ea0, 0x20) -TRAMP_KVM_BEGIN(h_virt_irq_kvm) - GEN_KVM h_virt_irq EXC_COMMON_BEGIN(h_virt_irq_common) GEN_COMMON h_virt_irq FINISH_NAP @@ -1922,6 +1931,8 @@ EXC_COMMON_BEGIN(h_virt_irq_common) bl do_IRQ b ret_from_except_lite + GEN_KVM h_virt_irq + EXC_REAL_NONE(0xec0, 0x20) EXC_VIRT_NONE(0x4ec0, 0x20) @@ -1941,8 +1952,6 @@ EXC_REAL_END(performance_monitor, 0xf00, 0x20) EXC_VIRT_BEGIN(performance_monitor, 0x4f00, 0x20) GEN_INT_ENTRY performance_monitor, virt=1, ool=1 EXC_VIRT_END(performance_monitor, 0x4f00, 0x20) -TRAMP_KVM_BEGIN(performance_monitor_kvm) - GEN_KVM performance_monitor EXC_COMMON_BEGIN(performance_monitor_common) GEN_COMMON performance_monitor FINISH_NAP @@ -1951,6 +1960,8 @@ EXC_COMMON_BEGIN(performance_monitor_common) bl performance_monitor_exception b ret_from_except_lite + GEN_KVM performance_monitor + INT_DEFINE_BEGIN(altivec_unavailable) IVEC=0xf20 @@ -1964,8 +1975,6 @@ EXC_REAL_END(altivec_unavailable, 0xf20, 0x20) EXC_VIRT_BEGIN(altivec_unavailable, 0x4f20, 0x20) GEN_INT_ENTRY altivec_unavailable, virt=1, ool=1 EXC_VIRT_END(altivec_unavailable, 0x4f20, 0x20) -TRAMP_KVM_BEGIN(altivec_unavailable_kvm) - GEN_KVM altivec_unavailable EXC_COMMON_BEGIN(altivec_unavailable_common) GEN_COMMON altivec_unavailable #ifdef CONFIG_ALTIVEC @@ -1999,6 +2008,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) bl altivec_unavailable_exception b ret_from_except + GEN_KVM altivec_unavailable + INT_DEFINE_BEGIN(vsx_unavailable) IVEC=0xf40 @@ -2012,8 +2023,6 @@ EXC_REAL_END(vsx_unavailable, 0xf40, 0x20) EXC_VIRT_BEGIN(vsx_unavailable, 0x4f40, 0x20) GEN_INT_ENTRY vsx_unavailable, virt=1, ool=1 EXC_VIRT_END(vsx_unavailable, 0x4f40, 0x20) -TRAMP_KVM_BEGIN(vsx_unavailable_kvm) - GEN_KVM vsx_unavailable EXC_COMMON_BEGIN(vsx_unavailable_common) GEN_COMMON vsx_unavailable #ifdef CONFIG_VSX @@ -2046,6 +2055,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX) bl vsx_unavailable_exception b ret_from_except + GEN_KVM vsx_unavailable + INT_DEFINE_BEGIN(facility_unavailable) IVEC=0xf60 @@ -2058,8 +2069,6 @@ EXC_REAL_END(facility_unavailable, 0xf60, 0x20) 
EXC_VIRT_BEGIN(facility_unavailable, 0x4f60, 0x20) GEN_INT_ENTRY facility_unavailable, virt=1, ool=1 EXC_VIRT_END(facility_unavailable, 0x4f60, 0x20) -TRAMP_KVM_BEGIN(facility_unavailable_kvm) - GEN_KVM facility_unavailable EXC_COMMON_BEGIN(facility_unavailable_common) GEN_COMMON facility_unavailable bl save_nvgprs @@ -2067,6 +2076,8 @@ EXC_COMMON_BEGIN(facility_unavailable_common) bl facility_unavailable_exception b ret_from_except + GEN_KVM facility_unavailable + INT_DEFINE_BEGIN(h_facility_unavailable) IVEC=0xf80 @@ -2081,8 +2092,6 @@ EXC_REAL_END(h_facility_unavailable, 0xf80, 0x20) EXC_VIRT_BEGIN(h_facility_unavailable, 0x4f80, 0x20) GEN_INT_ENTRY h_facility_unavailable, virt=1, ool=1 EXC_VIRT_END(h_facility_unavailable, 0x4f80, 0x20) -TRAMP_KVM_BEGIN(h_facility_unavailable_kvm) - GEN_KVM h_facility_unavailable EXC_COMMON_BEGIN(h_facility_unavailable_common) GEN_COMMON h_facility_unavailable bl save_nvgprs @@ -2090,6 +2099,8 @@ EXC_COMMON_BEGIN(h_facility_unavailable_common) bl facility_unavailable_exception b ret_from_except + GEN_KVM h_facility_unavailable + EXC_REAL_NONE(0xfa0, 0x20) EXC_VIRT_NONE(0x4fa0, 0x20) @@ -2115,14 +2126,15 @@ EXC_REAL_BEGIN(cbe_system_error, 0x1200, 0x100) GEN_INT_ENTRY cbe_system_error, virt=0 EXC_REAL_END(cbe_system_error, 0x1200, 0x100) EXC_VIRT_NONE(0x5200, 0x100) -TRAMP_KVM_BEGIN(cbe_system_error_kvm) - GEN_KVM cbe_system_error EXC_COMMON_BEGIN(cbe_system_error_common) GEN_COMMON cbe_system_error bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl cbe_system_error_exception b ret_from_except + + GEN_KVM cbe_system_error + #else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1200, 0x100) EXC_VIRT_NONE(0x5200, 0x100) @@ -2141,8 +2153,6 @@ EXC_REAL_END(instruction_breakpoint, 0x1300, 0x100) EXC_VIRT_BEGIN(instruction_breakpoint, 0x5300, 0x100) GEN_INT_ENTRY instruction_breakpoint, virt=1 EXC_VIRT_END(instruction_breakpoint, 0x5300, 0x100) -TRAMP_KVM_BEGIN(instruction_breakpoint_kvm) - GEN_KVM instruction_breakpoint EXC_COMMON_BEGIN(instruction_breakpoint_common) GEN_COMMON instruction_breakpoint bl save_nvgprs @@ -2150,6 +2160,8 @@ EXC_COMMON_BEGIN(instruction_breakpoint_common) bl instruction_breakpoint_exception b ret_from_except + GEN_KVM instruction_breakpoint + EXC_REAL_NONE(0x1400, 0x100) EXC_VIRT_NONE(0x5400, 0x100) @@ -2158,6 +2170,7 @@ INT_DEFINE_BEGIN(denorm_exception) IVEC=0x1500 IHSRR=EXC_HV IEARLY=2 + IKVM_REAL=1 INT_DEFINE_END(denorm_exception) EXC_REAL_BEGIN(denorm_exception, 0x1500, 0x100) @@ -2167,7 +2180,6 @@ EXC_REAL_BEGIN(denorm_exception, 0x1500, 0x100) andis. r10,r10,(HSRR1_DENORM)@h /* denorm? 
*/ bne+ denorm_assist #endif - KVMTEST denorm_exception, EXC_HV, 0x1500 mfspr r11,SPRN_HSRR0 mfspr r12,SPRN_HSRR1 GEN_BRANCH_TO_COMMON denorm_exception, virt=0 @@ -2185,8 +2197,6 @@ EXC_VIRT_END(denorm_exception, 0x5500, 0x100) #else EXC_VIRT_NONE(0x5500, 0x100) #endif -TRAMP_KVM_BEGIN(denorm_exception_kvm) - GEN_KVM denorm_exception #ifdef CONFIG_PPC_DENORMALISATION TRAMP_REAL_BEGIN(denorm_assist) @@ -2264,6 +2274,8 @@ EXC_COMMON_BEGIN(denorm_exception_common) bl unknown_exception b ret_from_except + GEN_KVM denorm_exception + #ifdef CONFIG_CBE_RAS INT_DEFINE_BEGIN(cbe_maintenance) @@ -2277,14 +2289,15 @@ EXC_REAL_BEGIN(cbe_maintenance, 0x1600, 0x100) GEN_INT_ENTRY cbe_maintenance, virt=0 EXC_REAL_END(cbe_maintenance, 0x1600, 0x100) EXC_VIRT_NONE(0x5600, 0x100) -TRAMP_KVM_BEGIN(cbe_maintenance_kvm) - GEN_KVM cbe_maintenance EXC_COMMON_BEGIN(cbe_maintenance_common) GEN_COMMON cbe_maintenance bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl cbe_maintenance_exception b ret_from_except + + GEN_KVM cbe_maintenance + #else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1600, 0x100) EXC_VIRT_NONE(0x5600, 0x100) @@ -2302,8 +2315,6 @@ EXC_REAL_END(altivec_assist, 0x1700, 0x100) EXC_VIRT_BEGIN(altivec_assist, 0x5700, 0x100) GEN_INT_ENTRY altivec_assist, virt=1 EXC_VIRT_END(altivec_assist, 0x5700, 0x100) -TRAMP_KVM_BEGIN(altivec_assist_kvm) - GEN_KVM altivec_assist EXC_COMMON_BEGIN(altivec_assist_common) GEN_COMMON altivec_assist bl save_nvgprs @@ -2315,6 +2326,8 @@ EXC_COMMON_BEGIN(altivec_assist_common) #endif b ret_from_except + GEN_KVM altivec_assist + #ifdef CONFIG_CBE_RAS INT_DEFINE_BEGIN(cbe_thermal) @@ -2328,14 +2341,15 @@ EXC_REAL_BEGIN(cbe_thermal, 0x1800, 0x100) GEN_INT_ENTRY cbe_thermal, virt=0 EXC_REAL_END(cbe_thermal, 0x1800, 0x100) EXC_VIRT_NONE(0x5800, 0x100) -TRAMP_KVM_BEGIN(cbe_thermal_kvm) - GEN_KVM cbe_thermal EXC_COMMON_BEGIN(cbe_thermal_common) GEN_COMMON cbe_thermal bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl cbe_thermal_exception b ret_from_except + + GEN_KVM cbe_thermal + #else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1800, 0x100) EXC_VIRT_NONE(0x5800, 0x100) @@ -2527,17 +2541,12 @@ TRAMP_REAL_BEGIN(hrfi_flush_fallback) GET_SCRATCH0(r13); hrfid -/* - * Real mode exceptions actually use this too, but alternate - * instruction code patches (which end up in the common .text area) - * cannot reach these if they are put there. - */ USE_TEXT_SECTION() MASKED_INTERRUPT EXC_STD MASKED_INTERRUPT EXC_HV #ifdef CONFIG_KVM_BOOK3S_64_HANDLER -TRAMP_REAL_BEGIN(kvmppc_skip_interrupt) +kvmppc_skip_interrupt: /* * Here all GPRs are unchanged from when the interrupt happened * except for r13, which is saved in SPRG_SCRATCH0. @@ -2549,7 +2558,7 @@ TRAMP_REAL_BEGIN(kvmppc_skip_interrupt) RFI_TO_KERNEL b . -TRAMP_REAL_BEGIN(kvmppc_skip_Hinterrupt) +kvmppc_skip_Hinterrupt: /* * Here all GPRs are unchanged from when the interrupt happened * except for r13, which is saved in SPRG_SCRATCH0. @@ -2562,16 +2571,6 @@ TRAMP_REAL_BEGIN(kvmppc_skip_Hinterrupt) b . #endif -/* - * Ensure that any handlers that get invoked from the exception prologs - * above are below the first 64KB (0x10000) of the kernel image because - * the prologs assemble the addresses of these handlers using the - * LOAD_HANDLER macro, which uses an ori instruction. 
- */ - -/*** Common interrupt handlers ***/ - - /* * Relocation-on interrupts: A subset of the interrupts can be delivered * with IR=1/DR=1, if AIL==2 and MSR.HV won't be changed by delivering diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index faebcbb8c4db..d51fa8a17d42 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1265,7 +1265,6 @@ kvmppc_interrupt_hv: * R12 = (guest CR << 32) | interrupt vector * R13 = PACA * guest R12 saved in shadow VCPU SCRATCH0 - * guest CTR saved in shadow VCPU SCRATCH1 if RELOCATABLE * guest R13 saved in SPRN_SCRATCH0 */ std r9, HSTATE_SCRATCH2(r13) @@ -1366,12 +1365,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) 11: stw r3,VCPU_HEIR(r9) /* these are volatile across C function calls */ -#ifdef CONFIG_RELOCATABLE - ld r3, HSTATE_SCRATCH1(r13) - mtctr r3 -#else mfctr r3 -#endif mfxer r4 std r3, VCPU_CTR(r9) std r4, VCPU_XER(r9) @@ -3226,7 +3220,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_P9_TM_HV_ASSIST) * r12 is (CR << 32) | vector * r13 points to our PACA * r12 is saved in HSTATE_SCRATCH0(r13) - * ctr is saved in HSTATE_SCRATCH1(r13) if RELOCATABLE * r9 is saved in HSTATE_SCRATCH2(r13) * r13 is saved in HSPRG1 * cfar is saved in HSTATE_CFAR(r13) @@ -3275,11 +3268,7 @@ kvmppc_bad_host_intr: ld r5, HSTATE_CFAR(r13) std r5, ORIG_GPR3(r1) mflr r3 -#ifdef CONFIG_RELOCATABLE - ld r4, HSTATE_SCRATCH1(r13) -#else mfctr r4 -#endif mfxer r5 lbz r6, PACAIRQSOFTMASK(r13) std r3, _LINK(r1) diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S index 0169bab544dd..1f492aa4c8d6 100644 --- a/arch/powerpc/kvm/book3s_segment.S +++ b/arch/powerpc/kvm/book3s_segment.S @@ -167,16 +167,9 @@ kvmppc_interrupt_pr: * R12 = (guest CR << 32) | exit handler id * R13 = PACA * HSTATE.SCRATCH0 = guest R12 - * HSTATE.SCRATCH1 = guest CTR if RELOCATABLE */ #ifdef CONFIG_PPC64 /* Match 32-bit entry */ -#ifdef CONFIG_RELOCATABLE - std r9, HSTATE_SCRATCH2(r13) - ld r9, HSTATE_SCRATCH1(r13) - mtctr r9 - ld r9, HSTATE_SCRATCH2(r13) -#endif rotldi r12, r12, 32 /* Flip R12 halves for stw */ stw r12, HSTATE_SCRATCH1(r13) /* CR is now in the low half */ srdi r12, r12, 32 /* shift trap into low half */ From patchwork Tue Nov 12 16:52:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193717 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFHl308hz9sNx for ; Wed, 13 Nov 2019 04:34:59 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFHl1sf2zF4hF for ; Wed, 13 Nov 2019 04:34:59 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de 
(mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDN100NqzF1lZ for ; Wed, 13 Nov 2019 03:53:36 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id D226DAF5D; Tue, 12 Nov 2019 16:53:33 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 13/33] powerpc/64s/exception: remove confusing IEARLY option Date: Tue, 12 Nov 2019 17:52:11 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin Replace IEARLY=1 and IEARLY=2 with IBRANCH_COMMON, which controls if the entry code branches to a common handler; and IREALMODE_COMMON, which controls whether the common handler should remain in real mode. These special cases no longer avoid loading the SRR registers, there is no point as most of them load the registers immediately anyway. Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 48 ++++++++++++++-------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 7db76e7be0aa..716a95ba814f 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -174,7 +174,8 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) #define IDAR .L_IDAR_\name\() #define IDSISR .L_IDSISR_\name\() #define ISET_RI .L_ISET_RI_\name\() -#define IEARLY .L_IEARLY_\name\() +#define IBRANCH_TO_COMMON .L_IBRANCH_TO_COMMON_\name\() +#define IREALMODE_COMMON .L_IREALMODE_COMMON_\name\() #define IMASK .L_IMASK_\name\() #define IKVM_SKIP .L_IKVM_SKIP_\name\() #define IKVM_REAL .L_IKVM_REAL_\name\() @@ -218,8 +219,15 @@ do_define_int n .ifndef ISET_RI ISET_RI=1 .endif - .ifndef IEARLY - IEARLY=0 + .ifndef IBRANCH_TO_COMMON + IBRANCH_TO_COMMON=1 + .endif + .ifndef IREALMODE_COMMON + IREALMODE_COMMON=0 + .else + .if ! 
IBRANCH_TO_COMMON + .error "IREALMODE_COMMON=1 but IBRANCH_TO_COMMON=0" + .endif .endif .ifndef IMASK IMASK=0 @@ -353,6 +361,11 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) */ .macro GEN_BRANCH_TO_COMMON name, virt + .if IREALMODE_COMMON + LOAD_HANDLER(r10, \name\()_common) + mtctr r10 + bctr + .else .if \virt #ifndef CONFIG_RELOCATABLE b \name\()_common_virt @@ -366,6 +379,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) mtctr r10 bctr .endif + .endif .endm .macro GEN_INT_ENTRY name, virt, ool=0 @@ -421,11 +435,6 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) stw r10,IAREA+EX_DSISR(r13) .endif - .if IEARLY == 2 - /* nothing more */ - .elseif IEARLY - BRANCH_TO_C000(r11, \name\()_common) - .else .if IHSRR == EXC_HV_OR_STD BEGIN_FTR_SECTION mfspr r11,SPRN_HSRR0 /* save HSRR0 */ @@ -441,6 +450,8 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) mfspr r11,SPRN_SRR0 /* save SRR0 */ mfspr r12,SPRN_SRR1 /* and SRR1 */ .endif + + .if IBRANCH_TO_COMMON GEN_BRANCH_TO_COMMON \name \virt .endif @@ -926,6 +937,7 @@ INT_DEFINE_BEGIN(machine_check_early) IVEC=0x200 IAREA=PACA_EXMC IVIRT=0 /* no virt entry point */ + IREALMODE_COMMON=1 /* * MSR_RI is not enabled, because PACA_EXMC is being used, so a * nested machine check corrupts it. machine_check_common enables @@ -933,7 +945,6 @@ INT_DEFINE_BEGIN(machine_check_early) */ ISET_RI=0 ISTACK=0 - IEARLY=1 IDAR=1 IDSISR=1 IRECONCILE=0 @@ -973,9 +984,6 @@ TRAMP_REAL_BEGIN(machine_check_fwnmi) EXCEPTION_RESTORE_REGS EXC_STD EXC_COMMON_BEGIN(machine_check_early_common) - mfspr r11,SPRN_SRR0 - mfspr r12,SPRN_SRR1 - /* * Switch to mc_emergency stack and handle re-entrancy (we limit * the nested MCE upto level 4 to avoid stack overflow). @@ -1822,7 +1830,7 @@ EXC_COMMON_BEGIN(emulation_assist_common) INT_DEFINE_BEGIN(hmi_exception_early) IVEC=0xe60 IHSRR=EXC_HV - IEARLY=1 + IREALMODE_COMMON=1 ISTACK=0 IRECONCILE=0 IKUAP=0 /* We don't touch AMR here, we never go to virtual mode */ @@ -1842,8 +1850,6 @@ EXC_REAL_END(hmi_exception, 0xe60, 0x20) EXC_VIRT_NONE(0x4e60, 0x20) EXC_COMMON_BEGIN(hmi_exception_early_common) - mfspr r11,SPRN_HSRR0 /* Save HSRR0 */ - mfspr r12,SPRN_HSRR1 /* Save HSRR1 */ mr r10,r1 /* Save r1 */ ld r1,PACAEMERGSP(r13) /* Use emergency stack for realmode */ subi r1,r1,INT_FRAME_SIZE /* alloc stack frame */ @@ -2169,29 +2175,23 @@ EXC_VIRT_NONE(0x5400, 0x100) INT_DEFINE_BEGIN(denorm_exception) IVEC=0x1500 IHSRR=EXC_HV - IEARLY=2 + IBRANCH_TO_COMMON=0 IKVM_REAL=1 INT_DEFINE_END(denorm_exception) EXC_REAL_BEGIN(denorm_exception, 0x1500, 0x100) GEN_INT_ENTRY denorm_exception, virt=0 #ifdef CONFIG_PPC_DENORMALISATION - mfspr r10,SPRN_HSRR1 - andis. r10,r10,(HSRR1_DENORM)@h /* denorm? */ + andis. r10,r12,(HSRR1_DENORM)@h /* denorm? */ bne+ denorm_assist #endif - mfspr r11,SPRN_HSRR0 - mfspr r12,SPRN_HSRR1 GEN_BRANCH_TO_COMMON denorm_exception, virt=0 EXC_REAL_END(denorm_exception, 0x1500, 0x100) #ifdef CONFIG_PPC_DENORMALISATION EXC_VIRT_BEGIN(denorm_exception, 0x5500, 0x100) GEN_INT_ENTRY denorm_exception, virt=1 - mfspr r10,SPRN_HSRR1 - andis. r10,r10,(HSRR1_DENORM)@h /* denorm? */ + andis. r10,r12,(HSRR1_DENORM)@h /* denorm? 
*/ bne+ denorm_assist - mfspr r11,SPRN_HSRR0 - mfspr r12,SPRN_HSRR1 GEN_BRANCH_TO_COMMON denorm_exception, virt=1 EXC_VIRT_END(denorm_exception, 0x5500, 0x100) #else From patchwork Tue Nov 12 16:52:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193722 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFLf72v5z9sNx for ; Wed, 13 Nov 2019 04:37:30 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFLf195TzF5ML for ; Wed, 13 Nov 2019 04:37:30 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDN24BnVzF24s for ; Wed, 13 Nov 2019 03:53:38 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 57119B0BA; Tue, 12 Nov 2019 16:53:35 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 14/33] powerpc/64s/exception: remove the SPR saving patch code macros Date: Tue, 12 Nov 2019 17:52:12 +0100 Message-Id: <2fd0f0cb35f3632b6f3474860981455b2d15af39.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin These are used infrequently enough they don't provide much help, so inline them. 
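The pattern is perhaps clearer in a hedged C analogue (the real code is powerpc assembly using feature sections; the feature bit and helper below are invented for illustration): a one-line wrapper that hides a CPU-feature test is dropped, and the test is written out at its few call sites instead.

    #include <stdio.h>

    /* Assumed feature word and bit, for illustration only. */
    static unsigned long cpu_features = 0x1;
    #define CPU_FTR_HAS_PPR 0x1

    static unsigned long read_ppr(void) { return 42; }   /* dummy SPR read */

    /* Before: a thin wrapper macro, used in only a couple of places. */
    #define OPT_GET_PPR(dst) \
        do { if (cpu_features & CPU_FTR_HAS_PPR) (dst) = read_ppr(); } while (0)

    int main(void)
    {
        unsigned long ppr = 0;

        OPT_GET_PPR(ppr);           /* old style: helper hides the test */

        /* After: the same test spelled out at the call site, which is what
         * the patch does with BEGIN_FTR_SECTION/END_FTR_SECTION_IFSET. */
        if (cpu_features & CPU_FTR_HAS_PPR)
            ppr = read_ppr();

        printf("ppr = %lu\n", ppr);
        return 0;
    }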
Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 82 ++++++++++------------------ 1 file changed, 28 insertions(+), 54 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 716a95ba814f..abf26db36427 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -110,46 +110,6 @@ name: #define EXC_HV 1 #define EXC_STD 0 -/* - * PPR save/restore macros used in exceptions-64s.S - * Used for P7 or later processors - */ -#define SAVE_PPR(area, ra) \ -BEGIN_FTR_SECTION_NESTED(940) \ - ld ra,area+EX_PPR(r13); /* Read PPR from paca */ \ - std ra,_PPR(r1); \ -END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,940) - -#define RESTORE_PPR_PACA(area, ra) \ -BEGIN_FTR_SECTION_NESTED(941) \ - ld ra,area+EX_PPR(r13); \ - mtspr SPRN_PPR,ra; \ -END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,941) - -/* - * Get an SPR into a register if the CPU has the given feature - */ -#define OPT_GET_SPR(ra, spr, ftr) \ -BEGIN_FTR_SECTION_NESTED(943) \ - mfspr ra,spr; \ -END_FTR_SECTION_NESTED(ftr,ftr,943) - -/* - * Set an SPR from a register if the CPU has the given feature - */ -#define OPT_SET_SPR(ra, spr, ftr) \ -BEGIN_FTR_SECTION_NESTED(943) \ - mtspr spr,ra; \ -END_FTR_SECTION_NESTED(ftr,ftr,943) - -/* - * Save a register to the PACA if the CPU has the given feature - */ -#define OPT_SAVE_REG_TO_PACA(offset, ra, ftr) \ -BEGIN_FTR_SECTION_NESTED(943) \ - std ra,offset(r13); \ -END_FTR_SECTION_NESTED(ftr,ftr,943) - /* * Branch to label using its 0xC000 address. This results in instruction * address suitable for MSR[IR]=0 or 1, which allows relocation to be turned @@ -278,18 +238,18 @@ do_define_int n cmpwi r10,KVM_GUEST_MODE_SKIP beq 89f .else -BEGIN_FTR_SECTION_NESTED(947) +BEGIN_FTR_SECTION ld r10,IAREA+EX_CFAR(r13) std r10,HSTATE_CFAR(r13) -END_FTR_SECTION_NESTED(CPU_FTR_CFAR,CPU_FTR_CFAR,947) +END_FTR_SECTION_IFSET(CPU_FTR_CFAR) .endif ld r10,PACA_EXGEN+EX_CTR(r13) mtctr r10 -BEGIN_FTR_SECTION_NESTED(948) +BEGIN_FTR_SECTION ld r10,IAREA+EX_PPR(r13) std r10,HSTATE_PPR(r13) -END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) ld r11,IAREA+EX_R11(r13) ld r12,IAREA+EX_R12(r13) std r12,HSTATE_SCRATCH0(r13) @@ -386,10 +346,14 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) SET_SCRATCH0(r13) /* save r13 */ GET_PACA(r13) std r9,IAREA+EX_R9(r13) /* save r9 */ - OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR) +BEGIN_FTR_SECTION + mfspr r9,SPRN_PPR +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) HMT_MEDIUM std r10,IAREA+EX_R10(r13) /* save r10 - r12 */ - OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR) +BEGIN_FTR_SECTION + mfspr r10,SPRN_CFAR +END_FTR_SECTION_IFSET(CPU_FTR_CFAR) .if \ool .if !\virt b tramp_real_\name @@ -402,8 +366,12 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) .endif .endif - OPT_SAVE_REG_TO_PACA(IAREA+EX_PPR, r9, CPU_FTR_HAS_PPR) - OPT_SAVE_REG_TO_PACA(IAREA+EX_CFAR, r10, CPU_FTR_CFAR) +BEGIN_FTR_SECTION + std r9,IAREA+EX_PPR(r13) +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) +BEGIN_FTR_SECTION + std r10,IAREA+EX_CFAR(r13) +END_FTR_SECTION_IFSET(CPU_FTR_CFAR) INTERRUPT_TO_KERNEL mfctr r10 std r10,IAREA+EX_CTR(r13) @@ -558,7 +526,10 @@ DEFINE_FIXED_SYMBOL(\name\()_common_virt) .endif beq 101f /* if from kernel mode */ ACCOUNT_CPU_USER_ENTRY(r13, r9, r10) - SAVE_PPR(IAREA, r9) +BEGIN_FTR_SECTION + ld r9,IAREA+EX_PPR(r13) /* Read PPR from paca */ + std r9,_PPR(r1) +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) 101: .else .if IKUAP @@ 
-598,10 +569,10 @@ DEFINE_FIXED_SYMBOL(\name\()_common_virt) std r10,_DSISR(r1) .endif -BEGIN_FTR_SECTION_NESTED(66) +BEGIN_FTR_SECTION ld r10,IAREA+EX_CFAR(r13) std r10,ORIG_GPR3(r1) -END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66) +END_FTR_SECTION_IFSET(CPU_FTR_CFAR) ld r10,IAREA+EX_CTR(r13) std r10,_CTR(r1) std r2,GPR2(r1) /* save r2 in stackframe */ @@ -1696,10 +1667,10 @@ TRAMP_REAL_BEGIN(system_call_kvm) * HMT_MEDIUM. That allows the KVM code to save that value into the * guest state (it is the guest's PPR value). */ -BEGIN_FTR_SECTION_NESTED(948) +BEGIN_FTR_SECTION mfspr r10,SPRN_PPR std r10,HSTATE_PPR(r13) -END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) HMT_MEDIUM mfctr r10 SET_SCRATCH0(r10) @@ -2254,7 +2225,10 @@ denorm_done: mtspr SPRN_HSRR0,r11 mtcrf 0x80,r9 ld r9,PACA_EXGEN+EX_R9(r13) - RESTORE_PPR_PACA(PACA_EXGEN, r10) +BEGIN_FTR_SECTION + ld r10,PACA_EXGEN+EX_PPR(r13) + mtspr SPRN_PPR,r10 +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) BEGIN_FTR_SECTION ld r10,PACA_EXGEN+EX_CFAR(r13) mtspr SPRN_CFAR,r10 From patchwork Tue Nov 12 16:52:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193725 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFPT1xFpz9sNx for ; Wed, 13 Nov 2019 04:39:57 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFPT0sBnzF4TZ for ; Wed, 13 Nov 2019 04:39:57 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDN40sgtzF24s for ; Wed, 13 Nov 2019 03:53:39 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id CB4D3B37C; Tue, 12 Nov 2019 16:53:36 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 15/33] powerpc/64s/exception: trim unused arguments from KVMTEST macro Date: Tue, 12 Nov 2019 17:52:13 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , 
Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index abf26db36427..9fa71d51ecf4 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -224,7 +224,7 @@ do_define_int n #define kvmppc_interrupt kvmppc_interrupt_pr #endif -.macro KVMTEST name, hsrr, n +.macro KVMTEST name lbz r10,HSTATE_IN_GUEST(r13) cmpwi r10,0 bne \name\()_kvm @@ -293,7 +293,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) .endm #else -.macro KVMTEST name, hsrr, n +.macro KVMTEST name .endm .macro GEN_KVM name .endm @@ -437,7 +437,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR) DEFINE_FIXED_SYMBOL(\name\()_common_real) \name\()_common_real: .if IKVM_REAL - KVMTEST \name IHSRR IVEC + KVMTEST \name .endif ld r10,PACAKMSR(r13) /* get MSR value for kernel */ @@ -460,7 +460,7 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real) DEFINE_FIXED_SYMBOL(\name\()_common_virt) \name\()_common_virt: .if IKVM_VIRT - KVMTEST \name IHSRR IVEC + KVMTEST \name 1: .endif .endif /* IVIRT */ @@ -1595,7 +1595,7 @@ INT_DEFINE_END(system_call) GET_PACA(r13) std r10,PACA_EXGEN+EX_R10(r13) INTERRUPT_TO_KERNEL - KVMTEST system_call EXC_STD 0xc00 /* uses r10, branch to system_call_kvm */ + KVMTEST system_call /* uses r10, branch to system_call_kvm */ mfctr r9 #else mr r9,r13 From patchwork Tue Nov 12 16:52:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193729 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFSJ3B1Mz9sPc for ; Wed, 13 Nov 2019 04:42:24 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFSJ1tf1zF5PV for ; Wed, 13 Nov 2019 04:42:24 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDN643jPzF1tS for ; Wed, 13 Nov 2019 03:53:42 +1100 (AEDT) X-Virus-Scanned: by 
amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 5526BB398; Tue, 12 Nov 2019 16:53:38 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 16/33] powerpc/64s/exception: hdecrementer avoid touching the stack Date: Tue, 12 Nov 2019 17:52:14 +0100 Message-Id: <7d943bd8afda8c4b9a36080caadc9b53d1ffb54c.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin The hdec interrupt handler is reported to sometimes fire in Linux if KVM leaves it pending after a guest exists. This is harmless, so there is a no-op handler for it. The interrupt handler currently uses the regular kernel stack. Change this to avoid touching the stack entirely. This should be the last place where the regular Linux stack can be accessed with asynchronous interrupts (including PMI) soft-masked. It might be possible to take advantage of this invariant, e.g., to context switch the kernel stack SLB entry without clearing MSR[EE]. 
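As a rough sketch of the control-flow change (C for readability, with invented names; the real change is in the assembly entry code): because the stray interrupt is edge-triggered and needs no acknowledgement, the handler no longer builds a pt_regs frame and calls an empty C function, it restores the scratch registers and returns directly.

    #include <stdio.h>

    struct pt_regs { unsigned long nip, msr; };  /* trimmed stand-in */

    /* Old flow: the entry code built a pt_regs frame on the kernel stack
     * and called a C handler that had nothing to do. */
    static void hdec_interrupt(struct pt_regs *regs)
    {
        (void)regs;   /* edge-triggered: nothing to acknowledge or clear */
    }

    static void old_entry(void)
    {
        struct pt_regs frame = { 0, 0 };   /* models the stack frame */
        hdec_interrupt(&frame);
    }

    /* New flow: no frame and no C call; the entry code just restores the
     * scratch registers saved in the paca and returns with HRFI, so the
     * kernel stack is never touched while soft-masked. */
    static void new_entry(void)
    {
    }

    int main(void)
    {
        old_entry();
        new_entry();
        puts("stray hypervisor decrementer handled with no visible effect");
        return 0;
    }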
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 1 - arch/powerpc/kernel/exceptions-64s.S | 25 ++++++++++++++++++++----- arch/powerpc/kernel/time.c | 9 --------- 3 files changed, 20 insertions(+), 15 deletions(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index 08dbe3e6831c..e0107495c4de 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -24,7 +24,6 @@ extern struct clock_event_device decrementer_clockevent; extern void generic_calibrate_decr(void); -extern void hdec_interrupt(struct pt_regs *regs); /* Some sane defaults: 125 MHz timebase, 1GHz processor */ extern unsigned long ppc_proc_freq; diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 9fa71d51ecf4..7a234e6d7bf5 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1491,6 +1491,8 @@ EXC_COMMON_BEGIN(decrementer_common) INT_DEFINE_BEGIN(hdecrementer) IVEC=0x980 IHSRR=EXC_HV + ISTACK=0 + IRECONCILE=0 IKVM_REAL=1 IKVM_VIRT=1 INT_DEFINE_END(hdecrementer) @@ -1502,11 +1504,24 @@ EXC_VIRT_BEGIN(hdecrementer, 0x4980, 0x80) GEN_INT_ENTRY hdecrementer, virt=1 EXC_VIRT_END(hdecrementer, 0x4980, 0x80) EXC_COMMON_BEGIN(hdecrementer_common) - GEN_COMMON hdecrementer - bl save_nvgprs - addi r3,r1,STACK_FRAME_OVERHEAD - bl hdec_interrupt - b ret_from_except + __GEN_COMMON_ENTRY hdecrementer + /* + * Hypervisor decrementer interrupts not caught by the KVM test + * shouldn't occur but are sometimes left pending on exit from a KVM + * guest. We don't need to do anything to clear them, as they are + * edge-triggered. + * + * Be careful to avoid touching the kernel stack. + */ + ld r10,PACA_EXGEN+EX_CTR(r13) + mtctr r10 + mtcrf 0x80,r9 + ld r9,PACA_EXGEN+EX_R9(r13) + ld r10,PACA_EXGEN+EX_R10(r13) + ld r11,PACA_EXGEN+EX_R11(r13) + ld r12,PACA_EXGEN+EX_R12(r13) + ld r13,PACA_EXGEN+EX_R13(r13) + HRFI_TO_KERNEL GEN_KVM hdecrementer diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 694522308cd5..bebc8c440289 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -663,15 +663,6 @@ void timer_broadcast_interrupt(void) } #endif -/* - * Hypervisor decrementer interrupts shouldn't occur but are sometimes - * left pending on exit from a KVM guest. We don't need to do anything - * to clear them, as they are edge-triggered. 
- */ -void hdec_interrupt(struct pt_regs *regs) -{ -} - #ifdef CONFIG_SUSPEND static void generic_suspend_disable_irqs(void) { From patchwork Tue Nov 12 16:52:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193736 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFWX57mRz9sNx for ; Wed, 13 Nov 2019 04:45:12 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFWX2GhBzF5KS for ; Wed, 13 Nov 2019 04:45:12 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDN71JVCzF31D for ; Wed, 13 Nov 2019 03:53:42 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id D00A6AFF0; Tue, 12 Nov 2019 16:53:39 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 17/33] powerpc/64s/exception: re-inline some handlers Date: Tue, 12 Nov 2019 17:52:15 +0100 Message-Id: <9c3642df8f701f3b7027e25debac374b74e6de19.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin The reduction in interrupt entry size allows some handlers to be re-inlined. 
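A hedged model of why smaller entries can move back inline (the 0x80-byte slot size matches the EXC_REAL_BEGIN ranges in the diff below; the byte counts in the example are illustrative, not measured): an entry is emitted at its fixed vector slot only if it fits, otherwise the slot holds just a branch to an out-of-line trampoline.

    #include <stdio.h>

    /* Each of these vectors has a fixed 0x80-byte slot in the kernel image. */
    #define SLOT_BYTES 0x80

    /* Decide inline vs out-of-line (ool): the generated entry sits at the
     * vector itself only if it fits the slot budget. */
    static const char *placement(unsigned int entry_bytes)
    {
        return entry_bytes <= SLOT_BYTES ? "inline (ool=0)"
                                         : "out-of-line (ool=1)";
    }

    int main(void)
    {
        /* Illustrative sizes: before the series the entry macro emitted
         * more code than fits in 0x80 bytes; after trimming it fits. */
        unsigned int before = 0x90, after = 0x6c;

        printf("before: %#x bytes -> %s\n", before, placement(before));
        printf("after:  %#x bytes -> %s\n", after, placement(after));
        return 0;
    }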
Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 7a234e6d7bf5..9494403b9586 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1186,7 +1186,7 @@ INT_DEFINE_BEGIN(data_access) INT_DEFINE_END(data_access) EXC_REAL_BEGIN(data_access, 0x300, 0x80) - GEN_INT_ENTRY data_access, virt=0, ool=1 + GEN_INT_ENTRY data_access, virt=0 EXC_REAL_END(data_access, 0x300, 0x80) EXC_VIRT_BEGIN(data_access, 0x4300, 0x80) GEN_INT_ENTRY data_access, virt=1 @@ -1216,7 +1216,7 @@ INT_DEFINE_BEGIN(data_access_slb) INT_DEFINE_END(data_access_slb) EXC_REAL_BEGIN(data_access_slb, 0x380, 0x80) - GEN_INT_ENTRY data_access_slb, virt=0, ool=1 + GEN_INT_ENTRY data_access_slb, virt=0 EXC_REAL_END(data_access_slb, 0x380, 0x80) EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80) GEN_INT_ENTRY data_access_slb, virt=1 @@ -1472,7 +1472,7 @@ INT_DEFINE_BEGIN(decrementer) INT_DEFINE_END(decrementer) EXC_REAL_BEGIN(decrementer, 0x900, 0x80) - GEN_INT_ENTRY decrementer, virt=0, ool=1 + GEN_INT_ENTRY decrementer, virt=0 EXC_REAL_END(decrementer, 0x900, 0x80) EXC_VIRT_BEGIN(decrementer, 0x4900, 0x80) GEN_INT_ENTRY decrementer, virt=1 From patchwork Tue Nov 12 16:52:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193739 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFZM6Qnjz9sPk for ; Wed, 13 Nov 2019 04:47:39 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFZK6kTszF5Qd for ; Wed, 13 Nov 2019 04:47:37 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDN85T12zF32J for ; Wed, 13 Nov 2019 03:53:44 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 5930AB383; Tue, 12 Nov 2019 16:53:41 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 18/33] powerpc/64s/exception: Clean up SRR specifiers Date: Tue, 12 Nov 2019 17:52:16 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , 
David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin Remove more magic numbers and replace with nicely named bools. Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 68 +++++++++++++--------------- 1 file changed, 32 insertions(+), 36 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 9494403b9586..ef37d0ab6594 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -105,11 +105,6 @@ name: ori reg,reg,(ABS_ADDR(label))@l; \ addis reg,reg,(ABS_ADDR(label))@h -/* Exception register prefixes */ -#define EXC_HV_OR_STD 2 /* depends on HVMODE */ -#define EXC_HV 1 -#define EXC_STD 0 - /* * Branch to label using its 0xC000 address. This results in instruction * address suitable for MSR[IR]=0 or 1, which allows relocation to be turned @@ -128,6 +123,7 @@ name: */ #define IVEC .L_IVEC_\name\() #define IHSRR .L_IHSRR_\name\() +#define IHSRR_IF_HVMODE .L_IHSRR_IF_HVMODE_\name\() #define IAREA .L_IAREA_\name\() #define IVIRT .L_IVIRT_\name\() #define IISIDE .L_IISIDE_\name\() @@ -159,7 +155,10 @@ do_define_int n .error "IVEC not defined" .endif .ifndef IHSRR - IHSRR=EXC_STD + IHSRR=0 + .endif + .ifndef IHSRR_IF_HVMODE + IHSRR_IF_HVMODE=0 .endif .ifndef IAREA IAREA=PACA_EXGEN @@ -257,7 +256,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) ld r9,IAREA+EX_R9(r13) ld r10,IAREA+EX_R10(r13) /* HSRR variants have the 0x2 bit added to their trap number */ - .if IHSRR == EXC_HV_OR_STD + .if IHSRR_IF_HVMODE BEGIN_FTR_SECTION ori r12,r12,(IVEC + 0x2) FTR_SECTION_ELSE @@ -278,7 +277,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) ld r10,IAREA+EX_R10(r13) ld r11,IAREA+EX_R11(r13) ld r12,IAREA+EX_R12(r13) - .if IHSRR == EXC_HV_OR_STD + .if IHSRR_IF_HVMODE BEGIN_FTR_SECTION b kvmppc_skip_Hinterrupt FTR_SECTION_ELSE @@ -403,7 +402,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR) stw r10,IAREA+EX_DSISR(r13) .endif - .if IHSRR == EXC_HV_OR_STD + .if IHSRR_IF_HVMODE BEGIN_FTR_SECTION mfspr r11,SPRN_HSRR0 /* save HSRR0 */ mfspr r12,SPRN_HSRR1 /* and HSRR1 */ @@ -485,7 +484,7 @@ DEFINE_FIXED_SYMBOL(\name\()_common_virt) .abort "Bad maskable vector" .endif - .if IHSRR == EXC_HV_OR_STD + .if IHSRR_IF_HVMODE BEGIN_FTR_SECTION bne masked_Hinterrupt FTR_SECTION_ELSE @@ -618,12 +617,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR) * Restore all registers including H/SRR0/1 saved in a stack frame of a * standard exception. 
*/ -.macro EXCEPTION_RESTORE_REGS hsrr +.macro EXCEPTION_RESTORE_REGS hsrr=0 /* Move original SRR0 and SRR1 into the respective regs */ ld r9,_MSR(r1) - .if \hsrr == EXC_HV_OR_STD - .error "EXC_HV_OR_STD Not implemented for EXCEPTION_RESTORE_REGS" - .endif .if \hsrr mtspr SPRN_HSRR1,r9 .else @@ -898,7 +894,7 @@ EXC_COMMON_BEGIN(system_reset_common) ld r10,SOFTE(r1) stb r10,PACAIRQSOFTMASK(r13) - EXCEPTION_RESTORE_REGS EXC_STD + EXCEPTION_RESTORE_REGS RFI_TO_USER_OR_KERNEL GEN_KVM system_reset @@ -952,7 +948,7 @@ TRAMP_REAL_BEGIN(machine_check_fwnmi) lhz r12,PACA_IN_MCE(r13); \ subi r12,r12,1; \ sth r12,PACA_IN_MCE(r13); \ - EXCEPTION_RESTORE_REGS EXC_STD + EXCEPTION_RESTORE_REGS EXC_COMMON_BEGIN(machine_check_early_common) /* @@ -1321,7 +1317,7 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) INT_DEFINE_BEGIN(hardware_interrupt) IVEC=0x500 - IHSRR=EXC_HV_OR_STD + IHSRR_IF_HVMODE=1 IMASK=IRQS_DISABLED IKVM_REAL=1 IKVM_VIRT=1 @@ -1490,7 +1486,7 @@ EXC_COMMON_BEGIN(decrementer_common) INT_DEFINE_BEGIN(hdecrementer) IVEC=0x980 - IHSRR=EXC_HV + IHSRR=1 ISTACK=0 IRECONCILE=0 IKVM_REAL=1 @@ -1732,7 +1728,7 @@ EXC_COMMON_BEGIN(single_step_common) INT_DEFINE_BEGIN(h_data_storage) IVEC=0xe00 - IHSRR=EXC_HV + IHSRR=1 IDAR=1 IDSISR=1 IKVM_SKIP=1 @@ -1764,7 +1760,7 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_TYPE_RADIX) INT_DEFINE_BEGIN(h_instr_storage) IVEC=0xe20 - IHSRR=EXC_HV + IHSRR=1 IKVM_REAL=1 IKVM_VIRT=1 INT_DEFINE_END(h_instr_storage) @@ -1787,7 +1783,7 @@ EXC_COMMON_BEGIN(h_instr_storage_common) INT_DEFINE_BEGIN(emulation_assist) IVEC=0xe40 - IHSRR=EXC_HV + IHSRR=1 IKVM_REAL=1 IKVM_VIRT=1 INT_DEFINE_END(emulation_assist) @@ -1815,7 +1811,7 @@ EXC_COMMON_BEGIN(emulation_assist_common) */ INT_DEFINE_BEGIN(hmi_exception_early) IVEC=0xe60 - IHSRR=EXC_HV + IHSRR=1 IREALMODE_COMMON=1 ISTACK=0 IRECONCILE=0 @@ -1825,7 +1821,7 @@ INT_DEFINE_END(hmi_exception_early) INT_DEFINE_BEGIN(hmi_exception) IVEC=0xe60 - IHSRR=EXC_HV + IHSRR=1 IMASK=IRQS_DISABLED IKVM_REAL=1 INT_DEFINE_END(hmi_exception) @@ -1847,7 +1843,7 @@ EXC_COMMON_BEGIN(hmi_exception_early_common) cmpdi cr0,r3,0 bne 1f - EXCEPTION_RESTORE_REGS EXC_HV + EXCEPTION_RESTORE_REGS hsrr=1 HRFI_TO_USER_OR_KERNEL 1: @@ -1855,7 +1851,7 @@ EXC_COMMON_BEGIN(hmi_exception_early_common) * Go to virtual mode and pull the HMI event information from * firmware. 
*/ - EXCEPTION_RESTORE_REGS EXC_HV + EXCEPTION_RESTORE_REGS hsrr=1 GEN_INT_ENTRY hmi_exception, virt=0 GEN_KVM hmi_exception_early @@ -1874,7 +1870,7 @@ EXC_COMMON_BEGIN(hmi_exception_common) INT_DEFINE_BEGIN(h_doorbell) IVEC=0xe80 - IHSRR=EXC_HV + IHSRR=1 IMASK=IRQS_DISABLED IKVM_REAL=1 IKVM_VIRT=1 @@ -1903,7 +1899,7 @@ EXC_COMMON_BEGIN(h_doorbell_common) INT_DEFINE_BEGIN(h_virt_irq) IVEC=0xea0 - IHSRR=EXC_HV + IHSRR=1 IMASK=IRQS_DISABLED IKVM_REAL=1 IKVM_VIRT=1 @@ -2073,7 +2069,7 @@ EXC_COMMON_BEGIN(facility_unavailable_common) INT_DEFINE_BEGIN(h_facility_unavailable) IVEC=0xf80 - IHSRR=EXC_HV + IHSRR=1 IKVM_REAL=1 IKVM_VIRT=1 INT_DEFINE_END(h_facility_unavailable) @@ -2109,7 +2105,7 @@ EXC_VIRT_NONE(0x5100, 0x100) #ifdef CONFIG_CBE_RAS INT_DEFINE_BEGIN(cbe_system_error) IVEC=0x1200 - IHSRR=EXC_HV + IHSRR=1 IKVM_SKIP=1 IKVM_REAL=1 INT_DEFINE_END(cbe_system_error) @@ -2160,8 +2156,8 @@ EXC_VIRT_NONE(0x5400, 0x100) INT_DEFINE_BEGIN(denorm_exception) IVEC=0x1500 - IHSRR=EXC_HV - IBRANCH_TO_COMMON=0 + IHSRR=1 + IBRANCH_COMMON=0 IKVM_REAL=1 INT_DEFINE_END(denorm_exception) @@ -2269,7 +2265,7 @@ EXC_COMMON_BEGIN(denorm_exception_common) #ifdef CONFIG_CBE_RAS INT_DEFINE_BEGIN(cbe_maintenance) IVEC=0x1600 - IHSRR=EXC_HV + IHSRR=1 IKVM_SKIP=1 IKVM_REAL=1 INT_DEFINE_END(cbe_maintenance) @@ -2321,7 +2317,7 @@ EXC_COMMON_BEGIN(altivec_assist_common) #ifdef CONFIG_CBE_RAS INT_DEFINE_BEGIN(cbe_thermal) IVEC=0x1800 - IHSRR=EXC_HV + IHSRR=1 IKVM_SKIP=1 IKVM_REAL=1 INT_DEFINE_END(cbe_thermal) @@ -2384,7 +2380,7 @@ EXC_COMMON_BEGIN(soft_nmi_common) * - Else it is one of PACA_IRQ_MUST_HARD_MASK, so hard disable and return. * This is called with r10 containing the value to OR to the paca field. */ -.macro MASKED_INTERRUPT hsrr +.macro MASKED_INTERRUPT hsrr=0 .if \hsrr masked_Hinterrupt: .else @@ -2531,8 +2527,8 @@ TRAMP_REAL_BEGIN(hrfi_flush_fallback) hrfid USE_TEXT_SECTION() - MASKED_INTERRUPT EXC_STD - MASKED_INTERRUPT EXC_HV + MASKED_INTERRUPT + MASKED_INTERRUPT hsrr=1 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER kvmppc_skip_interrupt: From patchwork Tue Nov 12 16:52:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193743 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFd31ydmz9sPc for ; Wed, 13 Nov 2019 04:49:58 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFd15h03zF3wF for ; Wed, 13 Nov 2019 04:49:57 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNB1f2lzF1Hp for ; 
Wed, 13 Nov 2019 03:53:46 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id D491EB39E; Tue, 12 Nov 2019 16:53:42 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 19/33] powerpc/64s/exception: add more comments for interrupt handlers Date: Tue, 12 Nov 2019 17:52:17 +0100 Message-Id: <8553a076b57e8888422962b8be0dc01c1e5f38f2.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin A few of the non-standard handlers are left uncommented. Some more description could be added to some. Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 391 ++++++++++++++++++++++++--- 1 file changed, 353 insertions(+), 38 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index ef37d0ab6594..2f50587392aa 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -121,26 +121,26 @@ name: /* * Interrupt code generation macros */ -#define IVEC .L_IVEC_\name\() -#define IHSRR .L_IHSRR_\name\() -#define IHSRR_IF_HVMODE .L_IHSRR_IF_HVMODE_\name\() -#define IAREA .L_IAREA_\name\() -#define IVIRT .L_IVIRT_\name\() -#define IISIDE .L_IISIDE_\name\() -#define IDAR .L_IDAR_\name\() -#define IDSISR .L_IDSISR_\name\() -#define ISET_RI .L_ISET_RI_\name\() -#define IBRANCH_TO_COMMON .L_IBRANCH_TO_COMMON_\name\() -#define IREALMODE_COMMON .L_IREALMODE_COMMON_\name\() -#define IMASK .L_IMASK_\name\() -#define IKVM_SKIP .L_IKVM_SKIP_\name\() -#define IKVM_REAL .L_IKVM_REAL_\name\() +#define IVEC .L_IVEC_\name\() /* Interrupt vector address */ +#define IHSRR .L_IHSRR_\name\() /* Sets SRR or HSRR registers */ +#define IHSRR_IF_HVMODE .L_IHSRR_IF_HVMODE_\name\() /* HSRR if HV else SRR */ +#define IAREA .L_IAREA_\name\() /* PACA save area */ +#define IVIRT .L_IVIRT_\name\() /* Has virt mode entry point */ +#define IISIDE .L_IISIDE_\name\() /* Uses SRR0/1 not DAR/DSISR */ +#define IDAR .L_IDAR_\name\() /* Uses DAR (or SRR0) */ +#define IDSISR .L_IDSISR_\name\() /* Uses DSISR (or SRR1) */ +#define ISET_RI .L_ISET_RI_\name\() /* Run common code w/ MSR[RI]=1 */ +#define IBRANCH_TO_COMMON .L_IBRANCH_TO_COMMON_\name\() /* ENTRY branch to common */ +#define IREALMODE_COMMON .L_IREALMODE_COMMON_\name\() /* Common runs in realmode */ +#define IMASK .L_IMASK_\name\() /* IRQ soft-mask bit */ +#define IKVM_SKIP .L_IKVM_SKIP_\name\() /* Generate KVM skip 
handler */ +#define IKVM_REAL .L_IKVM_REAL_\name\() /* Real entry tests KVM */ #define __IKVM_REAL(name) .L_IKVM_REAL_ ## name -#define IKVM_VIRT .L_IKVM_VIRT_\name\() -#define ISTACK .L_ISTACK_\name\() +#define IKVM_VIRT .L_IKVM_VIRT_\name\() /* Virt entry tests KVM */ +#define ISTACK .L_ISTACK_\name\() /* Set regular kernel stack */ #define __ISTACK(name) .L_ISTACK_ ## name -#define IRECONCILE .L_IRECONCILE_\name\() -#define IKUAP .L_IKUAP_\name\() +#define IRECONCILE .L_IRECONCILE_\name\() /* Do RECONCILE_IRQ_STATE */ +#define IKUAP .L_IKUAP_\name\() /* Do KUAP lock */ #define INT_DEFINE_BEGIN(n) \ .macro int_define_ ## n name @@ -759,6 +759,39 @@ __start_interrupts: EXC_VIRT_NONE(0x4000, 0x100) +/** + * Interrupt 0x100 - System Reset Interrupt (SRESET aka NMI). + * This is a non-maskable, asynchronous interrupt always taken in real-mode. + * It is caused by: + * - Wake from power-saving state, on powernv. + * - An NMI from another CPU, triggered by firmware or hypercall. + * - As crash/debug signal injected from BMC, firmware or hypervisor. + * + * Handling: + * Power-save wakeup is the only performance critical path, so this is + * determined quickly as possible first. In this case volatile registers + * can be discarded and SPRs like CFAR don't need to be read. + * + * If not a powersave wakeup, then it's run as a regular interrupt, however + * it uses its own stack and PACA save area to preserve the regular kernel + * environment for debugging. + * + * This interrupt is not maskable, so triggering it when MSR[RI] is clear, + * or SCRATCH0 is in use, etc. may cause a crash. It's also not entirely + * correct to switch to virtual mode to run the regular interrupt handler + * because it might be interrupted when the MMU is in a bad state (e.g., SLB + * is clear). + * + * FWNMI: + * PAPR specifies a "fwnmi" facility which sends the sreset to a different + * entry point with a different register set up. Some hypervisors will + * send the sreset to 0x100 in the guest if it is not fwnmi capable. + * + * KVM: + * Unlike most SRR interrupts, this may be taken by the host while executing + * in a guest, so a KVM test is required. KVM will pull the CPU out of guest + * mode and then raise the sreset. + */ INT_DEFINE_BEGIN(system_reset) IVEC=0x100 IAREA=PACA_EXNMI @@ -834,6 +867,7 @@ TRAMP_REAL_BEGIN(system_reset_idle_wake) * Vectors for the FWNMI option. Share common code. */ TRAMP_REAL_BEGIN(system_reset_fwnmi) + /* XXX: fwnmi guest could run a nested/PR guest, so why no test? */ __IKVM_REAL(system_reset)=0 GEN_INT_ENTRY system_reset, virt=0 @@ -900,6 +934,44 @@ EXC_COMMON_BEGIN(system_reset_common) GEN_KVM system_reset +/** + * Interrupt 0x200 - Machine Check Interrupt (MCE). + * This is a non-maskable interrupt always taken in real-mode. It can be + * synchronous or asynchronous, caused by hardware or software, and it may be + * taken in a power-saving state. + * + * Handling: + * Similarly to system reset, this uses its own stack and PACA save area, + * the difference is re-entrancy is allowed on the machine check stack. + * + * machine_check_early is run in real mode, and carefully decodes the + * machine check and tries to handle it (e.g., flush the SLB if there was an + * error detected there), determines if it was recoverable and logs the + * event. + * + * Then, depending on the execution context when the interrupt is taken, there + * are 3 main actions: + * - Executing in kernel mode. 
The event is queued with irq_work, which means + * it is handled when it is next safe to do so (i.e., the kernel has enabled + * interrupts), which could be immediately when the interrupt returns. This + * avoids nasty issues like switching to virtual mode when the MMU is in a + * bad state, or when executing OPAL code. (SRESET is exposed to such issues, + * but it has different priorities). Check to see if the CPU was in power + * save, and return via the wake up code if it was. + * + * - Executing in user mode. machine_check_exception is run like a normal + * interrupt handler, which processes the data generated by the early handler. + * + * - Executing in guest mode. The interrupt is run with its KVM test, and + * branches to KVM to deal with. KVM may queue the event for the host + * to report later. + * + * This interrupt is not maskable, so if it triggers when MSR[RI] is clear, + * or SCRATCH0 is in use, it may cause a crash. + * + * KVM: + * See SRESET. + */ INT_DEFINE_BEGIN(machine_check_early) IVEC=0x200 IAREA=PACA_EXMC @@ -1159,19 +1231,28 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) /** - * 0x300 - Data Storage Interrupt (DSI) - * This interrupt is generated due to a data access which does not have a valid - * page table entry with permissions to allow the data access to be performed. - * DAWR matches also fault here, as do RC updates, and minor misc errors e.g., - * copy/paste, AMO, certain invalid CI accesses, etc. + * Interrupt 0x300 - Data Storage Interrupt (DSI). + * This is a synchronous interrupt generated due to a data access exception, + * e.g., a load orstore which does not have a valid page table entry with + * permissions. DAWR matches also fault here, as do RC updates, and minor misc + * errors e.g., copy/paste, AMO, certain invalid CI accesses, etc. + * + * Handling: + * - Hash MMU + * Go to do_hash_page first to see if the HPT can be filled from an entry in + * the Linux page table. Hash faults can hit in kernel mode in a fairly + * arbitrary state (e.g., interrupts disabled, locks held) when accessing + * "non-bolted" regions, e.g., vmalloc space. However these should always be + * backed by Linux page tables. * - * This interrupt is delivered to the guest (HV bit unchanged). + * If none is found, do a Linux page fault. Linux page faults can happen in + * kernel mode due to user copy operations of course. * - * Linux HPT responds by first attempting to refill the hash table from the - * Linux page table, then going to a full page fault if the Linux page table - * entry was insufficient. RPT goes straight to full page fault. + * - Radix MMU + * The hardware loads from the Linux page table directly, so a fault goes + * immediately to Linux page fault. * - * PR KVM ...? + * Conditions like DAWR match are handled on the way in to Linux page fault. */ INT_DEFINE_BEGIN(data_access) IVEC=0x300 @@ -1202,6 +1283,24 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) GEN_KVM data_access +/** + * Interrupt 0x380 - Data Segment Interrupt (DSLB). + * This is a synchronous interrupt in response to an MMU fault missing SLB + * entry for HPT, or an address outside RPT translation range. + * + * Handling: + * - HPT: + * This refills the SLB, or reports an access fault similarly to a bad page + * fault. When coming from user-mode, the SLB handler may access any kernel + * data, though it may itself take a DSLB. 
When coming from kernel mode, + * recursive faults must be avoided so access is restricted to the kernel + * image text/data, kernel stack, and any data allocated below + * ppc64_bolted_size (first segment). The kernel handler must avoid stomping + * on user-handler data structures. + * + * A dedicated save area EXSLB is used (XXX: but it actually need not be + * these days, we could use EXGEN). + */ INT_DEFINE_BEGIN(data_access_slb) IVEC=0x380 IAREA=PACA_EXSLB @@ -1244,6 +1343,15 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) GEN_KVM data_access_slb +/** + * Interrupt 0x400 - Instruction Storage Interrupt (ISI). + * This is a synchronous interrupt in response to an MMU fault due to an + * instruction fetch. + * + * Handling: + * Similar to DSI, though in response to fetch. The faulting address is found + * in SRR0 (rather than DAR), and status in SRR1 (rather than DSISR). + */ INT_DEFINE_BEGIN(instruction_access) IVEC=0x400 IISIDE=1 @@ -1273,6 +1381,15 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) GEN_KVM instruction_access +/** + * Interrupt 0x480 - Instruction Segment Interrupt (ISLB). + * This is a synchronous interrupt in response to an MMU fault due to an + * instruction fetch. + * + * Handling: + * Similar to DSLB, though in response to fetch. The faulting address is found + * in SRR0 (rather than DAR). + */ INT_DEFINE_BEGIN(instruction_access_slb) IVEC=0x480 IAREA=PACA_EXSLB @@ -1315,6 +1432,29 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) GEN_KVM instruction_access_slb +/** + * Interrupt 0x500 - External Interrupt. + * This is an asynchronous maskable interrupt in response to an "external + * exception" from the interrupt controller or hypervisor (e.g., device + * interrupt). It is maskable in hardware by clearing MSR[EE], and + * soft-maskable with IRQS_DISABLED mask (i.e., local_irq_disable()). + * + * When running in HV mode, Linux sets up the LPCR[LPES] bit such that + * interrupts are delivered with HSRR registers, guests use SRRs, which + * reqiures IHSRR_IF_HVMODE. + * + * On bare metal POWER9 and later, Linux sets the LPCR[HVICE] bit such that + * external interrupts are delivered as Hypervisor Virtualization Interrupts + * rather than External Interrupts. + * + * Handling: + * This calls into Linux IRQ handler. NVGPRs are not saved to reduce overhead, + * because registers at the time of the interrupt are not so important as it is + * asynchronous. + * + * If soft masked, the masked handler will note the pending interrupt for + * replay, and clear MSR[EE] in the interrupted context. + */ INT_DEFINE_BEGIN(hardware_interrupt) IVEC=0x500 IHSRR_IF_HVMODE=1 @@ -1340,6 +1480,10 @@ EXC_COMMON_BEGIN(hardware_interrupt_common) GEN_KVM hardware_interrupt +/** + * Interrupt 0x600 - Alignment Interrupt + * This is a synchronous interrupt in response to data alignment fault. + */ INT_DEFINE_BEGIN(alignment) IVEC=0x600 IDAR=1 @@ -1363,6 +1507,15 @@ EXC_COMMON_BEGIN(alignment_common) GEN_KVM alignment +/** + * Interrupt 0x700 - Program Interrupt (program check). + * This is a synchronous interrupt in response to various instruction faults: + * traps, privilege errors, TM errors, floating point exceptions. + * + * Handling: + * This interrupt may use the "emergency stack" in some cases when being taken + * from kernel context, which complicates handling. + */ INT_DEFINE_BEGIN(program_check) IVEC=0x700 IKVM_REAL=1 @@ -1416,6 +1569,15 @@ EXC_COMMON_BEGIN(program_check_common) GEN_KVM program_check +/* + * Interrupt 0x800 - Floating-Point Unavailable Interrupt. 
+ * This is a synchronous interrupt in response to executing an fp instruction + * with MSR[FP]=0. + * + * Handling: + * This will load FP registers and enable the FP bit if coming from userspace, + * otherwise report a bad kernel use of FP. + */ INT_DEFINE_BEGIN(fp_unavailable) IVEC=0x800 IRECONCILE=0 @@ -1461,6 +1623,23 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM) GEN_KVM fp_unavailable +/** + * Interrupt 0x900 - Decrementer Interrupt. + * This is an asynchronous interrupt in response to a decrementer exception + * (e.g., DEC has wrapped below zero). It is maskable in hardware by clearing + * MSR[EE], and soft-maskable with IRQS_DISABLED mask (i.e., + * local_irq_disable()). + * + * Handling: + * This calls into Linux timer handler. NVGPRs are not saved (see 0x500). + * + * If soft masked, the masked handler will note the pending interrupt for + * replay, and bump the decrementer to a high value, leaving MSR[EE] enabled + * in the interrupted context. + * If PPC_WATCHDOG is configured, the soft masked handler will actually set + * things back up to run soft_nmi_interrupt as a regular interrupt handler + * on the emergency stack. + */ INT_DEFINE_BEGIN(decrementer) IVEC=0x900 IMASK=IRQS_DISABLED @@ -1484,6 +1663,16 @@ EXC_COMMON_BEGIN(decrementer_common) GEN_KVM decrementer +/** + * Interrupt 0x980 - Hypervisor Decrementer Interrupt. + * This is an asynchronous interrupt, similar to 0x900 but for the HDEC + * register. + * + * Handling: + * Linux does not use this outside KVM where it's used to keep a host timer + * while the guest is given control of DEC. It should normally be caught by + * the KVM test and routed there. + */ INT_DEFINE_BEGIN(hdecrementer) IVEC=0x980 IHSRR=1 @@ -1522,6 +1711,20 @@ EXC_COMMON_BEGIN(hdecrementer_common) GEN_KVM hdecrementer +/** + * Interrupt 0xa00 - Directed Privileged Doorbell Interrupt. + * This is an asynchronous interrupt in response to a msgsndp doorbell. + * It is maskable in hardware by clearing MSR[EE], and soft-maskable with + * IRQS_DISABLED mask (i.e., local_irq_disable()). + * + * Handling: + * Guests may use this for IPIs between threads in a core if the + * hypervisor supports it. NVGPRS are not saved (see 0x500). + * + * If soft masked, the masked handler will note the pending interrupt for + * replay, leaving MSR[EE] enabled in the interrupted context because the + * doorbells are edge triggered. + */ INT_DEFINE_BEGIN(doorbell_super) IVEC=0xa00 IMASK=IRQS_DISABLED @@ -1552,16 +1755,20 @@ EXC_COMMON_BEGIN(doorbell_super_common) EXC_REAL_NONE(0xb00, 0x100) EXC_VIRT_NONE(0x4b00, 0x100) -/* - * system call / hypercall (0xc00, 0x4c00) - * - * The system call exception is invoked with "sc 0" and does not alter HV bit. - * - * The hypercall is invoked with "sc 1" and sets HV=1. +/** + * Interrupt 0xc00 - System Call Interrupt (syscall, hcall). + * This is a synchronous interrupt invoked with the "sc" instruction. The + * system call is invoked with "sc 0" and does not alter the HV bit, so it + * is directed to the currently running OS. The hypercall is invoked with + * "sc 1" and it sets HV=1, so it elevates to hypervisor. * * In HPT, sc 1 always goes to 0xc00 real mode. In RADIX, sc 1 can go to * 0x4c00 virtual mode. * + * Handling: + * If the KVM test fires then it was due to a hypercall and is accordingly + * routed to KVM. Otherwise this executes a normal Linux system call. 
+ * * Call convention: * * syscall register convention is in Documentation/powerpc/syscall64-abi.rst @@ -1705,6 +1912,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) #endif +/** + * Interrupt 0xd00 - Trace Interrupt. + * This is a synchronous interrupt in response to instruction step or + * breakpoint faults. + */ INT_DEFINE_BEGIN(single_step) IVEC=0xd00 IKVM_REAL=1 @@ -1726,6 +1938,18 @@ EXC_COMMON_BEGIN(single_step_common) GEN_KVM single_step +/** + * Interrupt 0xe00 - Hypervisor Data Storage Interrupt (HDSI). + * This is a synchronous interrupt in response to an MMU fault caused by a + * guest data access. + * + * Handling: + * This should always get routed to KVM. In radix MMU mode, this is caused + * by a guest nested radix access that can't be performed due to the + * partition scope page table. In hash mode, this can be caused by guests + * running with translation disabled (virtual real mode) or with VPM enabled. + * KVM will update the page table structures or disallow the access. + */ INT_DEFINE_BEGIN(h_data_storage) IVEC=0xe00 IHSRR=1 @@ -1758,6 +1982,11 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_TYPE_RADIX) GEN_KVM h_data_storage +/** + * Interrupt 0xe20 - Hypervisor Instruction Storage Interrupt (HISI). + * This is a synchronous interrupt in response to an MMU fault caused by a + * guest instruction fetch, similar to HDSI. + */ INT_DEFINE_BEGIN(h_instr_storage) IVEC=0xe20 IHSRR=1 @@ -1781,6 +2010,9 @@ EXC_COMMON_BEGIN(h_instr_storage_common) GEN_KVM h_instr_storage +/** + * Interrupt 0xe40 - Hypervisor Emulation Assistance Interrupt. + */ INT_DEFINE_BEGIN(emulation_assist) IVEC=0xe40 IHSRR=1 @@ -1804,10 +2036,29 @@ EXC_COMMON_BEGIN(emulation_assist_common) GEN_KVM emulation_assist -/* - * hmi_exception trampoline is a special case. It jumps to hmi_exception_early - * first, and then eventaully from there to the trampoline to get into virtual - * mode. +/** + * Interrupt 0xe60 - Hypervisor Maintenance Interrupt (HMI). + * This is an asynchronous interrupt caused by a Hypervisor Maintenance + * Exception. It is always taken in real mode but uses HSRR registers + * unlike SRESET and MCE. + * + * It is maskable in hardware by clearing MSR[EE], and partially soft-maskable + * with IRQS_DISABLED mask (i.e., local_irq_disable()). + * + * Handling: + * This is a special case, this is handled similarly to machine checks, with an + * initial real mode handler that is not soft-masked, which attempts to fix the + * problem. Then a regular handler which is soft-maskable and reports the + * problem. + * + * The emergency stack is used for the early real mode handler. + * + * XXX: unclear why MCE and HMI schemes could not be made common, e.g., + * either use soft-masking for the MCE, or use irq_work for the HMI. + * + * KVM: + * Unlike MCE, this calls into KVM without calling the real mode handler + * first. */ INT_DEFINE_BEGIN(hmi_exception_early) IVEC=0xe60 @@ -1868,6 +2119,11 @@ EXC_COMMON_BEGIN(hmi_exception_common) GEN_KVM hmi_exception +/** + * Interrupt 0xe80 - Directed Hypervisor Doorbell Interrupt. + * This is an asynchronous interrupt in response to a msgsnd doorbell. + * Similar to the 0xa00 doorbell but for host rather than guest. + */ INT_DEFINE_BEGIN(h_doorbell) IVEC=0xe80 IHSRR=1 @@ -1897,6 +2153,11 @@ EXC_COMMON_BEGIN(h_doorbell_common) GEN_KVM h_doorbell +/** + * Interrupt 0xea0 - Hypervisor Virtualization Interrupt. + * This is an asynchronous interrupt in response to an "external exception". + * Similar to 0x500 but for host only. 
+ */ INT_DEFINE_BEGIN(h_virt_irq) IVEC=0xea0 IHSRR=1 @@ -1928,6 +2189,22 @@ EXC_REAL_NONE(0xee0, 0x20) EXC_VIRT_NONE(0x4ee0, 0x20) +/* + * Interrupt 0xf00 - Performance Monitor Interrupt (PMI, PMU). + * This is an asynchronous interrupt in response to a PMU exception. + * It is maskable in hardware by clearing MSR[EE], and soft-maskable with + * IRQS_PMI_DISABLED mask (NOTE: NOT local_irq_disable()). + * + * Handling: + * This calls into the perf subsystem. + * + * Like the watchdog soft-nmi, it appears an NMI interrupt to Linux, in that it + * runs under local_irq_disable. However it may be soft-masked in + * powerpc-specific code. + * + * If soft masked, the masked handler will note the pending interrupt for + * replay, and clear MSR[EE] in the interrupted context. + */ INT_DEFINE_BEGIN(performance_monitor) IVEC=0xf00 IMASK=IRQS_PMI_DISABLED @@ -1951,6 +2228,12 @@ EXC_COMMON_BEGIN(performance_monitor_common) GEN_KVM performance_monitor +/** + * Interrupt 0xf20 - Vector Unavailable Interrupt. + * This is a synchronous interrupt in response to + * executing a vector (or altivec) instruction with MSR[VEC]=0. + * Similar to FP unavailable. + */ INT_DEFINE_BEGIN(altivec_unavailable) IVEC=0xf20 IRECONCILE=0 @@ -1999,6 +2282,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) GEN_KVM altivec_unavailable +/** + * Interrupt 0xf40 - VSX Unavailable Interrupt. + * This is a synchronous interrupt in response to + * executing a VSX instruction with MSR[VSX]=0. + * Similar to FP unavailable. + */ INT_DEFINE_BEGIN(vsx_unavailable) IVEC=0xf40 IRECONCILE=0 @@ -2046,6 +2335,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX) GEN_KVM vsx_unavailable +/** + * Interrupt 0xf60 - Facility Unavailable Interrupt. + * This is a synchronous interrupt in response to + * executing an instruction without access to the facility that can be + * resolved by the OS (e.g., FSCR, MSR). + * Similar to FP unavailable. + */ INT_DEFINE_BEGIN(facility_unavailable) IVEC=0xf60 IKVM_REAL=1 @@ -2067,6 +2363,13 @@ EXC_COMMON_BEGIN(facility_unavailable_common) GEN_KVM facility_unavailable +/** + * Interrupt 0xf60 - Hypervisor Facility Unavailable Interrupt. + * This is a synchronous interrupt in response to + * executing an instruction without access to the facility that can only + * be resolved in HV mode (e.g., HFSCR). + * Similar to FP unavailable. + */ INT_DEFINE_BEGIN(h_facility_unavailable) IVEC=0xf80 IHSRR=1 @@ -2154,6 +2457,18 @@ EXC_COMMON_BEGIN(instruction_breakpoint_common) EXC_REAL_NONE(0x1400, 0x100) EXC_VIRT_NONE(0x5400, 0x100) +/** + * Interrupt 0x1500 - Soft Patch Interrupt + * + * Handling: + * This is an implementation specific interrupt which can be used for a + * range of exceptions. + * + * This interrupt handler is unique in that it runs the denormal assist + * code even for guests (and even in guest context) without going to KVM, + * for speed. POWER9 does not raise denorm exceptions, so this special case + * could be phased out in future to reduce special cases. 
+ */ INT_DEFINE_BEGIN(denorm_exception) IVEC=0x1500 IHSRR=1 From patchwork Tue Nov 12 16:52:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193746 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFgn1gHTz9sQp for ; Wed, 13 Nov 2019 04:52:21 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFgm2tYrzF5Rx for ; Wed, 13 Nov 2019 04:52:20 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNC6GdHzF33T for ; Wed, 13 Nov 2019 03:53:47 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 57F59B167; Tue, 12 Nov 2019 16:53:44 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 20/33] powerpc/64s/exception: only test KVM in SRR interrupts when PR KVM is supported Date: Tue, 12 Nov 2019 17:52:18 +0100 Message-Id: <3fabf68dbaaffed3a3737ab61a79f8f2b47c5ab1.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin Apart from SRESET, MCE, and syscall (hcall variant), the SRR type interrupts are not escalated to hypervisor mode, so delivered to the OS. When running PR KVM, the OS is the hypervisor, and the guest runs with MSR[PR]=1, so these interrupts must test if a guest was running when interrupted. 
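To make the rule concrete, here is a purely illustrative C model of when a real-mode KVM test is needed; every name in the sketch is invented for illustration and none of them are kernel symbols, it merely restates the delivery rules described in this message.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative model only: all names here are invented for this sketch. */
enum save_regs { USES_SRR, USES_HSRR };

static bool needs_real_mode_kvm_test(enum save_regs regs, bool escalates_to_hv,
                                     bool pr_kvm_possible)
{
	/*
	 * HSRR interrupts, plus SRESET/MCE and "sc 1" hypercalls, are taken
	 * with MSR[HV]=1, so a guest may have been running: always test.
	 */
	if (regs == USES_HSRR || escalates_to_hv)
		return true;

	/*
	 * Plain SRR interrupts are delivered straight to the guest OS under
	 * HV and nested HV KVM; only a PR guest (the host keeps MSR[HV]=0
	 * and takes every interrupt on the guest's behalf) can have been
	 * running when one arrives.
	 */
	return pr_kvm_possible;
}

int main(void)
{
	printf("decrementer, PR KVM possible: %d\n",
	       needs_real_mode_kvm_test(USES_SRR, false, true));
	printf("decrementer, no PR KVM      : %d\n",
	       needs_real_mode_kvm_test(USES_SRR, false, false));
	printf("hdecrementer (HSRR)         : %d\n",
	       needs_real_mode_kvm_test(USES_HSRR, false, true));
	return 0;
}

This is the distinction the patch encodes by wrapping the IKVM_REAL/IKVM_SKIP settings of the SRR-type interrupts in #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE below.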
These tests are required at the real-mode entry points because the PR KVM host runs with LPCR[AIL]=0. In HV KVM and nested HV KVM, the guest always receives these interrupts, so there is no need for the host to make this test. So remove the tests if PR KVM is not configured. Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 65 ++++++++++++++++++++++++++-- 1 file changed, 62 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 2f50587392aa..38bc66b95516 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -214,9 +214,36 @@ do_define_int n #ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE /* - * If hv is possible, interrupts come into to the hv version - * of the kvmppc_interrupt code, which then jumps to the PR handler, - * kvmppc_interrupt_pr, if the guest is a PR guest. + * All interrupts which set HSRR registers, as well as SRESET and MCE and + * syscall when invoked with "sc 1" switch to MSR[HV]=1 (HVMODE) to be taken, + * so they all generally need to test whether they were taken in guest context. + * + * Note: SRESET and MCE may also be sent to the guest by the hypervisor, and be + * taken with MSR[HV]=0. + * + * Interrupts which set SRR registers (with the above exceptions) do not + * elevate to MSR[HV]=1 mode, though most can be taken when running with + * MSR[HV]=1 (e.g., bare metal kernel and userspace). So these interrupts do + * not need to test whether a guest is running because they get delivered to + * the guest directly, including nested HV KVM guests. + * + * The exception is PR KVM, where the guest runs with MSR[PR]=1 and the host + * runs with MSR[HV]=0, so the host takes all interrupts on behalf of the + * guest. PR KVM runs with LPCR[AIL]=0 which causes interrupts to always be + * delivered to the real-mode entry point, therefore such interrupts only test + * KVM in their real mode handlers, and only when PR KVM is possible. + * + * Interrupts that are taken in MSR[HV]=0 and escalate to MSR[HV]=1 are always + * delivered in real-mode when the MMU is in hash mode because the MMU + * registers are not set appropriately to translate host addresses. In nested + * radix mode these can be delivered in virt-mode as the host translations are + * used implicitly (see: effective LPID, effective PID). + */ + +/* + * If an interrupt is taken while a guest is running, it is immediately routed + * to KVM to handle. If both HV and PR KVM are possible, KVM interrupts go first + * to kvmppc_interrupt_hv, which handles the PR guest case.
*/ #define kvmppc_interrupt kvmppc_interrupt_hv #else @@ -1258,8 +1285,10 @@ INT_DEFINE_BEGIN(data_access) IVEC=0x300 IDAR=1 IDSISR=1 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_SKIP=1 IKVM_REAL=1 +#endif INT_DEFINE_END(data_access) EXC_REAL_BEGIN(data_access, 0x300, 0x80) @@ -1306,8 +1335,10 @@ INT_DEFINE_BEGIN(data_access_slb) IAREA=PACA_EXSLB IRECONCILE=0 IDAR=1 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_SKIP=1 IKVM_REAL=1 +#endif INT_DEFINE_END(data_access_slb) EXC_REAL_BEGIN(data_access_slb, 0x380, 0x80) @@ -1357,7 +1388,9 @@ INT_DEFINE_BEGIN(instruction_access) IISIDE=1 IDAR=1 IDSISR=1 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(instruction_access) EXC_REAL_BEGIN(instruction_access, 0x400, 0x80) @@ -1396,7 +1429,9 @@ INT_DEFINE_BEGIN(instruction_access_slb) IRECONCILE=0 IISIDE=1 IDAR=1 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(instruction_access_slb) EXC_REAL_BEGIN(instruction_access_slb, 0x480, 0x80) @@ -1488,7 +1523,9 @@ INT_DEFINE_BEGIN(alignment) IVEC=0x600 IDAR=1 IDSISR=1 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(alignment) EXC_REAL_BEGIN(alignment, 0x600, 0x100) @@ -1518,7 +1555,9 @@ EXC_COMMON_BEGIN(alignment_common) */ INT_DEFINE_BEGIN(program_check) IVEC=0x700 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(program_check) EXC_REAL_BEGIN(program_check, 0x700, 0x100) @@ -1581,7 +1620,9 @@ EXC_COMMON_BEGIN(program_check_common) INT_DEFINE_BEGIN(fp_unavailable) IVEC=0x800 IRECONCILE=0 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(fp_unavailable) EXC_REAL_BEGIN(fp_unavailable, 0x800, 0x100) @@ -1643,7 +1684,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM) INT_DEFINE_BEGIN(decrementer) IVEC=0x900 IMASK=IRQS_DISABLED +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(decrementer) EXC_REAL_BEGIN(decrementer, 0x900, 0x80) @@ -1728,7 +1771,9 @@ EXC_COMMON_BEGIN(hdecrementer_common) INT_DEFINE_BEGIN(doorbell_super) IVEC=0xa00 IMASK=IRQS_DISABLED +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(doorbell_super) EXC_REAL_BEGIN(doorbell_super, 0xa00, 0x100) @@ -1919,7 +1964,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) */ INT_DEFINE_BEGIN(single_step) IVEC=0xd00 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(single_step) EXC_REAL_BEGIN(single_step, 0xd00, 0x100) @@ -2208,7 +2255,9 @@ EXC_VIRT_NONE(0x4ee0, 0x20) INT_DEFINE_BEGIN(performance_monitor) IVEC=0xf00 IMASK=IRQS_PMI_DISABLED +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(performance_monitor) EXC_REAL_BEGIN(performance_monitor, 0xf00, 0x20) @@ -2237,7 +2286,9 @@ EXC_COMMON_BEGIN(performance_monitor_common) INT_DEFINE_BEGIN(altivec_unavailable) IVEC=0xf20 IRECONCILE=0 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(altivec_unavailable) EXC_REAL_BEGIN(altivec_unavailable, 0xf20, 0x20) @@ -2291,7 +2342,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) INT_DEFINE_BEGIN(vsx_unavailable) IVEC=0xf40 IRECONCILE=0 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(vsx_unavailable) EXC_REAL_BEGIN(vsx_unavailable, 0xf40, 0x20) @@ -2344,7 +2397,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX) */ INT_DEFINE_BEGIN(facility_unavailable) IVEC=0xf60 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(facility_unavailable) EXC_REAL_BEGIN(facility_unavailable, 0xf60, 0x20) @@ -2434,8 +2489,10 @@ EXC_VIRT_NONE(0x5200, 0x100) INT_DEFINE_BEGIN(instruction_breakpoint) IVEC=0x1300 
+#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_SKIP=1 IKVM_REAL=1 +#endif INT_DEFINE_END(instruction_breakpoint) EXC_REAL_BEGIN(instruction_breakpoint, 0x1300, 0x100) @@ -2606,7 +2663,9 @@ EXC_VIRT_NONE(0x5600, 0x100) INT_DEFINE_BEGIN(altivec_assist) IVEC=0x1700 +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE IKVM_REAL=1 +#endif INT_DEFINE_END(altivec_assist) EXC_REAL_BEGIN(altivec_assist, 0x1700, 0x100) From patchwork Tue Nov 12 16:52:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193750 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFkD2gbWz9sPh for ; Wed, 13 Nov 2019 04:54:28 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFkC0GHYzF3m7 for ; Wed, 13 Nov 2019 04:54:27 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNF0PpWzF33t for ; Wed, 13 Nov 2019 03:53:49 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id DA4ECB13D; Tue, 12 Nov 2019 16:53:45 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 21/33] powerpc/64s/exception: soft nmi interrupt should not use ret_from_except Date: Tue, 12 Nov 2019 17:52:19 +0100 Message-Id: <85d3a8b3062501f545d16d4caf1c4925ce2fe618.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. 
Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin The soft nmi handler does not reconcile interrupt state, so it should not return via the normal ret_from_except path. Return like other NMIs, using the EXCEPTION_RESTORE_REGS macro. This becomes important when the scv interrupt is implemented, which must handle soft-masked interrupts that have r13 set to something other than the PACA -- returning to kernel in this case must restore r13. Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 38bc66b95516..af1264cd005f 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -2740,7 +2740,11 @@ EXC_COMMON_BEGIN(soft_nmi_common) bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl soft_nmi_interrupt - b ret_from_except + /* Clear MSR_RI before setting SRR0 and SRR1. */ + li r9,0 + mtmsrd r9,1 + EXCEPTION_RESTORE_REGS hsrr=0 + RFI_TO_KERNEL #endif /* CONFIG_PPC_WATCHDOG */ From patchwork Tue Nov 12 16:52:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193753 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFnr2r89z9sNx for ; Wed, 13 Nov 2019 04:57:36 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFnr0RPSzF5TP for ; Wed, 13 Nov 2019 04:57:36 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNH48j8zF1wq for ; Wed, 13 Nov 2019 03:53:51 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 66187B286; Tue, 12 Nov 2019 16:53:47 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 22/33] powerpc/64: system call remove non-volatile GPR save optimisation Date: Tue, 12 Nov 2019 17:52:20 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , 
Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin powerpc has an optimisation where interrupts avoid saving the non-volatile (or callee saved) registers to the interrupt stack frame if they are not required. Two problems with this are that an interrupt does not always know whether it will need non-volatiles; and if it does need them, they can only be saved from the entry-scoped asm code (because we don't control what the C compiler does with these registers). system calls are the most difficult: some system calls always require all registers (e.g., fork, to copy regs into the child). Sometimes registers are only required under certain conditions (e.g., tracing, signal delivery). These cases require ugly logic in the call chains (e.g., ppc_fork), and require a lot of logic to be implemented in asm. So remove the optimisation for system calls, and always save NVGPRs on entry. Modern high performance CPUs are not so sensitive, because the stores are dense in cache and can be hidden by other expensive work in the syscall path -- the null syscall selftests benchmark on POWER9 is not slowed (124.40ns before and 123.64ns after, i.e., within the noise). Other interrupts retain the NVGPR optimisation for now. Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/entry_64.S | 72 +++++------------------- arch/powerpc/kernel/syscalls/syscall.tbl | 22 +++++--- 2 files changed, 28 insertions(+), 66 deletions(-) diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index 6467bdab8d40..5a3e0b5c9ad1 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S @@ -98,13 +98,14 @@ END_BTB_FLUSH_SECTION std r11,_XER(r1) std r11,_CTR(r1) std r9,GPR13(r1) + SAVE_NVGPRS(r1) mflr r10 /* * This clears CR0.SO (bit 28), which is the error indication on * return from this system call. */ rldimi r2,r11,28,(63-28) - li r11,0xc01 + li r11,0xc00 std r10,_LINK(r1) std r11,_TRAP(r1) std r3,ORIG_GPR3(r1) @@ -323,7 +324,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) /* Traced system call support */ .Lsyscall_dotrace: - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl do_syscall_trace_enter @@ -408,7 +408,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) mtmsrd r10,1 #endif /* CONFIG_PPC_BOOK3E */ - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl do_syscall_trace_leave b ret_from_except @@ -442,62 +441,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) _ASM_NOKPROBE_SYMBOL(system_call_common); _ASM_NOKPROBE_SYMBOL(system_call_exit); -/* Save non-volatile GPRs, if not already saved. */ -_GLOBAL(save_nvgprs) - ld r11,_TRAP(r1) - andi. r0,r11,1 - beqlr- - SAVE_NVGPRS(r1) - clrrdi r0,r11,1 - std r0,_TRAP(r1) - blr -_ASM_NOKPROBE_SYMBOL(save_nvgprs); - - -/* - * The sigsuspend and rt_sigsuspend system calls can call do_signal - * and thus put the process into the stopped state where we might - * want to examine its user state with ptrace. 
Therefore we need - * to save all the nonvolatile registers (r14 - r31) before calling - * the C code. Similarly, fork, vfork and clone need the full - * register state on the stack so that it can be copied to the child. - */ - -_GLOBAL(ppc_fork) - bl save_nvgprs - bl sys_fork - b .Lsyscall_exit - -_GLOBAL(ppc_vfork) - bl save_nvgprs - bl sys_vfork - b .Lsyscall_exit - -_GLOBAL(ppc_clone) - bl save_nvgprs - bl sys_clone - b .Lsyscall_exit - -_GLOBAL(ppc_clone3) - bl save_nvgprs - bl sys_clone3 - b .Lsyscall_exit - -_GLOBAL(ppc32_swapcontext) - bl save_nvgprs - bl compat_sys_swapcontext - b .Lsyscall_exit - -_GLOBAL(ppc64_swapcontext) - bl save_nvgprs - bl sys_swapcontext - b .Lsyscall_exit - -_GLOBAL(ppc_switch_endian) - bl save_nvgprs - bl sys_switch_endian - b .Lsyscall_exit - _GLOBAL(ret_from_fork) bl schedule_tail REST_NVGPRS(r1) @@ -516,6 +459,17 @@ _GLOBAL(ret_from_kernel_thread) li r3,0 b .Lsyscall_exit +/* Save non-volatile GPRs, if not already saved. */ +_GLOBAL(save_nvgprs) + ld r11,_TRAP(r1) + andi. r0,r11,1 + beqlr- + SAVE_NVGPRS(r1) + clrrdi r0,r11,1 + std r0,_TRAP(r1) + blr +_ASM_NOKPROBE_SYMBOL(save_nvgprs); + #ifdef CONFIG_PPC_BOOK3S_64 #define FLUSH_COUNT_CACHE \ diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl index 43f736ed47f2..d899bcb5343e 100644 --- a/arch/powerpc/kernel/syscalls/syscall.tbl +++ b/arch/powerpc/kernel/syscalls/syscall.tbl @@ -9,7 +9,9 @@ # 0 nospu restart_syscall sys_restart_syscall 1 nospu exit sys_exit -2 nospu fork ppc_fork +2 32 fork ppc_fork sys_fork +2 64 fork sys_fork +2 spu fork sys_ni_syscall 3 common read sys_read 4 common write sys_write 5 common open sys_open compat_sys_open @@ -158,7 +160,9 @@ 119 32 sigreturn sys_sigreturn compat_sys_sigreturn 119 64 sigreturn sys_ni_syscall 119 spu sigreturn sys_ni_syscall -120 nospu clone ppc_clone +120 32 clone ppc_clone sys_clone +120 64 clone sys_clone +120 spu clone sys_ni_syscall 121 common setdomainname sys_setdomainname 122 common uname sys_newuname 123 common modify_ldt sys_ni_syscall @@ -240,7 +244,9 @@ 186 spu sendfile sys_sendfile64 187 common getpmsg sys_ni_syscall 188 common putpmsg sys_ni_syscall -189 nospu vfork ppc_vfork +189 32 vfork ppc_vfork sys_vfork +189 64 vfork sys_vfork +189 spu vfork sys_ni_syscall 190 common ugetrlimit sys_getrlimit compat_sys_getrlimit 191 common readahead sys_readahead compat_sys_readahead 192 32 mmap2 sys_mmap2 compat_sys_mmap2 @@ -316,8 +322,8 @@ 248 32 clock_nanosleep sys_clock_nanosleep_time32 248 64 clock_nanosleep sys_clock_nanosleep 248 spu clock_nanosleep sys_clock_nanosleep -249 32 swapcontext ppc_swapcontext ppc32_swapcontext -249 64 swapcontext ppc64_swapcontext +249 32 swapcontext ppc_swapcontext compat_sys_swapcontext +249 64 swapcontext sys_swapcontext 249 spu swapcontext sys_ni_syscall 250 common tgkill sys_tgkill 251 32 utimes sys_utimes_time32 @@ -456,7 +462,7 @@ 361 common bpf sys_bpf 362 nospu execveat sys_execveat compat_sys_execveat 363 32 switch_endian sys_ni_syscall -363 64 switch_endian ppc_switch_endian +363 64 switch_endian sys_switch_endian 363 spu switch_endian sys_ni_syscall 364 common userfaultfd sys_userfaultfd 365 common membarrier sys_membarrier @@ -516,4 +522,6 @@ 432 common fsmount sys_fsmount 433 common fspick sys_fspick 434 common pidfd_open sys_pidfd_open -435 nospu clone3 ppc_clone3 +435 32 clone3 ppc_clone3 sys_clone3 +435 64 clone3 sys_clone3 +435 spu clone3 sys_ni_syscall From patchwork Tue Nov 12 16:52:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michal Suchánek
X-Patchwork-Id: 1193754
From: Michal Suchanek
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 23/33] powerpc/64: system call implement the bulk of the logic in C
Date: Tue, 12 Nov 2019 17:52:21 +0100
Message-Id: <0f77785b20de5f410eb5cdf862b44d280375e08e.1573576649.git.msuchanek@suse.de>

From: Nicholas Piggin

System call entry and particularly exit code is beyond the limit of what is reasonable to implement in asm. This conversion moves all conditional branches out of the asm code, except for the case that all GPRs should be restored at exit. Null syscall test is about 5% faster after this patch, because the exit work is handled under local_irq_disable, and the hard mask and pending interrupt replay is handled after that, which avoids games with MSR.
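To make the shape of this conversion easier to follow before reading the diff, the sketch below is a minimal, self-contained user-space model of the pattern the patch introduces: the asm stub only saves registers and calls a C dispatcher, and a C exit-prepare routine decides how much the asm tail has to restore. Every name prefixed toy_ is invented for the illustration and does not exist in the kernel; the real system_call_exception()/syscall_exit_prepare() additionally handle tracing, compat argument truncation, soft-mask accounting and CR0.SO, which are omitted here.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

typedef long (*syscall_fn)(long, long, long, long, long, long);

/* Two toy "system calls": one succeeds, one fails with -EINVAL. */
static long toy_sys_getpid(long a, long b, long c, long d, long e, long f)
{
	(void)a; (void)b; (void)c; (void)d; (void)e; (void)f;
	return 1234;
}

static long toy_sys_bad(long a, long b, long c, long d, long e, long f)
{
	(void)a; (void)b; (void)c; (void)d; (void)e; (void)f;
	return -EINVAL;
}

/* Toy syscall table; the index plays the role of r0 on powerpc. */
static const syscall_fn toy_sys_call_table[] = { toy_sys_getpid, toy_sys_bad };
#define TOY_NR_SYSCALLS \
	(sizeof(toy_sys_call_table) / sizeof(toy_sys_call_table[0]))

/*
 * Stand-in for system_call_exception(): the asm stub only saves registers
 * and calls into C; the bounds check and the table dispatch live here.
 */
static long toy_system_call_exception(unsigned long r0, long r3, long r4,
				      long r5, long r6, long r7, long r8)
{
	if (r0 >= TOY_NR_SYSCALLS)
		return -ENOSYS;
	return toy_sys_call_table[r0](r3, r4, r5, r6, r7, r8);
}

/*
 * Stand-in for syscall_exit_prepare(): turn a -errno return into the
 * "positive errno plus error flag" convention (the real code sets CR0.SO)
 * and tell the asm tail whether it must restore the full register set.
 */
static long toy_syscall_exit_prepare(long ret, bool *restore_all)
{
	*restore_all = false;	/* only needed for signals and the like */
	if ((unsigned long)ret >= (unsigned long)-4095L) {
		printf("  (error path: errno %ld, SO bit would be set)\n", -ret);
		ret = -ret;
	}
	return ret;
}

int main(void)
{
	bool restore_all;
	long r;

	r = toy_system_call_exception(0, 0, 0, 0, 0, 0, 0);
	printf("syscall 0 -> %ld\n", toy_syscall_exit_prepare(r, &restore_all));

	r = toy_system_call_exception(1, 0, 0, 0, 0, 0, 0);
	printf("syscall 1 -> %ld\n", toy_syscall_exit_prepare(r, &restore_all));
	return 0;
}

The design point this models is visible in the diff below: system_call_common is reduced to register save plus two C calls, and the only conditional branch left on the asm side is whether syscall_restore_regs has to reload CTR, XER and the full GPR set.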
Signed-off-by: Nicholas Piggin [ms: add endian conversion for dtl_idx] Signed-off-by: Michal Suchanek v3: - Fix !KUAP build [mpe] - Fix BookE build/boot [mpe] - Don't trace irqs with MSR[RI]=0 - Don't allow syscall_exit_prepare to be ftraced, because function graph tracing which traces exits barfs after the IRQ state is prepared for kernel exit. - Fix BE syscall table to use normal function descriptors now that they are called from C. - Comment syscall_exit_prepare. --- arch/powerpc/include/asm/asm-prototypes.h | 11 - .../powerpc/include/asm/book3s/64/kup-radix.h | 14 +- arch/powerpc/include/asm/cputime.h | 24 ++ arch/powerpc/include/asm/hw_irq.h | 4 + arch/powerpc/include/asm/ptrace.h | 3 + arch/powerpc/include/asm/signal.h | 3 + arch/powerpc/include/asm/switch_to.h | 5 + arch/powerpc/include/asm/time.h | 3 + arch/powerpc/kernel/Makefile | 3 +- arch/powerpc/kernel/entry_64.S | 337 +++--------------- arch/powerpc/kernel/signal.h | 2 - arch/powerpc/kernel/syscall_64.c | 195 ++++++++++ arch/powerpc/kernel/systbl.S | 9 +- 13 files changed, 300 insertions(+), 313 deletions(-) create mode 100644 arch/powerpc/kernel/syscall_64.c diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h index 8561498e653c..399ca63196e4 100644 --- a/arch/powerpc/include/asm/asm-prototypes.h +++ b/arch/powerpc/include/asm/asm-prototypes.h @@ -103,14 +103,6 @@ long sys_switch_endian(void); notrace unsigned int __check_irq_replay(void); void notrace restore_interrupts(void); -/* ptrace */ -long do_syscall_trace_enter(struct pt_regs *regs); -void do_syscall_trace_leave(struct pt_regs *regs); - -/* process */ -void restore_math(struct pt_regs *regs); -void restore_tm_state(struct pt_regs *regs); - /* prom_init (OpenFirmware) */ unsigned long __init prom_init(unsigned long r3, unsigned long r4, unsigned long pp, @@ -121,9 +113,6 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4, void __init early_setup(unsigned long dt_ptr); void early_setup_secondary(void); -/* time */ -void accumulate_stolen_time(void); - /* misc runtime */ extern u64 __bswapdi2(u64); extern s64 __lshrdi3(s64, int); diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h index f254de956d6a..07058edc5970 100644 --- a/arch/powerpc/include/asm/book3s/64/kup-radix.h +++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h @@ -3,6 +3,7 @@ #define _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H #include +#include #define AMR_KUAP_BLOCK_READ UL(0x4000000000000000) #define AMR_KUAP_BLOCK_WRITE UL(0x8000000000000000) @@ -56,7 +57,14 @@ #ifdef CONFIG_PPC_KUAP -#include +#include +#include + +static inline void kuap_check_amr(void) +{ + if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_RADIX_KUAP)) + WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED); +} /* * We support individually allowing read or write, but we don't support nesting @@ -101,6 +109,10 @@ static inline bool bad_kuap_fault(struct pt_regs *regs, bool is_write) (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)), "Bug: %s fault blocked by AMR!", is_write ? 
"Write" : "Read"); } +#else /* CONFIG_PPC_KUAP */ +static inline void kuap_check_amr(void) +{ +} #endif /* CONFIG_PPC_KUAP */ #endif /* __ASSEMBLY__ */ diff --git a/arch/powerpc/include/asm/cputime.h b/arch/powerpc/include/asm/cputime.h index 2431b4ada2fa..c43614cffaac 100644 --- a/arch/powerpc/include/asm/cputime.h +++ b/arch/powerpc/include/asm/cputime.h @@ -60,6 +60,30 @@ static inline void arch_vtime_task_switch(struct task_struct *prev) } #endif +static inline void account_cpu_user_entry(void) +{ + unsigned long tb = mftb(); + struct cpu_accounting_data *acct = get_accounting(current); + + acct->utime += (tb - acct->starttime_user); + acct->starttime = tb; +} +static inline void account_cpu_user_exit(void) +{ + unsigned long tb = mftb(); + struct cpu_accounting_data *acct = get_accounting(current); + + acct->stime += (tb - acct->starttime); + acct->starttime_user = tb; +} + #endif /* __KERNEL__ */ +#else /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */ +static inline void account_cpu_user_entry(void) +{ +} +static inline void account_cpu_user_exit(void) +{ +} #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */ #endif /* __POWERPC_CPUTIME_H */ diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h index 32a18f2f49bc..7e1c7f5873a8 100644 --- a/arch/powerpc/include/asm/hw_irq.h +++ b/arch/powerpc/include/asm/hw_irq.h @@ -228,9 +228,13 @@ static inline bool arch_irqs_disabled(void) #ifdef CONFIG_PPC_BOOK3E #define __hard_irq_enable() asm volatile("wrteei 1" : : : "memory") #define __hard_irq_disable() asm volatile("wrteei 0" : : : "memory") +#define __hard_EE_RI_disable() asm volatile("wrteei 0" : : : "memory") +#define __hard_RI_enable() do { } while (0) #else #define __hard_irq_enable() __mtmsrd(MSR_EE|MSR_RI, 1) #define __hard_irq_disable() __mtmsrd(MSR_RI, 1) +#define __hard_EE_RI_disable() __mtmsrd(0, 1) +#define __hard_RI_enable() __mtmsrd(MSR_RI, 1) #endif #define hard_irq_disable() do { \ diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h index ee3ada66deb5..082a40153b94 100644 --- a/arch/powerpc/include/asm/ptrace.h +++ b/arch/powerpc/include/asm/ptrace.h @@ -138,6 +138,9 @@ extern unsigned long profile_pc(struct pt_regs *regs); #define profile_pc(regs) instruction_pointer(regs) #endif +long do_syscall_trace_enter(struct pt_regs *regs); +void do_syscall_trace_leave(struct pt_regs *regs); + #define kernel_stack_pointer(regs) ((regs)->gpr[1]) static inline int is_syscall_success(struct pt_regs *regs) { diff --git a/arch/powerpc/include/asm/signal.h b/arch/powerpc/include/asm/signal.h index 0803ca8b9149..99e1c6de27bc 100644 --- a/arch/powerpc/include/asm/signal.h +++ b/arch/powerpc/include/asm/signal.h @@ -6,4 +6,7 @@ #include #include +struct pt_regs; +void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags); + #endif /* _ASM_POWERPC_SIGNAL_H */ diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h index 5b03d8a82409..476008bc3d08 100644 --- a/arch/powerpc/include/asm/switch_to.h +++ b/arch/powerpc/include/asm/switch_to.h @@ -5,6 +5,7 @@ #ifndef _ASM_POWERPC_SWITCH_TO_H #define _ASM_POWERPC_SWITCH_TO_H +#include #include struct thread_struct; @@ -22,6 +23,10 @@ extern void switch_booke_debug_regs(struct debug_reg *new_debug); extern int emulate_altivec(struct pt_regs *); +void restore_math(struct pt_regs *regs); + +void restore_tm_state(struct pt_regs *regs); + extern void flush_all_to_thread(struct task_struct *); extern void giveup_all(struct task_struct *); diff --git 
a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index e0107495c4de..39ce95016a3a 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -194,5 +194,8 @@ DECLARE_PER_CPU(u64, decrementers_next_tb); /* Convert timebase ticks to nanoseconds */ unsigned long long tb_to_ns(unsigned long long tb_ticks); +/* SPLPAR */ +void accumulate_stolen_time(void); + #endif /* __KERNEL__ */ #endif /* __POWERPC_TIME_H */ diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile index a7ca8fe62368..45f1d5e54671 100644 --- a/arch/powerpc/kernel/Makefile +++ b/arch/powerpc/kernel/Makefile @@ -52,7 +52,8 @@ obj-y := cputable.o ptrace.o syscalls.o \ of_platform.o prom_parse.o obj-$(CONFIG_PPC64) += setup_64.o sys_ppc32.o \ signal_64.o ptrace32.o \ - paca.o nvram_64.o firmware.o note.o + paca.o nvram_64.o firmware.o note.o \ + syscall_64.o obj-$(CONFIG_VDSO32) += vdso32/ obj-$(CONFIG_PPC_WATCHDOG) += watchdog.o obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index 5a3e0b5c9ad1..15bc2a872a76 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S @@ -63,12 +63,6 @@ exception_marker: .globl system_call_common system_call_common: -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM -BEGIN_FTR_SECTION - extrdi. r10, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */ - bne .Ltabort_syscall -END_FTR_SECTION_IFSET(CPU_FTR_TM) -#endif mr r10,r1 ld r1,PACAKSAVE(r13) std r10,0(r1) @@ -76,350 +70,111 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM) std r12,_MSR(r1) std r0,GPR0(r1) std r10,GPR1(r1) + std r2,GPR2(r1) #ifdef CONFIG_PPC_FSL_BOOK3E START_BTB_FLUSH_SECTION BTB_FLUSH(r10) END_BTB_FLUSH_SECTION #endif - ACCOUNT_CPU_USER_ENTRY(r13, r10, r11) - std r2,GPR2(r1) + ld r2,PACATOC(r13) + mfcr r12 + li r11,0 + /* Can we avoid saving r3-r8 in common case? */ std r3,GPR3(r1) - mfcr r2 std r4,GPR4(r1) std r5,GPR5(r1) std r6,GPR6(r1) std r7,GPR7(r1) std r8,GPR8(r1) - li r11,0 + /* Zero r9-r12, this should only be required when restoring all GPRs */ std r11,GPR9(r1) std r11,GPR10(r1) std r11,GPR11(r1) std r11,GPR12(r1) - std r11,_XER(r1) - std r11,_CTR(r1) std r9,GPR13(r1) SAVE_NVGPRS(r1) + std r11,_XER(r1) + std r11,_CTR(r1) mflr r10 + /* * This clears CR0.SO (bit 28), which is the error indication on * return from this system call. */ - rldimi r2,r11,28,(63-28) + rldimi r12,r11,28,(63-28) li r11,0xc00 std r10,_LINK(r1) std r11,_TRAP(r1) + std r12,_CCR(r1) std r3,ORIG_GPR3(r1) - std r2,_CCR(r1) - ld r2,PACATOC(r13) - addi r9,r1,STACK_FRAME_OVERHEAD + addi r10,r1,STACK_FRAME_OVERHEAD ld r11,exception_marker@toc(r2) - std r11,-16(r9) /* "regshere" marker */ - - kuap_check_amr r10, r11 - -#if defined(CONFIG_VIRT_CPU_ACCOUNTING_NATIVE) && defined(CONFIG_PPC_SPLPAR) -BEGIN_FW_FTR_SECTION - /* see if there are any DTL entries to process */ - ld r10,PACALPPACAPTR(r13) /* get ptr to VPA */ - ld r11,PACA_DTL_RIDX(r13) /* get log read index */ - addi r10,r10,LPPACA_DTLIDX - LDX_BE r10,0,r10 /* get log write index */ - cmpd r11,r10 - beq+ 33f - bl accumulate_stolen_time - REST_GPR(0,r1) - REST_4GPRS(3,r1) - REST_2GPRS(7,r1) - addi r9,r1,STACK_FRAME_OVERHEAD -33: -END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR) -#endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE && CONFIG_PPC_SPLPAR */ - - /* - * A syscall should always be called with interrupts enabled - * so we just unconditionally hard-enable here. 
When some kind - * of irq tracing is used, we additionally check that condition - * is correct - */ -#if defined(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG) && defined(CONFIG_BUG) - lbz r10,PACAIRQSOFTMASK(r13) -1: tdnei r10,IRQS_ENABLED - EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING -#endif - -#ifdef CONFIG_PPC_BOOK3E - wrteei 1 -#else - li r11,MSR_RI - ori r11,r11,MSR_EE - mtmsrd r11,1 -#endif /* CONFIG_PPC_BOOK3E */ - -system_call: /* label this so stack traces look sane */ - /* We do need to set SOFTE in the stack frame or the return - * from interrupt will be painful - */ - li r10,IRQS_ENABLED - std r10,SOFTE(r1) + std r11,-16(r10) /* "regshere" marker */ - ld r11, PACA_THREAD_INFO(r13) - ld r10,TI_FLAGS(r11) - andi. r11,r10,_TIF_SYSCALL_DOTRACE - bne .Lsyscall_dotrace /* does not return */ - cmpldi 0,r0,NR_syscalls - bge- .Lsyscall_enosys + /* Calling convention has r9 = orig r0, r10 = regs */ + mr r9,r0 + bl system_call_exception -.Lsyscall: -/* - * Need to vector to 32 Bit or default sys_call_table here, - * based on caller's run-mode / personality. - */ - ld r11,SYS_CALL_TABLE@toc(2) - andis. r10,r10,_TIF_32BIT@h - beq 15f - ld r11,COMPAT_SYS_CALL_TABLE@toc(2) - clrldi r3,r3,32 - clrldi r4,r4,32 - clrldi r5,r5,32 - clrldi r6,r6,32 - clrldi r7,r7,32 - clrldi r8,r8,32 -15: - slwi r0,r0,3 - - barrier_nospec_asm - /* - * Prevent the load of the handler below (based on the user-passed - * system call number) being speculatively executed until the test - * against NR_syscalls and branch to .Lsyscall_enosys above has - * committed. - */ - - ldx r12,r11,r0 /* Fetch system call handler [ptr] */ - mtctr r12 - bctrl /* Call handler */ - - /* syscall_exit can exit to kernel mode, via ret_from_kernel_thread */ .Lsyscall_exit: - std r3,RESULT(r1) + addi r4,r1,STACK_FRAME_OVERHEAD + bl syscall_exit_prepare -#ifdef CONFIG_DEBUG_RSEQ - /* Check whether the syscall is issued inside a restartable sequence */ - addi r3,r1,STACK_FRAME_OVERHEAD - bl rseq_syscall - ld r3,RESULT(r1) -#endif - - ld r12, PACA_THREAD_INFO(r13) - - ld r8,_MSR(r1) - -/* - * This is a few instructions into the actual syscall exit path (which actually - * starts at .Lsyscall_exit) to cater to kprobe blacklisting and to reduce the - * number of visible symbols for profiling purposes. - * - * We can probe from system_call until this point as MSR_RI is set. But once it - * is cleared below, we won't be able to take a trap. - * - * This is blacklisted from kprobes further below with _ASM_NOKPROBE_SYMBOL(). - */ -system_call_exit: - /* - * Disable interrupts so current_thread_info()->flags can't change, - * and so that we don't get interrupted after loading SRR0/1. - * - * Leave MSR_RI enabled for now, because with THREAD_INFO_IN_TASK we - * could fault on the load of the TI_FLAGS below. - */ -#ifdef CONFIG_PPC_BOOK3E - wrteei 0 -#else - li r11,MSR_RI - mtmsrd r11,1 -#endif /* CONFIG_PPC_BOOK3E */ - - ld r9,TI_FLAGS(r12) - li r11,-MAX_ERRNO - andi. r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK) - bne- .Lsyscall_exit_work + ld r2,_CCR(r1) + ld r4,_NIP(r1) + ld r5,_MSR(r1) + ld r6,_LINK(r1) - andi. r0,r8,MSR_FP - beq 2f -#ifdef CONFIG_ALTIVEC - andis. r0,r8,MSR_VEC@h - bne 3f -#endif -2: addi r3,r1,STACK_FRAME_OVERHEAD - bl restore_math - ld r8,_MSR(r1) - ld r3,RESULT(r1) - li r11,-MAX_ERRNO - -3: cmpld r3,r11 - ld r5,_CCR(r1) - bge- .Lsyscall_error -.Lsyscall_error_cont: - ld r7,_NIP(r1) BEGIN_FTR_SECTION stdcx. 
r0,0,r1 /* to clear the reservation */ END_FTR_SECTION_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS) - andi. r6,r8,MSR_PR - ld r4,_LINK(r1) - kuap_check_amr r10, r11 + mtspr SPRN_SRR0,r4 + mtspr SPRN_SRR1,r5 + mtlr r6 -#ifdef CONFIG_PPC_BOOK3S - /* - * Clear MSR_RI, MSR_EE is already and remains disabled. We could do - * this later, but testing shows that doing it here causes less slow - * down than doing it closer to the rfid. - */ - li r11,0 - mtmsrd r11,1 -#endif - - beq- 1f - ACCOUNT_CPU_USER_EXIT(r13, r11, r12) + cmpdi r3,0 + bne syscall_restore_regs +.Lsyscall_restore_regs_cont: BEGIN_FTR_SECTION HMT_MEDIUM_LOW END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM - std r8, PACATMSCRATCH(r13) -#endif - /* * We don't need to restore AMR on the way back to userspace for KUAP. * The value of AMR only matters while we're in the kernel. */ - ld r13,GPR13(r1) /* only restore r13 if returning to usermode */ + mtcr r2 ld r2,GPR2(r1) + ld r3,GPR3(r1) + ld r13,GPR13(r1) ld r1,GPR1(r1) - mtlr r4 - mtcr r5 - mtspr SPRN_SRR0,r7 - mtspr SPRN_SRR1,r8 RFI_TO_USER b . /* prevent speculative execution */ +_ASM_NOKPROBE_SYMBOL(system_call_common); -1: /* exit to kernel */ - kuap_restore_amr r2 - - ld r2,GPR2(r1) - ld r1,GPR1(r1) - mtlr r4 - mtcr r5 - mtspr SPRN_SRR0,r7 - mtspr SPRN_SRR1,r8 - RFI_TO_KERNEL - b . /* prevent speculative execution */ - -.Lsyscall_error: - oris r5,r5,0x1000 /* Set SO bit in CR */ - neg r3,r3 - std r5,_CCR(r1) - b .Lsyscall_error_cont - -/* Traced system call support */ -.Lsyscall_dotrace: - addi r3,r1,STACK_FRAME_OVERHEAD - bl do_syscall_trace_enter - - /* - * We use the return value of do_syscall_trace_enter() as the syscall - * number. If the syscall was rejected for any reason do_syscall_trace_enter() - * returns an invalid syscall number and the test below against - * NR_syscalls will fail. - */ - mr r0,r3 - - /* Restore argument registers just clobbered and/or possibly changed. */ - ld r3,GPR3(r1) - ld r4,GPR4(r1) - ld r5,GPR5(r1) - ld r6,GPR6(r1) - ld r7,GPR7(r1) - ld r8,GPR8(r1) - - /* Repopulate r9 and r10 for the syscall path */ - addi r9,r1,STACK_FRAME_OVERHEAD - ld r10, PACA_THREAD_INFO(r13) - ld r10,TI_FLAGS(r10) - - cmpldi r0,NR_syscalls - blt+ .Lsyscall - - /* Return code is already in r3 thanks to do_syscall_trace_enter() */ - b .Lsyscall_exit - - -.Lsyscall_enosys: - li r3,-ENOSYS - b .Lsyscall_exit - -.Lsyscall_exit_work: - /* If TIF_RESTOREALL is set, don't scribble on either r3 or ccr. - If TIF_NOERROR is set, just save r3 as it is. */ - - andi. r0,r9,_TIF_RESTOREALL - beq+ 0f +syscall_restore_regs: + ld r3,_CTR(r1) + ld r4,_XER(r1) REST_NVGPRS(r1) - b 2f -0: cmpld r3,r11 /* r11 is -MAX_ERRNO */ - blt+ 1f - andi. r0,r9,_TIF_NOERROR - bne- 1f - ld r5,_CCR(r1) - neg r3,r3 - oris r5,r5,0x1000 /* Set SO bit in CR */ - std r5,_CCR(r1) -1: std r3,GPR3(r1) -2: andi. r0,r9,(_TIF_PERSYSCALL_MASK) - beq 4f - - /* Clear per-syscall TIF flags if any are set. */ - - li r11,_TIF_PERSYSCALL_MASK - addi r12,r12,TI_FLAGS -3: ldarx r10,0,r12 - andc r10,r10,r11 - stdcx. r10,0,r12 - bne- 3b - subi r12,r12,TI_FLAGS - -4: /* Anything else left to do? */ -BEGIN_FTR_SECTION - lis r3,DEFAULT_PPR@highest /* Set default PPR */ - sldi r3,r3,32 /* bits 11-13 are used for ppr */ - std r3,_PPR(r1) -END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) - - andi. 
r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP) - beq ret_from_except_lite - - /* Re-enable interrupts */ -#ifdef CONFIG_PPC_BOOK3E - wrteei 1 -#else - li r10,MSR_RI - ori r10,r10,MSR_EE - mtmsrd r10,1 -#endif /* CONFIG_PPC_BOOK3E */ - - addi r3,r1,STACK_FRAME_OVERHEAD - bl do_syscall_trace_leave - b ret_from_except + mtctr r3 + mtspr SPRN_XER,r4 + ld r0,GPR0(r1) + REST_8GPRS(4, r1) + ld r12,GPR12(r1) + b .Lsyscall_restore_regs_cont #ifdef CONFIG_PPC_TRANSACTIONAL_MEM -.Ltabort_syscall: +_GLOBAL(tabort_syscall) /* Firstly we need to enable TM in the kernel */ mfmsr r10 li r9, 1 rldimi r10, r9, MSR_TM_LG, 63-MSR_TM_LG mtmsrd r10, 0 + ld r11,_NIP(r13) + ld r12,_MSR(r13) + /* tabort, this dooms the transaction, nothing else */ li r9, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT) TABORT(R9) @@ -438,8 +193,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) RFI_TO_USER b . /* prevent speculative execution */ #endif -_ASM_NOKPROBE_SYMBOL(system_call_common); -_ASM_NOKPROBE_SYMBOL(system_call_exit); _GLOBAL(ret_from_fork) bl schedule_tail diff --git a/arch/powerpc/kernel/signal.h b/arch/powerpc/kernel/signal.h index 800433685888..d396efca4068 100644 --- a/arch/powerpc/kernel/signal.h +++ b/arch/powerpc/kernel/signal.h @@ -10,8 +10,6 @@ #ifndef _POWERPC_ARCH_SIGNAL_H #define _POWERPC_ARCH_SIGNAL_H -extern void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags); - extern void __user *get_sigframe(struct ksignal *ksig, unsigned long sp, size_t frame_size, int is_32); diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c new file mode 100644 index 000000000000..ce4e1b5a9c27 --- /dev/null +++ b/arch/powerpc/kernel/syscall_64.c @@ -0,0 +1,195 @@ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern void __noreturn tabort_syscall(void); + +typedef long (*syscall_fn)(long, long, long, long, long, long); + +long system_call_exception(long r3, long r4, long r5, long r6, long r7, long r8, unsigned long r0, struct pt_regs *regs) +{ + unsigned long ti_flags; + syscall_fn f; + + BUG_ON(!(regs->msr & MSR_PR)); + + if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && + unlikely(regs->msr & MSR_TS_T)) + tabort_syscall(); + + account_cpu_user_entry(); + +#ifdef CONFIG_PPC_SPLPAR + if (IS_ENABLED(CONFIG_VIRT_CPU_ACCOUNTING_NATIVE) && + firmware_has_feature(FW_FEATURE_SPLPAR)) { + struct lppaca *lp = get_lppaca(); + + if (unlikely(local_paca->dtl_ridx != be64_to_cpu(lp->dtl_idx))) + accumulate_stolen_time(); + } +#endif + + kuap_check_amr(); + + /* + * A syscall should always be called with interrupts enabled + * so we just unconditionally hard-enable here. When some kind + * of irq tracing is used, we additionally check that condition + * is correct + */ + if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) { + WARN_ON(irq_soft_mask_return() != IRQS_ENABLED); + WARN_ON(local_paca->irq_happened); + } + /* + * This is not required for the syscall exit path, but makes the + * stack frame look nicer. If this was initialised in the first stack + * frame, or if the unwinder was taught the first stack frame always + * returns to user with IRQS_ENABLED, this store could be avoided! + */ + regs->softe = IRQS_ENABLED; + + __hard_irq_enable(); + + ti_flags = current_thread_info()->flags; + if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) { + /* + * We use the return value of do_syscall_trace_enter() as the + * syscall number. 
If the syscall was rejected for any reason + * do_syscall_trace_enter() returns an invalid syscall number + * and the test below against NR_syscalls will fail. + */ + r0 = do_syscall_trace_enter(regs); + } + + if (unlikely(r0 >= NR_syscalls)) + return -ENOSYS; + + /* May be faster to do array_index_nospec? */ + barrier_nospec(); + + if (unlikely(ti_flags & _TIF_32BIT)) { + f = (void *)compat_sys_call_table[r0]; + + r3 &= 0x00000000ffffffffULL; + r4 &= 0x00000000ffffffffULL; + r5 &= 0x00000000ffffffffULL; + r6 &= 0x00000000ffffffffULL; + r7 &= 0x00000000ffffffffULL; + r8 &= 0x00000000ffffffffULL; + + } else { + f = (void *)sys_call_table[r0]; + } + + return f(r3, r4, r5, r6, r7, r8); +} + +/* + * This should be called after a syscall returns, with r3 the return value + * from the syscall. If this function returns non-zero, the system call + * exit assembly should additionally load all GPR registers and CTR and XER + * from the interrupt frame. + * + * The function graph tracer can not trace the return side of this function, + * because RI=0 and soft mask state is "unreconciled", so it is marked notrace. + */ +notrace unsigned long syscall_exit_prepare(unsigned long r3, struct pt_regs *regs) +{ + unsigned long *ti_flagsp = ¤t_thread_info()->flags; + unsigned long ti_flags; + unsigned long ret = 0; + + regs->result = r3; + + /* Check whether the syscall is issued inside a restartable sequence */ + rseq_syscall(regs); + + ti_flags = *ti_flagsp; + + if (unlikely(r3 >= (unsigned long)-MAX_ERRNO)) { + if (likely(!(ti_flags & (_TIF_NOERROR | _TIF_RESTOREALL)))) { + r3 = -r3; + regs->ccr |= 0x10000000; /* Set SO bit in CR */ + } + } + + if (unlikely(ti_flags & _TIF_PERSYSCALL_MASK)) { + if (ti_flags & _TIF_RESTOREALL) + ret = _TIF_RESTOREALL; + else + regs->gpr[3] = r3; + clear_bits(_TIF_PERSYSCALL_MASK, ti_flagsp); + } else { + regs->gpr[3] = r3; + } + + if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) + do_syscall_trace_leave(regs); + +again: + local_irq_disable(); + ti_flags = READ_ONCE(*ti_flagsp); + while (unlikely(ti_flags & _TIF_USER_WORK_MASK)) { + local_irq_enable(); + if (ti_flags & _TIF_NEED_RESCHED) { + schedule(); + } else { + /* + * SIGPENDING must restore signal handler function + * argument GPRs, and some non-volatiles (e.g., r1). + * Restore all for now. This could be made lighter. + */ + if (ti_flags & _TIF_SIGPENDING) + ret |= _TIF_RESTOREALL; + do_notify_resume(regs, ti_flags); + } + local_irq_disable(); + ti_flags = READ_ONCE(*ti_flagsp); + } + + if (IS_ENABLED(CONFIG_PPC_BOOK3S) && IS_ENABLED(CONFIG_PPC_FPU)) { + unsigned long mathflags = MSR_FP; + + if (IS_ENABLED(CONFIG_ALTIVEC)) + mathflags |= MSR_VEC; + + if ((regs->msr & mathflags) != mathflags) + restore_math(regs); + } + + /* This must be done with RI=1 because tracing may touch vmaps */ + trace_hardirqs_on(); + + /* This pattern matches prep_irq_for_idle */ + __hard_EE_RI_disable(); + if (unlikely(lazy_irq_pending())) { + __hard_RI_enable(); + trace_hardirqs_off(); + local_paca->irq_happened |= PACA_IRQ_HARD_DIS; + local_irq_enable(); + /* Took an interrupt which may have more exit work to do. 
*/ + goto again; + } + local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS; + irq_soft_mask_set(IRQS_ENABLED); + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + local_paca->tm_scratch = regs->msr; +#endif + + kuap_check_amr(); + + account_cpu_user_exit(); + + return ret; +} diff --git a/arch/powerpc/kernel/systbl.S b/arch/powerpc/kernel/systbl.S index 5b905a2f4e4d..d34276f3c495 100644 --- a/arch/powerpc/kernel/systbl.S +++ b/arch/powerpc/kernel/systbl.S @@ -16,25 +16,22 @@ #ifdef CONFIG_PPC64 .p2align 3 +#define __SYSCALL(nr, entry) .8byte entry +#else +#define __SYSCALL(nr, entry) .long entry #endif .globl sys_call_table sys_call_table: #ifdef CONFIG_PPC64 -#define __SYSCALL(nr, entry) .8byte DOTSYM(entry) #include -#undef __SYSCALL #else -#define __SYSCALL(nr, entry) .long entry #include -#undef __SYSCALL #endif #ifdef CONFIG_COMPAT .globl compat_sys_call_table compat_sys_call_table: #define compat_sys_sigsuspend sys_sigsuspend -#define __SYSCALL(nr, entry) .8byte DOTSYM(entry) #include -#undef __SYSCALL #endif From patchwork Tue Nov 12 16:52:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193755 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CFw20QS0z9sPc for ; Wed, 13 Nov 2019 05:02:58 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CFw12NDMzDqcL for ; Wed, 13 Nov 2019 05:02:57 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNL4XRyzF33j for ; Wed, 13 Nov 2019 03:53:54 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 72DEDAF84; Tue, 12 Nov 2019 16:53:50 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 24/33] powerpc/64s: interrupt return in C Date: Tue, 12 Nov 2019 17:52:22 +0100 Message-Id: <263f19c57249f107c0d164544dcc3ae73e0cb53a.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert 
Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin Implement the bulk of interrupt return logic in C. The asm return code must handle a few cases: restoring full GPRs, and emulating stack store. The asm return code is moved into 64e for now. The new logic has made allowance for 64e, but I don't have a full environment that works well to test it, and even booting in emulated qemu is not great for stress testing. 64e shouldn't be too far off working with this, given a bit more testing and auditing of the logic. This is slightly faster on a POWER9 (page fault speed increases about 1.1%), probably due to reduced mtmsrd. Signed-off-by: Nicholas Piggin [ms: move the FP restore functions to restore_math. They are not used anywhere else and when restore_math is not built gcc warns about them being unused.] Signed-off-by: Michal Suchanek --- .../powerpc/include/asm/book3s/64/kup-radix.h | 10 + arch/powerpc/include/asm/switch_to.h | 6 + arch/powerpc/kernel/entry_64.S | 475 ++++-------------- arch/powerpc/kernel/exceptions-64e.S | 254 +++++++++- arch/powerpc/kernel/exceptions-64s.S | 119 ++--- arch/powerpc/kernel/process.c | 89 ++-- arch/powerpc/kernel/syscall_64.c | 157 +++++- arch/powerpc/kernel/vector.S | 2 +- 8 files changed, 622 insertions(+), 490 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h index 07058edc5970..762afbed4762 100644 --- a/arch/powerpc/include/asm/book3s/64/kup-radix.h +++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h @@ -60,6 +60,12 @@ #include #include +static inline void kuap_restore_amr(struct pt_regs *regs) +{ + if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) + mtspr(SPRN_AMR, regs->kuap); +} + static inline void kuap_check_amr(void) { if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_RADIX_KUAP)) @@ -110,6 +116,10 @@ static inline bool bad_kuap_fault(struct pt_regs *regs, bool is_write) "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read"); } #else /* CONFIG_PPC_KUAP */ +static inline void kuap_restore_amr(struct pt_regs *regs) +{ +} + static inline void kuap_check_amr(void) { } diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h index 476008bc3d08..b867b58b1093 100644 --- a/arch/powerpc/include/asm/switch_to.h +++ b/arch/powerpc/include/asm/switch_to.h @@ -23,7 +23,13 @@ extern void switch_booke_debug_regs(struct debug_reg *new_debug); extern int emulate_altivec(struct pt_regs *); +#ifdef CONFIG_PPC_BOOK3S_64 void restore_math(struct pt_regs *regs); +#else +static inline void restore_math(struct pt_regs *regs) +{ +} +#endif void restore_tm_state(struct pt_regs *regs); diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index 15bc2a872a76..b2e68f5ca8f7 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S @@ -16,6 +16,7 @@ #include #include +#include #include #include #include @@ -279,7 +280,7 @@ flush_count_cache: * state of one is saved on its kernel stack. Then the state * of the other is restored from its kernel stack. 
The memory * management hardware is updated to the second process's state. - * Finally, we can return to the second process, via ret_from_except. + * Finally, we can return to the second process, via interrupt_return. * On entry, r3 points to the THREAD for the current task, r4 * points to the THREAD for the new task. * @@ -433,408 +434,150 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) addi r1,r1,SWITCH_FRAME_SIZE blr - .align 7 -_GLOBAL(ret_from_except) - ld r11,_TRAP(r1) - andi. r0,r11,1 - bne ret_from_except_lite - REST_NVGPRS(r1) - -_GLOBAL(ret_from_except_lite) +#ifdef CONFIG_PPC_BOOK3S /* - * Disable interrupts so that current_thread_info()->flags - * can't change between when we test it and when we return - * from the interrupt. - */ -#ifdef CONFIG_PPC_BOOK3E - wrteei 0 -#else - li r10,MSR_RI - mtmsrd r10,1 /* Update machine state */ -#endif /* CONFIG_PPC_BOOK3E */ + * If MSR EE/RI was never enabled, IRQs not reconciled, NVGPRs not + * touched, AMR not set, no exit work created, then this can be used. + */ + .balign IFETCH_ALIGN_BYTES +_GLOBAL(fast_interrupt_return) + ld r4,_MSR(r1) + andi. r0,r4,MSR_PR + bne .Lfast_user_interrupt_return + andi. r0,r4,MSR_RI + bne+ .Lfast_kernel_interrupt_return + addi r3,r1,STACK_FRAME_OVERHEAD + bl unrecoverable_exception + b . /* should not get here */ - ld r9, PACA_THREAD_INFO(r13) - ld r3,_MSR(r1) -#ifdef CONFIG_PPC_BOOK3E - ld r10,PACACURRENT(r13) -#endif /* CONFIG_PPC_BOOK3E */ - ld r4,TI_FLAGS(r9) - andi. r3,r3,MSR_PR - beq resume_kernel -#ifdef CONFIG_PPC_BOOK3E - lwz r3,(THREAD+THREAD_DBCR0)(r10) -#endif /* CONFIG_PPC_BOOK3E */ + .balign IFETCH_ALIGN_BYTES +_GLOBAL(interrupt_return) + REST_NVGPRS(r1) - /* Check current_thread_info()->flags */ - andi. r0,r4,_TIF_USER_WORK_MASK - bne 1f -#ifdef CONFIG_PPC_BOOK3E - /* - * Check to see if the dbcr0 register is set up to debug. - * Use the internal debug mode bit to do this. - */ - andis. r0,r3,DBCR0_IDM@h - beq restore - mfmsr r0 - rlwinm r0,r0,0,~MSR_DE /* Clear MSR.DE */ - mtmsr r0 - mtspr SPRN_DBCR0,r3 - li r10, -1 - mtspr SPRN_DBSR,r10 - b restore -#else + .balign IFETCH_ALIGN_BYTES +_GLOBAL(interrupt_return_lite) + ld r4,_MSR(r1) + andi. r0,r4,MSR_PR + beq kernel_interrupt_return +user_interrupt_return: addi r3,r1,STACK_FRAME_OVERHEAD - bl restore_math - b restore -#endif -1: andi. r0,r4,_TIF_NEED_RESCHED - beq 2f - bl restore_interrupts - SCHEDULE_USER - b ret_from_except_lite -2: -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM - andi. r0,r4,_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM - bne 3f /* only restore TM if nothing else to do */ - addi r3,r1,STACK_FRAME_OVERHEAD - bl restore_tm_state - b restore -3: -#endif - bl save_nvgprs - /* - * Use a non volatile GPR to save and restore our thread_info flags - * across the call to restore_interrupts. - */ - mr r30,r4 - bl restore_interrupts - mr r4,r30 - addi r3,r1,STACK_FRAME_OVERHEAD - bl do_notify_resume - b ret_from_except - -resume_kernel: - /* check current_thread_info, _TIF_EMULATE_STACK_STORE */ - andis. 
r8,r4,_TIF_EMULATE_STACK_STORE@h - beq+ 1f + bl interrupt_exit_user_prepare + cmpdi r3,0 + bne- .Lrestore_nvgprs - addi r8,r1,INT_FRAME_SIZE /* Get the kprobed function entry */ +.Lfast_user_interrupt_return: + ld r11,_NIP(r1) + ld r12,_MSR(r1) +BEGIN_FTR_SECTION + ld r10,_PPR(r1) + mtspr SPRN_PPR,r10 +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) + mtspr SPRN_SRR0,r11 + mtspr SPRN_SRR1,r12 - ld r3,GPR1(r1) - subi r3,r3,INT_FRAME_SIZE /* dst: Allocate a trampoline exception frame */ - mr r4,r1 /* src: current exception frame */ - mr r1,r3 /* Reroute the trampoline frame to r1 */ +BEGIN_FTR_SECTION + stdcx. r0,0,r1 /* to clear the reservation */ +FTR_SECTION_ELSE + ldarx r0,0,r1 +ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS) - /* Copy from the original to the trampoline. */ - li r5,INT_FRAME_SIZE/8 /* size: INT_FRAME_SIZE */ - li r6,0 /* start offset: 0 */ - mtctr r5 -2: ldx r0,r6,r4 - stdx r0,r6,r3 - addi r6,r6,8 - bdnz 2b - - /* Do real store operation to complete stdu */ - ld r5,GPR1(r1) - std r8,0(r5) - - /* Clear _TIF_EMULATE_STACK_STORE flag */ - lis r11,_TIF_EMULATE_STACK_STORE@h - addi r5,r9,TI_FLAGS -0: ldarx r4,0,r5 - andc r4,r4,r11 - stdcx. r4,0,r5 - bne- 0b -1: - -#ifdef CONFIG_PREEMPT - /* Check if we need to preempt */ - andi. r0,r4,_TIF_NEED_RESCHED - beq+ restore - /* Check that preempt_count() == 0 and interrupts are enabled */ - lwz r8,TI_PREEMPT(r9) - cmpwi cr0,r8,0 - bne restore - ld r0,SOFTE(r1) - andi. r0,r0,IRQS_DISABLED - bne restore + ld r3,_CCR(r1) + ld r4,_LINK(r1) + ld r5,_CTR(r1) + ld r6,_XER(r1) + li r0,0 - /* - * Here we are preempting the current task. We want to make - * sure we are soft-disabled first and reconcile irq state. - */ - RECONCILE_IRQ_STATE(r3,r4) - bl preempt_schedule_irq + REST_4GPRS(7, r1) + REST_2GPRS(11, r1) + REST_GPR(13, r1) - /* - * arch_local_irq_restore() from preempt_schedule_irq above may - * enable hard interrupt but we really should disable interrupts - * when we return from the interrupt, and so that we don't get - * interrupted after loading SRR0/1. - */ -#ifdef CONFIG_PPC_BOOK3E - wrteei 0 -#else - li r10,MSR_RI - mtmsrd r10,1 /* Update machine state */ -#endif /* CONFIG_PPC_BOOK3E */ -#endif /* CONFIG_PREEMPT */ + mtcr r3 + mtlr r4 + mtctr r5 + mtspr SPRN_XER,r6 - .globl fast_exc_return_irq -fast_exc_return_irq: -restore: - /* - * This is the main kernel exit path. First we check if we - * are about to re-enable interrupts - */ - ld r5,SOFTE(r1) - lbz r6,PACAIRQSOFTMASK(r13) - andi. r5,r5,IRQS_DISABLED - bne .Lrestore_irq_off + REST_4GPRS(2, r1) + REST_GPR(6, r1) + REST_GPR(0, r1) + REST_GPR(1, r1) + RFI_TO_USER + b . /* prevent speculative execution */ - /* We are enabling, were we already enabled ? Yes, just return */ - andi. r6,r6,IRQS_DISABLED - beq cr0,.Ldo_restore +.Lrestore_nvgprs: + REST_NVGPRS(r1) + b .Lfast_user_interrupt_return - /* - * We are about to soft-enable interrupts (we are hard disabled - * at this point). We check if there's anything that needs to - * be replayed first. - */ - lbz r0,PACAIRQHAPPENED(r13) - cmpwi cr0,r0,0 - bne- .Lrestore_check_irq_replay + .balign IFETCH_ALIGN_BYTES +kernel_interrupt_return: + addi r3,r1,STACK_FRAME_OVERHEAD + bl interrupt_exit_kernel_prepare + cmpdi cr1,r3,0 - /* - * Get here when nothing happened while soft-disabled, just - * soft-enable and move-on. 
We will hard-enable as a side - * effect of rfi - */ -.Lrestore_no_replay: - TRACE_ENABLE_INTS - li r0,IRQS_ENABLED - stb r0,PACAIRQSOFTMASK(r13); +.Lfast_kernel_interrupt_return: + ld r11,_NIP(r1) + ld r12,_MSR(r1) + mtspr SPRN_SRR0,r11 + mtspr SPRN_SRR1,r12 - /* - * Final return path. BookE is handled in a different file - */ -.Ldo_restore: -#ifdef CONFIG_PPC_BOOK3E - b exception_return_book3e -#else - /* - * Clear the reservation. If we know the CPU tracks the address of - * the reservation then we can potentially save some cycles and use - * a larx. On POWER6 and POWER7 this is significantly faster. - */ BEGIN_FTR_SECTION stdcx. r0,0,r1 /* to clear the reservation */ FTR_SECTION_ELSE - ldarx r4,0,r1 + ldarx r0,0,r1 ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS) - /* - * Some code path such as load_up_fpu or altivec return directly - * here. They run entirely hard disabled and do not alter the - * interrupt state. They also don't use lwarx/stwcx. and thus - * are known not to leave dangling reservations. - */ - .globl fast_exception_return -fast_exception_return: - ld r3,_MSR(r1) - ld r4,_CTR(r1) - ld r0,_LINK(r1) - mtctr r4 - mtlr r0 - ld r4,_XER(r1) - mtspr SPRN_XER,r4 - - kuap_check_amr r5, r6 - - REST_8GPRS(5, r1) + ld r3,_CCR(r1) + ld r4,_LINK(r1) + ld r5,_CTR(r1) + ld r6,_XER(r1) + li r0,0 - andi. r0,r3,MSR_RI - beq- .Lunrecov_restore + REST_4GPRS(7, r1) + REST_2GPRS(11, r1) - /* - * Clear RI before restoring r13. If we are returning to - * userspace and we take an exception after restoring r13, - * we end up corrupting the userspace r13 value. - */ - li r4,0 - mtmsrd r4,1 - -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM - /* TM debug */ - std r3, PACATMSCRATCH(r13) /* Stash returned-to MSR */ -#endif - /* - * r13 is our per cpu area, only restore it if we are returning to - * userspace the value stored in the stack frame may belong to - * another CPU. - */ - andi. r0,r3,MSR_PR - beq 1f -BEGIN_FTR_SECTION - /* Restore PPR */ - ld r2,_PPR(r1) - mtspr SPRN_PPR,r2 -END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) - ACCOUNT_CPU_USER_EXIT(r13, r2, r4) - REST_GPR(13, r1) + bne- cr1,1f /* emulate stack store */ + mtcr r3 + mtlr r4 + mtctr r5 + mtspr SPRN_XER,r6 /* - * We don't need to restore AMR on the way back to userspace for KUAP. - * The value of AMR only matters while we're in the kernel. + * Leaving a stale exception_marker on the stack can confuse + * the reliable stack unwinder later on. Clear it. */ - mtspr SPRN_SRR1,r3 - - ld r2,_CCR(r1) - mtcrf 0xFF,r2 - ld r2,_NIP(r1) - mtspr SPRN_SRR0,r2 + std r0,STACK_FRAME_OVERHEAD-16(r1) - ld r0,GPR0(r1) - ld r2,GPR2(r1) - ld r3,GPR3(r1) - ld r4,GPR4(r1) - ld r1,GPR1(r1) - RFI_TO_USER + REST_4GPRS(2, r1) + REST_GPR(6, r1) + REST_GPR(0, r1) + REST_GPR(1, r1) + RFI_TO_KERNEL b . /* prevent speculative execution */ -1: mtspr SPRN_SRR1,r3 - - ld r2,_CCR(r1) - mtcrf 0xFF,r2 - ld r2,_NIP(r1) - mtspr SPRN_SRR0,r2 +1: mtcr r3 + mtlr r4 + mtctr r5 + mtspr SPRN_XER,r6 /* * Leaving a stale exception_marker on the stack can confuse * the reliable stack unwinder later on. Clear it. */ - li r2,0 - std r2,STACK_FRAME_OVERHEAD-16(r1) + std r0,STACK_FRAME_OVERHEAD-16(r1) - ld r0,GPR0(r1) - ld r2,GPR2(r1) - ld r3,GPR3(r1) + REST_4GPRS(2, r1) + REST_GPR(6, r1) + REST_GPR(0, r1) - kuap_restore_amr r4 + /* Nasty emulate stack store case. */ + std r9,PACA_EXGEN+0(r13) + addi r9,r1,INT_FRAME_SIZE /* get original r1 */ + REST_GPR(1, r1) + std r9,0(r1) + ld r9,PACA_EXGEN+0(r13) - ld r4,GPR4(r1) - ld r1,GPR1(r1) RFI_TO_KERNEL b . 
/* prevent speculative execution */ - -#endif /* CONFIG_PPC_BOOK3E */ - - /* - * We are returning to a context with interrupts soft disabled. - * - * However, we may also about to hard enable, so we need to - * make sure that in this case, we also clear PACA_IRQ_HARD_DIS - * or that bit can get out of sync and bad things will happen - */ -.Lrestore_irq_off: - ld r3,_MSR(r1) - lbz r7,PACAIRQHAPPENED(r13) - andi. r0,r3,MSR_EE - beq 1f - rlwinm r7,r7,0,~PACA_IRQ_HARD_DIS - stb r7,PACAIRQHAPPENED(r13) -1: -#if defined(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG) && defined(CONFIG_BUG) - /* The interrupt should not have soft enabled. */ - lbz r7,PACAIRQSOFTMASK(r13) -1: tdeqi r7,IRQS_ENABLED - EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING -#endif - b .Ldo_restore - - /* - * Something did happen, check if a re-emit is needed - * (this also clears paca->irq_happened) - */ -.Lrestore_check_irq_replay: - /* XXX: We could implement a fast path here where we check - * for irq_happened being just 0x01, in which case we can - * clear it and return. That means that we would potentially - * miss a decrementer having wrapped all the way around. - * - * Still, this might be useful for things like hash_page - */ - bl __check_irq_replay - cmpwi cr0,r3,0 - beq .Lrestore_no_replay - - /* - * We need to re-emit an interrupt. We do so by re-using our - * existing exception frame. We first change the trap value, - * but we need to ensure we preserve the low nibble of it - */ - ld r4,_TRAP(r1) - clrldi r4,r4,60 - or r4,r4,r3 - std r4,_TRAP(r1) - - /* - * PACA_IRQ_HARD_DIS won't always be set here, so set it now - * to reconcile the IRQ state. Tracing is already accounted for. - */ - lbz r4,PACAIRQHAPPENED(r13) - ori r4,r4,PACA_IRQ_HARD_DIS - stb r4,PACAIRQHAPPENED(r13) - - /* - * Then find the right handler and call it. Interrupts are - * still soft-disabled and we keep them that way. - */ - cmpwi cr0,r3,0x500 - bne 1f - addi r3,r1,STACK_FRAME_OVERHEAD; - bl do_IRQ - b ret_from_except -1: cmpwi cr0,r3,0xf00 - bne 1f - addi r3,r1,STACK_FRAME_OVERHEAD; - bl performance_monitor_exception - b ret_from_except -1: cmpwi cr0,r3,0xe60 - bne 1f - addi r3,r1,STACK_FRAME_OVERHEAD; - bl handle_hmi_exception - b ret_from_except -1: cmpwi cr0,r3,0x900 - bne 1f - addi r3,r1,STACK_FRAME_OVERHEAD; - bl timer_interrupt - b ret_from_except -#ifdef CONFIG_PPC_DOORBELL -1: -#ifdef CONFIG_PPC_BOOK3E - cmpwi cr0,r3,0x280 -#else - cmpwi cr0,r3,0xa00 -#endif /* CONFIG_PPC_BOOK3E */ - bne 1f - addi r3,r1,STACK_FRAME_OVERHEAD; - bl doorbell_exception -#endif /* CONFIG_PPC_DOORBELL */ -1: b ret_from_except /* What else to do here ? */ - -.Lunrecov_restore: - addi r3,r1,STACK_FRAME_OVERHEAD - bl unrecoverable_exception - b .Lunrecov_restore - -_ASM_NOKPROBE_SYMBOL(ret_from_except); -_ASM_NOKPROBE_SYMBOL(ret_from_except_lite); -_ASM_NOKPROBE_SYMBOL(resume_kernel); -_ASM_NOKPROBE_SYMBOL(fast_exc_return_irq); -_ASM_NOKPROBE_SYMBOL(restore); -_ASM_NOKPROBE_SYMBOL(fast_exception_return); - +#endif /* CONFIG_PPC_BOOK3S */ #ifdef CONFIG_PPC_RTAS /* diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S index 829950b96d29..5443f84bb0ab 100644 --- a/arch/powerpc/kernel/exceptions-64e.S +++ b/arch/powerpc/kernel/exceptions-64e.S @@ -1073,17 +1073,161 @@ alignment_more: bl alignment_exception b ret_from_except -/* - * We branch here from entry_64.S for the last stage of the exception - * return code path. 
MSR:EE is expected to be off at that point - */ -_GLOBAL(exception_return_book3e) - b 1f + .align 7 +_GLOBAL(ret_from_except) + ld r11,_TRAP(r1) + andi. r0,r11,1 + bne ret_from_except_lite + REST_NVGPRS(r1) + +_GLOBAL(ret_from_except_lite) + /* + * Disable interrupts so that current_thread_info()->flags + * can't change between when we test it and when we return + * from the interrupt. + */ + wrteei 0 + + ld r9, PACA_THREAD_INFO(r13) + ld r3,_MSR(r1) + ld r10,PACACURRENT(r13) + ld r4,TI_FLAGS(r9) + andi. r3,r3,MSR_PR + beq resume_kernel + lwz r3,(THREAD+THREAD_DBCR0)(r10) + + /* Check current_thread_info()->flags */ + andi. r0,r4,_TIF_USER_WORK_MASK + bne 1f + /* + * Check to see if the dbcr0 register is set up to debug. + * Use the internal debug mode bit to do this. + */ + andis. r0,r3,DBCR0_IDM@h + beq restore + mfmsr r0 + rlwinm r0,r0,0,~MSR_DE /* Clear MSR.DE */ + mtmsr r0 + mtspr SPRN_DBCR0,r3 + li r10, -1 + mtspr SPRN_DBSR,r10 + b restore +1: andi. r0,r4,_TIF_NEED_RESCHED + beq 2f + bl restore_interrupts + SCHEDULE_USER + b ret_from_except_lite +2: + bl save_nvgprs + /* + * Use a non volatile GPR to save and restore our thread_info flags + * across the call to restore_interrupts. + */ + mr r30,r4 + bl restore_interrupts + mr r4,r30 + addi r3,r1,STACK_FRAME_OVERHEAD + bl do_notify_resume + b ret_from_except + +resume_kernel: + /* check current_thread_info, _TIF_EMULATE_STACK_STORE */ + andis. r8,r4,_TIF_EMULATE_STACK_STORE@h + beq+ 1f + + addi r8,r1,INT_FRAME_SIZE /* Get the kprobed function entry */ + + ld r3,GPR1(r1) + subi r3,r3,INT_FRAME_SIZE /* dst: Allocate a trampoline exception frame */ + mr r4,r1 /* src: current exception frame */ + mr r1,r3 /* Reroute the trampoline frame to r1 */ + + /* Copy from the original to the trampoline. */ + li r5,INT_FRAME_SIZE/8 /* size: INT_FRAME_SIZE */ + li r6,0 /* start offset: 0 */ + mtctr r5 +2: ldx r0,r6,r4 + stdx r0,r6,r3 + addi r6,r6,8 + bdnz 2b + + /* Do real store operation to complete stdu */ + ld r5,GPR1(r1) + std r8,0(r5) + + /* Clear _TIF_EMULATE_STACK_STORE flag */ + lis r11,_TIF_EMULATE_STACK_STORE@h + addi r5,r9,TI_FLAGS +0: ldarx r4,0,r5 + andc r4,r4,r11 + stdcx. r4,0,r5 + bne- 0b +1: + +#ifdef CONFIG_PREEMPT + /* Check if we need to preempt */ + andi. r0,r4,_TIF_NEED_RESCHED + beq+ restore + /* Check that preempt_count() == 0 and interrupts are enabled */ + lwz r8,TI_PREEMPT(r9) + cmpwi cr0,r8,0 + bne restore + ld r0,SOFTE(r1) + andi. r0,r0,IRQS_DISABLED + bne restore + + /* + * Here we are preempting the current task. We want to make + * sure we are soft-disabled first and reconcile irq state. + */ + RECONCILE_IRQ_STATE(r3,r4) + bl preempt_schedule_irq + + /* + * arch_local_irq_restore() from preempt_schedule_irq above may + * enable hard interrupt but we really should disable interrupts + * when we return from the interrupt, and so that we don't get + * interrupted after loading SRR0/1. + */ + wrteei 0 +#endif /* CONFIG_PREEMPT */ + +restore: + /* + * This is the main kernel exit path. First we check if we + * are about to re-enable interrupts + */ + ld r5,SOFTE(r1) + lbz r6,PACAIRQSOFTMASK(r13) + andi. r5,r5,IRQS_DISABLED + bne .Lrestore_irq_off + + /* We are enabling, were we already enabled ? Yes, just return */ + andi. r6,r6,IRQS_DISABLED + beq cr0,fast_exception_return + + /* + * We are about to soft-enable interrupts (we are hard disabled + * at this point). We check if there's anything that needs to + * be replayed first. 
+ */ + lbz r0,PACAIRQHAPPENED(r13) + cmpwi cr0,r0,0 + bne- .Lrestore_check_irq_replay + + /* + * Get here when nothing happened while soft-disabled, just + * soft-enable and move-on. We will hard-enable as a side + * effect of rfi + */ +.Lrestore_no_replay: + TRACE_ENABLE_INTS + li r0,IRQS_ENABLED + stb r0,PACAIRQSOFTMASK(r13); /* This is the return from load_up_fpu fast path which could do with * less GPR restores in fact, but for now we have a single return path */ - .globl fast_exception_return fast_exception_return: wrteei 0 1: mr r0,r13 @@ -1124,6 +1268,102 @@ fast_exception_return: mfspr r13,SPRN_SPRG_GEN_SCRATCH rfi + /* + * We are returning to a context with interrupts soft disabled. + * + * However, we may also about to hard enable, so we need to + * make sure that in this case, we also clear PACA_IRQ_HARD_DIS + * or that bit can get out of sync and bad things will happen + */ +.Lrestore_irq_off: + ld r3,_MSR(r1) + lbz r7,PACAIRQHAPPENED(r13) + andi. r0,r3,MSR_EE + beq 1f + rlwinm r7,r7,0,~PACA_IRQ_HARD_DIS + stb r7,PACAIRQHAPPENED(r13) +1: +#if defined(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG) && defined(CONFIG_BUG) + /* The interrupt should not have soft enabled. */ + lbz r7,PACAIRQSOFTMASK(r13) +1: tdeqi r7,IRQS_ENABLED + EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING +#endif + b fast_exception_return + + /* + * Something did happen, check if a re-emit is needed + * (this also clears paca->irq_happened) + */ +.Lrestore_check_irq_replay: + /* XXX: We could implement a fast path here where we check + * for irq_happened being just 0x01, in which case we can + * clear it and return. That means that we would potentially + * miss a decrementer having wrapped all the way around. + * + * Still, this might be useful for things like hash_page + */ + bl __check_irq_replay + cmpwi cr0,r3,0 + beq .Lrestore_no_replay + + /* + * We need to re-emit an interrupt. We do so by re-using our + * existing exception frame. We first change the trap value, + * but we need to ensure we preserve the low nibble of it + */ + ld r4,_TRAP(r1) + clrldi r4,r4,60 + or r4,r4,r3 + std r4,_TRAP(r1) + + /* + * PACA_IRQ_HARD_DIS won't always be set here, so set it now + * to reconcile the IRQ state. Tracing is already accounted for. + */ + lbz r4,PACAIRQHAPPENED(r13) + ori r4,r4,PACA_IRQ_HARD_DIS + stb r4,PACAIRQHAPPENED(r13) + + /* + * Then find the right handler and call it. Interrupts are + * still soft-disabled and we keep them that way. + */ + cmpwi cr0,r3,0x500 + bne 1f + addi r3,r1,STACK_FRAME_OVERHEAD; + bl do_IRQ + b ret_from_except +1: cmpwi cr0,r3,0xf00 + bne 1f + addi r3,r1,STACK_FRAME_OVERHEAD; + bl performance_monitor_exception + b ret_from_except +1: cmpwi cr0,r3,0xe60 + bne 1f + addi r3,r1,STACK_FRAME_OVERHEAD; + bl handle_hmi_exception + b ret_from_except +1: cmpwi cr0,r3,0x900 + bne 1f + addi r3,r1,STACK_FRAME_OVERHEAD; + bl timer_interrupt + b ret_from_except +#ifdef CONFIG_PPC_DOORBELL +1: + cmpwi cr0,r3,0x280 + bne 1f + addi r3,r1,STACK_FRAME_OVERHEAD; + bl doorbell_exception +#endif /* CONFIG_PPC_DOORBELL */ +1: b ret_from_except /* What else to do here ? */ + +_ASM_NOKPROBE_SYMBOL(ret_from_except); +_ASM_NOKPROBE_SYMBOL(ret_from_except_lite); +_ASM_NOKPROBE_SYMBOL(resume_kernel); +_ASM_NOKPROBE_SYMBOL(restore); +_ASM_NOKPROBE_SYMBOL(fast_exception_return); + /* * Trampolines used when spotting a bad kernel stack pointer in * the exception entry code. 
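The exceptions-64s.S hunk that follows mostly repoints the common handlers from ret_from_except to the new interrupt_return entry points and drops the save_nvgprs calls; the interesting logic lives in the C exit path. As a rough user-space model of the exit-work loop shared by syscall_exit_prepare() (shown earlier in this series) and the new interrupt exit code, consider the sketch below: the work flags are sampled with interrupts disabled, and if work is pending, interrupts are re-enabled, the work is handled, and the check repeats. Every toy_ name is invented for the illustration; the real code also handles lazy hard-irq masking, FP/VEC restore and KUAP, which are left out.

#include <stdio.h>

/* Toy thread-info work flags, standing in for _TIF_NEED_RESCHED and co. */
#define TOY_TIF_NEED_RESCHED	0x1UL
#define TOY_TIF_SIGPENDING	0x2UL
#define TOY_TIF_WORK_MASK	(TOY_TIF_NEED_RESCHED | TOY_TIF_SIGPENDING)

static unsigned long toy_ti_flags = TOY_TIF_NEED_RESCHED | TOY_TIF_SIGPENDING;

static void toy_local_irq_disable(void) { /* stand-in for local_irq_disable() */ }
static void toy_local_irq_enable(void)  { /* stand-in for local_irq_enable() */ }

static void toy_schedule(void)
{
	printf("rescheduling\n");
	toy_ti_flags &= ~TOY_TIF_NEED_RESCHED;
}

static void toy_do_notify_resume(void)
{
	printf("delivering signal\n");
	toy_ti_flags &= ~TOY_TIF_SIGPENDING;
}

/*
 * Models the exit-work loop: sample the flags with interrupts disabled so
 * the final return to user cannot race with newly posted work; if work is
 * found, re-enable interrupts, handle it, disable again and re-check.
 * Returns nonzero when the asm tail must restore the full GPR set.
 */
static int toy_exit_prepare(void)
{
	int restore_all = 0;

	toy_local_irq_disable();
	while (toy_ti_flags & TOY_TIF_WORK_MASK) {
		toy_local_irq_enable();
		if (toy_ti_flags & TOY_TIF_NEED_RESCHED) {
			toy_schedule();
		} else {
			/* signal delivery rewrites user registers, so ask
			 * the asm tail for a full restore */
			restore_all = 1;
			toy_do_notify_resume();
		}
		toy_local_irq_disable();
	}
	return restore_all;
}

int main(void)
{
	printf("restore_all = %d\n", toy_exit_prepare());
	return 0;
}

In the series itself the same structure appears in syscall_exit_prepare() above, with the extra twist that a hard interrupt arriving just before the final hard-disable sends the code back through the loop via the "goto again" path.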
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index af1264cd005f..269edd1460be 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -575,6 +575,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) std r10,GPR12(r1) std r11,GPR13(r1) + SAVE_NVGPRS(r1) + .if IDAR .if IISIDE ld r10,_NIP(r1) @@ -611,7 +613,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR) mfspr r11,SPRN_XER /* save XER in stackframe */ std r10,SOFTE(r1) std r11,_XER(r1) - li r9,(IVEC)+1 + li r9,IVEC std r9,_TRAP(r1) /* set trap number */ li r10,0 ld r11,exception_marker@toc(r2) @@ -918,7 +920,6 @@ EXC_COMMON_BEGIN(system_reset_common) ld r1,PACA_NMI_EMERG_SP(r13) subi r1,r1,INT_FRAME_SIZE __GEN_COMMON_BODY system_reset - bl save_nvgprs /* * Set IRQS_ALL_DISABLED unconditionally so arch_irqs_disabled does * the right thing. We do not want to reconcile because that goes @@ -1093,7 +1094,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) li r10,MSR_RI mtmsrd r10,1 - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl machine_check_early std r3,RESULT(r1) /* Save result */ @@ -1186,10 +1186,9 @@ EXC_COMMON_BEGIN(machine_check_common) /* Enable MSR_RI when finished with PACA_EXMC */ li r10,MSR_RI mtmsrd r10,1 - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl machine_check_exception - b ret_from_except + b interrupt_return GEN_KVM machine_check @@ -1356,20 +1355,19 @@ BEGIN_MMU_FTR_SECTION bl do_slb_fault cmpdi r3,0 bne- 1f - b fast_exception_return + b fast_interrupt_return 1: /* Error case */ MMU_FTR_SECTION_ELSE /* Radix case, access is outside page table range */ li r3,-EFAULT ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) std r3,RESULT(r1) - bl save_nvgprs RECONCILE_IRQ_STATE(r10, r11) ld r4,_DAR(r1) ld r5,RESULT(r1) addi r3,r1,STACK_FRAME_OVERHEAD bl do_bad_slb_fault - b ret_from_except + b interrupt_return GEN_KVM data_access_slb @@ -1449,20 +1447,19 @@ BEGIN_MMU_FTR_SECTION bl do_slb_fault cmpdi r3,0 bne- 1f - b fast_exception_return + b fast_interrupt_return 1: /* Error case */ MMU_FTR_SECTION_ELSE /* Radix case, access is outside page table range */ li r3,-EFAULT ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) std r3,RESULT(r1) - bl save_nvgprs RECONCILE_IRQ_STATE(r10, r11) ld r4,_DAR(r1) ld r5,RESULT(r1) addi r3,r1,STACK_FRAME_OVERHEAD bl do_bad_slb_fault - b ret_from_except + b interrupt_return GEN_KVM instruction_access_slb @@ -1510,7 +1507,7 @@ EXC_COMMON_BEGIN(hardware_interrupt_common) RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD bl do_IRQ - b ret_from_except_lite + b interrupt_return_lite GEN_KVM hardware_interrupt @@ -1536,10 +1533,9 @@ EXC_VIRT_BEGIN(alignment, 0x4600, 0x100) EXC_VIRT_END(alignment, 0x4600, 0x100) EXC_COMMON_BEGIN(alignment_common) GEN_COMMON alignment - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl alignment_exception - b ret_from_except + b interrupt_return GEN_KVM alignment @@ -1600,10 +1596,9 @@ EXC_COMMON_BEGIN(program_check_common) __ISTACK(program_check)=1 __GEN_COMMON_BODY program_check 3: - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl program_check_exception - b ret_from_except + b interrupt_return GEN_KVM program_check @@ -1634,7 +1629,6 @@ EXC_VIRT_END(fp_unavailable, 0x4800, 0x100) EXC_COMMON_BEGIN(fp_unavailable_common) GEN_COMMON fp_unavailable bne 1f /* if from user, just load it up */ - bl save_nvgprs RECONCILE_IRQ_STATE(r10, r11) addi r3,r1,STACK_FRAME_OVERHEAD bl kernel_fp_unavailable_exception @@ -1651,14 +1645,13 @@ BEGIN_FTR_SECTION END_FTR_SECTION_IFSET(CPU_FTR_TM) #endif bl load_up_fpu - b 
fast_exception_return + b fast_interrupt_return #ifdef CONFIG_PPC_TRANSACTIONAL_MEM 2: /* User process was in a transaction */ - bl save_nvgprs RECONCILE_IRQ_STATE(r10, r11) addi r3,r1,STACK_FRAME_OVERHEAD bl fp_unavailable_tm - b ret_from_except + b interrupt_return #endif GEN_KVM fp_unavailable @@ -1701,7 +1694,7 @@ EXC_COMMON_BEGIN(decrementer_common) RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD bl timer_interrupt - b ret_from_except_lite + b interrupt_return_lite GEN_KVM decrementer @@ -1792,7 +1785,7 @@ EXC_COMMON_BEGIN(doorbell_super_common) #else bl unknown_exception #endif - b ret_from_except_lite + b interrupt_return_lite GEN_KVM doorbell_super @@ -1977,10 +1970,9 @@ EXC_VIRT_BEGIN(single_step, 0x4d00, 0x100) EXC_VIRT_END(single_step, 0x4d00, 0x100) EXC_COMMON_BEGIN(single_step_common) GEN_COMMON single_step - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl single_step_exception - b ret_from_except + b interrupt_return GEN_KVM single_step @@ -2015,7 +2007,6 @@ EXC_VIRT_BEGIN(h_data_storage, 0x4e00, 0x20) EXC_VIRT_END(h_data_storage, 0x4e00, 0x20) EXC_COMMON_BEGIN(h_data_storage_common) GEN_COMMON h_data_storage - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD BEGIN_MMU_FTR_SECTION ld r4,_DAR(r1) @@ -2024,7 +2015,7 @@ BEGIN_MMU_FTR_SECTION MMU_FTR_SECTION_ELSE bl unknown_exception ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_TYPE_RADIX) - b ret_from_except + b interrupt_return GEN_KVM h_data_storage @@ -2049,10 +2040,9 @@ EXC_VIRT_BEGIN(h_instr_storage, 0x4e20, 0x20) EXC_VIRT_END(h_instr_storage, 0x4e20, 0x20) EXC_COMMON_BEGIN(h_instr_storage_common) GEN_COMMON h_instr_storage - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl unknown_exception - b ret_from_except + b interrupt_return GEN_KVM h_instr_storage @@ -2075,10 +2065,9 @@ EXC_VIRT_BEGIN(emulation_assist, 0x4e40, 0x20) EXC_VIRT_END(emulation_assist, 0x4e40, 0x20) EXC_COMMON_BEGIN(emulation_assist_common) GEN_COMMON emulation_assist - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl emulation_assist_interrupt - b ret_from_except + b interrupt_return GEN_KVM emulation_assist @@ -2158,10 +2147,9 @@ EXC_COMMON_BEGIN(hmi_exception_common) GEN_COMMON hmi_exception FINISH_NAP RUNLATCH_ON - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl handle_hmi_exception - b ret_from_except + b interrupt_return GEN_KVM hmi_exception @@ -2195,7 +2183,7 @@ EXC_COMMON_BEGIN(h_doorbell_common) #else bl unknown_exception #endif - b ret_from_except_lite + b interrupt_return_lite GEN_KVM h_doorbell @@ -2225,7 +2213,7 @@ EXC_COMMON_BEGIN(h_virt_irq_common) RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD bl do_IRQ - b ret_from_except_lite + b interrupt_return_lite GEN_KVM h_virt_irq @@ -2272,7 +2260,7 @@ EXC_COMMON_BEGIN(performance_monitor_common) RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD bl performance_monitor_exception - b ret_from_except_lite + b interrupt_return_lite GEN_KVM performance_monitor @@ -2312,23 +2300,21 @@ BEGIN_FTR_SECTION END_FTR_SECTION_NESTED(CPU_FTR_TM, CPU_FTR_TM, 69) #endif bl load_up_altivec - b fast_exception_return + b fast_interrupt_return #ifdef CONFIG_PPC_TRANSACTIONAL_MEM 2: /* User process was in a transaction */ - bl save_nvgprs RECONCILE_IRQ_STATE(r10, r11) addi r3,r1,STACK_FRAME_OVERHEAD bl altivec_unavailable_tm - b ret_from_except + b interrupt_return #endif 1: END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) #endif - bl save_nvgprs RECONCILE_IRQ_STATE(r10, r11) addi r3,r1,STACK_FRAME_OVERHEAD bl altivec_unavailable_exception - b ret_from_except + b interrupt_return GEN_KVM altivec_unavailable @@ -2370,20 +2356,18 @@ 
BEGIN_FTR_SECTION b load_up_vsx #ifdef CONFIG_PPC_TRANSACTIONAL_MEM 2: /* User process was in a transaction */ - bl save_nvgprs RECONCILE_IRQ_STATE(r10, r11) addi r3,r1,STACK_FRAME_OVERHEAD bl vsx_unavailable_tm - b ret_from_except + b interrupt_return #endif 1: END_FTR_SECTION_IFSET(CPU_FTR_VSX) #endif - bl save_nvgprs RECONCILE_IRQ_STATE(r10, r11) addi r3,r1,STACK_FRAME_OVERHEAD bl vsx_unavailable_exception - b ret_from_except + b interrupt_return GEN_KVM vsx_unavailable @@ -2410,10 +2394,9 @@ EXC_VIRT_BEGIN(facility_unavailable, 0x4f60, 0x20) EXC_VIRT_END(facility_unavailable, 0x4f60, 0x20) EXC_COMMON_BEGIN(facility_unavailable_common) GEN_COMMON facility_unavailable - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl facility_unavailable_exception - b ret_from_except + b interrupt_return GEN_KVM facility_unavailable @@ -2440,10 +2423,9 @@ EXC_VIRT_BEGIN(h_facility_unavailable, 0x4f80, 0x20) EXC_VIRT_END(h_facility_unavailable, 0x4f80, 0x20) EXC_COMMON_BEGIN(h_facility_unavailable_common) GEN_COMMON h_facility_unavailable - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl facility_unavailable_exception - b ret_from_except + b interrupt_return GEN_KVM h_facility_unavailable @@ -2474,10 +2456,9 @@ EXC_REAL_END(cbe_system_error, 0x1200, 0x100) EXC_VIRT_NONE(0x5200, 0x100) EXC_COMMON_BEGIN(cbe_system_error_common) GEN_COMMON cbe_system_error - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl cbe_system_error_exception - b ret_from_except + b interrupt_return GEN_KVM cbe_system_error @@ -2503,10 +2484,9 @@ EXC_VIRT_BEGIN(instruction_breakpoint, 0x5300, 0x100) EXC_VIRT_END(instruction_breakpoint, 0x5300, 0x100) EXC_COMMON_BEGIN(instruction_breakpoint_common) GEN_COMMON instruction_breakpoint - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl instruction_breakpoint_exception - b ret_from_except + b interrupt_return GEN_KVM instruction_breakpoint @@ -2626,10 +2606,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR) EXC_COMMON_BEGIN(denorm_exception_common) GEN_COMMON denorm_exception - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl unknown_exception - b ret_from_except + b interrupt_return GEN_KVM denorm_exception @@ -2648,10 +2627,9 @@ EXC_REAL_END(cbe_maintenance, 0x1600, 0x100) EXC_VIRT_NONE(0x5600, 0x100) EXC_COMMON_BEGIN(cbe_maintenance_common) GEN_COMMON cbe_maintenance - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl cbe_maintenance_exception - b ret_from_except + b interrupt_return GEN_KVM cbe_maintenance @@ -2676,14 +2654,13 @@ EXC_VIRT_BEGIN(altivec_assist, 0x5700, 0x100) EXC_VIRT_END(altivec_assist, 0x5700, 0x100) EXC_COMMON_BEGIN(altivec_assist_common) GEN_COMMON altivec_assist - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD #ifdef CONFIG_ALTIVEC bl altivec_assist_exception #else bl unknown_exception #endif - b ret_from_except + b interrupt_return GEN_KVM altivec_assist @@ -2702,10 +2679,9 @@ EXC_REAL_END(cbe_thermal, 0x1800, 0x100) EXC_VIRT_NONE(0x5800, 0x100) EXC_COMMON_BEGIN(cbe_thermal_common) GEN_COMMON cbe_thermal - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl cbe_thermal_exception - b ret_from_except + b interrupt_return GEN_KVM cbe_thermal @@ -2737,7 +2713,6 @@ EXC_COMMON_BEGIN(soft_nmi_common) ld r1,PACAEMERGSP(r13) subi r1,r1,INT_FRAME_SIZE __GEN_COMMON_BODY soft_nmi - bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl soft_nmi_interrupt /* Clear MSR_RI before setting SRR0 and SRR1. 
*/ @@ -3038,7 +3013,7 @@ do_hash_page: cmpdi r3,0 /* see if __hash_page succeeded */ /* Success */ - beq fast_exc_return_irq /* Return from exception on success */ + beq interrupt_return_lite /* Return from exception on success */ /* Error */ blt- 13f @@ -3055,17 +3030,15 @@ handle_page_fault: addi r3,r1,STACK_FRAME_OVERHEAD bl do_page_fault cmpdi r3,0 - beq+ ret_from_except_lite - bl save_nvgprs + beq+ interrupt_return_lite mr r5,r3 addi r3,r1,STACK_FRAME_OVERHEAD ld r4,_DAR(r1) bl bad_page_fault - b ret_from_except + b interrupt_return /* We have a data breakpoint exception - handle it */ handle_dabr_fault: - bl save_nvgprs ld r4,_DAR(r1) ld r5,_DSISR(r1) addi r3,r1,STACK_FRAME_OVERHEAD @@ -3073,21 +3046,20 @@ handle_dabr_fault: /* * do_break() may have changed the NV GPRS while handling a breakpoint. * If so, we need to restore them with their updated values. Don't use - * ret_from_except_lite here. + * interrupt_return_lite here. */ - b ret_from_except + b interrupt_return #ifdef CONFIG_PPC_BOOK3S_64 /* We have a page fault that hash_page could handle but HV refused * the PTE insertion */ -13: bl save_nvgprs - mr r5,r3 +13: mr r5,r3 addi r3,r1,STACK_FRAME_OVERHEAD ld r4,_DAR(r1) bl low_hash_fault - b ret_from_except + b interrupt_return #endif /* @@ -3097,11 +3069,10 @@ handle_dabr_fault: * were soft-disabled. We want to invoke the exception handler for * the access, or panic if there isn't a handler. */ -77: bl save_nvgprs - addi r3,r1,STACK_FRAME_OVERHEAD +77: addi r3,r1,STACK_FRAME_OVERHEAD li r5,SIGSEGV bl bad_page_fault - b ret_from_except + b interrupt_return /* * When doorbell is triggered from system reset wakeup, the message is diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 639ceae7da9d..fb74b81fa643 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -236,23 +236,9 @@ void enable_kernel_fp(void) } } EXPORT_SYMBOL(enable_kernel_fp); - -static int restore_fp(struct task_struct *tsk) -{ - if (tsk->thread.load_fp) { - load_fp_state(¤t->thread.fp_state); - current->thread.load_fp++; - return 1; - } - return 0; -} -#else -static int restore_fp(struct task_struct *tsk) { return 0; } #endif /* CONFIG_PPC_FPU */ #ifdef CONFIG_ALTIVEC -#define loadvec(thr) ((thr).load_vec) - static void __giveup_altivec(struct task_struct *tsk) { unsigned long msr; @@ -318,21 +304,6 @@ void flush_altivec_to_thread(struct task_struct *tsk) } } EXPORT_SYMBOL_GPL(flush_altivec_to_thread); - -static int restore_altivec(struct task_struct *tsk) -{ - if (cpu_has_feature(CPU_FTR_ALTIVEC) && (tsk->thread.load_vec)) { - load_vr_state(&tsk->thread.vr_state); - tsk->thread.used_vr = 1; - tsk->thread.load_vec++; - - return 1; - } - return 0; -} -#else -#define loadvec(thr) 0 -static inline int restore_altivec(struct task_struct *tsk) { return 0; } #endif /* CONFIG_ALTIVEC */ #ifdef CONFIG_VSX @@ -400,18 +371,6 @@ void flush_vsx_to_thread(struct task_struct *tsk) } } EXPORT_SYMBOL_GPL(flush_vsx_to_thread); - -static int restore_vsx(struct task_struct *tsk) -{ - if (cpu_has_feature(CPU_FTR_VSX)) { - tsk->thread.used_vsr = 1; - return 1; - } - - return 0; -} -#else -static inline int restore_vsx(struct task_struct *tsk) { return 0; } #endif /* CONFIG_VSX */ #ifdef CONFIG_SPE @@ -511,6 +470,53 @@ void giveup_all(struct task_struct *tsk) } EXPORT_SYMBOL(giveup_all); +#ifdef CONFIG_PPC_BOOK3S_64 +#ifdef CONFIG_PPC_FPU +static int restore_fp(struct task_struct *tsk) +{ + if (tsk->thread.load_fp) { + load_fp_state(¤t->thread.fp_state); + 
current->thread.load_fp++; + return 1; + } + return 0; +} +#else +static int restore_fp(struct task_struct *tsk) { return 0; } +#endif /* CONFIG_PPC_FPU */ + +#ifdef CONFIG_ALTIVEC +#define loadvec(thr) ((thr).load_vec) +static int restore_altivec(struct task_struct *tsk) +{ + if (cpu_has_feature(CPU_FTR_ALTIVEC) && (tsk->thread.load_vec)) { + load_vr_state(&tsk->thread.vr_state); + tsk->thread.used_vr = 1; + tsk->thread.load_vec++; + + return 1; + } + return 0; +} +#else +#define loadvec(thr) 0 +static inline int restore_altivec(struct task_struct *tsk) { return 0; } +#endif /* CONFIG_ALTIVEC */ + +#ifdef CONFIG_VSX +static int restore_vsx(struct task_struct *tsk) +{ + if (cpu_has_feature(CPU_FTR_VSX)) { + tsk->thread.used_vsr = 1; + return 1; + } + + return 0; +} +#else +static inline int restore_vsx(struct task_struct *tsk) { return 0; } +#endif /* CONFIG_VSX */ + /* * The exception exit path calls restore_math() with interrupts hard disabled * but the soft irq state not "reconciled". ftrace code that calls @@ -551,6 +557,7 @@ void notrace restore_math(struct pt_regs *regs) regs->msr = msr; } +#endif static void save_all(struct task_struct *tsk) { diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c index ce4e1b5a9c27..d00cfc4a39a9 100644 --- a/arch/powerpc/kernel/syscall_64.c +++ b/arch/powerpc/kernel/syscall_64.c @@ -20,7 +20,11 @@ long system_call_exception(long r3, long r4, long r5, long r6, long r7, long r8, unsigned long ti_flags; syscall_fn f; + if (IS_ENABLED(CONFIG_PPC_BOOK3S)) + BUG_ON(!(regs->msr & MSR_RI)); BUG_ON(!(regs->msr & MSR_PR)); + BUG_ON(!FULL_REGS(regs)); + BUG_ON(regs->softe != IRQS_ENABLED); if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && unlikely(regs->msr & MSR_TS_T)) @@ -177,7 +181,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3, struct pt_regs *reg trace_hardirqs_off(); local_paca->irq_happened |= PACA_IRQ_HARD_DIS; local_irq_enable(); - /* Took an interrupt which may have more exit work to do. */ + /* Took an interrupt, may have more exit work to do. 
*/ goto again; } local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS; @@ -193,3 +197,154 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3, struct pt_regs *reg return ret; } + +#ifdef CONFIG_PPC_BOOK3S /* BOOK3E not yet using this */ +notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned long msr) +{ +#ifdef CONFIG_PPC_BOOK3E + struct thread_struct *ts = ¤t->thread; +#endif + unsigned long *ti_flagsp = ¤t_thread_info()->flags; + unsigned long ti_flags; + unsigned long flags; + unsigned long ret = 0; + + if (IS_ENABLED(CONFIG_PPC_BOOK3S)) + BUG_ON(!(regs->msr & MSR_RI)); + BUG_ON(!(regs->msr & MSR_PR)); + BUG_ON(!FULL_REGS(regs)); + BUG_ON(regs->softe != IRQS_ENABLED); + + local_irq_save(flags); + +again: + ti_flags = READ_ONCE(*ti_flagsp); + while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) { + local_irq_enable(); /* returning to user: may enable */ + if (ti_flags & _TIF_NEED_RESCHED) { + schedule(); + } else { + if (ti_flags & _TIF_SIGPENDING) + ret |= _TIF_RESTOREALL; + do_notify_resume(regs, ti_flags); + } + local_irq_disable(); + ti_flags = READ_ONCE(*ti_flagsp); + } + + if (IS_ENABLED(CONFIG_PPC_BOOK3S)) { + unsigned long mathflags = 0; + + if (IS_ENABLED(CONFIG_PPC_FPU)) + mathflags |= MSR_FP; + if (IS_ENABLED(CONFIG_ALTIVEC)) + mathflags |= MSR_VEC; + + if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && + (ti_flags & _TIF_RESTORE_TM)) + restore_tm_state(regs); + else if ((regs->msr & mathflags) != mathflags) + restore_math(regs); + } + + trace_hardirqs_on(); + __hard_EE_RI_disable(); + if (unlikely(lazy_irq_pending())) { + __hard_RI_enable(); + trace_hardirqs_off(); + local_paca->irq_happened |= PACA_IRQ_HARD_DIS; + local_irq_enable(); + local_irq_disable(); + /* Took an interrupt, may have more exit work to do. */ + goto again; + } + local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS; + irq_soft_mask_set(IRQS_ENABLED); + +#ifdef CONFIG_PPC_BOOK3E + if (unlikely(ts->debug.dbcr0 & DBCR0_IDM)) { + /* + * Check to see if the dbcr0 register is set up to debug. + * Use the internal debug mode bit to do this. + */ + mtmsr(mfmsr() & ~MSR_DE); + mtspr(SPRN_DBCR0, ts->debug.dbcr0); + mtspr(SPRN_DBSR, -1); + } +#endif + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + local_paca->tm_scratch = regs->msr; +#endif + + kuap_check_amr(); + + account_cpu_user_exit(); + + return ret; +} + +void unrecoverable_exception(struct pt_regs *regs); +void preempt_schedule_irq(void); + +notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsigned long msr) +{ + unsigned long *ti_flagsp = ¤t_thread_info()->flags; + unsigned long flags; + + if (IS_ENABLED(CONFIG_PPC_BOOK3S) && unlikely(!(regs->msr & MSR_RI))) + unrecoverable_exception(regs); + BUG_ON(regs->msr & MSR_PR); + BUG_ON(!FULL_REGS(regs)); + + local_irq_save(flags); + + if (regs->softe == IRQS_ENABLED) { + /* Returning to a kernel context with local irqs enabled. */ +again: + if (IS_ENABLED(CONFIG_PREEMPT)) { + /* Return to preemptible kernel context */ + if (unlikely(*ti_flagsp & _TIF_NEED_RESCHED)) { + if (preempt_count() == 0) + preempt_schedule_irq(); + } + } + + trace_hardirqs_on(); + __hard_EE_RI_disable(); + if (unlikely(lazy_irq_pending())) { + __hard_RI_enable(); + trace_hardirqs_off(); + local_paca->irq_happened |= PACA_IRQ_HARD_DIS; + local_irq_enable(); + local_irq_disable(); + /* Took an interrupt, may have more exit work to do. */ + goto again; + } + irq_soft_mask_set(IRQS_ENABLED); + } else { + /* Returning to a kernel context with local irqs disabled. 
*/ + trace_hardirqs_on(); + __hard_EE_RI_disable(); + } + + if (regs->msr & MSR_EE) + local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS; + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + local_paca->tm_scratch = regs->msr; +#endif + + /* + * We don't need to restore AMR on the way back to userspace for KUAP. + * The value of AMR only matters while we're in the kernel. + */ + kuap_restore_amr(regs); + + if (unlikely(*ti_flagsp & _TIF_EMULATE_STACK_STORE)) { + clear_bits(_TIF_EMULATE_STACK_STORE, ti_flagsp); + return 1; + } + return 0; +} +#endif diff --git a/arch/powerpc/kernel/vector.S b/arch/powerpc/kernel/vector.S index 8eb867dbad5f..44e7a776e56f 100644 --- a/arch/powerpc/kernel/vector.S +++ b/arch/powerpc/kernel/vector.S @@ -131,7 +131,7 @@ _GLOBAL(load_up_vsx) /* enable use of VSX after return */ oris r12,r12,MSR_VSX@h std r12,_MSR(r1) - b fast_exception_return + b fast_interrupt_return #endif /* CONFIG_VSX */ From patchwork Tue Nov 12 16:52:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193756 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CG0Q6bTRz9sPh for ; Wed, 13 Nov 2019 05:06:46 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CG0Q14J7zF5Wn for ; Wed, 13 Nov 2019 05:06:46 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNM04ZzzF1wq for ; Wed, 13 Nov 2019 03:53:55 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id EFBA0AD72; Tue, 12 Nov 2019 16:53:51 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 25/33] powerpc/64s/exception: remove lite interrupt return Date: Tue, 12 Nov 2019 17:52:23 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. 
Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Nicholas Piggin The difference between lite and regular returns is that the lite case restores all NVGPRs, whereas lite skips that. This is quite clumsy though, most interrupts want the NVGPRs saved for debugging, not to modify in the caller, so the NVGPRs restore is not necessary most of the time. Restore NVGPRs explicitly for one case that requires it, and move everything else over to avoiding the restore unless the interrupt return demands it (e.g., handling a signal). Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/entry_64.S | 4 ---- arch/powerpc/kernel/exceptions-64s.S | 21 +++++++++++---------- 2 files changed, 11 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index b2e68f5ca8f7..00173cc904ef 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S @@ -452,10 +452,6 @@ _GLOBAL(fast_interrupt_return) .balign IFETCH_ALIGN_BYTES _GLOBAL(interrupt_return) - REST_NVGPRS(r1) - - .balign IFETCH_ALIGN_BYTES -_GLOBAL(interrupt_return_lite) ld r4,_MSR(r1) andi. r0,r4,MSR_PR beq kernel_interrupt_return diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 269edd1460be..1bccc869ebd3 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1507,7 +1507,7 @@ EXC_COMMON_BEGIN(hardware_interrupt_common) RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD bl do_IRQ - b interrupt_return_lite + b interrupt_return GEN_KVM hardware_interrupt @@ -1694,7 +1694,7 @@ EXC_COMMON_BEGIN(decrementer_common) RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD bl timer_interrupt - b interrupt_return_lite + b interrupt_return GEN_KVM decrementer @@ -1785,7 +1785,7 @@ EXC_COMMON_BEGIN(doorbell_super_common) #else bl unknown_exception #endif - b interrupt_return_lite + b interrupt_return GEN_KVM doorbell_super @@ -2183,7 +2183,7 @@ EXC_COMMON_BEGIN(h_doorbell_common) #else bl unknown_exception #endif - b interrupt_return_lite + b interrupt_return GEN_KVM h_doorbell @@ -2213,7 +2213,7 @@ EXC_COMMON_BEGIN(h_virt_irq_common) RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD bl do_IRQ - b interrupt_return_lite + b interrupt_return GEN_KVM h_virt_irq @@ -2260,7 +2260,7 @@ EXC_COMMON_BEGIN(performance_monitor_common) RUNLATCH_ON addi r3,r1,STACK_FRAME_OVERHEAD bl performance_monitor_exception - b interrupt_return_lite + b interrupt_return GEN_KVM performance_monitor @@ -3013,7 +3013,7 @@ do_hash_page: cmpdi r3,0 /* see if __hash_page succeeded */ /* Success */ - beq interrupt_return_lite /* Return from exception on success */ + beq interrupt_return /* Return from exception on success */ /* Error */ blt- 13f @@ -3027,10 +3027,11 @@ do_hash_page: handle_page_fault: 11: andis. 
r0,r5,DSISR_DABRMATCH@h bne- handle_dabr_fault + bl save_nvgprs addi r3,r1,STACK_FRAME_OVERHEAD bl do_page_fault cmpdi r3,0 - beq+ interrupt_return_lite + beq+ interrupt_return mr r5,r3 addi r3,r1,STACK_FRAME_OVERHEAD ld r4,_DAR(r1) @@ -3045,9 +3046,9 @@ handle_dabr_fault: bl do_break /* * do_break() may have changed the NV GPRS while handling a breakpoint. - * If so, we need to restore them with their updated values. Don't use - * interrupt_return_lite here. + * If so, we need to restore them with their updated values. */ + REST_NVGPRS(r1) b interrupt_return From patchwork Tue Nov 12 16:52:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193757 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CGDk4DYBz9sNx for ; Wed, 13 Nov 2019 05:17:26 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CGDk1xxjzF5S8 for ; Wed, 13 Nov 2019 05:17:26 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNN4lxmzF33j for ; Wed, 13 Nov 2019 03:53:56 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 80717ACFE; Tue, 12 Nov 2019 16:53:53 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 26/33] powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro Date: Tue, 12 Nov 2019 17:52:24 +0100 Message-Id: <964c32e47c17190386f9257de050249834161115.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. 
Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" This partially reverts commit caf6f9c8a326 ("asm-generic: Remove unneeded __ARCH_WANT_SYS_LLSEEK macro") When CONFIG_COMPAT is disabled on ppc64 the kernel does not build. There is resistance to both removing the llseek syscall from the 64bit syscall tables and building the llseek interface unconditionally. Link: https://lore.kernel.org/lkml/20190828151552.GA16855@infradead.org/ Link: https://lore.kernel.org/lkml/20190829214319.498c7de2@naga/ Signed-off-by: Michal Suchanek Reviewed-by: Arnd Bergmann --- arch/powerpc/include/asm/unistd.h | 1 + fs/read_write.c | 3 ++- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h index b0720c7c3fcf..700fcdac2e3c 100644 --- a/arch/powerpc/include/asm/unistd.h +++ b/arch/powerpc/include/asm/unistd.h @@ -31,6 +31,7 @@ #define __ARCH_WANT_SYS_SOCKETCALL #define __ARCH_WANT_SYS_FADVISE64 #define __ARCH_WANT_SYS_GETPGRP +#define __ARCH_WANT_SYS_LLSEEK #define __ARCH_WANT_SYS_NICE #define __ARCH_WANT_SYS_OLD_GETRLIMIT #define __ARCH_WANT_SYS_OLD_UNAME diff --git a/fs/read_write.c b/fs/read_write.c index 5bbf587f5bc1..89aa2701dbeb 100644 --- a/fs/read_write.c +++ b/fs/read_write.c @@ -331,7 +331,8 @@ COMPAT_SYSCALL_DEFINE3(lseek, unsigned int, fd, compat_off_t, offset, unsigned i } #endif -#if !defined(CONFIG_64BIT) || defined(CONFIG_COMPAT) +#if !defined(CONFIG_64BIT) || defined(CONFIG_COMPAT) || \ + defined(__ARCH_WANT_SYS_LLSEEK) SYSCALL_DEFINE5(llseek, unsigned int, fd, unsigned long, offset_high, unsigned long, offset_low, loff_t __user *, result, unsigned int, whence) From patchwork Tue Nov 12 16:52:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193758 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CGML01bNz9sNx for ; Wed, 13 Nov 2019 05:23:10 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CGMK4nvjzDqhn for ; Wed, 13 Nov 2019 05:23:09 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNQ19v6zF32P for ; Wed, 13 Nov 2019 03:53:58 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 02617AFA0; Tue, 12 Nov 2019 16:53:55 +0000 (UTC) From: Michal Suchanek To: 
linuxppc-dev@lists.ozlabs.org Subject: [PATCH 27/33] powerpc: move common register copy functions from signal_32.c to signal.c Date: Tue, 12 Nov 2019 17:52:25 +0100 Message-Id: <5a6eab1eb499fad59ef5d39e5c51b105409c1c6c.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" These functions are required for 64bit as well. Signed-off-by: Michal Suchanek Reviewed-by: Christophe Leroy --- arch/powerpc/kernel/signal.c | 141 ++++++++++++++++++++++++++++++++ arch/powerpc/kernel/signal_32.c | 140 ------------------------------- 2 files changed, 141 insertions(+), 140 deletions(-) diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c index e6c30cee6abf..60436432399f 100644 --- a/arch/powerpc/kernel/signal.c +++ b/arch/powerpc/kernel/signal.c @@ -18,12 +18,153 @@ #include #include #include +#include #include #include #include #include "signal.h" +#ifdef CONFIG_VSX +unsigned long copy_fpr_to_user(void __user *to, + struct task_struct *task) +{ + u64 buf[ELF_NFPREG]; + int i; + + /* save FPR copy to local buffer then write to the thread_struct */ + for (i = 0; i < (ELF_NFPREG - 1) ; i++) + buf[i] = task->thread.TS_FPR(i); + buf[i] = task->thread.fp_state.fpscr; + return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double)); +} + +unsigned long copy_fpr_from_user(struct task_struct *task, + void __user *from) +{ + u64 buf[ELF_NFPREG]; + int i; + + if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double))) + return 1; + for (i = 0; i < (ELF_NFPREG - 1) ; i++) + task->thread.TS_FPR(i) = buf[i]; + task->thread.fp_state.fpscr = buf[i]; + + return 0; +} + +unsigned long copy_vsx_to_user(void __user *to, + struct task_struct *task) +{ + u64 buf[ELF_NVSRHALFREG]; + int i; + + /* save FPR copy to local buffer then write to the thread_struct */ + for (i = 0; i < ELF_NVSRHALFREG; i++) + buf[i] = task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET]; + return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double)); +} + +unsigned long copy_vsx_from_user(struct task_struct *task, + void __user *from) +{ + u64 buf[ELF_NVSRHALFREG]; + int i; + + if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double))) + return 1; + for (i = 0; i < ELF_NVSRHALFREG ; i++) + task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i]; + return 0; +} + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM +unsigned long copy_ckfpr_to_user(void __user *to, + struct task_struct *task) +{ + u64 buf[ELF_NFPREG]; + int i; + + /* save FPR copy to local buffer then write to the thread_struct */ + for (i = 0; i < 
(ELF_NFPREG - 1) ; i++) + buf[i] = task->thread.TS_CKFPR(i); + buf[i] = task->thread.ckfp_state.fpscr; + return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double)); +} + +unsigned long copy_ckfpr_from_user(struct task_struct *task, + void __user *from) +{ + u64 buf[ELF_NFPREG]; + int i; + + if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double))) + return 1; + for (i = 0; i < (ELF_NFPREG - 1) ; i++) + task->thread.TS_CKFPR(i) = buf[i]; + task->thread.ckfp_state.fpscr = buf[i]; + + return 0; +} + +unsigned long copy_ckvsx_to_user(void __user *to, + struct task_struct *task) +{ + u64 buf[ELF_NVSRHALFREG]; + int i; + + /* save FPR copy to local buffer then write to the thread_struct */ + for (i = 0; i < ELF_NVSRHALFREG; i++) + buf[i] = task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET]; + return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double)); +} + +unsigned long copy_ckvsx_from_user(struct task_struct *task, + void __user *from) +{ + u64 buf[ELF_NVSRHALFREG]; + int i; + + if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double))) + return 1; + for (i = 0; i < ELF_NVSRHALFREG ; i++) + task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i]; + return 0; +} +#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ +#else +inline unsigned long copy_fpr_to_user(void __user *to, + struct task_struct *task) +{ + return __copy_to_user(to, task->thread.fp_state.fpr, + ELF_NFPREG * sizeof(double)); +} + +inline unsigned long copy_fpr_from_user(struct task_struct *task, + void __user *from) +{ + return __copy_from_user(task->thread.fp_state.fpr, from, + ELF_NFPREG * sizeof(double)); +} + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM +inline unsigned long copy_ckfpr_to_user(void __user *to, + struct task_struct *task) +{ + return __copy_to_user(to, task->thread.ckfp_state.fpr, + ELF_NFPREG * sizeof(double)); +} + +inline unsigned long copy_ckfpr_from_user(struct task_struct *task, + void __user *from) +{ + return __copy_from_user(task->thread.ckfp_state.fpr, from, + ELF_NFPREG * sizeof(double)); +} +#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ +#endif + /* Log an error when sending an unhandled signal to a process. Controlled * through debug.exception-trace sysctl. 
*/ diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index 98600b276f76..c93c937ea568 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -235,146 +235,6 @@ struct rt_sigframe { int abigap[56]; }; -#ifdef CONFIG_VSX -unsigned long copy_fpr_to_user(void __user *to, - struct task_struct *task) -{ - u64 buf[ELF_NFPREG]; - int i; - - /* save FPR copy to local buffer then write to the thread_struct */ - for (i = 0; i < (ELF_NFPREG - 1) ; i++) - buf[i] = task->thread.TS_FPR(i); - buf[i] = task->thread.fp_state.fpscr; - return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double)); -} - -unsigned long copy_fpr_from_user(struct task_struct *task, - void __user *from) -{ - u64 buf[ELF_NFPREG]; - int i; - - if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double))) - return 1; - for (i = 0; i < (ELF_NFPREG - 1) ; i++) - task->thread.TS_FPR(i) = buf[i]; - task->thread.fp_state.fpscr = buf[i]; - - return 0; -} - -unsigned long copy_vsx_to_user(void __user *to, - struct task_struct *task) -{ - u64 buf[ELF_NVSRHALFREG]; - int i; - - /* save FPR copy to local buffer then write to the thread_struct */ - for (i = 0; i < ELF_NVSRHALFREG; i++) - buf[i] = task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET]; - return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double)); -} - -unsigned long copy_vsx_from_user(struct task_struct *task, - void __user *from) -{ - u64 buf[ELF_NVSRHALFREG]; - int i; - - if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double))) - return 1; - for (i = 0; i < ELF_NVSRHALFREG ; i++) - task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i]; - return 0; -} - -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM -unsigned long copy_ckfpr_to_user(void __user *to, - struct task_struct *task) -{ - u64 buf[ELF_NFPREG]; - int i; - - /* save FPR copy to local buffer then write to the thread_struct */ - for (i = 0; i < (ELF_NFPREG - 1) ; i++) - buf[i] = task->thread.TS_CKFPR(i); - buf[i] = task->thread.ckfp_state.fpscr; - return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double)); -} - -unsigned long copy_ckfpr_from_user(struct task_struct *task, - void __user *from) -{ - u64 buf[ELF_NFPREG]; - int i; - - if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double))) - return 1; - for (i = 0; i < (ELF_NFPREG - 1) ; i++) - task->thread.TS_CKFPR(i) = buf[i]; - task->thread.ckfp_state.fpscr = buf[i]; - - return 0; -} - -unsigned long copy_ckvsx_to_user(void __user *to, - struct task_struct *task) -{ - u64 buf[ELF_NVSRHALFREG]; - int i; - - /* save FPR copy to local buffer then write to the thread_struct */ - for (i = 0; i < ELF_NVSRHALFREG; i++) - buf[i] = task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET]; - return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double)); -} - -unsigned long copy_ckvsx_from_user(struct task_struct *task, - void __user *from) -{ - u64 buf[ELF_NVSRHALFREG]; - int i; - - if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double))) - return 1; - for (i = 0; i < ELF_NVSRHALFREG ; i++) - task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i]; - return 0; -} -#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ -#else -inline unsigned long copy_fpr_to_user(void __user *to, - struct task_struct *task) -{ - return __copy_to_user(to, task->thread.fp_state.fpr, - ELF_NFPREG * sizeof(double)); -} - -inline unsigned long copy_fpr_from_user(struct task_struct *task, - void __user *from) -{ - return __copy_from_user(task->thread.fp_state.fpr, from, - ELF_NFPREG * sizeof(double)); -} - -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM -inline 
unsigned long copy_ckfpr_to_user(void __user *to, - struct task_struct *task) -{ - return __copy_to_user(to, task->thread.ckfp_state.fpr, - ELF_NFPREG * sizeof(double)); -} - -inline unsigned long copy_ckfpr_from_user(struct task_struct *task, - void __user *from) -{ - return __copy_from_user(task->thread.ckfp_state.fpr, from, - ELF_NFPREG * sizeof(double)); -} -#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ -#endif - /* * Save the current user registers on the user stack. * We only save the altivec/spe registers if the process has used From patchwork Tue Nov 12 16:52:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193759 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CGQg75p2z9sPJ for ; Wed, 13 Nov 2019 05:26:03 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CGQg5WPxzF4nq for ; Wed, 13 Nov 2019 05:26:03 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNR65vJzF31s for ; Wed, 13 Nov 2019 03:53:59 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 6A4C3B132; Tue, 12 Nov 2019 16:53:56 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 28/33] powerpc/perf: consolidate read_user_stack_32 Date: Tue, 12 Nov 2019 17:52:26 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. 
Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" There are two almost identical copies for 32bit and 64bit. The function is used only in 32bit code which will be split out in next patch so consolidate to one function. Signed-off-by: Michal Suchanek Reviewed-by: Christophe Leroy --- v6: new patch v8: move the consolidated function out of the ifdef block. --- arch/powerpc/perf/callchain.c | 59 +++++++++++++++-------------------- 1 file changed, 25 insertions(+), 34 deletions(-) diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c index c84bbd4298a0..d86bdbffda9e 100644 --- a/arch/powerpc/perf/callchain.c +++ b/arch/powerpc/perf/callchain.c @@ -165,22 +165,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret) return read_user_stack_slow(ptr, ret, 8); } -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret) -{ - if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) || - ((unsigned long)ptr & 3)) - return -EFAULT; - - pagefault_disable(); - if (!__get_user_inatomic(*ret, ptr)) { - pagefault_enable(); - return 0; - } - pagefault_enable(); - - return read_user_stack_slow(ptr, ret, 4); -} - static inline int valid_user_sp(unsigned long sp, int is_64) { if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32) @@ -295,25 +279,9 @@ static inline int current_is_64bit(void) } #else /* CONFIG_PPC64 */ -/* - * On 32-bit we just access the address and let hash_page create a - * HPTE if necessary, so there is no need to fall back to reading - * the page tables. Since this is called at interrupt level, - * do_page_fault() won't treat a DSI as a page fault. - */ -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret) +static int read_user_stack_slow(void __user *ptr, void *buf, int nb) { - int rc; - - if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) || - ((unsigned long)ptr & 3)) - return -EFAULT; - - pagefault_disable(); - rc = __get_user_inatomic(*ret, ptr); - pagefault_enable(); - - return rc; + return 0; } static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry, @@ -341,6 +309,29 @@ static inline int valid_user_sp(unsigned long sp, int is_64) #endif /* CONFIG_PPC64 */ +/* + * On 32-bit we just access the address and let hash_page create a + * HPTE if necessary, so there is no need to fall back to reading + * the page tables. Since this is called at interrupt level, + * do_page_fault() won't treat a DSI as a page fault. 
+ */ +static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret) +{ + int rc; + + if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) || + ((unsigned long)ptr & 3)) + return -EFAULT; + + pagefault_disable(); + rc = __get_user_inatomic(*ret, ptr); + pagefault_enable(); + + if (IS_ENABLED(CONFIG_PPC64) && rc) + return read_user_stack_slow(ptr, ret, 4); + return rc; +} + /* * Layout for non-RT signal frames */ From patchwork Tue Nov 12 16:52:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193760 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CGTN4N1hz9sNx for ; Wed, 13 Nov 2019 05:28:24 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CGTM1HXlzF1f2 for ; Wed, 13 Nov 2019 05:28:23 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNT16tPzF3By for ; Wed, 13 Nov 2019 03:54:01 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id DEBDFB133; Tue, 12 Nov 2019 16:53:57 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 29/33] powerpc/perf: consolidate valid_user_sp Date: Tue, 12 Nov 2019 17:52:27 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Merge the 32bit and 64bit version. Halve the check constants on 32bit. Use STACK_TOP since it is defined. 
This removes a page from the valid 32bit area on 64bit: #define TASK_SIZE_USER32 (0x0000000100000000UL - (1 * PAGE_SIZE)) #define STACK_TOP_USER32 TASK_SIZE_USER32 Signed-off-by: Michal Suchanek --- v8: new patch --- arch/powerpc/perf/callchain.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c index d86bdbffda9e..7863ee0a0e69 100644 --- a/arch/powerpc/perf/callchain.c +++ b/arch/powerpc/perf/callchain.c @@ -102,6 +102,20 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re } } +static inline int valid_user_sp(unsigned long sp, int is_64) +{ + unsigned long stack_top; + + if (IS_ENABLED(CONFIG_PPC32)) + stack_top = STACK_TOP; + else /* STACK_TOP uses is_32bit_task() but we want is_64 */ + stack_top = is_64 ? STACK_TOP_USER64 : STACK_TOP_USER32; + + if (!sp || (sp & (is_64 ? 7 : 3)) || sp > stack_top - (is_64 ? 32 : 16)) + return 0; + return 1; +} + #ifdef CONFIG_PPC64 /* * On 64-bit we don't want to invoke hash_page on user addresses from @@ -165,13 +179,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret) return read_user_stack_slow(ptr, ret, 8); } -static inline int valid_user_sp(unsigned long sp, int is_64) -{ - if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32) - return 0; - return 1; -} - /* * 64-bit user processes use the same stack frame for RT and non-RT signals. */ @@ -294,13 +301,6 @@ static inline int current_is_64bit(void) return 0; } -static inline int valid_user_sp(unsigned long sp, int is_64) -{ - if (!sp || (sp & 7) || sp > TASK_SIZE - 32) - return 0; - return 1; -} - #define __SIGNAL_FRAMESIZE32 __SIGNAL_FRAMESIZE #define sigcontext32 sigcontext #define mcontext32 mcontext From patchwork Tue Nov 12 16:52:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193761 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CGXf1FLWz9sNx for ; Wed, 13 Nov 2019 05:31:14 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CGXf08bFzF5Zp for ; Wed, 13 Nov 2019 05:31:14 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNW4sSqzF1Nv for ; Wed, 13 Nov 2019 03:54:03 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 628D7B134; Tue, 12 Nov 2019 16:53:59 +0000 (UTC) From: 
Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 30/33] powerpc/perf: remove current_is_64bit() Date: Tue, 12 Nov 2019 17:52:28 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Since commit ed1cd6deb013 ("powerpc: Activate CONFIG_THREAD_INFO_IN_TASK") current_is_64bit() is equivalent to !is_32bit_task(). Remove the redundant function. Link: https://github.com/linuxppc/issues/issues/275 Link: https://lkml.org/lkml/2019/9/12/540 Fixes: linuxppc#275 Suggested-by: Christophe Leroy Signed-off-by: Michal Suchanek --- arch/powerpc/perf/callchain.c | 17 +---------------- 1 file changed, 1 insertion(+), 16 deletions(-) diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c index 7863ee0a0e69..fbf76cb01026 100644 --- a/arch/powerpc/perf/callchain.c +++ b/arch/powerpc/perf/callchain.c @@ -275,16 +275,6 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry, } } -static inline int current_is_64bit(void) -{ - /* - * We can't use test_thread_flag() here because we may be on an - * interrupt stack, and the thread flags don't get copied over - * from the thread_info on the main stack to the interrupt stack. 
- */ - return !test_ti_thread_flag(task_thread_info(current), TIF_32BIT); -} - #else /* CONFIG_PPC64 */ static int read_user_stack_slow(void __user *ptr, void *buf, int nb) { @@ -296,11 +286,6 @@ static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry { } -static inline int current_is_64bit(void) -{ - return 0; -} - #define __SIGNAL_FRAMESIZE32 __SIGNAL_FRAMESIZE #define sigcontext32 sigcontext #define mcontext32 mcontext @@ -477,7 +462,7 @@ static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs) { - if (current_is_64bit()) + if (!is_32bit_task()) perf_callchain_user_64(entry, regs); else perf_callchain_user_32(entry, regs); From patchwork Tue Nov 12 16:52:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193763 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CGb75RGHz9sNx for ; Wed, 13 Nov 2019 05:33:23 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CGb62Q4HzDsQg for ; Wed, 13 Nov 2019 05:33:22 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNW6dV4zF3Dd for ; Wed, 13 Nov 2019 03:54:03 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id DA2B6B211; Tue, 12 Nov 2019 16:54:00 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 31/33] powerpc/64: make buildable without CONFIG_COMPAT Date: Tue, 12 Nov 2019 17:52:29 +0100 Message-Id: <13fa324dc879a7f325290bf2e131b87eb491cd7b.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. 
Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" There are numerous references to 32bit functions in generic and 64bit code so ifdef them out. Signed-off-by: Michal Suchanek --- v2: - fix 32bit ifdef condition in signal.c - simplify the compat ifdef condition in vdso.c - 64bit is redundant - simplify the compat ifdef condition in callchain.c - 64bit is redundant v3: - use IS_ENABLED and maybe_unused where possible - do not ifdef declarations - clean up Makefile v4: - further makefile cleanup - simplify is_32bit_task conditions - avoid ifdef in condition by using return v5: - avoid unreachable code on 32bit - make is_current_64bit constant on !COMPAT - add stub perf_callchain_user_32 to avoid some ifdefs v6: - consolidate current_is_64bit v7: - remove leftover perf_callchain_user_32 stub from previous series version v8: - fix build again - too trigger-happy with stub removal - remove a vdso.c hunk that causes warning according to kbuild test robot v9: - removed current_is_64bit in previous patch v10: - rebase on top of 70ed86f4de5bd --- arch/powerpc/include/asm/thread_info.h | 4 ++-- arch/powerpc/kernel/Makefile | 6 +++--- arch/powerpc/kernel/entry_64.S | 2 ++ arch/powerpc/kernel/signal.c | 3 +-- arch/powerpc/kernel/syscall_64.c | 6 ++---- arch/powerpc/kernel/vdso.c | 3 ++- arch/powerpc/perf/callchain.c | 8 +++++++- 7 files changed, 19 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h index 8e1d0195ac36..c128d8a48ea3 100644 --- a/arch/powerpc/include/asm/thread_info.h +++ b/arch/powerpc/include/asm/thread_info.h @@ -144,10 +144,10 @@ static inline bool test_thread_local_flags(unsigned int flags) return (ti->local_flags & flags) != 0; } -#ifdef CONFIG_PPC64 +#ifdef CONFIG_COMPAT #define is_32bit_task() (test_thread_flag(TIF_32BIT)) #else -#define is_32bit_task() (1) +#define is_32bit_task() (IS_ENABLED(CONFIG_PPC32)) #endif #if defined(CONFIG_PPC64) diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile index 45f1d5e54671..35874119b398 100644 --- a/arch/powerpc/kernel/Makefile +++ b/arch/powerpc/kernel/Makefile @@ -44,16 +44,16 @@ CFLAGS_btext.o += -DDISABLE_BRANCH_PROFILING endif obj-y := cputable.o ptrace.o syscalls.o \ - irq.o align.o signal_32.o pmc.o vdso.o \ + irq.o align.o signal_$(BITS).o pmc.o vdso.o \ process.o systbl.o idle.o \ signal.o sysfs.o cacheinfo.o time.o \ prom.o traps.o setup-common.o \ udbg.o misc.o io.o misc_$(BITS).o \ of_platform.o prom_parse.o -obj-$(CONFIG_PPC64) += setup_64.o sys_ppc32.o \ - signal_64.o ptrace32.o \ +obj-$(CONFIG_PPC64) += setup_64.o \ paca.o nvram_64.o firmware.o note.o \ syscall_64.o +obj-$(CONFIG_COMPAT) += sys_ppc32.o ptrace32.o signal_32.o obj-$(CONFIG_VDSO32) += vdso32/ obj-$(CONFIG_PPC_WATCHDOG) += watchdog.o obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index 00173cc904ef..c339a984958f 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S @@ -52,8 +52,10 @@ SYS_CALL_TABLE: .tc sys_call_table[TC],sys_call_table +#ifdef 
CONFIG_COMPAT COMPAT_SYS_CALL_TABLE: .tc compat_sys_call_table[TC],compat_sys_call_table +#endif /* This value is used to mark exception frames on the stack. */ exception_marker: diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c index 60436432399f..61678cb0e6a1 100644 --- a/arch/powerpc/kernel/signal.c +++ b/arch/powerpc/kernel/signal.c @@ -247,7 +247,6 @@ static void do_signal(struct task_struct *tsk) sigset_t *oldset = sigmask_to_save(); struct ksignal ksig = { .sig = 0 }; int ret; - int is32 = is_32bit_task(); BUG_ON(tsk != current); @@ -277,7 +276,7 @@ static void do_signal(struct task_struct *tsk) rseq_signal_deliver(&ksig, tsk->thread.regs); - if (is32) { + if (is_32bit_task()) { if (ksig.ka.sa.sa_flags & SA_SIGINFO) ret = handle_rt_signal32(&ksig, oldset, tsk); else diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c index d00cfc4a39a9..319ebd4f494d 100644 --- a/arch/powerpc/kernel/syscall_64.c +++ b/arch/powerpc/kernel/syscall_64.c @@ -17,7 +17,6 @@ typedef long (*syscall_fn)(long, long, long, long, long, long); long system_call_exception(long r3, long r4, long r5, long r6, long r7, long r8, unsigned long r0, struct pt_regs *regs) { - unsigned long ti_flags; syscall_fn f; if (IS_ENABLED(CONFIG_PPC_BOOK3S)) @@ -64,8 +63,7 @@ long system_call_exception(long r3, long r4, long r5, long r6, long r7, long r8, __hard_irq_enable(); - ti_flags = current_thread_info()->flags; - if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) { + if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) { /* * We use the return value of do_syscall_trace_enter() as the * syscall number. If the syscall was rejected for any reason @@ -81,7 +79,7 @@ long system_call_exception(long r3, long r4, long r5, long r6, long r7, long r8, /* May be faster to do array_index_nospec? 
*/ barrier_nospec(); - if (unlikely(ti_flags & _TIF_32BIT)) { + if (unlikely(is_32bit_task())) { f = (void *)compat_sys_call_table[r0]; r3 &= 0x00000000ffffffffULL; diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c index eae9ddaecbcf..daa95c1f6d57 100644 --- a/arch/powerpc/kernel/vdso.c +++ b/arch/powerpc/kernel/vdso.c @@ -656,7 +656,8 @@ static void __init vdso_setup_syscall_map(void) if (sys_call_table[i] != sys_ni_syscall) vdso_data->syscall_map_64[i >> 5] |= 0x80000000UL >> (i & 0x1f); - if (compat_sys_call_table[i] != sys_ni_syscall) + if (IS_ENABLED(CONFIG_COMPAT) && + compat_sys_call_table[i] != sys_ni_syscall) vdso_data->syscall_map_32[i >> 5] |= 0x80000000UL >> (i & 0x1f); #else /* CONFIG_PPC64 */ diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c index fbf76cb01026..d6ab1a734a6a 100644 --- a/arch/powerpc/perf/callchain.c +++ b/arch/powerpc/perf/callchain.c @@ -15,7 +15,7 @@ #include #include #include -#ifdef CONFIG_PPC64 +#ifdef CONFIG_COMPAT #include "../kernel/ppc32.h" #endif #include @@ -294,6 +294,7 @@ static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry #endif /* CONFIG_PPC64 */ +#if defined(CONFIG_PPC32) || defined(CONFIG_COMPAT) /* * On 32-bit we just access the address and let hash_page create a * HPTE if necessary, so there is no need to fall back to reading @@ -458,6 +459,11 @@ static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, sp = next_sp; } } +#else /* 32bit */ +static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs) +{} +#endif /* 32bit */ void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs) From patchwork Tue Nov 12 16:52:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193764 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CGfv4Mqgz9sQp for ; Wed, 13 Nov 2019 05:36:39 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CGfv36tTzF5bw for ; Wed, 13 Nov 2019 05:36:39 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNY3x4vzF3Dp for ; Wed, 13 Nov 2019 03:54:05 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 6042EB293; Tue, 12 Nov 2019 16:54:02 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 32/33] powerpc/64: Make 
COMPAT user-selectable disabled on littleendian by default. Date: Tue, 12 Nov 2019 17:52:30 +0100 Message-Id: X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" On bigendian ppc64 it is common to have 32bit legacy binaries but much less so on littleendian. Signed-off-by: Michal Suchanek Reviewed-by: Christophe Leroy --- v3: make configurable --- arch/powerpc/Kconfig | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 3e56c9c2f16e..825528db2921 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -266,8 +266,9 @@ config PANIC_TIMEOUT default 180 config COMPAT - bool - default y if PPC64 + bool "Enable support for 32bit binaries" + depends on PPC64 + default y if !CPU_LITTLE_ENDIAN select COMPAT_BINFMT_ELF select ARCH_WANT_OLD_COMPAT_IPC select COMPAT_OLD_SIGACTION From patchwork Tue Nov 12 16:52:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Michal_Such=C3=A1nek?= X-Patchwork-Id: 1193766 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47CGlB4XGvz9s7T for ; Wed, 13 Nov 2019 05:40:22 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47CGlB37NWzF4xT for ; Wed, 13 Nov 2019 05:40:22 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=suse.de (client-ip=195.135.220.15; helo=mx1.suse.de; envelope-from=msuchanek@suse.de; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=suse.de Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 47CDNZ6rzLzF3Fx for ; Wed, 13 Nov 2019 03:54:06 +1100 (AEDT) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 
DE915B2CF; Tue, 12 Nov 2019 16:54:03 +0000 (UTC) From: Michal Suchanek To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH 33/33] powerpc/perf: split callchain.c by bitness Date: Tue, 12 Nov 2019 17:52:31 +0100 Message-Id: <1e362abe4fc0eca01e67b059e625f3cb86cc95f4.1573576649.git.msuchanek@suse.de> X-Mailer: git-send-email 2.23.0 In-Reply-To: References: MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Madhavan Srinivasan , David Hildenbrand , Heiko Carstens , Claudio Carvalho , David Howells , Masahiro Yamada , Paul Mackerras , Christian Brauner , Breno Leitao , Michael Neuling , Nicolai Stange , Diana Craciun , Firoz Khan , Allison Randal , Mahesh Salgaonkar , Geert Uytterhoeven , "Naveen N. Rao" , Michal Suchanek , Valentin Schneider , Jagadeesh Pagadala , Arnd Bergmann , Nicholas Piggin , Alexander Viro , Steven Rostedt , Thomas Gleixner , Dmitry Vyukov , Daniel Axtens , Gustavo Romero , Mathieu Malaterre , Greg Kroah-Hartman , Oleg Nesterov , linux-kernel@vger.kernel.org, "Eric W. Biederman" , Andrew Donnellan , Brajeswar Ghosh , Hari Bathini , Andrew Morton Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Building callchain.c with !COMPAT proved quite ugly with all the defines. Splitting out the 32bit and 64bit parts looks better. No code change intended. Signed-off-by: Michal Suchanek --- v6: - move current_is_64bit consolidation to earlier patch - move defines to the top of callchain_32.c - Makefile cleanup v8: - fix valid_user_sp --- arch/powerpc/perf/Makefile | 5 +- arch/powerpc/perf/callchain.c | 367 +------------------------------ arch/powerpc/perf/callchain.h | 25 +++ arch/powerpc/perf/callchain_32.c | 197 +++++++++++++++++ arch/powerpc/perf/callchain_64.c | 178 +++++++++++++++ 5 files changed, 405 insertions(+), 367 deletions(-) create mode 100644 arch/powerpc/perf/callchain.h create mode 100644 arch/powerpc/perf/callchain_32.c create mode 100644 arch/powerpc/perf/callchain_64.c diff --git a/arch/powerpc/perf/Makefile b/arch/powerpc/perf/Makefile index c155dcbb8691..53d614e98537 100644 --- a/arch/powerpc/perf/Makefile +++ b/arch/powerpc/perf/Makefile @@ -1,6 +1,9 @@ # SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_PERF_EVENTS) += callchain.o perf_regs.o +obj-$(CONFIG_PERF_EVENTS) += callchain.o callchain_$(BITS).o perf_regs.o +ifdef CONFIG_COMPAT +obj-$(CONFIG_PERF_EVENTS) += callchain_32.o +endif obj-$(CONFIG_PPC_PERF_CTRS) += core-book3s.o bhrb.o obj64-$(CONFIG_PPC_PERF_CTRS) += ppc970-pmu.o power5-pmu.o \ diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c index d6ab1a734a6a..dd5051015008 100644 --- a/arch/powerpc/perf/callchain.c +++ b/arch/powerpc/perf/callchain.c @@ -15,11 +15,9 @@ #include #include #include -#ifdef CONFIG_COMPAT -#include "../kernel/ppc32.h" -#endif #include +#include "callchain.h" /* * Is sp valid as the address of the next kernel stack frame after prev_sp? @@ -102,369 +100,6 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re } } -static inline int valid_user_sp(unsigned long sp, int is_64) -{ - unsigned long stack_top; - - if (IS_ENABLED(CONFIG_PPC32)) - stack_top = STACK_TOP; - else /* STACK_TOP uses is_32bit_task() but we want is_64 */ - stack_top = is_64 ? STACK_TOP_USER64 : STACK_TOP_USER32; - - if (!sp || (sp & (is_64 ? 7 : 3)) || sp > stack_top - (is_64 ?
32 : 16)) - return 0; - return 1; -} - -#ifdef CONFIG_PPC64 -/* - * On 64-bit we don't want to invoke hash_page on user addresses from - * interrupt context, so if the access faults, we read the page tables - * to find which page (if any) is mapped and access it directly. - */ -static int read_user_stack_slow(void __user *ptr, void *buf, int nb) -{ - int ret = -EFAULT; - pgd_t *pgdir; - pte_t *ptep, pte; - unsigned shift; - unsigned long addr = (unsigned long) ptr; - unsigned long offset; - unsigned long pfn, flags; - void *kaddr; - - pgdir = current->mm->pgd; - if (!pgdir) - return -EFAULT; - - local_irq_save(flags); - ptep = find_current_mm_pte(pgdir, addr, NULL, &shift); - if (!ptep) - goto err_out; - if (!shift) - shift = PAGE_SHIFT; - - /* align address to page boundary */ - offset = addr & ((1UL << shift) - 1); - - pte = READ_ONCE(*ptep); - if (!pte_present(pte) || !pte_user(pte)) - goto err_out; - pfn = pte_pfn(pte); - if (!page_is_ram(pfn)) - goto err_out; - - /* no highmem to worry about here */ - kaddr = pfn_to_kaddr(pfn); - memcpy(buf, kaddr + offset, nb); - ret = 0; -err_out: - local_irq_restore(flags); - return ret; -} - -static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret) -{ - if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) || - ((unsigned long)ptr & 7)) - return -EFAULT; - - pagefault_disable(); - if (!__get_user_inatomic(*ret, ptr)) { - pagefault_enable(); - return 0; - } - pagefault_enable(); - - return read_user_stack_slow(ptr, ret, 8); -} - -/* - * 64-bit user processes use the same stack frame for RT and non-RT signals. - */ -struct signal_frame_64 { - char dummy[__SIGNAL_FRAMESIZE]; - struct ucontext uc; - unsigned long unused[2]; - unsigned int tramp[6]; - struct siginfo *pinfo; - void *puc; - struct siginfo info; - char abigap[288]; -}; - -static int is_sigreturn_64_address(unsigned long nip, unsigned long fp) -{ - if (nip == fp + offsetof(struct signal_frame_64, tramp)) - return 1; - if (vdso64_rt_sigtramp && current->mm->context.vdso_base && - nip == current->mm->context.vdso_base + vdso64_rt_sigtramp) - return 1; - return 0; -} - -/* - * Do some sanity checking on the signal frame pointed to by sp. - * We check the pinfo and puc pointers in the frame. - */ -static int sane_signal_64_frame(unsigned long sp) -{ - struct signal_frame_64 __user *sf; - unsigned long pinfo, puc; - - sf = (struct signal_frame_64 __user *) sp; - if (read_user_stack_64((unsigned long __user *) &sf->pinfo, &pinfo) || - read_user_stack_64((unsigned long __user *) &sf->puc, &puc)) - return 0; - return pinfo == (unsigned long) &sf->info && - puc == (unsigned long) &sf->uc; -} - -static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry, - struct pt_regs *regs) -{ - unsigned long sp, next_sp; - unsigned long next_ip; - unsigned long lr; - long level = 0; - struct signal_frame_64 __user *sigframe; - unsigned long __user *fp, *uregs; - - next_ip = perf_instruction_pointer(regs); - lr = regs->link; - sp = regs->gpr[1]; - perf_callchain_store(entry, next_ip); - - while (entry->nr < entry->max_stack) { - fp = (unsigned long __user *) sp; - if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp)) - return; - if (level > 0 && read_user_stack_64(&fp[2], &next_ip)) - return; - - /* - * Note: the next_sp - sp >= signal frame size check - * is true when next_sp < sp, which can happen when - * transitioning from an alternate signal stack to the - * normal stack. 
- */ - if (next_sp - sp >= sizeof(struct signal_frame_64) && - (is_sigreturn_64_address(next_ip, sp) || - (level <= 1 && is_sigreturn_64_address(lr, sp))) && - sane_signal_64_frame(sp)) { - /* - * This looks like an signal frame - */ - sigframe = (struct signal_frame_64 __user *) sp; - uregs = sigframe->uc.uc_mcontext.gp_regs; - if (read_user_stack_64(&uregs[PT_NIP], &next_ip) || - read_user_stack_64(&uregs[PT_LNK], &lr) || - read_user_stack_64(&uregs[PT_R1], &sp)) - return; - level = 0; - perf_callchain_store_context(entry, PERF_CONTEXT_USER); - perf_callchain_store(entry, next_ip); - continue; - } - - if (level == 0) - next_ip = lr; - perf_callchain_store(entry, next_ip); - ++level; - sp = next_sp; - } -} - -#else /* CONFIG_PPC64 */ -static int read_user_stack_slow(void __user *ptr, void *buf, int nb) -{ - return 0; -} - -static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry, - struct pt_regs *regs) -{ -} - -#define __SIGNAL_FRAMESIZE32 __SIGNAL_FRAMESIZE -#define sigcontext32 sigcontext -#define mcontext32 mcontext -#define ucontext32 ucontext -#define compat_siginfo_t struct siginfo - -#endif /* CONFIG_PPC64 */ - -#if defined(CONFIG_PPC32) || defined(CONFIG_COMPAT) -/* - * On 32-bit we just access the address and let hash_page create a - * HPTE if necessary, so there is no need to fall back to reading - * the page tables. Since this is called at interrupt level, - * do_page_fault() won't treat a DSI as a page fault. - */ -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret) -{ - int rc; - - if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) || - ((unsigned long)ptr & 3)) - return -EFAULT; - - pagefault_disable(); - rc = __get_user_inatomic(*ret, ptr); - pagefault_enable(); - - if (IS_ENABLED(CONFIG_PPC64) && rc) - return read_user_stack_slow(ptr, ret, 4); - return rc; -} - -/* - * Layout for non-RT signal frames - */ -struct signal_frame_32 { - char dummy[__SIGNAL_FRAMESIZE32]; - struct sigcontext32 sctx; - struct mcontext32 mctx; - int abigap[56]; -}; - -/* - * Layout for RT signal frames - */ -struct rt_signal_frame_32 { - char dummy[__SIGNAL_FRAMESIZE32 + 16]; - compat_siginfo_t info; - struct ucontext32 uc; - int abigap[56]; -}; - -static int is_sigreturn_32_address(unsigned int nip, unsigned int fp) -{ - if (nip == fp + offsetof(struct signal_frame_32, mctx.mc_pad)) - return 1; - if (vdso32_sigtramp && current->mm->context.vdso_base && - nip == current->mm->context.vdso_base + vdso32_sigtramp) - return 1; - return 0; -} - -static int is_rt_sigreturn_32_address(unsigned int nip, unsigned int fp) -{ - if (nip == fp + offsetof(struct rt_signal_frame_32, - uc.uc_mcontext.mc_pad)) - return 1; - if (vdso32_rt_sigtramp && current->mm->context.vdso_base && - nip == current->mm->context.vdso_base + vdso32_rt_sigtramp) - return 1; - return 0; -} - -static int sane_signal_32_frame(unsigned int sp) -{ - struct signal_frame_32 __user *sf; - unsigned int regs; - - sf = (struct signal_frame_32 __user *) (unsigned long) sp; - if (read_user_stack_32((unsigned int __user *) &sf->sctx.regs, ®s)) - return 0; - return regs == (unsigned long) &sf->mctx; -} - -static int sane_rt_signal_32_frame(unsigned int sp) -{ - struct rt_signal_frame_32 __user *sf; - unsigned int regs; - - sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp; - if (read_user_stack_32((unsigned int __user *) &sf->uc.uc_regs, ®s)) - return 0; - return regs == (unsigned long) &sf->uc.uc_mcontext; -} - -static unsigned int __user *signal_frame_32_regs(unsigned int sp, - 
unsigned int next_sp, unsigned int next_ip) -{ - struct mcontext32 __user *mctx = NULL; - struct signal_frame_32 __user *sf; - struct rt_signal_frame_32 __user *rt_sf; - - /* - * Note: the next_sp - sp >= signal frame size check - * is true when next_sp < sp, for example, when - * transitioning from an alternate signal stack to the - * normal stack. - */ - if (next_sp - sp >= sizeof(struct signal_frame_32) && - is_sigreturn_32_address(next_ip, sp) && - sane_signal_32_frame(sp)) { - sf = (struct signal_frame_32 __user *) (unsigned long) sp; - mctx = &sf->mctx; - } - - if (!mctx && next_sp - sp >= sizeof(struct rt_signal_frame_32) && - is_rt_sigreturn_32_address(next_ip, sp) && - sane_rt_signal_32_frame(sp)) { - rt_sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp; - mctx = &rt_sf->uc.uc_mcontext; - } - - if (!mctx) - return NULL; - return mctx->mc_gregs; -} - -static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, - struct pt_regs *regs) -{ - unsigned int sp, next_sp; - unsigned int next_ip; - unsigned int lr; - long level = 0; - unsigned int __user *fp, *uregs; - - next_ip = perf_instruction_pointer(regs); - lr = regs->link; - sp = regs->gpr[1]; - perf_callchain_store(entry, next_ip); - - while (entry->nr < entry->max_stack) { - fp = (unsigned int __user *) (unsigned long) sp; - if (!valid_user_sp(sp, 0) || read_user_stack_32(fp, &next_sp)) - return; - if (level > 0 && read_user_stack_32(&fp[1], &next_ip)) - return; - - uregs = signal_frame_32_regs(sp, next_sp, next_ip); - if (!uregs && level <= 1) - uregs = signal_frame_32_regs(sp, next_sp, lr); - if (uregs) { - /* - * This looks like an signal frame, so restart - * the stack trace with the values in it. - */ - if (read_user_stack_32(&uregs[PT_NIP], &next_ip) || - read_user_stack_32(&uregs[PT_LNK], &lr) || - read_user_stack_32(&uregs[PT_R1], &sp)) - return; - level = 0; - perf_callchain_store_context(entry, PERF_CONTEXT_USER); - perf_callchain_store(entry, next_ip); - continue; - } - - if (level == 0) - next_ip = lr; - perf_callchain_store(entry, next_ip); - ++level; - sp = next_sp; - } -} -#else /* 32bit */ -static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, - struct pt_regs *regs) -{} -#endif /* 32bit */ - void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs) { diff --git a/arch/powerpc/perf/callchain.h b/arch/powerpc/perf/callchain.h new file mode 100644 index 000000000000..76905c195497 --- /dev/null +++ b/arch/powerpc/perf/callchain.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _POWERPC_PERF_CALLCHAIN_H +#define _POWERPC_PERF_CALLCHAIN_H + +int read_user_stack_slow(void __user *ptr, void *buf, int nb); +void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs); +void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs); + +static inline int valid_user_sp(unsigned long sp, int is_64) +{ + unsigned long stack_top; + + if (IS_ENABLED(CONFIG_PPC32)) + stack_top = STACK_TOP; + else /* STACK_TOP uses is_32bit_task() but we want is_64 */ + stack_top = is_64 ? STACK_TOP_USER64 : STACK_TOP_USER32; + + if (!sp || (sp & (is_64 ? 7 : 3)) || sp > stack_top - (is_64 ? 
32 : 16)) + return 0; + return 1; +} + +#endif /* _POWERPC_PERF_CALLCHAIN_H */ diff --git a/arch/powerpc/perf/callchain_32.c b/arch/powerpc/perf/callchain_32.c new file mode 100644 index 000000000000..ae69d60953b8 --- /dev/null +++ b/arch/powerpc/perf/callchain_32.c @@ -0,0 +1,197 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Performance counter callchain support - powerpc architecture code + * + * Copyright © 2009 Paul Mackerras, IBM Corporation. + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "callchain.h" + +#ifdef CONFIG_PPC64 +#include "../kernel/ppc32.h" +#else /* CONFIG_PPC64 */ + +#define __SIGNAL_FRAMESIZE32 __SIGNAL_FRAMESIZE +#define sigcontext32 sigcontext +#define mcontext32 mcontext +#define ucontext32 ucontext +#define compat_siginfo_t struct siginfo + +#endif /* CONFIG_PPC64 */ + +/* + * On 32-bit we just access the address and let hash_page create a + * HPTE if necessary, so there is no need to fall back to reading + * the page tables. Since this is called at interrupt level, + * do_page_fault() won't treat a DSI as a page fault. + */ +static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret) +{ + int rc; + + if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) || + ((unsigned long)ptr & 3)) + return -EFAULT; + + pagefault_disable(); + rc = __get_user_inatomic(*ret, ptr); + pagefault_enable(); + + if (IS_ENABLED(CONFIG_PPC64) && rc) + return read_user_stack_slow(ptr, ret, 4); + return rc; +} + +/* + * Layout for non-RT signal frames + */ +struct signal_frame_32 { + char dummy[__SIGNAL_FRAMESIZE32]; + struct sigcontext32 sctx; + struct mcontext32 mctx; + int abigap[56]; +}; + +/* + * Layout for RT signal frames + */ +struct rt_signal_frame_32 { + char dummy[__SIGNAL_FRAMESIZE32 + 16]; + compat_siginfo_t info; + struct ucontext32 uc; + int abigap[56]; +}; + +static int is_sigreturn_32_address(unsigned int nip, unsigned int fp) +{ + if (nip == fp + offsetof(struct signal_frame_32, mctx.mc_pad)) + return 1; + if (vdso32_sigtramp && current->mm->context.vdso_base && + nip == current->mm->context.vdso_base + vdso32_sigtramp) + return 1; + return 0; +} + +static int is_rt_sigreturn_32_address(unsigned int nip, unsigned int fp) +{ + if (nip == fp + offsetof(struct rt_signal_frame_32, + uc.uc_mcontext.mc_pad)) + return 1; + if (vdso32_rt_sigtramp && current->mm->context.vdso_base && + nip == current->mm->context.vdso_base + vdso32_rt_sigtramp) + return 1; + return 0; +} + +static int sane_signal_32_frame(unsigned int sp) +{ + struct signal_frame_32 __user *sf; + unsigned int regs; + + sf = (struct signal_frame_32 __user *) (unsigned long) sp; + if (read_user_stack_32((unsigned int __user *) &sf->sctx.regs, ®s)) + return 0; + return regs == (unsigned long) &sf->mctx; +} + +static int sane_rt_signal_32_frame(unsigned int sp) +{ + struct rt_signal_frame_32 __user *sf; + unsigned int regs; + + sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp; + if (read_user_stack_32((unsigned int __user *) &sf->uc.uc_regs, ®s)) + return 0; + return regs == (unsigned long) &sf->uc.uc_mcontext; +} + +static unsigned int __user *signal_frame_32_regs(unsigned int sp, + unsigned int next_sp, unsigned int next_ip) +{ + struct mcontext32 __user *mctx = NULL; + struct signal_frame_32 __user *sf; + struct rt_signal_frame_32 __user *rt_sf; + + /* + * Note: the next_sp - sp >= signal frame size check + * is true when next_sp < sp, for example, when + * transitioning from an 
alternate signal stack to the + * normal stack. + */ + if (next_sp - sp >= sizeof(struct signal_frame_32) && + is_sigreturn_32_address(next_ip, sp) && + sane_signal_32_frame(sp)) { + sf = (struct signal_frame_32 __user *) (unsigned long) sp; + mctx = &sf->mctx; + } + + if (!mctx && next_sp - sp >= sizeof(struct rt_signal_frame_32) && + is_rt_sigreturn_32_address(next_ip, sp) && + sane_rt_signal_32_frame(sp)) { + rt_sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp; + mctx = &rt_sf->uc.uc_mcontext; + } + + if (!mctx) + return NULL; + return mctx->mc_gregs; +} + +void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs) +{ + unsigned int sp, next_sp; + unsigned int next_ip; + unsigned int lr; + long level = 0; + unsigned int __user *fp, *uregs; + + next_ip = perf_instruction_pointer(regs); + lr = regs->link; + sp = regs->gpr[1]; + perf_callchain_store(entry, next_ip); + + while (entry->nr < entry->max_stack) { + fp = (unsigned int __user *) (unsigned long) sp; + if (!valid_user_sp(sp, 0) || read_user_stack_32(fp, &next_sp)) + return; + if (level > 0 && read_user_stack_32(&fp[1], &next_ip)) + return; + + uregs = signal_frame_32_regs(sp, next_sp, next_ip); + if (!uregs && level <= 1) + uregs = signal_frame_32_regs(sp, next_sp, lr); + if (uregs) { + /* + * This looks like an signal frame, so restart + * the stack trace with the values in it. + */ + if (read_user_stack_32(&uregs[PT_NIP], &next_ip) || + read_user_stack_32(&uregs[PT_LNK], &lr) || + read_user_stack_32(&uregs[PT_R1], &sp)) + return; + level = 0; + perf_callchain_store_context(entry, PERF_CONTEXT_USER); + perf_callchain_store(entry, next_ip); + continue; + } + + if (level == 0) + next_ip = lr; + perf_callchain_store(entry, next_ip); + ++level; + sp = next_sp; + } +} diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c new file mode 100644 index 000000000000..0a32e0bc4f03 --- /dev/null +++ b/arch/powerpc/perf/callchain_64.c @@ -0,0 +1,178 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Performance counter callchain support - powerpc architecture code + * + * Copyright © 2009 Paul Mackerras, IBM Corporation. + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "callchain.h" + +/* + * On 64-bit we don't want to invoke hash_page on user addresses from + * interrupt context, so if the access faults, we read the page tables + * to find which page (if any) is mapped and access it directly. 
+ */ +int read_user_stack_slow(void __user *ptr, void *buf, int nb) +{ + int ret = -EFAULT; + pgd_t *pgdir; + pte_t *ptep, pte; + unsigned int shift; + unsigned long addr = (unsigned long) ptr; + unsigned long offset; + unsigned long pfn, flags; + void *kaddr; + + pgdir = current->mm->pgd; + if (!pgdir) + return -EFAULT; + + local_irq_save(flags); + ptep = find_current_mm_pte(pgdir, addr, NULL, &shift); + if (!ptep) + goto err_out; + if (!shift) + shift = PAGE_SHIFT; + + /* align address to page boundary */ + offset = addr & ((1UL << shift) - 1); + + pte = READ_ONCE(*ptep); + if (!pte_present(pte) || !pte_user(pte)) + goto err_out; + pfn = pte_pfn(pte); + if (!page_is_ram(pfn)) + goto err_out; + + /* no highmem to worry about here */ + kaddr = pfn_to_kaddr(pfn); + memcpy(buf, kaddr + offset, nb); + ret = 0; +err_out: + local_irq_restore(flags); + return ret; +} + +static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret) +{ + if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) || + ((unsigned long)ptr & 7)) + return -EFAULT; + + pagefault_disable(); + if (!__get_user_inatomic(*ret, ptr)) { + pagefault_enable(); + return 0; + } + pagefault_enable(); + + return read_user_stack_slow(ptr, ret, 8); +} + +/* + * 64-bit user processes use the same stack frame for RT and non-RT signals. + */ +struct signal_frame_64 { + char dummy[__SIGNAL_FRAMESIZE]; + struct ucontext uc; + unsigned long unused[2]; + unsigned int tramp[6]; + struct siginfo *pinfo; + void *puc; + struct siginfo info; + char abigap[288]; +}; + +static int is_sigreturn_64_address(unsigned long nip, unsigned long fp) +{ + if (nip == fp + offsetof(struct signal_frame_64, tramp)) + return 1; + if (vdso64_rt_sigtramp && current->mm->context.vdso_base && + nip == current->mm->context.vdso_base + vdso64_rt_sigtramp) + return 1; + return 0; +} + +/* + * Do some sanity checking on the signal frame pointed to by sp. + * We check the pinfo and puc pointers in the frame. + */ +static int sane_signal_64_frame(unsigned long sp) +{ + struct signal_frame_64 __user *sf; + unsigned long pinfo, puc; + + sf = (struct signal_frame_64 __user *) sp; + if (read_user_stack_64((unsigned long __user *) &sf->pinfo, &pinfo) || + read_user_stack_64((unsigned long __user *) &sf->puc, &puc)) + return 0; + return pinfo == (unsigned long) &sf->info && + puc == (unsigned long) &sf->uc; +} + +void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry, + struct pt_regs *regs) +{ + unsigned long sp, next_sp; + unsigned long next_ip; + unsigned long lr; + long level = 0; + struct signal_frame_64 __user *sigframe; + unsigned long __user *fp, *uregs; + + next_ip = perf_instruction_pointer(regs); + lr = regs->link; + sp = regs->gpr[1]; + perf_callchain_store(entry, next_ip); + + while (entry->nr < entry->max_stack) { + fp = (unsigned long __user *) sp; + if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp)) + return; + if (level > 0 && read_user_stack_64(&fp[2], &next_ip)) + return; + + /* + * Note: the next_sp - sp >= signal frame size check + * is true when next_sp < sp, which can happen when + * transitioning from an alternate signal stack to the + * normal stack. 
+ */ + if (next_sp - sp >= sizeof(struct signal_frame_64) && + (is_sigreturn_64_address(next_ip, sp) || + (level <= 1 && is_sigreturn_64_address(lr, sp))) && + sane_signal_64_frame(sp)) { + /* + * This looks like an signal frame + */ + sigframe = (struct signal_frame_64 __user *) sp; + uregs = sigframe->uc.uc_mcontext.gp_regs; + if (read_user_stack_64(&uregs[PT_NIP], &next_ip) || + read_user_stack_64(&uregs[PT_LNK], &lr) || + read_user_stack_64(&uregs[PT_R1], &sp)) + return; + level = 0; + perf_callchain_store_context(entry, PERF_CONTEXT_USER); + perf_callchain_store(entry, next_ip); + continue; + } + + if (level == 0) + next_ip = lr; + perf_callchain_store(entry, next_ip); + ++level; + sp = next_sp; + } +}
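Taken together, patches 31 and 32 rely on is_32bit_task() collapsing to a compile-time constant when CONFIG_COMPAT is disabled, which lets the compiler drop the compat branches in the syscall entry and callchain code without extra #ifdefs. The fragment below is a minimal user-space model of that pattern, not kernel code: ENABLE_COMPAT, the table names and the stub handlers are invented for illustration and stand in for IS_ENABLED(CONFIG_COMPAT), sys_call_table and compat_sys_call_table.

/* Build: cc -DENABLE_COMPAT=1 compat_model.c   (or -DENABLE_COMPAT=0) */
#include <stdio.h>

#ifndef ENABLE_COMPAT
#define ENABLE_COMPAT 0			/* stands in for IS_ENABLED(CONFIG_COMPAT) */
#endif

typedef long (*syscall_fn)(long);

static long native_handler(long arg)  { return arg + 1; }
static long compat_handler(long arg)  { return (long)(int)arg + 1; } /* 32bit-style truncation */

static const syscall_fn native_table[] = { native_handler }; /* stands in for sys_call_table */
static const syscall_fn compat_table[] = { compat_handler }; /* stands in for compat_sys_call_table */

struct task { int tif_32bit; };		/* stands in for TIF_32BIT in the thread flags */

/* With ENABLE_COMPAT == 0 this folds to a constant 0, so the compiler can
 * discard the compat branch in dispatch() without any #ifdef around it. */
static int is_32bit_task(const struct task *t)
{
	return ENABLE_COMPAT && t->tif_32bit;
}

static long dispatch(const struct task *t, long nr, long arg)
{
	if (is_32bit_task(t))
		return compat_table[nr](arg & 0xffffffffUL); /* mirrors the r3 masking in syscall_64.c */
	return native_table[nr](arg);
}

int main(void)
{
	struct task t64 = { 0 }, t32 = { 1 };

	printf("%ld %ld\n", dispatch(&t64, 0, 0x100000005L), dispatch(&t32, 0, 0x100000005L));
	return 0;
}

Building the model with -DENABLE_COMPAT=0 makes both calls take the native path, which is the same effect the series gets in the kernel by turning is_32bit_task() into a constant and moving sys_ppc32.o, ptrace32.o and signal_32.o under obj-$(CONFIG_COMPAT).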
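The valid_user_sp() helper that patch 33 moves into callchain.h performs two cheap sanity checks before every user stack access: the stack pointer must be non-zero and aligned for the ABI (8 bytes on 64bit, 4 on 32bit), and it must leave room for a minimal 32- or 16-byte frame below the user stack top. A stand-alone sketch of the same check follows; the EXAMPLE_STACK_TOP_* limits are arbitrary placeholders for the example, where the kernel uses STACK_TOP_USER64 and STACK_TOP_USER32.

#include <stdio.h>

/* Example limits only; the kernel derives these from STACK_TOP_USER64 and
 * STACK_TOP_USER32 rather than hard-coding them. */
#define EXAMPLE_STACK_TOP_64 0x0000400000000000UL
#define EXAMPLE_STACK_TOP_32 0x00000000c0000000UL

/* Returns 1 when sp looks like a plausible user stack pointer: non-zero,
 * correctly aligned, and at least a minimal frame below the stack top. */
static int valid_user_sp_example(unsigned long sp, int is_64)
{
	unsigned long stack_top = is_64 ? EXAMPLE_STACK_TOP_64
					: EXAMPLE_STACK_TOP_32;

	if (!sp || (sp & (is_64 ? 7 : 3)) || sp > stack_top - (is_64 ? 32 : 16))
		return 0;
	return 1;
}

int main(void)
{
	printf("%d\n", valid_user_sp_example(0x7fff0000UL, 1));	/* 1: aligned, in range */
	printf("%d\n", valid_user_sp_example(0x7fff0002UL, 1));	/* 0: not 8-byte aligned */
	printf("%d\n", valid_user_sp_example(0, 0));		/* 0: NULL sp */
	return 0;
}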
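Both perf_callchain_user_64() and perf_callchain_user_32() in the new files share the same walking loop: record the instruction pointer, then repeatedly follow the back chain stored at the start of each frame, taking the return address from the link register on the first iteration and from a saved slot in the frame afterwards, and stop at the first frame that cannot be read or fails the sanity checks (the signal-frame special case is omitted here). The sketch below is a condensed user-space model of that loop over a synthetic stack; the word-indexed addresses and the slot layout are simplifications for illustration, not the real ppc64 frame format.

#include <stdio.h>

#define MAX_DEPTH 16

/* A tiny fake user stack, addressed by word index instead of by byte:
 * word 0 of each frame is the back chain to the next frame, word 2 holds
 * a saved return address (a simplification of the ppc64 LR save slot). */
static unsigned long stack[32];

static int read_word(unsigned long sp, unsigned long idx, unsigned long *ret)
{
	unsigned long slot = sp + idx;

	if (slot >= sizeof(stack) / sizeof(stack[0]))
		return -1;	/* models read_user_stack_64() reporting a fault */
	*ret = stack[slot];
	return 0;
}

static void walk(unsigned long sp, unsigned long ip, unsigned long lr)
{
	unsigned long next_sp, next_ip;
	long level = 0;

	printf("ip=%lu\n", ip);
	while (level < MAX_DEPTH) {
		if (!sp || read_word(sp, 0, &next_sp))
			return;
		if (level > 0 && read_word(sp, 2, &next_ip))
			return;
		if (level == 0)
			next_ip = lr;	/* first caller comes from the link register */
		printf("ip=%lu\n", next_ip);
		++level;
		sp = next_sp;
	}
}

int main(void)
{
	/* two chained frames: 8 -> 16 -> end (back chain of 0 stops the walk) */
	stack[8] = 16;
	stack[16] = 0;
	stack[18] = 111;	/* return address picked up while unwinding frame 16 */
	walk(8, 500, 400);
	return 0;
}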