From patchwork Thu Jul 13 09:53:30 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc -next 02/10] x86: mm: use try_vma_locked_page_fault()
Date: Thu, 13 Jul 2023 17:53:30 +0800
Message-ID: <20230713095339.189715-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20230713095339.189715-1-wangkefeng.wang@huawei.com>

Use the new
try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/x86/mm/fault.c | 39 +++++++++++++++------------------------
 1 file changed, 15 insertions(+), 24 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 56b4f9faf8c4..3f3b8b0a87de 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1213,6 +1213,16 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 }
 NOKPROBE_SYMBOL(do_kern_addr_fault);

+#ifdef CONFIG_PER_VMA_LOCK
+int arch_vma_check_access(struct vm_area_struct *vma,
+			  struct vm_locked_fault *vmlf)
+{
+	if (unlikely(access_error(vmlf->fault_code, vma)))
+		return -EINVAL;
+	return 0;
+}
+#endif
+
 /*
  * Handle faults in the user portion of the address space.  Nothing in here
  * should check X86_PF_USER without a specific justification: for almost
@@ -1231,6 +1241,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	struct mm_struct *mm;
 	vm_fault_t fault;
 	unsigned int flags = FAULT_FLAG_DEFAULT;
+	struct vm_locked_fault vmlf;

 	tsk = current;
 	mm = tsk->mm;
@@ -1328,27 +1339,11 @@ void do_user_addr_fault(struct pt_regs *regs,
 	}
 #endif

-#ifdef CONFIG_PER_VMA_LOCK
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, address);
-	if (!vma)
-		goto lock_mmap;
-
-	if (unlikely(access_error(error_code, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	VM_LOCKED_FAULT_INIT(vmlf, mm, address, flags, 0, regs, error_code);
+	if (try_vma_locked_page_fault(&vmlf, &fault))
+		goto retry;
+	else if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);

 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -1358,8 +1353,6 @@ void do_user_addr_fault(struct pt_regs *regs,
 				     ARCH_DEFAULT_PKEY);
 		return;
 	}
-lock_mmap:
-#endif /* CONFIG_PER_VMA_LOCK */

 retry:
 	vma = lock_mm_and_find_vma(mm, address, regs);
@@ -1419,9 +1412,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	}

 	mmap_read_unlock(mm);
-#ifdef CONFIG_PER_VMA_LOCK
 done:
-#endif
 	if (likely(!(fault & VM_FAULT_ERROR)))
 		return;
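
[Patch 01 of this series, which introduces struct vm_locked_fault,
VM_LOCKED_FAULT_INIT() and try_vma_locked_page_fault(), is not included in
this excerpt. The sketch below reconstructs what that infrastructure
presumably looks like, inferred from the call sites above and from the
open-coded logic the per-arch conversions remove; field names and details
are assumptions, not the actual patch 01 code.]

struct vm_locked_fault {
	struct mm_struct *mm;
	unsigned long address;
	unsigned int fault_flags;	/* FAULT_FLAG_* */
	unsigned long vm_flags;		/* extra required VM_* bits; 0 at all call sites above */
	struct pt_regs *regs;
	unsigned long fault_code;	/* arch fault code: error_code/DSISR/cause */
};

#define VM_LOCKED_FAULT_INIT(_name, _mm, _address, _fault_flags, _vm_flags, \
			     _regs, _fault_code)			\
	do {								\
		(_name).mm          = (_mm);				\
		(_name).address     = (_address);			\
		(_name).fault_flags = (_fault_flags);			\
		(_name).vm_flags    = (_vm_flags);			\
		(_name).regs        = (_regs);				\
		(_name).fault_code  = (_fault_code);			\
	} while (0)

/*
 * Returns non-zero when the VMA-locked path could not be used (kernel
 * fault, no VMA found under RCU, or the arch access check failed); the
 * caller then falls back to the mmap_lock path. On a zero return,
 * *fault holds the handle_mm_fault() result of the locked attempt.
 */
int try_vma_locked_page_fault(struct vm_locked_fault *vmlf, vm_fault_t *fault)
{
	struct vm_area_struct *vma;

	if (!(vmlf->fault_flags & FAULT_FLAG_USER))
		return -EINVAL;

	vma = lock_vma_under_rcu(vmlf->mm, vmlf->address);
	if (!vma)
		return -EINVAL;

	if (arch_vma_check_access(vma, vmlf)) {
		vma_end_read(vma);
		return -EINVAL;
	}

	*fault = handle_mm_fault(vma, vmlf->address,
				 vmlf->fault_flags | FAULT_FLAG_VMA_LOCK,
				 vmlf->regs);
	if (!(*fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);

	if (*fault & VM_FAULT_RETRY)
		count_vm_vma_lock_event(VMA_LOCK_RETRY);
	else
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);

	return 0;
}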
From patchwork Thu Jul 13 09:53:33 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc -next 05/10] powerpc: mm: use try_vma_locked_page_fault()
Date: Thu, 13 Jul 2023 17:53:33 +0800
Message-ID: <20230713095339.189715-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20230713095339.189715-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/powerpc/mm/fault.c | 54 +++++++++++++++++------------------------
 1 file changed, 22 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 82954d0e6906..dd4832a3cf10 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -391,6 +391,23 @@ static int page_fault_is_bad(unsigned long err)
 #define page_fault_is_bad(__err)	((__err) & DSISR_BAD_FAULT_32S)
 #endif

+#ifdef CONFIG_PER_VMA_LOCK
+int arch_vma_check_access(struct vm_area_struct *vma,
+			  struct vm_locked_fault *vmlf)
+{
+	int is_exec = TRAP(vmlf->regs) == INTERRUPT_INST_STORAGE;
+	int is_write = page_fault_is_write(vmlf->fault_code);
+
+	if (unlikely(access_pkey_error(is_write, is_exec,
+				       (vmlf->fault_code & DSISR_KEYFAULT), vma)))
+		return -EINVAL;
+
+	if (unlikely(access_error(is_write, is_exec, vma)))
+		return -EINVAL;
+	return 0;
+}
+#endif
+
 /*
  * For 600- and 800-family processors, the error_code parameter is DSISR
  * for a data fault, SRR1 for an instruction fault.
@@ -413,6 +430,7 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	int is_write = page_fault_is_write(error_code);
 	vm_fault_t fault, major = 0;
 	bool kprobe_fault = kprobe_page_fault(regs, 11);
+	struct vm_locked_fault vmlf;

 	if (unlikely(debugger_fault_handler(regs) || kprobe_fault))
 		return 0;
@@ -469,41 +487,15 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (is_exec)
 		flags |= FAULT_FLAG_INSTRUCTION;

-#ifdef CONFIG_PER_VMA_LOCK
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;
-
-	vma = lock_vma_under_rcu(mm, address);
-	if (!vma)
-		goto lock_mmap;
-
-	if (unlikely(access_pkey_error(is_write, is_exec,
-				       (error_code & DSISR_KEYFAULT), vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	if (unlikely(access_error(is_write, is_exec, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-		vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	VM_LOCKED_FAULT_INIT(vmlf, mm, address, flags, 0, regs, error_code);
+	if (try_vma_locked_page_fault(&vmlf, &fault))
+		goto retry;
+	else if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);

 	if (fault_signal_pending(fault, regs))
 		return user_mode(regs) ? 0 : SIGBUS;

-lock_mmap:
-#endif /* CONFIG_PER_VMA_LOCK */
-
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in the
 	 * kernel and should generate an OOPS.  Unfortunately, in the case of an
@@ -552,9 +544,7 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,

 	mmap_read_unlock(current->mm);

-#ifdef CONFIG_PER_VMA_LOCK
 done:
-#endif
 	if (unlikely(fault & VM_FAULT_ERROR))
 		return mm_fault_error(regs, address, fault);
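
[The arch_vma_check_access() hook lets each architecture keep its own
permission checks: x86 reuses its plain access_error(), while powerpc adds
the pkey-aware variant above. For architectures with no special
requirements, the generic side presumably provides a permissive default; a
minimal sketch, assuming a __weak definition and assuming the vm_flags
field carries optional required VM_* bits (both guesses, since patch 01 is
not shown in this excerpt):]

int __weak arch_vma_check_access(struct vm_area_struct *vma,
				 struct vm_locked_fault *vmlf)
{
	/* Reject the locked path if the VMA lacks any requested VM_* bits. */
	if (vmlf->vm_flags && !(vma->vm_flags & vmlf->vm_flags))
		return -EINVAL;
	return 0;
}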
From patchwork Thu Jul 13 09:53:34 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc -next 06/10] riscv: mm: use try_vma_locked_page_fault()
Date: Thu, 13 Jul 2023 17:53:34 +0800
Message-ID: <20230713095339.189715-7-wangkefeng.wang@huawei.com>
In-Reply-To: <20230713095339.189715-1-wangkefeng.wang@huawei.com>

Use the new try_vma_locked_page_fault() helper to simplify the code.
No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/riscv/mm/fault.c | 38 +++++++++++++++-----------------------
 1 file changed, 15 insertions(+), 23 deletions(-)

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 6ea2cce4cc17..13bc60370b5c 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -215,6 +215,16 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
 	return false;
 }

+#ifdef CONFIG_PER_VMA_LOCK
+int arch_vma_check_access(struct vm_area_struct *vma,
+			  struct vm_locked_fault *vmlf)
+{
+	if (unlikely(access_error(vmlf->fault_code, vma)))
+		return -EINVAL;
+	return 0;
+}
+#endif
+
 /*
  * This routine handles page faults.  It determines the address and the
  * problem, and then passes it off to one of the appropriate routines.
@@ -228,6 +238,7 @@ void handle_page_fault(struct pt_regs *regs)
 	unsigned int flags = FAULT_FLAG_DEFAULT;
 	int code = SEGV_MAPERR;
 	vm_fault_t fault;
+	struct vm_locked_fault vmlf;

 	cause = regs->cause;
 	addr = regs->badaddr;
@@ -283,35 +294,18 @@ void handle_page_fault(struct pt_regs *regs)
 		flags |= FAULT_FLAG_WRITE;
 	else if (cause == EXC_INST_PAGE_FAULT)
 		flags |= FAULT_FLAG_INSTRUCTION;
-#ifdef CONFIG_PER_VMA_LOCK
-	if (!(flags & FAULT_FLAG_USER))
-		goto lock_mmap;

-	vma = lock_vma_under_rcu(mm, addr);
-	if (!vma)
-		goto lock_mmap;
-
-	if (unlikely(access_error(cause, vma))) {
-		vma_end_read(vma);
-		goto lock_mmap;
-	}
-
-	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
-
-	if (!(fault & VM_FAULT_RETRY)) {
-		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+	VM_LOCKED_FAULT_INIT(vmlf, mm, addr, flags, 0, regs, cause);
+	if (try_vma_locked_page_fault(&vmlf, &fault))
+		goto retry;
+	else if (!(fault & VM_FAULT_RETRY))
 		goto done;
-	}
-	count_vm_vma_lock_event(VMA_LOCK_RETRY);

 	if (fault_signal_pending(fault, regs)) {
 		if (!user_mode(regs))
 			no_context(regs, addr);
 		return;
 	}
-lock_mmap:
-#endif /* CONFIG_PER_VMA_LOCK */

 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
@@ -368,9 +362,7 @@ void handle_page_fault(struct pt_regs *regs)

 	mmap_read_unlock(mm);

-#ifdef CONFIG_PER_VMA_LOCK
 done:
-#endif
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		tsk->thread.bad_cause = cause;
 		mm_fault_error(regs, addr, fault);
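
[The x86, powerpc and riscv conversions above all follow the same wiring
pattern. As a schematic for a hypothetical architecture — the handler name
and elided details are invented for illustration, not code from the
series:]

static void hypothetical_do_page_fault(struct pt_regs *regs,
				       unsigned long addr,
				       unsigned long fault_code)
{
	struct mm_struct *mm = current->mm;
	unsigned int flags = FAULT_FLAG_DEFAULT;
	struct vm_locked_fault vmlf;
	struct vm_area_struct *vma;
	vm_fault_t fault;

	/* ... user-mode and FAULT_FLAG_* setup elided ... */

	VM_LOCKED_FAULT_INIT(vmlf, mm, addr, flags, 0, regs, fault_code);
	if (try_vma_locked_page_fault(&vmlf, &fault))
		goto retry;		/* locked path unusable: take mmap_lock */
	else if (!(fault & VM_FAULT_RETRY))
		goto done;		/* handled entirely under the VMA lock */

	if (fault_signal_pending(fault, regs))
		return;			/* arch-specific signal/oops handling */

retry:
	vma = lock_mm_and_find_vma(mm, addr, regs);
	/* ... classic mmap_lock-based handling elided ... */
	mmap_read_unlock(mm);
done:
	/* ... common VM_FAULT_ERROR handling elided ... */;
}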
From patchwork Thu Jul 13 09:53:36 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc -next 08/10] loongarch: mm: cleanup __do_page_fault()
Date: Thu, 13 Jul 2023 17:53:36 +0800
Message-ID: <20230713095339.189715-9-wangkefeng.wang@huawei.com>
In-Reply-To: <20230713095339.189715-1-wangkefeng.wang@huawei.com>

Clean up __do_page_fault() by reusing the bad_area_nosemaphore and
bad_area labels.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/loongarch/mm/fault.c | 36 +++++++++++-------------------------
 1 file changed, 11 insertions(+), 25 deletions(-)

diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index da5b6d518cdb..03d06ee184da 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -151,18 +151,15 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		if (!user_mode(regs))
 			no_context(regs, address);
 		else
-			do_sigsegv(regs, write, address, si_code);
-		return;
+			goto bad_area_nosemaphore;
 	}

 	/*
 	 * If we're in an interrupt or have no user
 	 * context, we must not take the fault..
 	 */
-	if (faulthandler_disabled() || !mm) {
-		do_sigsegv(regs, write, address, si_code);
-		return;
-	}
+	if (faulthandler_disabled() || !mm)
+		goto bad_area_nosemaphore;

 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
@@ -172,23 +169,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 	vma = lock_mm_and_find_vma(mm, address, regs);
 	if (unlikely(!vma))
 		goto bad_area_nosemaphore;
-	goto good_area;
-
-/*
- * Something tried to access memory that isn't in our memory map..
- * Fix it, but check if it's kernel or user first..
- */
-bad_area:
-	mmap_read_unlock(mm);
-bad_area_nosemaphore:
-	do_sigsegv(regs, write, address, si_code);
-	return;

-/*
- * Ok, we have a good vm_area for this memory access, so
- * we can handle it..
- */
-good_area:
 	si_code = SEGV_ACCERR;

 	if (write) {
@@ -229,14 +210,15 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		 */
 		goto retry;
 	}
+
+	mmap_read_unlock(mm);
+
 	if (unlikely(fault & VM_FAULT_ERROR)) {
-		mmap_read_unlock(mm);
 		if (fault & VM_FAULT_OOM) {
 			do_out_of_memory(regs, address);
 			return;
 		} else if (fault & VM_FAULT_SIGSEGV) {
-			do_sigsegv(regs, write, address, si_code);
-			return;
+			goto bad_area_nosemaphore;
 		} else if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) {
 			do_sigbus(regs, write, address, si_code);
 			return;
@@ -244,7 +226,11 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		BUG();
 	}

+	return;
+bad_area:
 	mmap_read_unlock(mm);
+bad_area_nosemaphore:
+	do_sigsegv(regs, write, address, si_code);
 }

 asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
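
[After this cleanup the two labels differ only in lock state — bad_area is
reached with mmap_lock held, bad_area_nosemaphore without it — and every
SIGSEGV path funnels into a single do_sigsegv() exit. A condensed view of
the resulting function tail, derived from the diff above:]

	/* ... fault handling ... */
	return;
bad_area:			/* entered with mmap_lock held */
	mmap_read_unlock(mm);
bad_area_nosemaphore:		/* entered without mmap_lock */
	do_sigsegv(regs, write, address, si_code);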
From patchwork Thu Jul 13 09:53:37 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc -next 09/10] loongarch: mm: add access_error() helper
Date: Thu, 13 Jul 2023 17:53:37 +0800
Message-ID: <20230713095339.189715-10-wangkefeng.wang@huawei.com>
In-Reply-To: <20230713095339.189715-1-wangkefeng.wang@huawei.com>

Add an access_error() helper to check whether a vma is accessible,
which will be used by __do_page_fault() and later by the VMA lock-based
page fault path.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/loongarch/mm/fault.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index 03d06ee184da..cde2ea0119fa 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -120,6 +120,22 @@ static void __kprobes do_sigsegv(struct pt_regs *regs,
 	force_sig_fault(SIGSEGV, si_code, (void __user *)address);
 }

+static inline bool access_error(unsigned int flags, struct pt_regs *regs,
+				unsigned long addr, struct vm_area_struct *vma)
+{
+	if (flags & FAULT_FLAG_WRITE) {
+		if (!(vma->vm_flags & VM_WRITE))
+			return true;
+	} else {
+		if (!(vma->vm_flags & VM_READ) && addr != exception_era(regs))
+			return true;
+		if (!(vma->vm_flags & VM_EXEC) && addr == exception_era(regs))
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * This routine handles page faults.
 It determines the address,
  * and the problem, and then passes it off to one of the appropriate
@@ -163,6 +179,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,

 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
+	if (write)
+		flags |= FAULT_FLAG_WRITE;

 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
@@ -172,16 +190,8 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,

 	si_code = SEGV_ACCERR;

-	if (write) {
-		flags |= FAULT_FLAG_WRITE;
-		if (!(vma->vm_flags & VM_WRITE))
-			goto bad_area;
-	} else {
-		if (!(vma->vm_flags & VM_READ) && address != exception_era(regs))
-			goto bad_area;
-		if (!(vma->vm_flags & VM_EXEC) && address == exception_era(regs))
-			goto bad_area;
-	}
+	if (access_error(flags, regs, address, vma))
+		goto bad_area;

 	/*
 	 * If for any reason at all we couldn't handle the fault,
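
[exception_era(regs) is the address of the faulting instruction, so a
non-write fault at that address is treated as an instruction fetch. The
predicate's behaviour, summarized from the definition above:]

/*
 * access_error() truth table (derived from the helper above):
 *
 *   fault type                            required vma->vm_flags
 *   FAULT_FLAG_WRITE set                  VM_WRITE
 *   read,  addr != exception_era(regs)    VM_READ
 *   fetch, addr == exception_era(regs)    VM_EXEC
 *
 * e.g. a write fault into a PROT_READ-only mapping:
 *   access_error(FAULT_FLAG_WRITE, regs, addr, vma) == true  ->  SIGSEGV
 */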
From patchwork Thu Jul 13 09:53:38 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH rfc -next 10/10] loongarch: mm: try VMA lock-based page fault handling first
Date: Thu, 13 Jul 2023 17:53:38 +0800
Message-ID: <20230713095339.189715-11-wangkefeng.wang@huawei.com>
In-Reply-To: <20230713095339.189715-1-wangkefeng.wang@huawei.com>

Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/loongarch/Kconfig    |  1 +
 arch/loongarch/mm/fault.c | 26 ++++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 397203e18800..afb0ccabab97 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -53,6 +53,7 @@ config LOONGARCH
 	select ARCH_SUPPORTS_LTO_CLANG
 	select ARCH_SUPPORTS_LTO_CLANG_THIN
 	select ARCH_SUPPORTS_NUMA_BALANCING
+	select ARCH_SUPPORTS_PER_VMA_LOCK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_QUEUED_RWLOCKS
diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index cde2ea0119fa..7e54bc48813e 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -136,6 +136,17 @@ static inline bool access_error(unsigned int flags, struct pt_regs *regs,
 	return false;
 }

+#ifdef CONFIG_PER_VMA_LOCK
+int arch_vma_check_access(struct vm_area_struct *vma,
+			  struct vm_locked_fault *vmlf)
+{
+	if (unlikely(access_error(vmlf->fault_flags, vmlf->regs, vmlf->address,
+				  vma)))
+		return -EINVAL;
+	return 0;
+}
+#endif
+
 /*
  * This routine handles page faults.  It determines the address,
  * and the problem, and then passes it off to one of the appropriate
@@ -149,6 +160,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	struct vm_area_struct *vma = NULL;
+	struct vm_locked_fault vmlf;
 	vm_fault_t fault;

 	if (kprobe_page_fault(regs, current->thread.trap_nr))
@@ -183,6 +195,19 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 		flags |= FAULT_FLAG_WRITE;

 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
+	VM_LOCKED_FAULT_INIT(vmlf, mm, address, flags, 0, regs, 0);
+	if (try_vma_locked_page_fault(&vmlf, &fault))
+		goto retry;
+	else if (!(fault & VM_FAULT_RETRY))
+		goto done;
+
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			no_context(regs, address);
+		return;
+	}
+
 retry:
 	vma = lock_mm_and_find_vma(mm, address, regs);
 	if (unlikely(!vma))
@@ -223,6 +248,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,

 	mmap_read_unlock(mm);

+done:
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM) {
 			do_out_of_memory(regs, address);