From patchwork Wed Oct 11 13:52:25 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org,
    kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org,
    dave@stgolabs.net, jack@suse.cz, Matthew Wilcox, benh@kernel.crashing.org,
    mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner, Ingo Molnar,
    hpa@zytor.com, Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
    Alexei Starovoitov
Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org,
    npiggin@gmail.com, linux-mm@kvack.org, Tim Chen, haren@linux.vnet.ibm.com,
    khandual@linux.vnet.ibm.com
Subject: [PATCH v5 01/22] x86/mm: Define CONFIG_SPF
Date: Wed, 11 Oct 2017 15:52:25 +0200
Message-Id: <1507729966-10660-2-git-send-email-ldufour@linux.vnet.ibm.com>

Introduce CONFIG_SPF, which turns on the Speculative Page Fault handler
when building for 64-bit with SMP.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/x86/Kconfig | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 063f1e0d51aa..a726618b7018 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2865,6 +2865,10 @@ config X86_DMA_REMAP
 config HAVE_GENERIC_GUP
 	def_bool y
 
+config SPF
+	def_bool y
+	depends on X86_64 && SMP
+
 source "net/Kconfig"
 
 source "drivers/Kconfig"
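For illustration only, not part of the patch: later patches in the series
are expected to key code off this new option, either with preprocessor
guards or with IS_ENABLED(). The helper name below is made up.

#ifdef CONFIG_SPF
/* speculative page fault code, built only when X86_64 && SMP */
#endif

static inline bool example_spf_enabled(void)
{
	return IS_ENABLED(CONFIG_SPF);
}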
From patchwork Wed Oct 11 13:52:26 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Subject: [PATCH v5 02/22] powerpc/mm: Define CONFIG_SPF
Date: Wed, 11 Oct 2017 15:52:26 +0200
Message-Id: <1507729966-10660-3-git-send-email-ldufour@linux.vnet.ibm.com>

Define CONFIG_SPF for BOOK3S_64 and SMP. This enables the Speculative
Page Fault handler.
Support is only provided for BOOK3S_64 currently because:
- it requires CONFIG_PPC_STD_MMU because of the checks done in
  set_access_flags_filter()
- it requires BOOK3S because we cannot support book3e_hugetlb_preload()
  being called by update_mmu_cache()

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/powerpc/Kconfig | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 809c468edab1..661ba5bcf60e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -1207,6 +1207,10 @@ endif
 config ARCH_RANDOM
 	def_bool n
 
+config SPF
+	def_bool y
+	depends on PPC_BOOK3S_64 && SMP
+
 source "net/Kconfig"
 
 source "drivers/Kconfig"
From patchwork Wed Oct 11 13:52:27 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Subject: [PATCH v5 03/22] mm: Don't assume page-table invariance during faults
Date: Wed, 11 Oct 2017 15:52:27 +0200
Message-Id: <1507729966-10660-4-git-send-email-ldufour@linux.vnet.ibm.com>

From: Peter Zijlstra <peterz@infradead.org>

One of the side effects of speculating on faults (without holding
mmap_sem) is that we can race with free_pgtables() and therefore we
cannot assume the page-tables will stick around.

Remove the reliance on the pte pointer.

Signed-off-by: Peter Zijlstra (Intel)
[Remove only if !CONFIG_SPF]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index 6632c9b357c9..b7a9baf3df8a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2287,6 +2287,7 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
+#ifndef CONFIG_SPF
 /*
  * handle_pte_fault chooses page fault handler according to an entry which was
  * read non-atomically. Before making any commitment, on those architectures
@@ -2296,7 +2297,7 @@ EXPORT_SYMBOL_GPL(apply_to_page_range);
  * and do_anonymous_page can safely check later on).
  */
 static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
-				pte_t *page_table, pte_t orig_pte)
+				 pte_t *page_table, pte_t orig_pte)
 {
 	int same = 1;
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
@@ -2310,6 +2311,7 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	pte_unmap(page_table);
 	return same;
 }
+#endif /* CONFIG_SPF */
 
 static inline void cow_user_page(struct page *dst, struct page *src,
 				 unsigned long va, struct vm_area_struct *vma)
@@ -2871,11 +2873,14 @@ int do_swap_page(struct vm_fault *vmf)
 	if (vma_readahead)
 		page = swap_readahead_detect(vmf, &swap_ra);
+
+#ifndef CONFIG_SPF
 	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte)) {
 		if (page)
 			put_page(page);
 		goto out;
 	}
+#endif
 
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
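For illustration only, with made-up names: the early shortcut performed by
pte_unmap_same() boils down to re-reading the PTE and comparing it with the
value the fault started from, which is only safe while the page tables are
pinned by mmap_sem.

static bool example_pte_still_same(struct vm_fault *vmf)
{
	/*
	 * Safe while mmap_sem is held: the page-table page that
	 * vmf->pte points into cannot go away.  Without mmap_sem
	 * (CONFIG_SPF), a concurrent munmap()/free_pgtables() may free
	 * that page, so this dereference could be a use-after-free;
	 * the check is therefore compiled out above and redone later
	 * under the page-table lock by the following patches.
	 */
	return pte_same(*vmf->pte, vmf->orig_pte);
}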
From patchwork Wed Oct 11 13:52:28 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Subject: [PATCH v5 04/22] mm: Prepare for FAULT_FLAG_SPECULATIVE
Date: Wed, 11 Oct 2017 15:52:28 +0200
Message-Id: <1507729966-10660-5-git-send-email-ldufour@linux.vnet.ibm.com>

From: Peter Zijlstra <peterz@infradead.org>

When speculating faults (without holding mmap_sem) we need to validate
that the VMA against which we loaded pages is still valid when we are
ready to install the new PTE. Therefore, replace the
pte_offset_map_lock() calls that (re)take the PTL with pte_map_lock(),
which can fail if we find that the VMA has changed since we started the
fault.
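For illustration only: in this patch pte_map_lock() always succeeds; the
sketch below shows how a speculative variant might fail instead, assuming a
hypothetical vma_has_changed() helper that compares a snapshot of the VMA
sequence count (the real implementation appears later in the series).

static bool pte_map_lock(struct vm_fault *vmf)
{
	if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
		vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
					       vmf->address, &vmf->ptl);
		return true;
	}

	/* Speculative path: bail out if the VMA was modified under us. */
	if (vma_has_changed(vmf))		/* hypothetical helper */
		return false;

	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
				       vmf->address, &vmf->ptl);
	if (vma_has_changed(vmf)) {		/* re-check under the PTL */
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		return false;
	}
	return true;
}

Callers then propagate the failure as VM_FAULT_RETRY, as the hunks below do.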
Signed-off-by: Peter Zijlstra (Intel) [Port to 4.12 kernel] [Remove the comment about the fault_env structure which has been implemented as the vm_fault structure in the kernel] Signed-off-by: Laurent Dufour --- include/linux/mm.h | 1 + mm/memory.c | 55 ++++++++++++++++++++++++++++++++++++++---------------- 2 files changed, 40 insertions(+), 16 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 3cc407421f17..fb3477f7f742 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -291,6 +291,7 @@ extern pgprot_t protection_map[16]; #define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */ #define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */ #define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */ +#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */ #define FAULT_FLAG_TRACE \ { FAULT_FLAG_WRITE, "WRITE" }, \ diff --git a/mm/memory.c b/mm/memory.c index b7a9baf3df8a..65859e123b16 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2451,6 +2451,12 @@ static inline void wp_page_reuse(struct vm_fault *vmf) pte_unmap_unlock(vmf->pte, vmf->ptl); } +static bool pte_map_lock(struct vm_fault *vmf) +{ + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl); + return true; +} + /* * Handle the case of a page which we actually need to copy to a new page. * @@ -2478,6 +2484,7 @@ static int wp_page_copy(struct vm_fault *vmf) const unsigned long mmun_start = vmf->address & PAGE_MASK; const unsigned long mmun_end = mmun_start + PAGE_SIZE; struct mem_cgroup *memcg; + int ret = VM_FAULT_OOM; if (unlikely(anon_vma_prepare(vma))) goto oom; @@ -2505,7 +2512,11 @@ static int wp_page_copy(struct vm_fault *vmf) /* * Re-check the pte - we dropped the lock */ - vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl); + if (!pte_map_lock(vmf)) { + mem_cgroup_cancel_charge(new_page, memcg, false); + ret = VM_FAULT_RETRY; + goto oom_free_new; + } if (likely(pte_same(*vmf->pte, vmf->orig_pte))) { if (old_page) { if (!PageAnon(old_page)) { @@ -2593,7 +2604,7 @@ static int wp_page_copy(struct vm_fault *vmf) oom: if (old_page) put_page(old_page); - return VM_FAULT_OOM; + return ret; } /** @@ -2614,8 +2625,8 @@ static int wp_page_copy(struct vm_fault *vmf) int finish_mkwrite_fault(struct vm_fault *vmf) { WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED)); - vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, - &vmf->ptl); + if (!pte_map_lock(vmf)) + return VM_FAULT_RETRY; /* * We might have raced with another page fault while we released the * pte_offset_map_lock. @@ -2733,8 +2744,11 @@ static int do_wp_page(struct vm_fault *vmf) get_page(vmf->page); pte_unmap_unlock(vmf->pte, vmf->ptl); lock_page(vmf->page); - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, - vmf->address, &vmf->ptl); + if (!pte_map_lock(vmf)) { + unlock_page(vmf->page); + put_page(vmf->page); + return VM_FAULT_RETRY; + } if (!pte_same(*vmf->pte, vmf->orig_pte)) { unlock_page(vmf->page); pte_unmap_unlock(vmf->pte, vmf->ptl); @@ -2937,8 +2951,10 @@ int do_swap_page(struct vm_fault *vmf) * Back out if somebody else faulted in this pte * while we released the pte lock. 
 	 */
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-			vmf->address, &vmf->ptl);
+	if (!pte_map_lock(vmf)) {
+		delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+		return VM_FAULT_RETRY;
+	}
 	if (likely(pte_same(*vmf->pte, vmf->orig_pte)))
 		ret = VM_FAULT_OOM;
 	delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
@@ -2994,8 +3010,11 @@ int do_swap_page(struct vm_fault *vmf)
 	/*
 	 * Back out if somebody else already faulted in this pte.
 	 */
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
+	if (!pte_map_lock(vmf)) {
+		ret = VM_FAULT_RETRY;
+		mem_cgroup_cancel_charge(page, memcg, false);
+		goto out_page;
+	}
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
 		goto out_nomap;
@@ -3124,8 +3143,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
 			!mm_forbids_zeropage(vma->vm_mm)) {
 		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
 						vma->vm_page_prot));
-		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-				vmf->address, &vmf->ptl);
+		if (!pte_map_lock(vmf))
+			return VM_FAULT_RETRY;
 		if (!pte_none(*vmf->pte))
 			goto unlock;
 		ret = check_stable_address_space(vma->vm_mm);
@@ -3160,8 +3179,11 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	if (vma->vm_flags & VM_WRITE)
 		entry = pte_mkwrite(pte_mkdirty(entry));
 
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
+	if (!pte_map_lock(vmf)) {
+		mem_cgroup_cancel_charge(page, memcg, false);
+		put_page(page);
+		return VM_FAULT_RETRY;
+	}
 	if (!pte_none(*vmf->pte))
 		goto release;
@@ -3285,8 +3307,9 @@ static int pte_alloc_one_map(struct vm_fault *vmf)
 	 * pte_none() under vmf->ptl protection when we return to
 	 * alloc_set_pte().
 	 */
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
+	if (!pte_map_lock(vmf))
+		return VM_FAULT_RETRY;
+
 	return 0;
 }
From patchwork Wed Oct 11 13:52:29 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Subject: [PATCH v5 05/22] mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
Date: Wed, 11 Oct 2017 15:52:29 +0200
Message-Id: <1507729966-10660-6-git-send-email-ldufour@linux.vnet.ibm.com>

When handling a page fault without holding the mmap_sem, the fetch of
the pte lock pointer and the locking will have to be done while
ensuring that the VMA is not modified behind our back. So move the
fetch and locking operations into a dedicated function.
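For illustration only: a condensed sketch of the calling convention this
introduces, based on the handle_pte_fault() hunk below. The function name
is made up; callers simply bail out with VM_FAULT_RETRY when the lock
cannot be taken (which cannot happen yet, but will once the speculative
path is wired up).

static int example_handle_pte_fault(struct vm_fault *vmf)
{
	if (!pte_spinlock(vmf))
		return VM_FAULT_RETRY;	/* VMA changed; redo the fault under mmap_sem */

	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
		/* Somebody else already handled it. */
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		return 0;
	}
	/* ... update the PTE under vmf->ptl, as handle_pte_fault() does ... */
	pte_unmap_unlock(vmf->pte, vmf->ptl);
	return 0;
}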
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 65859e123b16..0fd72b19c730 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2451,6 +2451,13 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 }
 
+static bool pte_spinlock(struct vm_fault *vmf)
+{
+	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+	spin_lock(vmf->ptl);
+	return true;
+}
+
 static bool pte_map_lock(struct vm_fault *vmf)
 {
 	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl);
@@ -3788,8 +3795,8 @@ static int do_numa_page(struct vm_fault *vmf)
 	 * validation through pte_unmap_same(). It's of NUMA type but
 	 * the pfn may be screwed if the read is non atomic.
 	 */
-	vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd);
-	spin_lock(vmf->ptl);
+	if (!pte_spinlock(vmf))
+		return VM_FAULT_RETRY;
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
@@ -3981,8 +3988,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
 	if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
 		return do_numa_page(vmf);
 
-	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
-	spin_lock(vmf->ptl);
+	if (!pte_spinlock(vmf))
+		return VM_FAULT_RETRY;
 	entry = vmf->orig_pte;
 	if (unlikely(!pte_same(*vmf->pte, entry)))
 		goto unlock;
From patchwork Wed Oct 11 13:52:30 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Subject: [PATCH v5 06/22] mm: VMA sequence count
Date: Wed, 11 Oct 2017 15:52:30 +0200
Message-Id: <1507729966-10660-7-git-send-email-ldufour@linux.vnet.ibm.com>

From: Peter Zijlstra <peterz@infradead.org>

Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
counts such that we can easily test if a VMA has changed.

The unmap_page_range() one allows us to make assumptions about
page-tables: when we find the seqcount hasn't changed we can assume
the page-tables are still valid.

The flip side is that we cannot distinguish between a vma_adjust() and
the unmap_page_range() -- where with the former we could have
re-checked the vma bounds against the address.
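For illustration only: a minimal sketch of how a lockless reader might use
the per-VMA sequence count added here. The helper name is made up; the real
speculative-fault reader appears later in the series.

static bool example_read_vma_snapshot(struct vm_area_struct *vma,
				      struct vm_area_struct *snapshot)
{
	unsigned int seq;

	seq = raw_read_seqcount(&vma->vm_sequence);
	if (seq & 1)			/* a writer is in progress */
		return false;

	*snapshot = *vma;		/* copy the fields we will rely on */

	/* Fail if the VMA was modified while we were copying it. */
	return !read_seqcount_retry(&vma->vm_sequence, seq);
}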
Signed-off-by: Peter Zijlstra (Intel) [Port to 4.12 kernel] [Build depends on CONFIG_SPF] [Introduce vm_write_* inline function depending on CONFIG_SPF] [Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by using vm_raw_write* functions] Signed-off-by: Laurent Dufour Fix locked by raw function undo lockdep fix as raw services are now used --- include/linux/mm.h | 41 +++++++++++++++++++++++++++++++++++++++++ include/linux/mm_types.h | 3 +++ mm/memory.c | 2 ++ mm/mmap.c | 40 +++++++++++++++++++++++++++++++++++++--- 4 files changed, 83 insertions(+), 3 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index fb3477f7f742..1e7740170c24 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1349,6 +1349,47 @@ static inline int fixup_user_fault(struct task_struct *tsk, } #endif +#ifdef CONFIG_SPF +static inline void vm_write_begin(struct vm_area_struct *vma) +{ + write_seqcount_begin(&vma->vm_sequence); +} +static inline void vm_write_begin_nested(struct vm_area_struct *vma, + int subclass) +{ + write_seqcount_begin_nested(&vma->vm_sequence, subclass); +} +static inline void vm_write_end(struct vm_area_struct *vma) +{ + write_seqcount_end(&vma->vm_sequence); +} +static inline void vm_raw_write_begin(struct vm_area_struct *vma) +{ + raw_write_seqcount_begin(&vma->vm_sequence); +} +static inline void vm_raw_write_end(struct vm_area_struct *vma) +{ + raw_write_seqcount_end(&vma->vm_sequence); +} +#else +static inline void vm_write_begin(struct vm_area_struct *vma) +{ +} +static inline void vm_write_begin_nested(struct vm_area_struct *vma, + int subclass) +{ +} +static inline void vm_write_end(struct vm_area_struct *vma) +{ +} +static inline void vm_raw_write_begin(struct vm_area_struct *vma) +{ +} +static inline void vm_raw_write_end(struct vm_area_struct *vma) +{ +} +#endif /* CONFIG_SPF */ + extern int access_process_vm(struct task_struct *tsk, unsigned long addr, void *buf, int len, unsigned int gup_flags); extern int access_remote_vm(struct mm_struct *mm, unsigned long addr, diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index d1e5062b0c00..3819fec89c7b 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -346,6 +346,9 @@ struct vm_area_struct { struct mempolicy *vm_policy; /* NUMA policy for the VMA */ #endif struct vm_userfaultfd_ctx vm_userfaultfd_ctx; +#ifdef CONFIG_SPF + seqcount_t vm_sequence; +#endif } __randomize_layout; struct core_thread { diff --git a/mm/memory.c b/mm/memory.c index 0fd72b19c730..eaa86e41124f 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1504,6 +1504,7 @@ void unmap_page_range(struct mmu_gather *tlb, unsigned long next; BUG_ON(addr >= end); + vm_write_begin(vma); tlb_start_vma(tlb, vma); pgd = pgd_offset(vma->vm_mm, addr); do { @@ -1513,6 +1514,7 @@ void unmap_page_range(struct mmu_gather *tlb, next = zap_p4d_range(tlb, vma, pgd, addr, next, details); } while (pgd++, addr = next, addr != end); tlb_end_vma(tlb, vma); + vm_write_end(vma); } diff --git a/mm/mmap.c b/mm/mmap.c index 2f3971a051c6..0e90b469fd97 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -558,6 +558,10 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma, else mm->highest_vm_end = vm_end_gap(vma); +#ifdef CONFIG_SPF + seqcount_init(&vma->vm_sequence); +#endif + /* * vma->vm_prev wasn't known when we followed the rbtree to find the * correct insertion point for that vma. 
As a result, we could not @@ -692,6 +696,30 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start, long adjust_next = 0; int remove_next = 0; + /* + * Why using vm_raw_write*() functions here to avoid lockdep's warning ? + * + * Locked is complaining about a theoretical lock dependency, involving + * 3 locks: + * mapping->i_mmap_rwsem --> vma->vm_sequence --> fs_reclaim + * + * Here are the major path leading to this dependency : + * 1. __vma_adjust() mmap_sem -> vm_sequence -> i_mmap_rwsem + * 2. move_vmap() mmap_sem -> vm_sequence -> fs_reclaim + * 3. __alloc_pages_nodemask() fs_reclaim -> i_mmap_rwsem + * 4. unmap_mapping_range() i_mmap_rwsem -> vm_sequence + * + * So there is no way to solve this easily, especially because in + * unmap_mapping_range() the i_mmap_rwsem is grab while the impacted + * VMAs are not yet known. + * However, the way the vm_seq is used is guarantying that we will + * never block on it since we just check for its value and never wait + * for it to move, see vma_has_changed() and handle_speculative_fault(). + */ + vm_raw_write_begin(vma); + if (next) + vm_raw_write_begin(next); + if (next && !insert) { struct vm_area_struct *exporter = NULL, *importer = NULL; @@ -903,6 +931,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start, mm->map_count--; mpol_put(vma_policy(next)); kmem_cache_free(vm_area_cachep, next); + vm_raw_write_end(next); /* * In mprotect's case 6 (see comments on vma_merge), * we must remove another next too. It would clutter @@ -916,6 +945,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start, * "vma->vm_next" gap must be updated. */ next = vma->vm_next; + if (next) + vm_raw_write_begin(next); } else { /* * For the scope of the comment "next" and @@ -933,10 +964,9 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start, remove_next = 1; end = next->vm_end; goto again; - } - else if (next) + } else if (next) { vma_gap_update(next); - else { + } else { /* * If remove_next == 2 we obviously can't * reach this path. 
@@ -962,6 +992,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	if (insert && file)
 		uprobe_mmap(insert);
 
+	if (next && next != vma)
+		vm_raw_write_end(next);
+	vm_raw_write_end(vma);
+
 	validate_mm(mm);
 	return 0;
From patchwork Wed Oct 11 13:52:31 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Subject: [PATCH v5 07/22] mm: Protect VMA modifications using VMA sequence count
Date: Wed, 11 Oct 2017 15:52:31 +0200
Message-Id: <1507729966-10660-8-git-send-email-ldufour@linux.vnet.ibm.com>

The VMA sequence count has been introduced to allow fast detection of
VMA modification when running a page fault handler without holding the
mmap_sem.

This patch provides protection against the VMA modifications done in:
 - madvise()
 - mremap()
 - mpol_rebind_policy()
 - vma_replace_policy()
 - change_prot_numa()
 - mlock(), munlock()
 - mprotect()
 - mmap_region()
 - collapse_huge_page()
 - userfaultfd registering services

In addition, VMA fields which will be read during the speculative fault
path need to be written using WRITE_ONCE to prevent the write from
being split and intermediate values from being pushed to other CPUs.
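For illustration only: the writer/reader pairing this patch establishes at
each call site, with made-up function names. The writer publishes the new
flags inside a vm_write_begin()/vm_write_end() section using WRITE_ONCE();
a speculative reader (added later in the series) would pair this with
READ_ONCE() and a sequence count re-check.

static void example_set_vma_flags(struct vm_area_struct *vma,
				  unsigned long new_flags)
{
	/* Writer side: mmap_sem is held for write by the caller. */
	vm_write_begin(vma);
	WRITE_ONCE(vma->vm_flags, new_flags);
	vm_write_end(vma);
}

static unsigned long example_read_vma_flags(struct vm_area_struct *vma)
{
	/* Reader side: no mmap_sem; rely on a single, untorn load. */
	return READ_ONCE(vma->vm_flags);
}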
Signed-off-by: Laurent Dufour --- fs/proc/task_mmu.c | 5 ++++- fs/userfaultfd.c | 17 +++++++++++++---- mm/khugepaged.c | 3 +++ mm/madvise.c | 6 +++++- mm/mempolicy.c | 51 ++++++++++++++++++++++++++++++++++----------------- mm/mlock.c | 13 ++++++++----- mm/mmap.c | 17 ++++++++++------- mm/mprotect.c | 4 +++- mm/mremap.c | 6 ++++++ 9 files changed, 86 insertions(+), 36 deletions(-) diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 0bf9e423aa99..4fc37f88437c 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -1155,8 +1155,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf, goto out_mm; } for (vma = mm->mmap; vma; vma = vma->vm_next) { - vma->vm_flags &= ~VM_SOFTDIRTY; + vm_write_begin(vma); + WRITE_ONCE(vma->vm_flags, + vma->vm_flags & ~VM_SOFTDIRTY); vma_set_page_prot(vma); + vm_write_end(vma); } downgrade_write(&mm->mmap_sem); break; diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 1c713fd5b3e6..426af4fd407c 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -640,8 +640,11 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) octx = vma->vm_userfaultfd_ctx.ctx; if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) { + vm_write_begin(vma); vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX; - vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING); + WRITE_ONCE(vma->vm_flags, + vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING)); + vm_write_end(vma); return 0; } @@ -866,8 +869,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file) vma = prev; else prev = vma; - vma->vm_flags = new_flags; + vm_write_begin(vma); + WRITE_ONCE(vma->vm_flags, new_flags); vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX; + vm_write_end(vma); } up_write(&mm->mmap_sem); mmput(mm); @@ -1425,8 +1430,10 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx, * the next vma was merged into the current one and * the current one has not been updated yet. */ - vma->vm_flags = new_flags; + vm_write_begin(vma); + WRITE_ONCE(vma->vm_flags, new_flags); vma->vm_userfaultfd_ctx.ctx = ctx; + vm_write_end(vma); skip: prev = vma; @@ -1583,8 +1590,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx, * the next vma was merged into the current one and * the current one has not been updated yet. */ - vma->vm_flags = new_flags; + vm_write_begin(vma); + WRITE_ONCE(vma->vm_flags, new_flags); vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX; + vm_write_end(vma); skip: prev = vma; diff --git a/mm/khugepaged.c b/mm/khugepaged.c index c01f177a1120..f723d42140db 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1005,6 +1005,7 @@ static void collapse_huge_page(struct mm_struct *mm, if (mm_find_pmd(mm, address) != pmd) goto out; + vm_write_begin(vma); anon_vma_lock_write(vma->anon_vma); pte = pte_offset_map(pmd, address); @@ -1040,6 +1041,7 @@ static void collapse_huge_page(struct mm_struct *mm, pmd_populate(mm, pmd, pmd_pgtable(_pmd)); spin_unlock(pmd_ptl); anon_vma_unlock_write(vma->anon_vma); + vm_write_end(vma); result = SCAN_FAIL; goto out; } @@ -1074,6 +1076,7 @@ static void collapse_huge_page(struct mm_struct *mm, set_pmd_at(mm, address, pmd, _pmd); update_mmu_cache_pmd(vma, address, pmd); spin_unlock(pmd_ptl); + vm_write_end(vma); *hpage = NULL; diff --git a/mm/madvise.c b/mm/madvise.c index acdd39fb1f5d..707e0657b33f 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -183,7 +183,9 @@ static long madvise_behavior(struct vm_area_struct *vma, /* * vm_flags is protected by the mmap_sem held in write mode. 
*/ - vma->vm_flags = new_flags; + vm_write_begin(vma); + WRITE_ONCE(vma->vm_flags, new_flags); + vm_write_end(vma); out: return error; } @@ -451,9 +453,11 @@ static void madvise_free_page_range(struct mmu_gather *tlb, .private = tlb, }; + vm_write_begin(vma); tlb_start_vma(tlb, vma); walk_page_range(addr, end, &free_walk); tlb_end_vma(tlb, vma); + vm_write_end(vma); } static int madvise_free_single_vma(struct vm_area_struct *vma, diff --git a/mm/mempolicy.c b/mm/mempolicy.c index a2af6d58a68f..72645928daa0 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -379,8 +379,11 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new) struct vm_area_struct *vma; down_write(&mm->mmap_sem); - for (vma = mm->mmap; vma; vma = vma->vm_next) + for (vma = mm->mmap; vma; vma = vma->vm_next) { + vm_write_begin(vma); mpol_rebind_policy(vma->vm_policy, new); + vm_write_end(vma); + } up_write(&mm->mmap_sem); } @@ -578,9 +581,11 @@ unsigned long change_prot_numa(struct vm_area_struct *vma, { int nr_updated; + vm_write_begin(vma); nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1); if (nr_updated) count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated); + vm_write_end(vma); return nr_updated; } @@ -681,6 +686,7 @@ static int vma_replace_policy(struct vm_area_struct *vma, if (IS_ERR(new)) return PTR_ERR(new); + vm_write_begin(vma); if (vma->vm_ops && vma->vm_ops->set_policy) { err = vma->vm_ops->set_policy(vma, new); if (err) @@ -688,11 +694,17 @@ static int vma_replace_policy(struct vm_area_struct *vma, } old = vma->vm_policy; - vma->vm_policy = new; /* protected by mmap_sem */ + /* + * The speculative page fault handler access this field without + * hodling the mmap_sem. + */ + WRITE_ONCE(vma->vm_policy, new); + vm_write_end(vma); mpol_put(old); return 0; err_out: + vm_write_end(vma); mpol_put(new); return err; } @@ -1562,23 +1574,28 @@ COMPAT_SYSCALL_DEFINE6(mbind, compat_ulong_t, start, compat_ulong_t, len, struct mempolicy *__get_vma_policy(struct vm_area_struct *vma, unsigned long addr) { - struct mempolicy *pol = NULL; + struct mempolicy *pol; - if (vma) { - if (vma->vm_ops && vma->vm_ops->get_policy) { - pol = vma->vm_ops->get_policy(vma, addr); - } else if (vma->vm_policy) { - pol = vma->vm_policy; + if (!vma) + return NULL; - /* - * shmem_alloc_page() passes MPOL_F_SHARED policy with - * a pseudo vma whose vma->vm_ops=NULL. Take a reference - * count on these policies which will be dropped by - * mpol_cond_put() later - */ - if (mpol_needs_cond_ref(pol)) - mpol_get(pol); - } + if (vma->vm_ops && vma->vm_ops->get_policy) + return vma->vm_ops->get_policy(vma, addr); + + /* + * This could be called without holding the mmap_sem in the + * speculative page fault handler's path. + */ + pol = READ_ONCE(vma->vm_policy); + if (pol) { + /* + * shmem_alloc_page() passes MPOL_F_SHARED policy with + * a pseudo vma whose vma->vm_ops=NULL. 
Take a reference + * count on these policies which will be dropped by + * mpol_cond_put() later + */ + if (mpol_needs_cond_ref(pol)) + mpol_get(pol); } return pol; diff --git a/mm/mlock.c b/mm/mlock.c index 4d009350893f..83e49f99ad38 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -438,7 +438,9 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec, void munlock_vma_pages_range(struct vm_area_struct *vma, unsigned long start, unsigned long end) { - vma->vm_flags &= VM_LOCKED_CLEAR_MASK; + vm_write_begin(vma); + WRITE_ONCE(vma->vm_flags, vma->vm_flags & VM_LOCKED_CLEAR_MASK); + vm_write_end(vma); while (start < end) { struct page *page; @@ -562,10 +564,11 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev, * It's okay if try_to_unmap_one unmaps a page just after we * set VM_LOCKED, populate_vma_page_range will bring it back. */ - - if (lock) - vma->vm_flags = newflags; - else + if (lock) { + vm_write_begin(vma); + WRITE_ONCE(vma->vm_flags, newflags); + vm_write_end(vma); + } else munlock_vma_pages_range(vma, start, end); out: diff --git a/mm/mmap.c b/mm/mmap.c index 0e90b469fd97..e28136cd3275 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -847,17 +847,18 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start, } if (start != vma->vm_start) { - vma->vm_start = start; + WRITE_ONCE(vma->vm_start, start); start_changed = true; } if (end != vma->vm_end) { - vma->vm_end = end; + WRITE_ONCE(vma->vm_end, end); end_changed = true; } - vma->vm_pgoff = pgoff; + WRITE_ONCE(vma->vm_pgoff, pgoff); if (adjust_next) { - next->vm_start += adjust_next << PAGE_SHIFT; - next->vm_pgoff += adjust_next; + WRITE_ONCE(next->vm_start, + next->vm_start + (adjust_next << PAGE_SHIFT)); + WRITE_ONCE(next->vm_pgoff, next->vm_pgoff + adjust_next); } if (root) { @@ -1754,6 +1755,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr, out: perf_event_mmap(vma); + vm_write_begin(vma); vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT); if (vm_flags & VM_LOCKED) { if (!((vm_flags & VM_SPECIAL) || is_vm_hugetlb_page(vma) || @@ -1777,6 +1779,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr, vma->vm_flags |= VM_SOFTDIRTY; vma_set_page_prot(vma); + vm_write_end(vma); return addr; @@ -2405,8 +2408,8 @@ int expand_downwards(struct vm_area_struct *vma, mm->locked_vm += grow; vm_stat_account(mm, vma->vm_flags, grow); anon_vma_interval_tree_pre_update_vma(vma); - vma->vm_start = address; - vma->vm_pgoff -= grow; + WRITE_ONCE(vma->vm_start, address); + WRITE_ONCE(vma->vm_pgoff, vma->vm_pgoff - grow); anon_vma_interval_tree_post_update_vma(vma); vma_gap_update(vma); spin_unlock(&mm->page_table_lock); diff --git a/mm/mprotect.c b/mm/mprotect.c index 6d3e2f082290..ce93c4f6af70 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -358,7 +358,8 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev, * vm_flags and vm_page_prot are protected by the mmap_sem * held in write mode. 
*/ - vma->vm_flags = newflags; + vm_write_begin(vma); + WRITE_ONCE(vma->vm_flags, newflags); dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot); vma_set_page_prot(vma); @@ -373,6 +374,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev, (newflags & VM_WRITE)) { populate_vma_page_range(vma, start, end, NULL); } + vm_write_end(vma); vm_stat_account(mm, oldflags, -nrpages); vm_stat_account(mm, newflags, nrpages); diff --git a/mm/mremap.c b/mm/mremap.c index cfec004c4ff9..ca0b5cb6ed4d 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -301,6 +301,9 @@ static unsigned long move_vma(struct vm_area_struct *vma, if (!new_vma) return -ENOMEM; + vm_write_begin(vma); + vm_write_begin_nested(new_vma, SINGLE_DEPTH_NESTING); + moved_len = move_page_tables(vma, old_addr, new_vma, new_addr, old_len, need_rmap_locks); if (moved_len < old_len) { @@ -317,6 +320,7 @@ static unsigned long move_vma(struct vm_area_struct *vma, */ move_page_tables(new_vma, new_addr, vma, old_addr, moved_len, true); + vm_write_end(vma); vma = new_vma; old_len = new_len; old_addr = new_addr; @@ -325,7 +329,9 @@ static unsigned long move_vma(struct vm_area_struct *vma, mremap_userfaultfd_prep(new_vma, uf); arch_remap(mm, old_addr, old_addr + old_len, new_addr, new_addr + new_len); + vm_write_end(vma); } + vm_write_end(new_vma); /* Conceal VM_ACCOUNT so old reservation is not undone */ if (vm_flags & VM_ACCOUNT) { From patchwork Wed Oct 11 13:52:32 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824440 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBwl80TGKz9s83 for ; Thu, 12 Oct 2017 01:06:44 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBwl76hSyzDrMx for ; Thu, 12 Oct 2017 01:06:43 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwRm5yD3zDr6L for ; Thu, 12 Oct 2017 00:53:24 +1100 (AEDT) Received: from pps.filterd (m0098409.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDrNCw120598 for ; Wed, 11 Oct 2017 09:53:23 -0400 Received: from e06smtp12.uk.ibm.com (e06smtp12.uk.ibm.com [195.75.94.108]) by mx0a-001b2d01.pphosted.com with ESMTP id 2dhhdrw4j3-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:23 -0400 Received: from localhost by e06smtp12.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:20 +0100 Received: from b06cxnps3075.portsmouth.uk.ibm.com (9.149.109.195) by e06smtp12.uk.ibm.com (192.168.101.142) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted; Wed, 11 Oct 2017 14:53:12 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps3075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrCxm26017948; Wed, 11 Oct 2017 13:53:12 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id E95674C046; Wed, 11 Oct 2017 14:49:08 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 3D52E4C040; Wed, 11 Oct 2017 14:49:07 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:07 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 08/22] mm: RCU free VMAs Date: Wed, 11 Oct 2017 15:52:32 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0008-0000-0000-0000049EC80F X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0009-0000-0000-00001E30D56F Message-Id: <1507729966-10660-9-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=2 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Peter Zijlstra Manage the VMAs with SRCU such that we can do a lockless VMA lookup. We put the fput(vma->vm_file) in the SRCU callback, this keeps files valid during speculative faults, this is possible due to the delayed fput work by Al Viro -- do we need srcu_barrier() in unmount someplace? We guard the mm_rb tree with a seqlock (this could be a seqcount but we'd have to disable preemption around the write side in order to make the retry loop in __read_seqcount_begin() work) such that we can know if the rb tree walk was correct. We cannot trust the restult of a lockless tree walk in the face of concurrent tree rotations; although we can trust on the termination of such walks -- tree rotations guarantee the end result is a tree again after all. Furthermore, we rely on the WMB implied by the write_seqlock/count_begin() to separate the VMA initialization and the publishing stores, analogous to the RELEASE in rcu_assign_pointer(). 
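As a reading aid, the fragment below is a minimal userspace model of the retry pattern being described here; it is not kernel code, and mm_seq/tree merely mimic mm->mm_seq and the mm_rb tree. The writer makes the sequence odd while it modifies the tree, and a lockless reader redoes its walk until it observes a stable, even sequence:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned int mm_seq;	/* models mm->mm_seq */
static _Atomic int tree;		/* stand-in for the rb-tree */

static void writer_rotate(int val)
{
	/* odd sequence: a tree modification is in progress */
	atomic_fetch_add_explicit(&mm_seq, 1, memory_order_release);
	atomic_store_explicit(&tree, val, memory_order_relaxed);
	/* even again: the tree is consistent */
	atomic_fetch_add_explicit(&mm_seq, 1, memory_order_release);
}

static int reader_lookup(void)
{
	unsigned int seq;
	int val;

	do {
		seq = atomic_load_explicit(&mm_seq, memory_order_acquire);
		/* the walk may run against a half-rotated tree ... */
		val = atomic_load_explicit(&tree, memory_order_relaxed);
		/* ... so retry while a writer is active or the seq moved */
	} while ((seq & 1) ||
		 seq != atomic_load_explicit(&mm_seq, memory_order_acquire));

	return val;
}

int main(void)
{
	writer_rotate(42);
	printf("lookup -> %d\n", reader_lookup());
	return 0;
}

find_vma_srcu(), added further down in this patch, wraps the same loop around __find_vma() using read_seqbegin()/read_seqretry(), while SRCU keeps the looked-up VMA (and its file, thanks to the fput() moved into the SRCU callback) alive for the reader.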
We also rely on the RMB from read_seqretry() to separate the vma load from further loads like the smp_read_barrier_depends() in regular RCU. We must not touch the vmacache while doing SRCU lookups as that is not properly serialized against changes. We update gap information after publishing the VMA, but A) we don't use that and B) the seqlock read side would fix that anyhow. We clear vma->vm_rb for nodes removed from the vma tree such that we can easily detect such 'dead' nodes, we rely on the WMB from write_sequnlock() to separate the tree removal and clearing the node. Provide find_vma_srcu() which wraps the required magic. Signed-off-by: Peter Zijlstra (Intel) [Remove the warnings in description about the SRCU global lock which has been removed now] [Rename vma_is_dead() to vma_has_changed() and move its adding to the next patch] [Postpone call to mpol_put() as the policy can be used during the speculative path] Signed-off-by: Laurent Dufour mm/spf: Fix policy free --- include/linux/mm_types.h | 2 + kernel/fork.c | 1 + mm/init-mm.c | 1 + mm/internal.h | 5 +++ mm/mmap.c | 103 ++++++++++++++++++++++++++++++++++------------- 5 files changed, 84 insertions(+), 28 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 3819fec89c7b..d88311dff6ce 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -349,6 +349,7 @@ struct vm_area_struct { #ifdef CONFIG_SPF seqcount_t vm_sequence; #endif + struct rcu_head vm_rcu_head; } __randomize_layout; struct core_thread { @@ -366,6 +367,7 @@ struct kioctx_table; struct mm_struct { struct vm_area_struct *mmap; /* list of VMAs */ struct rb_root mm_rb; + seqlock_t mm_seq; u32 vmacache_seqnum; /* per-thread vmacache */ #ifdef CONFIG_MMU unsigned long (*get_unmapped_area) (struct file *filp, diff --git a/kernel/fork.c b/kernel/fork.c index 778fbe83829b..792788384be1 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -900,6 +900,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, mm->mmap = NULL; mm->mm_rb = RB_ROOT; mm->vmacache_seqnum = 0; + seqlock_init(&mm->mm_seq); atomic_set(&mm->mm_users, 1); atomic_set(&mm->mm_count, 1); init_rwsem(&mm->mmap_sem); diff --git a/mm/init-mm.c b/mm/init-mm.c index 975e49f00f34..2b1fa061684f 100644 --- a/mm/init-mm.c +++ b/mm/init-mm.c @@ -16,6 +16,7 @@ struct mm_struct init_mm = { .mm_rb = RB_ROOT, + .mm_seq = __SEQLOCK_UNLOCKED(init_mm.mm_seq), .pgd = swapper_pg_dir, .mm_users = ATOMIC_INIT(2), .mm_count = ATOMIC_INIT(1), diff --git a/mm/internal.h b/mm/internal.h index 1df011f62480..aced97623bff 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -40,6 +40,11 @@ void page_writeback_init(void); int do_swap_page(struct vm_fault *vmf); +extern struct srcu_struct vma_srcu; + +extern struct vm_area_struct *find_vma_srcu(struct mm_struct *mm, + unsigned long addr); + void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma, unsigned long floor, unsigned long ceiling); diff --git a/mm/mmap.c b/mm/mmap.c index e28136cd3275..2149c3d2a1ef 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -160,6 +160,24 @@ void unlink_file_vma(struct vm_area_struct *vma) } } +DEFINE_SRCU(vma_srcu); + +static void __free_vma(struct rcu_head *head) +{ + struct vm_area_struct *vma = + container_of(head, struct vm_area_struct, vm_rcu_head); + + if (vma->vm_file) + fput(vma->vm_file); + mpol_put(vma_policy(vma)); + kmem_cache_free(vm_area_cachep, vma); +} + +static void free_vma(struct vm_area_struct *vma) +{ + call_srcu(&vma_srcu, &vma->vm_rcu_head, __free_vma); +} + 
/* * Close a vm structure and free it, returning the next. */ @@ -170,10 +188,7 @@ static struct vm_area_struct *remove_vma(struct vm_area_struct *vma) might_sleep(); if (vma->vm_ops && vma->vm_ops->close) vma->vm_ops->close(vma); - if (vma->vm_file) - fput(vma->vm_file); - mpol_put(vma_policy(vma)); - kmem_cache_free(vm_area_cachep, vma); + free_vma(vma); return next; } @@ -411,26 +426,37 @@ static void vma_gap_update(struct vm_area_struct *vma) } static inline void vma_rb_insert(struct vm_area_struct *vma, - struct rb_root *root) + struct mm_struct *mm) { + struct rb_root *root = &mm->mm_rb; + /* All rb_subtree_gap values must be consistent prior to insertion */ validate_mm_rb(root, NULL); rb_insert_augmented(&vma->vm_rb, root, &vma_gap_callbacks); } -static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root) +static void __vma_rb_erase(struct vm_area_struct *vma, struct mm_struct *mm) { + struct rb_root *root = &mm->mm_rb; /* * Note rb_erase_augmented is a fairly large inline function, * so make sure we instantiate it only once with our desired * augmented rbtree callbacks. */ + write_seqlock(&mm->mm_seq); rb_erase_augmented(&vma->vm_rb, root, &vma_gap_callbacks); + write_sequnlock(&mm->mm_seq); /* wmb */ + + /* + * Ensure the removal is complete before clearing the node. + * Matched by vma_has_changed()/handle_speculative_fault(). + */ + RB_CLEAR_NODE(&vma->vm_rb); } static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma, - struct rb_root *root, + struct mm_struct *mm, struct vm_area_struct *ignore) { /* @@ -438,21 +464,21 @@ static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma, * with the possible exception of the "next" vma being erased if * next->vm_start was reduced. */ - validate_mm_rb(root, ignore); + validate_mm_rb(&mm->mm_rb, ignore); - __vma_rb_erase(vma, root); + __vma_rb_erase(vma, mm); } static __always_inline void vma_rb_erase(struct vm_area_struct *vma, - struct rb_root *root) + struct mm_struct *mm) { /* * All rb_subtree_gap values must be consistent prior to erase, * with the possible exception of the vma being erased. */ - validate_mm_rb(root, vma); + validate_mm_rb(&mm->mm_rb, vma); - __vma_rb_erase(vma, root); + __vma_rb_erase(vma, mm); } /* @@ -571,10 +597,12 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma, * immediately update the gap to the correct value. Finally we * rebalance the rbtree after all augmented values have been set. 
*/ + write_seqlock(&mm->mm_seq); rb_link_node(&vma->vm_rb, rb_parent, rb_link); vma->rb_subtree_gap = 0; vma_gap_update(vma); - vma_rb_insert(vma, &mm->mm_rb); + vma_rb_insert(vma, mm); + write_sequnlock(&mm->mm_seq); } static void __vma_link_file(struct vm_area_struct *vma) @@ -650,7 +678,7 @@ static __always_inline void __vma_unlink_common(struct mm_struct *mm, { struct vm_area_struct *next; - vma_rb_erase_ignore(vma, &mm->mm_rb, ignore); + vma_rb_erase_ignore(vma, mm, ignore); next = vma->vm_next; if (has_prev) prev->vm_next = next; @@ -923,15 +951,12 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start, } if (remove_next) { - if (file) { + if (file) uprobe_munmap(next, next->vm_start, next->vm_end); - fput(file); - } if (next->anon_vma) anon_vma_merge(vma, next); mm->map_count--; - mpol_put(vma_policy(next)); - kmem_cache_free(vm_area_cachep, next); + free_vma(next); vm_raw_write_end(next); /* * In mprotect's case 6 (see comments on vma_merge), @@ -2151,15 +2176,10 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len, EXPORT_SYMBOL(get_unmapped_area); /* Look up the first VMA which satisfies addr < vm_end, NULL if none. */ -struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr) +static struct vm_area_struct *__find_vma(struct mm_struct *mm, unsigned long addr) { struct rb_node *rb_node; - struct vm_area_struct *vma; - - /* Check the cache first. */ - vma = vmacache_find(mm, addr); - if (likely(vma)) - return vma; + struct vm_area_struct *vma = NULL; rb_node = mm->mm_rb.rb_node; @@ -2177,13 +2197,40 @@ struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr) rb_node = rb_node->rb_right; } + return vma; +} + +struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr) +{ + struct vm_area_struct *vma; + + /* Check the cache first. */ + vma = vmacache_find(mm, addr); + if (likely(vma)) + return vma; + + vma = __find_vma(mm, addr); if (vma) vmacache_update(addr, vma); return vma; } - EXPORT_SYMBOL(find_vma); +struct vm_area_struct *find_vma_srcu(struct mm_struct *mm, unsigned long addr) +{ + struct vm_area_struct *vma; + unsigned int seq; + + WARN_ON_ONCE(!srcu_read_lock_held(&vma_srcu)); + + do { + seq = read_seqbegin(&mm->mm_seq); + vma = __find_vma(mm, addr); + } while (read_seqretry(&mm->mm_seq, seq)); + + return vma; +} + /* * Same as find_vma, but also return a pointer to the previous VMA in *pprev. */ @@ -2551,7 +2598,7 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma, insertion_point = (prev ? 
&prev->vm_next : &mm->mmap); vma->vm_prev = NULL; do { - vma_rb_erase(vma, &mm->mm_rb); + vma_rb_erase(vma, mm); mm->map_count--; tail_vma = vma; vma = vma->vm_next; From patchwork Wed Oct 11 13:52:33 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824443 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBwpX0bGHz9rxl for ; Thu, 12 Oct 2017 01:09:40 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBwpW6syPzDrVC for ; Thu, 12 Oct 2017 01:09:39 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwRq2ZblzDr6Q for ; Thu, 12 Oct 2017 00:53:27 +1100 (AEDT) Received: from pps.filterd (m0098399.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDqYAU076189 for ; Wed, 11 Oct 2017 09:53:25 -0400 Received: from e06smtp10.uk.ibm.com (e06smtp10.uk.ibm.com [195.75.94.106]) by mx0a-001b2d01.pphosted.com with ESMTP id 2dhmgc88xj-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:25 -0400 Received: from localhost by e06smtp10.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:22 +0100 Received: from b06cxnps4074.portsmouth.uk.ibm.com (9.149.109.196) by e06smtp10.uk.ibm.com (192.168.101.140) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted; Wed, 11 Oct 2017 14:53:15 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps4074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrFd927001036; Wed, 11 Oct 2017 13:53:15 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id B975D4C046; Wed, 11 Oct 2017 14:49:11 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 0FD3A4C040; Wed, 11 Oct 2017 14:49:10 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:09 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 09/22] mm: Cache some VMA fields in the vm_fault structure Date: Wed, 11 Oct 2017 15:52:33 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0040-0000-0000-000003E1C797 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0041-0000-0000-000025E3D45E Message-Id: <1507729966-10660-10-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=2 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" When handling speculative page fault, the vma->vm_flags and vma->vm_page_prot fields are read once the page table lock is released. So there is no more guarantee that these fields would not change in our back. They will be saved in the vm_fault structure before the VMA is checked for changes. This patch also set the fields in hugetlb_no_page() and __collapse_huge_page_swapin even if it is not need for the callee. Signed-off-by: Laurent Dufour --- include/linux/mm.h | 6 ++++++ mm/hugetlb.c | 2 ++ mm/khugepaged.c | 2 ++ mm/memory.c | 38 ++++++++++++++++++++------------------ 4 files changed, 30 insertions(+), 18 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 1e7740170c24..fa7d5d330014 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -350,6 +350,12 @@ struct vm_fault { * page table to avoid allocation from * atomic context. */ + /* + * These entries are required when handling speculative page fault. 
+ * This way the page handling is done using consistent field values. + */ + unsigned long vma_flags; + pgprot_t vma_page_prot; }; /* page entry size for vm->huge_fault() */ diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 72fb45db8330..a9ed37a92ac6 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3686,6 +3686,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma, .vma = vma, .address = address, .flags = flags, + .vma_flags = vma->vm_flags, + .vma_page_prot = vma->vm_page_prot, /* * Hard to debug if it ends up being * used by a callee that assumes diff --git a/mm/khugepaged.c b/mm/khugepaged.c index f723d42140db..f9bf337f73c7 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -881,6 +881,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm, .flags = FAULT_FLAG_ALLOW_RETRY, .pmd = pmd, .pgoff = linear_page_index(vma, address), + .vma_flags = vma->vm_flags, + .vma_page_prot = vma->vm_page_prot, }; /* we only decide to swapin, if there is enough young ptes */ diff --git a/mm/memory.c b/mm/memory.c index eaa86e41124f..b2a2c2ef6e0e 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2599,7 +2599,7 @@ static int wp_page_copy(struct vm_fault *vmf) * Don't let another task, with possibly unlocked vma, * keep the mlocked page. */ - if (page_copied && (vma->vm_flags & VM_LOCKED)) { + if (page_copied && (vmf->vma_flags & VM_LOCKED)) { lock_page(old_page); /* LRU manipulation */ if (PageMlocked(old_page)) munlock_vma_page(old_page); @@ -2633,7 +2633,7 @@ static int wp_page_copy(struct vm_fault *vmf) */ int finish_mkwrite_fault(struct vm_fault *vmf) { - WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED)); + WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED)); if (!pte_map_lock(vmf)) return VM_FAULT_RETRY; /* @@ -2735,7 +2735,7 @@ static int do_wp_page(struct vm_fault *vmf) * We should not cow pages in a shared writeable mapping. * Just mark the pages writable and/or call ops->pfn_mkwrite. */ - if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) == + if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) == (VM_WRITE|VM_SHARED)) return wp_pfn_shared(vmf); @@ -2782,7 +2782,7 @@ static int do_wp_page(struct vm_fault *vmf) return VM_FAULT_WRITE; } unlock_page(vmf->page); - } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) == + } else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) == (VM_WRITE|VM_SHARED))) { return wp_page_shared(vmf); } @@ -3044,7 +3044,7 @@ int do_swap_page(struct vm_fault *vmf) inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS); - pte = mk_pte(page, vma->vm_page_prot); + pte = mk_pte(page, vmf->vma_page_prot); if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) { pte = maybe_mkwrite(pte_mkdirty(pte), vma); vmf->flags &= ~FAULT_FLAG_WRITE; @@ -3070,7 +3070,7 @@ int do_swap_page(struct vm_fault *vmf) swap_free(entry); if (mem_cgroup_swap_full(page) || - (vma->vm_flags & VM_LOCKED) || PageMlocked(page)) + (vmf->vma_flags & VM_LOCKED) || PageMlocked(page)) try_to_free_swap(page); unlock_page(page); if (page != swapcache && swapcache) { @@ -3127,7 +3127,7 @@ static int do_anonymous_page(struct vm_fault *vmf) pte_t entry; /* File mapping without ->vm_ops ? 
*/ - if (vma->vm_flags & VM_SHARED) + if (vmf->vma_flags & VM_SHARED) return VM_FAULT_SIGBUS; /* @@ -3151,7 +3151,7 @@ static int do_anonymous_page(struct vm_fault *vmf) if (!(vmf->flags & FAULT_FLAG_WRITE) && !mm_forbids_zeropage(vma->vm_mm)) { entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address), - vma->vm_page_prot)); + vmf->vma_page_prot)); if (!pte_map_lock(vmf)) return VM_FAULT_RETRY; if (!pte_none(*vmf->pte)) @@ -3184,8 +3184,8 @@ static int do_anonymous_page(struct vm_fault *vmf) */ __SetPageUptodate(page); - entry = mk_pte(page, vma->vm_page_prot); - if (vma->vm_flags & VM_WRITE) + entry = mk_pte(page, vmf->vma_page_prot); + if (vmf->vma_flags & VM_WRITE) entry = pte_mkwrite(pte_mkdirty(entry)); if (!pte_map_lock(vmf)) { @@ -3381,7 +3381,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page) for (i = 0; i < HPAGE_PMD_NR; i++) flush_icache_page(vma, page + i); - entry = mk_huge_pmd(page, vma->vm_page_prot); + entry = mk_huge_pmd(page, vmf->vma_page_prot); if (write) entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); @@ -3455,11 +3455,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, return VM_FAULT_NOPAGE; flush_icache_page(vma, page); - entry = mk_pte(page, vma->vm_page_prot); + entry = mk_pte(page, vmf->vma_page_prot); if (write) entry = maybe_mkwrite(pte_mkdirty(entry), vma); /* copy-on-write page */ - if (write && !(vma->vm_flags & VM_SHARED)) { + if (write && !(vmf->vma_flags & VM_SHARED)) { inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false, false); @@ -3498,7 +3498,7 @@ int finish_fault(struct vm_fault *vmf) /* Did we COW the page? */ if ((vmf->flags & FAULT_FLAG_WRITE) && - !(vmf->vma->vm_flags & VM_SHARED)) + !(vmf->vma_flags & VM_SHARED)) page = vmf->cow_page; else page = vmf->page; @@ -3752,7 +3752,7 @@ static int do_fault(struct vm_fault *vmf) ret = VM_FAULT_SIGBUS; else if (!(vmf->flags & FAULT_FLAG_WRITE)) ret = do_read_fault(vmf); - else if (!(vma->vm_flags & VM_SHARED)) + else if (!(vmf->vma_flags & VM_SHARED)) ret = do_cow_fault(vmf); else ret = do_shared_fault(vmf); @@ -3809,7 +3809,7 @@ static int do_numa_page(struct vm_fault *vmf) * accessible ptes, some can allow access by kernel mode. */ pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte); - pte = pte_modify(pte, vma->vm_page_prot); + pte = pte_modify(pte, vmf->vma_page_prot); pte = pte_mkyoung(pte); if (was_writable) pte = pte_mkwrite(pte); @@ -3843,7 +3843,7 @@ static int do_numa_page(struct vm_fault *vmf) * Flag if the page is shared between multiple address spaces. 
This * is later used when determining whether to group tasks together */ - if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED)) + if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED)) flags |= TNF_SHARED; last_cpupid = page_cpupid_last(page); @@ -3887,7 +3887,7 @@ static int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd) return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD); /* COW handled on pte level: split pmd */ - VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma); + VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma); __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL); return VM_FAULT_FALLBACK; @@ -4034,6 +4034,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address, .flags = flags, .pgoff = linear_page_index(vma, address), .gfp_mask = __get_fault_gfp_mask(vma), + .vma_flags = vma->vm_flags, + .vma_page_prot = vma->vm_page_prot, }; unsigned int dirty = flags & FAULT_FLAG_WRITE; struct mm_struct *mm = vma->vm_mm; From patchwork Wed Oct 11 13:52:34 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824444 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBwrR4jsJz9rxl for ; Thu, 12 Oct 2017 01:11:19 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBwrR3PrjzDrSV for ; Thu, 12 Oct 2017 01:11:19 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwRv2W4DzDr5w for ; Thu, 12 Oct 2017 00:53:31 +1100 (AEDT) Received: from pps.filterd (m0098409.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDoiqh112295 for ; Wed, 11 Oct 2017 09:53:29 -0400 Received: from e06smtp10.uk.ibm.com (e06smtp10.uk.ibm.com [195.75.94.106]) by mx0a-001b2d01.pphosted.com with ESMTP id 2dhhdrw4ph-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:28 -0400 Received: from localhost by e06smtp10.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:25 +0100 Received: from b06cxnps3075.portsmouth.uk.ibm.com (9.149.109.195) by e06smtp10.uk.ibm.com (192.168.101.140) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted; Wed, 11 Oct 2017 14:53:18 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps3075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrIFc29687894; Wed, 11 Oct 2017 13:53:18 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id B47D14C046; Wed, 11 Oct 2017 14:49:14 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 14B034C040; Wed, 11 Oct 2017 14:49:13 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:12 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 10/22] mm: Protect SPF handler against anon_vma changes Date: Wed, 11 Oct 2017 15:52:34 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0040-0000-0000-000003E1C798 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0041-0000-0000-000025E3D461 Message-Id: <1507729966-10660-11-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=2 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" The speculative page fault handler must be protected against anon_vma changes. This is because page_add_new_anon_rmap() is called during the speculative path. In addition, don't try speculative page fault if the VMA don't have an anon_vma structure allocated because its allocation should be protected by the mmap_sem. In __vma_adjust() when importer->anon_vma is set, there is no need to protect against speculative page faults since speculative page fault is aborted if the vma->anon_vma is not set. When calling page_add_new_anon_rmap() vma->anon_vma is necessarily valid since we checked for it when locking the pte and the anon_vma is removed once the pte is unlocked. 
So even if the speculative page fault handler is running concurrently with do_unmap(), as the pte is locked in unmap_region() - through unmap_vmas() - and the anon_vma unlinked later, because we check for the vma sequence counter which is updated in unmap_page_range() before locking the pte, and then in free_pgtables() so when locking the pte the change will be detected. Signed-off-by: Laurent Dufour --- mm/memory.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/mm/memory.c b/mm/memory.c index b2a2c2ef6e0e..681a04f843f0 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -618,7 +618,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma, * Hide vma from rmap and truncate_pagecache before freeing * pgtables */ + vm_write_begin(vma); unlink_anon_vmas(vma); + vm_write_end(vma); unlink_file_vma(vma); if (is_vm_hugetlb_page(vma)) { @@ -632,7 +634,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma, && !is_vm_hugetlb_page(next)) { vma = next; next = vma->vm_next; + vm_write_begin(vma); unlink_anon_vmas(vma); + vm_write_end(vma); unlink_file_vma(vma); } free_pgd_range(tlb, addr, vma->vm_end, From patchwork Wed Oct 11 13:52:35 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824445 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBwsr5xF0z9rxl for ; Thu, 12 Oct 2017 01:12:32 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBwsr4vVWzDrBt for ; Thu, 12 Oct 2017 01:12:32 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.158.5; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwRx57SXzDr5S for ; Thu, 12 Oct 2017 00:53:33 +1100 (AEDT) Received: from pps.filterd (m0098419.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDnLae121008 for ; Wed, 11 Oct 2017 09:53:31 -0400 Received: from e06smtp11.uk.ibm.com (e06smtp11.uk.ibm.com [195.75.94.107]) by mx0b-001b2d01.pphosted.com with ESMTP id 2dhhbm51w2-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:30 -0400 Received: from localhost by e06smtp11.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:28 +0100 Received: from b06cxnps4075.portsmouth.uk.ibm.com (9.149.109.197) by e06smtp11.uk.ibm.com (192.168.101.141) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted; Wed, 11 Oct 2017 14:53:21 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps4075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrLT926083348; Wed, 11 Oct 2017 13:53:21 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 92ED54C04A; Wed, 11 Oct 2017 14:49:17 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id D93CC4C040; Wed, 11 Oct 2017 14:49:15 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:15 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 11/22] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Date: Wed, 11 Oct 2017 15:52:35 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0040-0000-0000-00000401C88C X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0041-0000-0000-000020A3D5BF Message-Id: <1507729966-10660-12-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" migrate_misplaced_page() is only called during the page fault handling so it's better to pass the pointer to the struct vm_fault instead of the vma. This way during the speculative page fault path the saved vma->vm_flags could be used. 
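For illustration only, the helper below is hypothetical and not part of this patch; it shows the shape of the check migrate_misplaced_page() ends up doing once it receives the vm_fault: it tests the flags snapshotted when the fault started instead of re-reading vma->vm_flags, which is not stable when the mmap_sem is not held.

/*
 * Hypothetical helper, for illustration only.  The snapshotted
 * vmf->vma_flags (filled when the vm_fault was set up) is used rather
 * than the live vma->vm_flags, so the result cannot change under a
 * speculative fault.
 */
static bool spf_shared_exec_mapping(struct page *page, struct vm_fault *vmf)
{
	return page_mapcount(page) != 1 &&
	       page_is_file_cache(page) &&
	       (vmf->vma_flags & VM_EXEC);
}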
Signed-off-by: Laurent Dufour --- include/linux/migrate.h | 4 ++-- mm/memory.c | 2 +- mm/migrate.c | 4 ++-- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 643c7ae7d7b4..a815ed6838b3 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -126,14 +126,14 @@ static inline void __ClearPageMovable(struct page *page) #ifdef CONFIG_NUMA_BALANCING extern bool pmd_trans_migrating(pmd_t pmd); extern int migrate_misplaced_page(struct page *page, - struct vm_area_struct *vma, int node); + struct vm_fault *vmf, int node); #else static inline bool pmd_trans_migrating(pmd_t pmd) { return false; } static inline int migrate_misplaced_page(struct page *page, - struct vm_area_struct *vma, int node) + struct vm_fault *vmf, int node) { return -EAGAIN; /* can't migrate now */ } diff --git a/mm/memory.c b/mm/memory.c index 681a04f843f0..94de8e976c01 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3861,7 +3861,7 @@ static int do_numa_page(struct vm_fault *vmf) } /* Migrate to the requested node */ - migrated = migrate_misplaced_page(page, vma, target_nid); + migrated = migrate_misplaced_page(page, vmf, target_nid); if (migrated) { page_nid = target_nid; flags |= TNF_MIGRATED; diff --git a/mm/migrate.c b/mm/migrate.c index e603c123c0d1..386772582e1d 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1939,7 +1939,7 @@ bool pmd_trans_migrating(pmd_t pmd) * node. Caller is expected to have an elevated reference count on * the page that will be dropped by this function before returning. */ -int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma, +int migrate_misplaced_page(struct page *page, struct vm_fault *vmf, int node) { pg_data_t *pgdat = NODE_DATA(node); @@ -1952,7 +1952,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma, * with execute permissions as they are probably shared libraries. 
*/ if (page_mapcount(page) != 1 && page_is_file_cache(page) && - (vma->vm_flags & VM_EXEC)) + (vmf->vma_flags & VM_EXEC)) goto out; /* From patchwork Wed Oct 11 13:52:36 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824446 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBwvV4fy1z9ryT for ; Thu, 12 Oct 2017 01:13:58 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBwvV3fRvzDsWx for ; Thu, 12 Oct 2017 01:13:58 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwS020FVzDr6M for ; Thu, 12 Oct 2017 00:53:36 +1100 (AEDT) Received: from pps.filterd (m0098396.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDmwak028178 for ; Wed, 11 Oct 2017 09:53:34 -0400 Received: from e06smtp14.uk.ibm.com (e06smtp14.uk.ibm.com [195.75.94.110]) by mx0a-001b2d01.pphosted.com with ESMTP id 2dhfepb8d0-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:33 -0400 Received: from localhost by e06smtp14.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:30 +0100 Received: from b06cxnps3074.portsmouth.uk.ibm.com (9.149.109.194) by e06smtp14.uk.ibm.com (192.168.101.144) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted; Wed, 11 Oct 2017 14:53:24 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps3074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrNxd23789626; Wed, 11 Oct 2017 13:53:23 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 6A2B74C04E; Wed, 11 Oct 2017 14:49:20 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id AE7DE4C040; Wed, 11 Oct 2017 14:49:18 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:18 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 12/22] mm: Introduce __lru_cache_add_active_or_unevictable Date: Wed, 11 Oct 2017 15:52:36 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0016-0000-0000-000004F4C7E0 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0017-0000-0000-0000282FD628 Message-Id: <1507729966-10660-13-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" The speculative page fault handler which is run without holding the mmap_sem is calling lru_cache_add_active_or_unevictable() but the vm_flags is not guaranteed to remain constant. Introducing __lru_cache_add_active_or_unevictable() which has the vma flags value parameter instead of the vma pointer. 
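A short sketch of the intended calling convention (the helper below is hypothetical, not code from this series): callers that hold the mmap_sem keep the vma-based wrapper, while the speculative path hands over the flags it snapshotted in the vm_fault structure.

/*
 * Hypothetical helper, for illustration only.
 */
static void example_add_to_lru(struct page *page, struct vm_fault *vmf,
			       bool speculative)
{
	if (speculative)
		/* vma->vm_flags may change under us: use the snapshot */
		__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
	else
		/* mmap_sem held: the vma pointer and its flags are stable */
		lru_cache_add_active_or_unevictable(page, vmf->vma);
}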
Signed-off-by: Laurent Dufour --- include/linux/swap.h | 11 +++++++++-- mm/memory.c | 8 ++++---- mm/swap.c | 12 ++++++------ 3 files changed, 19 insertions(+), 12 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index cd2f66fdfc2d..a50d64f06bcf 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -324,8 +324,15 @@ extern void swap_setup(void); extern void add_page_to_unevictable_list(struct page *page); -extern void lru_cache_add_active_or_unevictable(struct page *page, - struct vm_area_struct *vma); +extern void __lru_cache_add_active_or_unevictable(struct page *page, + unsigned long vma_flags); + +static inline void lru_cache_add_active_or_unevictable(struct page *page, + struct vm_area_struct *vma) +{ + return __lru_cache_add_active_or_unevictable(page, vma->vm_flags); +} + /* linux/mm/vmscan.c */ extern unsigned long zone_reclaimable_pages(struct zone *zone); diff --git a/mm/memory.c b/mm/memory.c index 94de8e976c01..5e5b8e3189f2 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2552,7 +2552,7 @@ static int wp_page_copy(struct vm_fault *vmf) ptep_clear_flush_notify(vma, vmf->address, vmf->pte); page_add_new_anon_rmap(new_page, vma, vmf->address, false); mem_cgroup_commit_charge(new_page, memcg, false, false); - lru_cache_add_active_or_unevictable(new_page, vma); + __lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags); /* * We call the notify macro here because, when using secondary * mmu page tables (such as kvm shadow page tables), we want the @@ -3065,7 +3065,7 @@ int do_swap_page(struct vm_fault *vmf) if (unlikely(page != swapcache && swapcache)) { page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false, false); - lru_cache_add_active_or_unevictable(page, vma); + __lru_cache_add_active_or_unevictable(page, vmf->vma_flags); } else { do_page_add_anon_rmap(page, vma, vmf->address, exclusive); mem_cgroup_commit_charge(page, memcg, true, false); @@ -3215,7 +3215,7 @@ static int do_anonymous_page(struct vm_fault *vmf) inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false, false); - lru_cache_add_active_or_unevictable(page, vma); + __lru_cache_add_active_or_unevictable(page, vmf->vma_flags); setpte: set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry); @@ -3467,7 +3467,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false, false); - lru_cache_add_active_or_unevictable(page, vma); + __lru_cache_add_active_or_unevictable(page, vmf->vma_flags); } else { inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page)); page_add_file_rmap(page, false); diff --git a/mm/swap.c b/mm/swap.c index a77d68f2c1b6..30e1b038f9b0 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -470,21 +470,21 @@ void add_page_to_unevictable_list(struct page *page) } /** - * lru_cache_add_active_or_unevictable - * @page: the page to be added to LRU - * @vma: vma in which page is mapped for determining reclaimability + * __lru_cache_add_active_or_unevictable + * @page: the page to be added to LRU + * @vma_flags: vma in which page is mapped for determining reclaimability * * Place @page on the active or unevictable LRU list, depending on its * evictability. Note that if the page is not evictable, it goes * directly back onto it's zone's unevictable list, it does NOT use a * per cpu pagevec. 
*/ -void lru_cache_add_active_or_unevictable(struct page *page, - struct vm_area_struct *vma) +void __lru_cache_add_active_or_unevictable(struct page *page, + unsigned long vma_flags) { VM_BUG_ON_PAGE(PageLRU(page), page); - if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED)) { + if (likely((vma_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED)) { SetPageActive(page); lru_cache_add(page); return; From patchwork Wed Oct 11 13:52:37 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824447 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBwxc28Qlz9s7F for ; Thu, 12 Oct 2017 01:15:48 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBwxc1MwMzDsVT for ; Thu, 12 Oct 2017 01:15:48 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwS33Yy2zDr6L for ; Thu, 12 Oct 2017 00:53:39 +1100 (AEDT) Received: from pps.filterd (m0098409.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDrND4120598 for ; Wed, 11 Oct 2017 09:53:37 -0400 Received: from e06smtp11.uk.ibm.com (e06smtp11.uk.ibm.com [195.75.94.107]) by mx0a-001b2d01.pphosted.com with ESMTP id 2dhhdrw4w3-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:37 -0400 Received: from localhost by e06smtp11.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:34 +0100 Received: from b06cxnps4076.portsmouth.uk.ibm.com (9.149.109.198) by e06smtp11.uk.ibm.com (192.168.101.141) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted; Wed, 11 Oct 2017 14:53:27 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps4076.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrQVi21364924; Wed, 11 Oct 2017 13:53:26 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 55D3B4C050; Wed, 11 Oct 2017 14:49:23 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 864A04C044; Wed, 11 Oct 2017 14:49:21 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:21 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 13/22] mm: Introduce __maybe_mkwrite() Date: Wed, 11 Oct 2017 15:52:37 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0040-0000-0000-00000401C890 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0041-0000-0000-000020A3D5C7 Message-Id: <1507729966-10660-14-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=2 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" The current maybe_mkwrite() is getting passed the pointer to the vma structure to fetch the vm_flags field. When dealing with the speculative page fault handler, it will be better to rely on the cached vm_flags value stored in the vm_fault structure. This patch introduce a __maybe_mkwrite() service which can be called by passing the value of the vm_flags field. There is no change functional changes expected for the other callers of maybe_mkwrite(). Signed-off-by: Laurent Dufour --- include/linux/mm.h | 9 +++++++-- mm/memory.c | 6 +++--- 2 files changed, 10 insertions(+), 5 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index fa7d5d330014..9290d331a8dc 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -673,13 +673,18 @@ void free_compound_page(struct page *page); * pte_mkwrite. But get_user_pages can cause write faults for mappings * that do not have writing enabled, when used by access_process_vm. 
*/ -static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma) +static inline pte_t __maybe_mkwrite(pte_t pte, unsigned long vma_flags) { - if (likely(vma->vm_flags & VM_WRITE)) + if (likely(vma_flags & VM_WRITE)) pte = pte_mkwrite(pte); return pte; } +static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma) +{ + return __maybe_mkwrite(pte, vma->vm_flags); +} + int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, struct page *page); int finish_fault(struct vm_fault *vmf); diff --git a/mm/memory.c b/mm/memory.c index 5e5b8e3189f2..a940883013c3 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2451,7 +2451,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf) flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte)); entry = pte_mkyoung(vmf->orig_pte); - entry = maybe_mkwrite(pte_mkdirty(entry), vma); + entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags); if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1)) update_mmu_cache(vma, vmf->address, vmf->pte); pte_unmap_unlock(vmf->pte, vmf->ptl); @@ -2541,8 +2541,8 @@ static int wp_page_copy(struct vm_fault *vmf) inc_mm_counter_fast(mm, MM_ANONPAGES); } flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte)); - entry = mk_pte(new_page, vma->vm_page_prot); - entry = maybe_mkwrite(pte_mkdirty(entry), vma); + entry = mk_pte(new_page, vmf->vma_page_prot); + entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags); /* * Clear the pte entry and flush it first, before updating the * pte with the new entry. This will avoid a race condition From patchwork Wed Oct 11 13:52:38 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824448 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBwz81RTFz9s7F for ; Thu, 12 Oct 2017 01:17:08 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBwz80Vf5zDrX9 for ; Thu, 12 Oct 2017 01:17:08 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwS62g51zDr6r for ; Thu, 12 Oct 2017 00:53:42 +1100 (AEDT) Received: from pps.filterd (m0098394.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDojUo052306 for ; Wed, 11 Oct 2017 09:53:40 -0400 Received: from e06smtp13.uk.ibm.com (e06smtp13.uk.ibm.com [195.75.94.109]) by mx0a-001b2d01.pphosted.com with ESMTP id 2dhj86hvej-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:40 -0400 Received: from localhost by e06smtp13.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:37 +0100 Received: from b06cxnps4074.portsmouth.uk.ibm.com (9.149.109.196) by e06smtp13.uk.ibm.com (192.168.101.143) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted; Wed, 11 Oct 2017 14:53:30 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps4074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrTp726476578; Wed, 11 Oct 2017 13:53:29 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 369EA4C044; Wed, 11 Oct 2017 14:49:26 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 70BA14C040; Wed, 11 Oct 2017 14:49:24 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:24 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 14/22] mm: Introduce __vm_normal_page() Date: Wed, 11 Oct 2017 15:52:38 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0012-0000-0000-00000580C7F8 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0013-0000-0000-000018FAD672 Message-Id: <1507729966-10660-15-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" When dealing with the speculative fault path we should use the VMA's field cached value stored in the vm_fault structure. Currently vm_normal_page() is using the pointer to the VMA to fetch the vm_flags value. This patch provides a new __vm_normal_page() which is receiving the vm_flags flags value as parameter. Note: The speculative path is turned on for architecture providing support for special PTE flag. So only the first block of vm_normal_page is used during the speculative path. 
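[For illustration only, here is a minimal user-space sketch of the same refactoring pattern used by __vm_normal_page(): the helper works on an explicit flags snapshot passed by the caller, and a thin compatibility wrapper keeps existing callers unchanged. All names below are hypothetical; this is not kernel code.]

#include <stdio.h>

#define EX_WRITE  0x1UL
#define EX_SHARED 0x2UL

struct example_area {
	unsigned long flags;	/* may be re-read concurrently in real code */
};

/* New-style helper: works purely on the flags value handed to it. */
static int __area_is_writable(unsigned long flags)
{
	return !!(flags & EX_WRITE);
}

/*
 * Old-style wrapper: keeps existing callers unchanged by fetching the
 * flags from the structure itself, exactly like the vm_normal_page()
 * wrapper around __vm_normal_page().
 */
static int area_is_writable(const struct example_area *area)
{
	return __area_is_writable(area->flags);
}

int main(void)
{
	struct example_area a = { .flags = EX_WRITE | EX_SHARED };
	unsigned long snapshot = a.flags;	/* value cached at "fault" entry */

	printf("%d %d\n", area_is_writable(&a), __area_is_writable(snapshot));
	return 0;
}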
Signed-off-by: Laurent Dufour --- include/linux/mm.h | 7 +++++-- mm/memory.c | 18 ++++++++++-------- 2 files changed, 15 insertions(+), 10 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 9290d331a8dc..6feff53f4775 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1246,8 +1246,11 @@ struct zap_details { pgoff_t last_index; /* Highest page->index to unmap */ }; -struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr, - pte_t pte, bool with_public_device); +struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr, + pte_t pte, bool with_public_device, + unsigned long vma_flags); +#define _vm_normal_page(vma, addr, pte, with_public_device) \ + __vm_normal_page(vma, addr, pte, with_public_device, (vma)->vm_flags) #define vm_normal_page(vma, addr, pte) _vm_normal_page(vma, addr, pte, false) struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr, diff --git a/mm/memory.c b/mm/memory.c index a940883013c3..4ad4f0a6f652 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -823,8 +823,9 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr, #else # define HAVE_PTE_SPECIAL 0 #endif -struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr, - pte_t pte, bool with_public_device) +struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr, + pte_t pte, bool with_public_device, + unsigned long vma_flags) { unsigned long pfn = pte_pfn(pte); @@ -833,7 +834,7 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr, goto check_pfn; if (vma->vm_ops && vma->vm_ops->find_special_page) return vma->vm_ops->find_special_page(vma, addr); - if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)) + if (vma_flags & (VM_PFNMAP | VM_MIXEDMAP)) return NULL; if (pte_devmap(pte)) return NULL; @@ -867,8 +868,8 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr, /* !HAVE_PTE_SPECIAL case follows: */ - if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) { - if (vma->vm_flags & VM_MIXEDMAP) { + if (unlikely(vma_flags & (VM_PFNMAP|VM_MIXEDMAP))) { + if (vma_flags & VM_MIXEDMAP) { if (!pfn_valid(pfn)) return NULL; goto out; @@ -877,7 +878,7 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr, off = (addr - vma->vm_start) >> PAGE_SHIFT; if (pfn == vma->vm_pgoff + off) return NULL; - if (!is_cow_mapping(vma->vm_flags)) + if (!is_cow_mapping(vma_flags)) return NULL; } } @@ -2730,7 +2731,8 @@ static int do_wp_page(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; - vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte); + vmf->page = __vm_normal_page(vma, vmf->address, vmf->orig_pte, false, + vmf->vma_flags); if (!vmf->page) { /* * VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a @@ -3820,7 +3822,7 @@ static int do_numa_page(struct vm_fault *vmf) ptep_modify_prot_commit(vma->vm_mm, vmf->address, vmf->pte, pte); update_mmu_cache(vma, vmf->address, vmf->pte); - page = vm_normal_page(vma, vmf->address, pte); + page = __vm_normal_page(vma, vmf->address, pte, false, vmf->vma_flags); if (!page) { pte_unmap_unlock(vmf->pte, vmf->ptl); return 0; From patchwork Wed Oct 11 13:52:39 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824449 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org 
[IPv6:2401:3900:2:1::3]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBx0n1cW9z9s7F for ; Thu, 12 Oct 2017 01:18:33 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBx0n0L49zDrns for ; Thu, 12 Oct 2017 01:18:33 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.158.5; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwS83pSZzDr6D for ; Thu, 12 Oct 2017 00:53:44 +1100 (AEDT) Received: from pps.filterd (m0098421.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDoO1V113996 for ; Wed, 11 Oct 2017 09:53:42 -0400 Received: from e06smtp10.uk.ibm.com (e06smtp10.uk.ibm.com [195.75.94.106]) by mx0a-001b2d01.pphosted.com with ESMTP id 2dhhahdse9-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:42 -0400 Received: from localhost by e06smtp10.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:39 +0100 Received: from b06cxnps3075.portsmouth.uk.ibm.com (9.149.109.195) by e06smtp10.uk.ibm.com (192.168.101.140) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted; Wed, 11 Oct 2017 14:53:32 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps3075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrWk529491380; Wed, 11 Oct 2017 13:53:32 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 00E3B4C044; Wed, 11 Oct 2017 14:49:29 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 510074C040; Wed, 11 Oct 2017 14:49:27 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:27 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 15/22] mm: Introduce __page_add_new_anon_rmap() Date: Wed, 11 Oct 2017 15:52:39 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0040-0000-0000-000003E1C79D X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0041-0000-0000-000025E3D466 Message-Id: <1507729966-10660-16-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: 
rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" When dealing with speculative page fault handler, we may race with VMA being split or merged. In this case the vma->vm_start and vm->vm_end fields may not match the address the page fault is occurring. This can only happens when the VMA is split but in that case, the anon_vma pointer of the new VMA will be the same as the original one, because in __split_vma the new->anon_vma is set to src->anon_vma when *new = *vma. So even if the VMA boundaries are not correct, the anon_vma pointer is still valid. If the VMA has been merged, then the VMA in which it has been merged must have the same anon_vma pointer otherwise the merge can't be done. So in all the case we know that the anon_vma is valid, since we have checked before starting the speculative page fault that the anon_vma pointer is valid for this VMA and since there is an anon_vma this means that at one time a page has been backed and that before the VMA is cleaned, the page table lock would have to be grab to clean the PTE, and the anon_vma field is checked once the PTE is locked. This patch introduce a new __page_add_new_anon_rmap() service which doesn't check for the VMA boundaries, and create a new inline one which do the check. When called from a page fault handler, if this is not a speculative one, there is a guarantee that vm_start and vm_end match the faulting address, so this check is useless. In the context of the speculative page fault handler, this check may be wrong but anon_vma is still valid as explained above. Signed-off-by: Laurent Dufour --- include/linux/rmap.h | 12 ++++++++++-- mm/memory.c | 8 ++++---- mm/rmap.c | 5 ++--- 3 files changed, 16 insertions(+), 9 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 733d3d8181e2..d91be69c1c60 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -173,8 +173,16 @@ void page_add_anon_rmap(struct page *, struct vm_area_struct *, unsigned long, bool); void do_page_add_anon_rmap(struct page *, struct vm_area_struct *, unsigned long, int); -void page_add_new_anon_rmap(struct page *, struct vm_area_struct *, - unsigned long, bool); +void __page_add_new_anon_rmap(struct page *, struct vm_area_struct *, + unsigned long, bool); +static inline void page_add_new_anon_rmap(struct page *page, + struct vm_area_struct *vma, + unsigned long address, bool compound) +{ + VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma); + __page_add_new_anon_rmap(page, vma, address, compound); +} + void page_add_file_rmap(struct page *, bool); void page_remove_rmap(struct page *, bool); diff --git a/mm/memory.c b/mm/memory.c index 4ad4f0a6f652..3705ff3e04d5 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2551,7 +2551,7 @@ static int wp_page_copy(struct vm_fault *vmf) * thread doing COW. 
*/ ptep_clear_flush_notify(vma, vmf->address, vmf->pte); - page_add_new_anon_rmap(new_page, vma, vmf->address, false); + __page_add_new_anon_rmap(new_page, vma, vmf->address, false); mem_cgroup_commit_charge(new_page, memcg, false, false); __lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags); /* @@ -3065,7 +3065,7 @@ int do_swap_page(struct vm_fault *vmf) /* ksm created a completely new copy */ if (unlikely(page != swapcache && swapcache)) { - page_add_new_anon_rmap(page, vma, vmf->address, false); + __page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false, false); __lru_cache_add_active_or_unevictable(page, vmf->vma_flags); } else { @@ -3215,7 +3215,7 @@ static int do_anonymous_page(struct vm_fault *vmf) } inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); - page_add_new_anon_rmap(page, vma, vmf->address, false); + __page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false, false); __lru_cache_add_active_or_unevictable(page, vmf->vma_flags); setpte: @@ -3467,7 +3467,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, /* copy-on-write page */ if (write && !(vmf->vma_flags & VM_SHARED)) { inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); - page_add_new_anon_rmap(page, vma, vmf->address, false); + __page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false, false); __lru_cache_add_active_or_unevictable(page, vmf->vma_flags); } else { diff --git a/mm/rmap.c b/mm/rmap.c index 787c07fb37dc..357ea765e795 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1138,7 +1138,7 @@ void do_page_add_anon_rmap(struct page *page, } /** - * page_add_new_anon_rmap - add pte mapping to a new anonymous page + * __page_add_new_anon_rmap - add pte mapping to a new anonymous page * @page: the page to add the mapping to * @vma: the vm area in which the mapping is added * @address: the user virtual address mapped @@ -1148,12 +1148,11 @@ void do_page_add_anon_rmap(struct page *page, * This means the inc-and-test can be bypassed. * Page does not have to be locked. */ -void page_add_new_anon_rmap(struct page *page, +void __page_add_new_anon_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address, bool compound) { int nr = compound ? 
hpage_nr_pages(page) : 1; - VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma); __SetPageSwapBacked(page); if (compound) { VM_BUG_ON_PAGE(!PageTransHuge(page), page); From patchwork Wed Oct 11 13:52:40 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824450 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBx2L2ljXz9s7M for ; Thu, 12 Oct 2017 01:19:54 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBx2L1nqczDrV3 for ; Thu, 12 Oct 2017 01:19:54 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwSB43gZzDr9C for ; Thu, 12 Oct 2017 00:53:46 +1100 (AEDT) Received: from pps.filterd (m0098393.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDo0gA025889 for ; Wed, 11 Oct 2017 09:53:44 -0400 Received: from e06smtp13.uk.ibm.com (e06smtp13.uk.ibm.com [195.75.94.109]) by mx0a-001b2d01.pphosted.com with ESMTP id 2dhh9wnx2f-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:44 -0400 Received: from localhost by e06smtp13.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:41 +0100 Received: from b06cxnps4074.portsmouth.uk.ibm.com (9.149.109.196) by e06smtp13.uk.ibm.com (192.168.101.143) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted; Wed, 11 Oct 2017 14:53:35 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps4074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrZwV27263202; Wed, 11 Oct 2017 13:53:35 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id E24714C046; Wed, 11 Oct 2017 14:49:31 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 1C7A64C04A; Wed, 11 Oct 2017 14:49:30 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:30 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 16/22] mm: Provide speculative fault infrastructure Date: Wed, 11 Oct 2017 15:52:40 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0012-0000-0000-00000580C7FC X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0013-0000-0000-000018FAD675 Message-Id: <1507729966-10660-17-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=2 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Peter Zijlstra Provide infrastructure to do a speculative fault (not holding mmap_sem). The not holding of mmap_sem means we can race against VMA change/removal and page-table destruction. We use the SRCU VMA freeing to keep the VMA around. We use the VMA seqcount to detect change (including umapping / page-table deletion) and we use gup_fast() style page-table walking to deal with page-table races. Once we've obtained the page and are ready to update the PTE, we validate if the state we started the fault with is still valid, if not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the PTE and we're done. 
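[As a side note, the control flow of the "validate or retry" scheme can be sketched in a few lines of user-space C. This is a single-threaded demo of the sequence-count pattern only, not the kernel seqcount API, and all names are hypothetical.]

#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned int seq;	/* even = stable, odd = writer active */
static int shared_value;

static void writer_update(int v)
{
	atomic_fetch_add_explicit(&seq, 1, memory_order_release); /* now odd */
	shared_value = v;
	atomic_fetch_add_explicit(&seq, 1, memory_order_release); /* even again */
}

static int reader_speculative(int *out)
{
	unsigned int s = atomic_load_explicit(&seq, memory_order_acquire);

	if (s & 1)
		return -1;	/* writer in progress: bail out and retry */

	int v = shared_value;	/* speculative work done without a lock */

	if (atomic_load_explicit(&seq, memory_order_acquire) != s)
		return -1;	/* something changed under us: retry */

	*out = v;
	return 0;
}

int main(void)
{
	int v = 0;

	writer_update(42);
	printf("%s %d\n", reader_speculative(&v) ? "retry" : "ok", v);
	return 0;
}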
Signed-off-by: Peter Zijlstra (Intel) [Manage the newly introduced pte_spinlock() for speculative page fault to fail if the VMA is touched in our back] [Rename vma_is_dead() to vma_has_changed() and declare it here] [Call p4d_alloc() as it is safe since pgd is valid] [Call pud_alloc() as it is safe since p4d is valid] [Set fe.sequence in __handle_mm_fault()] [Abort speculative path when handle_userfault() has to be called] [Add additional VMA's flags checks in handle_speculative_fault()] [Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()] [Don't set vmf->pte and vmf->ptl if pte_map_lock() failed] [Remove warning comment about waiting for !seq&1 since we don't want to wait] [Remove warning about no huge page support, mention it explictly] [Don't call do_fault() in the speculative path as __do_fault() calls vma->vm_ops->fault() which may want to release mmap_sem] [Only vm_fault pointer argument for vma_has_changed()] [Fix check against huge page, calling pmd_trans_huge()] [Use READ_ONCE() when reading VMA's fields in the speculative path] [Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support for processing done in vm_normal_page()] [Check that vma->anon_vma is already set when starting the speculative path] [Check for memory policy as we can't support MPOL_INTERLEAVE case due to the processing done in mpol_misplaced()] [Don't support VMA growing up or down] [Move check on vm_sequence just before calling handle_pte_fault()] [Don't build SPF services if !CONFIG_SPF] [Add mem cgroup oom check] [Use use READ_ONCE to access p*d entries] Signed-off-by: Laurent Dufour --- include/linux/hugetlb_inline.h | 2 +- include/linux/mm.h | 5 + include/linux/pagemap.h | 4 +- mm/internal.h | 16 +++ mm/memory.c | 285 ++++++++++++++++++++++++++++++++++++++++- 5 files changed, 306 insertions(+), 6 deletions(-) diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h index a4e7ca0f3585..6cfdfca4cc2a 100644 --- a/include/linux/hugetlb_inline.h +++ b/include/linux/hugetlb_inline.h @@ -7,7 +7,7 @@ static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma) { - return !!(vma->vm_flags & VM_HUGETLB); + return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB); } #else diff --git a/include/linux/mm.h b/include/linux/mm.h index 6feff53f4775..00ca43bf28da 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -320,6 +320,7 @@ struct vm_fault { gfp_t gfp_mask; /* gfp mask to be used for allocations */ pgoff_t pgoff; /* Logical page offset based on vma */ unsigned long address; /* Faulting virtual address */ + unsigned int sequence; pmd_t *pmd; /* Pointer to pmd entry matching * the 'address' */ pud_t *pud; /* Pointer to pud entry matching @@ -1342,6 +1343,10 @@ int invalidate_inode_page(struct page *page); #ifdef CONFIG_MMU extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags); +#ifdef CONFIG_SPF +extern int handle_speculative_fault(struct mm_struct *mm, + unsigned long address, unsigned int flags); +#endif /* CONFIG_SPF */ extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, unsigned long address, unsigned int fault_flags, bool *unlocked); diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 5bbd6780f205..832aa3ec7d00 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -451,8 +451,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma, pgoff_t pgoff; if (unlikely(is_vm_hugetlb_page(vma))) return linear_hugepage_index(vma, address); - pgoff = (address - 
vma->vm_start) >> PAGE_SHIFT; - pgoff += vma->vm_pgoff; + pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT; + pgoff += READ_ONCE(vma->vm_pgoff); return pgoff; } diff --git a/mm/internal.h b/mm/internal.h index aced97623bff..f75fe537286a 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -45,6 +45,22 @@ extern struct srcu_struct vma_srcu; extern struct vm_area_struct *find_vma_srcu(struct mm_struct *mm, unsigned long addr); +#ifdef CONFIG_SPF +static inline bool vma_has_changed(struct vm_fault *vmf) +{ + int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb); + unsigned seq = ACCESS_ONCE(vmf->vma->vm_sequence.sequence); + + /* + * Matches both the wmb in write_seqlock_{begin,end}() and + * the wmb in vma_rb_erase(). + */ + smp_rmb(); + + return ret || seq != vmf->sequence; +} +#endif /* CONFIG_SPF */ + void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma, unsigned long floor, unsigned long ceiling); diff --git a/mm/memory.c b/mm/memory.c index 3705ff3e04d5..eff40abfc1a6 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -763,7 +763,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr, if (page) dump_page(page, "bad pte"); pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n", - (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index); + (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma, + mapping, index); /* * Choose text because data symbols depend on CONFIG_KALLSYMS_ALL=y */ @@ -2458,18 +2459,90 @@ static inline void wp_page_reuse(struct vm_fault *vmf) pte_unmap_unlock(vmf->pte, vmf->ptl); } +#ifdef CONFIG_SPF static bool pte_spinlock(struct vm_fault *vmf) { + bool ret = false; + + /* Check if vma is still valid */ + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) { + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd); + spin_lock(vmf->ptl); + return true; + } + + local_irq_disable(); + if (vma_has_changed(vmf)) + goto out; + + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd); + spin_lock(vmf->ptl); + + if (vma_has_changed(vmf)) { + spin_unlock(vmf->ptl); + goto out; + } + + ret = true; +out: + local_irq_enable(); + return ret; +} +#else +static inline bool pte_spinlock(struct vm_fault *vmf) +{ vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd); spin_lock(vmf->ptl); return true; } +#endif /* CONFIG_SPF */ +#ifdef CONFIG_SPF static bool pte_map_lock(struct vm_fault *vmf) { - vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl); + bool ret = false; + pte_t *pte; + spinlock_t *ptl; + + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) { + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, + vmf->address, &vmf->ptl); + return true; + } + + /* + * The first vma_has_changed() guarantees the page-tables are still + * valid, having IRQs disabled ensures they stay around, hence the + * second vma_has_changed() to make sure they are still valid once + * we've got the lock. After that a concurrent zap_pte_range() will + * block on the PTL and thus we're safe. 
+ */ + local_irq_disable(); + if (vma_has_changed(vmf)) + goto out; + + pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, + vmf->address, &ptl); + if (vma_has_changed(vmf)) { + pte_unmap_unlock(pte, ptl); + goto out; + } + + vmf->pte = pte; + vmf->ptl = ptl; + ret = true; +out: + local_irq_enable(); + return ret; +} +#else +static inline bool pte_map_lock(struct vm_fault *vmf) +{ + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, + vmf->address, &vmf->ptl); return true; } +#endif /* CONFIG_SPF */ /* * Handle the case of a page which we actually need to copy to a new page. @@ -3165,6 +3238,14 @@ static int do_anonymous_page(struct vm_fault *vmf) ret = check_stable_address_space(vma->vm_mm); if (ret) goto unlock; + /* + * Don't call the userfaultfd during the speculative path. + * We already checked for the VMA to not be managed through + * userfaultfd, but it may be set in our back once we have lock + * the pte. In such a case we can ignore it this time. + */ + if (vmf->flags & FAULT_FLAG_SPECULATIVE) + goto setpte; /* Deliver the page fault to userland, check inside PT lock */ if (userfaultfd_missing(vma)) { pte_unmap_unlock(vmf->pte, vmf->ptl); @@ -3207,7 +3288,7 @@ static int do_anonymous_page(struct vm_fault *vmf) goto release; /* Deliver the page fault to userland, check inside PT lock */ - if (userfaultfd_missing(vma)) { + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) { pte_unmap_unlock(vmf->pte, vmf->ptl); mem_cgroup_cancel_charge(page, memcg, false); put_page(page); @@ -3986,6 +4067,8 @@ static int handle_pte_fault(struct vm_fault *vmf) if (!vmf->pte) { if (vma_is_anonymous(vmf->vma)) return do_anonymous_page(vmf); + else if (vmf->flags & FAULT_FLAG_SPECULATIVE) + return VM_FAULT_RETRY; else return do_fault(vmf); } @@ -4083,6 +4166,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address, vmf.pmd = pmd_alloc(mm, vmf.pud, address); if (!vmf.pmd) return VM_FAULT_OOM; +#ifdef CONFIG_SPF + vmf.sequence = raw_read_seqcount(&vma->vm_sequence); +#endif if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) { ret = create_huge_pmd(&vmf); if (!(ret & VM_FAULT_FALLBACK)) @@ -4116,6 +4202,199 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address, return handle_pte_fault(&vmf); } +#ifdef CONFIG_SPF + +#ifndef __HAVE_ARCH_PTE_SPECIAL +/* This is required by vm_normal_page() */ +#error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL" +#endif + +/* + * vm_normal_page() adds some processing which should be done while + * hodling the mmap_sem. + */ +int handle_speculative_fault(struct mm_struct *mm, unsigned long address, + unsigned int flags) +{ + struct vm_fault vmf = { + .address = address, + }; + pgd_t *pgd, pgde; + p4d_t *p4d, p4de; + pud_t *pud, pude; + pmd_t *pmd, pmde; + int dead, seq, idx, ret = VM_FAULT_RETRY; + struct vm_area_struct *vma; +#ifdef CONFIG_NUMA + struct mempolicy *pol; +#endif + + /* Clear flags that may lead to release the mmap_sem to retry */ + flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE); + flags |= FAULT_FLAG_SPECULATIVE; + + idx = srcu_read_lock(&vma_srcu); + vma = find_vma_srcu(mm, address); + if (!vma) + goto unlock; + + /* + * Validate the VMA found by the lockless lookup. + */ + dead = RB_EMPTY_NODE(&vma->vm_rb); + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */ + if ((seq & 1) || dead) + goto unlock; + + /* + * Can't call vm_ops service has we don't know what they would do + * with the VMA. 
+ * This include huge page from hugetlbfs. + */ + if (vma->vm_ops) + goto unlock; + + /* + * __anon_vma_prepare() requires the mmap_sem to be held + * because vm_next and vm_prev must be safe. This can't be guaranteed + * in the speculative path. + */ + if (unlikely(!vma->anon_vma)) + goto unlock; + + vmf.vma_flags = READ_ONCE(vma->vm_flags); + vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot); + + /* Can't call userland page fault handler in the speculative path */ + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) + goto unlock; + +#ifdef CONFIG_NUMA + /* + * MPOL_INTERLEAVE implies additional check in mpol_misplaced() which + * are not compatible with the speculative page fault processing. + */ + pol = __get_vma_policy(vma, address); + if (!pol) + pol = get_task_policy(current); + if (pol && pol->mode == MPOL_INTERLEAVE) + goto unlock; +#endif + + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) + /* + * This could be detected by the check address against VMA's + * boundaries but we want to trace it as not supported instead + * of changed. + */ + goto unlock; + + if (address < READ_ONCE(vma->vm_start) + || READ_ONCE(vma->vm_end) <= address) + goto unlock; + + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE, + flags & FAULT_FLAG_INSTRUCTION, + flags & FAULT_FLAG_REMOTE)) { + ret = VM_FAULT_SIGSEGV; + goto unlock; + } + + /* This is one is required to check that the VMA has write access set */ + if (flags & FAULT_FLAG_WRITE) { + if (unlikely(!(vmf.vma_flags & VM_WRITE))) { + ret = VM_FAULT_SIGSEGV; + goto unlock; + } + } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) { + ret = VM_FAULT_SIGSEGV; + goto unlock; + } + + /* + * Do a speculative lookup of the PTE entry. + */ + local_irq_disable(); + pgd = pgd_offset(mm, address); + pgde = READ_ONCE(*pgd); + if (pgd_none(pgde) || unlikely(pgd_bad(pgde))) + goto out_walk; + + p4d = p4d_alloc(mm, pgd, address); + p4de = READ_ONCE(*p4d); + if (p4d_none(p4de) || unlikely(p4d_bad(p4de))) + goto out_walk; + + pud = pud_alloc(mm, p4d, address); + pude = READ_ONCE(*pud); + if (pud_none(pude) || unlikely(pud_bad(pude))) + goto out_walk; + + /* Transparent huge pages are not supported. */ + if (unlikely(pud_trans_huge(pude))) + goto out_walk; + + pmd = pmd_offset(pud, address); + pmde = READ_ONCE(*pmd); + if (pmd_none(pmde) || unlikely(pmd_bad(pmde))) + goto out_walk; + + /* + * The above does not allocate/instantiate page-tables because doing so + * would lead to the possibility of instantiating page-tables after + * free_pgtables() -- and consequently leaking them. + * + * The result is that we take at least one !speculative fault per PMD + * in order to instantiate it. + */ + /* Transparent huge pages are not supported. */ + if (unlikely(pmd_trans_huge(pmde))) + goto out_walk; + + vmf.vma = vma; + vmf.pmd = pmd; + vmf.pgoff = linear_page_index(vma, address); + vmf.gfp_mask = __get_fault_gfp_mask(vma); + vmf.sequence = seq; + vmf.flags = flags; + + local_irq_enable(); + + /* + * We need to re-validate the VMA after checking the bounds, otherwise + * we might have a false positive on the bounds. + */ + if (read_seqcount_retry(&vma->vm_sequence, seq)) + goto unlock; + + mem_cgroup_oom_enable(); + ret = handle_pte_fault(&vmf); + mem_cgroup_oom_disable(); + + /* + * There is no more need to hold SRCU since the VMA pointer is no more + * used. Release it right now to avoid longer SRCU grace period. 
+ */ + srcu_read_unlock(&vma_srcu, idx); + + /* + * The task may have entered a memcg OOM situation but + * if the allocation error was handled gracefully (no + * VM_FAULT_OOM), there is no need to kill anything. + * Just clean up the OOM state peacefully. + */ + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM)) + mem_cgroup_oom_synchronize(false); + return ret; + +out_walk: + local_irq_enable(); +unlock: + srcu_read_unlock(&vma_srcu, idx); + return ret; +} +#endif /* CONFIG_SPF */ + /* * By the time we get here, we already hold the mm semaphore * From patchwork Wed Oct 11 13:52:41 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824451 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBx4P0H68z9s7M for ; Thu, 12 Oct 2017 01:21:41 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBx4N6VRmzDsF9 for ; Thu, 12 Oct 2017 01:21:40 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.158.5; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwSH0pcBzDr9F for ; Thu, 12 Oct 2017 00:53:50 +1100 (AEDT) Received: from pps.filterd (m0098414.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDonjA117539 for ; Wed, 11 Oct 2017 09:53:48 -0400 Received: from e06smtp12.uk.ibm.com (e06smtp12.uk.ibm.com [195.75.94.108]) by mx0b-001b2d01.pphosted.com with ESMTP id 2dhhqbv03h-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:48 -0400 Received: from localhost by e06smtp12.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:45 +0100 Received: from b06cxnps3075.portsmouth.uk.ibm.com (9.149.109.195) by e06smtp12.uk.ibm.com (192.168.101.142) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted; Wed, 11 Oct 2017 14:53:38 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps3075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrcHY31981706; Wed, 11 Oct 2017 13:53:38 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id B06AD4C058; Wed, 11 Oct 2017 14:49:34 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 08FF64C044; Wed, 11 Oct 2017 14:49:33 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:32 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 17/22] mm: Try spin lock in speculative path Date: Wed, 11 Oct 2017 15:52:41 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0008-0000-0000-0000049EC815 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0009-0000-0000-00001E30D577 Message-Id: <1507729966-10660-18-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" There is a deadlock when a CPU is doing a speculative page fault and another one is calling do_unmap(). The deadlock occurred because the speculative path try to spinlock the pte while the interrupt are disabled. When the other CPU in the unmap's path has locked the pte then is waiting for all the CPU to invalidate the TLB. As the CPU doing the speculative fault have the interrupt disable it can't invalidate the TLB, and can't get the lock. Since we are in a speculative path, we can race with other mm action. So let assume that the lock may not get acquired and fail the speculative page fault. Here are the stacks captured during the deadlock: CPU 0 native_flush_tlb_others+0x7c/0x260 flush_tlb_mm_range+0x6a/0x220 tlb_flush_mmu_tlbonly+0x63/0xc0 unmap_page_range+0x897/0x9d0 ? unmap_single_vma+0x7d/0xe0 ? release_pages+0x2b3/0x360 unmap_single_vma+0x7d/0xe0 unmap_vmas+0x51/0xa0 unmap_region+0xbd/0x130 do_munmap+0x279/0x460 SyS_munmap+0x53/0x70 CPU 1 do_raw_spin_lock+0x14e/0x160 _raw_spin_lock+0x5d/0x80 ? 
pte_map_lock+0x169/0x1b0 pte_map_lock+0x169/0x1b0 handle_pte_fault+0xbf2/0xd80 ? trace_hardirqs_on+0xd/0x10 handle_speculative_fault+0x272/0x280 handle_speculative_fault+0x5/0x280 __do_page_fault+0x187/0x580 trace_do_page_fault+0x52/0x260 do_async_page_fault+0x19/0x70 async_page_fault+0x28/0x30 Signed-off-by: Laurent Dufour --- mm/memory.c | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index eff40abfc1a6..d1278fc15a91 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2476,7 +2476,8 @@ static bool pte_spinlock(struct vm_fault *vmf) goto out; vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd); - spin_lock(vmf->ptl); + if (unlikely(!spin_trylock(vmf->ptl))) + goto out; if (vma_has_changed(vmf)) { spin_unlock(vmf->ptl); @@ -2521,8 +2522,20 @@ static bool pte_map_lock(struct vm_fault *vmf) if (vma_has_changed(vmf)) goto out; - pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, - vmf->address, &ptl); + /* + * Same as pte_offset_map_lock() except that we call + * spin_trylock() in place of spin_lock() to avoid race with + * unmap path which may have the lock and wait for this CPU + * to invalidate TLB but this CPU has irq disabled. + * Since we are in a speculative patch, accept it could fail + */ + ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd); + pte = pte_offset_map(vmf->pmd, vmf->address); + if (unlikely(!spin_trylock(ptl))) { + pte_unmap(pte); + goto out; + } + if (vma_has_changed(vmf)) { pte_unmap_unlock(pte, ptl); goto out; From patchwork Wed Oct 11 13:52:42 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824453 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBx5t667xz9s7M for ; Thu, 12 Oct 2017 01:22:58 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBx5t5C2TzDrWn for ; Thu, 12 Oct 2017 01:22:58 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwSL05PnzDr6n for ; Thu, 12 Oct 2017 00:53:53 +1100 (AEDT) Received: from pps.filterd (m0098399.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDogdx070673 for ; Wed, 11 Oct 2017 09:53:51 -0400 Received: from e06smtp14.uk.ibm.com (e06smtp14.uk.ibm.com [195.75.94.110]) by mx0a-001b2d01.pphosted.com with ESMTP id 2dhmgc89fu-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:51 -0400 Received: from localhost by e06smtp14.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:48 +0100 Received: from b06cxnps4075.portsmouth.uk.ibm.com (9.149.109.197) by e06smtp14.uk.ibm.com (192.168.101.144) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted; Wed, 11 Oct 2017 14:53:41 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps4075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrfjk23396374; Wed, 11 Oct 2017 13:53:41 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 8A6104C044; Wed, 11 Oct 2017 14:49:37 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id CB4C34C040; Wed, 11 Oct 2017 14:49:35 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:35 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 18/22] mm: Adding speculative page fault failure trace events Date: Wed, 11 Oct 2017 15:52:42 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0016-0000-0000-000004F4C7EE X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0017-0000-0000-0000282FD633 Message-Id: <1507729966-10660-19-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" This patch a set of new trace events to collect the speculative page fault event failures. 
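[A possible way to consume these events from user space, sketched in C. The tracefs mount point and the events/pagefault/ path are assumptions based on the usual tracefs layout; only the "pagefault" trace system name comes from this patch.]

#include <stdio.h>

static int write_str(const char *path, const char *s)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(s, f);
	fclose(f);
	return 0;
}

int main(void)
{
	/* Enable every event in the new "pagefault" trace system at once. */
	write_str("/sys/kernel/debug/tracing/events/pagefault/enable", "1");
	/* Failure events such as spf_vma_changed then appear in the buffer. */
	write_str("/sys/kernel/debug/tracing/tracing_on", "1");
	return 0;
}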
Signed-off-by: Laurent Dufour --- include/trace/events/pagefault.h | 87 ++++++++++++++++++++++++++++++++++++++++ mm/memory.c | 59 ++++++++++++++++++++++----- 2 files changed, 135 insertions(+), 11 deletions(-) create mode 100644 include/trace/events/pagefault.h diff --git a/include/trace/events/pagefault.h b/include/trace/events/pagefault.h new file mode 100644 index 000000000000..d7d56f8102d1 --- /dev/null +++ b/include/trace/events/pagefault.h @@ -0,0 +1,87 @@ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM pagefault + +#if !defined(_TRACE_PAGEFAULT_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_PAGEFAULT_H + +#include +#include + +DECLARE_EVENT_CLASS(spf, + + TP_PROTO(unsigned long caller, + struct vm_area_struct *vma, unsigned long address), + + TP_ARGS(caller, vma, address), + + TP_STRUCT__entry( + __field(unsigned long, caller) + __field(unsigned long, vm_start) + __field(unsigned long, vm_end) + __field(unsigned long, address) + ), + + TP_fast_assign( + __entry->caller = caller; + __entry->vm_start = vma->vm_start; + __entry->vm_end = vma->vm_end; + __entry->address = address; + ), + + TP_printk("ip:%lx vma:%lu-%lx address:%lx", + __entry->caller, __entry->vm_start, __entry->vm_end, + __entry->address) +); + +DEFINE_EVENT(spf, spf_pte_lock, + + TP_PROTO(unsigned long caller, + struct vm_area_struct *vma, unsigned long address), + + TP_ARGS(caller, vma, address) +); + +DEFINE_EVENT(spf, spf_vma_changed, + + TP_PROTO(unsigned long caller, + struct vm_area_struct *vma, unsigned long address), + + TP_ARGS(caller, vma, address) +); + +DEFINE_EVENT(spf, spf_vma_dead, + + TP_PROTO(unsigned long caller, + struct vm_area_struct *vma, unsigned long address), + + TP_ARGS(caller, vma, address) +); + +DEFINE_EVENT(spf, spf_vma_noanon, + + TP_PROTO(unsigned long caller, + struct vm_area_struct *vma, unsigned long address), + + TP_ARGS(caller, vma, address) +); + +DEFINE_EVENT(spf, spf_vma_notsup, + + TP_PROTO(unsigned long caller, + struct vm_area_struct *vma, unsigned long address), + + TP_ARGS(caller, vma, address) +); + +DEFINE_EVENT(spf, spf_vma_access, + + TP_PROTO(unsigned long caller, + struct vm_area_struct *vma, unsigned long address), + + TP_ARGS(caller, vma, address) +); + +#endif /* _TRACE_PAGEFAULT_H */ + +/* This part must be outside protection */ +#include diff --git a/mm/memory.c b/mm/memory.c index d1278fc15a91..24008a86ef12 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -81,6 +81,9 @@ #include "internal.h" +#define CREATE_TRACE_POINTS +#include + #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid. #endif @@ -2472,15 +2475,20 @@ static bool pte_spinlock(struct vm_fault *vmf) } local_irq_disable(); - if (vma_has_changed(vmf)) + if (vma_has_changed(vmf)) { + trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address); goto out; + } vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd); - if (unlikely(!spin_trylock(vmf->ptl))) + if (unlikely(!spin_trylock(vmf->ptl))) { + trace_spf_pte_lock(_RET_IP_, vmf->vma, vmf->address); goto out; + } if (vma_has_changed(vmf)) { spin_unlock(vmf->ptl); + trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address); goto out; } @@ -2519,8 +2527,10 @@ static bool pte_map_lock(struct vm_fault *vmf) * block on the PTL and thus we're safe. 
*/ local_irq_disable(); - if (vma_has_changed(vmf)) + if (vma_has_changed(vmf)) { + trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address); goto out; + } /* * Same as pte_offset_map_lock() except that we call @@ -2533,11 +2543,13 @@ static bool pte_map_lock(struct vm_fault *vmf) pte = pte_offset_map(vmf->pmd, vmf->address); if (unlikely(!spin_trylock(ptl))) { pte_unmap(pte); + trace_spf_pte_lock(_RET_IP_, vmf->vma, vmf->address); goto out; } if (vma_has_changed(vmf)) { pte_unmap_unlock(pte, ptl); + trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address); goto out; } @@ -4255,32 +4267,45 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address, * Validate the VMA found by the lockless lookup. */ dead = RB_EMPTY_NODE(&vma->vm_rb); + if (dead) { + trace_spf_vma_dead(_RET_IP_, vma, address); + goto unlock; + } + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */ - if ((seq & 1) || dead) + if (seq & 1) { + trace_spf_vma_changed(_RET_IP_, vma, address); goto unlock; + } /* * Can't call vm_ops service has we don't know what they would do * with the VMA. * This include huge page from hugetlbfs. */ - if (vma->vm_ops) + if (vma->vm_ops) { + trace_spf_vma_notsup(_RET_IP_, vma, address); goto unlock; + } /* * __anon_vma_prepare() requires the mmap_sem to be held * because vm_next and vm_prev must be safe. This can't be guaranteed * in the speculative path. */ - if (unlikely(!vma->anon_vma)) + if (unlikely(!vma->anon_vma)) { + trace_spf_vma_notsup(_RET_IP_, vma, address); goto unlock; + } vmf.vma_flags = READ_ONCE(vma->vm_flags); vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot); /* Can't call userland page fault handler in the speculative path */ - if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) { + trace_spf_vma_notsup(_RET_IP_, vma, address); goto unlock; + } #ifdef CONFIG_NUMA /* @@ -4290,25 +4315,32 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address, pol = __get_vma_policy(vma, address); if (!pol) pol = get_task_policy(current); - if (pol && pol->mode == MPOL_INTERLEAVE) + if (pol && pol->mode == MPOL_INTERLEAVE) { + trace_spf_vma_notsup(_RET_IP_, vma, address); goto unlock; + } #endif - if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) { /* * This could be detected by the check address against VMA's * boundaries but we want to trace it as not supported instead * of changed. 
*/ + trace_spf_vma_notsup(_RET_IP_, vma, address); goto unlock; + } if (address < READ_ONCE(vma->vm_start) - || READ_ONCE(vma->vm_end) <= address) + || READ_ONCE(vma->vm_end) <= address) { + trace_spf_vma_changed(_RET_IP_, vma, address); goto unlock; + } if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE, flags & FAULT_FLAG_INSTRUCTION, flags & FAULT_FLAG_REMOTE)) { + trace_spf_vma_access(_RET_IP_, vma, address); ret = VM_FAULT_SIGSEGV; goto unlock; } @@ -4316,10 +4348,12 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address, /* This is one is required to check that the VMA has write access set */ if (flags & FAULT_FLAG_WRITE) { if (unlikely(!(vmf.vma_flags & VM_WRITE))) { + trace_spf_vma_access(_RET_IP_, vma, address); ret = VM_FAULT_SIGSEGV; goto unlock; } } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) { + trace_spf_vma_access(_RET_IP_, vma, address); ret = VM_FAULT_SIGSEGV; goto unlock; } @@ -4377,8 +4411,10 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address, * We need to re-validate the VMA after checking the bounds, otherwise * we might have a false positive on the bounds. */ - if (read_seqcount_retry(&vma->vm_sequence, seq)) + if (read_seqcount_retry(&vma->vm_sequence, seq)) { + trace_spf_vma_changed(_RET_IP_, vma, address); goto unlock; + } mem_cgroup_oom_enable(); ret = handle_pte_fault(&vmf); @@ -4401,6 +4437,7 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address, return ret; out_walk: + trace_spf_vma_notsup(_RET_IP_, vma, address); local_irq_enable(); unlock: srcu_read_unlock(&vma_srcu, idx); From patchwork Wed Oct 11 13:52:43 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Dufour X-Patchwork-Id: 824457 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68]) (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3yBx7V1VjYz9s7M for ; Thu, 12 Oct 2017 01:24:22 +1100 (AEDT) Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 3yBx7V0JfHzDsVm for ; Thu, 12 Oct 2017 01:24:22 +1100 (AEDT) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com (client-ip=148.163.158.5; helo=mx0a-001b2d01.pphosted.com; envelope-from=ldufour@linux.vnet.ibm.com; receiver=) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 3yBwSN1BC9zDr9F for ; Thu, 12 Oct 2017 00:53:55 +1100 (AEDT) Received: from pps.filterd (m0098419.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id v9BDnMcV121127 for ; Wed, 11 Oct 2017 09:53:54 -0400 Received: from e06smtp14.uk.ibm.com (e06smtp14.uk.ibm.com [195.75.94.110]) by mx0b-001b2d01.pphosted.com with ESMTP id 2dhhbm52ch-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Wed, 11 Oct 2017 09:53:53 -0400 Received: from localhost by e06smtp14.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Wed, 11 Oct 2017 14:53:51 +0100 Received: from b06cxnps3074.portsmouth.uk.ibm.com (9.149.109.194) by e06smtp14.uk.ibm.com (192.168.101.144) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted; Wed, 11 Oct 2017 14:53:44 +0100 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps3074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v9BDrhMb22675488; Wed, 11 Oct 2017 13:53:44 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 821344C050; Wed, 11 Oct 2017 14:49:40 +0100 (BST) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id C37B14C044; Wed, 11 Oct 2017 14:49:38 +0100 (BST) Received: from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.30.240]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTP; Wed, 11 Oct 2017 14:49:38 +0100 (BST) From: Laurent Dufour To: paulmck@linux.vnet.ibm.com, peterz@infradead.org, akpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz, Matthew Wilcox , benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org, Thomas Gleixner , Ingo Molnar , hpa@zytor.com, Will Deacon , Sergey Senozhatsky , Andrea Arcangeli , Alexei Starovoitov Subject: [PATCH v5 19/22] perf: Add a speculative page fault sw event Date: Wed, 11 Oct 2017 15:52:43 +0200 X-Mailer: git-send-email 2.7.4 In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> References: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17101113-0016-0000-0000-000004F4C7F0 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17101113-0017-0000-0000-0000282FD636 Message-Id: <1507729966-10660-20-git-send-email-ldufour@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-10-11_05:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000 definitions=main-1710110191 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org, Tim Chen , haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Add a new software event to count succeeded speculative page faults. 
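[A hedged usage sketch: counting successful speculative page faults for the current process through perf_event_open(). It assumes a kernel built with this series; on an unpatched kernel the numeric config value is out of range and the open simply fails.]

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
	struct perf_event_attr attr;
	long long count = 0;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = 11;	/* PERF_COUNT_SW_SPF added by this patch */
	attr.disabled = 1;

	fd = syscall(__NR_perf_event_open, &attr, 0 /* self */, -1 /* any cpu */,
		     -1 /* no group */, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	/* ... workload that faults in anonymous memory goes here ... */

	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	if (read(fd, &count, sizeof(count)) != sizeof(count))
		count = -1;
	printf("speculative page faults: %lld\n", count);
	close(fd);
	return 0;
}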
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/uapi/linux/perf_event.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 140ae638cfd6..101e509ee39b 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -111,6 +111,7 @@ enum perf_sw_ids {
 	PERF_COUNT_SW_EMULATION_FAULTS		= 8,
 	PERF_COUNT_SW_DUMMY			= 9,
 	PERF_COUNT_SW_BPF_OUTPUT		= 10,
+	PERF_COUNT_SW_SPF			= 11,
 
 	PERF_COUNT_SW_MAX,			/* non-ABI */
 };
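Not part of the series: a minimal, hypothetical user-space sketch of how the new counter could be read directly through perf_event_open(2). It assumes a kernel and uapi headers built with these patches applied; on unpatched headers the raw config value 11 from the hunk above would have to be used instead of PERF_COUNT_SW_SPF.

  /* spf_count.c - hypothetical example, not from the patch series */
  #include <linux/perf_event.h>
  #include <sys/syscall.h>
  #include <string.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
  	struct perf_event_attr attr;
  	long long count;
  	int fd;

  	memset(&attr, 0, sizeof(attr));
  	attr.size = sizeof(attr);
  	attr.type = PERF_TYPE_SOFTWARE;
  	attr.config = PERF_COUNT_SW_SPF;	/* value 11 added above */

  	/* count for the calling process, on any CPU */
  	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
  	if (fd < 0) {
  		perror("perf_event_open");
  		return 1;
  	}

  	/* ... run a page-fault heavy workload here ... */

  	if (read(fd, &count, sizeof(count)) == sizeof(count))
  		printf("speculative page faults: %lld\n", count);
  	close(fd);
  	return 0;
  }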
From patchwork Wed Oct 11 13:52:44 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Subject: [PATCH v5 20/22] perf tools: Add support for the SPF perf event
Date: Wed, 11 Oct 2017 15:52:44 +0200
Message-Id: <1507729966-10660-21-git-send-email-ldufour@linux.vnet.ibm.com>
In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com>

Add support for the new speculative-faults software event.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 tools/include/uapi/linux/perf_event.h | 1 +
 tools/perf/util/evsel.c               | 1 +
 tools/perf/util/parse-events.c        | 4 ++++
 tools/perf/util/parse-events.l        | 1 +
 tools/perf/util/python.c              | 1 +
 5 files changed, 8 insertions(+)

diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
index 140ae638cfd6..101e509ee39b 100644
--- a/tools/include/uapi/linux/perf_event.h
+++ b/tools/include/uapi/linux/perf_event.h
@@ -111,6 +111,7 @@ enum perf_sw_ids {
 	PERF_COUNT_SW_EMULATION_FAULTS		= 8,
 	PERF_COUNT_SW_DUMMY			= 9,
 	PERF_COUNT_SW_BPF_OUTPUT		= 10,
+	PERF_COUNT_SW_SPF			= 11,
 
 	PERF_COUNT_SW_MAX,			/* non-ABI */
 };
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index f894893c203d..5cff5936a9ac 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -437,6 +437,7 @@ const char *perf_evsel__sw_names[PERF_COUNT_SW_MAX] = {
 	"alignment-faults",
 	"emulation-faults",
 	"dummy",
+	"speculative-faults",
 };
 
 static const char *__perf_evsel__sw_name(u64 config)
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 9183913a6174..0914d3b94ae8 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -136,6 +136,10 @@ struct event_symbol event_symbols_sw[PERF_COUNT_SW_MAX] = {
 		.symbol = "bpf-output",
 		.alias  = "",
 	},
+	[PERF_COUNT_SW_SPF] = {
+		.symbol = "speculative-faults",
+		.alias  = "spf",
+	},
 };
 
 #define __PERF_EVENT_FIELD(config, name) \
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index ea2426daf7e8..67d2d9c79c28 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -290,6 +290,7 @@ emulation-faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_EM
 dummy				{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); }
 duration_time			{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); }
 bpf-output			{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_BPF_OUTPUT); }
+speculative-faults|spf		{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_SPF); }
 
 /*
  * We have to handle the kernel PMU event cycles-ct/cycles-t/mem-loads/mem-stores separately.
diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
index c129e99114ae..12209adb7cb5 100644
--- a/tools/perf/util/python.c
+++ b/tools/perf/util/python.c
@@ -1141,6 +1141,7 @@ static struct {
 	PERF_CONST(COUNT_SW_ALIGNMENT_FAULTS),
 	PERF_CONST(COUNT_SW_EMULATION_FAULTS),
 	PERF_CONST(COUNT_SW_DUMMY),
+	PERF_CONST(COUNT_SW_SPF),
 
 	PERF_CONST(SAMPLE_IP),
 	PERF_CONST(SAMPLE_TID),
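Usage sketch, not part of the patch: with perf rebuilt from these sources, the event should be selectable by either name registered above, e.g. "perf stat -e spf -a -- sleep 10" or "perf stat -e speculative-faults <cmd>"; the exact output format of course depends on the perf build and kernel in use.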
From patchwork Wed Oct 11 13:52:45 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Subject: [PATCH v5 21/22] x86/mm: Add speculative pagefault handling
Date: Wed, 11 Oct 2017 15:52:45 +0200
Message-Id: <1507729966-10660-22-git-send-email-ldufour@linux.vnet.ibm.com>
In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com>

From: Peter Zijlstra <peterz@infradead.org>

Try a speculative fault before acquiring mmap_sem; if it returns with
VM_FAULT_RETRY, continue with the mmap_sem acquisition and do the
traditional fault.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
 handle_speculative_fault()]
[Retry with the usual fault path in the case VM_FAULT_ERROR is returned by
 handle_speculative_fault(). This allows the signal to be delivered]
[Don't build SPF call if !CONFIG_SPF]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/x86/mm/fault.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e2baeaa053a5..583e19d10b21 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1364,6 +1364,24 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	if (error_code & PF_INSTR)
 		flags |= FAULT_FLAG_INSTRUCTION;
 
+#ifdef CONFIG_SPF
+	if (error_code & PF_USER) {
+		fault = handle_speculative_fault(mm, address, flags);
+
+		/*
+		 * We also check against VM_FAULT_ERROR because we have to
+		 * raise a signal by calling later mm_fault_error() which
+		 * requires the vma pointer to be set. So in that case,
+		 * we fall through the normal path.
+		 */
+		if (!(fault & VM_FAULT_RETRY || fault & VM_FAULT_ERROR)) {
+			perf_sw_event(PERF_COUNT_SW_SPF, 1,
+				      regs, address);
+			goto done;
+		}
+	}
+#endif /* CONFIG_SPF */
+
 	/*
 	 * When running in the kernel we expect faults to occur only to
 	 * addresses in user space. All other faults represent errors in
@@ -1474,6 +1492,9 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 		return;
 	}
 
+#ifdef CONFIG_SPF
+done:
+#endif
 	/*
 	 * Major/minor page fault accounting. If any of the events
 	 * returned VM_FAULT_MAJOR, we account it as a major fault.
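In short, the integration pattern used here (and repeated for powerpc in the next patch) is: handle_speculative_fault() returns VM_FAULT_RETRY whenever the speculative path cannot complete safely (the unlock/out_walk cases traced earlier in the series), in which case __do_page_fault() falls back to the classic mmap_sem-protected path; VM_FAULT_ERROR also falls through so that mm_fault_error() can raise the signal with a valid vma pointer, and only a fully successful speculative fault bumps PERF_COUNT_SW_SPF.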
From patchwork Wed Oct 11 13:52:46 2017
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Subject: [PATCH v5 22/22] powerpc/mm: Add speculative page fault
Date: Wed, 11 Oct 2017 15:52:46 +0200
Message-Id: <1507729966-10660-23-git-send-email-ldufour@linux.vnet.ibm.com>
In-Reply-To: <1507729966-10660-1-git-send-email-ldufour@linux.vnet.ibm.com>

This patch enables the speculative page fault handler on the PowerPC
architecture. A speculative page fault is first attempted without holding
the mmap_sem; if it returns VM_FAULT_RETRY, the mmap_sem is acquired and
the traditional page fault processing is done.

Built only if CONFIG_SPF is defined (currently for BOOK3S_64 && SMP).

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/powerpc/mm/fault.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 4797d08581ce..c018c2554cc8 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -442,6 +442,20 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (is_exec)
 		flags |= FAULT_FLAG_INSTRUCTION;
 
+#ifdef CONFIG_SPF
+	if (is_user) {
+		/* let's try a speculative page fault without grabbing the
+		 * mmap_sem.
+		 */
+		fault = handle_speculative_fault(mm, address, flags);
+		if (!(fault & VM_FAULT_RETRY)) {
+			perf_sw_event(PERF_COUNT_SW_SPF, 1,
+				      regs, address);
+			goto done;
+		}
+	}
+#endif /* CONFIG_SPF */
+
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space. All other faults represent errors in the
 	 * kernel and should generate an OOPS. Unfortunately, in the case of an
@@ -526,6 +540,9 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	up_read(&current->mm->mmap_sem);
 
+#ifdef CONFIG_SPF
+done:
+#endif
 	if (unlikely(fault & VM_FAULT_ERROR))
 		return mm_fault_error(regs, address, fault);