{"id":811799,"url":"http://patchwork.ozlabs.org/api/1.2/patches/811799/?format=json","web_url":"http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1504894024-2750-14-git-send-email-ldufour@linux.vnet.ibm.com/","project":{"id":2,"url":"http://patchwork.ozlabs.org/api/1.2/projects/2/?format=json","name":"Linux PPC development","link_name":"linuxppc-dev","list_id":"linuxppc-dev.lists.ozlabs.org","list_email":"linuxppc-dev@lists.ozlabs.org","web_url":"https://github.com/linuxppc/wiki/wiki","scm_url":"https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git","webscm_url":"https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/","list_archive_url":"https://lore.kernel.org/linuxppc-dev/","list_archive_url_format":"https://lore.kernel.org/linuxppc-dev/{}/","commit_url_format":"https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?id={}"},"msgid":"<1504894024-2750-14-git-send-email-ldufour@linux.vnet.ibm.com>","list_archive_url":"https://lore.kernel.org/linuxppc-dev/1504894024-2750-14-git-send-email-ldufour@linux.vnet.ibm.com/","date":"2017-09-08T18:06:57","name":"[v3,13/20] mm: Introduce __page_add_new_anon_rmap()","commit_ref":null,"pull_url":null,"state":"not-applicable","archived":false,"hash":"54d3585ae0b588c2b72e6b317306c75a2b40fe35","submitter":{"id":40248,"url":"http://patchwork.ozlabs.org/api/1.2/people/40248/?format=json","name":"Laurent Dufour","email":"ldufour@linux.vnet.ibm.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1504894024-2750-14-git-send-email-ldufour@linux.vnet.ibm.com/mbox/","series":[{"id":2269,"url":"http://patchwork.ozlabs.org/api/1.2/series/2269/?format=json","web_url":"http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=2269","date":"2017-09-08T18:06:44","name":"Speculative page 
faults","version":3,"mbox":"http://patchwork.ozlabs.org/series/2269/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/811799/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/811799/checks/","tags":{},"related":[],"headers":{"Return-Path":"<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>","X-Original-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\t(using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3xpmXG4xSFz9sBd\n\tfor <patchwork-incoming@ozlabs.org>;\n\tSat,  9 Sep 2017 04:47:26 +1000 (AEST)","from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 3xpmXG3vkZzDrXS\n\tfor <patchwork-incoming@ozlabs.org>;\n\tSat,  9 Sep 2017 04:47:26 +1000 (AEST)","from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com\n\t[148.163.156.1])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 3xplfn2ZHWzDrYm\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tSat,  9 Sep 2017 04:08:01 +1000 (AEST)","from pps.filterd (m0098393.ppops.net [127.0.0.1])\n\tby mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id\n\tv88I5Jvp105168\n\tfor <linuxppc-dev@lists.ozlabs.org>; Fri, 8 Sep 2017 14:07:59 -0400","from e06smtp14.uk.ibm.com (e06smtp14.uk.ibm.com [195.75.94.110])\n\tby mx0a-001b2d01.pphosted.com with ESMTP id 2cuvgwkkwf-1\n\t(version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT)\n\tfor <linuxppc-dev@lists.ozlabs.org>; Fri, 08 Sep 2017 14:07:59 -0400","from localhost\n\tby e06smtp14.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use\n\tOnly! 
Violators will be prosecuted\n\tfor <linuxppc-dev@lists.ozlabs.org> from <ldufour@linux.vnet.ibm.com>;\n\tFri, 8 Sep 2017 19:07:56 +0100","from b06cxnps3075.portsmouth.uk.ibm.com (9.149.109.195)\n\tby e06smtp14.uk.ibm.com (192.168.101.144) with IBM ESMTP SMTP\n\tGateway: Authorized Use Only! Violators will be prosecuted; \n\tFri, 8 Sep 2017 19:07:50 +0100","from d06av24.portsmouth.uk.ibm.com (d06av24.portsmouth.uk.ibm.com\n\t[9.149.105.60])\n\tby b06cxnps3075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with\n\tESMTP id v88I7nGP31260914; Fri, 8 Sep 2017 18:07:49 GMT","from d06av24.portsmouth.uk.ibm.com (unknown [127.0.0.1])\n\tby IMSVA (Postfix) with ESMTP id B592B42042;\n\tFri,  8 Sep 2017 19:04:16 +0100 (BST)","from d06av24.portsmouth.uk.ibm.com (unknown [127.0.0.1])\n\tby IMSVA (Postfix) with ESMTP id 9A0244203F;\n\tFri,  8 Sep 2017 19:04:13 +0100 (BST)","from nimbus.lab.toulouse-stg.fr.ibm.com (unknown [9.145.31.125])\n\tby d06av24.portsmouth.uk.ibm.com (Postfix) with ESMTP;\n\tFri,  8 Sep 2017 19:04:13 +0100 (BST)"],"Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com\n\t(client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com;\n\tenvelope-from=ldufour@linux.vnet.ibm.com; receiver=<UNKNOWN>)","From":"Laurent Dufour <ldufour@linux.vnet.ibm.com>","To":"paulmck@linux.vnet.ibm.com, peterz@infradead.org,\n\takpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, \n\tmhocko@kernel.org, dave@stgolabs.net, jack@suse.cz,\n\tMatthew Wilcox <willy@infradead.org>, benh@kernel.crashing.org,\n\tmpe@ellerman.id.au, paulus@samba.org,\n\tThomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, \n\thpa@zytor.com, Will Deacon <will.deacon@arm.com>,\n\tSergey Senozhatsky <sergey.senozhatsky@gmail.com>","Subject":"[PATCH v3 13/20] mm: Introduce __page_add_new_anon_rmap()","Date":"Fri,  8 Sep 2017 20:06:57 +0200","X-Mailer":"git-send-email 
2.7.4","In-Reply-To":"<1504894024-2750-1-git-send-email-ldufour@linux.vnet.ibm.com>","References":"<1504894024-2750-1-git-send-email-ldufour@linux.vnet.ibm.com>","X-TM-AS-GCONF":"00","x-cbid":"17090818-0016-0000-0000-000004EB9F57","X-IBM-AV-DETECTION":"SAVI=unused REMOTE=unused XFE=unused","x-cbparentid":"17090818-0017-0000-0000-00002825A70C","Message-Id":"<1504894024-2750-14-git-send-email-ldufour@linux.vnet.ibm.com>","X-Proofpoint-Virus-Version":"vendor=fsecure engine=2.50.10432:, ,\n\tdefinitions=2017-09-08_12:, , signatures=0","X-Proofpoint-Spam-Details":"rule=outbound_notspam policy=outbound score=0\n\tspamscore=0 suspectscore=0\n\tmalwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam\n\tadjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000\n\tdefinitions=main-1709080270","X-BeenThere":"linuxppc-dev@lists.ozlabs.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"Linux on PowerPC Developers Mail List\n\t<linuxppc-dev.lists.ozlabs.org>","List-Unsubscribe":"<https://lists.ozlabs.org/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=unsubscribe>","List-Archive":"<http://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=help>","List-Subscribe":"<https://lists.ozlabs.org/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=subscribe>","Cc":"linuxppc-dev@lists.ozlabs.org, x86@kernel.org,\n\tlinux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org,\n\tTim Chen <tim.c.chen@linux.intel.com>, \n\tharen@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com","Errors-To":"linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org","Sender":"\"Linuxppc-dev\"\n\t<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>"},"content":"When dealing with speculative page fault handler, we may race with VMA\nbeing split or merged. 
In this case the vma->vm_start and vm->vm_end\nfields may not match the address at which the page fault occurred.\n\nThis can only happen when the VMA is split, but in that case the\nanon_vma pointer of the new VMA will be the same as the original one,\nbecause in __split_vma the new->anon_vma is set to src->anon_vma when\n*new = *vma.\n\nSo even if the VMA boundaries are not correct, the anon_vma pointer is\nstill valid.\n\nIf the VMA has been merged, then the VMA it has been merged into must\nhave the same anon_vma pointer, otherwise the merge could not have\nbeen done.\n\nSo in all cases we know that the anon_vma is valid: we have checked,\nbefore starting the speculative page fault, that the anon_vma pointer\nof this VMA is valid. Since there is an anon_vma, a page has been\nbacked at some point, and before the VMA is cleaned up the page table\nlock would have to be grabbed to clear the PTE; the anon_vma field is\nchecked again once the PTE is locked.\n\nThis patch introduces a new __page_add_new_anon_rmap() service which\ndoesn't check the VMA boundaries, and creates a new inline one which\ndoes the check.\n\nWhen called from a page fault handler which is not speculative, there\nis a guarantee that vm_start and vm_end match the faulting address, so\nthis check is useless. 
In the context of the speculative page fault\nhandler, this check may be wrong but anon_vma is still valid as explained\nabove.\n\nSigned-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>\n---\n include/linux/rmap.h | 12 ++++++++++--\n mm/memory.c          |  8 ++++----\n mm/rmap.c            |  5 ++---\n 3 files changed, 16 insertions(+), 9 deletions(-)","diff":"diff --git a/include/linux/rmap.h b/include/linux/rmap.h\nindex 733d3d8181e2..d91be69c1c60 100644\n--- a/include/linux/rmap.h\n+++ b/include/linux/rmap.h\n@@ -173,8 +173,16 @@ void page_add_anon_rmap(struct page *, struct vm_area_struct *,\n \t\tunsigned long, bool);\n void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,\n \t\t\t   unsigned long, int);\n-void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,\n-\t\tunsigned long, bool);\n+void __page_add_new_anon_rmap(struct page *, struct vm_area_struct *,\n+\t\t\t      unsigned long, bool);\n+static inline void page_add_new_anon_rmap(struct page *page,\n+\t\t\t\t\t  struct vm_area_struct *vma,\n+\t\t\t\t\t  unsigned long address, bool compound)\n+{\n+\tVM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);\n+\t__page_add_new_anon_rmap(page, vma, address, compound);\n+}\n+\n void page_add_file_rmap(struct page *, bool);\n void page_remove_rmap(struct page *, bool);\n \ndiff --git a/mm/memory.c b/mm/memory.c\nindex a5b5fe833ed3..479b47a8ed7c 100644\n--- a/mm/memory.c\n+++ b/mm/memory.c\n@@ -2508,7 +2508,7 @@ static int wp_page_copy(struct vm_fault *vmf)\n \t\t * thread doing COW.\n \t\t */\n \t\tptep_clear_flush_notify(vma, vmf->address, vmf->pte);\n-\t\tpage_add_new_anon_rmap(new_page, vma, vmf->address, false);\n+\t\t__page_add_new_anon_rmap(new_page, vma, vmf->address, false);\n \t\tmem_cgroup_commit_charge(new_page, memcg, false, false);\n \t\t__lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags);\n \t\t/*\n@@ -2998,7 +2998,7 @@ int do_swap_page(struct vm_fault *vmf)\n 
\t\tmem_cgroup_commit_charge(page, memcg, true, false);\n \t\tactivate_page(page);\n \t} else { /* ksm created a completely new copy */\n-\t\tpage_add_new_anon_rmap(page, vma, vmf->address, false);\n+\t\t__page_add_new_anon_rmap(page, vma, vmf->address, false);\n \t\tmem_cgroup_commit_charge(page, memcg, false, false);\n \t\t__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);\n \t}\n@@ -3144,7 +3144,7 @@ static int do_anonymous_page(struct vm_fault *vmf)\n \t}\n \n \tinc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);\n-\tpage_add_new_anon_rmap(page, vma, vmf->address, false);\n+\t__page_add_new_anon_rmap(page, vma, vmf->address, false);\n \tmem_cgroup_commit_charge(page, memcg, false, false);\n \t__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);\n setpte:\n@@ -3396,7 +3396,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,\n \t/* copy-on-write page */\n \tif (write && !(vmf->vma_flags & VM_SHARED)) {\n \t\tinc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);\n-\t\tpage_add_new_anon_rmap(page, vma, vmf->address, false);\n+\t\t__page_add_new_anon_rmap(page, vma, vmf->address, false);\n \t\tmem_cgroup_commit_charge(page, memcg, false, false);\n \t\t__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);\n \t} else {\ndiff --git a/mm/rmap.c b/mm/rmap.c\nindex b874c4761e84..5d657329191e 100644\n--- a/mm/rmap.c\n+++ b/mm/rmap.c\n@@ -1133,7 +1133,7 @@ void do_page_add_anon_rmap(struct page *page,\n }\n \n /**\n- * page_add_new_anon_rmap - add pte mapping to a new anonymous page\n+ * __page_add_new_anon_rmap - add pte mapping to a new anonymous page\n  * @page:\tthe page to add the mapping to\n  * @vma:\tthe vm area in which the mapping is added\n  * @address:\tthe user virtual address mapped\n@@ -1143,12 +1143,11 @@ void do_page_add_anon_rmap(struct page *page,\n  * This means the inc-and-test can be bypassed.\n  * Page does not have to be locked.\n  */\n-void page_add_new_anon_rmap(struct page *page,\n+void 
__page_add_new_anon_rmap(struct page *page,\n \tstruct vm_area_struct *vma, unsigned long address, bool compound)\n {\n \tint nr = compound ? hpage_nr_pages(page) : 1;\n \n-\tVM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);\n \t__SetPageSwapBacked(page);\n \tif (compound) {\n \t\tVM_BUG_ON_PAGE(!PageTransHuge(page), page);\n","prefixes":["v3","13/20"]}