{"id":2197311,"url":"http://patchwork.ozlabs.org/api/1.0/patches/2197311/?format=json","project":{"id":2,"url":"http://patchwork.ozlabs.org/api/1.0/projects/2/?format=json","name":"Linux PPC development","link_name":"linuxppc-dev","list_id":"linuxppc-dev.lists.ozlabs.org","list_email":"linuxppc-dev@lists.ozlabs.org","web_url":"https://github.com/linuxppc/wiki/wiki","scm_url":"https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git","webscm_url":"https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/"},"msgid":"<20260217163250.2326001-3-surenb@google.com>","date":"2026-02-17T16:32:49","name":"[v2,2/3] mm: replace vma_start_write() with vma_start_write_killable()","commit_ref":null,"pull_url":null,"state":"handled-elsewhere","archived":false,"hash":"a9e0e6877379dd1e91e6d5c8c6053b52087fe2af","submitter":{"id":74729,"url":"http://patchwork.ozlabs.org/api/1.0/people/74729/?format=json","name":"Suren Baghdasaryan","email":"surenb@google.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20260217163250.2326001-3-surenb@google.com/mbox/","series":[{"id":492454,"url":"http://patchwork.ozlabs.org/api/1.0/series/492454/?format=json","date":"2026-02-17T16:32:47","name":"Use killable vma write locking in most places","version":2,"mbox":"http://patchwork.ozlabs.org/series/492454/mbox/"}],"check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2197311/checks/","tags":{},"headers":{"Return-Path":"\n <linuxppc-dev+bounces-16908-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256\n header.s=20230601 header.b=v2E2zhTu;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n 
(client-ip=112.213.38.117; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-16908-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)","lists.ozlabs.org;\n arc=none smtp.remote-ip=\"2607:f8b0:4864:20::1249\"","lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=google.com","lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256\n header.s=20230601 header.b=v2E2zhTu;\n\tdkim-atps=neutral","lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=flex--surenb.bounces.google.com\n (client-ip=2607:f8b0:4864:20::1249; helo=mail-dl1-x1249.google.com;\n envelope-from=3oziuaqykdba8a7u3rw44w1u.s421y3ad55s-tub1y898.4f1qr8.47w@flex--surenb.bounces.google.com;\n receiver=lists.ozlabs.org)"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [112.213.38.117])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fFlZ36XNpz1xpY\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 18 Feb 2026 03:33:15 +1100 (AEDT)","from boromir.ozlabs.org (localhost [127.0.0.1])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 4fFlYp0kckz3bp0;\n\tWed, 18 Feb 2026 03:33:02 +1100 (AEDT)","from mail-dl1-x1249.google.com (mail-dl1-x1249.google.com\n [IPv6:2607:f8b0:4864:20::1249])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 4fFlYm3ztJz2xjP\n\tfor <linuxppc-dev@lists.ozlabs.org>; Wed, 18 Feb 2026 03:33:00 +1100 (AEDT)","by mail-dl1-x1249.google.com with SMTP id\n a92af1059eb24-12711ec96fbso23532776c88.0\n        for <linuxppc-dev@lists.ozlabs.org>;\n Tue, 17 Feb 2026 08:33:00 -0800 (PST)"],"ARC-Seal":"i=1; a=rsa-sha256; d=lists.ozlabs.org; s=201707; 
t=1771345981;\n\tcv=none;\n b=bNqtwcCwYgWkL38y6I2Y563eeQMJmb6NJVVujYVZ2KLutuGMskpYNJX3RQ1eEkMaFBRWBllNAFAWsvXa6CSONh22I2Y/LBbORjVz+1bd+hESrDpJQVKA3FVovduzKzs3qXYb+WaPCwGWuFvuxveld/q5fZu4o2SB6dL6/Cb6UiqLNGQgKxNC9VWN9gOu1lKqe0eykAQLO2YUA9k8R3DLzkfDxly6fTWuLVbtEunjC528y1eCetf6OI4eKhFKvU6WnBQuR5XxsiMkKp1DTjXlZ0vM9YPce0pqbD2ZSlLy8/3BIaWjxJz6P0bhBp6/ay6padLF9hkoVYlgCsRRILF7+w==","ARC-Message-Signature":"i=1; a=rsa-sha256; d=lists.ozlabs.org; s=201707;\n\tt=1771345981; c=relaxed/relaxed;\n\tbh=iaZ0FDTJwGlgc+6zx0Iu+po26eUYY+Uw/UyBjgMao2k=;\n\th=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From:\n\t To:Cc:Content-Type;\n b=S0kWQ2NxIPKLGAW2gTAQb63wSgwTXPyqpqkYRsTx9bp1tjuR1oY+53z/w6ep1ycoi7Q0ys3YGwFN7cc4v8iDktkfPQbCihUp6iA0rxeQYfXK1O12ymWdrFteyXR1qKHh6vQCE/6LX5R5TFloT0AQyH+Qsez3zgk2kUvKlGPNwtyYhGNO4qgq+QnIWLKHa1/f6iK55U1OyQE4n2X8mhjc3X2vU01+lJ1FPxNMMlCna5zfyp3m8lygG63nT9EwVWXQ5xivNX+9gxs2cZ6Oz3KApcEJCpP2RwfiQhS8KNiFCQ7JhFgpOpDg3mVWzCgHR7q/v4OKd7dl/tA0iMWbYpTYRw==","ARC-Authentication-Results":"i=1; lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=google.com;\n dkim=pass (2048-bit key;\n unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256\n header.s=20230601 header.b=v2E2zhTu; dkim-atps=neutral;\n spf=pass (client-ip=2607:f8b0:4864:20::1249; helo=mail-dl1-x1249.google.com;\n envelope-from=3oziuaqykdba8a7u3rw44w1u.s421y3ad55s-tub1y898.4f1qr8.47w@flex--surenb.bounces.google.com;\n receiver=lists.ozlabs.org) smtp.mailfrom=flex--surenb.bounces.google.com","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n        d=google.com; s=20230601; t=1771345978; x=1771950778;\n darn=lists.ozlabs.org;\n        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to\n         :date:from:to:cc:subject:date:message-id:reply-to;\n        bh=iaZ0FDTJwGlgc+6zx0Iu+po26eUYY+Uw/UyBjgMao2k=;\n        b=v2E2zhTupB7sXUr5r9buy0ziW6qT32C3UmicVEKGHnpm9lv9R55hoksWNHnPrHtgcb\n         
sEr2AQEGgnq+nT9Bomokw7StieAQU2Ge7R5Qec0FMZo/TTcJO0v72qPpgYqiF3WeevxW\n         Gat9qEwtZ/mGTODWzXnlRNz/OhZcl2OW9lDXzCK/jLfI9SYvFTWpSVhiVFhUvAoIszjG\n         qRB3+r43ctw0sDHLuMkAISbCqC2IQRc8pozshZMfw61F7qE5jXV0g3U38Qt5RL4a/oi+\n         6PkPgkbjJni49gZlxAAhs3aRyKHVv3EmNni/REhNnssoHeTyNDRaJ5efSgOlgGlraMdN\n         XyrA==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n        d=1e100.net; s=20230601; t=1771345978; x=1771950778;\n        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to\n         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;\n        bh=iaZ0FDTJwGlgc+6zx0Iu+po26eUYY+Uw/UyBjgMao2k=;\n        b=rRwpCHysuVQtidWg/jtx9GBk75BqnIvgxNIxuvXsFuDLQP0+i3bcVwwRhsDmEZWMhe\n         pb3EQV7zMmS6s4CN2CPMP740JRHGR3KYBHU9Y0dKKkhywWJBVc+e8hV0DzyMsOcJJ7Ts\n         lK6UTiml3yHBP+iPFZewIQk3PD/tkh5Oghg5ciOJIu5jH0Rtgvs7yxCtfQ7FbaOCeyN7\n         au6qhf6FzVlbCZJUxfV2DxkCeSpBrXFdSJlO1CutbAzb0JJh4wpqaPitIislhxgQxvNc\n         cQiH28pb9zyIR0m9YWP7DILSY0lDzQk9PpfjhnOROslHXC+vSJAFpuQY+Vy2gGs5gZYe\n         eY0w==","X-Forwarded-Encrypted":"i=1;\n AJvYcCXZ5rB3432SmC0iqm4Co+Ulhrg6l0LUVrZImgZQkLs8AV9LMTbkpnmOiBQtUOgBeqgn+6Fqn3dzfcXCZSA=@lists.ozlabs.org","X-Gm-Message-State":"AOJu0Yx5AQ+fTLsEMd0gxeAe2/zFwWfWIyIW70z5ztrF7vdF6BJDmyFh\n\t48kxr6beSfQH+CzDouk/rDsPGI5yK9QJsudqyrscYmNv2i27WWAbxiJ0gL6+xiKdRTYzefsL4YQ\n\tlxerN4w==","X-Received":"from dlbvg11.prod.google.com\n ([2002:a05:7022:7f0b:b0:127:bd1:76f0])\n (user=surenb job=prod-delivery.src-stubby-dispatcher) by\n 2002:a05:7022:6897:b0:127:337e:3301\n with SMTP id a92af1059eb24-12741b6fd42mr4785616c88.12.1771345977912; Tue, 17\n Feb 2026 08:32:57 -0800 (PST)","Date":"Tue, 17 Feb 2026 08:32:49 
-0800","In-Reply-To":"<20260217163250.2326001-1-surenb@google.com>","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","List-Id":"<linuxppc-dev.lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev+help@lists.ozlabs.org>","List-Owner":"<mailto:linuxppc-dev+owner@lists.ozlabs.org>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Archive":"<https://lore.kernel.org/linuxppc-dev/>,\n  <https://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Subscribe":"<mailto:linuxppc-dev+subscribe@lists.ozlabs.org>,\n  <mailto:linuxppc-dev+subscribe-digest@lists.ozlabs.org>,\n  <mailto:linuxppc-dev+subscribe-nomail@lists.ozlabs.org>","List-Unsubscribe":"<mailto:linuxppc-dev+unsubscribe@lists.ozlabs.org>","Precedence":"list","Mime-Version":"1.0","References":"<20260217163250.2326001-1-surenb@google.com>","X-Mailer":"git-send-email 2.53.0.273.g2a3d683680-goog","Message-ID":"<20260217163250.2326001-3-surenb@google.com>","Subject":"[PATCH v2 2/3] mm: replace vma_start_write() with\n vma_start_write_killable()","From":"Suren Baghdasaryan <surenb@google.com>","To":"akpm@linux-foundation.org","Cc":"willy@infradead.org, david@kernel.org, ziy@nvidia.com,\n\tmatthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com,\n\tbyungchul@sk.com, gourry@gourry.net, ying.huang@linux.alibaba.com,\n\tapopple@nvidia.com, lorenzo.stoakes@oracle.com,\n baolin.wang@linux.alibaba.com,\n\tLiam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,\n\tdev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev, vbabka@suse.cz,\n\tjannh@google.com, rppt@kernel.org, mhocko@suse.com, pfalcato@suse.de,\n\tkees@kernel.org, maddy@linux.ibm.com, npiggin@gmail.com, mpe@ellerman.id.au,\n\tchleroy@kernel.org, borntraeger@linux.ibm.com, frankja@linux.ibm.com,\n\timbrenda@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com,\n\tagordeev@linux.ibm.com, svens@linux.ibm.com, gerald.schaefer@linux.ibm.com,\n\tlinux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, 
kvm@vger.kernel.org,\n\tlinux-kernel@vger.kernel.org, linux-s390@vger.kernel.org, surenb@google.com,\n\t\"Ritesh Harjani (IBM)\" <ritesh.list@gmail.com>","Content-Type":"text/plain; charset=\"UTF-8\"","X-Spam-Status":"No, score=-7.6 required=3.0 tests=DKIMWL_WL_MED,DKIM_SIGNED,\n\tDKIM_VALID,DKIM_VALID_AU,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,\n\tUSER_IN_DEF_DKIM_WL autolearn=disabled version=4.0.1 OzLabs 8","X-Spam-Checker-Version":"SpamAssassin 4.0.1 (2024-03-25) on lists.ozlabs.org"},"content":"Now that we have vma_start_write_killable() we can replace most of the\nvma_start_write() calls with it, improving reaction time to the kill\nsignal.\n\nThere are several places which are left untouched by this patch:\n\n1. free_pgtables(), because the function should free page tables even if a\nfatal signal is pending.\n\n2. process_vma_walk_lock(), which requires changes in its callers and\nwill be handled in the next patch.\n\n3. userfaultfd code, where some paths calling vma_start_write() can\nhandle EINTR and some can't without a deeper code refactoring.\n\n4. 
vm_flags_{set|mod|clear} require refactoring that involves moving\nvma_start_write() out of these functions and replacing it with\nvma_assert_write_locked(), then callers of these functions should\nlock the vma themselves using vma_start_write_killable() whenever\npossible.\n\nSuggested-by: Matthew Wilcox <willy@infradead.org>\nSigned-off-by: Suren Baghdasaryan <surenb@google.com>\nReviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> # powerpc\n---\n arch/powerpc/kvm/book3s_hv_uvmem.c |  5 +-\n include/linux/mempolicy.h          |  5 +-\n mm/khugepaged.c                    |  5 +-\n mm/madvise.c                       |  4 +-\n mm/memory.c                        |  2 +\n mm/mempolicy.c                     | 23 ++++++--\n mm/mlock.c                         | 20 +++++--\n mm/mprotect.c                      |  4 +-\n mm/mremap.c                        |  4 +-\n mm/vma.c                           | 93 +++++++++++++++++++++---------\n mm/vma_exec.c                      |  6 +-\n 11 files changed, 123 insertions(+), 48 deletions(-)","diff":"diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c\nindex 7cf9310de0ec..69750edcf8d5 100644\n--- a/arch/powerpc/kvm/book3s_hv_uvmem.c\n+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c\n@@ -410,7 +410,10 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,\n \t\t\tret = H_STATE;\n \t\t\tbreak;\n \t\t}\n-\t\tvma_start_write(vma);\n+\t\tif (vma_start_write_killable(vma)) {\n+\t\t\tret = H_STATE;\n+\t\t\tbreak;\n+\t\t}\n \t\t/* Copy vm_flags to avoid partial modifications in ksm_madvise */\n \t\tvm_flags = vma->vm_flags;\n \t\tret = ksm_madvise(vma, vma->vm_start, vma->vm_end,\ndiff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h\nindex 0fe96f3ab3ef..762930edde5a 100644\n--- a/include/linux/mempolicy.h\n+++ b/include/linux/mempolicy.h\n@@ -137,7 +137,7 @@ bool vma_policy_mof(struct vm_area_struct *vma);\n extern void numa_default_policy(void);\n extern void numa_policy_init(void);\n extern 
void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new);\n-extern void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new);\n+extern int mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new);\n \n extern int huge_node(struct vm_area_struct *vma,\n \t\t\t\tunsigned long addr, gfp_t gfp_flags,\n@@ -251,8 +251,9 @@ static inline void mpol_rebind_task(struct task_struct *tsk,\n {\n }\n \n-static inline void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)\n+static inline int mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)\n {\n+\treturn 0;\n }\n \n static inline int huge_node(struct vm_area_struct *vma,\ndiff --git a/mm/khugepaged.c b/mm/khugepaged.c\nindex fa1e57fd2c46..392dde66fa86 100644\n--- a/mm/khugepaged.c\n+++ b/mm/khugepaged.c\n@@ -1150,7 +1150,10 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a\n \tif (result != SCAN_SUCCEED)\n \t\tgoto out_up_write;\n \t/* check if the pmd is still valid */\n-\tvma_start_write(vma);\n+\tif (vma_start_write_killable(vma)) {\n+\t\tresult = SCAN_FAIL;\n+\t\tgoto out_up_write;\n+\t}\n \tresult = check_pmd_still_valid(mm, address, pmd);\n \tif (result != SCAN_SUCCEED)\n \t\tgoto out_up_write;\ndiff --git a/mm/madvise.c b/mm/madvise.c\nindex 8debb2d434aa..b41e64231c31 100644\n--- a/mm/madvise.c\n+++ b/mm/madvise.c\n@@ -173,7 +173,9 @@ static int madvise_update_vma(vm_flags_t new_flags,\n \tmadv_behavior->vma = vma;\n \n \t/* vm_flags is protected by the mmap_lock held in write mode. */\n-\tvma_start_write(vma);\n+\tif (vma_start_write_killable(vma))\n+\t\treturn -EINTR;\n+\n \tvm_flags_reset(vma, new_flags);\n \tif (set_new_anon_name)\n \t\treturn replace_anon_vma_name(vma, anon_name);\ndiff --git a/mm/memory.c b/mm/memory.c\nindex dc0e5da70cdc..29e12f063c7b 100644\n--- a/mm/memory.c\n+++ b/mm/memory.c\n@@ -379,6 +379,8 @@ void free_pgd_range(struct mmu_gather *tlb,\n  * page tables that should be removed.  
This can differ from the vma mappings on\n  * some archs that may have mappings that need to be removed outside the vmas.\n  * Note that the prev->vm_end and next->vm_start are often used.\n+ * We don't use vma_start_write_killable() because page tables should be freed\n+ * even if the task is being killed.\n  *\n  * The vma_end differs from the pg_end when a dup_mmap() failed and the tree has\n  * unrelated data to the mm_struct being torn down.\ndiff --git a/mm/mempolicy.c b/mm/mempolicy.c\nindex dbd48502ac24..5f6302d227f5 100644\n--- a/mm/mempolicy.c\n+++ b/mm/mempolicy.c\n@@ -556,17 +556,25 @@ void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new)\n  *\n  * Call holding a reference to mm.  Takes mm->mmap_lock during call.\n  */\n-void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)\n+int mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)\n {\n \tstruct vm_area_struct *vma;\n \tVMA_ITERATOR(vmi, mm, 0);\n+\tint ret = 0;\n+\n+\tif (mmap_write_lock_killable(mm))\n+\t\treturn -EINTR;\n \n-\tmmap_write_lock(mm);\n \tfor_each_vma(vmi, vma) {\n-\t\tvma_start_write(vma);\n+\t\tif (vma_start_write_killable(vma)) {\n+\t\t\tret = -EINTR;\n+\t\t\tbreak;\n+\t\t}\n \t\tmpol_rebind_policy(vma->vm_policy, new);\n \t}\n \tmmap_write_unlock(mm);\n+\n+\treturn ret;\n }\n \n static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {\n@@ -1785,9 +1793,15 @@ SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, le\n \t\treturn -EINVAL;\n \tif (end == start)\n \t\treturn 0;\n-\tmmap_write_lock(mm);\n+\tif (mmap_write_lock_killable(mm))\n+\t\treturn -EINTR;\n \tprev = vma_prev(&vmi);\n \tfor_each_vma_range(vmi, vma, end) {\n+\t\tif (vma_start_write_killable(vma)) {\n+\t\t\terr = -EINTR;\n+\t\t\tbreak;\n+\t\t}\n+\n \t\t/*\n \t\t * If any vma in the range got policy other than MPOL_BIND\n \t\t * or MPOL_PREFERRED_MANY we return error. 
We don't reset\n@@ -1808,7 +1822,6 @@ SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, le\n \t\t\tbreak;\n \t\t}\n \n-\t\tvma_start_write(vma);\n \t\tnew->home_node = home_node;\n \t\terr = mbind_range(&vmi, vma, &prev, start, end, new);\n \t\tmpol_put(new);\ndiff --git a/mm/mlock.c b/mm/mlock.c\nindex 2f699c3497a5..2885b858aa0f 100644\n--- a/mm/mlock.c\n+++ b/mm/mlock.c\n@@ -420,7 +420,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,\n  * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED;\n  * called for munlock() and munlockall(), to clear VM_LOCKED from @vma.\n  */\n-static void mlock_vma_pages_range(struct vm_area_struct *vma,\n+static int mlock_vma_pages_range(struct vm_area_struct *vma,\n \tunsigned long start, unsigned long end, vm_flags_t newflags)\n {\n \tstatic const struct mm_walk_ops mlock_walk_ops = {\n@@ -441,7 +441,9 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,\n \t */\n \tif (newflags & VM_LOCKED)\n \t\tnewflags |= VM_IO;\n-\tvma_start_write(vma);\n+\tif (vma_start_write_killable(vma))\n+\t\treturn -EINTR;\n+\n \tvm_flags_reset_once(vma, newflags);\n \n \tlru_add_drain();\n@@ -452,6 +454,7 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,\n \t\tnewflags &= ~VM_IO;\n \t\tvm_flags_reset_once(vma, newflags);\n \t}\n+\treturn 0;\n }\n \n /*\n@@ -501,10 +504,12 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,\n \t */\n \tif ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) {\n \t\t/* No work to do, and mlocking twice would be wrong */\n-\t\tvma_start_write(vma);\n+\t\tret = vma_start_write_killable(vma);\n+\t\tif (ret)\n+\t\t\tgoto out;\n \t\tvm_flags_reset(vma, newflags);\n \t} else {\n-\t\tmlock_vma_pages_range(vma, start, end, newflags);\n+\t\tret = mlock_vma_pages_range(vma, start, end, newflags);\n \t}\n out:\n \t*prev = vma;\n@@ -733,9 +738,12 @@ static int apply_mlockall_flags(int flags)\n \n \t\terror = mlock_fixup(&vmi, 
vma, &prev, vma->vm_start, vma->vm_end,\n \t\t\t\t    newflags);\n-\t\t/* Ignore errors, but prev needs fixing up. */\n-\t\tif (error)\n+\t\t/* Ignore errors except EINTR, but prev needs fixing up. */\n+\t\tif (error) {\n+\t\t\tif (error == -EINTR)\n+\t\t\t\tbreak;\n \t\t\tprev = vma;\n+\t\t}\n \t\tcond_resched();\n \t}\n out:\ndiff --git a/mm/mprotect.c b/mm/mprotect.c\nindex c0571445bef7..49dbb7156936 100644\n--- a/mm/mprotect.c\n+++ b/mm/mprotect.c\n@@ -765,7 +765,9 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,\n \t * vm_flags and vm_page_prot are protected by the mmap_lock\n \t * held in write mode.\n \t */\n-\tvma_start_write(vma);\n+\terror = vma_start_write_killable(vma);\n+\tif (error < 0)\n+\t\tgoto fail;\n \tvm_flags_reset_once(vma, newflags);\n \tif (vma_wants_manual_pte_write_upgrade(vma))\n \t\tmm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;\ndiff --git a/mm/mremap.c b/mm/mremap.c\nindex 2be876a70cc0..aef1e5f373c7 100644\n--- a/mm/mremap.c\n+++ b/mm/mremap.c\n@@ -1286,7 +1286,9 @@ static unsigned long move_vma(struct vma_remap_struct *vrm)\n \t\treturn -ENOMEM;\n \n \t/* We don't want racing faults. */\n-\tvma_start_write(vrm->vma);\n+\terr = vma_start_write_killable(vrm->vma);\n+\tif (err)\n+\t\treturn err;\n \n \t/* Perform copy step. 
*/\n \terr = copy_vma_and_data(vrm, &new_vma);\ndiff --git a/mm/vma.c b/mm/vma.c\nindex bb4d0326fecb..1d21351282cf 100644\n--- a/mm/vma.c\n+++ b/mm/vma.c\n@@ -530,6 +530,13 @@ __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,\n \tif (err)\n \t\tgoto out_free_vmi;\n \n+\terr = vma_start_write_killable(vma);\n+\tif (err)\n+\t\tgoto out_free_mpol;\n+\terr = vma_start_write_killable(new);\n+\tif (err)\n+\t\tgoto out_free_mpol;\n+\n \terr = anon_vma_clone(new, vma, VMA_OP_SPLIT);\n \tif (err)\n \t\tgoto out_free_mpol;\n@@ -540,9 +547,6 @@ __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,\n \tif (new->vm_ops && new->vm_ops->open)\n \t\tnew->vm_ops->open(new);\n \n-\tvma_start_write(vma);\n-\tvma_start_write(new);\n-\n \tinit_vma_prep(&vp, vma);\n \tvp.insert = new;\n \tvma_prepare(&vp);\n@@ -895,16 +899,22 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(\n \t}\n \n \t/* No matter what happens, we will be adjusting middle. */\n-\tvma_start_write(middle);\n+\terr = vma_start_write_killable(middle);\n+\tif (err)\n+\t\tgoto abort;\n \n \tif (merge_right) {\n-\t\tvma_start_write(next);\n+\t\terr = vma_start_write_killable(next);\n+\t\tif (err)\n+\t\t\tgoto abort;\n \t\tvmg->target = next;\n \t\tsticky_flags |= (next->vm_flags & VM_STICKY);\n \t}\n \n \tif (merge_left) {\n-\t\tvma_start_write(prev);\n+\t\terr = vma_start_write_killable(prev);\n+\t\tif (err)\n+\t\t\tgoto abort;\n \t\tvmg->target = prev;\n \t\tsticky_flags |= (prev->vm_flags & VM_STICKY);\n \t}\n@@ -1155,10 +1165,12 @@ int vma_expand(struct vma_merge_struct *vmg)\n \tstruct vm_area_struct *next = vmg->next;\n \tbool remove_next = false;\n \tvm_flags_t sticky_flags;\n-\tint ret = 0;\n+\tint ret;\n \n \tmmap_assert_write_locked(vmg->mm);\n-\tvma_start_write(target);\n+\tret = vma_start_write_killable(target);\n+\tif (ret)\n+\t\treturn ret;\n \n \tif (next && target != next && vmg->end == next->vm_end)\n \t\tremove_next = true;\n@@ -1187,6 +1199,9 @@ int 
vma_expand(struct vma_merge_struct *vmg)\n \t * we don't need to account for vmg->give_up_on_mm here.\n \t */\n \tif (remove_next) {\n+\t\tret = vma_start_write_killable(next);\n+\t\tif (ret)\n+\t\t\treturn ret;\n \t\tret = dup_anon_vma(target, next, &anon_dup);\n \t\tif (ret)\n \t\t\treturn ret;\n@@ -1197,10 +1212,8 @@ int vma_expand(struct vma_merge_struct *vmg)\n \t\t\treturn ret;\n \t}\n \n-\tif (remove_next) {\n-\t\tvma_start_write(next);\n+\tif (remove_next)\n \t\tvmg->__remove_next = true;\n-\t}\n \tif (commit_merge(vmg))\n \t\tgoto nomem;\n \n@@ -1233,6 +1246,7 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,\n \t       unsigned long start, unsigned long end, pgoff_t pgoff)\n {\n \tstruct vma_prepare vp;\n+\tint err;\n \n \tWARN_ON((vma->vm_start != start) && (vma->vm_end != end));\n \n@@ -1244,7 +1258,11 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,\n \tif (vma_iter_prealloc(vmi, NULL))\n \t\treturn -ENOMEM;\n \n-\tvma_start_write(vma);\n+\terr = vma_start_write_killable(vma);\n+\tif (err) {\n+\t\tvma_iter_free(vmi);\n+\t\treturn err;\n+\t}\n \n \tinit_vma_prep(&vp, vma);\n \tvma_prepare(&vp);\n@@ -1434,7 +1452,9 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,\n \t\t\tif (error)\n \t\t\t\tgoto end_split_failed;\n \t\t}\n-\t\tvma_start_write(next);\n+\t\terror = vma_start_write_killable(next);\n+\t\tif (error)\n+\t\t\tgoto munmap_gather_failed;\n \t\tmas_set(mas_detach, vms->vma_count++);\n \t\terror = mas_store_gfp(mas_detach, next, GFP_KERNEL);\n \t\tif (error)\n@@ -1828,12 +1848,17 @@ static void vma_link_file(struct vm_area_struct *vma, bool hold_rmap_lock)\n static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)\n {\n \tVMA_ITERATOR(vmi, mm, 0);\n+\tint err;\n \n \tvma_iter_config(&vmi, vma->vm_start, vma->vm_end);\n \tif (vma_iter_prealloc(&vmi, vma))\n \t\treturn -ENOMEM;\n \n-\tvma_start_write(vma);\n+\terr = vma_start_write_killable(vma);\n+\tif (err) 
{\n+\t\tvma_iter_free(&vmi);\n+\t\treturn err;\n+\t}\n \tvma_iter_store_new(&vmi, vma);\n \tvma_link_file(vma, /* hold_rmap_lock= */false);\n \tmm->map_count++;\n@@ -2215,9 +2240,8 @@ int mm_take_all_locks(struct mm_struct *mm)\n \t * is reached.\n \t */\n \tfor_each_vma(vmi, vma) {\n-\t\tif (signal_pending(current))\n+\t\tif (signal_pending(current) || vma_start_write_killable(vma))\n \t\t\tgoto out_unlock;\n-\t\tvma_start_write(vma);\n \t}\n \n \tvma_iter_init(&vmi, mm, 0);\n@@ -2532,6 +2556,11 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)\n \t\tgoto free_vma;\n \t}\n \n+\t/* Lock the VMA since it is modified after insertion into VMA tree */\n+\terror = vma_start_write_killable(vma);\n+\tif (error)\n+\t\tgoto free_iter_vma;\n+\n \tif (map->file)\n \t\terror = __mmap_new_file_vma(map, vma);\n \telse if (map->vm_flags & VM_SHARED)\n@@ -2552,8 +2581,6 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)\n \tWARN_ON_ONCE(!arch_validate_flags(map->vm_flags));\n #endif\n \n-\t/* Lock the VMA since it is modified after insertion into VMA tree */\n-\tvma_start_write(vma);\n \tvma_iter_store_new(vmi, vma);\n \tmap->mm->map_count++;\n \tvma_link_file(vma, map->hold_file_rmap_lock);\n@@ -2864,6 +2891,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,\n \t\t unsigned long addr, unsigned long len, vm_flags_t vm_flags)\n {\n \tstruct mm_struct *mm = current->mm;\n+\tint err = -ENOMEM;\n \n \t/*\n \t * Check against address space limits by the changed size\n@@ -2908,7 +2936,10 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,\n \tvma_set_range(vma, addr, addr + len, addr >> PAGE_SHIFT);\n \tvm_flags_init(vma, vm_flags);\n \tvma->vm_page_prot = vm_get_page_prot(vm_flags);\n-\tvma_start_write(vma);\n+\tif (vma_start_write_killable(vma)) {\n+\t\terr = -EINTR;\n+\t\tgoto mas_store_fail;\n+\t}\n \tif (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))\n \t\tgoto 
mas_store_fail;\n \n@@ -2928,7 +2959,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,\n \tvm_area_free(vma);\n unacct_fail:\n \tvm_unacct_memory(len >> PAGE_SHIFT);\n-\treturn -ENOMEM;\n+\treturn err;\n }\n \n /**\n@@ -3089,7 +3120,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)\n \tstruct mm_struct *mm = vma->vm_mm;\n \tstruct vm_area_struct *next;\n \tunsigned long gap_addr;\n-\tint error = 0;\n+\tint error;\n \tVMA_ITERATOR(vmi, mm, vma->vm_start);\n \n \tif (!(vma->vm_flags & VM_GROWSUP))\n@@ -3126,12 +3157,14 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)\n \n \t/* We must make sure the anon_vma is allocated. */\n \tif (unlikely(anon_vma_prepare(vma))) {\n-\t\tvma_iter_free(&vmi);\n-\t\treturn -ENOMEM;\n+\t\terror = -ENOMEM;\n+\t\tgoto free;\n \t}\n \n \t/* Lock the VMA before expanding to prevent concurrent page faults */\n-\tvma_start_write(vma);\n+\terror = vma_start_write_killable(vma);\n+\tif (error)\n+\t\tgoto free;\n \t/* We update the anon VMA tree. */\n \tanon_vma_lock_write(vma->anon_vma);\n \n@@ -3160,6 +3193,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)\n \t\t}\n \t}\n \tanon_vma_unlock_write(vma->anon_vma);\n+free:\n \tvma_iter_free(&vmi);\n \tvalidate_mm(mm);\n \treturn error;\n@@ -3174,7 +3208,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)\n {\n \tstruct mm_struct *mm = vma->vm_mm;\n \tstruct vm_area_struct *prev;\n-\tint error = 0;\n+\tint error;\n \tVMA_ITERATOR(vmi, mm, vma->vm_start);\n \n \tif (!(vma->vm_flags & VM_GROWSDOWN))\n@@ -3205,12 +3239,14 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)\n \n \t/* We must make sure the anon_vma is allocated. 
*/\n \tif (unlikely(anon_vma_prepare(vma))) {\n-\t\tvma_iter_free(&vmi);\n-\t\treturn -ENOMEM;\n+\t\terror = -ENOMEM;\n+\t\tgoto free;\n \t}\n \n \t/* Lock the VMA before expanding to prevent concurrent page faults */\n-\tvma_start_write(vma);\n+\terror = vma_start_write_killable(vma);\n+\tif (error)\n+\t\tgoto free;\n \t/* We update the anon VMA tree. */\n \tanon_vma_lock_write(vma->anon_vma);\n \n@@ -3240,6 +3276,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)\n \t\t}\n \t}\n \tanon_vma_unlock_write(vma->anon_vma);\n+free:\n \tvma_iter_free(&vmi);\n \tvalidate_mm(mm);\n \treturn error;\ndiff --git a/mm/vma_exec.c b/mm/vma_exec.c\nindex 8134e1afca68..a4addc2a8480 100644\n--- a/mm/vma_exec.c\n+++ b/mm/vma_exec.c\n@@ -40,6 +40,7 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)\n \tstruct vm_area_struct *next;\n \tstruct mmu_gather tlb;\n \tPAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length);\n+\tint err;\n \n \tBUG_ON(new_start > new_end);\n \n@@ -55,8 +56,9 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)\n \t * cover the whole range: [new_start, old_end)\n \t */\n \tvmg.target = vma;\n-\tif (vma_expand(&vmg))\n-\t\treturn -ENOMEM;\n+\terr = vma_expand(&vmg);\n+\tif (err)\n+\t\treturn err;\n \n \t/*\n \t * move the page tables downwards, on failure we rely on\n","prefixes":["v2","2/3"]}