[v3,08/20] mm: Protect SPF handler against anon_vma changes

Message ID 1504894024-2750-9-git-send-email-ldufour@linux.vnet.ibm.com
State Not Applicable
Series: Speculative page faults

Commit Message

Laurent Dufour Sept. 8, 2017, 6:06 p.m.
The speculative page fault handler must be protected against anon_vma
changes. This is because page_add_new_anon_rmap() is called during the
speculative path.

In addition, don't attempt a speculative page fault if the VMA doesn't
have an anon_vma structure allocated, because that allocation must be
protected by the mmap_sem.

In __vma_adjust() when importer->anon_vma is set, there is no need to
protect against speculative page faults since speculative page fault
is aborted if the vma->anon_vma is not set.

When page_add_new_anon_rmap() is called, vma->anon_vma is necessarily
valid: its presence is checked when locking the pte, and the anon_vma
is only removed once the pte is unlocked. So even if the speculative
page fault handler runs concurrently with do_munmap(), the pte is
locked in unmap_region() - through unmap_vmas() - and the anon_vma is
unlinked later. Since the vma sequence counter is updated in
unmap_page_range() before the pte is locked, and again in
free_pgtables(), the change is detected when the pte is locked.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 4 ++++
 1 file changed, 4 insertions(+)


diff --git a/mm/memory.c b/mm/memory.c
index f008042ab24e..401b13cbfc3c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -617,7 +617,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 * Hide vma from rmap and truncate_pagecache before freeing
 		 * pgtables
 		 */
+		write_seqcount_begin(&vma->vm_sequence);
 		unlink_anon_vmas(vma);
 		unlink_file_vma(vma);
+		write_seqcount_end(&vma->vm_sequence);
 
 		if (is_vm_hugetlb_page(vma)) {
@@ -631,7 +633,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			       && !is_vm_hugetlb_page(next)) {
 				vma = next;
 				next = vma->vm_next;
+				write_seqcount_begin(&vma->vm_sequence);
 				unlink_anon_vmas(vma);
+				write_seqcount_end(&vma->vm_sequence);
 			}
 			free_pgd_range(tlb, addr, vma->vm_end,
				floor, next ? next->vm_start : ceiling);