From patchwork Fri Aug 16 23:22:08 2013
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 267933
From: Andy Lutomirski
To: linux-kernel@vger.kernel.org
Cc: linux-ext4@vger.kernel.org, Dave Chinner, Theodore Ts'o, Dave Hansen,
	xfs@oss.sgi.com, Jan Kara, Tim Chen, Christoph Hellwig,
	Andy Lutomirski
Subject: [PATCH v3 1/5] mm: Track mappings that have been written via ptes
Date: Fri, 16 Aug 2013 16:22:08 -0700
Message-Id: <73931c4ad555e0c9872ac54b14c7516fd026ef6c.1376679411.git.luto@amacapital.net>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: linux-ext4@vger.kernel.org

This will allow filesystems to identify when their mappings have been
modified using a pte.  The idea is that, ideally, filesystems will
update ctime and mtime sometime after any mmapped write.

This is handled in core mm code for two reasons:

1. Performance.  Setting a bit directly is faster than an indirect
   call to a vma op.

2. Simplicity.  The cmtime bit is set with lots of mm locks held.
   Rather than making filesystems add a new vm operation that needs
   to be aware of locking, it's easier to just get it right in core
   code.

For filesystems that don't use the deferred cmtime update mechanism,
setting the AS_CMTIME bit has no effect.
Signed-off-by: Andy Lutomirski
---
 include/linux/pagemap.h | 11 +++++++++++
 mm/memory.c             |  7 ++++++-
 mm/rmap.c               | 27 +++++++++++++++++++++++++--
 3 files changed, 42 insertions(+), 3 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e3dea75..f98fe2d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -25,6 +25,7 @@ enum mapping_flags {
 	AS_MM_ALL_LOCKS	= __GFP_BITS_SHIFT + 2,	/* under mm_take_all_locks() */
 	AS_UNEVICTABLE	= __GFP_BITS_SHIFT + 3,	/* e.g., ramdisk, SHM_LOCK */
 	AS_BALLOON_MAP  = __GFP_BITS_SHIFT + 4, /* balloon page special map */
+	AS_CMTIME	= __GFP_BITS_SHIFT + 5,	/* cmtime update deferred */
 };
 
 static inline void mapping_set_error(struct address_space *mapping, int error)
@@ -74,6 +75,16 @@ static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
 	return (__force gfp_t)mapping->flags & __GFP_BITS_MASK;
 }
 
+static inline void mapping_set_cmtime(struct address_space * mapping)
+{
+	set_bit(AS_CMTIME, &mapping->flags);
+}
+
+static inline bool mapping_test_clear_cmtime(struct address_space * mapping)
+{
+	return test_and_clear_bit(AS_CMTIME, &mapping->flags);
+}
+
 /*
  * This is non-atomic.  Only to be used before the mapping is activated.
  * Probably needs a barrier...
diff --git a/mm/memory.c b/mm/memory.c
index 4026841..1737a90 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1150,8 +1150,13 @@ again:
 			if (PageAnon(page))
 				rss[MM_ANONPAGES]--;
 			else {
-				if (pte_dirty(ptent))
+				if (pte_dirty(ptent)) {
+					struct address_space *mapping =
+						page_mapping(page);
+					if (mapping)
+						mapping_set_cmtime(mapping);
 					set_page_dirty(page);
+				}
 				if (pte_young(ptent) &&
 				    likely(!(vma->vm_flags & VM_SEQ_READ)))
 					mark_page_accessed(page);
diff --git a/mm/rmap.c b/mm/rmap.c
index b2e29ac..2e3fb27 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -928,6 +928,10 @@ static int page_mkclean_file(struct address_space *mapping, struct page *page)
 		}
 	}
 	mutex_unlock(&mapping->i_mmap_mutex);
+
+	if (ret)
+		mapping_set_cmtime(mapping);
+
 	return ret;
 }
 
@@ -1179,6 +1183,19 @@ out:
 }
 
 /*
+ * Mark a page's mapping for future cmtime update.  It's safe to call this
+ * on any page, but it only has any effect if the page is backed by a mapping
+ * that uses mapping_test_clear_cmtime to handle file time updates.  This
+ * means that there's no need to call this for non-VM_SHARED vmas.
+ */
+static void page_set_cmtime(struct page *page)
+{
+	struct address_space *mapping = page_mapping(page);
+	if (mapping)
+		mapping_set_cmtime(mapping);
+}
+
+/*
  * Subfunctions of try_to_unmap: try_to_unmap_one called
  * repeatedly from try_to_unmap_ksm, try_to_unmap_anon or try_to_unmap_file.
  */
@@ -1219,8 +1236,11 @@ int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	pteval = ptep_clear_flush(vma, address, pte);
 
 	/* Move the dirty bit to the physical page now the pte is gone. */
-	if (pte_dirty(pteval))
+	if (pte_dirty(pteval)) {
 		set_page_dirty(page);
+		if (vma->vm_flags & VM_SHARED)
+			page_set_cmtime(page);
+	}
 
 	/* Update high watermark before we lower rss */
 	update_hiwater_rss(mm);
@@ -1413,8 +1433,11 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
 		}
 
 		/* Move the dirty bit to the physical page now the pte is gone. */
-		if (pte_dirty(pteval))
+		if (pte_dirty(pteval)) {
 			set_page_dirty(page);
+			if (vma->vm_flags & VM_SHARED)
+				page_set_cmtime(page);
+		}
 
 		page_remove_rmap(page);
 		page_cache_release(page);
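
[Editor's note: this patch only sets the AS_CMTIME bit; the consumer side is
left to the filesystems (presumably wired up later in the series).  The
sketch below is not part of the patch and the function name and call site
are hypothetical; it only illustrates how a filesystem might consume the
flag at a point where no mm locks are held, e.g. during writeback or fsync,
using the mapping_test_clear_cmtime() helper added above.]

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * Hypothetical example, not in this patch: consume the deferred cmtime
 * flag and bring the inode times up to date.
 */
static void example_flush_cmtime(struct inode *inode)
{
	/*
	 * Atomically test and clear AS_CMTIME, so a concurrent pte-dirty
	 * teardown re-arms the bit rather than racing with us.
	 */
	if (mapping_test_clear_cmtime(inode->i_mapping)) {
		inode->i_mtime = inode->i_ctime =
			current_fs_time(inode->i_sb);
		mark_inode_dirty_sync(inode);
	}
}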