From patchwork Wed Nov 21 02:00:41 2012
X-Patchwork-Submitter: Darrick Wong
X-Patchwork-Id: 200539
Subject: [PATCH 2/4] mm: Only enforce stable page writes if the backing device requires it
From: "Darrick J. Wong"
To: axboe@kernel.dk, lucho@ionkov.net, jack@suse.cz, ericvh@gmail.com,
    tytso@mit.edu, rminnich@sandia.gov, viro@zeniv.linux.org.uk
Cc: martin.petersen@oracle.com, neilb@suse.de, david@fromorbit.com,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    adilger.kernel@dilger.ca, bharrosh@panasas.com, jlayton@samba.org,
    v9fs-developer@lists.sourceforge.net, linux-ext4@vger.kernel.org
Date: Tue, 20 Nov 2012 18:00:41 -0800
Message-ID: <20121121020041.10225.35875.stgit@blackbox.djwong.org>
In-Reply-To: <20121121020027.10225.43206.stgit@blackbox.djwong.org>
References: <20121121020027.10225.43206.stgit@blackbox.djwong.org>
User-Agent: StGit/0.15

Create a helper function that checks whether a backing device requires
stable page writes and, if so, performs the necessary wait.  Then make
all the places in the memory manager that handle making pages writable
use this helper.  This should provide stable page write support to most
filesystems, while eliminating unnecessary waiting for devices that
don't require the feature.

Signed-off-by: Darrick J. Wong
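
The call pattern this patch establishes is the same at every site: dirty
the page first, then let the helper decide whether to wait for in-flight
writeback.  As a rough sketch (the filesystem name and the truncate check
are hypothetical; only wait_if_stable_page_write() comes from this patch),
a filesystem's ->page_mkwrite handler would end up looking something like:

static int examplefs_page_mkwrite(struct vm_area_struct *vma,
				  struct vm_fault *vmf)
{
	struct page *page = vmf->page;
	struct inode *inode = vma->vm_file->f_mapping->host;

	lock_page(page);
	if (page->mapping != inode->i_mapping) {
		/* Page was truncated or invalidated under us. */
		unlock_page(page);
		return VM_FAULT_NOPAGE;
	}

	set_page_dirty(page);
	/* No-op unless bdi_cap_stable_pages_required() is true. */
	wait_if_stable_page_write(page);
	return VM_FAULT_LOCKED;
}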
---
 fs/buffer.c             |    2 +-
 fs/ext4/inode.c         |    2 +-
 include/linux/pagemap.h |    1 +
 mm/filemap.c            |    3 ++-
 mm/page-writeback.c     |   11 +++++++++++
 5 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index b5f0442..4cf0b78 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2334,7 +2334,7 @@ int __block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
 	if (unlikely(ret < 0))
 		goto out_unlock;
 	set_page_dirty(page);
-	wait_on_page_writeback(page);
+	wait_if_stable_page_write(page);
 	return 0;
 out_unlock:
 	unlock_page(page);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index b3c243b..24c6193 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4814,7 +4814,7 @@ int ext4_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
 	if (!walk_page_buffers(NULL, page_buffers(page), 0, len, NULL,
 			       ext4_bh_unmapped)) {
 		/* Wait so that we don't change page under IO */
-		wait_on_page_writeback(page);
+		wait_if_stable_page_write(page);
 		ret = VM_FAULT_LOCKED;
 		goto out;
 	}
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e42c762..1d67f81 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -398,6 +398,7 @@ static inline void wait_on_page_writeback(struct page *page)
 }
 
 extern void end_page_writeback(struct page *page);
+void wait_if_stable_page_write(struct page *page);
 
 /*
  * Add an arbitrary waiter to a page's wait queue
diff --git a/mm/filemap.c b/mm/filemap.c
index 83efee7..34df185 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1728,6 +1728,7 @@ int filemap_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
 	 * see the dirty page and writeprotect it again.
 	 */
 	set_page_dirty(page);
+	wait_if_stable_page_write(page);
 out:
 	sb_end_pagefault(inode->i_sb);
 	return ret;
@@ -2274,7 +2275,7 @@ repeat:
 		return NULL;
 	}
 found:
-	wait_on_page_writeback(page);
+	wait_if_stable_page_write(page);
 	return page;
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 830893b..cbbaeca 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2275,3 +2275,14 @@ int mapping_tagged(struct address_space *mapping, int tag)
 	return radix_tree_tagged(&mapping->page_tree, tag);
 }
 EXPORT_SYMBOL(mapping_tagged);
+
+void wait_if_stable_page_write(struct page *page)
+{
+	struct backing_dev_info *bdi = page->mapping->backing_dev_info;
+
+	if (!bdi_cap_stable_pages_required(bdi))
+		return;
+
+	wait_on_page_writeback(page);
+}
+EXPORT_SYMBOL_GPL(wait_if_stable_page_write);
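
For the helper to ever wait, something lower in the stack has to mark the
backing device as needing stable pages; this patch only consumes that
state through bdi_cap_stable_pages_required().  Assuming the capability
bit added earlier in this series is named BDI_CAP_STABLE_WRITES (the name
it carried when the series was merged), a block driver that checksums or
otherwise inspects in-flight write data would advertise the requirement
roughly like this (the driver and function name are placeholders):

/* Sketch only: opt this queue's backing device into stable page writes. */
static void exampleblk_require_stable_pages(struct request_queue *q)
{
	q->backing_dev_info.capabilities |= BDI_CAP_STABLE_WRITES;
}

Filesystems that go through the generic paths touched above
(filemap_page_mkwrite, grab_cache_page_write_begin, __block_page_mkwrite)
then pick up the wait automatically, while devices that never set the
capability skip it entirely.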