From patchwork Fri Jul 27 08:01:00 2012
X-Patchwork-Submitter: Lukas Czerner
X-Patchwork-Id: 173567
From: Lukas Czerner
To: linux-fsdevel@vger.kernel.org
Cc: linux-ext4@vger.kernel.org, tytso@mit.edu, hughd@google.com,
	linux-mmc@vger.kernel.org, Lukas Czerner, Andrew Morton
Subject: [PATCH 01/15] mm: add invalidatepage_range address space operation
Date: Fri, 27 Jul 2012 10:01:00 +0200
Message-Id: <1343376074-28034-2-git-send-email-lczerner@redhat.com>
In-Reply-To: <1343376074-28034-1-git-send-email-lczerner@redhat.com>
References: <1343376074-28034-1-git-send-email-lczerner@redhat.com>

Currently there is no way to truncate a partial page where the truncation
point is not at the end of the page. This was never needed before, since the
existing functionality was sufficient for the file system truncate operation
to work properly. However, more file systems now support the punch hole
feature, and they can benefit from mm supporting truncation of a page only up
to a certain point. Specifically, with this functionality
truncate_inode_pages_range() can be changed so that it supports truncating a
partial page at the end of the range (currently it will BUG_ON() if 'end' is
not at the end of a page).

This commit adds a new address space operation, invalidatepage_range, which
allows specifying the length in bytes to invalidate, rather than assuming
truncation to the end of the page. It also introduces the
block_invalidatepage_range() and do_invalidatepage_range() functions for
exactly the same reason. The caller does not have to implement both aops
(invalidatepage and invalidatepage_range); the latter is preferred. The old
method will be used only if invalidatepage_range is not implemented by the
caller.

Signed-off-by: Lukas Czerner
Cc: Andrew Morton
Cc: Hugh Dickins
---
 fs/buffer.c                 |   23 ++++++++++++++++++++++-
 include/linux/buffer_head.h |    2 ++
 include/linux/fs.h          |    2 ++
 include/linux/mm.h          |    2 ++
 mm/truncate.c               |   30 ++++++++++++++++++++++++++----
 5 files changed, 54 insertions(+), 5 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index c7062c8..5937f30 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1457,13 +1457,27 @@ static void discard_buffer(struct buffer_head * bh)
  */
 void block_invalidatepage(struct page *page, unsigned long offset)
 {
+	block_invalidatepage_range(page, offset, PAGE_CACHE_SIZE);
+}
+EXPORT_SYMBOL(block_invalidatepage);
+
+void block_invalidatepage_range(struct page *page, unsigned long offset,
+				unsigned long length)
+{
 	struct buffer_head *head, *bh, *next;
 	unsigned int curr_off = 0;
+	unsigned long stop = length + offset;
 
 	BUG_ON(!PageLocked(page));
 	if (!page_has_buffers(page))
 		goto out;
 
+	/*
+	 * Check for overflow
+	 */
+	if (stop < length)
+		stop = PAGE_CACHE_SIZE;
+
 	head = page_buffers(page);
 	bh = head;
 	do {
@@ -1471,6 +1485,12 @@ void block_invalidatepage(struct page *page, unsigned long offset)
 		next = bh->b_this_page;
 
 		/*
+		 * Are we still fully in range ?
+		 */
+		if (next_off > stop)
+			goto out;
+
+		/*
 		 * is this block fully invalidated?
 		 */
 		if (offset <= curr_off)
@@ -1489,7 +1509,8 @@ void block_invalidatepage(struct page *page, unsigned long offset)
 out:
 	return;
 }
-EXPORT_SYMBOL(block_invalidatepage);
+EXPORT_SYMBOL(block_invalidatepage_range);
+
 
 /*
  * We attach and possibly dirty the buffers atomically wrt
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 458f497..9d55645 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -194,6 +194,8 @@ extern int buffer_heads_over_limit;
  * address_spaces.
  */
 void block_invalidatepage(struct page *page, unsigned long offset);
+void block_invalidatepage_range(struct page *page, unsigned long offset,
+				unsigned long length);
 int block_write_full_page(struct page *page, get_block_t *get_block,
 				struct writeback_control *wbc);
 int block_write_full_page_endio(struct page *page, get_block_t *get_block,
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 8fabb03..b9eaf0c 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -620,6 +620,8 @@ struct address_space_operations {
 	/* Unfortunately this kludge is needed for FIBMAP. Don't use it */
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidatepage) (struct page *, unsigned long);
+	void (*invalidatepage_range) (struct page *, unsigned long,
+				      unsigned long);
 	int (*releasepage) (struct page *, gfp_t);
 	void (*freepage)(struct page *);
 	ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f9f279c..2db6a29 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -998,6 +998,8 @@ struct page *get_dump_page(unsigned long addr);
 
 extern int try_to_release_page(struct page * page, gfp_t gfp_mask);
 extern void do_invalidatepage(struct page *page, unsigned long offset);
+extern void do_invalidatepage_range(struct page *page, unsigned long offset,
+				    unsigned long length);
 
 int __set_page_dirty_nobuffers(struct page *page);
 int __set_page_dirty_no_writeback(struct page *page);
diff --git a/mm/truncate.c b/mm/truncate.c
index 75801ac..e29e5ea 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -39,14 +39,36 @@
  */
 void do_invalidatepage(struct page *page, unsigned long offset)
 {
+	do_invalidatepage_range(page, offset, PAGE_CACHE_SIZE);
+}
+
+void do_invalidatepage_range(struct page *page, unsigned long offset,
+			     unsigned long length)
+{
+	void (*invalidatepage_range)(struct page *, unsigned long,
+				     unsigned long);
 	void (*invalidatepage)(struct page *, unsigned long);
+
+	/*
+	 * Try invalidatepage_range first
+	 */
+	invalidatepage_range = page->mapping->a_ops->invalidatepage_range;
+	if (invalidatepage_range)
+		(*invalidatepage_range)(page, offset, length);
+
+	/*
+	 * When only invalidatepage is registered length must be
+	 * PAGE_CACHE_SIZE
+	 */
 	invalidatepage = page->mapping->a_ops->invalidatepage;
+	if (invalidatepage) {
+		BUG_ON(length != PAGE_CACHE_SIZE);
+		(*invalidatepage)(page, offset);
+	}
 #ifdef CONFIG_BLOCK
-	if (!invalidatepage)
-		invalidatepage = block_invalidatepage;
+	if (!invalidatepage_range && !invalidatepage)
+		block_invalidatepage_range(page, offset, length);
 #endif
-	if (invalidatepage)
-		(*invalidatepage)(page, offset);
 }
 
 static inline void truncate_partial_page(struct page *page, unsigned partial)