From patchwork Fri Jul 27 08:01:05 2012
X-Patchwork-Submitter: Lukas Czerner
X-Patchwork-Id: 173572
From: Lukas Czerner
To: linux-fsdevel@vger.kernel.org
Cc: linux-ext4@vger.kernel.org, tytso@mit.edu, hughd@google.com,
    linux-mmc@vger.kernel.org, Lukas Czerner, Andrew Morton
Subject: [PATCH 06/15] mm: teach truncate_inode_pages_range() to handle non
 page aligned ranges
Date: Fri, 27 Jul 2012 10:01:05 +0200
Message-Id: <1343376074-28034-7-git-send-email-lczerner@redhat.com>
In-Reply-To: <1343376074-28034-1-git-send-email-lczerner@redhat.com>
References: <1343376074-28034-1-git-send-email-lczerner@redhat.com>

This commit changes truncate_inode_pages_range() so that it can handle
non page aligned regions of the truncate.  Currently we can hit BUG_ON()
when the end of the range is not page aligned, although an unaligned
start of the range is already handled.

Being able to handle non page aligned truncate ranges can help file
system punch_hole implementations and save some work, because once we
are holding the page we might as well deal with it right away.

In order for this to work correctly, the caller must register the
invalidatepage_range address space operation, or rely solely on
block_invalidatepage_range().  That said, do_invalidatepage_range() will
BUG_ON() if the caller implements invalidatepage(), does not implement
invalidatepage_range(), and uses truncate_inode_pages_range() with a non
page aligned end of the range.

This is based on code provided by Hugh Dickins, with some small changes
to make use of do_invalidatepage_range().
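To illustrate the calling convention this enables, here is a sketch (not
part of the patch; the foofs_* names are hypothetical) of a file system
punching a hole at byte granularity.  It points the invalidatepage_range
address space operation, introduced earlier in this series, at
block_invalidatepage_range():

	/* Range-aware invalidation; without this (and with only a plain
	 * ->invalidatepage) do_invalidatepage_range() would BUG_ON() an
	 * unaligned end of the range. */
	static const struct address_space_operations foofs_aops = {
		/* ... readpage, writepage, etc. ... */
		.invalidatepage_range	= block_invalidatepage_range,
	};

	static int foofs_punch_hole(struct inode *inode, loff_t offset,
				    loff_t len)
	{
		/* lend is inclusive, so pass the last byte of the hole;
		 * no rounding of offset or len to page boundaries. */
		truncate_inode_pages_range(inode->i_mapping, offset,
					   offset + len - 1);
		/* ... then free the underlying blocks ... */
		return 0;
	}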
Signed-off-by: Lukas Czerner
Cc: Andrew Morton
Cc: Hugh Dickins
Acked-by: Hugh Dickins
---
 mm/truncate.c | 77 +++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 50 insertions(+), 27 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index e29e5ea..1f6ea8b 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -71,14 +71,6 @@ void do_invalidatepage_range(struct page *page, unsigned long offset,
 #endif
 }
 
-static inline void truncate_partial_page(struct page *page, unsigned partial)
-{
-	zero_user_segment(page, partial, PAGE_CACHE_SIZE);
-	cleancache_invalidate_page(page->mapping, page);
-	if (page_has_private(page))
-		do_invalidatepage(page, partial);
-}
-
 /*
  * This cancels just the dirty bit on the kernel page itself, it
  * does NOT actually remove dirty bits on any mmap's that may be
@@ -212,8 +204,8 @@ int invalidate_inode_page(struct page *page)
  * @lend: offset to which to truncate
  *
  * Truncate the page cache, removing the pages that are between
- * specified offsets (and zeroing out partial page
- * (if lstart is not page aligned)).
+ * specified offsets (and zeroing out partial pages
+ * if lstart or lend + 1 is not page aligned).
  *
  * Truncate takes two passes - the first pass is nonblocking.  It will not
  * block on page locks and it will not block on writeback.  The second pass
@@ -224,35 +216,44 @@ int invalidate_inode_page(struct page *page)
  * We pass down the cache-hot hint to the page freeing code.  Even if the
  * mapping is large, it is probably the case that the final pages are the most
  * recently touched, and freeing happens in ascending file offset order.
+ *
+ * Note that it is able to handle cases where lend + 1 is not page aligned.
+ * However, in order for this to work the caller has to register the
+ * invalidatepage_range address space operation or rely solely on
+ * block_invalidatepage_range().  That said, do_invalidatepage_range() will
+ * BUG_ON() if the caller implements invalidatepage(), does not implement
+ * invalidatepage_range() and uses truncate_inode_pages_range() with lend + 1
+ * unaligned to the page cache size.
  */
 void truncate_inode_pages_range(struct address_space *mapping,
 				loff_t lstart, loff_t lend)
 {
-	const pgoff_t start = (lstart + PAGE_CACHE_SIZE-1) >> PAGE_CACHE_SHIFT;
-	const unsigned partial = lstart & (PAGE_CACHE_SIZE - 1);
+	pgoff_t start = (lstart + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+	pgoff_t end = (lend + 1) >> PAGE_CACHE_SHIFT;
+	unsigned int partial_start = lstart & (PAGE_CACHE_SIZE - 1);
+	unsigned int partial_end = (lend + 1) & (PAGE_CACHE_SIZE - 1);
 	struct pagevec pvec;
 	pgoff_t index;
-	pgoff_t end;
 	int i;
 
 	cleancache_invalidate_inode(mapping);
 	if (mapping->nrpages == 0)
 		return;
 
-	BUG_ON((lend & (PAGE_CACHE_SIZE - 1)) != (PAGE_CACHE_SIZE - 1));
-	end = (lend >> PAGE_CACHE_SHIFT);
+	if (lend == -1)
+		end = -1;	/* unsigned, so actually very big */
 
 	pagevec_init(&pvec, 0);
 	index = start;
-	while (index <= end && pagevec_lookup(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
+	while (index < end && pagevec_lookup(&pvec, mapping, index,
+			min(end - index, (pgoff_t)PAGEVEC_SIZE))) {
 		mem_cgroup_uncharge_start();
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
 
 			/* We rely upon deletion not changing page->index */
 			index = page->index;
-			if (index > end)
+			if (index >= end)
 				break;
 
 			if (!trylock_page(page))
@@ -271,27 +272,51 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		index++;
 	}
 
-	if (partial) {
+	if (partial_start) {
 		struct page *page = find_lock_page(mapping, start - 1);
 		if (page) {
+			unsigned int top = PAGE_CACHE_SIZE;
+			if (start > end) {
+				top = partial_end;
+				partial_end = 0;
+			}
+			wait_on_page_writeback(page);
+			zero_user_segment(page, partial_start, top);
+			cleancache_invalidate_page(mapping, page);
+			if (page_has_private(page))
+				do_invalidatepage_range(page, partial_start,
+							top);
+			unlock_page(page);
+			page_cache_release(page);
+		}
+	}
+	if (partial_end) {
+		struct page *page = find_lock_page(mapping, end);
+		if (page) {
 			wait_on_page_writeback(page);
-			truncate_partial_page(page, partial);
+			zero_user_segment(page, 0, partial_end);
+			cleancache_invalidate_page(mapping, page);
+			if (page_has_private(page))
+				do_invalidatepage_range(page, 0,
+							partial_end);
 			unlock_page(page);
 			page_cache_release(page);
 		}
 	}
+	if (start >= end)
+		return;
 
 	index = start;
 	for ( ; ; ) {
 		cond_resched();
 		if (!pagevec_lookup(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
+			min(end - index, (pgoff_t)PAGEVEC_SIZE))) {
 			if (index == start)
 				break;
 			index = start;
 			continue;
 		}
-		if (index == start && pvec.pages[0]->index > end) {
+		if (index == start && pvec.pages[0]->index >= end) {
 			pagevec_release(&pvec);
 			break;
 		}
@@ -301,7 +326,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 
 			/* We rely upon deletion not changing page->index */
 			index = page->index;
-			if (index > end)
+			if (index >= end)
 				break;
 
 			lock_page(page);
@@ -646,10 +671,8 @@ void truncate_pagecache_range(struct inode *inode, loff_t lstart, loff_t lend)
 	 * This rounding is currently just for example: unmap_mapping_range
 	 * expands its hole outwards, whereas we want it to contract the hole
 	 * inwards.  However, existing callers of truncate_pagecache_range are
-	 * doing their own page rounding first; and truncate_inode_pages_range
-	 * currently BUGs if lend is not pagealigned-1 (it handles partial
-	 * page at start of hole, but not partial page at end of hole).  Note
-	 * unmap_mapping_range allows holelen 0 for all, and we allow lend -1.
+	 * doing their own page rounding first.  Note that unmap_mapping_range
+	 * allows holelen 0 for all, and we allow lend -1 for end of file.
 	 */
 
 	/*
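For reference, a worked example of the new index arithmetic (an
editor's standalone user-space mock-up assuming 4096-byte pages, not
kernel code).  Punching bytes 1000..8999 gives a partial page at each
end and one whole page in between:

	#include <stdio.h>

	#define PAGE_CACHE_SHIFT 12
	#define PAGE_CACHE_SIZE  (1UL << PAGE_CACHE_SHIFT)

	int main(void)
	{
		/* lend is inclusive: punch bytes 1000..8999 */
		long long lstart = 1000, lend = 8999;

		/* same arithmetic as the patched function */
		unsigned long start = (lstart + PAGE_CACHE_SIZE - 1)
						>> PAGE_CACHE_SHIFT;
		unsigned long end = (lend + 1) >> PAGE_CACHE_SHIFT;
		unsigned int partial_start = lstart & (PAGE_CACHE_SIZE - 1);
		unsigned int partial_end = (lend + 1) & (PAGE_CACHE_SIZE - 1);

		/*
		 * Prints: start=1 end=2 partial_start=1000 partial_end=808
		 * Page 0 keeps bytes 0..999 and has 1000..4095 zeroed;
		 * page 1 (start <= index < end) is truncated whole;
		 * page 2 has bytes 0..807 zeroed.  A range falling inside
		 * a single page would give start > end, the case the patch
		 * handles by capping 'top' at partial_end and clearing
		 * partial_end.
		 */
		printf("start=%lu end=%lu partial_start=%u partial_end=%u\n",
		       start, end, partial_start, partial_end);
		return 0;
	}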