Date: Tue, 17 Jul 2012 01:28:08 -0700 (PDT)
From: Hugh Dickins
To: Lukas Czerner
Cc: Andrew Morton, Theodore Ts'o, Dave Chinner, linux-ext4@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, achender@linux.vnet.ibm.com
Subject: Re: [PATCH 06/12 v2] mm: teach truncate_inode_pages_range() to
    handle non page aligned ranges
In-Reply-To: <1342185555-21146-6-git-send-email-lczerner@redhat.com>
References: <1342185555-21146-1-git-send-email-lczerner@redhat.com>
    <1342185555-21146-6-git-send-email-lczerner@redhat.com>
User-Agent: Alpine 2.00 (LSU 1167 2008-08-23)

On Fri, 13 Jul 2012, Lukas Czerner wrote:

> This commit changes truncate_inode_pages_range() so that it can handle
> non page aligned regions of the truncate. Currently we can hit BUG_ON
> when the end of the range is not page aligned, but it can handle an
> unaligned start of the range.
>
> Being able to handle non page aligned regions of the range can help
> file system punch_hole implementations and save some work, because
> once we're holding the page we might as well deal with it right away.
>
> Signed-off-by: Lukas Czerner
> Cc: Hugh Dickins

As I said under 02/12, I'd much rather not change from the existing -1
convention: I don't think it's wonderful, but I do think it's confusing
and a waste of effort to change from it; and I'd rather keep the code
in truncate.c close to what's doing the same job in shmem.c.

Here's what I came up with (and hacked tmpfs to use it without swap
temporarily, so I could run fsx for an hour to validate it). But you
can see I've a couple of questions; and we probably ought to reduce the
partial page code duplication once we're sure what should go in there.

Hugh

[PATCH]...

Apply to truncate_inode_pages_range() the changes which 83e4fa9c16e4
("tmpfs: support fallocate FALLOC_FL_PUNCH_HOLE") made to
shmem_truncate_range(): so the generic function can handle a partial
end offset for hole-punching.

In doing tmpfs, I became convinced that it needed a set_page_dirty()
on the partial pages, and I'm doing that here: but perhaps it should
be the responsibility of the calling filesystem? I don't know.

And I'm doubtful whether this code can be correct (on a filesystem
with blocksize less than pagesize) without adding an end offset
argument to the address_space_operations invalidatepage(page, offset):
convince me!
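To make that question concrete, here is one purely illustrative sketch
(not part of this patch) of what such an extended operation might look
like; the length parameter, and the three-argument do_invalidatepage()
used with it, are hypothetical:

	/*
	 * Hypothetical extension of the invalidatepage address_space
	 * operation: pass the length of the zeroed range as well, so a
	 * blocksize < pagesize filesystem can drop only the buffers
	 * inside [offset, offset + length), rather than assuming the
	 * invalidation always runs to the end of the page.
	 */
	void (*invalidatepage)(struct page *page, unsigned int offset,
			       unsigned int length);

	/* The partial_start block below could then say something like: */
	if (page_has_private(page))
		do_invalidatepage(page, partial_start,
				  top - partial_start);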
Not-yet-signed-off-by: Hugh Dickins
---
 mm/truncate.c |   69 +++++++++++++++++++++++++++++-------------------
 1 file changed, 42 insertions(+), 27 deletions(-)

--- 3.5-rc7/mm/truncate.c	2012-06-03 06:42:11.249787128 -0700
+++ linux/mm/truncate.c	2012-07-16 22:54:16.903821549 -0700
@@ -49,14 +49,6 @@ void do_invalidatepage(struct page *page
 	(*invalidatepage)(page, offset);
 }
 
-static inline void truncate_partial_page(struct page *page, unsigned partial)
-{
-	zero_user_segment(page, partial, PAGE_CACHE_SIZE);
-	cleancache_invalidate_page(page->mapping, page);
-	if (page_has_private(page))
-		do_invalidatepage(page, partial);
-}
-
 /*
  * This cancels just the dirty bit on the kernel page itself, it
  * does NOT actually remove dirty bits on any mmap's that may be
@@ -190,8 +182,8 @@ int invalidate_inode_page(struct page *p
  * @lend: offset to which to truncate
  *
  * Truncate the page cache, removing the pages that are between
- * specified offsets (and zeroing out partial page
- * (if lstart is not page aligned)).
+ * specified offsets (and zeroing out partial pages
+ * if lstart or lend + 1 is not page aligned).
  *
  * Truncate takes two passes - the first pass is nonblocking.  It will not
  * block on page locks and it will not block on writeback.  The second pass
@@ -206,31 +198,32 @@ int invalidate_inode_page(struct page *p
 void truncate_inode_pages_range(struct address_space *mapping,
 				loff_t lstart, loff_t lend)
 {
-	const pgoff_t start = (lstart + PAGE_CACHE_SIZE-1) >> PAGE_CACHE_SHIFT;
-	const unsigned partial = lstart & (PAGE_CACHE_SIZE - 1);
+	pgoff_t start = (lstart + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+	pgoff_t end = (lend + 1) >> PAGE_CACHE_SHIFT;
+	unsigned int partial_start = lstart & (PAGE_CACHE_SIZE - 1);
+	unsigned int partial_end = (lend + 1) & (PAGE_CACHE_SIZE - 1);
 	struct pagevec pvec;
 	pgoff_t index;
-	pgoff_t end;
 	int i;
 
 	cleancache_invalidate_inode(mapping);
 	if (mapping->nrpages == 0)
 		return;
 
-	BUG_ON((lend & (PAGE_CACHE_SIZE - 1)) != (PAGE_CACHE_SIZE - 1));
-	end = (lend >> PAGE_CACHE_SHIFT);
+	if (lend == -1)
+		end = -1;	/* unsigned, so actually very big */
 
 	pagevec_init(&pvec, 0);
 	index = start;
-	while (index <= end && pagevec_lookup(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
+	while (index < end && pagevec_lookup(&pvec, mapping, index,
+			min(end - index, (pgoff_t)PAGEVEC_SIZE))) {
 		mem_cgroup_uncharge_start();
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
 
 			/* We rely upon deletion not changing page->index */
 			index = page->index;
-			if (index > end)
+			if (index >= end)
 				break;
 
 			if (!trylock_page(page))
@@ -249,27 +242,51 @@ void truncate_inode_pages_range(struct a
 		index++;
 	}
 
-	if (partial) {
+	if (partial_start) {
 		struct page *page = find_lock_page(mapping, start - 1);
 		if (page) {
+			unsigned int top = PAGE_CACHE_SIZE;
+			if (start > end) {
+				top = partial_end;
+				partial_end = 0;
+			}
 			wait_on_page_writeback(page);
-			truncate_partial_page(page, partial);
+			zero_user_segment(page, partial_start, top);
+			cleancache_invalidate_page(mapping, page);
+			if (page_has_private(page))
+				do_invalidatepage(page, partial_start);
+			set_page_dirty(page);
 			unlock_page(page);
 			page_cache_release(page);
 		}
 	}
+	if (partial_end) {
+		struct page *page = find_lock_page(mapping, end);
+		if (page) {
+			wait_on_page_writeback(page);
+			zero_user_segment(page, 0, partial_end);
+			cleancache_invalidate_page(mapping, page);
+			if (page_has_private(page))
+				do_invalidatepage(page, 0);
+			set_page_dirty(page);
+			unlock_page(page);
+			page_cache_release(page);
+		}
+	}
+	if (start >= end)
+		return;
 
 	index = start;
 	for ( ; ; ) {
 		cond_resched();
 		if (!pagevec_lookup(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
+			min(end - index, (pgoff_t)PAGEVEC_SIZE))) {
 			if (index == start)
 				break;
 			index = start;
 			continue;
 		}
-		if (index == start && pvec.pages[0]->index > end) {
+		if (index == start && pvec.pages[0]->index >= end) {
 			pagevec_release(&pvec);
 			break;
 		}
@@ -279,7 +296,7 @@ void truncate_inode_pages_range(struct a
 
 			/* We rely upon deletion not changing page->index */
 			index = page->index;
-			if (index > end)
+			if (index >= end)
 				break;
 
 			lock_page(page);
@@ -624,10 +641,8 @@ void truncate_pagecache_range(struct ino
 	 * This rounding is currently just for example: unmap_mapping_range
 	 * expands its hole outwards, whereas we want it to contract the hole
	 * inwards.  However, existing callers of truncate_pagecache_range are
-	 * doing their own page rounding first; and truncate_inode_pages_range
-	 * currently BUGs if lend is not pagealigned-1 (it handles partial
-	 * page at start of hole, but not partial page at end of hole).  Note
-	 * unmap_mapping_range allows holelen 0 for all, and we allow lend -1.
+	 * doing their own page rounding first.  Note that unmap_mapping_range
+	 * allows holelen 0 for all, and we allow lend -1 for end of file.
 	 */
 
 	/*
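To see how the new arithmetic carves a byte range [lstart, lend] into
whole pages and partial pages, here is a minimal userspace sketch
(assuming 4096-byte pages; the show() helper and the sample offsets are
invented for illustration, they mirror the patch's calculations):

	#include <stdio.h>

	#define PAGE_CACHE_SHIFT 12
	#define PAGE_CACHE_SIZE  (1UL << PAGE_CACHE_SHIFT)

	/* Mirrors the patch's arithmetic for a hole covering [lstart, lend] */
	static void show(long long lstart, long long lend)
	{
		unsigned long start = (lstart + PAGE_CACHE_SIZE - 1) >>
							PAGE_CACHE_SHIFT;
		unsigned long end = (lend + 1) >> PAGE_CACHE_SHIFT;
		unsigned int partial_start = lstart & (PAGE_CACHE_SIZE - 1);
		unsigned int partial_end = (lend + 1) & (PAGE_CACHE_SIZE - 1);

		if (lend == -1)
			end = -1;	/* unsigned, so actually very big */

		printf("[%lld, %lld]: whole pages [%lu, %lu), "
		       "partial_start %u, partial_end %u\n",
		       lstart, lend, start, end, partial_start, partial_end);
	}

	int main(void)
	{
		show(1000, 8999);	/* partial page 0, whole page 1, partial page 2 */
		show(1000, 2999);	/* start > end: both partials in one page */
		show(0, -1);		/* truncate to zero: all pages, no partials */
		return 0;
	}

The second case is why the partial_start block caps its zeroing at
"top": when start > end, both partial offsets fall within the same
page, so that one page is zeroed from partial_start to partial_end and
partial_end is then cleared to suppress the separate partial_end pass.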