From patchwork Fri Jul 27 08:01:09 2012
X-Patchwork-Submitter: Lukas Czerner
X-Patchwork-Id: 173573
From: Lukas Czerner
To: linux-fsdevel@vger.kernel.org
Cc: linux-ext4@vger.kernel.org, tytso@mit.edu, hughd@google.com,
 linux-mmc@vger.kernel.org, Lukas Czerner
Subject: [PATCH 10/15] ext4: use ext4_zero_partial_blocks in punch_hole
Date: Fri, 27 Jul 2012 10:01:09 +0200
Message-Id: <1343376074-28034-11-git-send-email-lczerner@redhat.com>
In-Reply-To: <1343376074-28034-1-git-send-email-lczerner@redhat.com>
References: <1343376074-28034-1-git-send-email-lczerner@redhat.com>

We are going to get rid of ext4_discard_partial_page_buffers() since it
duplicates some code and partially duplicates the work of
truncate_pagecache_range(); moreover, the new implementation is much
clearer.

Now that truncate_inode_pages_range() can handle truncating
non-page-aligned regions, we can use it to invalidate and zero out the
block-aligned part of the punched-out range, and then use the new
ext4_zero_partial_blocks() helper (built on ext4_block_zero_page_range())
to zero the unaligned blocks at the start and end of the range. This
greatly simplifies the punch hole code, and after this commit
ext4_discard_partial_page_buffers() can be removed completely.

This has been tested on ppc64 with a 1k block size with fsx and xfstests
without any problems.
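To make the split of the punched range concrete, the boundary arithmetic
described above can be sketched in plain userspace C: the range is rounded
inward to block boundaries, the aligned middle is handed to the page-cache
invalidation, and the unaligned head and tail are zeroed separately. This is
only an illustrative sketch; the helper names, the printf reporting and the
4096-byte block size are assumptions for the example, not part of the patch.

#include <stdio.h>

/* Round x up/down to a multiple of the (power-of-two) block size. */
static long long round_up_blk(long long x, long long bs)
{
	return (x + bs - 1) & ~(bs - 1);
}

static long long round_down_blk(long long x, long long bs)
{
	return x & ~(bs - 1);
}

int main(void)
{
	long long blocksize = 4096;			/* assumed block size */
	long long offset = 1000, length = 20000;	/* hole is [offset, offset + length) */

	/* Block-aligned middle of the hole: invalidated and zeroed via the page cache. */
	long long first = round_up_blk(offset, blocksize);
	long long last = round_down_blk(offset + length, blocksize) - 1;

	if (last > first)
		printf("block-aligned region to invalidate: [%lld, %lld]\n", first, last);

	/* Unaligned head and tail: zeroed block by block, as ext4_zero_partial_blocks() does. */
	if (offset < first)
		printf("partial head to zero: [%lld, %lld)\n", offset, first);
	if (offset + length - 1 > last)
		printf("partial tail to zero: (%lld, %lld]\n", last, offset + length - 1);
	return 0;
}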
Signed-off-by: Lukas Czerner
---
 fs/ext4/ext4.h    |  2 +
 fs/ext4/extents.c | 80 ++++++-----------------------------------------
 fs/ext4/inode.c   | 31 ++++++++++++++++++++
 3 files changed, 42 insertions(+), 71 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 439af1e..704ceab 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1999,6 +1999,8 @@ extern int ext4_block_truncate_page(handle_t *handle,
 		struct address_space *mapping, loff_t from);
 extern int ext4_block_zero_page_range(handle_t *handle,
 		struct address_space *mapping, loff_t from, loff_t length);
+extern int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
+			     loff_t lstart, loff_t lend);
 extern int ext4_discard_partial_page_buffers(handle_t *handle,
 		struct address_space *mapping, loff_t from,
 		loff_t length, int flags);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 9967947..7f5fe66 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4740,9 +4740,7 @@ int ext4_ext_punch_hole(struct file *file, loff_t offset, loff_t length)
 	struct inode *inode = file->f_path.dentry->d_inode;
 	struct super_block *sb = inode->i_sb;
 	ext4_lblk_t first_block, stop_block;
-	struct address_space *mapping = inode->i_mapping;
 	handle_t *handle;
-	loff_t first_page, last_page, page_len;
 	loff_t first_page_offset, last_page_offset;
 	int credits, err = 0;
 
@@ -4762,17 +4760,13 @@ int ext4_ext_punch_hole(struct file *file, loff_t offset, loff_t length)
 			offset;
 	}
 
-	first_page = (offset + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-	last_page = (offset + length) >> PAGE_CACHE_SHIFT;
-
-	first_page_offset = first_page << PAGE_CACHE_SHIFT;
-	last_page_offset = last_page << PAGE_CACHE_SHIFT;
+	first_page_offset = round_up(offset, sb->s_blocksize);
+	last_page_offset = round_down((offset + length), sb->s_blocksize) - 1;
 
-	/* Now release the pages */
-	if (last_page_offset > first_page_offset) {
+	/* Now release the pages and zero block aligned part of pages*/
+	if (last_page_offset > first_page_offset)
 		truncate_pagecache_range(inode, first_page_offset,
-					 last_page_offset - 1);
-	}
+					 last_page_offset);
 
 	/* finish any pending end_io work */
 	ext4_flush_completed_IO(inode);
@@ -4788,66 +4782,10 @@ int ext4_ext_punch_hole(struct file *file, loff_t offset, loff_t length)
 	if (err)
 		goto out1;
 
-	/*
-	 * Now we need to zero out the non-page-aligned data in the
-	 * pages at the start and tail of the hole, and unmap the buffer
-	 * heads for the block aligned regions of the page that were
-	 * completely zeroed.
-	 */
-	if (first_page > last_page) {
-		/*
-		 * If the file space being truncated is contained within a page
-		 * just zero out and unmap the middle of that page
-		 */
-		err = ext4_discard_partial_page_buffers(handle,
-			mapping, offset, length, 0);
-
-		if (err)
-			goto out;
-	} else {
-		/*
-		 * zero out and unmap the partial page that contains
-		 * the start of the hole
-		 */
-		page_len = first_page_offset - offset;
-		if (page_len > 0) {
-			err = ext4_discard_partial_page_buffers(handle, mapping,
-						offset, page_len, 0);
-			if (err)
-				goto out;
-		}
-
-		/*
-		 * zero out and unmap the partial page that contains
-		 * the end of the hole
-		 */
-		page_len = offset + length - last_page_offset;
-		if (page_len > 0) {
-			err = ext4_discard_partial_page_buffers(handle, mapping,
-					last_page_offset, page_len, 0);
-			if (err)
-				goto out;
-		}
-	}
-
-	/*
-	 * If i_size is contained in the last page, we need to
-	 * unmap and zero the partial page after i_size
-	 */
-	if (inode->i_size >> PAGE_CACHE_SHIFT == last_page &&
-	    inode->i_size % PAGE_CACHE_SIZE != 0) {
-
-		page_len = PAGE_CACHE_SIZE -
-			(inode->i_size & (PAGE_CACHE_SIZE - 1));
-
-		if (page_len > 0) {
-			err = ext4_discard_partial_page_buffers(handle,
-					mapping, inode->i_size, page_len, 0);
-
-			if (err)
-				goto out;
-		}
-	}
+	err = ext4_zero_partial_blocks(handle, inode, offset,
+				       offset + length - 1);
+	if (err)
+		goto out;
 
 	first_block = (offset + sb->s_blocksize - 1) >>
 		EXT4_BLOCK_SIZE_BITS(sb);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 3e69a78..5205152 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3525,6 +3525,37 @@ unlock:
 	return err;
 }
 
+int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
+			     loff_t lstart, loff_t lend)
+{
+	struct super_block *sb = inode->i_sb;
+	struct address_space *mapping = inode->i_mapping;
+	unsigned partial = lstart & (sb->s_blocksize - 1);
+	ext4_fsblk_t start = lstart >> sb->s_blocksize_bits;
+	ext4_fsblk_t end = lend >> sb->s_blocksize_bits;
+	int err = 0;
+
+	/* Handle partial zero within the single block */
+	if (start == end) {
+		err = ext4_block_zero_page_range(handle, mapping,
+						 lstart, lend - lstart + 1);
+		return err;
+	}
+	/* Handle partial zero out on the start of the range */
+	if (partial) {
+		err = ext4_block_zero_page_range(handle, mapping,
+						 lstart, sb->s_blocksize);
+		if (err)
+			return err;
+	}
+	/* Handle partial zero out on the end of the range */
+	partial = lend & (sb->s_blocksize - 1);
+	if (partial != sb->s_blocksize - 1)
+		err = ext4_block_zero_page_range(handle, mapping,
+						 lend - partial, partial + 1);
+	return err;
+}
+
 int ext4_can_truncate(struct inode *inode)
 {
 	if (S_ISREG(inode->i_mode))
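A standalone model of the case split in the new ext4_zero_partial_blocks()
helper may help when reviewing the inode.c hunk above: a range contained in a
single block is zeroed in one call, otherwise an unaligned start and an
unaligned end are zeroed separately. The zero_range() callback below is a
hypothetical stand-in for ext4_block_zero_page_range(), and the block size
and example range are assumptions, not values taken from the patch.

#include <stdio.h>

/* Hypothetical stand-in for ext4_block_zero_page_range(): zero 'len' bytes at 'from'. */
static int zero_range(long long from, long long len)
{
	printf("zero %lld byte(s) at offset %lld\n", len, from);
	return 0;
}

/* Model of the case split in ext4_zero_partial_blocks() for the byte range [lstart, lend]. */
static int zero_partial_blocks(long long lstart, long long lend, long long blocksize)
{
	long long partial = lstart & (blocksize - 1);
	long long start_blk = lstart / blocksize;
	long long end_blk = lend / blocksize;
	int err = 0;

	/* Both ends fall into the same block: zero the whole range at once. */
	if (start_blk == end_blk)
		return zero_range(lstart, lend - lstart + 1);

	/* Unaligned start: zero from lstart up to the end of its block. */
	if (partial) {
		err = zero_range(lstart, blocksize - partial);
		if (err)
			return err;
	}

	/* Unaligned end: zero from the start of lend's block up to and including lend. */
	partial = lend & (blocksize - 1);
	if (partial != blocksize - 1)
		err = zero_range(lend - partial, partial + 1);
	return err;
}

int main(void)
{
	/* Example: the hole [1000, 20999] with a 4096-byte block size. */
	return zero_partial_blocks(1000, 20999, 4096);
}

Note that in the patch the start-of-range call passes sb->s_blocksize as the
length and relies on ext4_block_zero_page_range() to clamp it at the block
boundary; the sketch simply computes that clamped length up front.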