From patchwork Tue Jul 10 19:10:30 2018
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 942159
From: Ross Zwisler
To: Jan Kara, Dan Williams, Dave Chinner, "Darrick J. Wong",
    Christoph Hellwig, linux-nvdimm@lists.01.org, Jeff Moyer,
    linux-ext4@vger.kernel.org, Lukas Czerner
Cc: Ross Zwisler
Subject: [PATCH v4 1/2] dax: dax_layout_busy_page() warn on !exceptional
Date: Tue, 10 Jul 2018 13:10:30 -0600
Message-Id: <20180710191031.17919-2-ross.zwisler@linux.intel.com>
In-Reply-To: <20180710191031.17919-1-ross.zwisler@linux.intel.com>
References: <20180710191031.17919-1-ross.zwisler@linux.intel.com>
X-Mailing-List: linux-ext4@vger.kernel.org

Inodes using DAX should only ever have exceptional entries in their page
caches.  Make this clear by warning if the iteration in
dax_layout_busy_page() ever sees a non-exceptional entry, and by adding a
comment for the pagevec_release() call, which only deals with struct page
pointers.

Signed-off-by: Ross Zwisler
Reviewed-by: Jan Kara
---
 fs/dax.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 641192808bb6..897b51e41d8f 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -566,7 +566,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 			if (index >= end)
 				break;
 
-			if (!radix_tree_exceptional_entry(pvec_ent))
+			if (WARN_ON_ONCE(
+			     !radix_tree_exceptional_entry(pvec_ent)))
 				continue;
 
 			xa_lock_irq(&mapping->i_pages);
@@ -578,6 +579,13 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 			if (page)
 				break;
 		}
+
+		/*
+		 * We don't expect normal struct page entries to exist in our
+		 * tree, but we keep these pagevec calls so that this code is
+		 * consistent with the common pattern for handling pagevecs
+		 * throughout the kernel.
+		 */
 		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
 		index++;
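A background note on the check above (not part of the patch itself): in the
pre-XArray radix tree, DAX stores its mapping state as "exceptional" value
entries rather than struct page pointers, and the two are distinguished by a
tag in the entry's low bits, which a properly aligned page pointer can never
have set. A minimal userspace sketch of that encoding follows; the constant
mirrors the RADIX_TREE_EXCEPTIONAL_ENTRY convention of this kernel
generation, and the sample values are made up purely for illustration.

#include <stdio.h>

/* Low-bit tag that marks a radix-tree value ("exceptional") entry. */
#define EXCEPTIONAL_ENTRY_TAG	2UL

/* Mirrors radix_tree_exceptional_entry(): non-zero for value entries. */
static int is_exceptional_entry(void *entry)
{
	return (unsigned long)entry & EXCEPTIONAL_ENTRY_TAG;
}

int main(void)
{
	/* struct page pointers are word aligned, so their low bits are clear. */
	void *page_like = (void *)0x100040UL;
	/* DAX entries carry a shifted PFN plus flag bits, with the tag set. */
	void *dax_like  = (void *)((0x1234UL << 3) | EXCEPTIONAL_ENTRY_TAG);

	printf("page-like entry exceptional? %d\n", !!is_exceptional_entry(page_like));
	printf("dax-like entry exceptional?  %d\n", !!is_exceptional_entry(dax_like));
	return 0;
}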
From patchwork Tue Jul 10 19:10:31 2018
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 942161
From: Ross Zwisler
To: Jan Kara, Dan Williams, Dave Chinner, "Darrick J. Wong",
    Christoph Hellwig, linux-nvdimm@lists.01.org, Jeff Moyer,
    linux-ext4@vger.kernel.org, Lukas Czerner
Cc: Ross Zwisler
Subject: [PATCH v4 2/2] ext4: handle layout changes to pinned DAX mappings
Date: Tue, 10 Jul 2018 13:10:31 -0600
Message-Id: <20180710191031.17919-3-ross.zwisler@linux.intel.com>
In-Reply-To: <20180710191031.17919-1-ross.zwisler@linux.intel.com>
References: <20180710191031.17919-1-ross.zwisler@linux.intel.com>
X-Mailing-List: linux-ext4@vger.kernel.org

Follow the lead of xfs_break_dax_layouts() and add synchronization between
operations in ext4 which remove blocks from an inode (hole punch, truncate
down, etc.) and pages which are pinned due to DAX DMA operations.
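Each block-removing path follows the same ordering after this patch: take
i_mmap_sem for write to stop new page faults, wait for any DMA-pinned DAX
pages via ext4_break_layouts(), and only then unmap and free blocks. A
simplified sketch of that ordering (error handling trimmed; the function
name and the offset/len handling here are illustrative, not part of the
patch):

/* Illustrative only: the lock / break-layouts / remove-blocks ordering. */
static int ext4_remove_blocks_sketch(struct inode *inode, loff_t offset,
				     loff_t len)
{
	int ret;

	/* Stop page faults from re-instantiating pages while we work. */
	down_write(&EXT4_I(inode)->i_mmap_sem);

	/* Sleep until no DAX page of this inode is pinned for DMA. */
	ret = ext4_break_layouts(inode);
	if (ret) {
		up_write(&EXT4_I(inode)->i_mmap_sem);
		return ret;
	}

	/* Now it is safe to drop the page cache range and free the blocks. */
	truncate_pagecache_range(inode, offset, offset + len - 1);
	/* ... release the underlying blocks ... */

	up_write(&EXT4_I(inode)->i_mmap_sem);
	return 0;
}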
Signed-off-by: Ross Zwisler
Reviewed-by: Jan Kara
Reviewed-by: Lukas Czerner
---
 fs/ext4/ext4.h     |  1 +
 fs/ext4/extents.c  | 17 +++++++++++++++++
 fs/ext4/inode.c    | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 fs/ext4/truncate.h |  4 ++++
 4 files changed, 68 insertions(+)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 0b127853c584..34bccd64d83d 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2460,6 +2460,7 @@ extern int ext4_get_inode_loc(struct inode *, struct ext4_iloc *);
 extern int ext4_inode_attach_jinode(struct inode *inode);
 extern int ext4_can_truncate(struct inode *inode);
 extern int ext4_truncate(struct inode *);
+extern int ext4_break_layouts(struct inode *);
 extern int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length);
 extern int ext4_truncate_restart_trans(handle_t *, struct inode *, int nblocks);
 extern void ext4_set_inode_flags(struct inode *);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 0057fe3f248d..b8161e6b88d1 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4820,6 +4820,13 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 		 * released from page cache.
 		 */
 		down_write(&EXT4_I(inode)->i_mmap_sem);
+
+		ret = ext4_break_layouts(inode);
+		if (ret) {
+			up_write(&EXT4_I(inode)->i_mmap_sem);
+			goto out_mutex;
+		}
+
 		ret = ext4_update_disksize_before_punch(inode, offset, len);
 		if (ret) {
 			up_write(&EXT4_I(inode)->i_mmap_sem);
@@ -5493,6 +5500,11 @@ int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	 * page cache.
 	 */
 	down_write(&EXT4_I(inode)->i_mmap_sem);
+
+	ret = ext4_break_layouts(inode);
+	if (ret)
+		goto out_mmap;
+
 	/*
 	 * Need to round down offset to be aligned with page size boundary
 	 * for page size > block size.
@@ -5641,6 +5653,11 @@ int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
 	 * page cache.
 	 */
 	down_write(&EXT4_I(inode)->i_mmap_sem);
+
+	ret = ext4_break_layouts(inode);
+	if (ret)
+		goto out_mmap;
+
 	/*
 	 * Need to round down to align start offset to page size boundary
 	 * for page size > block size.
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 2ea07efbe016..fadb8ecacb1e 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4193,6 +4193,39 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
 	return 0;
 }
 
+static void ext4_wait_dax_page(struct ext4_inode_info *ei, bool *did_unlock)
+{
+	*did_unlock = true;
+	up_write(&ei->i_mmap_sem);
+	schedule();
+	down_write(&ei->i_mmap_sem);
+}
+
+int ext4_break_layouts(struct inode *inode)
+{
+	struct ext4_inode_info *ei = EXT4_I(inode);
+	struct page *page;
+	bool retry;
+	int error;
+
+	if (WARN_ON_ONCE(!rwsem_is_locked(&ei->i_mmap_sem)))
+		return -EINVAL;
+
+	do {
+		retry = false;
+		page = dax_layout_busy_page(inode->i_mapping);
+		if (!page)
+			return 0;
+
+		error = ___wait_var_event(&page->_refcount,
+				atomic_read(&page->_refcount) == 1,
+				TASK_INTERRUPTIBLE, 0, 0,
+				ext4_wait_dax_page(ei, &retry));
+	} while (error == 0 && retry);
+
+	return error;
+}
+
 /*
  * ext4_punch_hole: punches a hole in a file by releasing the blocks
  * associated with the given offset and length
@@ -4266,6 +4299,11 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
 	 * page cache.
 	 */
 	down_write(&EXT4_I(inode)->i_mmap_sem);
+
+	ret = ext4_break_layouts(inode);
+	if (ret)
+		goto out_dio;
+
 	first_block_offset = round_up(offset, sb->s_blocksize);
 	last_block_offset = round_down((offset + length), sb->s_blocksize) - 1;
@@ -5554,6 +5592,14 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
 				ext4_wait_for_tail_page_commit(inode);
 		}
 		down_write(&EXT4_I(inode)->i_mmap_sem);
+
+		rc = ext4_break_layouts(inode);
+		if (rc) {
+			up_write(&EXT4_I(inode)->i_mmap_sem);
+			error = rc;
+			goto err_out;
+		}
+
 		/*
 		 * Truncate pagecache after we've waited for commit
 		 * in data=journal mode to make pages freeable.
diff --git a/fs/ext4/truncate.h b/fs/ext4/truncate.h
index 0cb13badf473..bcbe3668c1d4 100644
--- a/fs/ext4/truncate.h
+++ b/fs/ext4/truncate.h
@@ -11,6 +11,10 @@
  */
 static inline void ext4_truncate_failed_write(struct inode *inode)
 {
+	/*
+	 * We don't need to call ext4_break_layouts() because the blocks we
+	 * are truncating were never visible to userspace.
+	 */
 	down_write(&EXT4_I(inode)->i_mmap_sem);
 	truncate_inode_pages(inode->i_mapping, inode->i_size);
 	ext4_truncate(inode);
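A closing note on the wait in ext4_break_layouts() above: ___wait_var_event()
is used rather than plain wait_var_event() so that the sleep step can be the
custom ext4_wait_dax_page() helper, which drops i_mmap_sem around schedule()
(so the holder of the DMA pin can make progress) and records that the layout
must be re-checked once the lock is retaken. The waiter is expected to be
woken by the put side via wake_up_var(&page->_refcount) when the pin goes
away. Roughly, the macro call behaves like the following sketch; this is
illustrative, not the literal macro expansion, and the function name is made
up:

/*
 * Illustrative expansion of the ___wait_var_event() call in
 * ext4_break_layouts(); the real macro also covers exclusive waiters and
 * other details omitted here.
 */
static int ext4_wait_page_idle_sketch(struct ext4_inode_info *ei,
				      struct page *page, bool *retry)
{
	struct wait_queue_head *wq = __var_waitqueue(&page->_refcount);
	struct wait_bit_queue_entry entry;
	int error = 0;

	init_wait_var_entry(&entry, &page->_refcount, 0);
	for (;;) {
		long intr = prepare_to_wait_event(wq, &entry.wq_entry,
						  TASK_INTERRUPTIBLE);

		/* Only the page-cache reference left: the DMA pin is gone. */
		if (atomic_read(&page->_refcount) == 1)
			break;
		/* A signal arrived while waiting interruptibly. */
		if (intr) {
			error = intr;
			break;
		}
		/* Drop i_mmap_sem, sleep until woken, retake, mark for retry. */
		ext4_wait_dax_page(ei, retry);
	}
	finish_wait(wq, &entry.wq_entry);
	return error;
}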