Message ID | 1466073736-30447-2-git-send-email-jack@suse.cz
---|---
State | New, archived
On Thu, Jun 16, 2016 at 12:42:13PM +0200, Jan Kara wrote:
> Commit 06bd3c36a733 (ext4: fix data exposure after a crash) uncovered a
> deadlock in ext4_writepages() which was previously much harder to hit.
> After this commit xfstest generic/130 reproduces the deadlock on small
> filesystems.
>
> The problem happens when ext4_do_update_inode() sets the LARGE_FILE
> feature and marks the current inode handle as synchronous. That
> subsequently causes ext4_journal_stop(), called from ext4_writepages(),
> to block waiting for the transaction commit while still holding page
> locks, a reference to the io_end, and some prepared bio in the mpd
> structure, each of which can block the transaction commit from
> completing and thus results in a deadlock.

Would it be safe to submit the bio *after* calling
ext4_journal_stop()?  It looks like that would be safe, and I'd prefer
to minimize the time that a handle is open, since that can really
impact performance when trying to close all existing handles as we
are starting commit processing.  It looks to me like this would be
safe in terms of avoiding deadlocks, yes?

					- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Thu 30-06-16 11:05:48, Ted Tso wrote:
> On Thu, Jun 16, 2016 at 12:42:13PM +0200, Jan Kara wrote:
> > Commit 06bd3c36a733 (ext4: fix data exposure after a crash) uncovered a
> > deadlock in ext4_writepages() which was previously much harder to hit.
> > After this commit xfstest generic/130 reproduces the deadlock on small
> > filesystems.
> >
> > The problem happens when ext4_do_update_inode() sets the LARGE_FILE
> > feature and marks the current inode handle as synchronous. That
> > subsequently causes ext4_journal_stop(), called from ext4_writepages(),
> > to block waiting for the transaction commit while still holding page
> > locks, a reference to the io_end, and some prepared bio in the mpd
> > structure, each of which can block the transaction commit from
> > completing and thus results in a deadlock.
>
> Would it be safe to submit the bio *after* calling
> ext4_journal_stop()?  It looks like that would be safe, and I'd prefer
> to minimize the time that a handle is open, since that can really
> impact performance when trying to close all existing handles as we
> are starting commit processing.  It looks to me like this would be
> safe in terms of avoiding deadlocks, yes?

But it is not safe - the bio contains pages, those pages have
PageWriteback set, and if the inode is part of the running transaction,
ext4_journal_stop() will wait for the transaction commit, which will
wait for all outstanding writeback on the inode, which will deadlock
on those pages that are part of our unsubmitted bio. So the ordering
really has to be the way it is...

								Honza
On Fri, Jul 01, 2016 at 11:09:50AM +0200, Jan Kara wrote:
> But it is not safe - the bio contains pages, those pages have
> PageWriteback set, and if the inode is part of the running transaction,
> ext4_journal_stop() will wait for the transaction commit, which will
> wait for all outstanding writeback on the inode, which will deadlock
> on those pages that are part of our unsubmitted bio. So the ordering
> really has to be the way it is...

So to be clear, the issue is that PageWriteback won't get cleared
until we potentially do an uninit->init conversion, and this is what
requires taking a transaction handle, leading to the other half of the
deadlock?

... and it's probably not safe to clear PageWriteback early in the bio
completion callback function, is it?  Hmm....

					- Ted
On Fri 01-07-16 12:53:39, Ted Tso wrote:
> On Fri, Jul 01, 2016 at 11:09:50AM +0200, Jan Kara wrote:
> > But it is not safe - the bio contains pages, those pages have
> > PageWriteback set, and if the inode is part of the running transaction,
> > ext4_journal_stop() will wait for the transaction commit, which will
> > wait for all outstanding writeback on the inode, which will deadlock
> > on those pages that are part of our unsubmitted bio. So the ordering
> > really has to be the way it is...
>
> So to be clear, the issue is that PageWriteback won't get cleared
> until we potentially do an uninit->init conversion, and this is what
> requires taking a transaction handle, leading to the other half of the
> deadlock?

No. It is even simpler:

ext4_writepages(inode == "foobar")
  prepares pages to write, sets PageWriteback
  ...
  mpage_map_and_submit_extent()
    // Writing data past i_size
    if (disksize > EXT4_I(inode)->i_disksize) {
      ...
      err2 = ext4_mark_inode_dirty(handle, inode);
        ext4_mark_iloc_dirty(handle, inode, &iloc);
          ext4_do_update_inode(handle, inode, iloc);
            // First file beyond 2 GB
            if (ei->i_disksize > 0x7fffffffULL) {
              if (!ext4_has_feature_large_file(sb) || ...)
                set_large_file = 1;
            }
            ...
            if (set_large_file) {
              ...
              ext4_handle_sync(handle);
              ...
            }
  ext4_journal_stop()
    jbd2_journal_stop(handle);
      ...
      if (handle->h_sync || ... ) {
        if (handle->h_sync && !(current->flags & PF_MEMALLOC))
          wait_for_commit = 1;
        if (wait_for_commit)
          err = jbd2_log_wait_commit(journal, tid);

So we are waiting for the transaction commit to finish with unsubmitted
pages that already have PageWriteback set (and also potentially other
pages that are locked and that we didn't prepare for writing because the
block mapping we got was too short). Now JBD2 goes on trying to do the
transaction commit:

jbd2_journal_commit_transaction()
  ...
  journal_finish_inode_data_buffers()
    list_for_each_entry(jinode, &commit_transaction->t_inode_list, i_list) {
      ...
      err = filemap_fdatawait(jinode->i_vfs_inode->i_mapping);
      // And when inode "foobar" is part of this transaction's inode
      // list, this call is going to wait for the PageWriteback bits on
      // all the pages of the inode to get cleared - which never happens
      // because the IO was not even submitted for them. The bio is just
      // sitting prepared in mpd.io_submit in ext4_writepages() and
      // would be submitted only once ext4_journal_stop() completes.

Hope it is clearer now.

								Honza
On Fri, Jul 01, 2016 at 07:40:41PM +0200, Jan Kara wrote:
>
> So we are waiting for the transaction commit to finish with unsubmitted
> pages that already have PageWriteback set (and also potentially other
> pages that are locked and that we didn't prepare for writing because
> the block mapping we got was too short). Now JBD2 goes on trying to do
> the transaction commit:

Ah, I see, so this is only an issue in those cases where the handle is
synchronous.  Is this the only case where there is a concern?  (e.g.,
could we test handle->h_sync and stop the handle early if h_sync is not
set?)  This would put the uninit->init conversion into potentially a
separate transaction, but that should be OK.

The reason why I'm pushing so hard here is that long-running handles
are a major contributor to ext4 having poor CPU scalability numbers,
since we can end up having lots of threads waiting on the last
transaction to complete.  So keeping transactions small and fast is a
big deal.

					- Ted
On Fri 01-07-16 17:26:34, Ted Tso wrote:
> On Fri, Jul 01, 2016 at 07:40:41PM +0200, Jan Kara wrote:
> >
> > So we are waiting for the transaction commit to finish with
> > unsubmitted pages that already have PageWriteback set (and also
> > potentially other pages that are locked and that we didn't prepare
> > for writing because the block mapping we got was too short). Now
> > JBD2 goes on trying to do the transaction commit:
>
> Ah, I see, so this is only an issue in those cases where the handle is
> synchronous.  Is this the only case where there is a concern?  (e.g.,
> could we test handle->h_sync and stop the handle early if h_sync is
> not set?)  This would put the uninit->init conversion into potentially
> a separate transaction, but that should be OK.

So checking handle->h_sync is possible and it would handle the problem
as well, AFAICS. However, I find it rather hacky to rely on the fact
that ext4_journal_stop() can block only when handle->h_sync is set.
With the uninit->init conversion changes you likely mean that
ext4_put_io_end_defer() is run while the handle is still running - that
is true, but any real work is done from a workqueue, so my patch doesn't
really change in which transaction the uninit->init conversion happens.

> The reason why I'm pushing so hard here is that long-running handles
> are a major contributor to ext4 having poor CPU scalability numbers,
> since we can end up having lots of threads waiting on the last
> transaction to complete.  So keeping transactions small and fast is a
> big deal.

OK, but we do all the block mapping, page locking, etc. while the handle
is started, so it is not exactly a really short-lived handle. The patch
adds there the submission of a bio (we have the IO plugged, so it will
just add the bio to the list of submitted bios), unlocking of locked
pages, and dropping of a refcount on the ioend (unless the IO is already
completed, only a refcount update is done; if the IO is completed, we
defer any real work to a workqueue anyway).

So although we add some work which is done while the handle is still
running, it is not that much. If you have some tests which show how the
transaction wait time increased, I could be convinced the hack is worth
it. But so far I don't think that messing with handle->h_sync is
warranted.

								Honza
On Mon, Jul 04, 2016 at 04:00:12PM +0200, Jan Kara wrote:
> OK, but we do all the block mapping, page locking, etc. while the
> handle is started, so it is not exactly a really short-lived handle.
> The patch adds there the submission of a bio (we have the IO plugged,
> so it will just add the bio to the list of submitted bios), unlocking
> of locked pages, and dropping of a refcount on the ioend (unless the
> IO is already completed, only a refcount update is done; if the IO is
> completed, we defer any real work to a workqueue anyway). So although
> we add some work which is done while the handle is still running, it
> is not that much.

Good point that the block device is plugged.  Ultimately I suspect the
way to fix the scalability problem will be to move to dioread_nolock as
the default, use a separate transaction to map the blocks using the
uninitialized flag, and then do a separate transaction to convert them
afterwards.

					- Ted
On Mon 04-07-16 11:20:43, Ted Tso wrote:
> On Mon, Jul 04, 2016 at 04:00:12PM +0200, Jan Kara wrote:
> > OK, but we do all the block mapping, page locking, etc. while the
> > handle is started, so it is not exactly a really short-lived handle.
> > The patch adds there the submission of a bio (we have the IO plugged,
> > so it will just add the bio to the list of submitted bios), unlocking
> > of locked pages, and dropping of a refcount on the ioend (unless the
> > IO is already completed, only a refcount update is done; if the IO is
> > completed, we defer any real work to a workqueue anyway). So although
> > we add some work which is done while the handle is still running, it
> > is not that much.
>
> Good point that the block device is plugged.  Ultimately I suspect the
> way to fix the scalability problem will be to move to dioread_nolock
> as the default, use a separate transaction to map the blocks using the
> uninitialized flag, and then do a separate transaction to convert them
> afterwards.

This is what already happens currently - we only reserve a handle for
the conversion during writeback, but that reservation fluently moves
between running transactions until the point where the reserved handle
is started - then the handle is pinned to the currently running
transaction - and this happens only in the completion handler after the
IO is completed.

								Honza
On Mon, Jul 04, 2016 at 05:47:09PM +0200, Jan Kara wrote:
> > Good point that the block device is plugged.  Ultimately I suspect
> > the way to fix the scalability problem will be to move to
> > dioread_nolock as the default, use a separate transaction to map the
> > blocks using the uninitialized flag, and then do a separate
> > transaction to convert them afterwards.
>
> This is what already happens currently - we only reserve a handle for
> the conversion during writeback, but that reservation fluently moves
> between running transactions until the point where the reserved handle
> is started - then the handle is pinned to the currently running
> transaction - and this happens only in the completion handler after
> the IO is completed.

Yes, but dioread_nolock only works with block size == page size.  What
we need to do is to make it work for all block sizes, then make
dioread_nolock the default, and then remove the old direct I/O write
path....

					- Ted
On Mon 04-07-16 22:43:30, Ted Tso wrote:
> On Mon, Jul 04, 2016 at 05:47:09PM +0200, Jan Kara wrote:
> > > Good point that the block device is plugged.  Ultimately I suspect
> > > the way to fix the scalability problem will be to move to
> > > dioread_nolock as the default, use a separate transaction to map
> > > the blocks using the uninitialized flag, and then do a separate
> > > transaction to convert them afterwards.
> >
> > This is what already happens currently - we only reserve a handle
> > for the conversion during writeback, but that reservation fluently
> > moves between running transactions until the point where the
> > reserved handle is started - then the handle is pinned to the
> > currently running transaction - and this happens only in the
> > completion handler after the IO is completed.
>
> Yes, but dioread_nolock only works with block size == page size.  What
> we need to do is to make it work for all block sizes, then make
> dioread_nolock the default, and then remove the old direct I/O write
> path....

Yes, I completely agree.

								Honza
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index f7140ca66e3b..ba04d57656d4 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2748,13 +2748,27 @@ retry:
 				done = true;
 			}
 		}
-		ext4_journal_stop(handle);
 		/* Submit prepared bio */
 		ext4_io_submit(&mpd.io_submit);
 		/* Unlock pages we didn't use */
 		mpage_release_unused_pages(&mpd, give_up_on_write);
-		/* Drop our io_end reference we got from init */
-		ext4_put_io_end(mpd.io_submit.io_end);
+		/*
+		 * Drop our io_end reference we got from init. We have to be
+		 * careful and use deferred io_end finishing as we can release
+		 * the last reference to io_end which may end up doing unwritten
+		 * extent conversion which we cannot do while holding
+		 * transaction handle.
+		 */
+		ext4_put_io_end_defer(mpd.io_submit.io_end);
+		/*
+		 * Caution: ext4_journal_stop() can wait for transaction commit
+		 * to finish which may depend on writeback of pages to complete
+		 * or on page lock to be released. So we can call it only
+		 * after we have submitted all the IO, released page locks
+		 * we hold, and dropped io_end reference (for extent conversion
+		 * to be able to complete).
+		 */
+		ext4_journal_stop(handle);

		if (ret == -ENOSPC && sbi->s_journal) {
			/*
Commit 06bd3c36a733 (ext4: fix data exposure after a crash) uncovered a
deadlock in ext4_writepages() which was previously much harder to hit.
After this commit xfstest generic/130 reproduces the deadlock on small
filesystems.

The problem happens when ext4_do_update_inode() sets the LARGE_FILE
feature and marks the current inode handle as synchronous. That
subsequently causes ext4_journal_stop(), called from ext4_writepages(),
to block waiting for the transaction commit while still holding page
locks, a reference to the io_end, and some prepared bio in the mpd
structure, each of which can block the transaction commit from
completing and thus results in a deadlock.

Fix the problem by releasing the page locks, dropping the io_end
reference, and submitting the prepared bio before calling
ext4_journal_stop().

Reported-and-tested-by: Eryu Guan <eguan@redhat.com>
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/ext4/inode.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)