[PATCH -v2] ext4: avoid unnecessary transaction stalls during writeback

Message ID 20170430223053.20025-1-tytso@mit.edu
State Awaiting Upstream, archived

Commit Message

Theodore Ts'o April 30, 2017, 10:30 p.m. UTC
From: Jan Kara <jack@suse.cz>

Currently ext4_writepages() submits all pages with a transaction handle
started. When no page needs block allocation or extent conversion, we can
end up submitting all of the inode's dirty pages while holding a single
transaction handle, and when the device is congested this can take a
significant amount of time. Thus ext4_writepages() can block transaction
commits for extended periods of time.
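
To make the shape of the problem concrete, here is a simplified sketch
(illustrative only; the helper names below are made up and this is not
the actual ext4 code path):

	handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, needed_blocks);
	while ((page = next_dirty_page(inode)) != NULL) {
		map_blocks_if_needed(handle, page);	/* many pages need no mapping */
		submit_page_io(page);			/* can block on a congested device */
	}
	ext4_journal_stop(handle);	/* the journal commit had to wait until here */

The patch below splits this into a first pass that writes back already
mapped pages without any handle held (do_map == 0), followed by the
existing per-extent passes that start a handle only when blocks actually
need to be mapped.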

Take for example a simple benchmark simulating a PostgreSQL database
(pgioperf in mmtest). The benchmark runs 16 processes doing random reads
from a huge file, one process doing random writes to the huge file, and
one process doing sequential writes to a small file and frequently
running fsync. With an unpatched kernel, transaction commits take on
average ~18s with a standard deviation of ~41s; the top 5 commit times
are:

274.466639s, 126.467347s, 86.992429s, 34.351563s, 31.517653s.

After this patch, transaction commits take on average 0.1s with a
standard deviation of 0.15s; the top 5 commit times are:

0.563792s, 0.519980s, 0.509841s, 0.471700s, 0.469899s

[ Modified so that we use an explicit do_map flag instead of relying on
  io_end not being allocated, since io_end->inode is needed for I/O
  error handling. -- tytso ]

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/inode.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

Comments

Jan Kara May 2, 2017, 12:10 p.m. UTC | #1
On Sun 30-04-17 18:30:53, Ted Tso wrote:
> From: Jan Kara <jack@suse.cz>
> 
> Currently ext4_writepages() submits all pages with a transaction handle
> started. When no page needs block allocation or extent conversion, we can
> end up submitting all of the inode's dirty pages while holding a single
> transaction handle, and when the device is congested this can take a
> significant amount of time. Thus ext4_writepages() can block transaction
> commits for extended periods of time.
> 
> Take for example a simple benchmark simulating a PostgreSQL database
> (pgioperf in mmtest). The benchmark runs 16 processes doing random reads
> from a huge file, one process doing random writes to the huge file, and
> one process doing sequential writes to a small file and frequently
> running fsync. With an unpatched kernel, transaction commits take on
> average ~18s with a standard deviation of ~41s; the top 5 commit times
> are:
> 
> 274.466639s, 126.467347s, 86.992429s, 34.351563s, 31.517653s.
> 
> After this patch, transaction commits take on average 0.1s with a
> standard deviation of 0.15s; the top 5 commit times are:
> 
> 0.563792s, 0.519980s, 0.509841s, 0.471700s, 0.469899s
> 
> [ Modified so that we use an explicit do_map flag instead of relying on
>   io_end not being allocated, since io_end->inode is needed for I/O
>   error handling. -- tytso ]

Yeah, thanks for finding the problem and fixing it up! The updated patch
looks good. It is actually relatively easy to fix the original approach so
that we don't have to allocate ioend unnecessarily (we just fetch the inode
pointer on error through bio_vec->bv_page->mapping->host) but I'll do that
separately on top of this change.

								Honza
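
A minimal sketch of that idea (the helper name ext4_bio_to_inode() is
hypothetical and this is not the actual follow-up patch; it assumes the
bio only carries regular pagecache pages -- see the next reply for the
encryption caveat):

	static struct inode *ext4_bio_to_inode(struct bio *bio)
	{
		/* First page attached to the bio. */
		struct page *page = bio->bi_io_vec[0].bv_page;

		/* For buffered writeback the page's mapping points at the inode. */
		return page->mapping->host;
	}
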
Theodore Ts'o May 2, 2017, 2:18 p.m. UTC | #2
On Tue, May 02, 2017 at 02:10:44PM +0200, Jan Kara wrote:
> 
> Yeah, thanks for finding the problem and fixing it up! The updated patch
> looks good. It is actually relatively easy to fix the original approach so
> that we don't have to allocate ioend unnecessarily (we just fetch the inode
> pointer on error through bio_vec->bv_page->mapping->host) but I'll do that
> separately on top of this change.

We'll need to be a little bit careful to make sure this works for the
encryption case (where we use a temporary scratch/bounce buffer), but
it should be possible to make that work (we just have to look at the
right struct page).

					- Ted
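
That caveat matters because, for encrypted files, the page attached to the
bio is a bounce page whose ->mapping is NULL, so the sketch above would
oops. A hedged extension of it, assuming helpers along the lines of
fscrypt_is_bounce_page() and fscrypt_pagecache_page() from later kernels
(the fscrypt API at the time of this patch differed, so this only shows
the shape of the check):

	static struct inode *ext4_bio_to_inode(struct bio *bio)
	{
		struct page *page = bio->bi_io_vec[0].bv_page;

		/* Encrypted data I/O carries a bounce page; map it back first. */
		if (fscrypt_is_bounce_page(page))
			page = fscrypt_pagecache_page(page);
		return page->mapping->host;
	}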

Patch

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 178ac7c39403..c65e29953800 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1643,6 +1643,7 @@ struct mpage_da_data {
 	 */
 	struct ext4_map_blocks map;
 	struct ext4_io_submit io_submit;	/* IO submission data */
+	unsigned int do_map:1;
 };
 
 static void mpage_release_unused_pages(struct mpage_da_data *mpd,
@@ -2179,6 +2180,9 @@ static bool mpage_add_bh_to_extent(struct mpage_da_data *mpd, ext4_lblk_t lblk,
 
 	/* First block in the extent? */
 	if (map->m_len == 0) {
+		/* We cannot map unless handle is started... */
+		if (!mpd->do_map)
+			return false;
 		map->m_lblk = lblk;
 		map->m_len = 1;
 		map->m_flags = bh->b_state & BH_FLAGS;
@@ -2231,6 +2235,9 @@ static int mpage_process_page_bufs(struct mpage_da_data *mpd,
 			/* Found extent to map? */
 			if (mpd->map.m_len)
 				return 0;
+			/* Buffer needs mapping and handle is not started? */
+			if (!mpd->do_map)
+				return 0;
 			/* Everything mapped so far and we hit EOF */
 			break;
 		}
@@ -2747,6 +2754,29 @@ static int ext4_writepages(struct address_space *mapping,
 		tag_pages_for_writeback(mapping, mpd.first_page, mpd.last_page);
 	done = false;
 	blk_start_plug(&plug);
+
+	/*
+	 * First writeback pages that don't need mapping - we can avoid
+	 * starting a transaction unnecessarily and also avoid being blocked
+	 * in the block layer on device congestion while having transaction
+	 * started.
+	 */
+	mpd.do_map = 0;
+	mpd.io_submit.io_end = ext4_init_io_end(inode, GFP_KERNEL);
+	if (!mpd.io_submit.io_end) {
+		ret = -ENOMEM;
+		goto unplug;
+	}
+	ret = mpage_prepare_extent_to_map(&mpd);
+	/* Submit prepared bio */
+	ext4_io_submit(&mpd.io_submit);
+	ext4_put_io_end_defer(mpd.io_submit.io_end);
+	mpd.io_submit.io_end = NULL;
+	/* Unlock pages we didn't use */
+	mpage_release_unused_pages(&mpd, false);
+	if (ret < 0)
+		goto unplug;
+
 	while (!done && mpd.first_page <= mpd.last_page) {
 		/* For each extent of pages we use new io_end */
 		mpd.io_submit.io_end = ext4_init_io_end(inode, GFP_KERNEL);
@@ -2775,8 +2805,10 @@ static int ext4_writepages(struct address_space *mapping,
 				wbc->nr_to_write, inode->i_ino, ret);
 			/* Release allocated io_end */
 			ext4_put_io_end(mpd.io_submit.io_end);
+			mpd.io_submit.io_end = NULL;
 			break;
 		}
+		mpd.do_map = 1;
 
 		trace_ext4_da_write_pages(inode, mpd.first_page, mpd.wbc);
 		ret = mpage_prepare_extent_to_map(&mpd);
@@ -2807,6 +2839,7 @@ static int ext4_writepages(struct address_space *mapping,
 		if (!ext4_handle_valid(handle) || handle->h_sync == 0) {
 			ext4_journal_stop(handle);
 			handle = NULL;
+			mpd.do_map = 0;
 		}
 		/* Submit prepared bio */
 		ext4_io_submit(&mpd.io_submit);
@@ -2824,6 +2857,7 @@ static int ext4_writepages(struct address_space *mapping,
 			ext4_journal_stop(handle);
 		} else
 			ext4_put_io_end(mpd.io_submit.io_end);
+		mpd.io_submit.io_end = NULL;
 
 		if (ret == -ENOSPC && sbi->s_journal) {
 			/*
@@ -2839,6 +2873,7 @@ static int ext4_writepages(struct address_space *mapping,
 		if (ret)
 			break;
 	}
+unplug:
 	blk_finish_plug(&plug);
 	if (!ret && !cycled && wbc->nr_to_write > 0) {
 		cycled = 1;