Patchwork ext4: Fix performance regression in writeback of random writes

Submitter Jan Kara
Date Sept. 10, 2013, 7:40 p.m.
Message ID <1378842006-15237-1-git-send-email-jack@suse.cz>
Permalink /patch/274028/
State Accepted

Comments

Jan Kara - Sept. 10, 2013, 7:40 p.m.
Linux Kernel Performance project guys have reported that commit 4e7ea81db5
introduces a performance regression for the following fio workload:
[global]
direct=0
ioengine=mmap
size=1500M
bs=4k
pre_read=1
numjobs=1
overwrite=1
loops=5
runtime=300
group_reporting
invalidate=0
directory=/mnt/
file_service_type=random:36
file_service_type=random:36

[job0]
startdelay=0
rw=randrw
filename=data0/f1:data0/f2

[job1]
startdelay=0
rw=randrw
filename=data0/f2:data0/f1
...

[job7]
startdelay=0
rw=randrw
filename=data0/f2:data0/f1

The culprit of the problem is that after the commit ext4_writepages()
is more aggressive in writing back pages. Thus we have fewer consecutive
dirty pages, resulting in more seeking.

This increased aggressiveness is caused by a bug in the condition
terminating ext4_writepages(): we start writing from the beginning of
the file even when we should have terminated ext4_writepages() because
wbc->nr_to_write <= 0.

After fixing the condition, the throughput of the fio workload is about 20%
better than before the writeback reorganization.

Reported-by: "Yan, Zheng" <zheng.z.yan@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/ext4/inode.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
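
For reference, here is a condensed sketch of the retry logic that the one-line change sits in, as ext4_writepages() looked around this patch (simplified excerpt, not compilable on its own; the page collection, extent mapping and error handling inside the loop are omitted):

retry:
	/* ... collect and write back dirty pages in [mpd.first_page, mpd.last_page] ... */
	blk_finish_plug(&plug);
	if (!ret && !cycled && wbc->nr_to_write > 0) {
		/*
		 * Cyclic writeback hit the end of the file before the
		 * nr_to_write quota was used up, so wrap around once and
		 * scan the head of the file.  Without the nr_to_write
		 * check we would wrap around even after the quota was
		 * exhausted, writing back far more scattered pages than
		 * the flusher asked for.
		 */
		cycled = 1;
		mpd.last_page = writeback_index - 1;
		mpd.first_page = 0;
		goto retry;
	}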
Bernd Schubert - Sept. 11, 2013, 9:45 a.m.
On 09/10/2013 09:40 PM, Jan Kara wrote:
> Linux Kernel Performance project guys have reported that commit 4e7ea81db5
> introduces a performance regression for the following fio workload:
> [global]
> direct=0
> ioengine=mmap
> size=1500M
> bs=4k
> pre_read=1
> numjobs=1
> overwrite=1
> loops=5
> runtime=300
> group_reporting
> invalidate=0
> directory=/mnt/
> file_service_type=random:36
> file_service_type=random:36
>
> [job0]
> startdelay=0
> rw=randrw
> filename=data0/f1:data0/f2
>
> [job1]
> startdelay=0
> rw=randrw
> filename=data0/f2:data0/f1
> ...
>
> [job7]
> startdelay=0
> rw=randrw
> filename=data0/f2:data0/f1
>
> The culprit of the problem is that after the commit ext4_writepages()
> is more aggressive in writing back pages. Thus we have fewer consecutive
> dirty pages, resulting in more seeking.
>
> This increased aggressiveness is caused by a bug in the condition
> terminating ext4_writepages(): we start writing from the beginning of
> the file even when we should have terminated ext4_writepages() because
> wbc->nr_to_write <= 0.
>
> After fixing the condition, the throughput of the fio workload is about 20%
> better than before the writeback reorganization.
>
> Reported-by: "Yan, Zheng" <zheng.z.yan@intel.com>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>   fs/ext4/inode.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index c79fd7d..7914c05 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -2563,7 +2563,7 @@ retry:
>   			break;
>   	}
>   	blk_finish_plug(&plug);
> -	if (!ret && !cycled) {
> +	if (!ret && !cycled && wbc->nr_to_write > 0) {
>   		cycled = 1;
>   		mpd.last_page = writeback_index - 1;
>   		mpd.first_page = 0;
>

Interesting, doesn't that mean generic_writepages() (and the subsequent
write_cache_pages()) and all other file systems implementing their own
->writepages() should be updated?



Thanks,
Bernd

Jan Kara - Sept. 11, 2013, 10:11 a.m.
On Wed 11-09-13 11:45:03, Bernd Schubert wrote:
> On 09/10/2013 09:40 PM, Jan Kara wrote:
> >Linux Kernel Performance project guys have reported that commit 4e7ea81db5
> >introduces a performance regression for the following fio workload:
> >[global]
> >direct=0
> >ioengine=mmap
> >size=1500M
> >bs=4k
> >pre_read=1
> >numjobs=1
> >overwrite=1
> >loops=5
> >runtime=300
> >group_reporting
> >invalidate=0
> >directory=/mnt/
> >file_service_type=random:36
> >file_service_type=random:36
> >
> >[job0]
> >startdelay=0
> >rw=randrw
> >filename=data0/f1:data0/f2
> >
> >[job1]
> >startdelay=0
> >rw=randrw
> >filename=data0/f2:data0/f1
> >...
> >
> >[job7]
> >startdelay=0
> >rw=randrw
> >filename=data0/f2:data0/f1
> >
> >The culprit of the problem is that after the commit ext4_writepages()
> >is more aggressive in writing back pages. Thus we have fewer consecutive
> >dirty pages, resulting in more seeking.
> >
> >This increased aggressiveness is caused by a bug in the condition
> >terminating ext4_writepages(): we start writing from the beginning of
> >the file even when we should have terminated ext4_writepages() because
> >wbc->nr_to_write <= 0.
> >
> >After fixing the condition, the throughput of the fio workload is about 20%
> >better than before the writeback reorganization.
> >
> >Reported-by: "Yan, Zheng" <zheng.z.yan@intel.com>
> >Signed-off-by: Jan Kara <jack@suse.cz>
> >---
> >  fs/ext4/inode.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> >diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> >index c79fd7d..7914c05 100644
> >--- a/fs/ext4/inode.c
> >+++ b/fs/ext4/inode.c
> >@@ -2563,7 +2563,7 @@ retry:
> >  			break;
> >  	}
> >  	blk_finish_plug(&plug);
> >-	if (!ret && !cycled) {
> >+	if (!ret && !cycled && wbc->nr_to_write > 0) {
> >  		cycled = 1;
> >  		mpd.last_page = writeback_index - 1;
> >  		mpd.first_page = 0;
> >
> 
> Interesting, doesn't that mean generic_writepages() (and the subsequent
> write_cache_pages()) and all other file systems implementing their
> own ->writepages() should be updated?
  No. write_cache_pages() has a condition like:
if (!cycled && !done) {

  and 'done' is set when wbc->nr_to_write drops to zero, so that function
is OK. We cannot use 'done' in ext4_writepages() because the functions are
structured a bit differently and 'done' also gets set when we reach the end
of the file.
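
For comparison, a rough sketch of how write_cache_pages() in mm/page-writeback.c handles this (heavily condensed from the code of that era; batching via pagevec_lookup_tag() and the per-page locking are elided): 'done' is set as soon as the nr_to_write quota is exhausted, so its wrap-around test needs no extra nr_to_write check:

	while (!done && index <= end) {
		/* look up a batch of dirty pages and, for each page, */
		/* lock it and call ->writepage(); then per page: */
		if (--wbc->nr_to_write <= 0 &&
		    wbc->sync_mode == WB_SYNC_NONE) {
			done = 1;	/* writeback quota used up */
			break;
		}
	}
	if (!cycled && !done) {
		/* range_cyclic: wrap back to the start of the file once */
		cycled = 1;
		index = 0;
		end = writeback_index - 1;
		goto retry;
	}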

								Honza
Bernd Schubert - Sept. 11, 2013, 11:13 a.m.
On 09/11/2013 12:11 PM, Jan Kara wrote:
> On Wed 11-09-13 11:45:03, Bernd Schubert wrote:
>> On 09/10/2013 09:40 PM, Jan Kara wrote:
>>> Linux Kernel Performance project guys have reported that commit 4e7ea81db5
>>> introduces a performance regression for the following fio workload:
>>> [global]
>>> direct=0
>>> ioengine=mmap
>>> size=1500M
>>> bs=4k
>>> pre_read=1
>>> numjobs=1
>>> overwrite=1
>>> loops=5
>>> runtime=300
>>> group_reporting
>>> invalidate=0
>>> directory=/mnt/
>>> file_service_type=random:36
>>> file_service_type=random:36
>>>
>>> [job0]
>>> startdelay=0
>>> rw=randrw
>>> filename=data0/f1:data0/f2
>>>
>>> [job1]
>>> startdelay=0
>>> rw=randrw
>>> filename=data0/f2:data0/f1
>>> ...
>>>
>>> [job7]
>>> startdelay=0
>>> rw=randrw
>>> filename=data0/f2:data0/f1
>>>
>>> The culprit of the problem is that after the commit ext4_writepages()
>>> is more aggressive in writing back pages. Thus we have fewer consecutive
>>> dirty pages, resulting in more seeking.
>>>
>>> This increased aggressiveness is caused by a bug in the condition
>>> terminating ext4_writepages(): we start writing from the beginning of
>>> the file even when we should have terminated ext4_writepages() because
>>> wbc->nr_to_write <= 0.
>>>
>>> After fixing the condition, the throughput of the fio workload is about 20%
>>> better than before the writeback reorganization.
>>>
>>> Reported-by: "Yan, Zheng" <zheng.z.yan@intel.com>
>>> Signed-off-by: Jan Kara <jack@suse.cz>
>>> ---
>>>   fs/ext4/inode.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
>>> index c79fd7d..7914c05 100644
>>> --- a/fs/ext4/inode.c
>>> +++ b/fs/ext4/inode.c
>>> @@ -2563,7 +2563,7 @@ retry:
>>>   			break;
>>>   	}
>>>   	blk_finish_plug(&plug);
>>> -	if (!ret && !cycled) {
>>> +	if (!ret && !cycled && wbc->nr_to_write > 0) {
>>>   		cycled = 1;
>>>   		mpd.last_page = writeback_index - 1;
>>>   		mpd.first_page = 0;
>>>
>>
>> Interesting, doesn't that mean generic_writepages() (and the subsequent
>> write_cache_pages()) and all other file systems implementing their
>> own ->writepages() should be updated?
>    No. write_cache_pages() has a condition like:
> if (!cycled && !done) {
>
>    and 'done' is set when wbc->nr_to_write drops to zero, so that function
> is OK. We cannot use 'done' in ext4_writepages() because the functions are
> structured a bit differently and 'done' also gets set when we reach the end
> of the file.

Ah right, I missed that. If pagevec_lookup_tag() returns 0 there is
still a way to avoid setting done = 1, but I guess wbc->nr_to_write also
wouldn't be zero then.
Btrfs' extent_write_cache_pages() is another candidate, and in combination
with the additional blk plug that ext4 and generic_writepages() are using,
it might explain why I noticed extensive btrfs-raid6-rmw writes some time
ago. I'm going to check that and discuss it further on that list.


Thanks,
Bernd

Theodore Ts'o - Sept. 16, 2013, 12:28 p.m.
On Tue, Sep 10, 2013 at 09:40:06PM +0200, Jan Kara wrote:
> Linux Kernel Performance project guys have reported that commit 4e7ea81db5
> introduces a performance regression for the following fio workload:...

Applied, many thanks to Yan Zheng for finding this performance
regression and to Jan for fixing it!

	   					- Ted

Patch

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index c79fd7d..7914c05 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2563,7 +2563,7 @@  retry:
 			break;
 	}
 	blk_finish_plug(&plug);
-	if (!ret && !cycled) {
+	if (!ret && !cycled && wbc->nr_to_write > 0) {
 		cycled = 1;
 		mpd.last_page = writeback_index - 1;
 		mpd.first_page = 0;