
[RFC] Ext4: fix deadlock on dirty pages between fault and writeback

Message ID 1540858969-75803-1-git-send-email-bo.liu@linux.alibaba.com
State New, archived

Commit Message

Liu Bo Oct. 30, 2018, 12:22 a.m. UTC
mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
the pipe.  But if it needs to wait for a page lock, it needs to make sure
and send down any pending writes so we don't deadlock with anyone who has
the page lock and is waiting for writeback of things inside the bio.

The related lock stack is shown as follows,

task1:
[<ffffffff811aaa52>] wait_on_page_bit+0x82/0xa0
[<ffffffff811c5777>] shrink_page_list+0x907/0x960
[<ffffffff811c6027>] shrink_inactive_list+0x2c7/0x680
[<ffffffff811c6ba4>] shrink_node_memcg+0x404/0x830
[<ffffffff811c70a8>] shrink_node+0xd8/0x300
[<ffffffff811c73dd>] do_try_to_free_pages+0x10d/0x330
[<ffffffff811c7865>] try_to_free_mem_cgroup_pages+0xd5/0x1b0
[<ffffffff8122df2d>] try_charge+0x14d/0x720
[<ffffffff812320cc>] memcg_kmem_charge_memcg+0x3c/0xa0
[<ffffffff812321ae>] memcg_kmem_charge+0x7e/0xd0
[<ffffffff811b68a8>] __alloc_pages_nodemask+0x178/0x260
[<ffffffff8120bff5>] alloc_pages_current+0x95/0x140
[<ffffffff81074247>] pte_alloc_one+0x17/0x40
[<ffffffff811e34de>] __pte_alloc+0x1e/0x110
[<ffffffffa06739de>] alloc_set_pte+0x5fe/0xc20
[<ffffffff811e5d93>] do_fault+0x103/0x970
[<ffffffff811e6e5e>] handle_mm_fault+0x61e/0xd10
[<ffffffff8106ea02>] __do_page_fault+0x252/0x4d0
[<ffffffff8106ecb0>] do_page_fault+0x30/0x80
[<ffffffff8171bce8>] page_fault+0x28/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

task2:
[<ffffffff811aadc6>] __lock_page+0x86/0xa0
[<ffffffffa02f1e47>] mpage_prepare_extent_to_map+0x2e7/0x310 [ext4]
[<ffffffffa08a2689>] ext4_writepages+0x479/0xd60
[<ffffffff811bbede>] do_writepages+0x1e/0x30
[<ffffffff812725e5>] __writeback_single_inode+0x45/0x320
[<ffffffff81272de2>] writeback_sb_inodes+0x272/0x600
[<ffffffff81273202>] __writeback_inodes_wb+0x92/0xc0
[<ffffffff81273568>] wb_writeback+0x268/0x300
[<ffffffff81273d24>] wb_workfn+0xb4/0x390
[<ffffffff810a2f19>] process_one_work+0x189/0x420
[<ffffffff810a31fe>] worker_thread+0x4e/0x4b0
[<ffffffff810a9786>] kthread+0xe6/0x100
[<ffffffff8171a9a1>] ret_from_fork+0x41/0x50
[<ffffffffffffffff>] 0xffffffffffffffff

task1 is waiting for the PageWriteback bit of a page that task2 has
collected in mpd->io_submit->io_bio, and task2 is waiting for the lock
bit of the page which task1 has locked.

It seems that this deadlock only happens when those pages are mapped pages,
so that mpage_prepare_extent_to_map() can have pages queued in io_bio while
waiting to lock the subsequent page.

Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
---

Only did build test.

 fs/ext4/inode.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

Comments

Liu Bo Oct. 31, 2018, 6:49 p.m. UTC | #1
Hi Ted,

Could you please take a look at this?

(Unfortunately I failed to come up with a reproducer, as it requires a mix
of memory shortage, writeback and page fault.)

thanks,
liubo

On Mon, Oct 29, 2018 at 5:26 PM Liu Bo <bo.liu@linux.alibaba.com> wrote:
>
> mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> the pipe.  But if it needs to wait for a page lock, it needs to make sure
> and send down any pending writes so we don't deadlock with anyone who has
> the page lock and is waiting for writeback of things inside the bio.
>
> The related lock stack is shown as follows,
>
> task1:
> [<ffffffff811aaa52>] wait_on_page_bit+0x82/0xa0
> [<ffffffff811c5777>] shrink_page_list+0x907/0x960
> [<ffffffff811c6027>] shrink_inactive_list+0x2c7/0x680
> [<ffffffff811c6ba4>] shrink_node_memcg+0x404/0x830
> [<ffffffff811c70a8>] shrink_node+0xd8/0x300
> [<ffffffff811c73dd>] do_try_to_free_pages+0x10d/0x330
> [<ffffffff811c7865>] try_to_free_mem_cgroup_pages+0xd5/0x1b0
> [<ffffffff8122df2d>] try_charge+0x14d/0x720
> [<ffffffff812320cc>] memcg_kmem_charge_memcg+0x3c/0xa0
> [<ffffffff812321ae>] memcg_kmem_charge+0x7e/0xd0
> [<ffffffff811b68a8>] __alloc_pages_nodemask+0x178/0x260
> [<ffffffff8120bff5>] alloc_pages_current+0x95/0x140
> [<ffffffff81074247>] pte_alloc_one+0x17/0x40
> [<ffffffff811e34de>] __pte_alloc+0x1e/0x110
> [<ffffffffa06739de>] alloc_set_pte+0x5fe/0xc20
> [<ffffffff811e5d93>] do_fault+0x103/0x970
> [<ffffffff811e6e5e>] handle_mm_fault+0x61e/0xd10
> [<ffffffff8106ea02>] __do_page_fault+0x252/0x4d0
> [<ffffffff8106ecb0>] do_page_fault+0x30/0x80
> [<ffffffff8171bce8>] page_fault+0x28/0x30
> [<ffffffffffffffff>] 0xffffffffffffffff
>
> task2:
> [<ffffffff811aadc6>] __lock_page+0x86/0xa0
> [<ffffffffa02f1e47>] mpage_prepare_extent_to_map+0x2e7/0x310 [ext4]
> [<ffffffffa08a2689>] ext4_writepages+0x479/0xd60
> [<ffffffff811bbede>] do_writepages+0x1e/0x30
> [<ffffffff812725e5>] __writeback_single_inode+0x45/0x320
> [<ffffffff81272de2>] writeback_sb_inodes+0x272/0x600
> [<ffffffff81273202>] __writeback_inodes_wb+0x92/0xc0
> [<ffffffff81273568>] wb_writeback+0x268/0x300
> [<ffffffff81273d24>] wb_workfn+0xb4/0x390
> [<ffffffff810a2f19>] process_one_work+0x189/0x420
> [<ffffffff810a31fe>] worker_thread+0x4e/0x4b0
> [<ffffffff810a9786>] kthread+0xe6/0x100
> [<ffffffff8171a9a1>] ret_from_fork+0x41/0x50
> [<ffffffffffffffff>] 0xffffffffffffffff
>
> task1 is waiting for the PageWriteback bit of the page that task2 has
> collected in mpd->io_submit->io_bio, and tasks2 is waiting for the LOCKED
> bit the page which tasks1 has locked.
>
> It seems that this deadlock only happens when those pages are mapped pages
> so that mpage_prepare_extent_to_map() can have pages queued in io_bio and
> when waiting to lock the subsequent page.
>
> Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
> ---
>
> Only did build test.
>
>  fs/ext4/inode.c | 21 ++++++++++++++++++++-
>  1 file changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index c3d9a42c561e..becbfb292bf0 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -2681,7 +2681,26 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>                         if (mpd->map.m_len > 0 && mpd->next_page != page->index)
>                                 goto out;
>
> -                       lock_page(page);
> +                       if (!trylock_page(page)) {
> +                               /*
> +                                * A rare race may happen between fault and
> +                                * writeback,
> +                                *
> +                                * 1. fault may have raced in and locked this
> +                                * page ahead of us, and if fault needs to
> +                                * reclaim memory via shrink_page_list(), it may
> +                                * also wait on the writeback pages we've
> +                                * collected in our mpd->io_submit.
> +                                *
> +                                * 2. We have to submit mpd->io_submit->io_bio
> +                                * to let memory reclaim make progress in order
> +                                * to avoid the deadlock between fault and
> +                                * ourselves(writeback).
> +                                */
> +                               ext4_io_submit(&mpd->io_submit);
> +                               lock_page(page);
> +                       }
> +
>                         /*
>                          * If the page is no longer dirty, or its mapping no
>                          * longer corresponds to inode we are writing (which
> --
> 1.8.3.1
>
Jan Kara Nov. 27, 2018, 11:42 a.m. UTC | #2
CCed fsdevel since this may be interesting to other filesystem developers
as well.

On Tue 30-10-18 08:22:49, Liu Bo wrote:
> mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> the pipe.  But if it needs to wait for a page lock, it needs to make sure
> and send down any pending writes so we don't deadlock with anyone who has
> the page lock and is waiting for writeback of things inside the bio.

Thanks for the report! I agree the current code has the deadlock possibility
you describe. But I think the problem reaches a bit further than what your
patch fixes.  The problem is with pages that are unlocked but have
PageWriteback set.  Page reclaim may end up waiting for these pages and
thus any memory allocation with __GFP_FS set can block on these. So in our
current setting page writeback must not block on anything that can be held
while doing memory allocation with __GFP_FS set. The page lock is just one
of these possibilities, wait_on_page_writeback() in
mpage_prepare_extent_to_map() is another suspect, and there may be more. Or
to say it differently, if there's a lock A and a GFP_KERNEL allocation can
happen under lock A, then A cannot be taken by the writeback path. This is
actually a pretty subtle deadlock possibility and our current lockdep
instrumentation isn't going to catch it.

So I see two ways to fix this properly:

1) Change ext4 code to always submit the bio once we have a full page
prepared for writing. This may be relatively simple but has a higher CPU
overhead for bio allocation & freeing (actual IO won't really differ since
the plugging code should take care of merging the submitted bios). XFS
seems to be doing this.

2) Change the code to unlock the page only when we submit the bio.
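
Roughly, option 1 above would change the per-page flow to something like
the following (pseudo-code sketch only, not a tested patch):

	lock_page(page)
	clear_page_dirty_for_io(page)
	set_page_writeback(page)
	<add page to the bio in mpd->io_submit>
	unlock_page(page)
	ext4_io_submit(&mpd->io_submit)	<- submit now, never batch pages
	.....
	<move on to the next dirty page>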

								Honza
> The related lock stack is shown as follows,
> 
> task1:
> [<ffffffff811aaa52>] wait_on_page_bit+0x82/0xa0
> [<ffffffff811c5777>] shrink_page_list+0x907/0x960
> [<ffffffff811c6027>] shrink_inactive_list+0x2c7/0x680
> [<ffffffff811c6ba4>] shrink_node_memcg+0x404/0x830
> [<ffffffff811c70a8>] shrink_node+0xd8/0x300
> [<ffffffff811c73dd>] do_try_to_free_pages+0x10d/0x330
> [<ffffffff811c7865>] try_to_free_mem_cgroup_pages+0xd5/0x1b0
> [<ffffffff8122df2d>] try_charge+0x14d/0x720
> [<ffffffff812320cc>] memcg_kmem_charge_memcg+0x3c/0xa0
> [<ffffffff812321ae>] memcg_kmem_charge+0x7e/0xd0
> [<ffffffff811b68a8>] __alloc_pages_nodemask+0x178/0x260
> [<ffffffff8120bff5>] alloc_pages_current+0x95/0x140
> [<ffffffff81074247>] pte_alloc_one+0x17/0x40
> [<ffffffff811e34de>] __pte_alloc+0x1e/0x110
> [<ffffffffa06739de>] alloc_set_pte+0x5fe/0xc20
> [<ffffffff811e5d93>] do_fault+0x103/0x970
> [<ffffffff811e6e5e>] handle_mm_fault+0x61e/0xd10
> [<ffffffff8106ea02>] __do_page_fault+0x252/0x4d0
> [<ffffffff8106ecb0>] do_page_fault+0x30/0x80
> [<ffffffff8171bce8>] page_fault+0x28/0x30
> [<ffffffffffffffff>] 0xffffffffffffffff
> 
> task2:
> [<ffffffff811aadc6>] __lock_page+0x86/0xa0
> [<ffffffffa02f1e47>] mpage_prepare_extent_to_map+0x2e7/0x310 [ext4]
> [<ffffffffa08a2689>] ext4_writepages+0x479/0xd60
> [<ffffffff811bbede>] do_writepages+0x1e/0x30
> [<ffffffff812725e5>] __writeback_single_inode+0x45/0x320
> [<ffffffff81272de2>] writeback_sb_inodes+0x272/0x600
> [<ffffffff81273202>] __writeback_inodes_wb+0x92/0xc0
> [<ffffffff81273568>] wb_writeback+0x268/0x300
> [<ffffffff81273d24>] wb_workfn+0xb4/0x390
> [<ffffffff810a2f19>] process_one_work+0x189/0x420
> [<ffffffff810a31fe>] worker_thread+0x4e/0x4b0
> [<ffffffff810a9786>] kthread+0xe6/0x100
> [<ffffffff8171a9a1>] ret_from_fork+0x41/0x50
> [<ffffffffffffffff>] 0xffffffffffffffff
> 
> task1 is waiting for the PageWriteback bit of the page that task2 has
> collected in mpd->io_submit->io_bio, and tasks2 is waiting for the LOCKED
> bit the page which tasks1 has locked.
> 
> It seems that this deadlock only happens when those pages are mapped pages
> so that mpage_prepare_extent_to_map() can have pages queued in io_bio and
> when waiting to lock the subsequent page.
> 
> Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
> ---
> 
> Only did build test.
> 
>  fs/ext4/inode.c | 21 ++++++++++++++++++++-
>  1 file changed, 20 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index c3d9a42c561e..becbfb292bf0 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -2681,7 +2681,26 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>  			if (mpd->map.m_len > 0 && mpd->next_page != page->index)
>  				goto out;
>  
> -			lock_page(page);
> +			if (!trylock_page(page)) {
> +				/*
> +				 * A rare race may happen between fault and
> +				 * writeback,
> +				 *
> +				 * 1. fault may have raced in and locked this
> +				 * page ahead of us, and if fault needs to
> +				 * reclaim memory via shrink_page_list(), it may
> +				 * also wait on the writeback pages we've
> +				 * collected in our mpd->io_submit.
> +				 *
> +				 * 2. We have to submit mpd->io_submit->io_bio
> +				 * to let memory reclaim make progress in order
> +				 * to avoid the deadlock between fault and
> +				 * ourselves(writeback).
> +				 */
> +				ext4_io_submit(&mpd->io_submit);
> +				lock_page(page);
> +			}
> +
>  			/*
>  			 * If the page is no longer dirty, or its mapping no
>  			 * longer corresponds to inode we are writing (which
> -- 
> 1.8.3.1
>
Liu Bo Nov. 28, 2018, 8:11 p.m. UTC | #3
On Tue, Nov 27, 2018 at 12:42:49PM +0100, Jan Kara wrote:
> CCed fsdevel since this may be interesting to other filesystem developers
> as well.
> 
> On Tue 30-10-18 08:22:49, Liu Bo wrote:
> > mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> > the pipe.  But if it needs to wait for a page lock, it needs to make sure
> > and send down any pending writes so we don't deadlock with anyone who has
> > the page lock and is waiting for writeback of things inside the bio.
> 
> Thanks for report! I agree the current code has a deadlock possibility you
> describe. But I think the problem reaches a bit further than what your
> patch fixes.  The problem is with pages that are unlocked but have
> PageWriteback set.  Page reclaim may end up waiting for these pages and
> thus any memory allocation with __GFP_FS set can block on these. So in our
> current setting page writeback must not block on anything that can be held
> while doing memory allocation with __GFP_FS set. Page lock is just one of
> these possibilities, wait_on_page_writeback() in
> mpage_prepare_extent_to_map() is another suspect and there mat be more. Or
> to say it differently, if there's lock A and GFP_KERNEL allocation can
> happen under lock A, then A cannot be taken by the writeback path. This is
> actually pretty subtle deadlock possibility and our current lockdep
> instrumentation isn't going to catch this.
>

Thanks for the nice summary, it's true that a lock A held in both the
writeback path and memory reclaim can end up in a deadlock.

Fortunately, so far there are only deadlock reports involving the page's
lock bit and writeback bit, in both ext4 and btrfs [1].  I think
wait_on_page_writeback() would be OK as it's protected by the page
lock.

[1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01d658f2ca3c85c1ffb20b306e30d16197000ce7

> So I see two ways how to fix this properly:
> 
> 1) Change ext4 code to always submit the bio once we have a full page
> prepared for writing. This may be relatively simple but has a higher CPU
> overhead for bio allocation & freeing (actual IO won't really differ since
> the plugging code should take care of merging the submitted bios). XFS
> seems to be doing this.

That seems the safest way to do it, but as you said there's
some tradeoff.

(I just took a look at xfs's writepages: xfs also collects adjacent pages
in xfs_add_to_ioend(), and since xfs_vm_writepages() uses the generic
helper write_cache_pages(), which calls lock_page() as well, it's still
possible to run into the above kind of deadlock.)

> 
> 2) Change the code to unlock the page only when we submit the bio.

This sounds doable but not good IMO; the concern is that page locks
can be held for too long. And if we do 2), submitting one bio per page
as in 1) is also needed.

thanks,
-liubo

> 
> 								Honza
> > The related lock stack is shown as follows,
> > 
> > task1:
> > [<ffffffff811aaa52>] wait_on_page_bit+0x82/0xa0
> > [<ffffffff811c5777>] shrink_page_list+0x907/0x960
> > [<ffffffff811c6027>] shrink_inactive_list+0x2c7/0x680
> > [<ffffffff811c6ba4>] shrink_node_memcg+0x404/0x830
> > [<ffffffff811c70a8>] shrink_node+0xd8/0x300
> > [<ffffffff811c73dd>] do_try_to_free_pages+0x10d/0x330
> > [<ffffffff811c7865>] try_to_free_mem_cgroup_pages+0xd5/0x1b0
> > [<ffffffff8122df2d>] try_charge+0x14d/0x720
> > [<ffffffff812320cc>] memcg_kmem_charge_memcg+0x3c/0xa0
> > [<ffffffff812321ae>] memcg_kmem_charge+0x7e/0xd0
> > [<ffffffff811b68a8>] __alloc_pages_nodemask+0x178/0x260
> > [<ffffffff8120bff5>] alloc_pages_current+0x95/0x140
> > [<ffffffff81074247>] pte_alloc_one+0x17/0x40
> > [<ffffffff811e34de>] __pte_alloc+0x1e/0x110
> > [<ffffffffa06739de>] alloc_set_pte+0x5fe/0xc20
> > [<ffffffff811e5d93>] do_fault+0x103/0x970
> > [<ffffffff811e6e5e>] handle_mm_fault+0x61e/0xd10
> > [<ffffffff8106ea02>] __do_page_fault+0x252/0x4d0
> > [<ffffffff8106ecb0>] do_page_fault+0x30/0x80
> > [<ffffffff8171bce8>] page_fault+0x28/0x30
> > [<ffffffffffffffff>] 0xffffffffffffffff
> > 
> > task2:
> > [<ffffffff811aadc6>] __lock_page+0x86/0xa0
> > [<ffffffffa02f1e47>] mpage_prepare_extent_to_map+0x2e7/0x310 [ext4]
> > [<ffffffffa08a2689>] ext4_writepages+0x479/0xd60
> > [<ffffffff811bbede>] do_writepages+0x1e/0x30
> > [<ffffffff812725e5>] __writeback_single_inode+0x45/0x320
> > [<ffffffff81272de2>] writeback_sb_inodes+0x272/0x600
> > [<ffffffff81273202>] __writeback_inodes_wb+0x92/0xc0
> > [<ffffffff81273568>] wb_writeback+0x268/0x300
> > [<ffffffff81273d24>] wb_workfn+0xb4/0x390
> > [<ffffffff810a2f19>] process_one_work+0x189/0x420
> > [<ffffffff810a31fe>] worker_thread+0x4e/0x4b0
> > [<ffffffff810a9786>] kthread+0xe6/0x100
> > [<ffffffff8171a9a1>] ret_from_fork+0x41/0x50
> > [<ffffffffffffffff>] 0xffffffffffffffff
> > 
> > task1 is waiting for the PageWriteback bit of the page that task2 has
> > collected in mpd->io_submit->io_bio, and tasks2 is waiting for the LOCKED
> > bit the page which tasks1 has locked.
> > 
> > It seems that this deadlock only happens when those pages are mapped pages
> > so that mpage_prepare_extent_to_map() can have pages queued in io_bio and
> > when waiting to lock the subsequent page.
> > 
> > Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
> > ---
> > 
> > Only did build test.
> > 
> >  fs/ext4/inode.c | 21 ++++++++++++++++++++-
> >  1 file changed, 20 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> > index c3d9a42c561e..becbfb292bf0 100644
> > --- a/fs/ext4/inode.c
> > +++ b/fs/ext4/inode.c
> > @@ -2681,7 +2681,26 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
> >  			if (mpd->map.m_len > 0 && mpd->next_page != page->index)
> >  				goto out;
> >  
> > -			lock_page(page);
> > +			if (!trylock_page(page)) {
> > +				/*
> > +				 * A rare race may happen between fault and
> > +				 * writeback,
> > +				 *
> > +				 * 1. fault may have raced in and locked this
> > +				 * page ahead of us, and if fault needs to
> > +				 * reclaim memory via shrink_page_list(), it may
> > +				 * also wait on the writeback pages we've
> > +				 * collected in our mpd->io_submit.
> > +				 *
> > +				 * 2. We have to submit mpd->io_submit->io_bio
> > +				 * to let memory reclaim make progress in order
> > +				 * to avoid the deadlock between fault and
> > +				 * ourselves(writeback).
> > +				 */
> > +				ext4_io_submit(&mpd->io_submit);
> > +				lock_page(page);
> > +			}
> > +
> >  			/*
> >  			 * If the page is no longer dirty, or its mapping no
> >  			 * longer corresponds to inode we are writing (which
> > -- 
> > 1.8.3.1
> > 
> -- 
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
Jan Kara Nov. 29, 2018, 8:52 a.m. UTC | #4
On Wed 28-11-18 12:11:23, Liu Bo wrote:
> On Tue, Nov 27, 2018 at 12:42:49PM +0100, Jan Kara wrote:
> > CCed fsdevel since this may be interesting to other filesystem developers
> > as well.
> > 
> > On Tue 30-10-18 08:22:49, Liu Bo wrote:
> > > mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> > > the pipe.  But if it needs to wait for a page lock, it needs to make sure
> > > and send down any pending writes so we don't deadlock with anyone who has
> > > the page lock and is waiting for writeback of things inside the bio.
> > 
> > Thanks for report! I agree the current code has a deadlock possibility you
> > describe. But I think the problem reaches a bit further than what your
> > patch fixes.  The problem is with pages that are unlocked but have
> > PageWriteback set.  Page reclaim may end up waiting for these pages and
> > thus any memory allocation with __GFP_FS set can block on these. So in our
> > current setting page writeback must not block on anything that can be held
> > while doing memory allocation with __GFP_FS set. Page lock is just one of
> > these possibilities, wait_on_page_writeback() in
> > mpage_prepare_extent_to_map() is another suspect and there mat be more. Or
> > to say it differently, if there's lock A and GFP_KERNEL allocation can
> > happen under lock A, then A cannot be taken by the writeback path. This is
> > actually pretty subtle deadlock possibility and our current lockdep
> > instrumentation isn't going to catch this.
> >
> 
> Thanks for the nice summary, it's true that a lock A held in both
> writeback path and memory reclaim can end up with deadlock.
> 
> Fortunately, by far there're only deadlock reports of page's lock bit
> and writeback bit in both ext4 and btrfs[1].  I think
> wait_on_page_writeback() would be OK as it's been protected by page
> lock.
> 
> [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01d658f2ca3c85c1ffb20b306e30d16197000ce7

Yes, but that may just mean that the other deadlocks are just harder to
hit...

> > So I see two ways how to fix this properly:
> > 
> > 1) Change ext4 code to always submit the bio once we have a full page
> > prepared for writing. This may be relatively simple but has a higher CPU
> > overhead for bio allocation & freeing (actual IO won't really differ since
> > the plugging code should take care of merging the submitted bios). XFS
> > seems to be doing this.
> 
> Seems that that's the safest way to do it, but as you said there's
> some tradeoff.
> 
> (Just took a look at xfs's writepages, xfs also did the page
> collection if there're adjacent pages in xfs_add_to_ioend(), and since
> xfs_vm_writepages() is using the generic helper write_cache_pages()
> which calls lock_page() as well, it's still possible to run into the
> above kind of deadlock.)

Originally I thought XFS doesn't have this problem but now when I look
again, you are right that their ioend may accumulate more pages to write
and so they are prone to the same deadlock as ext4. Added XFS list to CC.

> > 2) Change the code to unlock the page only when we submit the bio.
> 
> This sounds doable but not good IMO, the concern is that page locks
> can be held for too long, or if we do 2), submitting one bio per page
> in 1) is also needed.

Hum, you're right that page lock hold times may increase noticeably and
that's not very good. Ideally we'd need a way to submit whatever we have
prepared when we are going to sleep, but there's no easy way to do that.
Hum... except if we somehow hooked into the bio plugging mechanism we have.
And actually it seems a mechanism for unplug callbacks is already
implemented (blk_check_plugged()), so our writepages() functions could just
add their callback there; on schedule the unplug callbacks will get called
and we can submit the bio we have accumulated so far in our writepages
context. So I think using this will be the best option. We might just add a
variant of blk_check_plugged() that adds a passed-in blk_plug_cb structure,
as all filesystems will likely want to embed it in their writepages context
structure instead of allocating it with GFP_ATOMIC...
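
A minimal sketch of that idea (illustrative only, not a tested patch: it
reuses the existing blk_check_plugged() rather than the embedded-structure
variant, and the ext4_writepages_unplug() name is made up here):

	/* Called when the task's block plug is flushed, e.g. on schedule(). */
	static void ext4_writepages_unplug(struct blk_plug_cb *cb, bool from_schedule)
	{
		struct mpage_da_data *mpd = cb->data;

		/*
		 * Submit whatever bio we have accumulated so far, so that
		 * reclaim waiting on those writeback pages can make progress.
		 */
		ext4_io_submit(&mpd->io_submit);
	}

	/* In ext4_writepages(), inside the blk_start_plug()/blk_finish_plug()
	 * section, register the callback once per plug: */
	blk_check_plugged(ext4_writepages_unplug, mpd, sizeof(struct blk_plug_cb));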

Will you look into this or should I try to write the patch?

								Honza

> > > task1:
> > > [<ffffffff811aaa52>] wait_on_page_bit+0x82/0xa0
> > > [<ffffffff811c5777>] shrink_page_list+0x907/0x960
> > > [<ffffffff811c6027>] shrink_inactive_list+0x2c7/0x680
> > > [<ffffffff811c6ba4>] shrink_node_memcg+0x404/0x830
> > > [<ffffffff811c70a8>] shrink_node+0xd8/0x300
> > > [<ffffffff811c73dd>] do_try_to_free_pages+0x10d/0x330
> > > [<ffffffff811c7865>] try_to_free_mem_cgroup_pages+0xd5/0x1b0
> > > [<ffffffff8122df2d>] try_charge+0x14d/0x720
> > > [<ffffffff812320cc>] memcg_kmem_charge_memcg+0x3c/0xa0
> > > [<ffffffff812321ae>] memcg_kmem_charge+0x7e/0xd0
> > > [<ffffffff811b68a8>] __alloc_pages_nodemask+0x178/0x260
> > > [<ffffffff8120bff5>] alloc_pages_current+0x95/0x140
> > > [<ffffffff81074247>] pte_alloc_one+0x17/0x40
> > > [<ffffffff811e34de>] __pte_alloc+0x1e/0x110
> > > [<ffffffffa06739de>] alloc_set_pte+0x5fe/0xc20
> > > [<ffffffff811e5d93>] do_fault+0x103/0x970
> > > [<ffffffff811e6e5e>] handle_mm_fault+0x61e/0xd10
> > > [<ffffffff8106ea02>] __do_page_fault+0x252/0x4d0
> > > [<ffffffff8106ecb0>] do_page_fault+0x30/0x80
> > > [<ffffffff8171bce8>] page_fault+0x28/0x30
> > > [<ffffffffffffffff>] 0xffffffffffffffff
> > > 
> > > task2:
> > > [<ffffffff811aadc6>] __lock_page+0x86/0xa0
> > > [<ffffffffa02f1e47>] mpage_prepare_extent_to_map+0x2e7/0x310 [ext4]
> > > [<ffffffffa08a2689>] ext4_writepages+0x479/0xd60
> > > [<ffffffff811bbede>] do_writepages+0x1e/0x30
> > > [<ffffffff812725e5>] __writeback_single_inode+0x45/0x320
> > > [<ffffffff81272de2>] writeback_sb_inodes+0x272/0x600
> > > [<ffffffff81273202>] __writeback_inodes_wb+0x92/0xc0
> > > [<ffffffff81273568>] wb_writeback+0x268/0x300
> > > [<ffffffff81273d24>] wb_workfn+0xb4/0x390
> > > [<ffffffff810a2f19>] process_one_work+0x189/0x420
> > > [<ffffffff810a31fe>] worker_thread+0x4e/0x4b0
> > > [<ffffffff810a9786>] kthread+0xe6/0x100
> > > [<ffffffff8171a9a1>] ret_from_fork+0x41/0x50
> > > [<ffffffffffffffff>] 0xffffffffffffffff
> > > 
> > > task1 is waiting for the PageWriteback bit of the page that task2 has
> > > collected in mpd->io_submit->io_bio, and tasks2 is waiting for the LOCKED
> > > bit the page which tasks1 has locked.
> > > 
> > > It seems that this deadlock only happens when those pages are mapped pages
> > > so that mpage_prepare_extent_to_map() can have pages queued in io_bio and
> > > when waiting to lock the subsequent page.
> > > 
> > > Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
> > > ---
> > > 
> > > Only did build test.
> > > 
> > >  fs/ext4/inode.c | 21 ++++++++++++++++++++-
> > >  1 file changed, 20 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> > > index c3d9a42c561e..becbfb292bf0 100644
> > > --- a/fs/ext4/inode.c
> > > +++ b/fs/ext4/inode.c
> > > @@ -2681,7 +2681,26 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
> > >  			if (mpd->map.m_len > 0 && mpd->next_page != page->index)
> > >  				goto out;
> > >  
> > > -			lock_page(page);
> > > +			if (!trylock_page(page)) {
> > > +				/*
> > > +				 * A rare race may happen between fault and
> > > +				 * writeback,
> > > +				 *
> > > +				 * 1. fault may have raced in and locked this
> > > +				 * page ahead of us, and if fault needs to
> > > +				 * reclaim memory via shrink_page_list(), it may
> > > +				 * also wait on the writeback pages we've
> > > +				 * collected in our mpd->io_submit.
> > > +				 *
> > > +				 * 2. We have to submit mpd->io_submit->io_bio
> > > +				 * to let memory reclaim make progress in order
> > > +				 * to avoid the deadlock between fault and
> > > +				 * ourselves(writeback).
> > > +				 */
> > > +				ext4_io_submit(&mpd->io_submit);
> > > +				lock_page(page);
> > > +			}
> > > +
> > >  			/*
> > >  			 * If the page is no longer dirty, or its mapping no
> > >  			 * longer corresponds to inode we are writing (which
> > > -- 
> > > 1.8.3.1
> > > 
> > -- 
> > Jan Kara <jack@suse.com>
> > SUSE Labs, CR
Dave Chinner Nov. 29, 2018, 12:02 p.m. UTC | #5
On Thu, Nov 29, 2018 at 09:52:38AM +0100, Jan Kara wrote:
> On Wed 28-11-18 12:11:23, Liu Bo wrote:
> > On Tue, Nov 27, 2018 at 12:42:49PM +0100, Jan Kara wrote:
> > > CCed fsdevel since this may be interesting to other filesystem developers
> > > as well.
> > > 
> > > On Tue 30-10-18 08:22:49, Liu Bo wrote:
> > > > mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> > > > the pipe.  But if it needs to wait for a page lock, it needs to make sure
> > > > and send down any pending writes so we don't deadlock with anyone who has
> > > > the page lock and is waiting for writeback of things inside the bio.
> > > 
> > > Thanks for report! I agree the current code has a deadlock possibility you
> > > describe. But I think the problem reaches a bit further than what your
> > > patch fixes.  The problem is with pages that are unlocked but have
> > > PageWriteback set.  Page reclaim may end up waiting for these pages and
> > > thus any memory allocation with __GFP_FS set can block on these. So in our
> > > current setting page writeback must not block on anything that can be held
> > > while doing memory allocation with __GFP_FS set. Page lock is just one of
> > > these possibilities, wait_on_page_writeback() in
> > > mpage_prepare_extent_to_map() is another suspect and there mat be more. Or
> > > to say it differently, if there's lock A and GFP_KERNEL allocation can
> > > happen under lock A, then A cannot be taken by the writeback path. This is
> > > actually pretty subtle deadlock possibility and our current lockdep
> > > instrumentation isn't going to catch this.
> > >
> > 
> > Thanks for the nice summary, it's true that a lock A held in both
> > writeback path and memory reclaim can end up with deadlock.
> > 
> > Fortunately, by far there're only deadlock reports of page's lock bit
> > and writeback bit in both ext4 and btrfs[1].  I think
> > wait_on_page_writeback() would be OK as it's been protected by page
> > lock.
> > 
> > [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01d658f2ca3c85c1ffb20b306e30d16197000ce7
> 
> Yes, but that may just mean that the other deadlocks are just harder to
> hit...
> 
> > > So I see two ways how to fix this properly:
> > > 
> > > 1) Change ext4 code to always submit the bio once we have a full page
> > > prepared for writing. This may be relatively simple but has a higher CPU
> > > overhead for bio allocation & freeing (actual IO won't really differ since
> > > the plugging code should take care of merging the submitted bios). XFS
> > > seems to be doing this.
> > 
> > Seems that that's the safest way to do it, but as you said there's
> > some tradeoff.
> > 
> > (Just took a look at xfs's writepages, xfs also did the page
> > collection if there're adjacent pages in xfs_add_to_ioend(), and since
> > xfs_vm_writepages() is using the generic helper write_cache_pages()
> > which calls lock_page() as well, it's still possible to run into the
> > above kind of deadlock.)
> 
> Originally I thought XFS doesn't have this problem but now when I look
> again, you are right that their ioend may accumulate more pages to write
> and so they are prone to the same deadlock ext4 is. Added XFS list to CC.

I don't think XFS has a problem here, because the deadlock is
dependent on holding a lock that writeback might take and then doing
a GFP_KERNEL allocation. I don't think we do that anywhere in XFS -
the only lock that is of concern here is the ip->i_ilock, and I
think we always do GFP_NOFS allocations inside that lock.

As it is, this sort of lock vs reclaim inversion should be caught by
lockdep - allocations and reclaim contexts are recorded by lockdep, so
we get reports if we do lock A - alloc and then do reclaim - lock A.
We've always had problems with false positives from lockdep for
these situations where common XFS code can be called from GFP_KERNEL
valid contexts as well as reclaim or GFP_NOFS-only contexts, but I
don't recall ever seeing such a report for the writeback path....

> > > 2) Change the code to unlock the page only when we submit the bio.

> > This sounds doable but not good IMO, the concern is that page locks
> > can be held for too long, or if we do 2), submitting one bio per page
> > in 1) is also needed.
> 
> Hum, you're right that page lock hold times may increase noticeably and
> that's not very good. Ideally we'd need a way to submit whatever we have
> prepared when we are going to sleep but there's no easy way to do that.

XFS unlocks the page after it has been added to the bio and marked
as under writeback, not when the bio is submitted. i.e.

writepage w/ locked page dirty
lock ilock
<mapping, allocation>
unlock ilock
bio_add_page
clear_page_dirty_for_io
set_page_writeback
unlock_page
.....
<gather more dirty pages into bio>
.....
<bio is full or discontiguous page to be written>
submit_bio()

If we switch away while holding a partially built bio, the only page
we have locked is the one we are currently trying to add to the bio.
Lock ordering prevents deadlocks on that one page, and all other
pages in the bio being built are marked as under writeback and are
not locked. Hence anything that wants to modify a page held in the
bio will block waiting for page writeback to clear, not the page
lock.

Cheers,

Dave.
Jan Kara Nov. 29, 2018, 1 p.m. UTC | #6
On Thu 29-11-18 23:02:53, Dave Chinner wrote:
> On Thu, Nov 29, 2018 at 09:52:38AM +0100, Jan Kara wrote:
> > On Wed 28-11-18 12:11:23, Liu Bo wrote:
> > > On Tue, Nov 27, 2018 at 12:42:49PM +0100, Jan Kara wrote:
> > > > CCed fsdevel since this may be interesting to other filesystem developers
> > > > as well.
> > > > 
> > > > On Tue 30-10-18 08:22:49, Liu Bo wrote:
> > > > > mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> > > > > the pipe.  But if it needs to wait for a page lock, it needs to make sure
> > > > > and send down any pending writes so we don't deadlock with anyone who has
> > > > > the page lock and is waiting for writeback of things inside the bio.
> > > > 
> > > > Thanks for report! I agree the current code has a deadlock possibility you
> > > > describe. But I think the problem reaches a bit further than what your
> > > > patch fixes.  The problem is with pages that are unlocked but have
> > > > PageWriteback set.  Page reclaim may end up waiting for these pages and
> > > > thus any memory allocation with __GFP_FS set can block on these. So in our
> > > > current setting page writeback must not block on anything that can be held
> > > > while doing memory allocation with __GFP_FS set. Page lock is just one of
> > > > these possibilities, wait_on_page_writeback() in
> > > > mpage_prepare_extent_to_map() is another suspect and there mat be more. Or
> > > > to say it differently, if there's lock A and GFP_KERNEL allocation can
> > > > happen under lock A, then A cannot be taken by the writeback path. This is
> > > > actually pretty subtle deadlock possibility and our current lockdep
> > > > instrumentation isn't going to catch this.
> > > >
> > > 
> > > Thanks for the nice summary, it's true that a lock A held in both
> > > writeback path and memory reclaim can end up with deadlock.
> > > 
> > > Fortunately, by far there're only deadlock reports of page's lock bit
> > > and writeback bit in both ext4 and btrfs[1].  I think
> > > wait_on_page_writeback() would be OK as it's been protected by page
> > > lock.
> > > 
> > > [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01d658f2ca3c85c1ffb20b306e30d16197000ce7
> > 
> > Yes, but that may just mean that the other deadlocks are just harder to
> > hit...
> > 
> > > > So I see two ways how to fix this properly:
> > > > 
> > > > 1) Change ext4 code to always submit the bio once we have a full page
> > > > prepared for writing. This may be relatively simple but has a higher CPU
> > > > overhead for bio allocation & freeing (actual IO won't really differ since
> > > > the plugging code should take care of merging the submitted bios). XFS
> > > > seems to be doing this.
> > > 
> > > Seems that that's the safest way to do it, but as you said there's
> > > some tradeoff.
> > > 
> > > (Just took a look at xfs's writepages, xfs also did the page
> > > collection if there're adjacent pages in xfs_add_to_ioend(), and since
> > > xfs_vm_writepages() is using the generic helper write_cache_pages()
> > > which calls lock_page() as well, it's still possible to run into the
> > > above kind of deadlock.)
> > 
> > Originally I thought XFS doesn't have this problem but now when I look
> > again, you are right that their ioend may accumulate more pages to write
> > and so they are prone to the same deadlock ext4 is. Added XFS list to CC.
> 
> I don't think XFS has a problem here, because the deadlock is
> dependent on holding a lock that writeback might take and then doing
> a GFP_KERNEL allocation. I don't think we do that anywhere in XFS -
> the only lock that is of concern here is the ip->i_ilock, and I
> think we always do GFP_NOFS allocations inside that lock.
> 
> As it is, this sort of lock vs reclaim inversion should be caught by
> lockdep - allocations and reclaim contexts are recorded by lockdep
> we get reports if we do lock A - alloc and then do reclaim - lock A.
> We've always had problems with false positives from lockdep for
> these situations where common XFS code can be called from GFP_KERNEL
> valid contexts as well as reclaim or GFP_NOFS-only contexts, but I
> don't recall ever seeing such a report for the writeback path....

I think for A == page lock, XFS may have the problem (and lockdep won't
notice because it does not track page locks). There are some parts of the
kernel which do GFP_KERNEL allocations under the page lock - pte_alloc_one()
is one such function, which allocates page tables with GFP_KERNEL and gets
called with the faulted page locked. And I believe there are others.

So direct reclaim from pte_alloc_one() can wait for writeback on page B
while holding lock on page A. And if B is just prepared (added to bio,
under writeback, unlocked) but not submitted in xfs_writepages() and we
block on lock_page(A), we have a deadlock.

Generally deadlocks like these will be invisible to lockdep because it does
not track either PageWriteback or PageLocked as a dependency.

> > > > 2) Change the code to unlock the page only when we submit the bio.
> 
> > > This sounds doable but not good IMO, the concern is that page locks
> > > can be held for too long, or if we do 2), submitting one bio per page
> > > in 1) is also needed.
> > 
> > Hum, you're right that page lock hold times may increase noticeably and
> > that's not very good. Ideally we'd need a way to submit whatever we have
> > prepared when we are going to sleep but there's no easy way to do that.
> 
> XFS unlocks the page after it has been added to the bio and marked
> as under writeback, not when the bio is submitted. i.e.
> 
> writepage w/ locked page dirty
> lock ilock
> <mapping, allocation>
> unlock ilock
> bio_add_page
> clear_page_dirty_for_io
> set_page_writeback
> unlock_page
> .....
> <gather more dirty pages into bio>
> .....
> <bio is full or discontiguous page to be written>
> submit_bio()

Yes, ext4 works the same way. But thanks for confirmation.

> If we switch away which holding a partially built bio, the only page
> we have locked is the one we are currently trying to add to the bio.
> Lock ordering prevents deadlocks on that one page, and all other
> pages in the bio being built are marked as under writeback and are
> not locked. Hence anything that wants to modify a page held in the
> bio will block waiting for page writeback to clear, not the page
> lock.

Yes, and the blocking on writeback of such a page in direct reclaim is
exactly one link in the deadlock chain...

								Honza
Zhengping Zhou Nov. 29, 2018, 2:07 p.m. UTC | #7
In kernel 4.20-rc1, function shrink_page_list():

1227         if (PageWriteback(page)) {
1228             /* Case 1 above */
1229             if (current_is_kswapd() &&
1230                 PageReclaim(page) &&
1231                 test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
1232                 nr_immediate++;
1233                 goto activate_locked;
1234
1235             /* Case 2 above */
1236             } else if (sane_reclaim(sc) ||
1237                 !PageReclaim(page) || !may_enter_fs) {
1238                 /*
1239                  * This is slightly racy - end_page_writeback()
1240                  * might have just cleared PageReclaim, then
1241                  * setting PageReclaim here end up interpreted
1242                  * as PageReadahead - but that does not matter
1243                  * enough to care.  What we do want is for this
1244                  * page to have PageReclaim set next time memcg
1245                  * reclaim reaches the tests above, so it will
1246                  * then wait_on_page_writeback() to avoid OOM;
1247                  * and it's also appropriate in global reclaim.
1248                  */
1249                 SetPageReclaim(page);
1250                 nr_writeback++;
1251                 goto activate_locked;
1252
1253             /* Case 3 above */
1254             } else {
1255                 unlock_page(page);
1256                 wait_on_page_writeback(page);
1257                 /* then go back and try same page again */
1258                 list_add_tail(&page->lru, page_list);
1259                 continue;
1260             }
1261         }


What's your kernel version? It seems we won't wait_on_page_writeback()
while holding the page lock in the latest kernel version.


Jan Kara <jack@suse.cz> wrote on Thu, Nov 29, 2018 at 9:01 PM:
>
> On Thu 29-11-18 23:02:53, Dave Chinner wrote:
> > On Thu, Nov 29, 2018 at 09:52:38AM +0100, Jan Kara wrote:
> > > On Wed 28-11-18 12:11:23, Liu Bo wrote:
> > > > On Tue, Nov 27, 2018 at 12:42:49PM +0100, Jan Kara wrote:
> > > > > CCed fsdevel since this may be interesting to other filesystem developers
> > > > > as well.
> > > > >
> > > > > On Tue 30-10-18 08:22:49, Liu Bo wrote:
> > > > > > mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> > > > > > the pipe.  But if it needs to wait for a page lock, it needs to make sure
> > > > > > and send down any pending writes so we don't deadlock with anyone who has
> > > > > > the page lock and is waiting for writeback of things inside the bio.
> > > > >
> > > > > Thanks for report! I agree the current code has a deadlock possibility you
> > > > > describe. But I think the problem reaches a bit further than what your
> > > > > patch fixes.  The problem is with pages that are unlocked but have
> > > > > PageWriteback set.  Page reclaim may end up waiting for these pages and
> > > > > thus any memory allocation with __GFP_FS set can block on these. So in our
> > > > > current setting page writeback must not block on anything that can be held
> > > > > while doing memory allocation with __GFP_FS set. Page lock is just one of
> > > > > these possibilities, wait_on_page_writeback() in
> > > > > mpage_prepare_extent_to_map() is another suspect and there mat be more. Or
> > > > > to say it differently, if there's lock A and GFP_KERNEL allocation can
> > > > > happen under lock A, then A cannot be taken by the writeback path. This is
> > > > > actually pretty subtle deadlock possibility and our current lockdep
> > > > > instrumentation isn't going to catch this.
> > > > >
> > > >
> > > > Thanks for the nice summary, it's true that a lock A held in both
> > > > writeback path and memory reclaim can end up with deadlock.
> > > >
> > > > Fortunately, by far there're only deadlock reports of page's lock bit
> > > > and writeback bit in both ext4 and btrfs[1].  I think
> > > > wait_on_page_writeback() would be OK as it's been protected by page
> > > > lock.
> > > >
> > > > [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01d658f2ca3c85c1ffb20b306e30d16197000ce7
> > >
> > > Yes, but that may just mean that the other deadlocks are just harder to
> > > hit...
> > >
> > > > > So I see two ways how to fix this properly:
> > > > >
> > > > > 1) Change ext4 code to always submit the bio once we have a full page
> > > > > prepared for writing. This may be relatively simple but has a higher CPU
> > > > > overhead for bio allocation & freeing (actual IO won't really differ since
> > > > > the plugging code should take care of merging the submitted bios). XFS
> > > > > seems to be doing this.
> > > >
> > > > Seems that that's the safest way to do it, but as you said there's
> > > > some tradeoff.
> > > >
> > > > (Just took a look at xfs's writepages, xfs also did the page
> > > > collection if there're adjacent pages in xfs_add_to_ioend(), and since
> > > > xfs_vm_writepages() is using the generic helper write_cache_pages()
> > > > which calls lock_page() as well, it's still possible to run into the
> > > > above kind of deadlock.)
> > >
> > > Originally I thought XFS doesn't have this problem but now when I look
> > > again, you are right that their ioend may accumulate more pages to write
> > > and so they are prone to the same deadlock ext4 is. Added XFS list to CC.
> >
> > I don't think XFS has a problem here, because the deadlock is
> > dependent on holding a lock that writeback might take and then doing
> > a GFP_KERNEL allocation. I don't think we do that anywhere in XFS -
> > the only lock that is of concern here is the ip->i_ilock, and I
> > think we always do GFP_NOFS allocations inside that lock.
> >
> > As it is, this sort of lock vs reclaim inversion should be caught by
> > lockdep - allocations and reclaim contexts are recorded by lockdep
> > we get reports if we do lock A - alloc and then do reclaim - lock A.
> > We've always had problems with false positives from lockdep for
> > these situations where common XFS code can be called from GFP_KERNEL
> > valid contexts as well as reclaim or GFP_NOFS-only contexts, but I
> > don't recall ever seeing such a report for the writeback path....
>
> I think for A == page lock, XFS may have the problem (and lockdep won't
> notice because it does not track page locks). There are some parts of
> kernel which do GFP_KERNEL allocations under page lock - pte_alloc_one() is
> one such function which allocates page tables with GFP_KERNEL and gets
> called with the faulted page locked. And I believe there are others.
>
> So direct reclaim from pte_alloc_one() can wait for writeback on page B
> while holding lock on page A. And if B is just prepared (added to bio,
> under writeback, unlocked) but not submitted in xfs_writepages() and we
> block on lock_page(A), we have a deadlock.
>
> Generally deadlocks like these will be invisible to lockdep because it does
> not track either PageWriteback or PageLocked as a dependency.
>
> > > > > 2) Change the code to unlock the page only when we submit the bio.
> >
> > > > This sounds doable but not good IMO, the concern is that page locks
> > > > can be held for too long, or if we do 2), submitting one bio per page
> > > > in 1) is also needed.
> > >
> > > Hum, you're right that page lock hold times may increase noticeably and
> > > that's not very good. Ideally we'd need a way to submit whatever we have
> > > prepared when we are going to sleep but there's no easy way to do that.
> >
> > XFS unlocks the page after it has been added to the bio and marked
> > as under writeback, not when the bio is submitted. i.e.
> >
> > writepage w/ locked page dirty
> > lock ilock
> > <mapping, allocation>
> > unlock ilock
> > bio_add_page
> > clear_page_dirty_for_io
> > set_page_writeback
> > unlock_page
> > .....
> > <gather more dirty pages into bio>
> > .....
> > <bio is full or discontiguous page to be written>
> > submit_bio()
>
> Yes, ext4 works the same way. But thanks for confirmation.
>
> > If we switch away which holding a partially built bio, the only page
> > we have locked is the one we are currently trying to add to the bio.
> > Lock ordering prevents deadlocks on that one page, and all other
> > pages in the bio being built are marked as under writeback and are
> > not locked. Hence anything that wants to modify a page held in the
> > bio will block waiting for page writeback to clear, not the page
> > lock.
>
> Yes, and the blocking on writeback of such page in direct reclaim is
> exactly one link in the deadlock chain...
>
>                                                                 Honza
> --
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
Jan Kara Nov. 29, 2018, 3:58 p.m. UTC | #8
On Thu 29-11-18 22:07:44, Zhengping Zhou wrote:
> in kenrel 4.20-rc1 , function shrink_page_list:
> 
> 1227         if (PageWriteback(page)) {
> 1228             /* Case 1 above */
> 1229             if (current_is_kswapd() &&
> 1230                 PageReclaim(page) &&
> 1231                 test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
> 1232                 nr_immediate++;
> 1233                 goto activate_locked;
> 1234
> 1235             /* Case 2 above */
> 1236             } else if (sane_reclaim(sc) ||
> 1237                 !PageReclaim(page) || !may_enter_fs) {
> 1238                 /*
> 1239                  * This is slightly racy - end_page_writeback()
> 1240                  * might have just cleared PageReclaim, then
> 1241                  * setting PageReclaim here end up interpreted
> 1242                  * as PageReadahead - but that does not matter
> 1243                  * enough to care.  What we do want is for this
> 1244                  * page to have PageReclaim set next time memcg
> 1245                  * reclaim reaches the tests above, so it will
> 1246                  * then wait_on_page_writeback() to avoid OOM;
> 1247                  * and it's also appropriate in global reclaim.
> 1248                  */
> 1249                 SetPageReclaim(page);
> 1250                 nr_writeback++;
> 1251                 goto activate_locked;
> 1252
> 1253             /* Case 3 above */
> 1254             } else {
> 1255                 unlock_page(page);
> 1256                 wait_on_page_writeback(page);
> 1257                 /* then go back and try same page again */
> 1258                 list_add_tail(&page->lru, page_list);
> 1259                 continue;
> 1260             }
> 1261         }
> 
> 
> What's your kernel version ? Seems we won't  wait_page_writeback with
> holding page lock in latest kernel version.

The page lock we hold is the page lock that is already held when doing
memory allocation. So it is not the one that is acquired and released by
the page reclaim code.

								Honza

> Jan Kara <jack@suse.cz> wrote on Thu, Nov 29, 2018 at 9:01 PM:
> >
> > On Thu 29-11-18 23:02:53, Dave Chinner wrote:
> > > On Thu, Nov 29, 2018 at 09:52:38AM +0100, Jan Kara wrote:
> > > > On Wed 28-11-18 12:11:23, Liu Bo wrote:
> > > > > On Tue, Nov 27, 2018 at 12:42:49PM +0100, Jan Kara wrote:
> > > > > > CCed fsdevel since this may be interesting to other filesystem developers
> > > > > > as well.
> > > > > >
> > > > > > On Tue 30-10-18 08:22:49, Liu Bo wrote:
> > > > > > > mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> > > > > > > the pipe.  But if it needs to wait for a page lock, it needs to make sure
> > > > > > > and send down any pending writes so we don't deadlock with anyone who has
> > > > > > > the page lock and is waiting for writeback of things inside the bio.
> > > > > >
> > > > > > Thanks for report! I agree the current code has a deadlock possibility you
> > > > > > describe. But I think the problem reaches a bit further than what your
> > > > > > patch fixes.  The problem is with pages that are unlocked but have
> > > > > > PageWriteback set.  Page reclaim may end up waiting for these pages and
> > > > > > thus any memory allocation with __GFP_FS set can block on these. So in our
> > > > > > current setting page writeback must not block on anything that can be held
> > > > > > while doing memory allocation with __GFP_FS set. Page lock is just one of
> > > > > > these possibilities, wait_on_page_writeback() in
> > > > > > mpage_prepare_extent_to_map() is another suspect and there mat be more. Or
> > > > > > to say it differently, if there's lock A and GFP_KERNEL allocation can
> > > > > > happen under lock A, then A cannot be taken by the writeback path. This is
> > > > > > actually pretty subtle deadlock possibility and our current lockdep
> > > > > > instrumentation isn't going to catch this.
> > > > > >
> > > > >
> > > > > Thanks for the nice summary, it's true that a lock A held in both
> > > > > writeback path and memory reclaim can end up with deadlock.
> > > > >
> > > > > Fortunately, by far there're only deadlock reports of page's lock bit
> > > > > and writeback bit in both ext4 and btrfs[1].  I think
> > > > > wait_on_page_writeback() would be OK as it's been protected by page
> > > > > lock.
> > > > >
> > > > > [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01d658f2ca3c85c1ffb20b306e30d16197000ce7
> > > >
> > > > Yes, but that may just mean that the other deadlocks are just harder to
> > > > hit...
> > > >
> > > > > > So I see two ways how to fix this properly:
> > > > > >
> > > > > > 1) Change ext4 code to always submit the bio once we have a full page
> > > > > > prepared for writing. This may be relatively simple but has a higher CPU
> > > > > > overhead for bio allocation & freeing (actual IO won't really differ since
> > > > > > the plugging code should take care of merging the submitted bios). XFS
> > > > > > seems to be doing this.
> > > > >
> > > > > Seems that that's the safest way to do it, but as you said there's
> > > > > some tradeoff.
> > > > >
> > > > > (Just took a look at xfs's writepages, xfs also did the page
> > > > > collection if there're adjacent pages in xfs_add_to_ioend(), and since
> > > > > xfs_vm_writepages() is using the generic helper write_cache_pages()
> > > > > which calls lock_page() as well, it's still possible to run into the
> > > > > above kind of deadlock.)
> > > >
> > > > Originally I thought XFS doesn't have this problem but now when I look
> > > > again, you are right that their ioend may accumulate more pages to write
> > > > and so they are prone to the same deadlock ext4 is. Added XFS list to CC.
> > >
> > > I don't think XFS has a problem here, because the deadlock is
> > > dependent on holding a lock that writeback might take and then doing
> > > a GFP_KERNEL allocation. I don't think we do that anywhere in XFS -
> > > the only lock that is of concern here is the ip->i_ilock, and I
> > > think we always do GFP_NOFS allocations inside that lock.
> > >
> > > As it is, this sort of lock vs reclaim inversion should be caught by
> > > lockdep - allocations and reclaim contexts are recorded by lockdep
> > > we get reports if we do lock A - alloc and then do reclaim - lock A.
> > > We've always had problems with false positives from lockdep for
> > > these situations where common XFS code can be called from GFP_KERNEL
> > > valid contexts as well as reclaim or GFP_NOFS-only contexts, but I
> > > don't recall ever seeing such a report for the writeback path....
> >
> > I think for A == page lock, XFS may have the problem (and lockdep won't
> > notice because it does not track page locks). There are some parts of
> > kernel which do GFP_KERNEL allocations under page lock - pte_alloc_one() is
> > one such function which allocates page tables with GFP_KERNEL and gets
> > called with the faulted page locked. And I believe there are others.
> >
> > So direct reclaim from pte_alloc_one() can wait for writeback on page B
> > while holding lock on page A. And if B is just prepared (added to bio,
> > under writeback, unlocked) but not submitted in xfs_writepages() and we
> > block on lock_page(A), we have a deadlock.
> >
> > Generally deadlocks like these will be invisible to lockdep because it does
> > not track either PageWriteback or PageLocked as a dependency.
> >
> > > > > > 2) Change the code to unlock the page only when we submit the bio.
> > >
> > > > > This sounds doable but not good IMO, the concern is that page locks
> > > > > can be held for too long, or if we do 2), submitting one bio per page
> > > > > in 1) is also needed.
> > > >
> > > > Hum, you're right that page lock hold times may increase noticeably and
> > > > that's not very good. Ideally we'd need a way to submit whatever we have
> > > > prepared when we are going to sleep but there's no easy way to do that.
> > >
> > > XFS unlocks the page after it has been added to the bio and marked
> > > as under writeback, not when the bio is submitted. i.e.
> > >
> > > writepage w/ locked page dirty
> > > lock ilock
> > > <mapping, allocation>
> > > unlock ilock
> > > bio_add_page
> > > clear_page_dirty_for_io
> > > set_page_writeback
> > > unlock_page
> > > .....
> > > <gather more dirty pages into bio>
> > > .....
> > > <bio is full or discontiguous page to be written>
> > > submit_bio()
> >
> > Yes, ext4 works the same way. But thanks for confirmation.
> >
> > > If we switch away while holding a partially built bio, the only page
> > > we have locked is the one we are currently trying to add to the bio.
> > > Lock ordering prevents deadlocks on that one page, and all other
> > > pages in the bio being built are marked as under writeback and are
> > > not locked. Hence anything that wants to modify a page held in the
> > > bio will block waiting for page writeback to clear, not the page
> > > lock.
> >
> > Yes, and the blocking on writeback of such page in direct reclaim is
> > exactly one link in the deadlock chain...
> >
> >                                                                 Honza
> > --
> > Jan Kara <jack@suse.com>
> > SUSE Labs, CR
Liu Bo Nov. 29, 2018, 7:24 p.m. UTC | #9
On Thu, Nov 29, 2018 at 09:52:38AM +0100, Jan Kara wrote:
> On Wed 28-11-18 12:11:23, Liu Bo wrote:
> > On Tue, Nov 27, 2018 at 12:42:49PM +0100, Jan Kara wrote:
> > > CCed fsdevel since this may be interesting to other filesystem developers
> > > as well.
> > > 
> > > On Tue 30-10-18 08:22:49, Liu Bo wrote:
> > > > mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> > > > the pipe.  But if it needs to wait for a page lock, it needs to make sure
> > > > and send down any pending writes so we don't deadlock with anyone who has
> > > > the page lock and is waiting for writeback of things inside the bio.
> > > 
> > > Thanks for report! I agree the current code has a deadlock possibility you
> > > describe. But I think the problem reaches a bit further than what your
> > > patch fixes.  The problem is with pages that are unlocked but have
> > > PageWriteback set.  Page reclaim may end up waiting for these pages and
> > > thus any memory allocation with __GFP_FS set can block on these. So in our
> > > current setting page writeback must not block on anything that can be held
> > > while doing memory allocation with __GFP_FS set. Page lock is just one of
> > > these possibilities, wait_on_page_writeback() in
> > > mpage_prepare_extent_to_map() is another suspect and there may be more. Or
> > > to say it differently, if there's lock A and GFP_KERNEL allocation can
> > > happen under lock A, then A cannot be taken by the writeback path. This is
> > > actually pretty subtle deadlock possibility and our current lockdep
> > > instrumentation isn't going to catch this.
> > >
> > 
> > Thanks for the nice summary, it's true that a lock A held in both
> > writeback path and memory reclaim can end up with deadlock.
> > 
> > Fortunately, so far there're only deadlock reports of page's lock bit
> > and writeback bit in both ext4 and btrfs[1].  I think
> > wait_on_page_writeback() would be OK as it's been protected by page
> > lock.
> > 
> > [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01d658f2ca3c85c1ffb20b306e30d16197000ce7
> 
> Yes, but that may just mean that the other deadlocks are just harder to
> hit...
>

Yes, we hit the "page lock & writeback" deadlock only when charging pte
memory to memcg (it doesn't show up without the charging), but even so, I
failed to work out a reproducer.

(Anyway, as a workaround we disabled charging pte memory to memcg
in order to avoid such lock inversions.)

> > > So I see two ways how to fix this properly:
> > > 
> > > 1) Change ext4 code to always submit the bio once we have a full page
> > > prepared for writing. This may be relatively simple but has a higher CPU
> > > overhead for bio allocation & freeing (actual IO won't really differ since
> > > the plugging code should take care of merging the submitted bios). XFS
> > > seems to be doing this.
> > 
> > Seems that that's the safest way to do it, but as you said there's
> > some tradeoff.
> > 
> > (Just took a look at xfs's writepages, xfs also did the page
> > collection if there're adjacent pages in xfs_add_to_ioend(), and since
> > xfs_vm_writepages() is using the generic helper write_cache_pages()
> > which calls lock_page() as well, it's still possible to run into the
> > above kind of deadlock.)
> 
> Originally I thought XFS doesn't have this problem but now when I look
> again, you are right that their ioend may accumulate more pages to write
> and so they are prone to the same deadlock ext4 is. Added XFS list to CC.
> 
> > > 2) Change the code to unlock the page only when we submit the bio.
> > 
> > This sounds doable but not good IMO, the concern is that page locks
> > can be held for too long, or if we do 2), submitting one bio per page
> > in 1) is also needed.
> 
> Hum, you're right that page lock hold times may increase noticeably and
> that's not very good. Ideally we'd need a way to submit whatever we have
> prepared when we are going to sleep but there's no easy way to do that.
> Hum... except if we somehow hooked the bio plugging mechanism we have. And
> actually it seems a mechanism for unplug callbacks is already implemented
> (blk_check_plugged()), so our writepages() functions could just
> add their callback there; on schedule the unplug callbacks will get called and
> we can submit the bio we have accumulated so far in our writepages context.
> So I think using this will be the best option. We might just add a variant
> of blk_check_plugged() that will just add passed in blk_plug_cb structure
> as all filesystems will likely just want to embed this in their writepages
> context structure instead of allocating it with GFP_ATOMIC...
>

Great, the blk_check_plugged way really makes sense to me.

I was wondering if it would be OK to just use the existing
blk_check_plugged() helper with its GFP_ATOMIC allocation inside, since
the call to blk_check_plugged() is supposed to happen when we initialize
the ioend, and ext4_writepages() itself already uses GFP_KERNEL to
allocate memory for the ioend.
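
A rough, untested sketch of what that hook could look like is below;
ext4_wb_unplug() and ext4_wb_arm_unplug() are made-up names for
illustration, only blk_check_plugged(), struct blk_plug_cb,
struct mpage_da_data and ext4_io_submit() are existing pieces:

static void ext4_wb_unplug(struct blk_plug_cb *cb, bool from_schedule)
{
	struct mpage_da_data *mpd = cb->data;

	/* Flush whatever pages were collected but not yet submitted. */
	ext4_io_submit(&mpd->io_submit);
}

static void ext4_wb_arm_unplug(struct mpage_da_data *mpd)
{
	/*
	 * blk_check_plugged() adds the callback (allocated with GFP_ATOMIC)
	 * to the current plug only once; the callback is then invoked when
	 * the plug is flushed, e.g. right before this task schedules out.
	 */
	blk_check_plugged(ext4_wb_unplug, mpd, sizeof(struct blk_plug_cb));
}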

> Will you look into this or should I try to write the patch?
>

I'm kind of engaged in some backport stuff recently, so much
appreciated if you could give it a shot.

thanks,
-liubo

> 								Honza
> 
> > > > task1:
> > > > [<ffffffff811aaa52>] wait_on_page_bit+0x82/0xa0
> > > > [<ffffffff811c5777>] shrink_page_list+0x907/0x960
> > > > [<ffffffff811c6027>] shrink_inactive_list+0x2c7/0x680
> > > > [<ffffffff811c6ba4>] shrink_node_memcg+0x404/0x830
> > > > [<ffffffff811c70a8>] shrink_node+0xd8/0x300
> > > > [<ffffffff811c73dd>] do_try_to_free_pages+0x10d/0x330
> > > > [<ffffffff811c7865>] try_to_free_mem_cgroup_pages+0xd5/0x1b0
> > > > [<ffffffff8122df2d>] try_charge+0x14d/0x720
> > > > [<ffffffff812320cc>] memcg_kmem_charge_memcg+0x3c/0xa0
> > > > [<ffffffff812321ae>] memcg_kmem_charge+0x7e/0xd0
> > > > [<ffffffff811b68a8>] __alloc_pages_nodemask+0x178/0x260
> > > > [<ffffffff8120bff5>] alloc_pages_current+0x95/0x140
> > > > [<ffffffff81074247>] pte_alloc_one+0x17/0x40
> > > > [<ffffffff811e34de>] __pte_alloc+0x1e/0x110
> > > > [<ffffffffa06739de>] alloc_set_pte+0x5fe/0xc20
> > > > [<ffffffff811e5d93>] do_fault+0x103/0x970
> > > > [<ffffffff811e6e5e>] handle_mm_fault+0x61e/0xd10
> > > > [<ffffffff8106ea02>] __do_page_fault+0x252/0x4d0
> > > > [<ffffffff8106ecb0>] do_page_fault+0x30/0x80
> > > > [<ffffffff8171bce8>] page_fault+0x28/0x30
> > > > [<ffffffffffffffff>] 0xffffffffffffffff
> > > > 
> > > > task2:
> > > > [<ffffffff811aadc6>] __lock_page+0x86/0xa0
> > > > [<ffffffffa02f1e47>] mpage_prepare_extent_to_map+0x2e7/0x310 [ext4]
> > > > [<ffffffffa08a2689>] ext4_writepages+0x479/0xd60
> > > > [<ffffffff811bbede>] do_writepages+0x1e/0x30
> > > > [<ffffffff812725e5>] __writeback_single_inode+0x45/0x320
> > > > [<ffffffff81272de2>] writeback_sb_inodes+0x272/0x600
> > > > [<ffffffff81273202>] __writeback_inodes_wb+0x92/0xc0
> > > > [<ffffffff81273568>] wb_writeback+0x268/0x300
> > > > [<ffffffff81273d24>] wb_workfn+0xb4/0x390
> > > > [<ffffffff810a2f19>] process_one_work+0x189/0x420
> > > > [<ffffffff810a31fe>] worker_thread+0x4e/0x4b0
> > > > [<ffffffff810a9786>] kthread+0xe6/0x100
> > > > [<ffffffff8171a9a1>] ret_from_fork+0x41/0x50
> > > > [<ffffffffffffffff>] 0xffffffffffffffff
> > > > 
> > > > task1 is waiting for the PageWriteback bit of the page that task2 has
> > > > collected in mpd->io_submit->io_bio, and tasks2 is waiting for the LOCKED
> > > > bit the page which tasks1 has locked.
> > > > 
> > > > It seems that this deadlock only happens when those pages are mapped pages
> > > > so that mpage_prepare_extent_to_map() can have pages queued in io_bio and
> > > > when waiting to lock the subsequent page.
> > > > 
> > > > Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
> > > > ---
> > > > 
> > > > Only did build test.
> > > > 
> > > >  fs/ext4/inode.c | 21 ++++++++++++++++++++-
> > > >  1 file changed, 20 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> > > > index c3d9a42c561e..becbfb292bf0 100644
> > > > --- a/fs/ext4/inode.c
> > > > +++ b/fs/ext4/inode.c
> > > > @@ -2681,7 +2681,26 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
> > > >  			if (mpd->map.m_len > 0 && mpd->next_page != page->index)
> > > >  				goto out;
> > > >  
> > > > -			lock_page(page);
> > > > +			if (!trylock_page(page)) {
> > > > +				/*
> > > > +				 * A rare race may happen between fault and
> > > > +				 * writeback,
> > > > +				 *
> > > > +				 * 1. fault may have raced in and locked this
> > > > +				 * page ahead of us, and if fault needs to
> > > > +				 * reclaim memory via shrink_page_list(), it may
> > > > +				 * also wait on the writeback pages we've
> > > > +				 * collected in our mpd->io_submit.
> > > > +				 *
> > > > +				 * 2. We have to submit mpd->io_submit->io_bio
> > > > +				 * to let memory reclaim make progress in order
> > > > +				 * to avoid the deadlock between fault and
> > > > +				 * ourselves(writeback).
> > > > +				 */
> > > > +				ext4_io_submit(&mpd->io_submit);
> > > > +				lock_page(page);
> > > > +			}
> > > > +
> > > >  			/*
> > > >  			 * If the page is no longer dirty, or its mapping no
> > > >  			 * longer corresponds to inode we are writing (which
> > > > -- 
> > > > 1.8.3.1
> > > > 
> > > -- 
> > > Jan Kara <jack@suse.com>
> > > SUSE Labs, CR
> -- 
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
Dave Chinner Nov. 29, 2018, 8:40 p.m. UTC | #10
On Thu, Nov 29, 2018 at 02:00:02PM +0100, Jan Kara wrote:
> On Thu 29-11-18 23:02:53, Dave Chinner wrote:
> > On Thu, Nov 29, 2018 at 09:52:38AM +0100, Jan Kara wrote:
> > > On Wed 28-11-18 12:11:23, Liu Bo wrote:
> > > > On Tue, Nov 27, 2018 at 12:42:49PM +0100, Jan Kara wrote:
> > > > > CCed fsdevel since this may be interesting to other filesystem developers
> > > > > as well.
> > > > > 
> > > > > On Tue 30-10-18 08:22:49, Liu Bo wrote:
> > > > > > mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> > > > > > the pipe.  But if it needs to wait for a page lock, it needs to make sure
> > > > > > and send down any pending writes so we don't deadlock with anyone who has
> > > > > > the page lock and is waiting for writeback of things inside the bio.
> > > > > 
> > > > > Thanks for report! I agree the current code has a deadlock possibility you
> > > > > describe. But I think the problem reaches a bit further than what your
> > > > > patch fixes.  The problem is with pages that are unlocked but have
> > > > > PageWriteback set.  Page reclaim may end up waiting for these pages and
> > > > > thus any memory allocation with __GFP_FS set can block on these. So in our
> > > > > current setting page writeback must not block on anything that can be held
> > > > > while doing memory allocation with __GFP_FS set. Page lock is just one of
> > > > > these possibilities, wait_on_page_writeback() in
> > > > > mpage_prepare_extent_to_map() is another suspect and there may be more. Or
> > > > > to say it differently, if there's lock A and GFP_KERNEL allocation can
> > > > > happen under lock A, then A cannot be taken by the writeback path. This is
> > > > > actually pretty subtle deadlock possibility and our current lockdep
> > > > > instrumentation isn't going to catch this.
> > > > >
> > > > 
> > > > Thanks for the nice summary, it's true that a lock A held in both
> > > > writeback path and memory reclaim can end up with deadlock.
> > > > 
> > > > Fortunately, so far there're only deadlock reports of page's lock bit
> > > > and writeback bit in both ext4 and btrfs[1].  I think
> > > > wait_on_page_writeback() would be OK as it's been protected by page
> > > > lock.
> > > > 
> > > > [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01d658f2ca3c85c1ffb20b306e30d16197000ce7
> > > 
> > > Yes, but that may just mean that the other deadlocks are just harder to
> > > hit...
> > > 
> > > > > So I see two ways how to fix this properly:
> > > > > 
> > > > > 1) Change ext4 code to always submit the bio once we have a full page
> > > > > prepared for writing. This may be relatively simple but has a higher CPU
> > > > > overhead for bio allocation & freeing (actual IO won't really differ since
> > > > > the plugging code should take care of merging the submitted bios). XFS
> > > > > seems to be doing this.
> > > > 
> > > > Seems that that's the safest way to do it, but as you said there's
> > > > some tradeoff.
> > > > 
> > > > (Just took a look at xfs's writepages, xfs also did the page
> > > > collection if there're adjacent pages in xfs_add_to_ioend(), and since
> > > > xfs_vm_writepages() is using the generic helper write_cache_pages()
> > > > which calls lock_page() as well, it's still possible to run into the
> > > > above kind of deadlock.)
> > > 
> > > Originally I thought XFS doesn't have this problem but now when I look
> > > again, you are right that their ioend may accumulate more pages to write
> > > and so they are prone to the same deadlock ext4 is. Added XFS list to CC.
> > 
> > I don't think XFS has a problem here, because the deadlock is
> > dependent on holding a lock that writeback might take and then doing
> > a GFP_KERNEL allocation. I don't think we do that anywhere in XFS -
> > the only lock that is of concern here is the ip->i_ilock, and I
> > think we always do GFP_NOFS allocations inside that lock.
> > 
> > As it is, this sort of lock vs reclaim inversion should be caught by
> > lockdep - allocations and reclaim contexts are recorded by lockdep, so
> > we get reports if we do lock A - alloc and then do reclaim - lock A.
> > We've always had problems with false positives from lockdep for
> > these situations where common XFS code can be called from GFP_KERNEL
> > valid contexts as well as reclaim or GFP_NOFS-only contexts, but I
> > don't recall ever seeing such a report for the writeback path....
> 
> I think for A == page lock, XFS may have the problem (and lockdep won't
> notice because it does not track page locks). There are some parts of
> kernel which do GFP_KERNEL allocations under page lock - pte_alloc_one() is
> one such function which allocates page tables with GFP_KERNEL and gets
> called with the faulted page locked. And I believe there are others.

Where in direct reclaim are we doing writeback to XFS?

It doesn't happen, and I've recently proposed we remove ->writepage
support from XFS altogether so that memory reclaim never, ever
tries to write pages to XFS filesystems, even from kswapd.

> So direct reclaim from pte_alloc_one() can wait for writeback on page B
> while holding lock on page A. And if B is just prepared (added to bio,
> under writeback, unlocked) but not submitted in xfs_writepages() and we
> block on lock_page(A), we have a deadlock.

Fundamentally, doing GFP_KERNEL allocations with a page lock
held violates any ordering rules we might have for multiple page
locking order. This is asking for random ABBA reclaim deadlocks to
occur, and it's not a filesystem bug - that's a bug in the page
table code. e.g if we are doing this in a filesystem/page cache
context, it's always in ascending page->index order for pages
referenced by the inode's mapping. Memory reclaim provides none of
these lock ordering guarantees.
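
(The ordering rule being referred to is that multiple page locks within
a mapping are always taken in ascending page->index order, roughly like
the untested illustration below, where lock_two_pages_in_order() is just
a made-up name.)

static void lock_two_pages_in_order(struct page *a, struct page *b)
{
	/* Always take the lower index first so two tasks can't ABBA. */
	if (a->index > b->index)
		swap(a, b);
	lock_page(a);
	if (b != a)
		lock_page(b);
}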

Indeed, pte_alloc_one() doing hard coded GFP_KERNEL allocations is a
problem we've repeatedly tried to get fixed over the past 15 years
because of the need to call vmalloc in GFP_NOFS contexts. What we've
got now is just a "blessed hack" of using task based NOFS context
via memalloc_nofs_save() to override the hard coded pte allocation
context.
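
For reference, the scoped override is used roughly like this;
pte_alloc_one_nofs() is a made-up wrapper name just to show the API,
while memalloc_nofs_save()/memalloc_nofs_restore() are the real
interface from <linux/sched/mm.h>:

static pgtable_t pte_alloc_one_nofs(struct mm_struct *mm, unsigned long addr)
{
	unsigned int nofs_flags;
	pgtable_t pte;

	nofs_flags = memalloc_nofs_save();
	/*
	 * Every allocation in this window behaves as if GFP_NOFS had been
	 * passed, even through code that hard codes GFP_KERNEL.
	 */
	pte = pte_alloc_one(mm, addr);
	memalloc_nofs_restore(nofs_flags);
	return pte;
}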

But that doesn't work with calls made directly from page faults - it has
no idea of filesystem or page locking orders for multiple page locking.
Using GFP_KERNEL while holding a page lock is a bug. Fix the damn
bug, not force everyone else who is doing things safely and
correctly to change their code.

> Generally deadlocks like these will be invisible to lockdep because it does
> not track either PageWriteback or PageLocked as a dependency.

And, because lockdep doesn't report it, it's not a bug that needs
fixing, eh?

> > If we switch away while holding a partially built bio, the only page
> > we have locked is the one we are currently trying to add to the bio.
> > Lock ordering prevents deadlocks on that one page, and all other
> > pages in the bio being built are marked as under writeback and are
> > not locked. Hence anything that wants to modify a page held in the
> > bio will block waiting for page writeback to clear, not the page
> > lock.
> 
> Yes, and the blocking on writeback of such page in direct reclaim is
> exactly one link in the deadlock chain...

So, like preventing explicit writeback in direct reclaim, we either
need to prevent direct reclaim from waiting on writeback or use
GFP_NOFS allocation context when holding a page lock. The bug is not
in the filesystem code here.

Cheers,

Dave.
Jan Kara Dec. 5, 2018, 5:06 p.m. UTC | #11
Added MM people to CC since this starts to be relevant for them.

On Fri 30-11-18 07:40:19, Dave Chinner wrote:
> On Thu, Nov 29, 2018 at 02:00:02PM +0100, Jan Kara wrote:
> > On Thu 29-11-18 23:02:53, Dave Chinner wrote:
> > > On Thu, Nov 29, 2018 at 09:52:38AM +0100, Jan Kara wrote:
> > > > On Wed 28-11-18 12:11:23, Liu Bo wrote:
> > > > > On Tue, Nov 27, 2018 at 12:42:49PM +0100, Jan Kara wrote:
> > > > > > CCed fsdevel since this may be interesting to other filesystem developers
> > > > > > as well.
> > > > > > 
> > > > > > On Tue 30-10-18 08:22:49, Liu Bo wrote:
> > > > > > > mpage_prepare_extent_to_map() tries to build up a large bio to stuff down
> > > > > > > the pipe.  But if it needs to wait for a page lock, it needs to make sure
> > > > > > > and send down any pending writes so we don't deadlock with anyone who has
> > > > > > > the page lock and is waiting for writeback of things inside the bio.
> > > > > > 
> > > > > > Thanks for report! I agree the current code has a deadlock possibility you
> > > > > > describe. But I think the problem reaches a bit further than what your
> > > > > > patch fixes.  The problem is with pages that are unlocked but have
> > > > > > PageWriteback set.  Page reclaim may end up waiting for these pages and
> > > > > > thus any memory allocation with __GFP_FS set can block on these. So in our
> > > > > > current setting page writeback must not block on anything that can be held
> > > > > > while doing memory allocation with __GFP_FS set. Page lock is just one of
> > > > > > these possibilities, wait_on_page_writeback() in
> > > > > > mpage_prepare_extent_to_map() is another suspect and there may be more. Or
> > > > > > to say it differently, if there's lock A and GFP_KERNEL allocation can
> > > > > > happen under lock A, then A cannot be taken by the writeback path. This is
> > > > > > actually pretty subtle deadlock possibility and our current lockdep
> > > > > > instrumentation isn't going to catch this.
> > > > > >
> > > > > 
> > > > > Thanks for the nice summary, it's true that a lock A held in both
> > > > > writeback path and memory reclaim can end up with deadlock.
> > > > > 
> > > > > Fortunately, so far there're only deadlock reports of page's lock bit
> > > > > and writeback bit in both ext4 and btrfs[1].  I think
> > > > > wait_on_page_writeback() would be OK as it's been protected by page
> > > > > lock.
> > > > > 
> > > > > [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01d658f2ca3c85c1ffb20b306e30d16197000ce7
> > > > 
> > > > Yes, but that may just mean that the other deadlocks are just harder to
> > > > hit...
> > > > 
> > > > > > So I see two ways how to fix this properly:
> > > > > > 
> > > > > > 1) Change ext4 code to always submit the bio once we have a full page
> > > > > > prepared for writing. This may be relatively simple but has a higher CPU
> > > > > > overhead for bio allocation & freeing (actual IO won't really differ since
> > > > > > the plugging code should take care of merging the submitted bios). XFS
> > > > > > seems to be doing this.
> > > > > 
> > > > > Seems that that's the safest way to do it, but as you said there's
> > > > > some tradeoff.
> > > > > 
> > > > > (Just took a look at xfs's writepages, xfs also did the page
> > > > > collection if there're adjacent pages in xfs_add_to_ioend(), and since
> > > > > xfs_vm_writepages() is using the generic helper write_cache_pages()
> > > > > which calls lock_page() as well, it's still possible to run into the
> > > > > above kind of deadlock.)
> > > > 
> > > > Originally I thought XFS doesn't have this problem but now when I look
> > > > again, you are right that their ioend may accumulate more pages to write
> > > > and so they are prone to the same deadlock ext4 is. Added XFS list to CC.
> > > 
> > > I don't think XFS has a problem here, because the deadlock is
> > > dependent on holding a lock that writeback might take and then doing
> > > a GFP_KERNEL allocation. I don't think we do that anywhere in XFS -
> > > the only lock that is of concern here is the ip->i_ilock, and I
> > > think we always do GFP_NOFS allocations inside that lock.
> > > 
> > > As it is, this sort of lock vs reclaim inversion should be caught by
> > > lockdep - allocations and reclaim contexts are recorded by lockdep, so
> > > we get reports if we do lock A - alloc and then do reclaim - lock A.
> > > We've always had problems with false positives from lockdep for
> > > these situations where common XFS code can be called from GFP_KERNEL
> > > valid contexts as well as reclaim or GFP_NOFS-only contexts, but I
> > > don't recall ever seeing such a report for the writeback path....
> > 
> > I think for A == page lock, XFS may have the problem (and lockdep won't
> > notice because it does not track page locks). There are some parts of
> > kernel which do GFP_KERNEL allocations under page lock - pte_alloc_one() is
> > one such function which allocates page tables with GFP_KERNEL and gets
> > called with the faulted page locked. And I believe there are others.
> 
> Where in direct reclaim are we doing writeback to XFS?
> 
> It doesn't happen, and I've recently proposed we remove ->writepage
> support from XFS altogether so that memory reclaim never, ever
> tries to write pages to XFS filesystems, even from kswapd.

Direct reclaim will never do writeback but it may still wait for writeback
that has been started by someone else. That is enough for the deadlock to
happen. But from what you write below you seem to understand that so I just
write this comment here so that others don't get confused.

> > So direct reclaim from pte_alloc_one() can wait for writeback on page B
> > while holding lock on page A. And if B is just prepared (added to bio,
> > under writeback, unlocked) but not submitted in xfs_writepages() and we
> > block on lock_page(A), we have a deadlock.
> 
> Fundamentally, doing GFP_KERNEL allocations with a page lock
> held violates any ordering rules we might have for multiple page
> locking order. This is asking for random ABBA reclaim deadlocks to
> occur, and it's not a filesystem bug - that's a bug in the page
> table code. e.g if we are doing this in a filesystem/page cache
> context, it's always in ascending page->index order for pages
> referenced by the inode's mapping. Memory reclaim provides none of
> these lock ordering guarantees.

So this is where I'd like MM people to tell their opinion. Reclaim code
tries to avoid possible deadlocks on page lock by always doing trylock on
the page. But as this example shows it is not enough once it blocks in
wait_on_page_writeback().

> Indeed, pte_alloc_one() doing hard coded GFP_KERNEL allocations is a
> problem we've repeatedly tried to get fixed over the past 15 years
> because of the need to call vmalloc in GFP_NOFS contexts. What we've
> got now is just a "blessed hack" of using task based NOFS context
> via memalloc_nofs_save() to override the hard coded pte allocation
> context.
> 
> But that doesn't work with calls made directly from page faults - it has
> no idea of filesystem or page locking orders for multiple page locking.
> Using GFP_KERNEL while holding a page lock is a bug. Fix the damn
> bug, not force everyone else who is doing things safely and
> correctly to change their code.

I'm fine with banning GFP_KERNEL allocations from under page lock. Life
will be certainly easier for filesystems ... but harder for memory reclaim
so let's see what other people think about this.

> > Generally deadlocks like these will be invisible to lockdep because it does
> > not track either PageWriteback or PageLocked as a dependency.
> 
> And, because lockdep doesn't report it, it's not a bug that needs
> fixing, eh?

The bug definitely needs fixing IMO. Real user hit it after all...

> > > If we switch away while holding a partially built bio, the only page
> > > we have locked is the one we are currently trying to add to the bio.
> > > Lock ordering prevents deadlocks on that one page, and all other
> > > pages in the bio being built are marked as under writeback and are
> > > not locked. Hence anything that wants to modify a page held in the
> > > bio will block waiting for page writeback to clear, not the page
> > > lock.
> > 
> > Yes, and the blocking on writeback of such page in direct reclaim is
> > exactly one link in the deadlock chain...
> 
> So, like preventing explicit writeback in direct reclaim, we either
> need to prevent direct reclaim from waiting on writeback or use
> GFP_NOFS allocation context when holding a page lock. The bug is not
> in the filesystem code here.

								Honza
Dave Chinner Dec. 7, 2018, 5:20 a.m. UTC | #12
On Wed, Dec 05, 2018 at 06:06:56PM +0100, Jan Kara wrote:
> Added MM people to CC since this starts to be relevant for them.
> 
> On Fri 30-11-18 07:40:19, Dave Chinner wrote:
> > On Thu, Nov 29, 2018 at 02:00:02PM +0100, Jan Kara wrote:
> > > On Thu 29-11-18 23:02:53, Dave Chinner wrote:
> > > > As it is, this sort of lock vs reclaim inversion should be caught by
> > > > lockdep - allocations and reclaim contexts are recorded by lockdep, so
> > > > we get reports if we do lock A - alloc and then do reclaim - lock A.
> > > > We've always had problems with false positives from lockdep for
> > > > these situations where common XFS code can be called from GFP_KERNEL
> > > > valid contexts as well as reclaim or GFP_NOFS-only contexts, but I
> > > > don't recall ever seeing such a report for the writeback path....
> > > 
> > > I think for A == page lock, XFS may have the problem (and lockdep won't
> > > notice because it does not track page locks). There are some parts of
> > > kernel which do GFP_KERNEL allocations under page lock - pte_alloc_one() is
> > > one such function which allocates page tables with GFP_KERNEL and gets
> > > called with the faulted page locked. And I believe there are others.
> > 
> > Where in direct reclaim are we doing writeback to XFS?
> > 
> > It doesn't happen, and I've recently proposed we remove ->writepage
> > support from XFS altogether so that memory reclaim never, ever
> > tries to write pages to XFS filesystems, even from kswapd.
> 
> Direct reclaim will never do writeback but it may still wait for writeback
> that has been started by someone else. That is enough for the deadlock to
> happen. But from what you write below you seem to understand that so I just
> write this comment here so that others don't get confused.
> 
> > > So direct reclaim from pte_alloc_one() can wait for writeback on page B
> > > while holding lock on page A. And if B is just prepared (added to bio,
> > > under writeback, unlocked) but not submitted in xfs_writepages() and we
> > > block on lock_page(A), we have a deadlock.
> > 
> > Fundamentally, doing GFP_KERNEL allocations with a page lock
> > held violates any ordering rules we might have for multiple page
> > locking order. This is asking for random ABBA reclaim deadlocks to
> > occur, and it's not a filesystem bug - that's a bug in the page
> > table code. e.g if we are doing this in a filesystem/page cache
> > context, it's always in ascending page->index order for pages
> > referenced by the inode's mapping. Memory reclaim provides none of
> > these lock ordering guarantees.
> 
> So this is where I'd like MM people to tell their opinion. Reclaim code
> tries to avoid possible deadlocks on page lock by always doing trylock on
> the page. But as this example shows it is not enough once it blocks in
> wait_on_page_writeback().

I think it only does this in a "legacy memcg" case, according to the
comment in shrink_page_list. Which is, apparently, a hack around the
fact that memcgs didn't used to have dirty page throttling. AFAIA,
balance_dirty_pages() has had memcg-based throttling for some time
now, so that kinda points to stale reclaim algorithms, right?
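
For reference, the PageWriteback handling in shrink_page_list() goes
roughly like this (paraphrased from memory rather than copied verbatim;
sane_reclaim() is the helper the current code uses to tell legacy memcg
reclaim apart):

	if (PageWriteback(page)) {
		if (current_is_kswapd() && PageReclaim(page) &&
		    test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
			/* kswapd already saw too much writeback: don't stall */
			goto activate_locked;
		} else if (sane_reclaim(sc) || !PageReclaim(page) ||
			   !may_enter_fs) {
			/* global / cgroup v2 reclaim: tag the page, move on */
			SetPageReclaim(page);
			goto activate_locked;
		} else {
			/* legacy memcg: block until writeback completes */
			unlock_page(page);
			wait_on_page_writeback(page);
			/* and then retry the same page */
		}
	}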

> > > Generally deadlocks like these will be invisible to lockdep because it does
> > > not track either PageWriteback or PageLocked as a dependency.
> > 
> > And, because lockdep doesn't report it, it's not a bug that needs
> > fixing, eh?
> 
> The bug definitely needs fixing IMO. Real user hit it after all...

Sorry, I left off the <sarcasm> tag. I'm so used to people ignoring
locking problems until someone adds a lockdep tag to catch that
case....

Cheers,

Dave.
Michal Hocko Dec. 7, 2018, 7:16 a.m. UTC | #13
On Fri 07-12-18 16:20:51, Dave Chinner wrote:
> On Wed, Dec 05, 2018 at 06:06:56PM +0100, Jan Kara wrote:
> > Added MM people to CC since this starts to be relevant for them.
> > 
> > On Fri 30-11-18 07:40:19, Dave Chinner wrote:
> > > On Thu, Nov 29, 2018 at 02:00:02PM +0100, Jan Kara wrote:
> > > > On Thu 29-11-18 23:02:53, Dave Chinner wrote:
> > > > > As it is, this sort of lock vs reclaim inversion should be caught by
> > > > > lockdep - allocations and reclaim contexts are recorded by lockdep, so
> > > > > we get reports if we do lock A - alloc and then do reclaim - lock A.
> > > > > We've always had problems with false positives from lockdep for
> > > > > these situations where common XFS code can be called from GFP_KERNEL
> > > > > valid contexts as well as reclaim or GFP_NOFS-only contexts, but I
> > > > > don't recall ever seeing such a report for the writeback path....
> > > > 
> > > > I think for A == page lock, XFS may have the problem (and lockdep won't
> > > > notice because it does not track page locks). There are some parts of
> > > > kernel which do GFP_KERNEL allocations under page lock - pte_alloc_one() is
> > > > one such function which allocates page tables with GFP_KERNEL and gets
> > > > called with the faulted page locked. And I believe there are others.
> > > 
> > > Where in direct reclaim are we doing writeback to XFS?
> > > 
> > > It doesn't happen, and I've recently proposed we remove ->writepage
> > > support from XFS altogether so that memory reclaim never, ever
> > > tries to write pages to XFS filesystems, even from kswapd.
> > 
> > Direct reclaim will never do writeback but it may still wait for writeback
> > that has been started by someone else. That is enough for the deadlock to
> > happen. But from what you write below you seem to understand that so I just
> > write this comment here so that others don't get confused.
> > 
> > > > So direct reclaim from pte_alloc_one() can wait for writeback on page B
> > > > while holding lock on page A. And if B is just prepared (added to bio,
> > > > under writeback, unlocked) but not submitted in xfs_writepages() and we
> > > > block on lock_page(A), we have a deadlock.
> > > 
> > > Fundamentally, doing GFP_KERNEL allocations with a page lock
> > > held violates any ordering rules we might have for multiple page
> > > locking order. This is asking for random ABBA reclaim deadlocks to
> > > occur, and it's not a filesystem bug - that's a bug in the page
> > > table code. e.g if we are doing this in a filesystem/page cache
> > > context, it's always in ascending page->index order for pages
> > > referenced by the inode's mapping. Memory reclaim provides none of
> > > these lock ordering guarantees.
> > 
> > So this is where I'd like MM people to tell their opinion. Reclaim code
> > tries to avoid possible deadlocks on page lock by always doing trylock on
> > the page. But as this example shows it is not enough once it blocks in
> > wait_on_page_writeback().
> 
> I think it only does this in a "legacy memcg" case, according to the
> comment in shrink_page_list. Which is, apparently, a hack around the
> fact that memcgs didn't used to have dirty page throttling. AFAIA,
> balance_dirty_pages() has had memcg-based throttling for some time
> now, so that kinda points to stale reclaim algorithms, right?

Memcg v1 indeed doesn't have any dirty IO throttling and this is a
poor man's workaround. We still do not have that AFAIK and I do not know
of an elegant way around that. Fortunately we shouldn't have that many
GFP_KERNEL | __GFP_ACCOUNT allocations under page lock and we can work
around this specific one quite easily. I haven't tested this yet but the
following should work

diff --git a/mm/memory.c b/mm/memory.c
index 4ad2d293ddc2..59c98eeb0260 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2993,6 +2993,16 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	vm_fault_t ret;
 
+	/*
+	 * Preallocate pte before we take page_lock because this might lead to
+	 * deadlocks for memcg reclaim which waits for pages under writeback.
+	 */
+	if (!vmf->prealloc_pte) {
+		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm, vmf->address);
+		if (!vmf->prealloc_pte)
+			return VM_FAULT_OOM;
+	}
+
 	ret = vma->vm_ops->fault(vmf);
 	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY |
 			    VM_FAULT_DONE_COW)))

Is there any reliable reproducer?
Michal Hocko Dec. 7, 2018, 11:20 a.m. UTC | #14
On Fri 07-12-18 08:16:15, Michal Hocko wrote:
[...]
> Memcg v1 indeed doesn't have any dirty IO throttling and this is a
> poor man's workaround. We still do not have that AFAIK and I do not know
> of an elegant way around that. Fortunately we shouldn't have that many
> GFP_KERNEL | __GFP_ACCOUNT allocations under page lock and we can work
> around this specific one quite easily. I haven't tested this yet but the
> following should work
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 4ad2d293ddc2..59c98eeb0260 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2993,6 +2993,16 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
>  	struct vm_area_struct *vma = vmf->vma;
>  	vm_fault_t ret;
>  
> +	/*
> +	 * Preallocate pte before we take page_lock because this might lead to
> +	 * deadlocks for memcg reclaim which waits for pages under writeback.
> +	 */
> +	if (!vmf->prealloc_pte) {
> +		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm, vmf->address);
> +		if (!vmf->prealloc_pte)
> +			return VM_FAULT_OOM;
> +	}
> +
>  	ret = vma->vm_ops->fault(vmf);
>  	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY |
>  			    VM_FAULT_DONE_COW)))

This is too eager to allocate pte even when it is not really needed.
Jack has also pointed out that I am missing a write barrier. So here we
go with an updated patch. This is essentially what fault around code
does.

diff --git a/mm/memory.c b/mm/memory.c
index 4ad2d293ddc2..1a73d2d4659e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2993,6 +2993,17 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	vm_fault_t ret;
 
+	/*
+	 * Preallocate pte before we take page_lock because this might lead to
+	 * deadlocks for memcg reclaim which waits for pages under writeback.
+	 */
+	if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {
+		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm, vmf->address);
+		if (!vmf->prealloc_pte)
+			return VM_FAULT_OOM;
+		smp_wmb(); /* See comment in __pte_alloc() */
+	}
+
 	ret = vma->vm_ops->fault(vmf);
 	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY |
 			    VM_FAULT_DONE_COW)))
Liu Bo Dec. 7, 2018, 6:51 p.m. UTC | #15
On Fri, Dec 07, 2018 at 12:20:36PM +0100, Michal Hocko wrote:
> On Fri 07-12-18 08:16:15, Michal Hocko wrote:
> [...]
> > Memcg v1 indeed doesn't have any dirty IO throttling and this is a
> > poor's man workaround. We still do not have that AFAIK and I do not know
> > of an elegant way around that. Fortunatelly we shouldn't have that many
> > GFP_KERNEL | __GFP_ACCOUNT allocations under page lock and we can work
> > around this specific one quite easily. I haven't tested this yet but the
> > following should work
> > 
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 4ad2d293ddc2..59c98eeb0260 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2993,6 +2993,16 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
> >  	struct vm_area_struct *vma = vmf->vma;
> >  	vm_fault_t ret;
> >  
> > +	/*
> > +	 * Preallocate pte before we take page_lock because this might lead to
> > +	 * deadlocks for memcg reclaim which waits for pages under writeback.
> > +	 */
> > +	if (!vmf->prealloc_pte) {
> > +		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm, vmf->address);
> > +		if (!vmf->prealloc_pte)
> > +			return VM_FAULT_OOM;
> > +	}
> > +
> >  	ret = vma->vm_ops->fault(vmf);
> >  	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY |
> >  			    VM_FAULT_DONE_COW)))
> 
> This is too eager to allocate pte even when it is not really needed.
> Jack has also pointed out that I am missing a write barrier. So here we
> go with an updated patch. This is essentially what fault around code
> does.
> 

Makes sense to me; unfortunately we don't have a local reproducer to verify it,
and we've disabled CONFIG_MEMCG_KMEM to work around the problem.  Given the
stack trace I posted, the patch should address the deadlock at least.

thanks,
-liubo

> diff --git a/mm/memory.c b/mm/memory.c
> index 4ad2d293ddc2..1a73d2d4659e 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2993,6 +2993,17 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
>  	struct vm_area_struct *vma = vmf->vma;
>  	vm_fault_t ret;
>  
> +	/*
> +	 * Preallocate pte before we take page_lock because this might lead to
> +	 * deadlocks for memcg reclaim which waits for pages under writeback.
> +	 */
> +	if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {
> +		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm, vmf->address);
> +		if (!vmf->prealloc_pte)
> +			return VM_FAULT_OOM;
> +		smp_wmb(); /* See comment in __pte_alloc() */
> +	}
> +
>  	ret = vma->vm_ops->fault(vmf);
>  	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY |
>  			    VM_FAULT_DONE_COW)))
> -- 
> Michal Hocko
> SUSE Labs
Michal Hocko Dec. 10, 2018, 5:59 p.m. UTC | #16
On Fri 07-12-18 10:51:04, Liu Bo wrote:
> On Fri, Dec 07, 2018 at 12:20:36PM +0100, Michal Hocko wrote:
> > On Fri 07-12-18 08:16:15, Michal Hocko wrote:
> > [...]
> > > Memcg v1 indeed doesn't have any dirty IO throttling and this is a
> > > poor man's workaround. We still do not have that AFAIK and I do not know
> > > of an elegant way around that. Fortunately we shouldn't have that many
> > > GFP_KERNEL | __GFP_ACCOUNT allocations under page lock and we can work
> > > around this specific one quite easily. I haven't tested this yet but the
> > > following should work
> > > 
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 4ad2d293ddc2..59c98eeb0260 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -2993,6 +2993,16 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
> > >  	struct vm_area_struct *vma = vmf->vma;
> > >  	vm_fault_t ret;
> > >  
> > > +	/*
> > > +	 * Preallocate pte before we take page_lock because this might lead to
> > > +	 * deadlocks for memcg reclaim which waits for pages under writeback.
> > > +	 */
> > > +	if (!vmf->prealloc_pte) {
> > > +		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm, vmf->address);
> > > +		if (!vmf->prealloc_pte)
> > > +			return VM_FAULT_OOM;
> > > +	}
> > > +
> > >  	ret = vma->vm_ops->fault(vmf);
> > >  	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY |
> > >  			    VM_FAULT_DONE_COW)))
> > 
> > This is too eager to allocate pte even when it is not really needed.
> > Jack has also pointed out that I am missing a write barrier. So here we
> > go with an updated patch. This is essentially what fault around code
> > does.
> > 
> 
> Makes sense to me; unfortunately we don't have a local reproducer to verify it,
> and we've disabled CONFIG_MEMCG_KMEM to work around the problem.  Given the
> stack trace I posted, the patch should address the deadlock at least.

OK, I will send a full patch with the changelog tomorrow.
diff mbox series

Patch

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index c3d9a42c561e..becbfb292bf0 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2681,7 +2681,26 @@  static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 			if (mpd->map.m_len > 0 && mpd->next_page != page->index)
 				goto out;
 
-			lock_page(page);
+			if (!trylock_page(page)) {
+				/*
+				 * A rare race may happen between fault and
+				 * writeback,
+				 *
+				 * 1. fault may have raced in and locked this
+				 * page ahead of us, and if fault needs to
+				 * reclaim memory via shrink_page_list(), it may
+				 * also wait on the writeback pages we've
+				 * collected in our mpd->io_submit.
+				 *
+				 * 2. We have to submit mpd->io_submit->io_bio
+				 * to let memory reclaim make progress in order
+				 * to avoid the deadlock between fault and
+				 * ourselves(writeback).
+				 */
+				ext4_io_submit(&mpd->io_submit);
+				lock_page(page);
+			}
+
 			/*
 			 * If the page is no longer dirty, or its mapping no
 			 * longer corresponds to inode we are writing (which