
bounce: fix bug, avoid flushing dcache on slab pages from jbd2.

Message ID 20130313011020.GA5313@blackbox.djwong.org
State Not Applicable, archived

Commit Message

Darrick Wong March 13, 2013, 1:10 a.m. UTC
On Tue, Mar 12, 2013 at 03:32:21PM -0700, Andrew Morton wrote:
> On Fri, 08 Mar 2013 20:37:36 +0800 Shuge <shugelinux@gmail.com> wrote:
> 
> > The bounce code accepts slab pages from jbd2 and flushes dcache on them.
> > When VM_DEBUG is enabled, this triggers a VM_BUG_ON in page_mapping().
> > So, check PageSlab to avoid it in __blk_queue_bounce().
> > 
> > Bug URL: http://lkml.org/lkml/2013/3/7/56
> > 
> > ...
> >
> > --- a/mm/bounce.c
> > +++ b/mm/bounce.c
> > @@ -214,7 +214,8 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> >  		if (rw == WRITE) {
> >  			char *vto, *vfrom;
> > -			flush_dcache_page(from->bv_page);
> > +			if (unlikely(!PageSlab(from->bv_page)))
> > +				flush_dcache_page(from->bv_page);
> >  			vto = page_address(to->bv_page) + to->bv_offset;
> >  			vfrom = kmap(from->bv_page) + from->bv_offset;
> >  			memcpy(vto, vfrom, to->bv_len);
> 
> I guess this is triggered by Catalin's f1a0c4aa0937975b ("arm64: Cache
> maintenance routines"), which added a page_mapping() call to arm64's
> arch/arm64/mm/flush.c:flush_dcache_page().
> 
> What's happening is that jbd2 is using kmalloc() to allocate buffer_head
> data.  That gets submitted down the BIO layer and __blk_queue_bounce()
> calls flush_dcache_page() which in the arm64 case calls page_mapping()
> and page_mapping() does VM_BUG_ON(PageSlab) and splat.
> 
> The unusual thing about all of this is that the payload for some disk
> IO is coming from kmalloc, rather than being a user page.  It's oddball
> but we've done this for ages and should continue to support it.
> 
> 
> Now, the page from kmalloc() cannot be in highmem, so why did the
> bounce code decide to bounce it?
> 
> __blk_queue_bounce() does
> 
> 		/*
> 		 * is destination page below bounce pfn?
> 		 */
> 		if (page_to_pfn(page) <= queue_bounce_pfn(q) && !force)
> 			continue;
> 
> and `force' comes from must_snapshot_stable_pages().  But
> must_snapshot_stable_pages() must have returned false, because if it
> had returned true then it would have been must_snapshot_stable_pages()
> which went BUG, because must_snapshot_stable_pages() calls page_mapping().
> 
> So my tentative diagnosis is that arm64 is fishy.  A page which was
> allocated via jbd2_alloc(GFP_NOFS)->kmem_cache_alloc() ended up being
> above arm64's queue_bounce_pfn().  Can you please do a bit of
> investigation to work out if this is what is happening?  Find out why
> __blk_queue_bounce() decided to bounce a page which shouldn't have been
> bounced?

That sure is strange.  I didn't see any obvious reasons why we'd end up with a
kmalloc above queue_bounce_pfn().  But then I don't have any arm64s either.

> This is all terribly fragile :( afaict if someone sets
> bdi_cap_stable_pages_required() against that jbd2 queue, we're going to
> hit that BUG_ON() again, via must_snapshot_stable_pages()'s
> page_mapping() call.  (Darrick, this means you ;))

Wheeee.  You're right, we shouldn't be calling page_mapping on slab pages.
We can keep walking the bio segments to find a non-slab page that can tell us
MS_SNAP_STABLE is set, since we probably won't need the bounce buffer anyway.

How does something like this look?  (+ the patch above)

--D

Subject: [PATCH] mm: Don't blow up on slab pages being written to disk

Don't assume that all pages attached to a bio are non-slab pages.  This happens
if (for example) jbd2 allocates a buffer out of the slab to hold frozen data.
If we encounter a slab page, just ignore the page and keep searching.
Hopefully filesystems are smart enough to guarantee that slab pages won't
be dirtied while they're also being written to disk.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 mm/bounce.c |    2 ++
 1 file changed, 2 insertions(+)

--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Shuge March 13, 2013, 3:35 a.m. UTC | #1
Hi all
>>> The bounce code accepts slab pages from jbd2 and flushes dcache on them.
>>> When VM_DEBUG is enabled, this triggers a VM_BUG_ON in page_mapping().
>>> So, check PageSlab to avoid it in __blk_queue_bounce().
>>>
>>> Bug URL: http://lkml.org/lkml/2013/3/7/56
>>>
>>> ...
>>>
>> ......
>>
> That sure is strange.  I didn't see any obvious reasons why we'd end up with a
>
......

     Well, this problem appears not only on arm64, but also on arm32. My
kernel version is 3.3.0 and the arch is arm32.
The problem should still exist in the newest kernel.
     I agree with Darrick's modification. Hmm, even if
CONFIG_NEED_BOUNCE_POOL is not set, dcache is still flushed on
the pages of b_frozen_data, some of which are allocated by kmem_cache_alloc.
     As we know, jbd2_alloc allocates a buffer from a jbd2_xk slab pool
when the size is smaller than PAGE_SIZE.
The b_frozen_data is not mapped to userspace, so there is no aliasing cache.
It can be lazily flushed or handled some other way. Is that right?

Andrew Morton March 13, 2013, 4:11 a.m. UTC | #2
On Wed, 13 Mar 2013 11:35:15 +0800 Shuge <shugelinux@gmail.com> wrote:

> Hi all
> >>> The bounce code accepts slab pages from jbd2 and flushes dcache on them.
> >>> When VM_DEBUG is enabled, this triggers a VM_BUG_ON in page_mapping().
> >>> So, check PageSlab to avoid it in __blk_queue_bounce().
> >>>
> >>> Bug URL: http://lkml.org/lkml/2013/3/7/56
> >>>
> >>> ...
> >>>
> >> ......
> >>
> > That sure is strange.  I didn't see any obvious reasons why we'd end up with a
> >
> ......
> 
>      Well, this problem appears not only on arm64, but also on arm32. My 
> kernel version is 3.3.0 and the arch is arm32.
> The problem should still exist in the newest kernel.
>      I agree with Darrick's modification. Hmm, even if 
> CONFIG_NEED_BOUNCE_POOL is not set, dcache is still flushed on
> the pages of b_frozen_data, some of which are allocated by kmem_cache_alloc.
>      As we know, jbd2_alloc allocates a buffer from a jbd2_xk slab pool 
> when the size is smaller than PAGE_SIZE.
> The b_frozen_data is not mapped to userspace, so there is no aliasing cache.
> It can be lazily flushed or handled some other way. Is that right?

Please reread my email.  The page at b_frozen_data was allocated with
GFP_NOFS.  Hence it should not need bounce treatment (if arm is
anything like x86).

And yet it *did* receive bounce treatment.  Why?
Jan Kara March 13, 2013, 8:50 a.m. UTC | #3
On Tue 12-03-13 18:10:20, Darrick J. Wong wrote:
> On Tue, Mar 12, 2013 at 03:32:21PM -0700, Andrew Morton wrote:
> > On Fri, 08 Mar 2013 20:37:36 +0800 Shuge <shugelinux@gmail.com> wrote:
> > 
> > > The bounce code accepts slab pages from jbd2 and flushes dcache on them.
> > > When VM_DEBUG is enabled, this triggers a VM_BUG_ON in page_mapping().
> > > So, check PageSlab to avoid it in __blk_queue_bounce().
> > > 
> > > Bug URL: http://lkml.org/lkml/2013/3/7/56
> > > 
> > > ...
> > >
> > > --- a/mm/bounce.c
> > > +++ b/mm/bounce.c
> > > @@ -214,7 +214,8 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> > >  		if (rw == WRITE) {
> > >  			char *vto, *vfrom;
> > > -			flush_dcache_page(from->bv_page);
> > > +			if (unlikely(!PageSlab(from->bv_page)))
> > > +				flush_dcache_page(from->bv_page);
> > >  			vto = page_address(to->bv_page) + to->bv_offset;
> > >  			vfrom = kmap(from->bv_page) + from->bv_offset;
> > >  			memcpy(vto, vfrom, to->bv_len);
> > 
> > I guess this is triggered by Catalin's f1a0c4aa0937975b ("arm64: Cache
> > maintenance routines"), which added a page_mapping() call to arm64's
> > arch/arm64/mm/flush.c:flush_dcache_page().
> > 
> > What's happening is that jbd2 is using kmalloc() to allocate buffer_head
> > data.  That gets submitted down the BIO layer and __blk_queue_bounce()
> > calls flush_dcache_page() which in the arm64 case calls page_mapping()
> > and page_mapping() does VM_BUG_ON(PageSlab) and splat.
> > 
> > The unusual thing about all of this is that the payload for some disk
> > IO is coming from kmalloc, rather than being a user page.  It's oddball
> > but we've done this for ages and should continue to support it.
> > 
> > 
> > Now, the page from kmalloc() cannot be in highmem, so why did the
> > bounce code decide to bounce it?
> > 
> > __blk_queue_bounce() does
> > 
> > 		/*
> > 		 * is destination page below bounce pfn?
> > 		 */
> > 		if (page_to_pfn(page) <= queue_bounce_pfn(q) && !force)
> > 			continue;
> > 
> > and `force' comes from must_snapshot_stable_pages().  But
> > must_snapshot_stable_pages() must have returned false, because if it
> > had returned true then it would have been must_snapshot_stable_pages()
> > which went BUG, because must_snapshot_stable_pages() calls page_mapping().
> > 
> > So my tentative diagnosis is that arm64 is fishy.  A page which was
> > allocated via jbd2_alloc(GFP_NOFS)->kmem_cache_alloc() ended up being
> > above arm64's queue_bounce_pfn().  Can you please do a bit of
> > investigation to work out if this is what is happening?  Find out why
> > __blk_queue_bounce() decided to bounce a page which shouldn't have been
> > bounced?
> 
> That sure is strange.  I didn't see any obvious reasons why we'd end up with a
> kmalloc above queue_bounce_pfn().  But then I don't have any arm64s either.
> 
> > This is all terribly fragile :( afaict if someone sets
> > bdi_cap_stable_pages_required() against that jbd2 queue, we're going to
> > hit that BUG_ON() again, via must_snapshot_stable_pages()'s
> > page_mapping() call.  (Darrick, this means you ;))
> 
> Wheeee.  You're right, we shouldn't be calling page_mapping on slab pages.
> We can keep walking the bio segments to find a non-slab page that can tell us
> MS_SNAP_STABLE is set, since we probably won't need the bounce buffer anyway.
> 
> How does something like this look?  (+ the patch above)
  Umm, this won't quite work. We can have a bio which has just a PageSlab
page attached, and so you won't be able to get to the superblock. Heh, isn't
the whole page_mapping() thing in must_snapshot_stable_pages() wrong? When we
do direct IO, these pages come directly from userspace and hell knows where
they come from. Definitely their page_mapping() doesn't give us anything
useful... Sorry for not realizing this earlier when reviewing the patch.

... remembering why we need to get to sb and why ext3 needs this ... So
maybe a better solution would be to have a bio flag meaning that pages need
bouncing? And we would set it from filesystems that need it - in case of
ext3 only writeback of data from kjournald actually needs to bounce the
pages. Thoughts?

								Honza
Russell King - ARM Linux March 13, 2013, 9:42 a.m. UTC | #4
On Tue, Mar 12, 2013 at 09:11:38PM -0700, Andrew Morton wrote:
> Please reread my email.  The page at b_frozen_data was allocated with
> GFP_NOFS.  Hence it should not need bounce treatment (if arm is
> anything like x86).
> 
> And yet it *did* receive bounce treatment.  Why?

If I had to guess, it's because you've uncovered a bug in the utter crap
that we call a "dma mask".

When is a mask not a mask?  When it is used as a numerical limit.  When
is a mask really a mask?  When it indicates which bits are significant in
a DMA address.

The problem here is that there's a duality in the way the mask is used,
and that is caused by memory on x86 always starting at physical address
zero.  The problem is this:

On ARM, we have some platforms which offset the start of physical memory.
This offset can be significant - maybe 3GB.  However, only a limited
amount of that memory may be DMA-able.  So, we may end up with the
maximum physical address of DMA-able memory being 3GB + 64MB for example,
or 0xc4000000, because the DMA controller only has 26 address lines.  So,
this brings up the problem of whether we set the DMA mask to 0xc3ffffff
or 0x03ffffff.

There are places in the kernel which assume that DMA masks are a set of
zero bits followed by a set of one bits, and nothing else...

Now, max_low_pfn is initialized this way:

/**
 * init_bootmem - register boot memory
 * @start: pfn where the bitmap is to be placed
 * @pages: number of available physical pages
 *
 * Returns the number of bytes needed to hold the bitmap.
 */
unsigned long __init init_bootmem(unsigned long start, unsigned long pages)
{
        max_low_pfn = pages;
        min_low_pfn = start;
        return init_bootmem_core(NODE_DATA(0)->bdata, start, 0, pages);
}

So, min_low_pfn is the PFN offset of the start of physical memory (so
3GB >> PAGE_SHIFT) and max_low_pfn ends up being the number of pages,
_not_ the maximum PFN value - if it were to be the maximum PFN value,
then we end up with a _huge_ bootmem bitmap which may not even fit in
the available memory we have.

However, other places in the kernel treat max_low_pfn entirely
differently:

        blk_max_low_pfn = max_low_pfn - 1;

void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask)
{
        unsigned long b_pfn = dma_mask >> PAGE_SHIFT;

        if (b_pfn < blk_max_low_pfn)
                dma = 1;
        q->limits.bounce_pfn = b_pfn;

And then we have stuff doing this:

	page_to_pfn(bv->bv_page) > queue_bounce_pfn(q);
                if (page_to_pfn(page) <= queue_bounce_pfn(q) && !force)
                if (queue_bounce_pfn(q) >= blk_max_pfn && !must_bounce)

So, "max_low_pfn" is totally and utterly confused in the kernel as to
what it is, and it only really works on x86 (and other architectures)
that start their memory at physical address 0 (because then it doesn't
matter how you interpret it.)

So the whole thing about "is a DMA mask a mask or a maximum address"
is totally confused in the kernel in such a way that platforms like ARM
get a very hard time, and what we now have in place has worked 100%
fine for all the platforms we've had for the last 10+ years.

It's a very longstanding bug in the kernel, going all the way back to
2.2 days or so.

What to do about it, I have no idea - changing to satisfy the "DMA mask
is a maximum address" is likely to break things.  What we need is a
proper fix, and a consistent way to interpret DMA masks which works not
only on x86, but also on platforms which have limited DMA to memory
which has huge physical offsets.
Darrick Wong March 13, 2013, 7:44 p.m. UTC | #5
On Wed, Mar 13, 2013 at 09:50:21AM +0100, Jan Kara wrote:
> On Tue 12-03-13 18:10:20, Darrick J. Wong wrote:
> > On Tue, Mar 12, 2013 at 03:32:21PM -0700, Andrew Morton wrote:
> > > On Fri, 08 Mar 2013 20:37:36 +0800 Shuge <shugelinux@gmail.com> wrote:
> > > 
> > > > The bounce code accepts slab pages from jbd2 and flushes dcache on them.
> > > > When VM_DEBUG is enabled, this triggers a VM_BUG_ON in page_mapping().
> > > > So, check PageSlab to avoid it in __blk_queue_bounce().
> > > > 
> > > > Bug URL: http://lkml.org/lkml/2013/3/7/56
> > > > 
> > > > ...
> > > >
> > > > --- a/mm/bounce.c
> > > > +++ b/mm/bounce.c
> > > > @@ -214,7 +214,8 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> > > >  		if (rw == WRITE) {
> > > >  			char *vto, *vfrom;
> > > > -			flush_dcache_page(from->bv_page);
> > > > +			if (unlikely(!PageSlab(from->bv_page)))
> > > > +				flush_dcache_page(from->bv_page);
> > > >  			vto = page_address(to->bv_page) + to->bv_offset;
> > > >  			vfrom = kmap(from->bv_page) + from->bv_offset;
> > > >  			memcpy(vto, vfrom, to->bv_len);
> > > 
> > > I guess this is triggered by Catalin's f1a0c4aa0937975b ("arm64: Cache
> > > maintenance routines"), which added a page_mapping() call to arm64's
> > > arch/arm64/mm/flush.c:flush_dcache_page().
> > > 
> > > What's happening is that jbd2 is using kmalloc() to allocate buffer_head
> > > data.  That gets submitted down the BIO layer and __blk_queue_bounce()
> > > calls flush_dcache_page() which in the arm64 case calls page_mapping()
> > > and page_mapping() does VM_BUG_ON(PageSlab) and splat.
> > > 
> > > The unusual thing about all of this is that the payload for some disk
> > > IO is coming from kmalloc, rather than being a user page.  It's oddball
> > > but we've done this for ages and should continue to support it.
> > > 
> > > 
> > > Now, the page from kmalloc() cannot be in highmem, so why did the
> > > bounce code decide to bounce it?
> > > 
> > > __blk_queue_bounce() does
> > > 
> > > 		/*
> > > 		 * is destination page below bounce pfn?
> > > 		 */
> > > 		if (page_to_pfn(page) <= queue_bounce_pfn(q) && !force)
> > > 			continue;
> > > 
> > > and `force' comes from must_snapshot_stable_pages().  But
> > > must_snapshot_stable_pages() must have returned false, because if it
> > > had returned true then it would have been must_snapshot_stable_pages()
> > > which went BUG, because must_snapshot_stable_pages() calls page_mapping().
> > > 
> > > So my tentative diagnosis is that arm64 is fishy.  A page which was
> > > allocated via jbd2_alloc(GFP_NOFS)->kmem_cache_alloc() ended up being
> > > above arm64's queue_bounce_pfn().  Can you please do a bit of
> > > investigation to work out if this is what is happening?  Find out why
> > > __blk_queue_bounce() decided to bounce a page which shouldn't have been
> > > bounced?
> > 
> > That sure is strange.  I didn't see any obvious reasons why we'd end up with a
> > kmalloc above queue_bounce_pfn().  But then I don't have any arm64s either.
> > 
> > > This is all terribly fragile :( afaict if someone sets
> > > bdi_cap_stable_pages_required() against that jbd2 queue, we're going to
> > > hit that BUG_ON() again, via must_snapshot_stable_pages()'s
> > > page_mapping() call.  (Darrick, this means you ;))
> > 
> > Wheeee.  You're right, we shouldn't be calling page_mapping on slab pages.
> > We can keep walking the bio segments to find a non-slab page that can tell us
> > MS_SNAP_STABLE is set, since we probably won't need the bounce buffer anyway.
> > 
> > How does something like this look?  (+ the patch above)
>   Umm, this won't quite work. We can have a bio which has just PageSlab
> page attached and so you won't be able to get to the superblock. Heh, isn't
> the whole page_mapping() thing in must_snapshot_stable_pages() wrong? When we
> do direct IO, these pages come directly from userspace and hell knows where
> they come from. Definitely their page_mapping() doesn't give us anything
> useful... Sorry for not realizing this earlier when reviewing the patch.
> 
> ... remembering why we need to get to sb and why ext3 needs this ... So
> maybe a better solution would be to have a bio flag meaning that pages need
> bouncing? And we would set it from filesystems that need it - in case of
> ext3 only writeback of data from kjournald actually needs to bounce the
> pages. Thoughts?

What about dirty pages that don't result in journal transactions?  I think
ext3_sync_file() eventually calls ext3_ordered_writepage, which then calls
__block_write_full_page, which in turn calls submit_bh().

--D
> 
> 								Honza
> -- 
> Jan Kara <jack@suse.cz>
> SUSE Labs, CR
Jan Kara March 13, 2013, 9:02 p.m. UTC | #6
On Wed 13-03-13 12:44:29, Darrick J. Wong wrote:
> On Wed, Mar 13, 2013 at 09:50:21AM +0100, Jan Kara wrote:
> > On Tue 12-03-13 18:10:20, Darrick J. Wong wrote:
> > > On Tue, Mar 12, 2013 at 03:32:21PM -0700, Andrew Morton wrote:
> > > > On Fri, 08 Mar 2013 20:37:36 +0800 Shuge <shugelinux@gmail.com> wrote:
> > > > 
> > > > > The bounce code accepts slab pages from jbd2 and flushes dcache on them.
> > > > > When VM_DEBUG is enabled, this triggers a VM_BUG_ON in page_mapping().
> > > > > So, check PageSlab to avoid it in __blk_queue_bounce().
> > > > > 
> > > > > Bug URL: http://lkml.org/lkml/2013/3/7/56
> > > > > 
> > > > > ...
> > > > >
> > > > > --- a/mm/bounce.c
> > > > > +++ b/mm/bounce.c
> > > > > @@ -214,7 +214,8 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> > > > >  		if (rw == WRITE) {
> > > > >  			char *vto, *vfrom;
> > > > > -			flush_dcache_page(from->bv_page);
> > > > > +			if (unlikely(!PageSlab(from->bv_page)))
> > > > > +				flush_dcache_page(from->bv_page);
> > > > >  			vto = page_address(to->bv_page) + to->bv_offset;
> > > > >  			vfrom = kmap(from->bv_page) + from->bv_offset;
> > > > >  			memcpy(vto, vfrom, to->bv_len);
> > > > 
> > > > I guess this is triggered by Catalin's f1a0c4aa0937975b ("arm64: Cache
> > > > maintenance routines"), which added a page_mapping() call to arm64's
> > > > arch/arm64/mm/flush.c:flush_dcache_page().
> > > > 
> > > > What's happening is that jbd2 is using kmalloc() to allocate buffer_head
> > > > data.  That gets submitted down the BIO layer and __blk_queue_bounce()
> > > > calls flush_dcache_page() which in the arm64 case calls page_mapping()
> > > > and page_mapping() does VM_BUG_ON(PageSlab) and splat.
> > > > 
> > > > The unusual thing about all of this is that the payload for some disk
> > > > IO is coming from kmalloc, rather than being a user page.  It's oddball
> > > > but we've done this for ages and should continue to support it.
> > > > 
> > > > 
> > > > Now, the page from kmalloc() cannot be in highmem, so why did the
> > > > bounce code decide to bounce it?
> > > > 
> > > > __blk_queue_bounce() does
> > > > 
> > > > 		/*
> > > > 		 * is destination page below bounce pfn?
> > > > 		 */
> > > > 		if (page_to_pfn(page) <= queue_bounce_pfn(q) && !force)
> > > > 			continue;
> > > > 
> > > > and `force' comes from must_snapshot_stable_pages().  But
> > > > must_snapshot_stable_pages() must have returned false, because if it
> > > > had returned true then it would have been must_snapshot_stable_pages()
> > > > which went BUG, because must_snapshot_stable_pages() calls page_mapping().
> > > > 
> > > > So my tentative diagnosis is that arm64 is fishy.  A page which was
> > > > allocated via jbd2_alloc(GFP_NOFS)->kmem_cache_alloc() ended up being
> > > > above arm64's queue_bounce_pfn().  Can you please do a bit of
> > > > investigation to work out if this is what is happening?  Find out why
> > > > __blk_queue_bounce() decided to bounce a page which shouldn't have been
> > > > bounced?
> > > 
> > > That sure is strange.  I didn't see any obvious reasons why we'd end up with a
> > > kmalloc above queue_bounce_pfn().  But then I don't have any arm64s either.
> > > 
> > > > This is all terribly fragile :( afaict if someone sets
> > > > bdi_cap_stable_pages_required() against that jbd2 queue, we're going to
> > > > hit that BUG_ON() again, via must_snapshot_stable_pages()'s
> > > > page_mapping() call.  (Darrick, this means you ;))
> > > 
> > > Wheeee.  You're right, we shouldn't be calling page_mapping on slab pages.
> > > We can keep walking the bio segments to find a non-slab page that can tell us
> > > MS_SNAP_STABLE is set, since we probably won't need the bounce buffer anyway.
> > > 
> > > How does something like this look?  (+ the patch above)
> >   Umm, this won't quite work. We can have a bio which has just PageSlab
> > page attached and so you won't be able to get to the superblock. Heh, isn't
> > the whole page_mapping() thing in must_snapshot_stable_pages() wrong? When we
> > do direct IO, these pages come directly from userspace and hell knows where
> > they come from. Definitely their page_mapping() doesn't give us anything
> > useful... Sorry for not realizing this earlier when reviewing the patch.
> > 
> > ... remembering why we need to get to sb and why ext3 needs this ... So
> > maybe a better solution would be to have a bio flag meaning that pages need
> > bouncing? And we would set it from filesystems that need it - in case of
> > ext3 only writeback of data from kjournald actually needs to bounce the
> > pages. Thoughts?
> 
> What about dirty pages that don't result in journal transactions?  I think
> ext3_sync_file() eventually calls ext3_ordered_writepage, which then calls
> __block_write_full_page, which in turn calls submit_bh().
  So here we have two options:
Either we let ext3 wait the same way as other filesystems when stable pages
are required. Then only data IO from kjournald needs to be bounced (all
other IO is properly protected by PageWriteback bit).

Or we don't let ext3 wait (as it is now), keep the superblock flag saying the
fs needs bouncing, and set the bio flag in __block_write_full_page() and
kjournald based on the sb flag.

I think the first option is slightly better but I don't feel strongly
about that.

								Honza
Andrew Morton March 14, 2013, 10:46 p.m. UTC | #7
On Wed, 13 Mar 2013 22:02:16 +0100 Jan Kara <jack@suse.cz> wrote:

> > > ... remembering why we need to get to sb and why ext3 needs this ... So
> > > maybe a better solution would be to have a bio flag meaning that pages need
> > > bouncing? And we would set it from filesystems that need it - in case of
> > > ext3 only writeback of data from kjournald actually needs to bounce the
> > > pages. Thoughts?
> > 
> > What about dirty pages that don't result in journal transactions?  I think
> > ext3_sync_file() eventually calls ext3_ordered_writepage, which then calls
> > __block_write_full_page, which in turn calls submit_bh().
>   So here we have two options:
> Either we let ext3 wait the same way as other filesystems when stable pages
> are required. Then only data IO from kjournald needs to be bounced (all
> other IO is properly protected by PageWriteback bit).
> 
> Or we won't let ext3 wait (as it is now), keep the superblock flag that fs
> needs bouncing, and set the bio flag in __block_write_full_page() and
> kjournald based on the sb flag.
> 
> I think the first option is slightly better but I don't feel strongly
> about that.

It seems Just Wrong that we're dicking around with filesystem
superblocks at this level.  It's the bounce code, for heavens sake!


What the heck's going on here and why wasn't I able to work that out
from reading the code :( The need to stabilise these pages is driven by
the characteristics of the underlying device and driver stack, isn't
it?  Things like checksumming?  What else drives this requirement? 
</rant>

Because I *think* it should be sufficient to maintain this boolean in
the backing_dev.  My *guess* is that this is all here because we want
to enable stable-snapshotting on a per-fs basis rather than on a
per-device basis?  If so, why?  If not, what?



btw, local variable `bdi' in must_snapshot_stable_pages() doesn't do
anything.


None of this will stop Shuge's kernel from going splat either.
Darrick Wong March 14, 2013, 11:27 p.m. UTC | #8
On Thu, Mar 14, 2013 at 03:46:51PM -0700, Andrew Morton wrote:
> On Wed, 13 Mar 2013 22:02:16 +0100 Jan Kara <jack@suse.cz> wrote:
> 
> > > > ... remembering why we need to get to sb and why ext3 needs this ... So
> > > > maybe a better solution would be to have a bio flag meaning that pages need
> > > > bouncing? And we would set it from filesystems that need it - in case of
> > > > ext3 only writeback of data from kjournald actually needs to bounce the
> > > > pages. Thoughts?
> > > 
> > > What about dirty pages that don't result in journal transactions?  I think
> > > ext3_sync_file() eventually calls ext3_ordered_writepage, which then calls
> > > __block_write_full_page, which in turn calls submit_bh().
> >   So here we have two options:
> > Either we let ext3 wait the same way as other filesystems when stable pages
> > are required. Then only data IO from kjournald needs to be bounced (all
> > other IO is properly protected by PageWriteback bit).
> > 
> > Or we won't let ext3 wait (as it is now), keep the superblock flag that fs
> > needs bouncing, and set the bio flag in __block_write_full_page() and
> > kjournald based on the sb flag.
> > 
> > I think the first option is slightly better but I don't feel strongly
> > about that.
> 
> It seems Just Wrong that we're dicking around with filesystem
> superblocks at this level.  It's the bounce code, for heavens sake!
> 
> 
> What the heck's going on here and why wasn't I able to work that out
> from reading the code :( The need to stabilise these pages is driven by
> the characteristics of the underlying device and driver stack, isn't
> it?  Things like checksumming?  What else drives this requirement? 
> </rant>

Right now, checksumming for weird DIF/DIX devices is the only requirement for
this behavior.  In theory we can also hook checksumming iSCSI and other things
up to this, but for now they have their own solutions for keeping writeback
page contents stable.

> Because I *think* it should be sufficient to maintain this boolean in
> the backing_dev.  My *guess* is that this is all here because we want
> to enable stable-snapshotting on a per-fs basis rather than on a
> per-device basis?  If so, why?  If not, what?

Yes, we do want to enable stable-snapshotting on a per-fs basis.  Here's why:

The first time I tried to solve this problem, I simply had everything use the
bounce buffer.  That was shot down because bounce buffers add memory pressure,
there might not be free pages available when we're doing writeback, etc.

The second attempt was to simply make everything wait for writeback to finish
before dirtying pages.  That's what everything (except ext3) does now.  jbd
initiates writeback on pages without setting PG_writeback, which means that our
convenient wait_on_stable_pages is broken in this case.  Hence ext3/jbd need to
be able to stable-snapshot.  However, it's the /only/ filesystem in the kernel
that needs this.  Everything else is either ok with waiting (ext4, xfs) or
implements their own tricks (tux3, btrfs) to make stable pages work correctly.

Fixing jbd to set PG_writeback has been discussed and rejected, because it's a
lot of work and you'd end up with something rather jbd2-like.  However,
bouncing the outgoing buffers is a fairly small change to jbd.  Jan (at least a
few months ago) was ok with band-aiding ext3.

I could rip out ext3 entirely, but people seem uncomfortable with that, and it
hasn't (yet) been proven that ext4 can provide a perfect imitation of ext3.

I could also just fix up Kconfig so that you can't use a BLK_DEV_INTEGRITY
device with JBD, but that was also shot down as ridiculous.

Given that a backing_dev covers a whole disk, which could contain several
different filesystems and an ext3, I don't want to make /all/ of them use
bounce buffering just because jbd is broken.  We've already established that
bounce pages should be used only when necessary, and (as it turns out), ext3
can initiate writeout of certain dirty user data pages without needing to go
through jbd, which means that those pages don't need to be bounced either.

Therefore, this really is a per-fs thing.

> btw, local variable `bdi' in must_snapshot_stable_pages() doesn't do
> anything.
>
> None of this will stop Shuge's kernel from going splat either.

I'm not trying to fix that in this patch; his splat resulted from stuff going
on in ext4/jbd2.

--D

Patch

diff --git a/mm/bounce.c b/mm/bounce.c
index 5f89017..af34855 100644
--- a/mm/bounce.c
+++ b/mm/bounce.c
@@ -199,6 +199,8 @@  static int must_snapshot_stable_pages(struct request_queue *q, struct bio *bio)
 	 */
 	bio_for_each_segment(from, bio, i) {
 		page = from->bv_page;
+		if (PageSlab(page))
+			continue;
 		mapping = page_mapping(page);
 		if (!mapping)
 			continue;