
[BUG] fatal hang untarring 90GB file, possibly writeback related.

Message ID 20110428150827.GY4658@suse.de
State Not Applicable, archived

Commit Message

Mel Gorman April 28, 2011, 3:08 p.m. UTC
On Thu, Apr 28, 2011 at 09:25:14AM -0500, James Bottomley wrote:
> On Thu, 2011-04-28 at 15:07 +0100, Mel Gorman wrote:
> > On Thu, Apr 28, 2011 at 03:52:28PM +0200, Jan Kara wrote:
> > > On Thu 28-04-11 12:36:30, Colin Ian King wrote:
> > > > One more data point to add, I've been looking at an identical issue when
> > > > copying large amounts of data.  I bisected this - and the lockups occur
> > > > with commit 
> > > > 3e7d344970673c5334cf7b5bb27c8c0942b06126 - before that I don't see the
> > > > issue. With this commit, my file copy test locks up after ~8-10
> > > > iterations, before this commit I can copy > 100 times and don't see the
> > > > lockup.
> > >   Adding Mel to CC, I guess he'll be interested. Mel, it seems this commit
> > > of yours causes kswapd on non-preempt kernels to spin for a *long* time...
> > > 
> > 
> > I'm still thinking about the traces which do not point the finger
> > directly at compaction per se but it's possible that the change means
> > kswapd is not reclaiming like it should be.
> > 
> > To test this theory, does applying
> > [d527caf2: mm: compaction: prevent kswapd compacting memory to reduce
> > CPU usage] help?
> 
> I can answer definitively no to this.  The upstream kernel I reproduced
> this on has that patch included.
> 

So it is.

Another consequence of this patch is that when high order allocations
are in progress (is the test case fork heavy in any way for
example? alternatively, it might be something in the storage stack
that requires high-order allocs) we are no longer necessarily going
to sleep because of the should_continue_reclaim() check. This could
explain kswapd-at-99% but would only apply if CONFIG_COMPACTION is
set (does unsetting CONFIG_COMPACTION help?). If the bug only triggers
with CONFIG_COMPACTION, does the following *untested* patch help any?
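
For reference, the check in question is the tail end of
should_continue_reclaim() as it went in for 2.6.38 (quoted roughly from
memory, so treat it as an approximation rather than a verbatim copy of
the tree): while fewer than 2<<order pages have been reclaimed and the
inactive lists are still large, or compaction is not yet expected to
succeed, we keep reclaiming instead of letting kswapd sleep;

	/*
	 * If we have not reclaimed enough pages for compaction and the
	 * inactive lists are large enough, continue reclaiming
	 */
	pages_for_compaction = (2UL << sc->order);
	inactive_lru_pages = zone_nr_lru_pages(zone, sc, LRU_INACTIVE_ANON) +
				zone_nr_lru_pages(zone, sc, LRU_INACTIVE_FILE);
	if (sc->nr_reclaimed < pages_for_compaction &&
			inactive_lru_pages > pages_for_compaction)
		return true;

	/* If compaction would go ahead or the allocation would succeed, stop */
	switch (compaction_suitable(zone, sc->order)) {
	case COMPACT_PARTIAL:
	case COMPACT_CONTINUE:
		return false;
	default:
		return true;
	}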

(as a warning, I'm offline Friday until Tuesday morning. I'll try to
check mail over the weekend but it's unlikely I'll find a terminal
or be allowed to use it without an ass kicking)


Comments

James Bottomley April 28, 2011, 4:01 p.m. UTC | #1
On Thu, 2011-04-28 at 16:08 +0100, Mel Gorman wrote:
> On Thu, Apr 28, 2011 at 09:25:14AM -0500, James Bottomley wrote:
> > On Thu, 2011-04-28 at 15:07 +0100, Mel Gorman wrote:
> > > On Thu, Apr 28, 2011 at 03:52:28PM +0200, Jan Kara wrote:
> > > > On Thu 28-04-11 12:36:30, Colin Ian King wrote:
> > > > > One more data point to add, I've been looking at an identical issue when
> > > > > copying large amounts of data.  I bisected this - and the lockups occur
> > > > > with commit 
> > > > > 3e7d344970673c5334cf7b5bb27c8c0942b06126 - before that I don't see the
> > > > > issue. With this commit, my file copy test locks up after ~8-10
> > > > > iterations, before this commit I can copy > 100 times and don't see the
> > > > > lockup.
> > > >   Adding Mel to CC, I guess he'll be interested. Mel, it seems this commit
> > > > of yours causes kswapd on non-preempt kernels to spin for a *long* time...
> > > > 
> > > 
> > > I'm still thinking about the traces which do not point the finger
> > > directly at compaction per se but it's possible that the change means
> > > kswapd is not reclaiming like it should be.
> > > 
> > > To test this theory, does applying
> > > [d527caf2: mm: compaction: prevent kswapd compacting memory to reduce
> > > CPU usage] help?
> > 
> > I can answer definitively no to this.  The upstream kernel I reproduced
> > this on has that patch included.
> > 
> 
> So it is.
> 
> Another consequence of this patch is that when high order allocations
> are in progress (is the test case fork heavy in any way for
> example?

It's a simple huge untar, so it shouldn't fork.

>  alternatively, it might be something in the storage stack
> that requires high-order allocs)

I've tried switching from CFQ to deadline with no apparent change.

>  we are no longer necessarily going
> to sleep because of the should_continue_reclaim() check. This could
> explain kswapd-at-99% but would only apply if CONFIG_COMPACTION is
> set (does unsetting CONFIG_COMPACTION help?). If the bug only triggers
> with CONFIG_COMPACTION, does the following *untested* patch help any?

Turning off COMPACTION and HUGEPAGES doesn't help ... kswapd still goes
to 99% on the PREEMPT kernel, so it doesn't seem to be related.

James


Colin Ian King April 28, 2011, 5:10 p.m. UTC | #2
On Thu, 2011-04-28 at 16:08 +0100, Mel Gorman wrote:

[ text deleted ]

> Another consequence of this patch is that when high order allocations
> are in progress (is the test case fork heavy in any way for
> example? alternatively, it might be something in the storage stack
> that requires high-order allocs) we are no longer necessarily going
> to sleep because of the should_continue_reclaim() check. This could
> explain kswapd-at-99% but would only apply if CONFIG_COMPACTION is
> set (does unsetting CONFIG_COMPACTION help?). If the bug only triggers
> with CONFIG_COMPACTION, does the following *untested* patch help any?

Afraid to report this patch didn't help either.
> 
> (as a warning, I'm offline Friday until Tuesday morning. I'll try to
> check mail over the weekend but it's unlikely I'll find a terminal
> or be allowed to use it without an ass kicking)

Ditto, me too; I will pick this up Tuesday.
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 148c6e6..c74a501 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1842,15 +1842,22 @@ static inline bool should_continue_reclaim(struct zone *zone,
>  		return false;
>  
>  	/*
> -	 * If we failed to reclaim and have scanned the full list, stop.
> -	 * NOTE: Checking just nr_reclaimed would exit reclaim/compaction far
> -	 *       faster but obviously would be less likely to succeed
> -	 *       allocation. If this is desirable, use GFP_REPEAT to decide
> -	 *       if both reclaimed and scanned should be checked or just
> -	 *       reclaimed
> +	 * For direct reclaimers
> +	 *   If we failed to reclaim and have scanned the full list, stop.
> +	 *   The caller will check congestion and sleep if necessary until
> +	 *   some IO completes.
> +	 * For kswapd
> +	 *   Check just nr_reclaimed. If we are failing to reclaim, we
> +	 *   want to stop this reclaim loop, increase the priority and
> +	 *   go to sleep if necessary to allow IO a chance to complete.
> +	 *   This avoids kswapd going into a busy loop in shrink_zone()
>  	 */
> -	if (!nr_reclaimed && !nr_scanned)
> -		return false;
> +	if (!nr_reclaimed) {
> +		if (current_is_kswapd())
> +			return false;
> +		else if (!nr_scanned)
> +			return false;
> +	}
>  
>  	/*
>  	 * If we have not reclaimed enough pages for compaction and the
> @@ -1924,8 +1931,13 @@ restart:
>  
>  	/* reclaim/compaction might need reclaim to continue */
>  	if (should_continue_reclaim(zone, nr_reclaimed,
> -					sc->nr_scanned - nr_scanned, sc))
> +					sc->nr_scanned - nr_scanned, sc)) {
> +		/* Throttle direct reclaimers if congested */
> +		if (!current_is_kswapd())
> +			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
> +
>  		goto restart;
> +	}
>  
>  	throttle_vm_writeout(sc->gfp_mask);
>  }


Mel Gorman April 28, 2011, 5:18 p.m. UTC | #3
On Thu, Apr 28, 2011 at 11:56:17AM -0500, James Bottomley wrote:
> On Thu, 2011-04-28 at 11:50 -0500, James Bottomley wrote:
> > This is the output of perf record -g -a -f sleep 5
> > 
> > (hopefully the list won't choke)
> 
> Um, this one actually shows kswapd
> 
> James
> 
> ---
> 
> # Events: 6K cycles
> #
> # Overhead      Command        Shared Object                                   Symbol
> # ........  ...........  ...................  .......................................
> #
>     20.41%      kswapd0  [kernel.kallsyms]    [k] shrink_slab
>                 |
>                 --- shrink_slab
>                    |          
>                    |--99.91%-- kswapd
>                    |          kthread
>                    |          kernel_thread_helper
>                     --0.09%-- [...]
> 

Ok. I can't see how the patch "mm: vmscan: reclaim order-0 and use
compaction instead of lumpy reclaim" is related unless we are seeing
two problems that happen to manifest in a similar manner.

However, there were a number of changes made to dcache in particular
for 2.6.38. Specifically, things like dentry_kill() use trylock and are
happy to loop around if they fail to acquire anything. See things like
this for example;

static void try_prune_one_dentry(struct dentry *dentry)
        __releases(dentry->d_lock)
{
        struct dentry *parent;

        parent = dentry_kill(dentry, 0);
        /*
         * If dentry_kill returns NULL, we have nothing more to do.
         * if it returns the same dentry, trylocks failed. In either
         * case, just loop again.
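
dentry_kill() itself backs out on trylock failure and hands the same
dentry straight back to the caller, roughly like this (again an
abbreviated approximation of the 2.6.38 code, not a verbatim quote);

static struct dentry *dentry_kill(struct dentry *dentry, int ref)
        __releases(dentry->d_lock)
{
        struct inode *inode = dentry->d_inode;
        struct dentry *parent;

        /* Can't get the inode lock? Drop d_lock, hand the same dentry
         * back and let the caller loop around and retry. */
        if (inode && !spin_trylock(&inode->i_lock)) {
relock:
                spin_unlock(&dentry->d_lock);
                cpu_relax();
                return dentry; /* try again with same dentry */
        }

        parent = IS_ROOT(dentry) ? NULL : dentry->d_parent;
        if (parent && !spin_trylock(&parent->d_lock)) {
                if (inode)
                        spin_unlock(&inode->i_lock);
                goto relock;
        }
        ...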


If this in combination with many inodes being locked for whatever
reason (writeback locking them maybe?) is causing the shrinker to
return after zero progress, it could in turn cause kswapd to enter
into a loop for longish periods of time in shrink_slab here;

                while (total_scan >= SHRINK_BATCH) {
                        long this_scan = SHRINK_BATCH;
                        int shrink_ret;
                        int nr_before;

                        nr_before = (*shrinker->shrink)(shrinker, 0, gfp_mask);
                        shrink_ret = (*shrinker->shrink)(shrinker, this_scan,
                                                                gfp_mask);
                        if (shrink_ret == -1)
                                break;
                        if (shrink_ret < nr_before)
                                ret += nr_before - shrink_ret;
                        count_vm_events(SLABS_SCANNED, this_scan);
                        total_scan -= this_scan;

                        cond_resched();
                }

That would explain this trace.

>      9.98%      kswapd0  [kernel.kallsyms]    [k] shrink_zone
>                 |
>                 --- shrink_zone
>                    |          
>                    |--99.46%-- kswapd
>                    |          kthread
>                    |          kernel_thread_helper
>                    |          
>                     --0.54%-- kthread
>                               kernel_thread_helper
> 
>      7.70%      kswapd0  [kernel.kallsyms]    [k] kswapd
>                 |
>                 --- kswapd
>                     kthread
>                     kernel_thread_helper
> 
>      5.40%      kswapd0  [kernel.kallsyms]    [k] zone_watermark_ok_safe
>                 |
>                 --- zone_watermark_ok_safe
>                    |          
>                    |--72.66%-- kswapd
>                    |          kthread
>                    |          kernel_thread_helper
>                    |          
>                    |--20.88%-- sleeping_prematurely.part.12
>                    |          kswapd
>                    |          kthread
>                    |          kernel_thread_helper
>                    |          
>                     --6.46%-- kthread
>                               kernel_thread_helper
> 

We are also spending an astonishing amount of time in
sleeping_prematurely, leading me to believe we are failing to balance the
zones and are continually under the min watermark for one of the zones.
We are never going to sleep because of this check;

                if (total_scanned && (priority < DEF_PRIORITY - 2)) {
                        if (has_under_min_watermark_zone)
                                count_vm_event(KSWAPD_SKIP_CONGESTION_WAIT);
                        else
                                congestion_wait(BLK_RW_ASYNC, HZ/10);
                }

However, I think this is a secondary effect to the failure of shrinkers
to do their work. If slabs were being shrunk, one would expect us to
be getting over the min watermark.
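
To put it another way, kswapd only considers the node balanced when every
populated zone is over its high watermark for the requested order, along
the lines of this sketch (illustrative only; node_is_balanced is a made-up
name here, the real per-zone test is zone_watermark_ok_safe() as the
profile shows);

static bool node_is_balanced(pg_data_t *pgdat, int order)
{
        int i;

        for (i = 0; i < pgdat->nr_zones; i++) {
                struct zone *zone = pgdat->node_zones + i;

                if (!populated_zone(zone))
                        continue;

                /* Below the high watermark for this order? Keep kswapd awake */
                if (!zone_watermark_ok_safe(zone, order,
                                high_wmark_pages(zone), 0, 0))
                        return false;
        }

        return true;
}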

>      4.25%      kswapd0  [kernel.kallsyms]    [k] do_raw_spin_lock
>                 |
>                 --- do_raw_spin_lock
>                    |          
>                    |--77.49%-- _raw_spin_lock
>                    |          |          
>                    |          |--51.85%-- mb_cache_shrink_fn
>                    |          |          shrink_slab
>                    |          |          kswapd
>                    |          |          kthread
>                    |          |          kernel_thread_helper
>                    |          |          
>                    |           --48.15%-- mem_cgroup_soft_limit_reclaim
>                    |                     kswapd
>                    |                     kthread
>                    |                     kernel_thread_helper
>                    |          

Way hey, cgroups are also in the mix. How jolly.

Is systemd a common element of the machines hitting this bug by any
chance?

The remaining traces seem to be follow-on damage related to the three
issues of "shrinkers are bust in some manner" causing "we are not
getting over the min watermark" and as a side-show "we are spending lots
of time doing something unspecified but unhelpful in cgroups".

> <SNIP>
James Bottomley April 28, 2011, 6:30 p.m. UTC | #4
On Thu, 2011-04-28 at 18:18 +0100, Mel Gorman wrote:
> On Thu, Apr 28, 2011 at 11:56:17AM -0500, James Bottomley wrote:
> > # Events: 6K cycles
> > #
> > # Overhead      Command        Shared Object                                   Symbol
> > # ........  ...........  ...................  .......................................
> > #
> >     20.41%      kswapd0  [kernel.kallsyms]    [k] shrink_slab
> >                 |
> >                 --- shrink_slab
> >                    |          
> >                    |--99.91%-- kswapd
> >                    |          kthread
> >                    |          kernel_thread_helper
> >                     --0.09%-- [...]
> > 
> 
> Ok. I can't see how the patch "mm: vmscan: reclaim order-0 and use
> compaction instead of lumpy reclaim" is related unless we are seeing
> two problems that happen to manifest in a similar manner.
> 
> However, there were a number of changes made to dcache in particular
> for 2.6.38. Specifically, things like dentry_kill() use trylock and are
> happy to loop around if they fail to acquire anything. See things like
> this for example;

OK, so for this, I tried a 2.6.37 kernel.  It doesn't work very well;
networking is hosed for no reason I can see (probably systemd / cgroups
problems).

However, it runs enough for me to say that the tar proceeds to
completion in a non-PREEMPT kernel.  (I tried several times for good
measure).  That makes this definitely a regression of some sort, but it
doesn't definitively identify the dcache code ... it could be an ext4
bug that got introduced in 2.6.38 either.

> static void try_prune_one_dentry(struct dentry *dentry)
>         __releases(dentry->d_lock)
> {
>         struct dentry *parent;
> 
>         parent = dentry_kill(dentry, 0);
>         /*
>          * If dentry_kill returns NULL, we have nothing more to do.
>          * if it returns the same dentry, trylocks failed. In either
>          * case, just loop again.
> 
> 
> If this in combination with many inodes being locked for whatever
> reason (writeback locking them maybe?) is causing the shrinker to
> return after zero progress, it could in turn cause kswapd to enter
> into a loop for longish periods of time in shrink_slab here;
> 
>                 while (total_scan >= SHRINK_BATCH) {
>                         long this_scan = SHRINK_BATCH;
>                         int shrink_ret;
>                         int nr_before;
> 
>                         nr_before = (*shrinker->shrink)(shrinker, 0, gfp_mask);
>                         shrink_ret = (*shrinker->shrink)(shrinker, this_scan,
>                                                                 gfp_mask);
>                         if (shrink_ret == -1)
>                                 break;
>                         if (shrink_ret < nr_before)
>                                 ret += nr_before - shrink_ret;
>                         count_vm_events(SLABS_SCANNED, this_scan);
>                         total_scan -= this_scan;
> 
>                         cond_resched();
>                 }
> 
> That would explain this trace.
> 
> >      9.98%      kswapd0  [kernel.kallsyms]    [k] shrink_zone
> >                 |
> >                 --- shrink_zone
> >                    |          
> >                    |--99.46%-- kswapd
> >                    |          kthread
> >                    |          kernel_thread_helper
> >                    |          
> >                     --0.54%-- kthread
> >                               kernel_thread_helper
> > 
> >      7.70%      kswapd0  [kernel.kallsyms]    [k] kswapd
> >                 |
> >                 --- kswapd
> >                     kthread
> >                     kernel_thread_helper
> > 
> >      5.40%      kswapd0  [kernel.kallsyms]    [k] zone_watermark_ok_safe
> >                 |
> >                 --- zone_watermark_ok_safe
> >                    |          
> >                    |--72.66%-- kswapd
> >                    |          kthread
> >                    |          kernel_thread_helper
> >                    |          
> >                    |--20.88%-- sleeping_prematurely.part.12
> >                    |          kswapd
> >                    |          kthread
> >                    |          kernel_thread_helper
> >                    |          
> >                     --6.46%-- kthread
> >                               kernel_thread_helper
> > 
> 
> We are also spending an astonishing amount of time in
> sleeping_prematurely, leading me to believe we are failing to balance the
> zones and are continually under the min watermark for one of the zones.
> We are never going to sleep because of this check;
> 
>                 if (total_scanned && (priority < DEF_PRIORITY - 2)) {
>                         if (has_under_min_watermark_zone)
>                                 count_vm_event(KSWAPD_SKIP_CONGESTION_WAIT);
>                         else
>                                 congestion_wait(BLK_RW_ASYNC, HZ/10);
>                 }
> 
> However, I think this is a secondary effect to the failure of shrinkers
> to do their work. If slabs were being shrunk, one would expect us to
> be getting over the min watermark.
> 
> >      4.25%      kswapd0  [kernel.kallsyms]    [k] do_raw_spin_lock
> >                 |
> >                 --- do_raw_spin_lock
> >                    |          
> >                    |--77.49%-- _raw_spin_lock
> >                    |          |          
> >                    |          |--51.85%-- mb_cache_shrink_fn
> >                    |          |          shrink_slab
> >                    |          |          kswapd
> >                    |          |          kthread
> >                    |          |          kernel_thread_helper
> >                    |          |          
> >                    |           --48.15%-- mem_cgroup_soft_limit_reclaim
> >                    |                     kswapd
> >                    |                     kthread
> >                    |                     kernel_thread_helper
> >                    |          
> 
> Way hey, cgroups are also in the mix. How jolly.
> 
> Is systemd a common element of the machines hitting this bug by any
> chance?

Well, yes, the bug report is against FC15, which needs cgroups for
systemd.

> The remaining traces seem to be follow-on damage related to the three
> issues of "shrinkers are bust in some manner" causing "we are not
> getting over the min watermark" and as a side-show "we are spending lots
> of time doing something unspecified but unhelpful in cgroups".

Heh, well, find a way for me to verify this: I can't turn off cgroups
because systemd then won't work and the machine won't boot ...

James


Colin Ian King May 3, 2011, 9:54 a.m. UTC | #5
On Thu, 2011-04-28 at 18:18 +0100, Mel Gorman wrote:
> On Thu, Apr 28, 2011 at 11:56:17AM -0500, James Bottomley wrote:
> > On Thu, 2011-04-28 at 11:50 -0500, James Bottomley wrote:
> > > This is the output of perf record -g -a -f sleep 5
> > > 
> > > (hopefully the list won't choke)
> > 
> > Um, this one actually shows kswapd
> > 
> > James
> > 
> > ---
> > 
> > # Events: 6K cycles
> > #
> > # Overhead      Command        Shared Object                                   Symbol
> > # ........  ...........  ...................  .......................................
> > #
> >     20.41%      kswapd0  [kernel.kallsyms]    [k] shrink_slab
> >                 |
> >                 --- shrink_slab
> >                    |          
> >                    |--99.91%-- kswapd
> >                    |          kthread
> >                    |          kernel_thread_helper
> >                     --0.09%-- [...]
> > 
> 
> Ok. I can't see how the patch "mm: vmscan: reclaim order-0 and use
> compaction instead of lumpy reclaim" is related unless we are seeing
> two problems that happen to manifest in a similar manner.

That is a distinct possibility.
> 
> However, there were a number of changes made to dcache in particular
> for 2.6.38. Specifically, things like dentry_kill() use trylock and are
> happy to loop around if they fail to acquire anything. See things like
> this for example;
> 
[ text deleted ]

> Way hey, cgroups are also in the mix. How jolly.
> 
> Is systemd a common element of the machines hitting this bug by any
> chance?

Not in my case; I'm using upstart on my machine.
> 
Colin



Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 148c6e6..c74a501 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1842,15 +1842,22 @@  static inline bool should_continue_reclaim(struct zone *zone,
 		return false;
 
 	/*
-	 * If we failed to reclaim and have scanned the full list, stop.
-	 * NOTE: Checking just nr_reclaimed would exit reclaim/compaction far
-	 *       faster but obviously would be less likely to succeed
-	 *       allocation. If this is desirable, use GFP_REPEAT to decide
-	 *       if both reclaimed and scanned should be checked or just
-	 *       reclaimed
+	 * For direct reclaimers
+	 *   If we failed to reclaim and have scanned the full list, stop.
+	 *   The caller will check congestion and sleep if necessary until
+	 *   some IO completes.
+	 * For kswapd
+	 *   Check just nr_reclaimed. If we are failing to reclaim, we
+	 *   want to stop this reclaim loop, increase the priority and
+	 *   go to sleep if necessary to allow IO a chance to complete.
+	 *   This avoids kswapd going into a busy loop in shrink_zone()
 	 */
-	if (!nr_reclaimed && !nr_scanned)
-		return false;
+	if (!nr_reclaimed) {
+		if (current_is_kswapd())
+			return false;
+		else if (!nr_scanned)
+			return false;
+	}
 
 	/*
 	 * If we have not reclaimed enough pages for compaction and the
@@ -1924,8 +1931,13 @@  restart:
 
 	/* reclaim/compaction might need reclaim to continue */
 	if (should_continue_reclaim(zone, nr_reclaimed,
-					sc->nr_scanned - nr_scanned, sc))
+					sc->nr_scanned - nr_scanned, sc)) {
+		/* Throttle direct reclaimers if congested */
+		if (!current_is_kswapd())
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
+
 		goto restart;
+	}
 
 	throttle_vm_writeout(sc->gfp_mask);
 }