mm: vmscan: Correctly check if reclaimer should schedule during shrink_slab

Message ID 20110517161508.GN5279@suse.de
State Not Applicable, archived

Commit Message

Mel Gorman May 17, 2011, 4:15 p.m. UTC
It has been reported on some laptops that kswapd is consuming large
amounts of CPU and not being scheduled when SLUB is enabled during
large amounts of file copying. It is expected that this is due to
kswapd missing every cond_resched() point because:

shrink_page_list() calls cond_resched() if inactive pages were isolated
        which in turn may not happen if all_unreclaimable is set in
        shrink_zones(). If, for whatever reason, all_unreclaimable is
        set on all zones, we can miss calling cond_resched().

balance_pgdat() only calls cond_resched() if the zones are not
        balanced. For a high-order allocation that is balanced, it
        checks order-0 again. During that window, order-0 might have
        become unbalanced so it loops again for order-0 and returns
        that it was reclaiming for order-0 to kswapd(). It can then
        find that a caller has rewoken kswapd for a high-order and
        re-enters balance_pgdat() without ever calling cond_resched().

shrink_slab() only calls cond_resched() if we are reclaiming slab
	pages. If there are a large number of direct reclaimers, the
	shrinker_rwsem can be contended and prevent kswapd calling
	cond_resched(), as the pre-patch snippet below shows.
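
For reference, the contended path before this patch (visible in the
"-" lines of the diff at the end of this thread) returns without ever
reaching a scheduling point:

	if (!down_read_trylock(&shrinker_rwsem))
		return 1;	/* Assume we'll be able to shrink next time */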

This patch modifies the shrink_slab() case. If the semaphore is
contended, the caller will still call cond_resched() before returning.
The cond_resched() after each successful call into a shrinker is
preserved in case one shrinker call is particularly slow.
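
A condensed sketch of shrink_slab() with this change applied,
reconstructed from the diff at the end of this thread (the second
parameter name is an assumption from the vmscan.c of that era; the
shrinker walk is elided):

unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
			  unsigned long lru_pages)
{
	struct shrinker *shrinker;
	unsigned long ret = 0;

	if (scanned == 0)
		scanned = SWAP_CLUSTER_MAX;

	if (!down_read_trylock(&shrinker_rwsem)) {
		/* Assume we'll be able to shrink next time */
		ret = 1;
		goto out;
	}

	list_for_each_entry(shrinker, &shrinker_list, list) {
		/*
		 * ... call each shrinker, accumulating freed objects
		 * in ret; the cond_resched() after each call into a
		 * shrinker is preserved here ...
		 */
	}
	up_read(&shrinker_rwsem);
out:
	/* Reached on both the contended and uncontended paths */
	cond_resched();
	return ret;
}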

This patch replaces
mm-vmscan-if-kswapd-has-been-running-too-long-allow-it-to-sleep.patch
in -mm.

[mgorman@suse.de: Preserve call to cond_resched after each call into shrinker]
From: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/vmscan.c |    9 +++++++--
 1 files changed, 7 insertions(+), 2 deletions(-)

Comments

KOSAKI Motohiro May 18, 2011, 12:45 a.m. UTC | #1
(2011/05/18 1:15), Mel Gorman wrote:
> It has been reported on some laptops that kswapd is consuming large
> amounts of CPU and not being scheduled when SLUB is enabled during
> large amounts of file copying.
> [...]

Looks good to me.
	Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>


Minchan Kim May 19, 2011, 12:03 a.m. UTC | #2
On Wed, May 18, 2011 at 1:15 AM, Mel Gorman <mgorman@suse.de> wrote:
> It has been reported on some laptops that kswapd is consuming large
> amounts of CPU and not being scheduled when SLUB is enabled during
> large amounts of file copying.
> [...]
> [mgorman@suse.de: Preserve call to cond_resched after each call into shrinker]
> From: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
Minchan Kim May 19, 2011, 12:09 a.m. UTC | #3
Hi Colin.

Sorry for bothering you. :(
I hope this test is the last one.

We (Mel, KOSAKI and I) have settled on an approach.

Could you test the patch below together with patch [1/4] of Mel's
series (ie, the !pgdat_balanced check in sleeping_prematurely)?
If it is successful, we will try to merge this version instead of
the one that sprinkles cond_resched() calls everywhere.
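
For readers without the series at hand, here is a rough, hypothetical
sketch of the idea behind that patch 1/4 (not the actual change; the
helper and variable names are assumptions based on the vmscan.c of
that era): kswapd's sleep counts as premature for a high-order wakeup
unless the node as a whole is balanced.

/* Hypothetical sketch only, not Mel's actual patch 1/4 */
static bool sleeping_prematurely(pg_data_t *pgdat, int order,
				 long remaining, int classzone_idx)
{
	bool all_zones_ok;
	unsigned long balanced;

	/*
	 * ... walk the node's zones, computing all_zones_ok and the
	 * number of balanced pages (elided) ...
	 */

	if (order)
		/* sleep only if the node as a whole is balanced */
		return !pgdat_balanced(pgdat, balanced, classzone_idx);
	else
		return !all_zones_ok;
}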


On Wed, May 18, 2011 at 1:15 AM, Mel Gorman <mgorman@suse.de> wrote:
> It has been reported on some laptops that kswapd is consuming large
> amounts of CPU and not being scheduled when SLUB is enabled during
> large amounts of file copying.
> [...]
> ---
>  mm/vmscan.c |    9 +++++++--
>  1 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index af24d1e..0bed248 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -230,8 +230,11 @@ unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
>        if (scanned == 0)
>                scanned = SWAP_CLUSTER_MAX;
>
> -       if (!down_read_trylock(&shrinker_rwsem))
> -               return 1;       /* Assume we'll be able to shrink next time */
> +       if (!down_read_trylock(&shrinker_rwsem)) {
> +               /* Assume we'll be able to shrink next time */
> +               ret = 1;
> +               goto out;
> +       }
>
>        list_for_each_entry(shrinker, &shrinker_list, list) {
>                unsigned long long delta;
> @@ -282,6 +285,8 @@ unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
>                shrinker->nr += total_scan;
>        }
>        up_read(&shrinker_rwsem);
> +out:
> +       cond_resched();
>        return ret;
>  }
>
>
Colin Ian King May 19, 2011, 11:36 a.m. UTC | #4
On Thu, 2011-05-19 at 09:09 +0900, Minchan Kim wrote:
> Hi Colin.
> 
> Sorry for bothering you. :(

No problem at all, I'm very happy to re-test.

> I hope this test is the last one.
> 
> We (Mel, KOSAKI and I) have settled on an approach.
> 
> Could you test the patch below together with patch [1/4] of Mel's
> series (ie, the !pgdat_balanced check in sleeping_prematurely)?
> If it is successful, we will try to merge this version instead of
> the one that sprinkles cond_resched() calls everywhere.

Tested with the patch below + patch [1/4] of Mel's series. 300 cycles,
2.5 hrs of soak testing: works OK.

Colin
> [...]

Minchan Kim May 20, 2011, 12:06 a.m. UTC | #5
On Thu, May 19, 2011 at 8:36 PM, Colin Ian King
<colin.king@canonical.com> wrote:
> On Thu, 2011-05-19 at 09:09 +0900, Minchan Kim wrote:
>> Hi Colin.
>>
>> Sorry for bothering you. :(
>
> No problem at all, I'm very happy to re-test.
>
>> I hope this test is the last one.
>>
>> We (Mel, KOSAKI and I) have settled on an approach.
>>
>> Could you test the patch below together with patch [1/4] of Mel's
>> series (ie, the !pgdat_balanced check in sleeping_prematurely)?
>> If it is successful, we will try to merge this version instead of
>> the one that sprinkles cond_resched() calls everywhere.
>
> Tested with the patch below + patch [1/4] of Mel's series. 300 cycles,
> 2.5 hrs of soak testing: works OK.
>
> Colin

Thanks, Colin.
With your help, we are approaching a conclusion. :)

Mel, KOSAKI.
I will ask Andrew Lutomirski to test.
If he doesn't see a problem, let's go with this, then.

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index af24d1e..0bed248 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -230,8 +230,11 @@  unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
 	if (scanned == 0)
 		scanned = SWAP_CLUSTER_MAX;
 
-	if (!down_read_trylock(&shrinker_rwsem))
-		return 1;	/* Assume we'll be able to shrink next time */
+	if (!down_read_trylock(&shrinker_rwsem)) {
+		/* Assume we'll be able to shrink next time */
+		ret = 1;
+		goto out;
+	}
 
 	list_for_each_entry(shrinker, &shrinker_list, list) {
 		unsigned long long delta;
@@ -282,6 +285,8 @@  unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
 		shrinker->nr += total_scan;
 	}
 	up_read(&shrinker_rwsem);
+out:
+	cond_resched();
 	return ret;
 }