Patchwork [2/3] sched: Fix asymmetric scheduling for POWER7

Submitter Vaidyanathan Srinivasan
Date Oct. 21, 2013, 11:44 a.m.
Message ID <20131021114452.13291.19947.stgit@drishya>
Permalink /patch/285175/
State Not Applicable

Comments

Vaidyanathan Srinivasan - Oct. 21, 2013, 11:44 a.m.
Asymmetric scheduling within a core is a scheduler load-balancing
feature that is triggered when the SD_ASYM_PACKING flag is set.  The
goal of the load balancer is to move tasks to lower-order idle SMT
threads within a core on a POWER7 system.

In nohz_kick_needed(), we intend to check whether our sched domain
(core) is completely busy or has an idle CPU.

The following check for SD_ASYM_PACKING:

    (cpumask_first_and(nohz.idle_cpus_mask, sched_domain_span(sd)) < cpu)

already covers the case of checking whether the domain has an idle
CPU: when no CPU in the domain is idle, cpumask_first_and() returns
an out-of-range value (>= nr_cpu_ids), so the "< cpu" comparison fails.

Hence, the nr_busy check against the group weight can be removed.

Reported-by: Michael Neuling <michael.neuling@au1.ibm.com>
Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
---
 kernel/sched/fair.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
Michael Neuling - Oct. 21, 2013, 10:55 p.m.
Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com> wrote:

> Asymmetric scheduling within a core is a scheduler load-balancing
> feature that is triggered when the SD_ASYM_PACKING flag is set.  The
> goal of the load balancer is to move tasks to lower-order idle SMT
> threads within a core on a POWER7 system.
> 
> In nohz_kick_needed(), we intend to check whether our sched domain
> (core) is completely busy or has an idle CPU.
> 
> The following check for SD_ASYM_PACKING:
> 
>     (cpumask_first_and(nohz.idle_cpus_mask, sched_domain_span(sd)) < cpu)
> 
> already covers the case of checking whether the domain has an idle
> CPU: when no CPU in the domain is idle, cpumask_first_and() returns
> an out-of-range value (>= nr_cpu_ids), so the "< cpu" comparison fails.
> 
> Hence, the nr_busy check against the group weight can be removed.
> 
> Reported-by: Michael Neuling <michael.neuling@au1.ibm.com>

Tested-by: Michael Neuling <mikey@neuling.org>

Peter, I tested this a brief while back, but it turned out my test
wasn't stringent enough and asymmetric packing was actually broken
(in v3.9).  This patch fixes it.

Mikey

> Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
> Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
> ---
>  kernel/sched/fair.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 12f0eab..828ed97 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5821,8 +5821,8 @@ static inline int nohz_kick_needed(struct rq *rq, int cpu)
>  				goto need_kick_unlock;
>  		}
>  
> -		if (sd->flags & SD_ASYM_PACKING && nr_busy != sg->group_weight
> -		    && (cpumask_first_and(nohz.idle_cpus_mask,
> +		if (sd->flags & SD_ASYM_PACKING &&
> +			(cpumask_first_and(nohz.idle_cpus_mask,
>  					  sched_domain_span(sd)) < cpu))
>  			goto need_kick_unlock;
>  
>
Peter Zijlstra - Oct. 22, 2013, 10:18 p.m.
On Mon, Oct 21, 2013 at 05:14:52PM +0530, Vaidyanathan Srinivasan wrote:
>  kernel/sched/fair.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 12f0eab..828ed97 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5821,8 +5821,8 @@ static inline int nohz_kick_needed(struct rq *rq, int cpu)
>  				goto need_kick_unlock;
>  		}
>  
> -		if (sd->flags & SD_ASYM_PACKING && nr_busy != sg->group_weight
> -		    && (cpumask_first_and(nohz.idle_cpus_mask,
> +		if (sd->flags & SD_ASYM_PACKING &&
> +			(cpumask_first_and(nohz.idle_cpus_mask,
>  					  sched_domain_span(sd)) < cpu))
>  			goto need_kick_unlock;
>  
> 

Ahh, so here you remove the nr_busy usage.. this patch should really go
before the first one that makes this all weird and funny.

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 12f0eab..828ed97 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5821,8 +5821,8 @@  static inline int nohz_kick_needed(struct rq *rq, int cpu)
 				goto need_kick_unlock;
 		}
 
-		if (sd->flags & SD_ASYM_PACKING && nr_busy != sg->group_weight
-		    && (cpumask_first_and(nohz.idle_cpus_mask,
+		if (sd->flags & SD_ASYM_PACKING &&
+			(cpumask_first_and(nohz.idle_cpus_mask,
 					  sched_domain_span(sd)) < cpu))
 			goto need_kick_unlock;