[SRU,1/2,Bionic] sched/fair: Add lsub_positive() and use it consistently

Message ID 20191016211538.25290-2-khalid.elmously@canonical.com
State New
Series
  • Fix aggressive CFS throttling

Commit Message

Khalid Elmously Oct. 16, 2019, 9:15 p.m. UTC
From: Patrick Bellasi <patrick.bellasi@arm.com>

BugLink: https://bugs.launchpad.net/bugs/1832151

The following pattern:

   var -= min_t(typeof(var), var, val);

is used multiple times in fair.c.

The existing sub_positive() already captures that pattern, but it also
adds an explicit load-store to properly support lockless observations.
In other cases the pattern above is used to update local, and/or not
concurrently accessed, variables.

Let's add a simpler version of sub_positive(), targeted at local variable
updates, which gives the same readability benefits at calling sites,
without enforcing {READ,WRITE}_ONCE() barriers.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Perret <quentin.perret@arm.com>
Cc: Steve Muckle <smuckle@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Todd Kjos <tkjos@google.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/lkml/20181031184527.GA3178@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(backported from commit b5c0ce7bd1848892e2930f481828b6d7750231ed)
[ kmously: The changes to cpu_util_without() aren't applicable ]
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
---
 kernel/sched/fair.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

Comments

Kleber Souza Oct. 17, 2019, 12:48 p.m. UTC | #1
On 16.10.19 23:15, Khalid Elmously wrote:
> From: Patrick Bellasi <patrick.bellasi@arm.com>
> 
> BugLink: https://bugs.launchpad.net/bugs/1832151
> 
> The following pattern:
> 
>    var -= min_t(typeof(var), var, val);
> 
> is used multiple times in fair.c.
> 
> The existing sub_positive() already captures that pattern, but it also
> adds an explicit load-store to properly support lockless observations.
> In other cases the pattern above is used to update local, and/or not
> concurrently accessed, variables.
> 
> Let's add a simpler version of sub_positive(), targeted at local variable
> updates, which gives the same readability benefits at calling sites,
> without enforcing {READ,WRITE}_ONCE() barriers.
> 
> Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Morten Rasmussen <morten.rasmussen@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Quentin Perret <quentin.perret@arm.com>
> Cc: Steve Muckle <smuckle@google.com>
> Cc: Suren Baghdasaryan <surenb@google.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Todd Kjos <tkjos@google.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Link: https://lore.kernel.org/lkml/20181031184527.GA3178@hirez.programming.kicks-ass.net
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> (backported picked from commit b5c0ce7bd1848892e2930f481828b6d7750231ed)

^ (backported from commit ...)

> [ kmously: The changes to cpu_util_without() aren't applicable ]
> Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
> ---
>  kernel/sched/fair.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 09401d5eb471..c3dfd4e28c46 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2743,6 +2743,17 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  	WRITE_ONCE(*ptr, res);					\
>  } while (0)
>  
> +/*
> + * Remove and clamp on negative, from a local variable.
> + *
> + * A variant of sub_positive(), which does not use explicit load-store
> + * and is thus optimized for local variable updates.
> + */
> +#define lsub_positive(_ptr, _val) do {				\
> +	typeof(_ptr) ptr = (_ptr);				\
> +	*ptr -= min_t(typeof(*ptr), *ptr, _val);		\
> +} while (0)
> +
>  #ifdef CONFIG_SMP
>  /*
>   * XXX we want to get rid of these helpers and use the full load resolution.
> @@ -4814,7 +4825,7 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
>  		cfs_b->distribute_running = 0;
>  		throttled = !list_empty(&cfs_b->throttled_cfs_rq);
>  
> -		cfs_b->runtime -= min(runtime, cfs_b->runtime);
> +		lsub_positive(&cfs_b->runtime, runtime);
>  	}
>  
>  	/*
> @@ -4948,7 +4959,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
>  
>  	raw_spin_lock(&cfs_b->lock);
>  	if (expires == cfs_b->runtime_expires)
> -		cfs_b->runtime -= min(runtime, cfs_b->runtime);
> +		lsub_positive(&cfs_b->runtime, runtime);
>  	cfs_b->distribute_running = 0;
>  	raw_spin_unlock(&cfs_b->lock);
>  }
>

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 09401d5eb471..c3dfd4e28c46 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2743,6 +2743,17 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	WRITE_ONCE(*ptr, res);					\
 } while (0)
 
+/*
+ * Remove and clamp on negative, from a local variable.
+ *
+ * A variant of sub_positive(), which does not use explicit load-store
+ * and is thus optimized for local variable updates.
+ */
+#define lsub_positive(_ptr, _val) do {				\
+	typeof(_ptr) ptr = (_ptr);				\
+	*ptr -= min_t(typeof(*ptr), *ptr, _val);		\
+} while (0)
+
 #ifdef CONFIG_SMP
 /*
  * XXX we want to get rid of these helpers and use the full load resolution.
@@ -4814,7 +4825,7 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
 		cfs_b->distribute_running = 0;
 		throttled = !list_empty(&cfs_b->throttled_cfs_rq);
 
-		cfs_b->runtime -= min(runtime, cfs_b->runtime);
+		lsub_positive(&cfs_b->runtime, runtime);
 	}
 
 	/*
@@ -4948,7 +4959,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
 
 	raw_spin_lock(&cfs_b->lock);
 	if (expires == cfs_b->runtime_expires)
-		cfs_b->runtime -= min(runtime, cfs_b->runtime);
+		lsub_positive(&cfs_b->runtime, runtime);
 	cfs_b->distribute_running = 0;
 	raw_spin_unlock(&cfs_b->lock);
 }