Message ID: 20211110202448.4054153-3-valentin.schneider@arm.com (mailing list archive)
State: Handled Elsewhere
Series: preempt: PREEMPT vs PREEMPT_DYNAMIC configs fixup
On Wed, 2021-11-10 at 20:24 +0000, Valentin Schneider wrote:
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 5f8db54226af..0640d5622496 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2073,6 +2073,22 @@ static inline void cond_resched_rcu(void)
>  #endif
>  }
>
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +
> +extern bool is_preempt_none(void);
> +extern bool is_preempt_voluntary(void);
> +extern bool is_preempt_full(void);
> +
> +#else
> +
> +#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE)
> +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
> +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)

I think that should be IS_ENABLED(CONFIG_PREEMPTION), see
c1a280b68d4e.

Noticed while applying the series to an RT tree, where tglx
has done that replacement to the powerpc spot your next patch
diddles.

	-Mike
On Thu, 2021-11-11 at 04:16 +0100, Mike Galbraith wrote:
> On Wed, 2021-11-10 at 20:24 +0000, Valentin Schneider wrote:
> > [...]
> > +#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE)
> > +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
> > +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)
>
> I think that should be IS_ENABLED(CONFIG_PREEMPTION), see
> c1a280b68d4e.
>
> Noticed while applying the series to an RT tree, where tglx
> has done that replacement to the powerpc spot your next patch
> diddles.

Damn, then comes patch 5 properly differentiating PREEMPT/PREEMPT_RT.

	-Mike
On Thu, 2021-11-11 at 04:35 +0100, Mike Galbraith wrote:
> On Thu, 2021-11-11 at 04:16 +0100, Mike Galbraith wrote:
> > On Wed, 2021-11-10 at 20:24 +0000, Valentin Schneider wrote:
> > > [...]
> > > +#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE)
> > > +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
> > > +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)
> >
> > I think that should be IS_ENABLED(CONFIG_PREEMPTION), see
> > c1a280b68d4e.
> >
> > Noticed while applying the series to an RT tree, where tglx
> > has done that replacement to the powerpc spot your next patch
> > diddles.
>
> Damn, then comes patch 5 properly differentiating PREEMPT/PREEMPT_RT.

So I suppose the powerpc spot should remain CONFIG_PREEMPT and become
CONFIG_PREEMPTION when the RT change gets merged, because that spot is
about full preemptibility, not a distinct preemption model.

That's rather annoying :-/

	-Mike
On Thu, 2021-11-11 at 04:47 +0100, Mike Galbraith wrote:
>
> So I suppose the powerpc spot should remain CONFIG_PREEMPT and become
> CONFIG_PREEMPTION when the RT change gets merged, because that spot is
> about full preemptibility, not a distinct preemption model.

KCSAN needs a little help to be usable by RT, but ditto that spot.

	-Mike
On Wed, Nov 10, 2021 at 08:24PM +0000, Valentin Schneider wrote:
[...]
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +
> +extern bool is_preempt_none(void);
> +extern bool is_preempt_voluntary(void);
> +extern bool is_preempt_full(void);
> +
> +#else
> +
> +#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE)
> +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
> +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)
> +
> +#endif
> +
> +#define is_preempt_rt() IS_ENABLED(CONFIG_PREEMPT_RT)
> +

Can these callables be real functions in all configs, making the
!DYNAMIC ones just static inline bool ones? That'd catch invalid use in
#if etc. in all configs.

> /*
>  * Does a critical section need to be broken due to another
>  * task waiting?: (technically does not depend on CONFIG_PREEMPTION,
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 97047aa7b6c2..9db7f77e53c3 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6638,6 +6638,17 @@ static void __init preempt_dynamic_init(void)
>  	}
>  }
>
> +#define PREEMPT_MODE_ACCESSOR(mode) \
> +	bool is_preempt_##mode(void) \
> +	{ \
> +		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
> +		return preempt_dynamic_mode == preempt_dynamic_##mode; \
> +	}

This needs an EXPORT_SYMBOL, so it's usable from modules like the
kcsan_test module.
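Marco's suggestion — real functions under PREEMPT_DYNAMIC, `static inline bool` otherwise — can be sketched in user space. The mock below is hypothetical: `CONFIG_PREEMPT_VOLUNTARY` is hand-defined to stand in for a `.config` selection, and the `IS_ENABLED()` definition is a simplified stand-in for the kernel's `include/linux/kconfig.h` version (built-in only, no `=m` handling):

```c
#include <stdbool.h>

/* Hypothetical "config": only CONFIG_PREEMPT_VOLUNTARY is selected;
 * the other CONFIG_PREEMPT* options are left undefined. */
#define CONFIG_PREEMPT_VOLUNTARY 1

/* Simplified stand-in for the kernel's IS_ENABLED(): expands to 1 when
 * the option is #defined to 1, and to 0 when it is undefined. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

/* static inline bool variants: usable in ordinary C conditions, but the
 * compiler rejects them inside #if - the invalid use Marco wants caught. */
static inline bool is_preempt_none(void)      { return IS_ENABLED(CONFIG_PREEMPT_NONE); }
static inline bool is_preempt_voluntary(void) { return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY); }
static inline bool is_preempt_full(void)      { return IS_ENABLED(CONFIG_PREEMPT); }
```

With a bare macro, `#if is_preempt_none()` silently evaluates; with the inline functions it is a preprocessor error, which is the point of making them real functions in all configs.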
On Thu, 11 Nov 2021 at 04:47, Mike Galbraith <efault@gmx.de> wrote:
> On Thu, 2021-11-11 at 04:35 +0100, Mike Galbraith wrote:
> > On Thu, 2021-11-11 at 04:16 +0100, Mike Galbraith wrote:
> > > On Wed, 2021-11-10 at 20:24 +0000, Valentin Schneider wrote:
> > > > [...]
> > > > +#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE)
> > > > +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
> > > > +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)
> > >
> > > I think that should be IS_ENABLED(CONFIG_PREEMPTION), see
> > > c1a280b68d4e.
> > >
> > > Noticed while applying the series to an RT tree, where tglx
> > > has done that replacement to the powerpc spot your next patch
> > > diddles.
> >
> > Damn, then comes patch 5 properly differentiating PREEMPT/PREEMPT_RT.
>
> So I suppose the powerpc spot should remain CONFIG_PREEMPT and become
> CONFIG_PREEMPTION when the RT change gets merged, because that spot is
> about full preemptibility, not a distinct preemption model.
>
> That's rather annoying :-/

I guess the question is if is_preempt_full() should be true also if
is_preempt_rt() is true? Not sure all cases are happy with that, e.g.
the kernel/trace/trace.c case, which wants to print the precise
preemption level.

To avoid confusion, I'd introduce another helper that says true if the
preemption level is "at least full", currently that'd be "full or rt".
Something like is_preempt_full_or_rt() (but might as well write
"is_preempt_full() || is_preempt_rt()"), or is_preemption() (to match
that Kconfig variable, although it's slightly confusing). The
implementation of that helper can just be a static inline function
returning "is_preempt_full() || is_preempt_rt()".

Would that help?
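Marco's "at least full" helper would be a one-liner on top of the existing accessors. A minimal user-space sketch, with stub accessors standing in for the real ones (the values are illustrative: live model = full preemption, CONFIG_PREEMPT_RT not set):

```c
#include <stdbool.h>

/* Stubs standing in for the patch's accessors; return values are
 * illustrative, pretending the live model is PREEMPT without RT. */
static inline bool is_preempt_full(void) { return true;  }
static inline bool is_preempt_rt(void)   { return false; }

/* The proposed helper: true whenever preemption is "at least full",
 * i.e. under either PREEMPT or PREEMPT_RT. */
static inline bool is_preempt_full_or_rt(void)
{
	return is_preempt_full() || is_preempt_rt();
}
```

Being a static inline over two other accessors, it adds no state of its own and stays correct however the underlying model is resolved.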
On Thu, 2021-11-11 at 10:36 +0100, Marco Elver wrote:
> On Thu, 11 Nov 2021 at 04:47, Mike Galbraith <efault@gmx.de> wrote:
> > [...]
> > So I suppose the powerpc spot should remain CONFIG_PREEMPT and become
> > CONFIG_PREEMPTION when the RT change gets merged, because that spot is
> > about full preemptibility, not a distinct preemption model.
> >
> > That's rather annoying :-/
>
> I guess the question is if is_preempt_full() should be true also if
> is_preempt_rt() is true?

That's what CONFIG_PREEMPTION is. More could follow, but it was added
to allow multiple models to say "preemptible".

> Not sure all cases are happy with that, e.g. the kernel/trace/trace.c
> case, which wants to print the precise preemption level.

Yeah, that's the "annoying" bit, needing one oddball model accessor
that isn't about a particular model.

> To avoid confusion, I'd introduce another helper that says true if the
> preemption level is "at least full", currently that'd be "full or rt".
> Something like is_preempt_full_or_rt() (but might as well write
> "is_preempt_full() || is_preempt_rt()"), or is_preemption() (to match
> that Kconfig variable, although it's slightly confusing). The
> implementation of that helper can just be a static inline function
> returning "is_preempt_full() || is_preempt_rt()".
>
> Would that help?

Yeah, as it sits two accessors are needed, one that says PREEMPT the
other PREEMPTION, spelling optional.

	-Mike
On 11/11/21 09:54, Marco Elver wrote:
> On Wed, Nov 10, 2021 at 08:24PM +0000, Valentin Schneider wrote:
> [...]
>> +#ifdef CONFIG_PREEMPT_DYNAMIC
>> +
>> +extern bool is_preempt_none(void);
>> +extern bool is_preempt_voluntary(void);
>> +extern bool is_preempt_full(void);
>> +
>> +#else
>> +
>> +#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE)
>> +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
>> +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)
>> +
>> +#endif
>> +
>> +#define is_preempt_rt() IS_ENABLED(CONFIG_PREEMPT_RT)
>> +
>
> Can these callables be real functions in all configs, making the
> !DYNAMIC ones just static inline bool ones? That'd catch invalid use in
> #if etc. in all configs.
>

Ack

>> /*
>>  * Does a critical section need to be broken due to another
>>  * task waiting?: (technically does not depend on CONFIG_PREEMPTION,
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 97047aa7b6c2..9db7f77e53c3 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -6638,6 +6638,17 @@ static void __init preempt_dynamic_init(void)
>>  	}
>>  }
>>
>> +#define PREEMPT_MODE_ACCESSOR(mode) \
>> +	bool is_preempt_##mode(void) \
>> +	{ \
>> +		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
>> +		return preempt_dynamic_mode == preempt_dynamic_##mode; \
>> +	}
>
> This needs an EXPORT_SYMBOL, so it's usable from modules like the
> kcsan_test module.

Ah, wasn't sure about that one, thanks!
On 11/11/21 11:32, Mike Galbraith wrote:
> On Thu, 2021-11-11 at 10:36 +0100, Marco Elver wrote:
>> I guess the question is if is_preempt_full() should be true also if
>> is_preempt_rt() is true?
>
> That's what CONFIG_PREEMPTION is. More could follow, but it was added
> to allow multiple models to say "preemptible".
>

That's what I was gonna say, but you can have CONFIG_PREEMPTION while being
is_preempt_none() due to PREEMPT_DYNAMIC...

>> Not sure all cases are happy with that, e.g. the kernel/trace/trace.c
>> case, which wants to print the precise preemption level.
>
> Yeah, that's the "annoying" bit, needing one oddball model accessor
> that isn't about a particular model.
>
>> To avoid confusion, I'd introduce another helper that says true if the
>> preemption level is "at least full", currently that'd be "full or rt".
>> Something like is_preempt_full_or_rt() (but might as well write
>> "is_preempt_full() || is_preempt_rt()"), or is_preemption() (to match
>> that Kconfig variable, although it's slightly confusing). The
>> implementation of that helper can just be a static inline function
>> returning "is_preempt_full() || is_preempt_rt()".
>>
>> Would that help?
>
> Yeah, as it sits two accessors are needed, one that says PREEMPT the
> other PREEMPTION, spelling optional.
>

Per the above, I think we need the full || rt thingie.
On Thu, 2021-11-11 at 10:56 +0000, Valentin Schneider wrote:
> On 11/11/21 11:32, Mike Galbraith wrote:
> > On Thu, 2021-11-11 at 10:36 +0100, Marco Elver wrote:
> > > I guess the question is if is_preempt_full() should be true also if
> > > is_preempt_rt() is true?
> >
> > That's what CONFIG_PREEMPTION is. More could follow, but it was added
> > to allow multiple models to say "preemptible".
>
> That's what I was gonna say, but you can have CONFIG_PREEMPTION while being
> is_preempt_none() due to PREEMPT_DYNAMIC...

Ah, right.. this is gonna take some getting used to.

	-Mike
On 10/11/2021 at 21:24, Valentin Schneider wrote:
> CONFIG_PREEMPT{_NONE, _VOLUNTARY} designate either:
> o The build-time preemption model when !PREEMPT_DYNAMIC
> o The default boot-time preemption model when PREEMPT_DYNAMIC
>
> IOW, using those on PREEMPT_DYNAMIC kernels is meaningless - the actual
> model could have been set to something else by the "preempt=foo" cmdline
> parameter.
>
> Introduce a set of helpers to determine the actual preemption mode used by
> the live kernel.
>
> Suggested-by: Marco Elver <elver@google.com>
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> ---
>  include/linux/sched.h | 16 ++++++++++++++++
>  kernel/sched/core.c   | 11 +++++++++++
>  2 files changed, 27 insertions(+)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 5f8db54226af..0640d5622496 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2073,6 +2073,22 @@ static inline void cond_resched_rcu(void)
>  #endif
>  }
>
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +
> +extern bool is_preempt_none(void);
> +extern bool is_preempt_voluntary(void);
> +extern bool is_preempt_full(void);

Those are trivial tests supposed to be used in fast paths. They should
be static inlines in order to minimise the overhead.

> +
> +#else
> +
> +#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE)
> +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
> +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)

Would be better to use static inlines here as well instead of macros.

> +
> +#endif
> +
> +#define is_preempt_rt() IS_ENABLED(CONFIG_PREEMPT_RT)
> +
>  /*
>   * Does a critical section need to be broken due to another
>   * task waiting?: (technically does not depend on CONFIG_PREEMPTION,
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 97047aa7b6c2..9db7f77e53c3 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6638,6 +6638,17 @@ static void __init preempt_dynamic_init(void)
>  	}
>  }
>
> +#define PREEMPT_MODE_ACCESSOR(mode) \
> +	bool is_preempt_##mode(void) \
> +	{ \
> +		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \

Not sure using WARN_ON is a good idea here, as it may be called very
early, see comment on powerpc patch.

> +		return preempt_dynamic_mode == preempt_dynamic_##mode; \
> +	}

I'm not sure that's worth a macro. You only have 3 accessors, 2 lines of
code each. Just define all 3 in plain text.

CONFIG_PREEMPT_DYNAMIC is based on using strategies like static_calls in
order to minimise the overhead. For those accessors you should use the
same kind of approach and use things like jump_labels in order to not
redo the test at each time and minimise overhead as much as possible.

> +
> +PREEMPT_MODE_ACCESSOR(none)
> +PREEMPT_MODE_ACCESSOR(voluntary)
> +PREEMPT_MODE_ACCESSOR(full)
> +
>  #else /* !CONFIG_PREEMPT_DYNAMIC */
>
>  static inline void preempt_dynamic_init(void) { }
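Christophe's jump_label suggestion amounts to resolving the comparison once, when the boot-time model is committed, instead of re-testing the mode on every call. A rough user-space analogue (a plain cached flag standing in for a static key; all names here are hypothetical):

```c
#include <stdbool.h>

/* Illustrative model enum, not the kernel's */
enum preempt_model { MODEL_NONE, MODEL_VOLUNTARY, MODEL_FULL };

/* Stands in for a jump_label/static_key the kernel would patch */
static bool preempt_full_enabled;

/* Resolve the test once, e.g. after "preempt=" has been parsed at boot */
static void preempt_dynamic_commit(enum preempt_model model)
{
	preempt_full_enabled = (model == MODEL_FULL);
}

/* Callers read the cached result; with a real static key this would be
 * a patched jump/no-op rather than a memory load */
static inline bool is_preempt_full(void)
{
	return preempt_full_enabled;
}
```

The trade-off Valentin raises downthread applies here too: anything cached (or patched) at commit time goes stale if a debug interface later flips the model.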
On 16/11/21 14:29, Christophe Leroy wrote:
> On 10/11/2021 at 21:24, Valentin Schneider wrote:
>> [...]
>> +#ifdef CONFIG_PREEMPT_DYNAMIC
>> +
>> +extern bool is_preempt_none(void);
>> +extern bool is_preempt_voluntary(void);
>> +extern bool is_preempt_full(void);
>
> Those are trivial tests supposed to be used in fast paths. They should
> be static inlines in order to minimise the overhead.
>
>> +
>> +#else
>> +
>> +#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE)
>> +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
>> +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)
>
> Would be better to use static inlines here as well instead of macros.
>

I realize I stripped all ppc folks from the cclist after dropping the ppc
snippet, but you guys might still be interested - my bad.

That's done in v3:
https://lore.kernel.org/lkml/20211112185203.280040-1-valentin.schneider@arm.com/

>> [...]
>> +#define PREEMPT_MODE_ACCESSOR(mode) \
>> +	bool is_preempt_##mode(void) \
>> +	{ \
>> +		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
>
> Not sure using WARN_ON is a good idea here, as it may be called very
> early, see comment on powerpc patch.

Bah, I was gonna say that you *don't* want users of is_preempt_*() to be
called before the "final" preemption model is set up (such users would
need to make use of static_calls), but I realize there's a debug
interface to flip the preemption model at will...

Say an initcall sees is_preempt_voluntary() and sets things up
accordingly, and then the debug knob switches to preempt_full. I don't
think there's much we can really do here though :/

>> +		return preempt_dynamic_mode == preempt_dynamic_##mode; \
>> +	}
>
> I'm not sure that's worth a macro. You only have 3 accessors, 2 lines of
> code each. Just define all 3 in plain text.
>
> CONFIG_PREEMPT_DYNAMIC is based on using strategies like static_calls in
> order to minimise the overhead. For those accessors you should use the
> same kind of approach and use things like jump_labels in order to not
> redo the test at each time and minimise overhead as much as possible.
>

That's a valid point, though the few paths that need patching up and
don't make use of static calls already (AFAICT the ppc irq path I was
touching in v2 needs to make use of irqentry_exit_cond_resched()) really
seem like slow-paths.

>> +
>> +PREEMPT_MODE_ACCESSOR(none)
>> +PREEMPT_MODE_ACCESSOR(voluntary)
>> +PREEMPT_MODE_ACCESSOR(full)
>> +
>>  #else /* !CONFIG_PREEMPT_DYNAMIC */
>>
>>  static inline void preempt_dynamic_init(void) { }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 5f8db54226af..0640d5622496 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2073,6 +2073,22 @@ static inline void cond_resched_rcu(void)
 #endif
 }
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+extern bool is_preempt_none(void);
+extern bool is_preempt_voluntary(void);
+extern bool is_preempt_full(void);
+
+#else
+
+#define is_preempt_none() IS_ENABLED(CONFIG_PREEMPT_NONE)
+#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
+#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)
+
+#endif
+
+#define is_preempt_rt() IS_ENABLED(CONFIG_PREEMPT_RT)
+
 /*
  * Does a critical section need to be broken due to another
  * task waiting?: (technically does not depend on CONFIG_PREEMPTION,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 97047aa7b6c2..9db7f77e53c3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6638,6 +6638,17 @@ static void __init preempt_dynamic_init(void)
 	}
 }
 
+#define PREEMPT_MODE_ACCESSOR(mode) \
+	bool is_preempt_##mode(void) \
+	{ \
+		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
+		return preempt_dynamic_mode == preempt_dynamic_##mode; \
+	}
+
+PREEMPT_MODE_ACCESSOR(none)
+PREEMPT_MODE_ACCESSOR(voluntary)
+PREEMPT_MODE_ACCESSOR(full)
+
 #else /* !CONFIG_PREEMPT_DYNAMIC */
 
 static inline void preempt_dynamic_init(void) { }
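The PREEMPT_MODE_ACCESSOR pattern from the diff can be exercised stand-alone. The sketch below is a user-space mock: the enum values and the hard-coded "boot" mode are illustrative, and the kernel-only WARN_ON_ONCE() is dropped:

```c
#include <stdbool.h>

/* Mock of the preemption-mode enum; values are illustrative */
enum {
	preempt_dynamic_undefined = -1,
	preempt_dynamic_none,
	preempt_dynamic_voluntary,
	preempt_dynamic_full,
};

/* Stands in for the mode chosen at boot via the "preempt=" parameter */
static int preempt_dynamic_mode = preempt_dynamic_voluntary;

/* Token pasting stamps out one out-of-line accessor per model */
#define PREEMPT_MODE_ACCESSOR(mode) \
	bool is_preempt_##mode(void) \
	{ \
		return preempt_dynamic_mode == preempt_dynamic_##mode; \
	}

PREEMPT_MODE_ACCESSOR(none)
PREEMPT_MODE_ACCESSOR(voluntary)
PREEMPT_MODE_ACCESSOR(full)
```

Each expansion yields a real function symbol (e.g. `is_preempt_voluntary`), which is why the reviewers note these need EXPORT_SYMBOL for module users and could alternatively just be written out three times by hand.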
CONFIG_PREEMPT{_NONE, _VOLUNTARY} designate either:
o The build-time preemption model when !PREEMPT_DYNAMIC
o The default boot-time preemption model when PREEMPT_DYNAMIC

IOW, using those on PREEMPT_DYNAMIC kernels is meaningless - the actual
model could have been set to something else by the "preempt=foo" cmdline
parameter.

Introduce a set of helpers to determine the actual preemption mode used by
the live kernel.

Suggested-by: Marco Elver <elver@google.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 include/linux/sched.h | 16 ++++++++++++++++
 kernel/sched/core.c   | 11 +++++++++++
 2 files changed, 27 insertions(+)