
[10/19] preempt: Cleanup the macro maze a bit

Message ID 20201113141733.864469886@linutronix.de
State Not Applicable
Series softirq: Cleanups and RT awareness

Commit Message

Thomas Gleixner Nov. 13, 2020, 2:02 p.m. UTC
Make the macro maze consistent and prepare it for adding the RT variant for
BH accounting.

 - Use nmi_count() for the NMI portion of preempt count
 - Introduce in_hardirq() to make the naming consistent and non-ambiguous
 - Use the macros to create combined checks (e.g. in_task()) so the
   softirq representation for RT just falls into place.
 - Update comments and move the deprecated macros aside

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/preempt.h |   30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

Comments

Peter Zijlstra Nov. 16, 2020, 12:17 p.m. UTC | #1
On Fri, Nov 13, 2020 at 03:02:17PM +0100, Thomas Gleixner wrote:

> -#define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
> -				 | NMI_MASK))
> +#define irq_count()	(nmi_count() | hardirq_count() | softirq_count())


> +#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
> -#define in_task()		(!(preempt_count() & \
> -				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))

How horrible is the code-gen? Because preempt_count() is
raw_cpu_read_4() and at least some old compilers will refuse to CSE it
(consider the this_cpu_read_stable mess).
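
The worry, spelled out: the new in_task() reads preempt_count() three times and relies on the compiler to fold those reads back into a single per-CPU load, whereas the old form was one read and one mask. A minimal userspace sketch of the two expansions (pc_read() and the hard-coded masks are stand-ins for the real headers, not kernel code):

/*
 * Simplified model of the old and new in_task(). pc_read() stands in
 * for preempt_count()/raw_cpu_read_4(); the mask values mirror
 * include/linux/preempt.h around the time of this patch.
 */
#define NMI_MASK	0x00f00000
#define HARDIRQ_MASK	0x000f0000
#define SOFTIRQ_MASK	0x0000ff00
#define SOFTIRQ_OFFSET	0x00000100

/* Declared pure so the compiler is allowed to fold repeated calls. */
extern unsigned int pc_read(void) __attribute__((pure));

/* Old form: a single read, a single mask. */
static inline int in_task_old(void)
{
	return !(pc_read() & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET));
}

/* New form: three reads that the compiler has to CSE into one load. */
static inline int in_task_new(void)
{
	return !((pc_read() & NMI_MASK) |
		 (pc_read() & HARDIRQ_MASK) |
		 ((pc_read() & SOFTIRQ_MASK) & SOFTIRQ_OFFSET));
}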
Thomas Gleixner Nov. 16, 2020, 5:42 p.m. UTC | #2
On Mon, Nov 16 2020 at 13:17, Peter Zijlstra wrote:
> On Fri, Nov 13, 2020 at 03:02:17PM +0100, Thomas Gleixner wrote:
>
>> -#define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
>> -				 | NMI_MASK))
>> +#define irq_count()	(nmi_count() | hardirq_count() | softirq_count())
>
>
>> +#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
>> -#define in_task()		(!(preempt_count() & \
>> -				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
>
> How horrible is the code-gen? Because preempt_count() is
> raw_cpu_read_4() and at least some old compilers will refuse to CSE it
> (consider the this_cpu_read_stable mess).

I looked at gcc8 and 10 output and the compilers are smart enough to
fold it for the !RT case. But yeah ...

Thanks,

        tglx
Peter Zijlstra Nov. 17, 2020, 10:21 a.m. UTC | #3
On Mon, Nov 16, 2020 at 06:42:19PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 16 2020 at 13:17, Peter Zijlstra wrote:
> > On Fri, Nov 13, 2020 at 03:02:17PM +0100, Thomas Gleixner wrote:
> >
> >> -#define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
> >> -				 | NMI_MASK))
> >> +#define irq_count()	(nmi_count() | hardirq_count() | softirq_count())
> >
> >
> >> +#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
> >> -#define in_task()		(!(preempt_count() & \
> >> -				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
> >
> > How horrible is the code-gen? Because preempt_count() is
> > raw_cpu_read_4() and at least some old compilers will refuse to CSE it
> > (consider the this_cpu_read_stable mess).
> 
> I looked at gcc8 and 10 output and the compilers are smart enough to
> fold it for the !RT case. But yeah ...

If recent GCC is smart enough I suppose it doesn't matter, thanks!
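
One way to reproduce the check Thomas describes is to compile a small stand-alone file at -O2 and count the loads in the generated assembly. A hypothetical sketch, x86-64/GCC specific; mock_preempt_count and the non-volatile asm merely imitate raw_cpu_read_4():

/*
 * Build with:  gcc -O2 -S fold_check.c
 * then count the loads of mock_preempt_count in fold_check.s.
 * A single load means the three reads were folded (CSEd).
 */
static unsigned int mock_preempt_count;	/* stand-in for the per-CPU count */

static inline unsigned int pc_read(void)
{
	unsigned int val;

	/* Non-volatile asm, like raw_cpu_read_4(): GCC may merge identical copies. */
	asm("movl %1, %0" : "=r" (val) : "m" (mock_preempt_count));
	return val;
}

int in_task_check(void)
{
	return !((pc_read() & 0x00f00000) |	/* NMI_MASK */
		 (pc_read() & 0x000f0000) |	/* HARDIRQ_MASK */
		 (pc_read() & 0x00000100));	/* SOFTIRQ_OFFSET */
}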

Patch

--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -77,31 +77,33 @@ 
 /* preempt_count() and related functions, depends on PREEMPT_NEED_RESCHED */
 #include <asm/preempt.h>
 
+#define nmi_count()	(preempt_count() & NMI_MASK)
 #define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
 #define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
-#define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
-				 | NMI_MASK))
+#define irq_count()	(nmi_count() | hardirq_count() | softirq_count())
 
 /*
- * Are we doing bottom half or hardware interrupt processing?
+ * Macros to retrieve the current execution context:
  *
- * in_irq()       - We're in (hard) IRQ context
+ * in_nmi()		- We're in NMI context
+ * in_hardirq()		- We're in hard IRQ context
+ * in_serving_softirq()	- We're in softirq context
+ * in_task()		- We're in task context
+ */
+#define in_nmi()		(nmi_count())
+#define in_hardirq()		(hardirq_count())
+#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
+#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
+
+/*
+ * The following macros are deprecated and should not be used in new code:
+ * in_irq()       - Obsolete version of in_hardirq()
  * in_softirq()   - We have BH disabled, or are processing softirqs
  * in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
- * in_serving_softirq() - We're in softirq context
- * in_nmi()       - We're in NMI context
- * in_task()	  - We're in task context
- *
- * Note: due to the BH disabled confusion: in_softirq(),in_interrupt() really
- *       should not be used in new code.
  */
 #define in_irq()		(hardirq_count())
 #define in_softirq()		(softirq_count())
 #define in_interrupt()		(irq_count())
-#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
-#define in_nmi()		(preempt_count() & NMI_MASK)
-#define in_task()		(!(preempt_count() & \
-				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
 
 /*
  * The preempt_count offset after preempt_disable();
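
As a rough illustration of how the resulting context checks are typically consumed (struct item and process_item_sync() below are hypothetical, not part of this patch; queue_work() and irq_work_queue() are the regular kernel APIs):

#include <linux/preempt.h>
#include <linux/workqueue.h>
#include <linux/irq_work.h>

/* Hypothetical type, for illustration only. */
struct item {
	struct irq_work		iw;
	struct work_struct	work;
};

void process_item_sync(struct item *it);	/* hypothetical helper, may sleep */

void submit_item(struct item *it)
{
	if (in_nmi()) {
		/* NMI context: defer even the hand-off via irq_work. */
		irq_work_queue(&it->iw);
	} else if (in_hardirq() || in_serving_softirq()) {
		/* Interrupt context: cannot sleep, push to a workqueue. */
		queue_work(system_wq, &it->work);
	} else {
		/* in_task(): process context, sleeping is allowed. */
		process_item_sync(it);
	}
}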