[03/12] powerpc: qspinlock: Enforce qnode writes prior to publishing to queue

Message ID 20230508020120.218494-4-rmclure@linux.ibm.com (mailing list archive)
State Superseded
Series powerpc: KCSAN fix warnings and mark accesses

Commit Message

Rohan McLure May 8, 2023, 2:01 a.m. UTC
Use a compiler barrier to enforce that all fields of a new struct qnode
be written to (especially the lock value) before publishing the qnode to
the waitqueue.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
 arch/powerpc/lib/qspinlock.c | 4 ++++
 1 file changed, 4 insertions(+)
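
For context on the approach: barrier() is the kernel's pure compiler barrier, an
empty asm statement with a "memory" clobber. It prevents the compiler from
reordering the qnode field stores past it, but emits no machine instruction and
so imposes no CPU-level ordering on its own. In the GCC-compatible compiler
headers it is defined roughly as:

/* Pure compiler barrier: no instruction emitted, only a "memory" clobber. */
#define barrier() __asm__ __volatile__("" : : : "memory")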

Comments

Nicholas Piggin May 9, 2023, 2:04 a.m. UTC | #1
On Mon May 8, 2023 at 12:01 PM AEST, Rohan McLure wrote:
> Use a compiler barrier to enforce that all fields of a new struct qnode
> be written to (especially the lock value) before publishing the qnode to
> the waitqueue.

publish_tail_cpu is the release barrier for this and includes the memory
clobber there. Can we annotate that instead?

Thanks,
Nick

>
> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
> ---
>  arch/powerpc/lib/qspinlock.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
> index 579290d55abf..d548001a86be 100644
> --- a/arch/powerpc/lib/qspinlock.c
> +++ b/arch/powerpc/lib/qspinlock.c
> @@ -567,6 +567,10 @@ static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
>  	node->cpu = smp_processor_id();
>  	node->yield_cpu = -1;
>  	node->locked = 1;
> +	/*
> +	 * Assign all attributes of a node before it can be published.
> +	 */
> +	barrier();
>  
>  	tail = encode_tail_cpu(node->cpu);
>  
> -- 
> 2.37.2
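(The release path Nick refers to, publish_tail_cpu() in
arch/powerpc/lib/qspinlock.c, updates the lock's tail field with inline asm
that carries a release barrier and a "memory" clobber, so the compiler already
cannot sink the qnode stores past it.) The standalone sketch below shows the
same publish-with-release pattern using C11 atomics; the struct and function
names are illustrative stand-ins, not the kernel code:

#include <stdatomic.h>
#include <stdio.h>

/* Stand-ins for the qnode fields set up in queued_spin_lock_mcs_queue(). */
struct qnode_sketch {
	int cpu;
	int yield_cpu;
	int locked;
};

static struct qnode_sketch node;
static _Atomic unsigned int lock_word;	/* stand-in for the lock's tail field */

static void publish_tail_sketch(unsigned int tail)
{
	/*
	 * A release store orders every earlier plain store (the qnode
	 * initialisation) before the new tail becomes visible to other
	 * threads, which is why a separate barrier() before the publish
	 * is redundant when the publish itself is a release.
	 */
	atomic_store_explicit(&lock_word, tail, memory_order_release);
}

int main(void)
{
	node.cpu = 64;
	node.yield_cpu = -1;
	node.locked = 1;	/* ordered before the publish by the release below */
	publish_tail_sketch((unsigned int)node.cpu + 1);
	printf("published tail %u\n",
	       atomic_load_explicit(&lock_word, memory_order_acquire));
	return 0;
}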
Rohan McLure May 9, 2023, 5:26 a.m. UTC | #2
> On 9 May 2023, at 12:04 pm, Nicholas Piggin <npiggin@gmail.com> wrote:
> 
> On Mon May 8, 2023 at 12:01 PM AEST, Rohan McLure wrote:
>> Use a compiler barrier to enforce that all fields of a new struct qnode
>> be written to (especially the lock value) before publishing the qnode to
>> the waitqueue.
> 
> publish_tail_cpu is the release barrier for this and includes the memory
> clobber there. Can we annotate that instead?

Got it, I see that one now.

On another note though, it looks like the memory clobber doesn’t serve
to squash KCSAN warnings here.

==================================================================
BUG: KCSAN: data-race in queued_spin_lock_slowpath / queued_spin_lock_slowpath

write (marked) to 0xc000000ff3790b20 of 1 bytes by task 1045 on cpu 64:

write (reordered) to 0xc000000ff3790b20 of 1 bytes by task 1063 on cpu 31:
  |
  +-> reordered to: queued_spin_lock_slowpath+0xcec/0x1b70

Reported by Kernel Concurrency Sanitizer on:
==================================================================

The one-byte memory access in question is to ‘locked’. KCSAN takes issue
with the assignment to locked being unmarked, even though during the
window in which this assignment can occur the node is not yet visible to
other CPUs. Looks like I’ll have to annotate it with data_race(), even
though there is no capacity for a real data race.
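
Purely as an illustration of that option (not necessarily the patch that was
eventually sent), the annotation would sit on the store in the hunk quoted
above; the kernel's data_race() annotation marks the access as an intentional
unmarked access for KCSAN without adding any ordering:

	node->cpu = smp_processor_id();
	node->yield_cpu = -1;
	/* node is not yet published, so there is no real race to report */
	data_race(node->locked = 1);

	tail = encode_tail_cpu(node->cpu);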

> 
> Thanks,
> Nick
> 
>> 
>> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
>> ---
>> arch/powerpc/lib/qspinlock.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>> 
>> diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
>> index 579290d55abf..d548001a86be 100644
>> --- a/arch/powerpc/lib/qspinlock.c
>> +++ b/arch/powerpc/lib/qspinlock.c
>> @@ -567,6 +567,10 @@ static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
>> node->cpu = smp_processor_id();
>> node->yield_cpu = -1;
>> node->locked = 1;
>> + /*
>> + * Assign all attributes of a node before it can be published.
>> + */
>> + barrier();
>> 
>> tail = encode_tail_cpu(node->cpu);
>> 
>> -- 
>> 2.37.2
>
Nicholas Piggin May 9, 2023, 6:45 a.m. UTC | #3
On Tue May 9, 2023 at 3:26 PM AEST, Rohan McLure wrote:
> > On 9 May 2023, at 12:04 pm, Nicholas Piggin <npiggin@gmail.com> wrote:
> > 
> > On Mon May 8, 2023 at 12:01 PM AEST, Rohan McLure wrote:
> >> Use a compiler barrier to enforce that all fields of a new struct qnode
> >> be written to (especially the lock value) before publishing the qnode to
> >> the waitqueue.
> > 
> > publish_tail_cpu is the release barrier for this and includes the memory
> > clobber there. Can we annotate that instead?
>
> Got it, I see that one now.
>
> On another note though, it looks like the memory clobber doesn’t serve
> to squash KCSAN warnings here.

Aha, publish_tail_cpu() needs a kcsan_release().

Thanks,
Nick
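
A rough sketch of where that annotation would go (an illustration only, not
the follow-up patch; the excerpt of publish_tail_cpu() is abridged and may not
match the tree exactly). kcsan_release() tells KCSAN that the code which
follows acts as a release, so the plain qnode stores before the call are
treated as ordered rather than reported as reorderable:

static __always_inline u32 publish_tail_cpu(struct qspinlock *lock, u32 tail)
{
	u32 prev, tmp;

	/*
	 * The asm below already carries a release barrier and a "memory"
	 * clobber; kcsan_release() makes that release ordering visible to
	 * KCSAN as well.
	 */
	kcsan_release();

	/* ... existing lwarx/stwcx. asm that inserts the new tail, unchanged ... */

	return prev;
}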

Patch

diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
index 579290d55abf..d548001a86be 100644
--- a/arch/powerpc/lib/qspinlock.c
+++ b/arch/powerpc/lib/qspinlock.c
@@ -567,6 +567,10 @@  static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
 	node->cpu = smp_processor_id();
 	node->yield_cpu = -1;
 	node->locked = 1;
+	/*
+	 * Assign all attributes of a node before it can be published.
+	 */
+	barrier();
 
 	tail = encode_tail_cpu(node->cpu);