
perf events ring buffer memory barrier on powerpc

Message ID 12083.1382486094@ale.ozlabs.ibm.com (mailing list archive)
State Not Applicable

Commit Message

Michael Neuling Oct. 22, 2013, 11:54 p.m. UTC
Frederic,

In the perf ring buffer code we have this in perf_output_put_handle():

	if (!local_dec_and_test(&rb->nest))
		goto out;

	/*
	 * Publish the known good head. Rely on the full barrier implied
	 * by atomic_dec_and_test() order the rb->head read and this
	 * write.
	 */
	rb->user_page->data_head = head;

The comment says atomic_dec_and_test() but the code is
local_dec_and_test().

On powerpc, local_dec_and_test() doesn't have a memory barrier but
atomic_dec_and_test() does.  Is the comment wrong, or is
local_dec_and_test() supposed to imply a memory barrier too and we have
it wrongly implemented in powerpc?

My guess is that local_dec_and_test() is correct but we need to add an
explicit memory barrier like below:

(Kudos to Victor Kaplansky for finding this)

Mikey

Comments

Victor Kaplansky Oct. 23, 2013, 7:39 a.m. UTC | #1
See below.

Michael Neuling <mikey@neuling.org> wrote on 10/23/2013 02:54:54 AM:

>
> diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
> index cd55144..95768c6 100644
> --- a/kernel/events/ring_buffer.c
> +++ b/kernel/events/ring_buffer.c
> @@ -87,10 +87,10 @@ again:
>        goto out;
>
>     /*
> -    * Publish the known good head. Rely on the full barrier implied
> -    * by atomic_dec_and_test() order the rb->head read and this
> -    * write.
> +    * Publish the known good head. We need a memory barrier to order the
> +    * order the rb->head read and this write.
>      */
> +   smp_mb ();
>     rb->user_page->data_head = head;
>
>     /*

1. As far as I understand, smp_mb() is superfluous in this case; smp_wmb()
   should be enough.  (The same goes for the space between the function
   name and the open parenthesis. :-) )

2. Again, as far as I understand from ./Documentation/atomic_ops.txt, it is
   a mistake in architecture-independent code to rely on memory barriers in
   atomic operations, all the more so in "local" operations.

3. The solution above is sub-optimal on architectures where a memory barrier
   is already part of the "local" operation, since we would execute two
   consecutive barriers. So maybe it would be better to use
   smp_mb__after_atomic_dec() (a sketch follows below).

4. I'm not sure, but I think there is another, unrelated potential problem
   in function perf_output_put_handle() - the write to "data_head" -

kernel/events/ring_buffer.c:

 77         /*
 78          * Publish the known good head. Rely on the full barrier implied
 79          * by atomic_dec_and_test() order the rb->head read and this
 80          * write.
 81          */
 82         rb->user_page->data_head = head;

As data_head is 64-bit wide, the update should be done by an atomic64_set().

Regards,
-- Victor
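
For concreteness, here is a minimal sketch of how point 3 might look when
applied to perf_output_put_handle(). This is an illustration only, not a
submitted patch; whether smp_mb__after_atomic_dec() is actually guaranteed
to provide ordering after a local_t operation (as opposed to an atomic_t
one) is part of the open question in this thread:

	if (!local_dec_and_test(&rb->nest))
		goto out;

	/*
	 * Publish the known good head.  An explicit barrier orders the
	 * rb->head read before this write; the __after_atomic_dec flavour
	 * lets architectures whose dec-and-test already implies a full
	 * barrier make this a no-op instead of a second barrier.
	 */
	smp_mb__after_atomic_dec();
	rb->user_page->data_head = head;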
Frédéric Weisbecker Oct. 23, 2013, 2:19 p.m. UTC | #2
On Wed, Oct 23, 2013 at 10:54:54AM +1100, Michael Neuling wrote:
> Frederic,
> 
> In the perf ring buffer code we have this in perf_output_put_handle():
> 
> 	if (!local_dec_and_test(&rb->nest))
> 		goto out;
> 
> 	/*
> 	 * Publish the known good head. Rely on the full barrier implied
> 	 * by atomic_dec_and_test() order the rb->head read and this
> 	 * write.
> 	 */
> 	rb->user_page->data_head = head;
> 
> The comment says atomic_dec_and_test() but the code is
> local_dec_and_test().
> 
> On powerpc, local_dec_and_test() doesn't have a memory barrier but
> atomic_dec_and_test() does.  Is the comment wrong, or is
> local_dec_and_test() supposed to imply a memory barrier too and we have
> it wrongly implemented in powerpc?
> 
> My guess is that local_dec_and_test() is correct but we need to add an
> explicit memory barrier like below:
> 
> (Kudos to Victor Kaplansky for finding this)
> 
> Mikey
> 
> diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
> index cd55144..95768c6 100644
> --- a/kernel/events/ring_buffer.c
> +++ b/kernel/events/ring_buffer.c
> @@ -87,10 +87,10 @@ again:
>  		goto out;
>  
>  	/*
> -	 * Publish the known good head. Rely on the full barrier implied
> -	 * by atomic_dec_and_test() order the rb->head read and this
> -	 * write.
> +	 * Publish the known good head. We need a memory barrier to order the
> +	 * order the rb->head read and this write.
>  	 */
> +	smp_mb ();
>  	rb->user_page->data_head = head;
>  
>  	/*


I'm adding Peter in Cc since he wrote that code.
I agree that local_dec_and_test() doesn't need to imply an smp barrier.
All it has to provide as a guarantee is the atomicity against local concurrent
operations (interrupts, preemption, ...).
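
As an illustration of that guarantee (in the spirit of
Documentation/local_ops.txt; the names below are made up, not existing
kernel code), a local_t counter can be updated from task context and from
an interrupt on the same CPU without locks or SMP barriers, but it promises
nothing about ordering as seen by other CPUs:

	#include <linux/interrupt.h>
	#include <linux/percpu.h>
	#include <asm/local.h>

	/* Illustrative per-CPU hit counter. */
	static DEFINE_PER_CPU(local_t, hits);

	/* Task context: stay on one CPU while touching its counter. */
	static void bump_hits(void)
	{
		local_t *c = get_cpu_ptr(&hits);	/* disables preemption */
		local_inc(c);				/* atomic vs. the IRQ below */
		put_cpu_ptr(&hits);
	}

	/* Interrupt handler on the same CPU: no lock, no smp_mb() needed. */
	static irqreturn_t hits_irq(int irq, void *dev)
	{
		local_inc(this_cpu_ptr(&hits));
		return IRQ_HANDLED;
	}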

Now I'm a bit confused about this barrier.

I think we want this ordering:

    Kernel                             User

   READ rb->user_page->data_tail       READ rb->user_page->data_head
   smp_mb()                            smp_mb()
   WRITE rb data                       READ rb  data
   smp_mb()                            smp_mb()
   rb->user_page->data_head            WRITE rb->user_page->data_tail

So yeah, we want a barrier between the published data and the user-visible data_head.
But this ordering concerns a wider layout than just rb->head and rb->user_page->data_head.

And BTW I can see an smp_rmb() after we read rb->user_page->data_tail. This is probably the
first kernel barrier in my example above (not sure if rmb() alone is enough, though).
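
The user-side column of that diagram corresponds roughly to the following
userspace sketch (illustrative only: the perf_event_open()/mmap() setup is
omitted, wrap-around of the data area is not handled, and
__sync_synchronize() is used as a portable stand-in for the smp_mb()/smp_rmb()
pairing above):

	#include <stddef.h>
	#include <stdint.h>
	#include <linux/perf_event.h>

	/*
	 * Drain whatever the kernel has published between data_tail and
	 * data_head.  "mp" is the mmap'ed perf_event_mmap_page; the data
	 * area is assumed to start one page after it.
	 */
	static void drain_ring(struct perf_event_mmap_page *mp, size_t page_size,
			       uint64_t data_size,
			       void (*consume)(const char *data, uint64_t off,
					       uint64_t len))
	{
		const char *data = (const char *)mp + page_size;
		uint64_t head, tail;

		head = mp->data_head;		/* READ data_head */
		__sync_synchronize();		/* barrier before reading the data */

		tail = mp->data_tail;
		if (head > tail)		/* READ rb data */
			consume(data, tail % data_size, head - tail);

		__sync_synchronize();		/* finish the data reads ... */
		mp->data_tail = head;		/* ... before WRITE data_tail */
	}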
Frédéric Weisbecker Oct. 23, 2013, 2:25 p.m. UTC | #3
2013/10/23 Frederic Weisbecker <fweisbec@gmail.com>:
> On Wed, Oct 23, 2013 at 10:54:54AM +1100, Michael Neuling wrote:
>> Frederic,
>>
>> In the perf ring buffer code we have this in perf_output_put_handle():
>>
>>       if (!local_dec_and_test(&rb->nest))
>>               goto out;
>>
>>       /*
>>        * Publish the known good head. Rely on the full barrier implied
>>        * by atomic_dec_and_test() order the rb->head read and this
>>        * write.
>>        */
>>       rb->user_page->data_head = head;
>>
>> The comment says atomic_dec_and_test() but the code is
>> local_dec_and_test().
>>
>> On powerpc, local_dec_and_test() doesn't have a memory barrier but
>> atomic_dec_and_test() does.  Is the comment wrong, or is
>> local_dec_and_test() supposed to imply a memory barrier too and we have
>> it wrongly implemented in powerpc?
>>
>> My guess is that local_dec_and_test() is correct but we need to add an
>> explicit memory barrier like below:
>>
>> (Kudos to Victor Kaplansky for finding this)
>>
>> Mikey
>>
>> diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
>> index cd55144..95768c6 100644
>> --- a/kernel/events/ring_buffer.c
>> +++ b/kernel/events/ring_buffer.c
>> @@ -87,10 +87,10 @@ again:
>>               goto out;
>>
>>       /*
>> -      * Publish the known good head. Rely on the full barrier implied
>> -      * by atomic_dec_and_test() order the rb->head read and this
>> -      * write.
>> +      * Publish the known good head. We need a memory barrier to order the
>> +      * order the rb->head read and this write.
>>        */
>> +     smp_mb ();
>>       rb->user_page->data_head = head;
>>
>>       /*
>
>
> I'm adding Peter in Cc since he wrote that code.
> I agree that local_dec_and_test() doesn't need to imply an smp barrier.
> All it has to provide as a guarantee is the atomicity against local concurrent
> operations (interrupts, preemption, ...).
>
> Now I'm a bit confused about this barrier.
>
> I think we want this ordering:
>
>     Kernel                             User
>
>    READ rb->user_page->data_tail       READ rb->user_page->data_head
>    smp_mb()                            smp_mb()
>    WRITE rb data                       READ rb  data
>    smp_mb()                            smp_mb()
>    rb->user_page->data_head            WRITE rb->user_page->data_tail
      ^^ I meant a write above for data_head.

Patch

diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index cd55144..95768c6 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -87,10 +87,10 @@  again:
 		goto out;
 
 	/*
-	 * Publish the known good head. Rely on the full barrier implied
-	 * by atomic_dec_and_test() order the rb->head read and this
-	 * write.
+	 * Publish the known good head. We need a memory barrier to order the
+	 * order the rb->head read and this write.
 	 */
+	smp_mb ();
 	rb->user_page->data_head = head;
 
 	/*