
[1/2] async: add aio_bh_schedule_oneshot

Message ID 1475511256-20051-2-git-send-email-pbonzini@redhat.com
State New

Commit Message

Paolo Bonzini Oct. 3, 2016, 4:14 p.m. UTC
qemu_bh_delete is already clearing bh->scheduled at the same time
as it's setting bh->deleted.  Since it's not using any memory
barriers, there is no synchronization going on for bh->deleted,
and this makes the bh->deleted checks superfluous in aio_compute_timeout,
aio_bh_poll and aio_ctx_check.

Just remove them, and put the (bh->scheduled && bh->deleted) combo
to work in a new function aio_bh_schedule_oneshot.  The new function
removes the need to save the QEMUBH pointer between the creation
and the execution of the bottom half.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 async.c             | 27 +++++++++++++++++++++++----
 include/block/aio.h | 10 ++++++++++
 2 files changed, 33 insertions(+), 4 deletions(-)
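
To make the bookkeeping that goes away concrete, here is a minimal caller-side sketch. Only aio_bh_new, qemu_bh_schedule, qemu_bh_delete and the new aio_bh_schedule_oneshot are real QEMU API; MyState, my_cb, schedule_old and schedule_new are hypothetical names used for illustration:

#include "block/aio.h"

/* Before this patch: a one-shot BH needs its QEMUBH pointer saved between
 * creation and execution so the callback can delete it. */
typedef struct MyState {
    QEMUBH *bh;
} MyState;

static void my_cb(void *opaque)
{
    MyState *s = opaque;

    qemu_bh_delete(s->bh);   /* sets bh->deleted, clears bh->scheduled */
    s->bh = NULL;
    /* ... the actual one-shot work ... */
}

static void schedule_old(AioContext *ctx, MyState *s)
{
    s->bh = aio_bh_new(ctx, my_cb, s);
    qemu_bh_schedule(s->bh);
}

/* After this patch: no pointer to store and no explicit delete. */
static void schedule_new(AioContext *ctx, MyState *s)
{
    aio_bh_schedule_oneshot(ctx, my_cb, s);
}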

Comments

Stefan Hajnoczi Oct. 5, 2016, 1:13 p.m. UTC | #1
On Mon, Oct 03, 2016 at 06:14:15PM +0200, Paolo Bonzini wrote:
> qemu_bh_delete is already clearing bh->scheduled at the same time
> as it's setting bh->deleted.  Since it's not using any memory
> barriers, there is no synchronization going on for bh->deleted,
> and this makes the bh->deleted checks superfluous in aio_compute_timeout,
> aio_bh_poll and aio_ctx_check.

Yikes.  On one hand this sounds scary but in practice qemu_bh_delete()
isn't called from another thread so the next aio_bh_poll() will indeed
clean it up instead of dispatching a deleted BH.

Due to the nature of this change I suggest making it in a separate
patch.

> Just remove them, and put the (bh->scheduled && bh->deleted) combo
> to work in a new function aio_bh_schedule_oneshot.  The new function
> removes the need to save the QEMUBH pointer between the creation
> and the execution of the bottom half.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  async.c             | 27 +++++++++++++++++++++++----
>  include/block/aio.h | 10 ++++++++++
>  2 files changed, 33 insertions(+), 4 deletions(-)
> 
> diff --git a/async.c b/async.c
> index 3bca9b0..f30d011 100644
> --- a/async.c
> +++ b/async.c
> @@ -44,6 +44,25 @@ struct QEMUBH {
>      bool deleted;
>  };
>  
> +void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
> +{
> +    QEMUBH *bh;
> +    bh = g_new(QEMUBH, 1);
> +    *bh = (QEMUBH){
> +        .ctx = ctx,
> +        .cb = cb,
> +        .opaque = opaque,
> +    };
> +    qemu_mutex_lock(&ctx->bh_lock);
> +    bh->next = ctx->first_bh;
> +    bh->scheduled = 1;
> +    bh->deleted = 1;
> +    /* Make sure that the members are ready before putting bh into list */
> +    smp_wmb();
> +    ctx->first_bh = bh;
> +    qemu_mutex_unlock(&ctx->bh_lock);
> +}
> +
>  QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
>  {
>      QEMUBH *bh;
> @@ -86,7 +105,7 @@ int aio_bh_poll(AioContext *ctx)
>           * thread sees the zero before bh->cb has run, and thus will call
>           * aio_notify again if necessary.
>           */
> -        if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
> +        if (atomic_xchg(&bh->scheduled, 0)) {
>              /* Idle BHs and the notify BH don't count as progress */
>              if (!bh->idle && bh != ctx->notify_dummy_bh) {
>                  ret = 1;
> @@ -104,7 +123,7 @@ int aio_bh_poll(AioContext *ctx)
>          bhp = &ctx->first_bh;
>          while (*bhp) {
>              bh = *bhp;
> -            if (bh->deleted) {
> +            if (bh->deleted && !bh->scheduled) {
>                  *bhp = bh->next;
>                  g_free(bh);
>              } else {
> @@ -168,7 +187,7 @@ aio_compute_timeout(AioContext *ctx)
>      QEMUBH *bh;
>  
>      for (bh = ctx->first_bh; bh; bh = bh->next) {
> -        if (!bh->deleted && bh->scheduled) {
> +        if (bh->scheduled) {
>              if (bh->idle) {
>                  /* idle bottom halves will be polled at least
>                   * every 10ms */
> @@ -216,7 +235,7 @@ aio_ctx_check(GSource *source)
>      aio_notify_accept(ctx);
>  
>      for (bh = ctx->first_bh; bh; bh = bh->next) {
> -        if (!bh->deleted && bh->scheduled) {
> +        if (bh->scheduled) {
>              return true;
>          }
>      }
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 173c1ed..be7dd3b 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -181,6 +181,16 @@ void aio_context_acquire(AioContext *ctx);
>  void aio_context_release(AioContext *ctx);
>  
>  /**
> + * aio_bh_schedule_oneshot: Allocate a new bottom half structure that will run
> + * only once and as soon as possible.
> + *
> + * Bottom halves are lightweight callbacks whose invocation is guaranteed
> + * to be wait-free, thread-safe and signal-safe.  The #QEMUBH structure
> + * is opaque and must be allocated prior to its use.

I'm confused.  There is no QEMUBH structure in this function
prototype.  Is this comment from an earlier version of this function?

> + */
> +void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque);
> +
> +/**
>   * aio_bh_new: Allocate a new bottom half structure.
>   *
>   * Bottom halves are lightweight callbacks whose invocation is guaranteed
> -- 
> 2.7.4
> 
>
Paolo Bonzini Oct. 5, 2016, 1:55 p.m. UTC | #2
On 05/10/2016 15:13, Stefan Hajnoczi wrote:
> > qemu_bh_delete is already clearing bh->scheduled at the same time
> > as it's setting bh->deleted.  Since it's not using any memory
> > barriers, there is no synchronization going on for bh->deleted,
> > and this makes the bh->deleted checks superfluous in aio_compute_timeout,
> > aio_bh_poll and aio_ctx_check.
> 
> Yikes.  On one hand this sounds scary but in practice qemu_bh_delete()
> isn't called from another thread so the next aio_bh_poll() will indeed
> clean it up instead of dispatching a deleted BH.
> 
> Due to the nature of this change I suggest making it in a separate
> patch.

Separate from what?  (Sorry if I'm being dense).

>>
>> + * aio_bh_schedule_oneshot: Allocate a new bottom half structure that will run
>> + * only once and as soon as possible.
>> + *
>> + * Bottom halves are lightweight callbacks whose invocation is guaranteed
>> + * to be wait-free, thread-safe and signal-safe.  The #QEMUBH structure
>> + * is opaque and must be allocated prior to its use.
> 
> I'm confused.  There is no QEMUBH structure in this function
> prototype.  Is this comment from an earlier version of this function?

No, it's from aio_bh_new.  Of course this one is neither wait-free nor
signal-safe.  Kevin, do you want me to respin?

Paolo
Kevin Wolf Oct. 5, 2016, 2:20 p.m. UTC | #3
Am 05.10.2016 um 15:55 hat Paolo Bonzini geschrieben:
> On 05/10/2016 15:13, Stefan Hajnoczi wrote:
> > > qemu_bh_delete is already clearing bh->scheduled at the same time
> > > as it's setting bh->deleted.  Since it's not using any memory
> > > barriers, there is no synchronization going on for bh->deleted,
> > > and this makes the bh->deleted checks superfluous in aio_compute_timeout,
> > > aio_bh_poll and aio_ctx_check.
> > 
> > Yikes.  On one hand this sounds scary but in practice qemu_bh_delete()
> > isn't called from another thread so the next aio_bh_poll() will indeed
> > clean it up instead of dispatching a deleted BH.
> > 
> > Due to the nature of this change I suggest making it in a separate
> > patch.
> 
> Separate from what?  (Sorry if I'm being dense).
> 
> >>
> >> + * aio_bh_schedule_oneshot: Allocate a new bottom half structure that will run
> >> + * only once and as soon as possible.
> >> + *
> >> + * Bottom halves are lightweight callbacks whose invocation is guaranteed
> >> + * to be wait-free, thread-safe and signal-safe.  The #QEMUBH structure
> >> + * is opaque and must be allocated prior to its use.
> > 
> > I'm confused.  There is no QEMUBH structure in this function
> > prototype.  Is this comment from an earlier version of this function?
> 
> No, it's from aio_bh_new.  Of course this one is neither wait-free nor
> signal-safe.  Kevin, do you want me to respin?

If the comment is wrong, either post a v2 of this patch or just reply
with a new version of the comment and I'll squash it in. Your choice, I
don't mind either way.

Kevin
Paolo Bonzini Oct. 5, 2016, 2:25 p.m. UTC | #4
On 05/10/2016 16:20, Kevin Wolf wrote:
> Am 05.10.2016 um 15:55 hat Paolo Bonzini geschrieben:
>> On 05/10/2016 15:13, Stefan Hajnoczi wrote:
>>>> qemu_bh_delete is already clearing bh->scheduled at the same time
>>>> as it's setting bh->deleted.  Since it's not using any memory
>>>> barriers, there is no synchronization going on for bh->deleted,
>>>> and this makes the bh->deleted checks superfluous in aio_compute_timeout,
>>>> aio_bh_poll and aio_ctx_check.
>>>
>>> Yikes.  On one hand this sounds scary but in practice qemu_bh_delete()
>>> isn't called from another thread so the next aio_bh_poll() will indeed
>>> clean it up instead of dispatching a deleted BH.
>>>
>>> Due to the nature of this change I suggest making it in a separate
>>> patch.
>>
>> Separate from what?  (Sorry if I'm being dense).
>>
>>>>
>>>> + * aio_bh_schedule_oneshot: Allocate a new bottom half structure that will run
>>>> + * only once and as soon as possible.
>>>> + *
>>>> + * Bottom halves are lightweight callbacks whose invocation is guaranteed
>>>> + * to be wait-free, thread-safe and signal-safe.  The #QEMUBH structure
>>>> + * is opaque and must be allocated prior to its use.
>>>
>>> I'm confused.  There is no QEMUBH structure in this function
>>> prototype.  Is this comment from an earlier version of this function?
>>
>> No, it's from aio_bh_new.  Of course this one is neither wait-free nor
>> signal-safe.  Kevin, do you want me to respin?
> 
> If the comment is wrong, either post a v2 of this patch or just reply
> with a new version of the comment and I'll squash it in. Your choice, I
> don't mind either way.

Just removing those three lines would be okay.

Paolo
Kevin Wolf Oct. 5, 2016, 3:26 p.m. UTC | #5
Am 05.10.2016 um 16:25 hat Paolo Bonzini geschrieben:
> 
> 
> On 05/10/2016 16:20, Kevin Wolf wrote:
> > Am 05.10.2016 um 15:55 hat Paolo Bonzini geschrieben:
> >> On 05/10/2016 15:13, Stefan Hajnoczi wrote:
> >>>> qemu_bh_delete is already clearing bh->scheduled at the same time
> >>>> as it's setting bh->deleted.  Since it's not using any memory
> >>>> barriers, there is no synchronization going on for bh->deleted,
> >>>> and this makes the bh->deleted checks superfluous in aio_compute_timeout,
> >>>> aio_bh_poll and aio_ctx_check.
> >>>
> >>> Yikes.  On one hand this sounds scary but in practice qemu_bh_delete()
> >>> isn't called from another thread so the next aio_bh_poll() will indeed
> >>> clean it up instead of dispatching a deleted BH.
> >>>
> >>> Due to the nature of this change I suggest making it in a separate
> >>> patch.
> >>
> >> Separate from what?  (Sorry if I'm being dense).
> >>
> >>>>
> >>>> + * aio_bh_schedule_oneshot: Allocate a new bottom half structure that will run
> >>>> + * only once and as soon as possible.
> >>>> + *
> >>>> + * Bottom halves are lightweight callbacks whose invocation is guaranteed
> >>>> + * to be wait-free, thread-safe and signal-safe.  The #QEMUBH structure
> >>>> + * is opaque and must be allocated prior to its use.
> >>>
> >>> I'm confused.  There is no QEMUBH structure in this function
> >>> prototype.  Is this comment from an earlier version of this function?
> >>
> >> No, it's from aio_bh_new.  Of course this one is neither wait-free nor
> >> signal-safe.  Kevin, do you want me to respin?
> > 
> > If the comment is wrong, either post a v2 of this patch or just reply
> > with a new version of the comment and I'll squash it in. Your choice, I
> > don't mind either way.
> 
> Just removing those three lines would be okay.

Ok, done.

Kevin
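
For reference, dropping those three lines leaves the following header comment, which is what gets squashed in (a sketch of the squash, not the literal committed text):

/**
 * aio_bh_schedule_oneshot: Allocate a new bottom half structure that will run
 * only once and as soon as possible.
 */
void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque);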

Patch

diff --git a/async.c b/async.c
index 3bca9b0..f30d011 100644
--- a/async.c
+++ b/async.c
@@ -44,6 +44,25 @@ struct QEMUBH {
     bool deleted;
 };
 
+void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
+{
+    QEMUBH *bh;
+    bh = g_new(QEMUBH, 1);
+    *bh = (QEMUBH){
+        .ctx = ctx,
+        .cb = cb,
+        .opaque = opaque,
+    };
+    qemu_mutex_lock(&ctx->bh_lock);
+    bh->next = ctx->first_bh;
+    bh->scheduled = 1;
+    bh->deleted = 1;
+    /* Make sure that the members are ready before putting bh into list */
+    smp_wmb();
+    ctx->first_bh = bh;
+    qemu_mutex_unlock(&ctx->bh_lock);
+}
+
 QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
 {
     QEMUBH *bh;
@@ -86,7 +105,7 @@ int aio_bh_poll(AioContext *ctx)
          * thread sees the zero before bh->cb has run, and thus will call
          * aio_notify again if necessary.
          */
-        if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
+        if (atomic_xchg(&bh->scheduled, 0)) {
             /* Idle BHs and the notify BH don't count as progress */
             if (!bh->idle && bh != ctx->notify_dummy_bh) {
                 ret = 1;
@@ -104,7 +123,7 @@ int aio_bh_poll(AioContext *ctx)
         bhp = &ctx->first_bh;
         while (*bhp) {
             bh = *bhp;
-            if (bh->deleted) {
+            if (bh->deleted && !bh->scheduled) {
                 *bhp = bh->next;
                 g_free(bh);
             } else {
@@ -168,7 +187,7 @@ aio_compute_timeout(AioContext *ctx)
     QEMUBH *bh;
 
     for (bh = ctx->first_bh; bh; bh = bh->next) {
-        if (!bh->deleted && bh->scheduled) {
+        if (bh->scheduled) {
             if (bh->idle) {
                 /* idle bottom halves will be polled at least
                  * every 10ms */
@@ -216,7 +235,7 @@ aio_ctx_check(GSource *source)
     aio_notify_accept(ctx);
 
     for (bh = ctx->first_bh; bh; bh = bh->next) {
-        if (!bh->deleted && bh->scheduled) {
+        if (bh->scheduled) {
             return true;
         }
     }
diff --git a/include/block/aio.h b/include/block/aio.h
index 173c1ed..be7dd3b 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -181,6 +181,16 @@ void aio_context_acquire(AioContext *ctx);
 void aio_context_release(AioContext *ctx);
 
 /**
+ * aio_bh_schedule_oneshot: Allocate a new bottom half structure that will run
+ * only once and as soon as possible.
+ *
+ * Bottom halves are lightweight callbacks whose invocation is guaranteed
+ * to be wait-free, thread-safe and signal-safe.  The #QEMUBH structure
+ * is opaque and must be allocated prior to its use.
+ */
+void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque);
+
+/**
  * aio_bh_new: Allocate a new bottom half structure.
  *
  * Bottom halves are lightweight callbacks whose invocation is guaranteed
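
Taken together, the flag transitions above are what make the one-shot safe. An annotated trace of the lifecycle implied by this patch (a summary of the code above, not additional code):

/* aio_bh_schedule_oneshot():
 *     bh->scheduled = 1; bh->deleted = 1;   (published under ctx->bh_lock)
 *
 * aio_bh_poll(), dispatch loop:
 *     atomic_xchg(&bh->scheduled, 0) returns 1
 *         -> the callback runs exactly once; bh->scheduled is now 0
 *
 * aio_bh_poll(), deletion pass (under ctx->bh_lock):
 *     bh->deleted && !bh->scheduled now holds
 *         -> bh is unlinked from ctx->first_bh and g_free()d
 *
 * A BH from aio_bh_new() has bh->deleted == 0, so it survives the deletion
 * pass until qemu_bh_delete() sets bh->deleted and clears bh->scheduled.
 */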