From patchwork Thu Jun 20 09:41:09 2013 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: pingfan liu X-Patchwork-Id: 252838 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Received: from lists.gnu.org (lists.gnu.org [IPv6:2001:4830:134:3::11]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (Client did not present a certificate) by ozlabs.org (Postfix) with ESMTPS id 780062C0090 for ; Thu, 20 Jun 2013 19:42:02 +1000 (EST) Received: from localhost ([::1]:44530 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UpbNO-0002XA-R4 for incoming@patchwork.ozlabs.org; Thu, 20 Jun 2013 05:41:58 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:39815) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UpbN4-0002X4-Vu for qemu-devel@nongnu.org; Thu, 20 Jun 2013 05:41:43 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1UpbMx-0005qe-4H for qemu-devel@nongnu.org; Thu, 20 Jun 2013 05:41:38 -0400 Received: from mail-lb0-f175.google.com ([209.85.217.175]:49271) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UpbMw-0005qS-PV for qemu-devel@nongnu.org; Thu, 20 Jun 2013 05:41:31 -0400 Received: by mail-lb0-f175.google.com with SMTP id r10so5596198lbi.20 for ; Thu, 20 Jun 2013 02:41:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=31gGXRVACPLiDdIY3RPFqqBGqY0+Wsw56bfZyz+IkSE=; b=uo3VLfKkAl2JkzL3fcatpBs89Kmaomce/PTUEGhrk11d0a7OwrymYT4aJTpXz79x4p g4NCu0IJ51LXDDjNj7jeVESXd6v/zDRCe0lPhol+1uAP8pudzbXLDydWbYNhJabpqLsh H1jH2ZJ9Mok92QQ/5Cu8YO7eaSlgx9jvxHMT1+2Lgamu7w8cxnPLYovcsIVHylEZWd0V 2g5vOPGoBw4I8gXU0FNp0QNcbnNUYX1D+O5e1lU7LLknbh13od7Z2uKtTNhlBr20Xhev 1p3kiaxUHx7o7ZdcVARiRFKKVDhMei6PoBpW4FsQwusjFN0yUregkQEB8Ypv1wzd6l5/ TxiA== X-Received: by 10.112.205.69 with SMTP id le5mr5182108lbc.3.1371721289779; Thu, 20 Jun 2013 02:41:29 -0700 (PDT) MIME-Version: 1.0 Received: by 10.114.24.135 with HTTP; Thu, 20 Jun 2013 02:41:09 -0700 (PDT) In-Reply-To: <51C2BA6B.2050706@redhat.com> References: <1371675569-6516-1-git-send-email-pingfank@linux.vnet.ibm.com> <1371675569-6516-3-git-send-email-pingfank@linux.vnet.ibm.com> <20130620073924.GA14255@stefanha-thinkpad.redhat.com> <51C2BA6B.2050706@redhat.com> From: liu ping fan Date: Thu, 20 Jun 2013 17:41:09 +0800 Message-ID: To: Paolo Bonzini X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 209.85.217.175 Cc: Kevin Wolf , Anthony Liguori , Liu Ping Fan , qemu-devel@nongnu.org, Stefan Hajnoczi Subject: Re: [Qemu-devel] [PATCH v3 2/2] QEMUBH: make AioContext's bh re-entrant X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org On Thu, Jun 20, 2013 at 4:16 PM, Paolo Bonzini wrote: > Il 20/06/2013 09:39, Stefan Hajnoczi ha scritto: >> qemu_bh_cancel() and qemu_bh_delete() are not modified by this patch. >> >> It seems that calling them from a thread is a little risky because there >> is no guarantee that the BH is no longer invoked after a thread calls >> these functions. >> >> I think that's worth a comment or do you want them to take the lock so >> they become safe? > > Taking the lock wouldn't help. The invoking loop of aio_bh_poll runs > lockless. I think a comment is better. > > qemu_bh_cancel is inherently not thread-safe, there's not much you can > do about it. > > qemu_bh_delete is safe as long as you wait for the bottom half to stop > before deleting the containing object. Once we have RCU, deletion of > QOM objects will be RCU-protected. Hence, a simple way could be to put > the first part of aio_bh_poll() within rcu_read_lock/unlock. > In fact, I have some idea about this, introduce another member - Object for QEMUBH which will be refereed in cb, then we leave anything to refcnt mechanism. For qemu_bh_cancel(), I do not figure out whether it is important or not to sync with caller. int ret; + int sched; ctx->walking_bh++; @@ -69,8 +70,10 @@ int aio_bh_poll(AioContext *ctx) /* Make sure fetching bh before accessing its members */ smp_read_barrier_depends(); next = bh->next; - if (!bh->deleted && bh->scheduled) { - bh->scheduled = 0; + sched = 0; + atomic_xchg(&bh->scheduled, sched); + if (!bh->deleted && sched) { + //bh->scheduled = 0; if (!bh->idle) ret = 1; bh->idle = 0; @@ -79,6 +82,9 @@ int aio_bh_poll(AioContext *ctx) */ smp_rmb(); bh->cb(bh->opaque); + if (bh->obj) { + object_unref(bh->obj); + } } } @@ -105,8 +111,12 @@ int aio_bh_poll(AioContext *ctx) void qemu_bh_schedule_idle(QEMUBH *bh) { - if (bh->scheduled) + int sched = 1; + + atomic_xchg( &bh->scheduled, sched); + if (sched) { return; + } /* Make sure any writes that are needed by the callback are done * before the locations are read in the aio_bh_poll. */ @@ -117,25 +127,46 @@ void qemu_bh_schedule_idle(QEMUBH *bh) void qemu_bh_schedule(QEMUBH *bh) { - if (bh->scheduled) + int sched = 1; + + atomic_xchg( &bh->scheduled, sched); + if (sched) { return; + } /* Make sure any writes that are needed by the callback are done * before the locations are read in the aio_bh_poll. */ smp_wmb(); bh->scheduled = 1; + if (bh->obj) { + object_ref(bh->obj); + } bh->idle = 0; aio_notify(bh->ctx); } void qemu_bh_cancel(QEMUBH *bh) { - bh->scheduled = 0; + int sched = 0; + + atomic_xchg( &bh->scheduled, sched); + if (sched) { + if (bh->obj) { + object_ref(bh->obj); + } + } } void qemu_bh_delete(QEMUBH *bh) { - bh->scheduled = 0; + int sched = 0; + + atomic_xchg( &bh->scheduled, sched); + if (sched) { + if (bh->obj) { + object_ref(bh->obj); + } + } bh->deleted = 1; } Regards, Pingfan >> The other thing I'm unclear on is the ->idle assignment followed >> immediately by a ->scheduled assignment. Without memory barriers >> aio_bh_poll() isn't guaranteed to get an ordered view of these updates: >> it may see an idle BH as a regular scheduled BH because ->idle is still >> 0. > > Right. You need to order ->idle writes before ->scheduled writes, and > add memory barriers, or alternatively use two bits in ->scheduled so > that you can assign both atomically. > > Paolo diff --git a/async.c b/async.c index 4b17eb7..60c35a1 100644 --- a/async.c +++ b/async.c @@ -61,6 +61,7 @@ int aio_bh_poll(AioContext *ctx) { QEMUBH *bh, **bhp, *next; int ret; + int sched; { QEMUBH *bh, **bhp, *next;