From patchwork Fri Apr 8 08:02:45 2016
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 607924
Date: Fri, 8 Apr 2016 11:02:45 +0300
From: "Michael S. Tsirkin"
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Peter Maydell, qemu-block@nongnu.org, Stefan Hajnoczi, Paolo Bonzini
Message-ID: <1460102154-16096-13-git-send-email-mst@redhat.com>
In-Reply-To: <1460102154-16096-1-git-send-email-mst@redhat.com>
References: <1460102154-16096-1-git-send-email-mst@redhat.com>
Subject: [Qemu-devel] [PULL 12/15] virtio-blk: use aio handler for data plane

In addition to handling I/O in the vcpu thread and in the iothread, dataplane
introduces yet another mode: handling it in the AioContext.  This reuses the
same handler as the previous modes, which triggers races because those
handlers were not designed to be reentrant.  Use a separate handler just for
aio, and disable the regular handlers when dataplane is active.

Signed-off-by: Michael S. Tsirkin
Signed-off-by: Paolo Bonzini
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Michael S. Tsirkin
---
 include/hw/virtio/virtio-blk.h  |  2 ++
 hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
 hw/block/virtio-blk.c           | 27 +++++++++++++++++----------
 3 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 59ae1e4..8f2b056 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -86,4 +86,6 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb);
 
 void virtio_blk_submit_multireq(BlockBackend *blk, MultiReqBuffer *mrb);
 
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
+
 #endif
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 2870d21..65c7f70 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -184,6 +184,17 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     g_free(s);
 }
 
+static void virtio_blk_data_plane_handle_output(VirtIODevice *vdev,
+                                                VirtQueue *vq)
+{
+    VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+    assert(s->dataplane);
+    assert(s->dataplane_started);
+
+    virtio_blk_handle_vq(s, vq);
+}
+
 /* Context: QEMU global mutex held */
 void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 {
@@ -226,6 +237,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 
     /* Get this show started by hooking up our callbacks */
     aio_context_acquire(s->ctx);
+    virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
     aio_context_release(s->ctx);
     return;
@@ -262,6 +274,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
 
     /* Stop notifications for new requests from guest */
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
+    virtio_set_queue_aio(s->vq, NULL);
 
     /* Drain and switch bs back to the QEMU main loop */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 151fe78..3f88f8c 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -578,20 +578,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
     }
 }
 
-static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
 {
-    VirtIOBlock *s = VIRTIO_BLK(vdev);
     VirtIOBlockReq *req;
     MultiReqBuffer mrb = {};
 
-    /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
-     * dataplane here instead of waiting for .set_status().
-     */
-    if (s->dataplane && !s->dataplane_started) {
-        virtio_blk_data_plane_start(s->dataplane);
-        return;
-    }
-
     blk_io_plug(s->blk);
 
     while ((req = virtio_blk_get_request(s))) {
@@ -605,6 +596,22 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
     blk_io_unplug(s->blk);
 }
 
+static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+    if (s->dataplane) {
+        /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
+         * dataplane here instead of waiting for .set_status().
+         */
+        virtio_blk_data_plane_start(s->dataplane);
+        if (!s->dataplane_disabled) {
+            return;
+        }
+    }
+    virtio_blk_handle_vq(s, vq);
+}
+
 static void virtio_blk_dma_restart_bh(void *opaque)
 {
     VirtIOBlock *s = opaque;
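
To make the new control flow concrete, here is a minimal, standalone C sketch
of the dispatch this patch introduces.  It is not QEMU code: the struct and
function names are simplified stand-ins for VirtIOBlock and for the
virtio_blk_handle_output() / virtio_blk_data_plane_handle_output() handlers in
the diff above, and starting dataplane is reduced to flipping a flag.

/* Standalone sketch (not QEMU code): models the two-handler dispatch. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct vq;                           /* stand-in for VirtQueue */

struct blk {
    bool dataplane;                  /* dataplane configured for this device */
    bool dataplane_started;          /* AioContext handler is active */
    bool dataplane_disabled;         /* start failed, fall back to regular path */
};

/* Common request-processing path, shared by both handlers. */
static void handle_vq(struct blk *s, struct vq *vq)
{
    (void)vq;
    printf("processing requests (dataplane_started=%d)\n", s->dataplane_started);
}

/* AioContext handler: only ever runs once dataplane is up. */
static void dataplane_handle_output(struct blk *s, struct vq *vq)
{
    assert(s->dataplane && s->dataplane_started);
    handle_vq(s, vq);
}

/* Regular (vcpu/main-loop) handler: starts dataplane on the first kick and
 * then gets out of the way unless dataplane could not be enabled. */
static void handle_output(struct blk *s, struct vq *vq)
{
    if (s->dataplane) {
        s->dataplane_started = true; /* models virtio_blk_data_plane_start() */
        if (!s->dataplane_disabled) {
            return;                  /* aio handler takes over from here */
        }
    }
    handle_vq(s, vq);
}

int main(void)
{
    struct blk s = { .dataplane = true };
    handle_output(&s, NULL);            /* guest kick before DRIVER_OK */
    dataplane_handle_output(&s, NULL);  /* later kicks arrive via AioContext */
    return 0;
}

Because the AioContext handler never starts dataplane and never runs the
regular path, the two handlers cannot race on the same request state, which is
the reentrancy problem the commit message describes.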