From patchwork Tue Jun 2 03:22:01 2015
X-Patchwork-Submitter: Fam Zheng
X-Patchwork-Id: 479278
From: Fam Zheng
To: qemu-devel@nongnu.org
Date: Tue, 2 Jun 2015 11:22:01 +0800
Message-Id: <1433215322-23529-13-git-send-email-famz@redhat.com>
In-Reply-To: <1433215322-23529-1-git-send-email-famz@redhat.com>
References: <1433215322-23529-1-git-send-email-famz@redhat.com>
Cc: Kevin Wolf, Paolo Bonzini, Jeff Cody, Stefan Hajnoczi, qemu-block@nongnu.org
Subject: [Qemu-devel] [PATCH v2 12/13] virtio-scsi-dataplane: Add backend lock listener

When a disk is attached to the SCSI bus, virtio_scsi_hotplug takes care of
protecting the block device with op blockers. We have not yet enabled block
jobs (as is done in virtio_blk_data_plane_create), but it is still better to
disable ioeventfd while the backend is locked. This ensures that guest I/O
requests are paused during QMP transactions (such as multi-disk snapshot or
backup).

A counter is added to the virtio-scsi device to keep track of the currently
locked disks. When it goes from 0 to 1, the ioeventfds are disabled; when it
drops back to 0, they are re-enabled.

Also, in device initialization, move the enabling of the ioeventfds to just
before the return, so that virtio_scsi_clear_aio is no longer needed there.
Rename it, pair it with an enabling variant, fix one coding style issue, and
use the pair at the device pause points.
Signed-off-by: Fam Zheng
---
 hw/scsi/virtio-scsi-dataplane.c | 78 ++++++++++++++++++++++++++++++-----------
 hw/scsi/virtio-scsi.c           |  3 ++
 include/hw/virtio/virtio-scsi.h |  3 ++
 3 files changed, 64 insertions(+), 20 deletions(-)

diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 5575648..28706a2 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -40,7 +40,6 @@ void virtio_scsi_set_iothread(VirtIOSCSI *s, IOThread *iothread)
 
 static VirtIOSCSIVring *virtio_scsi_vring_init(VirtIOSCSI *s,
                                                VirtQueue *vq,
-                                               EventNotifierHandler *handler,
                                                int n)
 {
     BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
@@ -60,7 +59,6 @@ static VirtIOSCSIVring *virtio_scsi_vring_init(VirtIOSCSI *s,
     r = g_slice_new(VirtIOSCSIVring);
     r->host_notifier = *virtio_queue_get_host_notifier(vq);
     r->guest_notifier = *virtio_queue_get_guest_notifier(vq);
-    aio_set_event_notifier(s->ctx, &r->host_notifier, handler);
 
     r->parent = s;
 
@@ -71,7 +69,6 @@ static VirtIOSCSIVring *virtio_scsi_vring_init(VirtIOSCSI *s,
     return r;
 
 fail_vring:
-    aio_set_event_notifier(s->ctx, &r->host_notifier, NULL);
     k->set_host_notifier(qbus->parent, n, false);
     g_slice_free(VirtIOSCSIVring, r);
     return NULL;
@@ -104,6 +101,9 @@ void virtio_scsi_vring_push_notify(VirtIOSCSIReq *req)
     }
 }
 
+static void virtio_scsi_start_ioeventfd(VirtIOSCSI *s);
+static void virtio_scsi_stop_ioeventfd(VirtIOSCSI *s);
+
 static void virtio_scsi_iothread_handle_ctrl(EventNotifier *notifier)
 {
     VirtIOSCSIVring *vring = container_of(notifier,
@@ -111,6 +111,7 @@ static void virtio_scsi_iothread_handle_ctrl(EventNotifier *notifier)
     VirtIOSCSI *s = VIRTIO_SCSI(vring->parent);
     VirtIOSCSIReq *req;
 
+    assert(!s->pause_counter);
     event_notifier_test_and_clear(notifier);
     while ((req = virtio_scsi_pop_req_vring(s, vring))) {
         virtio_scsi_handle_ctrl_req(s, req);
@@ -124,6 +125,7 @@ static void virtio_scsi_iothread_handle_event(EventNotifier *notifier)
     VirtIOSCSI *s = vring->parent;
     VirtIODevice *vdev = VIRTIO_DEVICE(s);
 
+    assert(!s->pause_counter);
     event_notifier_test_and_clear(notifier);
 
     if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
@@ -143,6 +145,7 @@ static void virtio_scsi_iothread_handle_cmd(EventNotifier *notifier)
     VirtIOSCSIReq *req, *next;
     QTAILQ_HEAD(, VirtIOSCSIReq) reqs = QTAILQ_HEAD_INITIALIZER(reqs);
 
+    assert(!s->pause_counter);
     event_notifier_test_and_clear(notifier);
     while ((req = virtio_scsi_pop_req_vring(s, vring))) {
         if (virtio_scsi_handle_cmd_req_prepare(s, req)) {
@@ -155,8 +158,52 @@ static void virtio_scsi_iothread_handle_cmd(EventNotifier *notifier)
     }
 }
 
+void virtio_scsi_dataplane_pause_handler(Notifier *notifier, void *data)
+{
+    VirtIOSCSI *s = container_of(notifier, VirtIOSCSI, pause_notifier);
+    BdrvLockEvent *event = data;
+
+    if (event->locking) {
+        s->pause_counter++;
+        if (s->pause_counter == 1) {
+            virtio_scsi_stop_ioeventfd(s);
+        }
+    } else {
+        s->pause_counter--;
+        if (s->pause_counter == 0) {
+            virtio_scsi_start_ioeventfd(s);
+        }
+    }
+    assert(s->pause_counter >= 0);
+}
+
 /* assumes s->ctx held */
-static void virtio_scsi_clear_aio(VirtIOSCSI *s)
+static void virtio_scsi_start_ioeventfd(VirtIOSCSI *s)
+{
+    VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+    int i;
+
+    if (!s->dataplane_started || s->dataplane_stopping) {
+        return;
+    }
+    if (s->ctrl_vring) {
+        aio_set_event_notifier(s->ctx, &s->ctrl_vring->host_notifier,
+                               virtio_scsi_iothread_handle_ctrl);
+    }
+    if (s->event_vring) {
+        aio_set_event_notifier(s->ctx, &s->event_vring->host_notifier,
+                               virtio_scsi_iothread_handle_event);
+    }
+    if (s->cmd_vrings) {
+        for (i = 0; i < vs->conf.num_queues && s->cmd_vrings[i]; i++) {
+            aio_set_event_notifier(s->ctx, &s->cmd_vrings[i]->host_notifier,
+                                   virtio_scsi_iothread_handle_cmd);
+        }
+    }
+}
+
+/* assumes s->ctx held */
+static void virtio_scsi_stop_ioeventfd(VirtIOSCSI *s)
 {
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
     int i;
@@ -169,7 +216,8 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
     }
     if (s->cmd_vrings) {
         for (i = 0; i < vs->conf.num_queues && s->cmd_vrings[i]; i++) {
-            aio_set_event_notifier(s->ctx, &s->cmd_vrings[i]->host_notifier, NULL);
+            aio_set_event_notifier(s->ctx, &s->cmd_vrings[i]->host_notifier,
+                                   NULL);
         }
     }
 }
@@ -229,24 +277,18 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
     }
 
     aio_context_acquire(s->ctx);
-    s->ctrl_vring = virtio_scsi_vring_init(s, vs->ctrl_vq,
-                                           virtio_scsi_iothread_handle_ctrl,
-                                           0);
+    s->ctrl_vring = virtio_scsi_vring_init(s, vs->ctrl_vq, 0);
     if (!s->ctrl_vring) {
         goto fail_vrings;
     }
-    s->event_vring = virtio_scsi_vring_init(s, vs->event_vq,
-                                            virtio_scsi_iothread_handle_event,
-                                            1);
+    s->event_vring = virtio_scsi_vring_init(s, vs->event_vq, 1);
     if (!s->event_vring) {
         goto fail_vrings;
     }
     s->cmd_vrings = g_new(VirtIOSCSIVring *, vs->conf.num_queues);
     for (i = 0; i < vs->conf.num_queues; i++) {
         s->cmd_vrings[i] =
-            virtio_scsi_vring_init(s, vs->cmd_vqs[i],
-                                   virtio_scsi_iothread_handle_cmd,
-                                   i + 2);
+            virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2);
         if (!s->cmd_vrings[i]) {
             goto fail_vrings;
         }
@@ -254,11 +296,11 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
 
     s->dataplane_starting = false;
     s->dataplane_started = true;
+    virtio_scsi_start_ioeventfd(s);
     aio_context_release(s->ctx);
     return;
 
 fail_vrings:
-    virtio_scsi_clear_aio(s);
     aio_context_release(s->ctx);
     virtio_scsi_vring_teardown(s);
     for (i = 0; i < vs->conf.num_queues + 2; i++) {
@@ -290,11 +332,7 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
 
     aio_context_acquire(s->ctx);
 
-    aio_set_event_notifier(s->ctx, &s->ctrl_vring->host_notifier, NULL);
-    aio_set_event_notifier(s->ctx, &s->event_vring->host_notifier, NULL);
-    for (i = 0; i < vs->conf.num_queues; i++) {
-        aio_set_event_notifier(s->ctx, &s->cmd_vrings[i]->host_notifier, NULL);
-    }
+    virtio_scsi_stop_ioeventfd(s);
 
     blk_drain_all(); /* ensure there are no in-flight requests */
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index b0dee29..0a770d2 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -774,6 +774,8 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_op_block_all(sd->conf.blk, s->blocker);
         aio_context_acquire(s->ctx);
         blk_set_aio_context(sd->conf.blk, s->ctx);
+        s->pause_notifier.notify = virtio_scsi_dataplane_pause_handler;
+        blk_add_lock_unlock_notifier(sd->conf.blk, &s->pause_notifier);
         aio_context_release(s->ctx);
     }
 
@@ -798,6 +800,7 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     }
 
     if (s->ctx) {
+        notifier_remove(&s->pause_notifier);
         blk_op_unblock_all(sd->conf.blk, s->blocker);
     }
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index b42e7f1..928b87e 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -97,6 +97,8 @@ typedef struct VirtIOSCSI {
     bool dataplane_disabled;
     bool dataplane_fenced;
     Error *blocker;
+    Notifier pause_notifier;
+    int pause_counter;
     Notifier migration_state_notifier;
     uint32_t host_features;
 } VirtIOSCSI;
@@ -170,6 +172,7 @@ void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
                             uint32_t event, uint32_t reason);
 
 void virtio_scsi_set_iothread(VirtIOSCSI *s, IOThread *iothread);
+void virtio_scsi_dataplane_pause_handler(Notifier *notifier, void *data);
 void virtio_scsi_dataplane_start(VirtIOSCSI *s);
 void virtio_scsi_dataplane_stop(VirtIOSCSI *s);
 void virtio_scsi_vring_push_notify(VirtIOSCSIReq *req);
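The lock/unlock transitions handled by virtio_scsi_dataplane_pause_handler in
the patch amount to a reference count. Below is a minimal stand-alone sketch of
that counter logic; `ioeventfd_enabled` and `pause_event` are hypothetical
stand-ins for the real virtio_scsi_start_ioeventfd()/virtio_scsi_stop_ioeventfd()
calls and the notifier callback, not the QEMU API:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the dataplane ioeventfd state. */
static bool ioeventfd_enabled = true;

/* Counts the currently locked disks, like s->pause_counter in the patch. */
static int pause_counter;

/* Sketch of the pause handler: disable on the 0 -> 1 transition,
 * re-enable on the 1 -> 0 transition. */
static void pause_event(bool locking)
{
    if (locking) {
        if (++pause_counter == 1) {
            ioeventfd_enabled = false;  /* first lock: stop ioeventfd */
        }
    } else {
        if (--pause_counter == 0) {
            ioeventfd_enabled = true;   /* last unlock: restart ioeventfd */
        }
    }
    /* unlock events must never outnumber lock events */
    assert(pause_counter >= 0);
}
```

Nested locks (e.g. several disks locked by one QMP transaction) keep the
ioeventfds disabled until the final unlock, which is why a counter is used
rather than a plain boolean.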