From patchwork Tue Nov 8 21:19:23 2022
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 1701487
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Emanuele Giuseppe Esposito , "Michael S. 
Tsirkin" , qemu-block@nongnu.org, Kevin Wolf , Hanna Reitz , Paolo Bonzini , Fam Zheng , Stefan Hajnoczi 
Subject: [PATCH 1/8] virtio_queue_aio_attach_host_notifier: remove AioContext lock
Date: Tue, 8 Nov 2022 16:19:23 -0500
Message-Id: <20221108211930.876142-2-stefanha@redhat.com>
In-Reply-To: <20221108211930.876142-1-stefanha@redhat.com>
References: <20221108211930.876142-1-stefanha@redhat.com>

From: Emanuele Giuseppe Esposito

virtio_queue_aio_attach_host_notifier() and
virtio_queue_aio_attach_host_notifier_no_poll() always run in the main loop,
so there is no need to protect them with the AioContext lock. On the other
hand, virtio_queue_aio_detach_host_notifier() runs in a BH in the iothread
context, but it is always scheduled (and thus serialized) by the main loop.
Therefore removing the AioContext lock is safe.

In order to remove the AioContext lock it is necessary to switch
aio_wait_bh_oneshot() to AIO_WAIT_WHILE_UNLOCKED(). virtio-blk and
virtio-scsi are the only users of aio_wait_bh_oneshot(), so it is possible
to make this change.

For now bdrv_set_aio_context() still needs the AioContext lock.

Signed-off-by: Emanuele Giuseppe Esposito 
Signed-off-by: Stefan Hajnoczi 
Message-Id: <20220609143727.1151816-2-eesposit@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito 
--- include/block/aio-wait.h | 4 ++-- hw/block/dataplane/virtio-blk.c | 10 ++++++---- hw/block/virtio-blk.c | 2 ++ hw/scsi/virtio-scsi-dataplane.c | 10 ++++------ util/aio-wait.c | 2 +- 5 files changed, 15 insertions(+), 13 deletions(-) diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h index dd9a7f6461..fce6bfee3a 100644 --- a/include/block/aio-wait.h +++ b/include/block/aio-wait.h @@ -131,8 +131,8 @@ void aio_wait_kick(void); * * Run a BH in @ctx and wait for it to complete. * - * Must be called from the main loop thread with @ctx acquired exactly once. - * Note that main loop event processing may occur. + * Must be called from the main loop thread. @ctx must not be acquired by the + * caller. Note that main loop event processing may occur. 
*/ void aio_wait_bh_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque); diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c index b28d81737e..975f5ca8c4 100644 --- a/hw/block/dataplane/virtio-blk.c +++ b/hw/block/dataplane/virtio-blk.c @@ -167,6 +167,8 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev) Error *local_err = NULL; int r; + GLOBAL_STATE_CODE(); + if (vblk->dataplane_started || s->starting) { return 0; } @@ -245,13 +247,11 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev) } /* Get this show started by hooking up our callbacks */ - aio_context_acquire(s->ctx); for (i = 0; i < nvqs; i++) { VirtQueue *vq = virtio_get_queue(s->vdev, i); virtio_queue_aio_attach_host_notifier(vq, s->ctx); } - aio_context_release(s->ctx); return 0; fail_aio_context: @@ -301,6 +301,8 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev) unsigned i; unsigned nvqs = s->conf->num_queues; + GLOBAL_STATE_CODE(); + if (!vblk->dataplane_started || s->stopping) { return; } @@ -314,9 +316,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev) s->stopping = true; trace_virtio_blk_data_plane_stop(s); - aio_context_acquire(s->ctx); aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s); + aio_context_acquire(s->ctx); + /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */ blk_drain(s->conf->conf.blk); @@ -325,7 +328,6 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev) * BlockBackend in the iothread, that's ok */ blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context(), NULL); - aio_context_release(s->ctx); /* diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 1762517878..cdc6fd5979 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -100,6 +100,8 @@ static void virtio_blk_rw_complete(void *opaque, int ret) VirtIOBlock *s = next->dev; VirtIODevice *vdev = VIRTIO_DEVICE(s); + IO_CODE(); + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); while (next) { VirtIOBlockReq *req = next; diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c index 20bb91766e..f6f55d4511 100644 --- a/hw/scsi/virtio-scsi-dataplane.c +++ b/hw/scsi/virtio-scsi-dataplane.c @@ -91,6 +91,8 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev) VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(vdev); VirtIOSCSI *s = VIRTIO_SCSI(vdev); + GLOBAL_STATE_CODE(); + if (s->dataplane_started || s->dataplane_starting || s->dataplane_fenced) { @@ -138,20 +140,18 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev) /* * These fields are visible to the IOThread so we rely on implicit barriers - * in aio_context_acquire() on the write side and aio_notify_accept() on - * the read side. + * in virtio_queue_aio_attach_host_notifier() on the write side and + * aio_notify_accept() on the read side. 
*/ s->dataplane_starting = false; s->dataplane_started = true; - aio_context_acquire(s->ctx); virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx); virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx); for (i = 0; i < vs->conf.num_queues; i++) { virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx); } - aio_context_release(s->ctx); return 0; fail_host_notifiers: @@ -197,9 +197,7 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev) } s->dataplane_stopping = true; - aio_context_acquire(s->ctx); aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s); - aio_context_release(s->ctx); blk_drain_all(); /* ensure there are no in-flight requests */ diff --git a/util/aio-wait.c b/util/aio-wait.c index 98c5accd29..80f26ee520 100644 --- a/util/aio-wait.c +++ b/util/aio-wait.c @@ -82,5 +82,5 @@ void aio_wait_bh_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque) assert(qemu_get_current_aio_context() == qemu_get_aio_context()); aio_bh_schedule_oneshot(ctx, aio_wait_bh, &data); - AIO_WAIT_WHILE(ctx, !data.done); + AIO_WAIT_WHILE_UNLOCKED(ctx, !data.done); }
From patchwork Tue Nov 8 21:19:24 2022
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 1701492
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Emanuele Giuseppe Esposito , "Michael S. Tsirkin" , qemu-block@nongnu.org, Kevin Wolf , Hanna Reitz , Paolo Bonzini , Fam Zheng , Stefan Hajnoczi 
Subject: [PATCH 2/8] block-backend: enable_write_cache should be atomic
Date: Tue, 8 Nov 2022 16:19:24 -0500
Message-Id: <20221108211930.876142-3-stefanha@redhat.com>
In-Reply-To: <20221108211930.876142-1-stefanha@redhat.com>
References: <20221108211930.876142-1-stefanha@redhat.com>

From: Emanuele Giuseppe Esposito

enable_write_cache is read from IO_CODE functions and written with the BQL
held, so making it atomic should be enough. Also remove the AioContext lock
that was sporadically taken around the write.

Signed-off-by: Emanuele Giuseppe Esposito 
Reviewed-by: Stefan Hajnoczi 
Signed-off-by: Stefan Hajnoczi 
Message-Id: <20220609143727.1151816-3-eesposit@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito 
--- block/block-backend.c | 6 +++--- hw/block/virtio-blk.c | 4 ---- 2 files changed, 3 insertions(+), 7 deletions(-) diff --git a/block/block-backend.c b/block/block-backend.c index c0c7d56c8d..949418cad4 100644 --- a/block/block-backend.c +++ b/block/block-backend.c @@ -60,7 +60,7 @@ struct BlockBackend { * can be used to restore those options in the new BDS on insert) */ BlockBackendRootState root_state; - bool enable_write_cache; + bool enable_write_cache; /* Atomic */ /* I/O stats (display with "info blockstats"). 
*/ BlockAcctStats stats; @@ -1939,13 +1939,13 @@ bool blk_is_sg(BlockBackend *blk) bool blk_enable_write_cache(BlockBackend *blk) { IO_CODE(); - return blk->enable_write_cache; + return qatomic_read(&blk->enable_write_cache); } void blk_set_enable_write_cache(BlockBackend *blk, bool wce) { IO_CODE(); - blk->enable_write_cache = wce; + qatomic_set(&blk->enable_write_cache, wce); } void blk_activate(BlockBackend *blk, Error **errp) diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index cdc6fd5979..96d00103a4 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -961,9 +961,7 @@ static void virtio_blk_set_config(VirtIODevice *vdev, const uint8_t *config) memcpy(&blkcfg, config, s->config_size); - aio_context_acquire(blk_get_aio_context(s->blk)); blk_set_enable_write_cache(s->blk, blkcfg.wce != 0); - aio_context_release(blk_get_aio_context(s->blk)); } static uint64_t virtio_blk_get_features(VirtIODevice *vdev, uint64_t features, @@ -1031,11 +1029,9 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status) * s->blk would erroneously be placed in writethrough mode. */ if (!virtio_vdev_has_feature(vdev, VIRTIO_BLK_F_CONFIG_WCE)) { - aio_context_acquire(blk_get_aio_context(s->blk)); blk_set_enable_write_cache(s->blk, virtio_vdev_has_feature(vdev, VIRTIO_BLK_F_WCE)); - aio_context_release(blk_get_aio_context(s->blk)); } } From patchwork Tue Nov 8 21:19:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Hajnoczi X-Patchwork-Id: 1701491 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=) Authentication-Results: legolas.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=ccxDWa7p; dkim-atps=neutral Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4N6LdV1Mxfz23lT for ; Wed, 9 Nov 2022 08:21:49 +1100 (AEDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1osW0n-0005Ae-5R; Tue, 08 Nov 2022 16:20:01 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1osW0k-00055Q-Ga for qemu-devel@nongnu.org; Tue, 08 Nov 2022 16:19:58 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1osW0i-0008Lh-FA for qemu-devel@nongnu.org; Tue, 08 Nov 2022 16:19:58 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1667942395; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MXXd9gV1zeE8VURJJ/WkGjI4zaICParTIrkVpyMNxWw=; b=ccxDWa7pkHnpmd0G7bwLrMAVG82Pjx/LYapbkm5L9ZXLhCUIpt5vNE70O+zXOCP76s+/FD mEcljcuJuzJx6vDkAUot2miOEYVCkNcqyP0liKCbhDRAqpBGgnzpzbY9Tjg4jRfjyCuIzK 
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Emanuele Giuseppe Esposito , "Michael S. Tsirkin" , qemu-block@nongnu.org, Kevin Wolf , Hanna Reitz , Paolo Bonzini , Fam Zheng , Stefan Hajnoczi 
Subject: [PATCH 3/8] virtio: categorize callbacks in GS
Date: Tue, 8 Nov 2022 16:19:25 -0500
Message-Id: <20221108211930.876142-4-stefanha@redhat.com>
In-Reply-To: <20221108211930.876142-1-stefanha@redhat.com>
References: <20221108211930.876142-1-stefanha@redhat.com>

From: Emanuele Giuseppe Esposito

All the callbacks below always run in the main loop. The callbacks are the
following:

- start/stop_ioeventfd: these are the callbacks where
  blk_set_aio_context(iothread) is done, so they are called in the main loop.

- save and load: called during migration, when the VM is stopped from the
  main loop.

- reset: before calling this callback, stop_ioeventfd is invoked, so it can
  only run in the main loop.

- set_status: going through all the callers we can see it is called from a
  MemoryRegionOps callback, which always runs in a vcpu thread and holds the
  BQL.

- realize: the iothread is not even created yet.

Signed-off-by: Emanuele Giuseppe Esposito 
Reviewed-by: Stefan Hajnoczi 
Acked-by: Michael S. 
Tsirkin Signed-off-by: Stefan Hajnoczi Message-Id: <20220609143727.1151816-5-eesposit@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito --- hw/block/virtio-blk.c | 2 ++ hw/virtio/virtio-bus.c | 5 +++++ hw/virtio/virtio-pci.c | 2 ++ hw/virtio/virtio.c | 8 ++++++++ 4 files changed, 17 insertions(+) diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 96d00103a4..96bc11d2fe 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -1005,6 +1005,8 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status) { VirtIOBlock *s = VIRTIO_BLK(vdev); + GLOBAL_STATE_CODE(); + if (!(status & (VIRTIO_CONFIG_S_DRIVER | VIRTIO_CONFIG_S_DRIVER_OK))) { assert(!s->dataplane_started); } diff --git a/hw/virtio/virtio-bus.c b/hw/virtio/virtio-bus.c index 896feb37a1..74cdf4bd27 100644 --- a/hw/virtio/virtio-bus.c +++ b/hw/virtio/virtio-bus.c @@ -23,6 +23,7 @@ */ #include "qemu/osdep.h" +#include "qemu/main-loop.h" #include "qemu/error-report.h" #include "qemu/module.h" #include "qapi/error.h" @@ -224,6 +225,8 @@ int virtio_bus_start_ioeventfd(VirtioBusState *bus) VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev); int r; + GLOBAL_STATE_CODE(); + if (!k->ioeventfd_assign || !k->ioeventfd_enabled(proxy)) { return -ENOSYS; } @@ -248,6 +251,8 @@ void virtio_bus_stop_ioeventfd(VirtioBusState *bus) VirtIODevice *vdev; VirtioDeviceClass *vdc; + GLOBAL_STATE_CODE(); + if (!bus->ioeventfd_started) { return; } diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c index a1c9dfa7bb..4f9a94f61b 100644 --- a/hw/virtio/virtio-pci.c +++ b/hw/virtio/virtio-pci.c @@ -313,6 +313,8 @@ static void virtio_ioport_write(void *opaque, uint32_t addr, uint32_t val) uint16_t vector; hwaddr pa; + GLOBAL_STATE_CODE(); + switch (addr) { case VIRTIO_PCI_GUEST_FEATURES: /* Guest does not negotiate properly? We have to assume nothing. */ diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c index 9683b2e158..468e8f5ad0 100644 --- a/hw/virtio/virtio.c +++ b/hw/virtio/virtio.c @@ -2422,6 +2422,8 @@ int virtio_set_status(VirtIODevice *vdev, uint8_t val) VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev); trace_virtio_set_status(vdev, val); + GLOBAL_STATE_CODE(); + if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) { if (!(vdev->status & VIRTIO_CONFIG_S_FEATURES_OK) && val & VIRTIO_CONFIG_S_FEATURES_OK) { @@ -2515,6 +2517,8 @@ void virtio_reset(void *opaque) VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev); int i; + GLOBAL_STATE_CODE(); + virtio_set_status(vdev, 0); if (current_cpu) { /* Guest initiated reset */ @@ -3357,6 +3361,8 @@ int virtio_save(VirtIODevice *vdev, QEMUFile *f) uint32_t guest_features_lo = (vdev->guest_features & 0xffffffff); int i; + GLOBAL_STATE_CODE(); + if (k->save_config) { k->save_config(qbus->parent, f); } @@ -3508,6 +3514,8 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id) VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus); VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev); + GLOBAL_STATE_CODE(); + /* * We poison the endianness to ensure it does not get used before * subsections have been loaded. 
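The GLOBAL_STATE_CODE() markers added above act as runtime checks that a callback is only entered from the main loop thread while it holds the BQL. Below is a minimal standalone sketch of the same idea, written with plain pthreads rather than QEMU's APIs; all names (GLOBAL_STATE_CODE_MODEL, start_ioeventfd_model) are illustrative, not QEMU code.

/*
 * Standalone model of a GLOBAL_STATE_CODE()-style assertion: record the
 * "main loop" thread at startup and assert that global-state functions
 * only ever run on that thread.
 */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

static pthread_t main_thread;

#define GLOBAL_STATE_CODE_MODEL() \
    assert(pthread_equal(pthread_self(), main_thread))

static void start_ioeventfd_model(void)
{
    GLOBAL_STATE_CODE_MODEL();   /* must be called from the main thread */
    printf("ioeventfd started\n");
}

static void *iothread_fn(void *opaque)
{
    (void)opaque;
    /* Calling start_ioeventfd_model() here would trip the assertion. */
    return NULL;
}

int main(void)
{
    pthread_t t;

    main_thread = pthread_self();
    start_ioeventfd_model();     /* OK: running in the main thread */

    pthread_create(&t, NULL, iothread_fn, NULL);
    pthread_join(t, NULL);
    return 0;
}

The assertion costs nothing in practice but documents (and enforces) the thread-affinity rules that the rest of the series relies on when it drops locks.
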
From patchwork Tue Nov 8 21:19:26 2022
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 1701495
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Emanuele Giuseppe Esposito , "Michael S. 
Tsirkin" , qemu-block@nongnu.org, Kevin Wolf , Hanna Reitz , Paolo Bonzini , Fam Zheng , Stefan Hajnoczi Subject: [PATCH 4/8] virtio-blk: mark GLOBAL_STATE_CODE functions Date: Tue, 8 Nov 2022 16:19:26 -0500 Message-Id: <20221108211930.876142-5-stefanha@redhat.com> In-Reply-To: <20221108211930.876142-1-stefanha@redhat.com> References: <20221108211930.876142-1-stefanha@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3 Received-SPF: pass client-ip=170.10.129.124; envelope-from=stefanha@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=-0.001, SPF_PASS=-0.001, T_SPF_HELO_TEMPERROR=0.01 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org From: Emanuele Giuseppe Esposito Just as done in the block API, mark functions in virtio-blk that are always called in the main loop with BQL held. We know such functions are GS because they all are callbacks from virtio.c API that has already classified them as GS. Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi Signed-off-by: Stefan Hajnoczi Message-Id: <20220609143727.1151816-6-eesposit@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito --- hw/block/dataplane/virtio-blk.c | 4 ++++ hw/block/virtio-blk.c | 27 +++++++++++++++++++++++++++ 2 files changed, 31 insertions(+) diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c index 975f5ca8c4..728c9cd86c 100644 --- a/hw/block/dataplane/virtio-blk.c +++ b/hw/block/dataplane/virtio-blk.c @@ -89,6 +89,8 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf, BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev))); VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus); + GLOBAL_STATE_CODE(); + *dataplane = NULL; if (conf->iothread) { @@ -140,6 +142,8 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s) { VirtIOBlock *vblk; + GLOBAL_STATE_CODE(); + if (!s) { return; } diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 96bc11d2fe..02b213a140 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -845,11 +845,17 @@ static void virtio_blk_dma_restart_bh(void *opaque) aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } +/* + * Only called when VM is started or stopped in cpus.c. + * No iothread runs in parallel + */ static void virtio_blk_dma_restart_cb(void *opaque, bool running, RunState state) { VirtIOBlock *s = opaque; + GLOBAL_STATE_CODE(); + if (!running) { return; } @@ -867,8 +873,14 @@ static void virtio_blk_reset(VirtIODevice *vdev) AioContext *ctx; VirtIOBlockReq *req; + GLOBAL_STATE_CODE(); + ctx = blk_get_aio_context(s->blk); aio_context_acquire(ctx); + /* + * This drain together with ->stop_ioeventfd() in virtio_pci_reset() + * stops all Iothreads. + */ blk_drain(s->blk); /* We drop queued requests after blk_drain() because blk_drain() itself can @@ -1037,11 +1049,17 @@ static void virtio_blk_set_status(VirtIODevice *vdev, uint8_t status) } } +/* + * VM is stopped while doing migration, so iothread has + * no requests to process. 
+ */ static void virtio_blk_save_device(VirtIODevice *vdev, QEMUFile *f) { VirtIOBlock *s = VIRTIO_BLK(vdev); VirtIOBlockReq *req = s->rq; + GLOBAL_STATE_CODE(); + while (req) { qemu_put_sbyte(f, 1); @@ -1055,11 +1073,17 @@ static void virtio_blk_save_device(VirtIODevice *vdev, QEMUFile *f) qemu_put_sbyte(f, 0); } +/* + * VM is stopped while doing migration, so iothread has + * no requests to process. + */ static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f, int version_id) { VirtIOBlock *s = VIRTIO_BLK(vdev); + GLOBAL_STATE_CODE(); + while (qemu_get_sbyte(f)) { unsigned nvqs = s->conf.num_queues; unsigned vq_idx = 0; @@ -1108,6 +1132,7 @@ static const BlockDevOps virtio_block_ops = { .resize_cb = virtio_blk_resize, }; +/* Iothread is not yet created */ static void virtio_blk_device_realize(DeviceState *dev, Error **errp) { VirtIODevice *vdev = VIRTIO_DEVICE(dev); @@ -1116,6 +1141,8 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp) Error *err = NULL; unsigned i; + GLOBAL_STATE_CODE(); + if (!conf->conf.blk) { error_setg(errp, "drive property not set"); return;
From patchwork Tue Nov 8 21:19:27 2022
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 1701497
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Emanuele Giuseppe Esposito , "Michael S. Tsirkin" , qemu-block@nongnu.org, Kevin Wolf , Hanna Reitz , Paolo Bonzini , Fam Zheng , Stefan Hajnoczi 
Subject: [PATCH 5/8] virtio-blk: mark IO_CODE functions
Date: Tue, 8 Nov 2022 16:19:27 -0500
Message-Id: <20221108211930.876142-6-stefanha@redhat.com>
In-Reply-To: <20221108211930.876142-1-stefanha@redhat.com>
References: <20221108211930.876142-1-stefanha@redhat.com>

From: Emanuele Giuseppe Esposito

Just as done in the block API, mark functions in virtio-blk that are also
called from iothread(s). We know such functions are IO because many of them
are blk_* callbacks, which always run in the device iothread, and the
remaining ones are propagated from the leaf IO functions (if a function
calls an IO_CODE function, it is itself categorized as IO_CODE too).
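Functions marked IO_CODE can run in the device iothread concurrently with the main loop, which is why shared fields such as enable_write_cache were made atomic in patch 2. The following is a standalone sketch of that reader/writer pattern using C11 atomics and pthreads instead of QEMU's qatomic helpers; the model_* names are made up for illustration and are not QEMU APIs.

/*
 * Standalone model of the enable_write_cache pattern: a flag written by
 * the main thread (BQL held in the real code) and read lock-free from an
 * I/O thread, so both accesses go through atomics.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool enable_write_cache;

/* IO_CODE-style reader: may run in the device's I/O thread */
static bool model_blk_enable_write_cache(void)
{
    return atomic_load(&enable_write_cache);
}

/* Writer: runs in the main thread */
static void model_blk_set_enable_write_cache(bool wce)
{
    atomic_store(&enable_write_cache, wce);
}

static void *iothread_fn(void *opaque)
{
    (void)opaque;
    for (int i = 0; i < 1000; i++) {
        (void)model_blk_enable_write_cache();   /* no lock needed */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, iothread_fn, NULL);
    model_blk_set_enable_write_cache(true);
    pthread_join(t, NULL);
    printf("wce=%d\n", model_blk_enable_write_cache());
    return 0;
}
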
Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi Signed-off-by: Stefan Hajnoczi Message-Id: <20220609143727.1151816-7-eesposit@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito --- hw/block/dataplane/virtio-blk.c | 4 +++ hw/block/virtio-blk.c | 45 ++++++++++++++++++++++++++++----- 2 files changed, 43 insertions(+), 6 deletions(-) diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c index 728c9cd86c..3593ac0e7b 100644 --- a/hw/block/dataplane/virtio-blk.c +++ b/hw/block/dataplane/virtio-blk.c @@ -63,6 +63,8 @@ static void notify_guest_bh(void *opaque) unsigned long bitmap[BITS_TO_LONGS(nvqs)]; unsigned j; + IO_CODE(); + memcpy(bitmap, s->batch_notify_vqs, sizeof(bitmap)); memset(s->batch_notify_vqs, 0, sizeof(bitmap)); @@ -288,6 +290,8 @@ static void virtio_blk_data_plane_stop_bh(void *opaque) VirtIOBlockDataPlane *s = opaque; unsigned i; + IO_CODE(); + for (i = 0; i < s->conf->num_queues; i++) { VirtQueue *vq = virtio_get_queue(s->vdev, i); diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 02b213a140..f8fcf25292 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -39,6 +39,8 @@ static void virtio_blk_init_request(VirtIOBlock *s, VirtQueue *vq, VirtIOBlockReq *req) { + IO_CODE(); + req->dev = s; req->vq = vq; req->qiov.size = 0; @@ -57,6 +59,8 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status) VirtIOBlock *s = req->dev; VirtIODevice *vdev = VIRTIO_DEVICE(s); + IO_CODE(); + trace_virtio_blk_req_complete(vdev, req, status); stb_p(&req->in->status, status); @@ -76,6 +80,8 @@ static int virtio_blk_handle_rw_error(VirtIOBlockReq *req, int error, VirtIOBlock *s = req->dev; BlockErrorAction action = blk_get_error_action(s->blk, is_read, error); + IO_CODE(); + if (action == BLOCK_ERROR_ACTION_STOP) { /* Break the link as the next request is going to be parsed from the * ring again. Otherwise we may end up doing a double completion! */ @@ -143,7 +149,9 @@ static void virtio_blk_flush_complete(void *opaque, int ret) VirtIOBlockReq *req = opaque; VirtIOBlock *s = req->dev; + IO_CODE(); aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); + if (ret) { if (virtio_blk_handle_rw_error(req, -ret, 0, true)) { goto out; @@ -165,7 +173,9 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret) bool is_write_zeroes = (virtio_ldl_p(VIRTIO_DEVICE(s), &req->out.type) & ~VIRTIO_BLK_T_BARRIER) == VIRTIO_BLK_T_WRITE_ZEROES; + IO_CODE(); aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); + if (ret) { if (virtio_blk_handle_rw_error(req, -ret, false, is_write_zeroes)) { goto out; @@ -198,6 +208,8 @@ static void virtio_blk_ioctl_complete(void *opaque, int status) struct virtio_scsi_inhdr *scsi; struct sg_io_hdr *hdr; + IO_CODE(); + scsi = (void *)req->elem.in_sg[req->elem.in_num - 2].iov_base; if (status) { @@ -239,6 +251,8 @@ static VirtIOBlockReq *virtio_blk_get_request(VirtIOBlock *s, VirtQueue *vq) { VirtIOBlockReq *req = virtqueue_pop(vq, sizeof(VirtIOBlockReq)); + IO_CODE(); + if (req) { virtio_blk_init_request(s, vq, req); } @@ -259,6 +273,8 @@ static int virtio_blk_handle_scsi_req(VirtIOBlockReq *req) BlockAIOCB *acb; #endif + IO_CODE(); + /* * We require at least one output segment each for the virtio_blk_outhdr * and the SCSI command block. 
@@ -357,6 +373,7 @@ fail: static void virtio_blk_handle_scsi(VirtIOBlockReq *req) { int status; + IO_CODE(); status = virtio_blk_handle_scsi_req(req); if (status != -EINPROGRESS) { @@ -374,6 +391,8 @@ static inline void submit_requests(VirtIOBlock *s, MultiReqBuffer *mrb, bool is_write = mrb->is_write; BdrvRequestFlags flags = 0; + IO_CODE(); + if (num_reqs > 1) { int i; struct iovec *tmp_iov = qiov->iov; @@ -423,6 +442,8 @@ static int multireq_compare(const void *a, const void *b) const VirtIOBlockReq *req1 = *(VirtIOBlockReq **)a, *req2 = *(VirtIOBlockReq **)b; + IO_CODE(); + /* * Note that we can't simply subtract sector_num1 from sector_num2 * here as that could overflow the return value. @@ -442,6 +463,8 @@ static void virtio_blk_submit_multireq(VirtIOBlock *s, MultiReqBuffer *mrb) uint32_t max_transfer; int64_t sector_num = 0; + IO_CODE(); + if (mrb->num_reqs == 1) { submit_requests(s, mrb, 0, 1, -1); mrb->num_reqs = 0; @@ -491,6 +514,8 @@ static void virtio_blk_handle_flush(VirtIOBlockReq *req, MultiReqBuffer *mrb) { VirtIOBlock *s = req->dev; + IO_CODE(); + block_acct_start(blk_get_stats(s->blk), &req->acct, 0, BLOCK_ACCT_FLUSH); @@ -509,6 +534,8 @@ static bool virtio_blk_sect_range_ok(VirtIOBlock *dev, uint64_t nb_sectors = size >> BDRV_SECTOR_BITS; uint64_t total_sectors; + IO_CODE(); + if (nb_sectors > BDRV_REQUEST_MAX_SECTORS) { return false; } @@ -535,6 +562,8 @@ static uint8_t virtio_blk_handle_discard_write_zeroes(VirtIOBlockReq *req, uint8_t err_status; int bytes; + IO_CODE(); + sector = virtio_ldq_p(vdev, &dwz_hdr->sector); num_sectors = virtio_ldl_p(vdev, &dwz_hdr->num_sectors); flags = virtio_ldl_p(vdev, &dwz_hdr->flags); @@ -613,6 +642,8 @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb) VirtIOBlock *s = req->dev; VirtIODevice *vdev = VIRTIO_DEVICE(s); + IO_CODE(); + if (req->elem.out_num < 1 || req->elem.in_num < 1) { virtio_error(vdev, "virtio-blk missing headers"); return -1; @@ -763,6 +794,8 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq) MultiReqBuffer mrb = {}; bool suppress_notifications = virtio_queue_get_notification(vq); + IO_CODE(); + aio_context_acquire(blk_get_aio_context(s->blk)); blk_io_plug(s->blk); @@ -796,6 +829,8 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq) { VirtIOBlock *s = (VirtIOBlock *)vdev; + IO_CODE(); + if (s->dataplane && !s->dataplane_started) { /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start * dataplane here instead of waiting for .set_status(). @@ -846,8 +881,9 @@ static void virtio_blk_dma_restart_bh(void *opaque) } /* - * Only called when VM is started or stopped in cpus.c. - * No iothread runs in parallel + * Only called when VM is started or stopped in cpus.c. When running is true + * ->start_ioeventfd() has already been called. When running is false + * ->stop_ioeventfd() has not yet been called. */ static void virtio_blk_dma_restart_cb(void *opaque, bool running, RunState state) @@ -867,6 +903,7 @@ static void virtio_blk_dma_restart_cb(void *opaque, bool running, virtio_blk_dma_restart_bh, s); } +/* ->stop_ioeventfd() has already been called by virtio_bus_reset() */ static void virtio_blk_reset(VirtIODevice *vdev) { VirtIOBlock *s = VIRTIO_BLK(vdev); @@ -877,10 +914,6 @@ static void virtio_blk_reset(VirtIODevice *vdev) ctx = blk_get_aio_context(s->blk); aio_context_acquire(ctx); - /* - * This drain together with ->stop_ioeventfd() in virtio_pci_reset() - * stops all Iothreads. 
- */ blk_drain(s->blk); /* We drop queued requests after blk_drain() because blk_drain() itself can
From patchwork Tue Nov 8 21:19:28 2022
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 1701493
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Emanuele Giuseppe Esposito , "Michael S. 
Tsirkin" , qemu-block@nongnu.org, Kevin Wolf , Hanna Reitz , Paolo Bonzini , Fam Zheng , Stefan Hajnoczi Subject: [PATCH 6/8] virtio-blk: remove unnecessary AioContext lock from function already safe Date: Tue, 8 Nov 2022 16:19:28 -0500 Message-Id: <20221108211930.876142-7-stefanha@redhat.com> In-Reply-To: <20221108211930.876142-1-stefanha@redhat.com> References: <20221108211930.876142-1-stefanha@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8 Received-SPF: pass client-ip=170.10.133.124; envelope-from=stefanha@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org From: Emanuele Giuseppe Esposito AioContext lock was introduced in b9e413dd375 and in this instance it is used to protect these 3 functions: - virtio_blk_handle_rw_error - virtio_blk_req_complete - block_acct_done Now that all three of the above functions are protected with their own locks, we can get rid of the AioContext lock. Signed-off-by: Emanuele Giuseppe Esposito Reviewed-by: Stefan Hajnoczi Signed-off-by: Stefan Hajnoczi Message-Id: <20220609143727.1151816-9-eesposit@redhat.com> Reviewed-by: Emanuele Giuseppe Esposito --- hw/block/virtio-blk.c | 19 ++----------------- 1 file changed, 2 insertions(+), 17 deletions(-) diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index f8fcf25292..faea045178 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -108,7 +108,6 @@ static void virtio_blk_rw_complete(void *opaque, int ret) IO_CODE(); - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); while (next) { VirtIOBlockReq *req = next; next = req->mr_next; @@ -141,7 +140,6 @@ static void virtio_blk_rw_complete(void *opaque, int ret) block_acct_done(blk_get_stats(s->blk), &req->acct); virtio_blk_free_request(req); } - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } static void virtio_blk_flush_complete(void *opaque, int ret) @@ -150,20 +148,16 @@ static void virtio_blk_flush_complete(void *opaque, int ret) VirtIOBlock *s = req->dev; IO_CODE(); - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); if (ret) { if (virtio_blk_handle_rw_error(req, -ret, 0, true)) { - goto out; + return; } } virtio_blk_req_complete(req, VIRTIO_BLK_S_OK); block_acct_done(blk_get_stats(s->blk), &req->acct); virtio_blk_free_request(req); - -out: - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret) @@ -174,11 +168,10 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret) ~VIRTIO_BLK_T_BARRIER) == VIRTIO_BLK_T_WRITE_ZEROES; IO_CODE(); - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); if (ret) { if (virtio_blk_handle_rw_error(req, -ret, false, is_write_zeroes)) { - goto out; + return; } } @@ -187,9 +180,6 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret) block_acct_done(blk_get_stats(s->blk), &req->acct); } virtio_blk_free_request(req); - -out: - 
aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } #ifdef __linux__ @@ -238,10 +228,8 @@ static void virtio_blk_ioctl_complete(void *opaque, int status) virtio_stl_p(vdev, &scsi->data_len, hdr->dxfer_len); out: - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); virtio_blk_req_complete(req, status); virtio_blk_free_request(req); - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); g_free(ioctl_req); } @@ -852,7 +840,6 @@ static void virtio_blk_dma_restart_bh(void *opaque) s->rq = NULL; - aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); while (req) { VirtIOBlockReq *next = req->next; if (virtio_blk_handle_request(req, &mrb)) { @@ -876,8 +863,6 @@ static void virtio_blk_dma_restart_bh(void *opaque) /* Paired with inc in virtio_blk_dma_restart_cb() */ blk_dec_in_flight(s->conf.conf.blk); - - aio_context_release(blk_get_aio_context(s->conf.conf.blk)); } /* From patchwork Tue Nov 8 21:19:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Hajnoczi X-Patchwork-Id: 1701494 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=) Authentication-Results: legolas.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=JGeRmJTK; dkim-atps=neutral Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4N6Ldd1pB7z23lT for ; Wed, 9 Nov 2022 08:21:57 +1100 (AEDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1osW15-0005Kp-W1; Tue, 08 Nov 2022 16:20:20 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1osW0t-0005FR-9u for qemu-devel@nongnu.org; Tue, 08 Nov 2022 16:20:10 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1osW0q-0008Vd-MZ for qemu-devel@nongnu.org; Tue, 08 Nov 2022 16:20:07 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1667942403; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/6geCx/2iv4zqgllNBXbXJWKrMBVs+D3OqKdx4CS+ko=; b=JGeRmJTKsnFWijP/8eyADqNWLI6f8vbtz2PMa0N/vOX7a4/q2KCucXdtrRJxsEFiTMdG0u LDB/Grqjw9IpbvRL6tnER5fWqdBvpbxjVYyOn5sH2cpX/6G9H2De8FCwYylbwy27PZQ2pY fwW/VmGzNgBCzeVdbWz/a+JDJ5CGK7Q= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-444-6i_XSGkCNb2i6OD3TNl4aQ-1; Tue, 08 Nov 2022 16:20:01 -0500 X-MC-Unique: 6i_XSGkCNb2i6OD3TNl4aQ-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Emanuele Giuseppe Esposito , "Michael S. Tsirkin" , qemu-block@nongnu.org, Kevin Wolf , Hanna Reitz , Paolo Bonzini , Fam Zheng , Stefan Hajnoczi 
Subject: [PATCH 7/8] virtio-blk: don't acquire AioContext in virtio_blk_handle_vq()
Date: Tue, 8 Nov 2022 16:19:29 -0500
Message-Id: <20221108211930.876142-8-stefanha@redhat.com>
In-Reply-To: <20221108211930.876142-1-stefanha@redhat.com>
References: <20221108211930.876142-1-stefanha@redhat.com>

There is no need to acquire AioContext in virtio_blk_handle_vq() because no
APIs used in the function require it and nothing else in the virtio-blk code
requires mutual exclusion anymore.
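virtio_blk_handle_vq() already runs entirely within one AioContext, and the cross-thread hand-offs that remain in the series go through one-shot BHs waited on without the lock (patch 1's aio_wait_bh_oneshot()/AIO_WAIT_WHILE_UNLOCKED() change). A rough standalone model of that hand-off follows, using pthreads and C11 atomics instead of QEMU's event loop; all names are illustrative, not QEMU APIs.

/*
 * Standalone model of the one-shot BH pattern: the main thread hands a
 * callback to an I/O thread and waits on a completion flag without
 * holding any I/O-thread lock.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    void (*cb)(void *opaque);
    void *opaque;
    atomic_bool pending;
} OneShotBH;

typedef struct {
    atomic_bool done;
} WaitData;

static OneShotBH bh;

/* I/O thread: poll for a scheduled callback and run it once. */
static void *iothread_fn(void *arg)
{
    (void)arg;
    for (;;) {
        if (atomic_load(&bh.pending)) {
            bh.cb(bh.opaque);
            atomic_store(&bh.pending, false);
            return NULL;
        }
    }
}

static void stop_dataplane_cb(void *opaque)
{
    WaitData *data = opaque;

    printf("dataplane stopped in I/O thread\n");
    atomic_store(&data->done, true);
}

int main(void)
{
    pthread_t t;
    WaitData data = { .done = false };

    pthread_create(&t, NULL, iothread_fn, NULL);

    /* "Schedule" the one-shot callback in the I/O thread... */
    bh.cb = stop_dataplane_cb;
    bh.opaque = &data;
    atomic_store(&bh.pending, true);

    /* ...and wait for completion without taking any I/O-thread lock. */
    while (!atomic_load(&data.done)) {
        /* the real AIO_WAIT_WHILE_UNLOCKED() also services main loop events */
    }

    pthread_join(t, NULL);
    return 0;
}
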
Signed-off-by: Stefan Hajnoczi 
--- hw/block/virtio-blk.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index faea045178..771d87cfbe 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -784,7 +784,6 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq) IO_CODE(); - aio_context_acquire(blk_get_aio_context(s->blk)); blk_io_plug(s->blk); do { @@ -810,7 +809,6 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq) } blk_io_unplug(s->blk); - aio_context_release(blk_get_aio_context(s->blk)); } static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
From patchwork Tue Nov 8 21:19:30 2022
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 1701496
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Emanuele Giuseppe Esposito , "Michael S. Tsirkin" , qemu-block@nongnu.org, Kevin Wolf , Hanna Reitz , Paolo Bonzini , Fam Zheng , Stefan Hajnoczi 
Subject: [PATCH 8/8] virtio-blk: minimize virtio_blk_reset() AioContext lock region
Date: Tue, 8 Nov 2022 16:19:30 -0500
Message-Id: <20221108211930.876142-9-stefanha@redhat.com>
In-Reply-To: <20221108211930.876142-1-stefanha@redhat.com>
References: <20221108211930.876142-1-stefanha@redhat.com>

blk_drain() needs the lock because it calls AIO_WAIT_WHILE(). The s->rq loop
doesn't need the lock because dataplane has been stopped when
virtio_blk_reset() is called.

Signed-off-by: Stefan Hajnoczi 
Reviewed-by: Emanuele Giuseppe Esposito 
--- hw/block/virtio-blk.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 771d87cfbe..0b411b3065 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -898,6 +898,7 @@ static void virtio_blk_reset(VirtIODevice *vdev) ctx = blk_get_aio_context(s->blk); aio_context_acquire(ctx); blk_drain(s->blk); + aio_context_release(ctx); /* We drop queued requests after blk_drain() because blk_drain() itself can * produce them. */ @@ -908,8 +909,6 @@ static void virtio_blk_reset(VirtIODevice *vdev) virtio_blk_free_request(req); - aio_context_release(ctx); - assert(!s->dataplane_started); blk_set_enable_write_cache(s->blk, s->original_wce); }
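The effect of this last patch is to shrink the locked region to just the blk_drain() call and to walk the s->rq list with no lock held, relying on dataplane having already been stopped. Here is a standalone sketch of that narrowing, with a plain mutex standing in for the AioContext lock; the names (drain_model, reset_model) are illustrative, not QEMU code.

/*
 * Standalone model of minimizing a lock region: hold the lock only around
 * the call that needs it and walk the queued-request list outside the
 * lock, since nothing else touches the list once the I/O thread is stopped.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct Req {
    struct Req *next;
} Req;

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static Req *rq;                  /* queued requests, private once stopped */

static void drain_model(void)
{
    /* stands in for blk_drain(), which still needs the context lock */
}

static void reset_model(void)
{
    Req *req;

    pthread_mutex_lock(&ctx_lock);
    drain_model();
    pthread_mutex_unlock(&ctx_lock);   /* lock region ends here now */

    /* No lock needed: the I/O thread has already been stopped. */
    while ((req = rq) != NULL) {
        rq = req->next;
        free(req);
    }
}

int main(void)
{
    /* queue two dummy requests, then "reset" the device */
    for (int i = 0; i < 2; i++) {
        Req *req = calloc(1, sizeof(*req));
        req->next = rq;
        rq = req;
    }
    reset_model();
    printf("reset done, queue empty: %s\n", rq == NULL ? "yes" : "no");
    return 0;
}
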