From patchwork Sat Aug 5 00:39:56 2023
X-Patchwork-Submitter: ~hyman
X-Patchwork-Id: 1817277
From: ~hyman
Date: Sat, 05 Aug 2023 08:39:56 +0800
Subject: [PATCH QEMU 3/3] vhost-user-blk-pci: introduce auto-num-queues property
Message-ID: <169122184935.7839.16786323109706183366-3@git.sr.ht>
In-Reply-To: <169122184935.7839.16786323109706183366-0@git.sr.ht>
X-Mailer: git.sr.ht
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Fam Zheng, "Michael S. Tsirkin", Raphael Norwitz,
 Stefan Hajnoczi, Kevin Wolf, Hanna Reitz
Reply-To: ~hyman

From: Hyman Huang(黄勇)

Commit a4eef0711b ("vhost-user-blk-pci: default num_queues to -smp N")
implemented sizing the number of vhost-user-blk-pci request virtqueues
to match the number of vCPUs automatically, which improves IO
performance remarkably.

To enable this feature for existing VMs, a cloud platform may live
migrate VMs from a source hypervisor, where num_queues defaults to 1,
to a destination hypervisor, where num_queues defaults to -smp N.
Different num-queues values for a vhost-user-blk-pci device on the
source and destination sides make the migration fail, because the
vmstate cannot be loaded correctly on the destination side.

To provide a smooth upgrade path, introduce the auto-num-queues
property for the vhost-user-blk-pci device. This allows upper-layer
applications, e.g. libvirt, to recognize the hypervisor's capability
of allocating virtqueues automatically by probing the
vhost-user-blk-pci.auto-num-queues property.
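For illustration, a management application could probe for the new
property with the existing QMP device-list-properties command; this is
only a sketch, with the returned property list abbreviated:

  -> { "execute": "device-list-properties",
       "arguments": { "typename": "vhost-user-blk-pci" } }
  <- { "return": [ { "name": "auto-num-queues", "type": "bool" },
                   ... ] }

If "auto-num-queues" shows up in the returned list, the hypervisor is
capable of sizing the request virtqueues automatically.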
Based on this, upper-layer applications can allocate the same
num-queues value on the destination side to avoid migration failure.

Signed-off-by: Hyman Huang(黄勇)
---
 hw/block/vhost-user-blk.c          | 1 +
 hw/virtio/vhost-user-blk-pci.c     | 9 ++++++++-
 include/hw/virtio/vhost-user-blk.h | 5 +++++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index eecf3f7a81..34e23b1727 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -566,6 +566,7 @@ static const VMStateDescription vmstate_vhost_user_blk = {
 
 static Property vhost_user_blk_properties[] = {
     DEFINE_PROP_CHR("chardev", VHostUserBlk, chardev),
+    DEFINE_PROP_BOOL("auto-num-queues", VHostUserBlk, auto_num_queues, true),
     DEFINE_PROP_UINT16("num-queues", VHostUserBlk, num_queues,
                        VHOST_USER_BLK_AUTO_NUM_QUEUES),
     DEFINE_PROP_UINT32("queue-size", VHostUserBlk, queue_size, 128),
diff --git a/hw/virtio/vhost-user-blk-pci.c b/hw/virtio/vhost-user-blk-pci.c
index eef8641a98..f7776e928a 100644
--- a/hw/virtio/vhost-user-blk-pci.c
+++ b/hw/virtio/vhost-user-blk-pci.c
@@ -56,7 +56,14 @@ static void vhost_user_blk_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
     DeviceState *vdev = DEVICE(&dev->vdev);
 
     if (dev->vdev.num_queues == VHOST_USER_BLK_AUTO_NUM_QUEUES) {
-        dev->vdev.num_queues = virtio_pci_optimal_num_queues(0);
+        /*
+         * Allocate virtqueues automatically only if the auto_num_queues
+         * property is set to true.
+         */
+        if (dev->vdev.auto_num_queues)
+            dev->vdev.num_queues = virtio_pci_optimal_num_queues(0);
+        else
+            dev->vdev.num_queues = 1;
     }
 
     if (vpci_dev->nvectors == DEV_NVECTORS_UNSPECIFIED) {
diff --git a/include/hw/virtio/vhost-user-blk.h b/include/hw/virtio/vhost-user-blk.h
index ea085ee1ed..e6f0515bc6 100644
--- a/include/hw/virtio/vhost-user-blk.h
+++ b/include/hw/virtio/vhost-user-blk.h
@@ -50,6 +50,11 @@ struct VHostUserBlk {
     bool connected;
     /* vhost_user_blk_start/vhost_user_blk_stop */
     bool started_vu;
+    /*
+     * Set to true if the virtqueues are allowed to be allocated to
+     * match the number of virtual CPUs automatically.
+     */
+    bool auto_num_queues;
 };
 
 #endif
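As a hypothetical usage sketch (the chardev id and socket path below
are made up), a destination hypervisor could pin the old single-queue
behaviour for a VM migrated from a host that predates automatic queue
sizing:

  qemu-system-x86_64 ... \
      -chardev socket,id=vub0,path=/tmp/vhost-user-blk.sock \
      -device vhost-user-blk-pci,chardev=vub0,auto-num-queues=off

With auto-num-queues=off and num-queues left unspecified, the device
falls back to a single request virtqueue, matching the old default;
with the default auto-num-queues=on, the queue count is sized to the
number of vCPUs.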