From patchwork Fri May 22 05:58:04 2015
From: Ouyang Changchun <changchun.ouyang@intel.com>
To: qemu-devel@nongnu.org, mst@redhat.com
Cc: snabb-devel@googlegroups.com, n.nikolaev@virtualopensystems.com,
    luke@snabb.co, thomas.long@intel.com, changchun.ouyang@intel.com
Date: Fri, 22 May 2015 13:58:04 +0800
Message-Id: <1432274284-11814-1-git-send-email-changchun.ouyang@intel.com>
Subject: [Qemu-devel] [PATCH v4] vhost-user: add multi queue support

Based on a patch by Nikolay Nikolaev:

Vhost-user will implement multi queue support in a similar way to what
vhost already has: a separate thread for each queue. To enable the
multi queue functionality, a new command line parameter "queues" is
introduced for the vhost-user netdev.
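
For illustration only (not part of the patch): with this change, a guest
with two queue pairs could be started roughly as below. The socket path
and the chardev/netdev ids are hypothetical, and the mq/vectors options
on virtio-net-pci follow the usual multiqueue convention
(vectors = 2 * queues + 2), assuming an MSI-X capable guest:

    qemu-system-x86_64 ... \
        -chardev socket,id=char0,path=/tmp/vhost-user.sock \
        -netdev type=vhost-user,id=net0,chardev=char0,queues=2 \
        -device virtio-net-pci,netdev=net0,mq=on,vectors=6
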
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
---
Changes since v3:
 - fix one typo and wrap one long line

Changes since v2:
 - fix vq index issue for set_vring_call
   In the VHOST_SET_VRING_CALL case, the vq_index was not initialized
   before it was used, so it could hold a random value. That random value
   was passed down to vhost, which used it as an array index, leading to
   a crash.
 - fix the typo in the doc and description
 - address vq index for reset_owner

Changes since v1:
 - use s->nc.info_str when bringing up/down the backend

 docs/specs/vhost-user.txt |  5 +++++
 hw/net/vhost_net.c        |  3 ++-
 hw/virtio/vhost-user.c    | 11 ++++++++++-
 net/vhost-user.c          | 37 ++++++++++++++++++++++++-------------
 qapi-schema.json          |  6 +++++-
 qemu-options.hx           |  5 +++--
 6 files changed, 49 insertions(+), 18 deletions(-)

diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index 650bb18..2c8e934 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -127,6 +127,11 @@ in the ancillary data:
 If Master is unable to send the full message or receives a wrong reply it will
 close the connection. An optional reconnection mechanism can be implemented.
 
+Multi queue support
+-------------------
+The protocol supports multiple queues by setting all index fields in the sent
+messages to a properly calculated value.
+
 Message types
 -------------
 
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 47f8b89..426b23e 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -157,6 +157,7 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
 
     net->dev.nvqs = 2;
     net->dev.vqs = net->vqs;
+    net->dev.vq_index = net->nc->queue_index;
 
     r = vhost_dev_init(&net->dev, options->opaque,
                        options->backend_type, options->force);
@@ -267,7 +268,7 @@ static void vhost_net_stop_one(struct vhost_net *net,
         for (file.index = 0; file.index < net->dev.nvqs; ++file.index) {
             const VhostOps *vhost_ops = net->dev.vhost_ops;
             int r = vhost_ops->vhost_call(&net->dev, VHOST_RESET_OWNER,
-                                          NULL);
+                                          &file);
             assert(r >= 0);
         }
     }
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index e7ab829..d6f2163 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -210,7 +215,12 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
         break;
 
     case VHOST_SET_OWNER:
+        break;
+
     case VHOST_RESET_OWNER:
+        memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
+        msg.state.index += dev->vq_index;
+        msg.size = sizeof(m.state);
         break;
 
     case VHOST_SET_MEM_TABLE:
@@ -253,17 +258,20 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
     case VHOST_SET_VRING_NUM:
     case VHOST_SET_VRING_BASE:
         memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
+        msg.state.index += dev->vq_index;
         msg.size = sizeof(m.state);
         break;
 
     case VHOST_GET_VRING_BASE:
         memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
+        msg.state.index += dev->vq_index;
         msg.size = sizeof(m.state);
         need_reply = 1;
         break;
 
     case VHOST_SET_VRING_ADDR:
         memcpy(&msg.addr, arg, sizeof(struct vhost_vring_addr));
+        msg.addr.index += dev->vq_index;
         msg.size = sizeof(m.addr);
         break;
 
@@ -271,7 +279,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
     case VHOST_SET_VRING_CALL:
     case VHOST_SET_VRING_ERR:
         file = arg;
-        msg.u64 = file->index & VHOST_USER_VRING_IDX_MASK;
+        msg.u64 = (file->index + dev->vq_index) & VHOST_USER_VRING_IDX_MASK;
         msg.size = sizeof(m.u64);
         if (ioeventfd_enabled() && file->fd > 0) {
             fds[fd_num++] = file->fd;
@@ -313,6 +321,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
             error_report("Received bad msg size.");
             return -1;
         }
+        msg.state.index -= dev->vq_index;
         memcpy(arg, &msg.state, sizeof(struct vhost_vring_state));
         break;
     default:
diff --git a/net/vhost-user.c b/net/vhost-user.c
index 1d86a2b..41c8a27 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -121,35 +121,39 @@ static void net_vhost_user_event(void *opaque, int event)
     case CHR_EVENT_OPENED:
         vhost_user_start(s);
         net_vhost_link_down(s, false);
-        error_report("chardev \"%s\" went up", s->chr->label);
+        error_report("chardev \"%s\" went up\n", s->nc.info_str);
         break;
     case CHR_EVENT_CLOSED:
         net_vhost_link_down(s, true);
         vhost_user_stop(s);
-        error_report("chardev \"%s\" went down", s->chr->label);
+        error_report("chardev \"%s\" went down\n", s->nc.info_str);
         break;
     }
 }
 
 static int net_vhost_user_init(NetClientState *peer, const char *device,
-                               const char *name, CharDriverState *chr)
+                               const char *name, CharDriverState *chr,
+                               uint32_t queues)
 {
     NetClientState *nc;
     VhostUserState *s;
+    int i;
 
-    nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
+    for (i = 0; i < queues; i++) {
+        nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
 
-    snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
-             chr->label);
+        snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
+                 i, chr->label);
 
-    s = DO_UPCAST(VhostUserState, nc, nc);
+        s = DO_UPCAST(VhostUserState, nc, nc);
 
-    /* We don't provide a receive callback */
-    s->nc.receive_disabled = 1;
-    s->chr = chr;
-
-    qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
+        /* We don't provide a receive callback */
+        s->nc.receive_disabled = 1;
+        s->chr = chr;
+        s->nc.queue_index = i;
+        qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
+    }
 
     return 0;
 }
@@ -225,6 +229,7 @@ static int net_vhost_check_net(QemuOpts *opts, void *opaque)
 
 int net_init_vhost_user(const NetClientOptions *opts, const char *name,
                         NetClientState *peer)
 {
+    uint32_t queues;
     const NetdevVhostUserOptions *vhost_user_opts;
     CharDriverState *chr;
@@ -243,6 +248,12 @@ int net_init_vhost_user(const NetClientOptions *opts, const char *name,
         return -1;
     }
 
+    /* number of queues for multiqueue */
+    if (vhost_user_opts->has_queues) {
+        queues = vhost_user_opts->queues;
+    } else {
+        queues = 1;
+    }
 
-    return net_vhost_user_init(peer, "vhost_user", name, chr);
+    return net_vhost_user_init(peer, "vhost_user", name, chr, queues);
 }
diff --git a/qapi-schema.json b/qapi-schema.json
index f97ffa1..00791dd 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -2444,12 +2444,16 @@
 #
 # @vhostforce: #optional vhost on for non-MSIX virtio guests (default: false).
 #
+# @queues: #optional number of queues to be created for multiqueue vhost-user
+#          (default: 1) (Since 2.4)
+#
 # Since 2.1
 ##
 { 'struct': 'NetdevVhostUserOptions',
   'data': {
     'chardev': 'str',
-    '*vhostforce': 'bool' } }
+    '*vhostforce': 'bool',
+    '*queues': 'uint32' } }
 
 ##
 # @NetClientOptions
diff --git a/qemu-options.hx b/qemu-options.hx
index ec356f6..dad035e 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1942,13 +1942,14 @@ The hubport netdev lets you connect a NIC to a QEMU "vlan" instead of a single
 netdev.  @code{-net} and @code{-device} with parameter @option{vlan} create the
 required hub automatically.
 
-@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off]
+@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off][,queues=n]
 
 Establish a vhost-user netdev, backed by a chardev @var{id}. The chardev should
 be a unix domain socket backed one. The vhost-user uses a specifically defined
 protocol to pass vhost ioctl replacement messages to an application on the other
 end of the socket. On non-MSIX guests, the feature can be forced with
-@var{vhostforce}.
+@var{vhostforce}. Use 'queues=@var{n}' to specify the number of queues to
+be created for multiqueue vhost-user.
 
 Example:
 @example
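
Usage note, not part of the patch (the interface name below is hypothetical):
once the guest sees a multiqueue-capable virtio-net device, the additional
queue pairs are typically enabled from inside the guest with ethtool:

    ethtool -L eth0 combined 2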