From patchwork Tue Apr 16 03:08:59 2013
From: Jason Wang
Date: Tue, 16 Apr 2013 11:08:59 +0800
To: Aurelien Jarno
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH V2 18/20] virtio-net: multiqueue support
Message-ID: <516CC0CB.9020202@redhat.com>
In-Reply-To: <20130415082853.GA32246@ohm.aurel32.net>

On 04/15/2013 04:28 PM, Aurelien Jarno wrote:
> On Mon, Apr 15, 2013 at 01:29:01PM +0800, Jason Wang wrote:
>> On 04/13/2013 09:17 PM, Aurelien Jarno wrote:
>>> On Fri, Jan 25, 2013 at 06:35:41PM +0800, Jason Wang wrote:
>>>> This patch implements both userspace and vhost support for multiple queue
>>>> virtio-net (VIRTIO_NET_F_MQ). This is done by introducing an array of
>>>> VirtIONetQueue to VirtIONet.
>>>>
>>>> Signed-off-by: Jason Wang
>>>> ---
>>>>  hw/virtio-net.c |  317 +++++++++++++++++++++++++++++++++++++++++++------------
>>>>  hw/virtio-net.h |   28 +++++-
>>>>  2 files changed, 275 insertions(+), 70 deletions(-)
>>> This patch breaks virtio-net in Minix, even with multiqueue disabled.
>>> I don't know virtio well enough to tell whether it is a Minix or a QEMU
>>> problem. However, I have been able to identify the part of the commit
>>> causing the failure:
>> Hi Aurelien:
>>
>> Thanks for the work.
>>>> diff --git a/hw/virtio-net.c b/hw/virtio-net.c
>>>> index ef522d5..cec91a7 100644
>>>> --- a/hw/virtio-net.c
>>>> +++ b/hw/virtio-net.c
>>> ...
>>>
>>>> +static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue, int ctrl)
>>>> +{
>>>> +    VirtIODevice *vdev = &n->vdev;
>>>> +    int i, max = multiqueue ? n->max_queues : 1;
>>>> +
>>>> +    n->multiqueue = multiqueue;
>>>> +
>>>> +    for (i = 2; i <= n->max_queues * 2 + 1; i++) {
>>>> +        virtio_del_queue(vdev, i);
>>>> +    }
>>>> +
>>> The for loop above is something new, even with multiqueue disabled. Even
>>> with max_queues=1 it calls virtio_del_queue with i = 2 and i = 3.
>>> Disabling this loop makes the code work as before.
>> Looks like a bug here; we need to change n->max_queues * 2 + 1 to
>> n->max_queues * 2. The reason we need to delete queue 2 each time is
>> that vq 2 has a different meaning in multiqueue and single-queue modes.
>> In single-queue mode, vq 2 may be the ctrl vq, but in multiqueue mode it
>> is rx1.
>>
>> Let's see whether this small change works.
> Unfortunately it doesn't fix the issue. I don't know a lot about virtio,
> but would it be possible to only delete the queues that have been
> enabled?

The issue is that vq 2 has different meanings in the two modes, so it must
be deleted and reinitialized during feature negotiation.

>>> On the Minix side it triggers the following assertion:
>>>
>>> | virtio.c:370: assert "q->vaddr != NULL" failed, function "free_phys_queue"
>>> | virtio_net(73141): panic: assert failed
>>>
>>> This corresponds to this function in lib/libvirtio/virtio.c:
>>>
>>> | static void
>>> | free_phys_queue(struct virtio_queue *q)
>>> | {
>>> |     assert(q != NULL);
>>> |     assert(q->vaddr != NULL);
>>> |
>>> |     free_contig(q->vaddr, q->ring_size);
>>> |     q->vaddr = NULL;
>>> |     q->paddr = 0;
>>> |     q->num = 0;
>>> |     free_contig(q->data, sizeof(q->data[0]));
>>> |     q->data = NULL;
>>> | }
>>>
>>> Do you have an idea if the problem is on the Minix side or on the QEMU
>>> side?
>>>
>> Haven't figured out the relationship between the dynamic virtqueue
>> del/add and q->vaddr here. If the above change does not work, I guess this
> Unfortunately I don't really know the Minix code either. I happened to
> find the issue when testing other changes, and looked at the code to try
> to understand the problem.

Me too.

>> problem happens only for the ctrl vq (vq 2)? And when does this happen?
> I guess so, given that removing the call to virtio_del_queue() for vq 2
> fixes or works around the issue.

That will cause trouble for a multiqueue guest, which may think vq 2 is an
rx queue.

>> Rebooting?
> It happens at the initial boot.

Looking at the code, it looks like vq 2 was initialized unconditionally at
startup; how about the following patch?

diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index 4bb49eb..4886aa0 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -1300,7 +1300,6 @@ VirtIODevice *virtio_net_init(DeviceState *dev, NICConf *conf,
                              virtio_net_handle_tx_bh);
         n->vqs[0].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[0]);
     }
-    n->ctrl_vq = virtio_add_queue(&n->vdev, 64, virtio_net_handle_ctrl);
     qemu_macaddr_default_if_unset(&conf->macaddr);
     memcpy(&n->mac[0], &conf->macaddr, sizeof(n->mac));
     n->status = VIRTIO_NET_S_LINK_UP;

Thanks

> Thanks,
> Aurelien
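
To make the two suggestions above concrete, here is a minimal sketch of how
virtio_net_set_multiqueue() could tear down and rebuild the queues: the
deletion loop stops at n->max_queues * 2, the extra rx/tx pairs for the
negotiated mode are re-added, and the ctrl vq is added last instead of
unconditionally in virtio_net_init(). It assumes the virtio_add_queue() /
virtio_del_queue() calls and VirtIONet/VirtIONetQueue layout visible in the
quoted hunks; the virtio_net_handle_rx handler name, the rx_vq/tx_vq field
names, and the 256-entry data-queue size are illustrative assumptions, and
the tx-timer variant is omitted for brevity. This is a sketch, not the
committed fix.

/*
 * Sketch only: folds together the loop-bound change (stop at
 * n->max_queues * 2) and moving the ctrl vq setup out of
 * virtio_net_init(), as discussed in the thread above.
 */
static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue, int ctrl)
{
    VirtIODevice *vdev = &n->vdev;
    int i, max = multiqueue ? n->max_queues : 1;

    n->multiqueue = multiqueue;

    /*
     * Delete every vq above rx0/tx0 (indices 0 and 1), including the slot
     * that holds the ctrl vq in single-queue mode but rx1 in multiqueue
     * mode, so the layout can be rebuilt for the negotiated mode.
     */
    for (i = 2; i <= n->max_queues * 2; i++) {
        virtio_del_queue(vdev, i);
    }

    /* Re-create the extra rx/tx pairs needed for the current mode
     * (queue size and rx handler name are assumptions). */
    for (i = 1; i < max; i++) {
        n->vqs[i].rx_vq = virtio_add_queue(vdev, 256, virtio_net_handle_rx);
        n->vqs[i].tx_vq = virtio_add_queue(vdev, 256, virtio_net_handle_tx_bh);
    }

    /* The ctrl vq always follows the data queues, whatever their number. */
    if (ctrl) {
        n->ctrl_vq = virtio_add_queue(vdev, 64, virtio_net_handle_ctrl);
    }
}

The point of still deleting vq 2 rather than leaving it in place is the one
made above: its meaning depends on whether multiqueue was negotiated, so it
has to be recreated with the right semantics each time.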