From patchwork Mon Feb 8 16:10:01 2010
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 44802
From: Tom Lendacky
Organization: IBM Linux Performance
To: kvm@vger.kernel.org
Cc: chrisw@redhat.com, markmc@redhat.com, aliguori@us.ibm.com,
    herbert@gondor.apana.org.au, qemu-devel@nongnu.org,
    rek2@binaryfreedom.info, avi@redhat.com
Date: Mon, 8 Feb 2010 10:10:01 -0600
Message-Id: <201002081010.03751.tahm@linux.vnet.ibm.com>
In-Reply-To: <201001291406.41559.tahm@linux.vnet.ibm.com>
References: <201001291406.41559.tahm@linux.vnet.ibm.com>
Subject: [Qemu-devel] Re: Network shutdown under load

Fix a race condition in which qemu finds that there are not enough virtio
ring buffers available, but the guest makes more buffers available after
that check and before qemu can enable notifications.
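
To make the window concrete, the pattern the patch below moves to is:
check, enable notifications, then check again. Here is a minimal
compile-only sketch of that pattern; ring_has_buffers() and
set_notification() are hypothetical stand-ins for the
virtio_queue_empty()/virtqueue_avail_bytes() and
virtio_queue_set_notification() calls in the real code:

#include <stdbool.h>

/* Hypothetical stand-ins for the real virtqueue accessors. */
bool ring_has_buffers(void);        /* any usable avail descriptors?   */
void set_notification(bool enable); /* guest->host notifications on/off */

static int has_buffers(void)
{
    if (!ring_has_buffers()) {
        set_notification(true);
        /* Re-check: the guest may have added buffers after the first
         * check but before notifications were enabled.  Its kick in
         * that window was suppressed, so if we simply returned 0 here
         * no further notification would ever arrive and rx would stall. */
        if (!ring_has_buffers()) {
            return 0;
        }
    }
    set_notification(false);
    return 1;
}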
Signed-off-by: Tom Lendacky
Signed-off-by: Anthony Liguori
---
 hw/virtio-net.c |   10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

On Friday 29 January 2010 02:06:41 pm Tom Lendacky wrote:
> There's been some discussion of this already on the kvm list, but I want
> to summarize what I've found and also include the qemu-devel list in an
> effort to find a solution to this problem.
>
> Running a netperf test between two kvm guests results in the guest's
> network interface shutting down. I originally found this using kvm
> guests on two different machines that were connected via a 10GbE link.
> However, I found this problem can be easily reproduced using two guests
> on the same machine.
>
> I am running the 2.6.32 level of the kvm.git tree and the 0.12.1.2 level
> of the qemu-kvm.git tree.
>
> The setup includes two bridges, br0 and br1.
>
> The commands used to start the guests are as follows:
>
> usr/local/bin/qemu-system-x86_64 -name cape-vm001 -m 1024 -drive
>   file=/autobench/var/tmp/cape-vm001-raw.img,if=virtio,index=0,media=disk,boot=on
>   -net nic,model=virtio,vlan=0,macaddr=00:16:3E:00:62:51,netdev=cape-vm001-eth0
>   -netdev tap,id=cape-vm001-eth0,script=/autobench/var/tmp/ifup-kvm-br0,downscript=/autobench/var/tmp/ifdown-kvm-br0
>   -net nic,model=virtio,vlan=1,macaddr=00:16:3E:00:62:D1,netdev=cape-vm001-eth1
>   -netdev tap,id=cape-vm001-eth1,script=/autobench/var/tmp/ifup-kvm-br1,downscript=/autobench/var/tmp/ifdown-kvm-br1
>   -vnc :1 -monitor telnet::5701,server,nowait -snapshot -daemonize
>
> usr/local/bin/qemu-system-x86_64 -name cape-vm002 -m 1024 -drive
>   file=/autobench/var/tmp/cape-vm002-raw.img,if=virtio,index=0,media=disk,boot=on
>   -net nic,model=virtio,vlan=0,macaddr=00:16:3E:00:62:61,netdev=cape-vm002-eth0
>   -netdev tap,id=cape-vm002-eth0,script=/autobench/var/tmp/ifup-kvm-br0,downscript=/autobench/var/tmp/ifdown-kvm-br0
>   -net nic,model=virtio,vlan=1,macaddr=00:16:3E:00:62:E1,netdev=cape-vm002-eth1
>   -netdev tap,id=cape-vm002-eth1,script=/autobench/var/tmp/ifup-kvm-br1,downscript=/autobench/var/tmp/ifdown-kvm-br1
>   -vnc :2 -monitor telnet::5702,server,nowait -snapshot -daemonize
>
> The ifup-kvm-br0 script takes the (first) qemu-created tap device,
> brings it up, and adds it to bridge br0. The ifup-kvm-br1 script takes
> the (second) qemu-created tap device, brings it up, and adds it to
> bridge br1.
>
> Each ethernet device within a guest is on its own subnet. For example:
>   guest 1 eth0 has addr 192.168.100.32 and eth1 has addr 192.168.101.32
>   guest 2 eth0 has addr 192.168.100.64 and eth1 has addr 192.168.101.64
>
> On one of the guests run netserver:
>   netserver -L 192.168.101.32 -p 12000
>
> On the other guest run netperf:
>   netperf -L 192.168.101.64 -H 192.168.101.32 -p 12000 -t TCP_STREAM \
>     -l 60 -c -C -- -m 16K -M 16K
>
> It may take more than one netperf run (I find that my second run almost
> always causes the shutdown), but the network on the eth1 links will stop
> working.
>
> I did some debugging and found that in qemu on the guest running
> netserver:
>   - the receive_disabled variable is set and never gets reset
>   - the read_poll event handler for the eth1 tap device is disabled and
>     never re-enabled
> These conditions result in no packets being read from the tap device and
> sent to the guest, effectively shutting down the network. Network
> connectivity can be restored by shutting down the guest interfaces,
> unloading the virtio_net module, re-loading the virtio_net module and
> re-starting the guest interfaces.
>
> I'm continuing to work on debugging this, but would appreciate it if
> some folks with more qemu network experience could try to recreate and
> debug this.
>
> If my kernel config matters, I can provide that.
>
> Thanks,
> Tom

diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index 6e48997..5c0093e 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -379,7 +379,15 @@ static int virtio_net_has_buffers(VirtIONet *n, int bufsize)
         (n->mergeable_rx_bufs &&
          !virtqueue_avail_bytes(n->rx_vq, bufsize, 0))) {
         virtio_queue_set_notification(n->rx_vq, 1);
-        return 0;
+
+        /* To avoid a race condition where the guest has made some buffers
+         * available after the above check but before notification was
+         * enabled, check for available buffers again.
+         */
+        if (virtio_queue_empty(n->rx_vq) ||
+            (n->mergeable_rx_bufs &&
+             !virtqueue_avail_bytes(n->rx_vq, bufsize, 0)))
+            return 0;
     }
 
     virtio_queue_set_notification(n->rx_vq, 0);
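
For anyone chasing the receive_disabled symptom in the quoted report, here
is a much-simplified sketch of how that flag gates packet delivery in
qemu's net layer. The structure and names below only approximate net.c in
qemu-kvm 0.12 (NetClient and deliver_packet are illustrative stand-ins,
not the actual code):

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>   /* ssize_t */

/* Much-simplified model of a net client (approximates VLANClientState). */
typedef struct NetClient {
    int receive_disabled;   /* set when the receiver reports no room */
    ssize_t (*receive)(struct NetClient *nc,
                       const uint8_t *buf, size_t size);
} NetClient;

static ssize_t deliver_packet(NetClient *nc, const uint8_t *buf, size_t size)
{
    ssize_t ret;

    if (nc->receive_disabled) {
        return 0;                   /* receiver still blocked: don't deliver */
    }

    ret = nc->receive(nc, buf, size);
    if (ret == 0) {
        /* The receiver (virtio-net) had no ring buffers.  Stop delivering
         * until it flushes its queued packets, which is what clears this
         * flag again. */
        nc->receive_disabled = 1;
    }
    return ret;
}

If virtio-net never signals that it has room again (the lost notification
is exactly that), receive_disabled stays set and the tap read_poll handler
is never re-enabled, which matches the symptoms in the report. The patch's
check/enable/re-check sequence is the usual virtio event-suppression
idiom; the Linux guest side does the equivalent with virtqueue_enable_cb().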