From patchwork Wed Apr 1 08:15:09 2015
From: Jason Wang <jasowang@redhat.com>
To: qemu-devel@nongnu.org
Cc: cornelia.huck@de.ibm.com, Jason Wang <jasowang@redhat.com>, mst@redhat.com
Date: Wed, 1 Apr 2015 16:15:09 +0800
Message-Id: <1427876112-12615-16-git-send-email-jasowang@redhat.com>
In-Reply-To: <1427876112-12615-1-git-send-email-jasowang@redhat.com>
References: <1427876112-12615-1-git-send-email-jasowang@redhat.com>
Subject: [Qemu-devel] [PATCH V5 15/18] virtio-pci: speedup MSI-X masking and unmasking

This patch speeds up MSI-X masking and unmasking by using the mapping
between a vector and its queues. With it there is no longer any need to
walk all possible virtqueues; only the queues bound to the vector are
visited, which reduces the time spent masking or unmasking a single
vector when hundreds or even thousands of virtqueues are supported.

Tested with an 80 queue pair virtio-net-pci device, changing the smp
affinity in the background while running netperf at the same time:

Before the patch: 5711.70 Gbits/sec
After the patch:  6830.98 Gbits/sec

About a 19.6% improvement in throughput.
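For reference, virtio_vector_first_queue() and virtio_vector_next_queue()
come from the vector-to-virtqueues mapping introduced earlier in this
series: each device keeps one list head per MSI-X vector and links every
virtqueue into the list of the vector it uses. A rough sketch of that
shape (struct layout and field names here are illustrative, not copied
from the earlier patch):

    #include <stdint.h>
    #include "qemu/queue.h"

    typedef struct VirtQueue VirtQueue;
    typedef struct VirtIODevice VirtIODevice;

    struct VirtQueue {
        uint16_t vector;              /* MSI-X vector this queue uses */
        QLIST_ENTRY(VirtQueue) node;  /* link in the per-vector list */
        /* ... */
    };

    struct VirtIODevice {
        /* one list head per MSI-X vector, sized at realize time */
        QLIST_HEAD(, VirtQueue) *vector_queues;
        /* ... */
    };

    VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector)
    {
        /* head of the list of queues bound to this vector */
        return QLIST_FIRST(&vdev->vector_queues[vector]);
    }

    VirtQueue *virtio_vector_next_queue(VirtQueue *vq)
    {
        /* next queue sharing the same vector, NULL at the end */
        return QLIST_NEXT(vq, node);
    }

With this layout, masking or unmasking costs O(queues on that vector)
instead of O(all virtqueues), which is where the gain above comes from.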
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/virtio-pci.c | 40 +++++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index c38f33f..9a5242a 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -632,28 +632,30 @@ static int virtio_pci_vector_unmask(PCIDevice *dev, unsigned vector,
 {
     VirtIOPCIProxy *proxy = container_of(dev, VirtIOPCIProxy, pci_dev);
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
-    int ret, queue_no;
+    VirtQueue *vq = virtio_vector_first_queue(vdev, vector);
+    int ret, index, unmasked = 0;
 
-    for (queue_no = 0; queue_no < proxy->nvqs_with_notifiers; queue_no++) {
-        if (!virtio_queue_get_num(vdev, queue_no)) {
+    while (vq) {
+        index = virtio_queue_get_index(vdev, vq);
+        if (!virtio_queue_get_num(vdev, index)) {
             break;
         }
-        if (virtio_queue_vector(vdev, queue_no) != vector) {
-            continue;
-        }
-        ret = virtio_pci_vq_vector_unmask(proxy, queue_no, vector, msg);
+        ret = virtio_pci_vq_vector_unmask(proxy, index, vector, msg);
         if (ret < 0) {
             goto undo;
         }
+        vq = virtio_vector_next_queue(vq);
+        ++unmasked;
     }
+
     return 0;
 
 undo:
-    while (--queue_no >= 0) {
-        if (virtio_queue_vector(vdev, queue_no) != vector) {
-            continue;
-        }
-        virtio_pci_vq_vector_mask(proxy, queue_no, vector);
+    vq = virtio_vector_first_queue(vdev, vector);
+    while (vq && --unmasked >= 0) {
+        index = virtio_queue_get_index(vdev, vq);
+        virtio_pci_vq_vector_mask(proxy, index, vector);
+        vq = virtio_vector_next_queue(vq);
     }
     return ret;
 }
@@ -662,16 +664,16 @@ static void virtio_pci_vector_mask(PCIDevice *dev, unsigned vector)
 {
     VirtIOPCIProxy *proxy = container_of(dev, VirtIOPCIProxy, pci_dev);
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
-    int queue_no;
+    VirtQueue *vq = virtio_vector_first_queue(vdev, vector);
+    int index;
 
-    for (queue_no = 0; queue_no < proxy->nvqs_with_notifiers; queue_no++) {
-        if (!virtio_queue_get_num(vdev, queue_no)) {
+    while (vq) {
+        index = virtio_queue_get_index(vdev, vq);
+        if (!virtio_queue_get_num(vdev, index)) {
             break;
         }
-        if (virtio_queue_vector(vdev, queue_no) != vector) {
-            continue;
-        }
-        virtio_pci_vq_vector_mask(proxy, queue_no, vector);
+        virtio_pci_vq_vector_mask(proxy, index, vector);
+        vq = virtio_vector_next_queue(vq);
     }
 }
 