From patchwork Wed Oct 28 12:16:33 2015
From: Igor Mammedov
To: "Michael S. Tsirkin"
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 1/6] virtio: introduce virtio_map
Date: Wed, 28 Oct 2015 13:16:33 +0100
Message-ID: <20151028131633.5faf0c6f@nial.brq.redhat.com>
In-Reply-To: <1445935663-31971-2-git-send-email-mst@redhat.com>
References: <1445935663-31971-1-git-send-email-mst@redhat.com>
 <1445935663-31971-2-git-send-email-mst@redhat.com>

On Tue, 27 Oct 2015 10:47:56 +0200
"Michael S. Tsirkin" wrote:

> virtio_map_sg currently fails if one of the entries it's mapping is
> contigious in GPA but not HVA address space. Introduce virtio_map which
> handles this by splitting sg entries.
> 
> This new API generally turns out to be a good idea since it's harder to
> misuse: at least in one case the existing one was used incorrectly.
> 
> This will still fail if there's no space left in the sg, but luckily max
> queue size in use is currently 256, while max sg size is 1024, so we
> should be OK even is all entries happen to cross a single DIMM boundary.
                    ^^ S/is/if/
> 
> Won't work well with very small DIMM sizes, unfortunately:
> e.g. this will fail with 4K DIMMs where a single
> request might span a large number of DIMMs.
> 
> Let's hope these are uncommon - at least we are not breaking things.
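
[Aside for readers, not part of the quoted patch: the case being handled is a
descriptor range that is contiguous in guest physical addresses but crosses a
boundary between two host mappings, so it has to become two iovec entries.
Below is a self-contained sketch of that splitting approach - the 4K region
size, the addresses, MAX_SG and the fake map function standing in for
cpu_physical_memory_map() are all made up for illustration, and it uses
memmove since the shifted ranges overlap; it is not QEMU code.]

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

#define REGION_SIZE 4096   /* pretend every memory region (DIMM) is 4K */
#define MAX_SG      8

/* Stand-in for cpu_physical_memory_map(): a mapping stops at a region
 * boundary, so *plen can come back smaller than what was asked for. */
static void *fake_map(uint64_t gpa, uint64_t *plen)
{
    uint64_t room = REGION_SIZE - (gpa % REGION_SIZE);
    if (*plen > room) {
        *plen = room;
    }
    return (void *)(uintptr_t)(0x7f0000000000ULL + gpa); /* fake HVA */
}

/* Same idea as the patch's loop: if the mapping came back short, shift the
 * remaining entries one slot to the right and let the next iteration map
 * the tail that is left over. */
static void map_and_split(struct iovec *sg, uint64_t *addr, unsigned int *num)
{
    for (unsigned int i = 0; i < *num; i++) {
        uint64_t len = sg[i].iov_len;
        sg[i].iov_base = fake_map(addr[i], &len);
        if (len == sg[i].iov_len) {
            continue;
        }
        assert(*num < MAX_SG);                /* no room left to split */
        memmove(sg + i + 1, sg + i, sizeof(*sg) * (*num - i));
        memmove(addr + i + 1, addr + i, sizeof(*addr) * (*num - i));
        sg[i].iov_len = len;                  /* part before the boundary */
        addr[i + 1] += len;                   /* tail starts past it */
        sg[i + 1].iov_len -= len;
        ++*num;
    }
}

int main(void)
{
    struct iovec sg[MAX_SG] = { { .iov_len = 6000 } }; /* crosses the 4K line */
    uint64_t addr[MAX_SG] = { 1024 };
    unsigned int num = 1;

    map_and_split(sg, addr, &num);
    printf("entries after split: %u (%zu + %zu bytes)\n",
           num, sg[0].iov_len, sg[1].iov_len);  /* 2 (3072 + 2928 bytes) */
    return 0;
}
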
> 
> Note: virtio-scsi calls virtio_map_sg on data loaded from network, and
> validates input, asserting on failure. Copy the validating code here -
> it will be dropped from virtio-scsi in a follow-up patch.
> 
> Reported-by: Igor Mammedov
> Signed-off-by: Michael S. Tsirkin
> ---
>  include/hw/virtio/virtio.h |  1 +
>  hw/virtio/virtio.c         | 56 ++++++++++++++++++++++++++++++++++++++--------
>  2 files changed, 48 insertions(+), 9 deletions(-)
> 
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 9d09115..9d9abb4 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -153,6 +153,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
>  
>  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>                        size_t num_sg, int is_write);
> +void virtqueue_map(VirtQueueElement *elem);
>  int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
>  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>                            unsigned int out_bytes);
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index d0bc72e..a6878c0 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -448,28 +448,66 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>      return in_bytes <= in_total && out_bytes <= out_total;
>  }
>  
> -void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
> -                      size_t num_sg, int is_write)
> +static void virtqueue_map_iovec(struct iovec *sg, hwaddr *addr,
> +                                size_t *num_sg, size_t max_size,
> +                                int is_write)

this fails to build with:

hw/virtio/virtio.c:498:25: error: passing argument 3 of ‘virtqueue_map_iovec’ from incompatible pointer type [-Werror]

here is fixup:

>  {
>      unsigned int i;
>      hwaddr len;
>  
> -    if (num_sg > VIRTQUEUE_MAX_SIZE) {
> -        error_report("virtio: map attempt out of bounds: %zd > %d",
> -                     num_sg, VIRTQUEUE_MAX_SIZE);
> -        exit(1);
> -    }
> +    /* Note: this function MUST validate input, some callers
> +     * are passing in num_sg values received over the network.
> +     */
> +    /* TODO: teach all callers that this can fail, and return failure instead
> +     * of asserting here.
> +     * When we do, we might be able to re-enable NDEBUG below.
> +     */
> +#ifdef NDEBUG
> +#error building with NDEBUG is not supported
> +#endif
> +    assert(*num_sg <= max_size);
>  
> -    for (i = 0; i < num_sg; i++) {
> +    for (i = 0; i < *num_sg; i++) {
>          len = sg[i].iov_len;
>          sg[i].iov_base = cpu_physical_memory_map(addr[i], &len, is_write);
> -        if (sg[i].iov_base == NULL || len != sg[i].iov_len) {
> +        if (!sg[i].iov_base) {
>              error_report("virtio: error trying to map MMIO memory");
>              exit(1);
>          }
> +        if (len == sg[i].iov_len) {
> +            continue;
> +        }
> +        if (*num_sg >= max_size) {
> +            error_report("virtio: memory split makes iovec too large");
> +            exit(1);
> +        }
> +        memcpy(sg + i + 1, sg + i, sizeof(*sg) * (*num_sg - i));
> +        memcpy(addr + i + 1, addr + i, sizeof(*addr) * (*num_sg - i));
> +        assert(len < sg[i + 1].iov_len);
> +        sg[i].iov_len = len;
> +        addr[i + 1] += len;
> +        sg[i + 1].iov_len -= len;
> +        ++*num_sg;
>      }
>  }
>  
> +/* Deprecated: don't use in new code */
> +void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
> +                      size_t num_sg, int is_write)
> +{
> +    virtqueue_map_iovec(sg, addr, &num_sg, num_sg, is_write);
> +}
> +
> +void virtqueue_map(VirtQueueElement *elem)
> +{
> +    virtqueue_map_iovec(elem->in_sg, elem->in_addr, &elem->in_num,
> +                        MIN(ARRAY_SIZE(elem->in_sg), ARRAY_SIZE(elem->in_addr)),
> +                        1);
> +    virtqueue_map_iovec(elem->out_sg, elem->out_addr, &elem->out_num,
> +                        MIN(ARRAY_SIZE(elem->out_sg), ARRAY_SIZE(elem->out_addr)),
> +                        0);
> +}
> +
>  int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
>  {
>      unsigned int i, head, max;

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 10cd03a..ef42baa 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -449,7 +449,7 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
 }
 
 static void virtqueue_map_iovec(struct iovec *sg, hwaddr *addr,
-                                size_t *num_sg, size_t max_size,
+                                unsigned int *num_sg, size_t max_size,
                                 int is_write)
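
[Note on why the fixup works, added for clarity - a reproduction sketch, not
QEMU code: virtqueue_map() passes &elem->in_num / &elem->out_num, and those
element counters are unsigned int, so handing their address to a size_t *
parameter is an incompatible-pointer-type warning on a typical 64-bit build
(where size_t is unsigned long) and an outright error under -Werror. Changing
the parameter to unsigned int * makes the call sites' types match. A minimal
example of the diagnostic:]

#include <stddef.h>

/* same shape as the parameter before the fixup */
static void map_iovec_sketch(size_t *num_sg)
{
    (void)num_sg;
}

int main(void)
{
    unsigned int in_num = 0;    /* like the element's in_num counter */
    map_iovec_sketch(&in_num);  /* warning: passing 'unsigned int *' where
                                   'size_t *' is expected; with -Werror this
                                   fails the build, as in the error above */
    return 0;
}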