From patchwork Mon Apr 14 12:00:03 2014
X-Patchwork-Submitter: Greg Kurz
X-Patchwork-Id: 338926
From: Greg Kurz
To: afaerber@suse.de
Cc: kwolf@redhat.com, peter.maydell@linaro.org, thuth@linux.vnet.ibm.com,
 mst@redhat.com, marc.zyngier@arm.com, rusty@rustcorp.com.au, agraf@suse.de,
 qemu-devel@nongnu.org, clg@fr.ibm.com, stefanha@redhat.com,
 cornelia.huck@de.ibm.com, pbonzini@redhat.com, anthony@codemonkey.ws
Date: Mon, 14 Apr 2014 14:00:03 +0200
Message-ID: <20140414115846.32507.15347.stgit@bahia.local>
In-Reply-To: <20140414112942.32507.38267.stgit@bahia.local>
References: <20140414112942.32507.38267.stgit@bahia.local>
User-Agent: StGit/0.16
Subject: [Qemu-devel] [PATCH v7 2/8] virtio: allow byte swapping for vring
 and config access
From: Rusty Russell

This is based on a simpler patch by Anthony Liguori, which only handled
the vring accesses. We also need some drivers to access these helpers,
e.g. for data which contains headers.

Signed-off-by: Rusty Russell
[ add VirtIODevice * argument to most helpers, Greg Kurz ]
Signed-off-by: Greg Kurz
Reviewed-by: Thomas Huth
---
Changes since v6:
- argument re-ordering

 hw/virtio/virtio.c | 88 ++++++++++++++++++++++++++++------------------------
 1 file changed, 47 insertions(+), 41 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index bb646f0..163de1c 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -102,53 +102,56 @@ static void virtqueue_init(VirtQueue *vq)
                                  vq->vring.align);
 }
 
-static inline uint64_t vring_desc_addr(hwaddr desc_pa, int i)
+static inline uint64_t vring_desc_addr(VirtIODevice *vdev, hwaddr desc_pa,
+                                       int i)
 {
     hwaddr pa;
     pa = desc_pa + sizeof(VRingDesc) * i + offsetof(VRingDesc, addr);
-    return ldq_phys(&address_space_memory, pa);
+    return virtio_ldq_phys(vdev, pa);
 }
 
-static inline uint32_t vring_desc_len(hwaddr desc_pa, int i)
+static inline uint32_t vring_desc_len(VirtIODevice *vdev, hwaddr desc_pa, int i)
 {
     hwaddr pa;
     pa = desc_pa + sizeof(VRingDesc) * i + offsetof(VRingDesc, len);
-    return ldl_phys(&address_space_memory, pa);
+    return virtio_ldl_phys(vdev, pa);
 }
 
-static inline uint16_t vring_desc_flags(hwaddr desc_pa, int i)
+static inline uint16_t vring_desc_flags(VirtIODevice *vdev, hwaddr desc_pa,
+                                        int i)
 {
     hwaddr pa;
     pa = desc_pa + sizeof(VRingDesc) * i + offsetof(VRingDesc, flags);
-    return lduw_phys(&address_space_memory, pa);
+    return virtio_lduw_phys(vdev, pa);
 }
 
-static inline uint16_t vring_desc_next(hwaddr desc_pa, int i)
+static inline uint16_t vring_desc_next(VirtIODevice *vdev, hwaddr desc_pa,
+                                       int i)
 {
     hwaddr pa;
     pa = desc_pa + sizeof(VRingDesc) * i + offsetof(VRingDesc, next);
-    return lduw_phys(&address_space_memory, pa);
+    return virtio_lduw_phys(vdev, pa);
 }
 
 static inline uint16_t vring_avail_flags(VirtQueue *vq)
 {
     hwaddr pa;
     pa = vq->vring.avail + offsetof(VRingAvail, flags);
-    return lduw_phys(&address_space_memory, pa);
+    return virtio_lduw_phys(vq->vdev, pa);
 }
 
 static inline uint16_t vring_avail_idx(VirtQueue *vq)
 {
     hwaddr pa;
     pa = vq->vring.avail + offsetof(VRingAvail, idx);
-    return lduw_phys(&address_space_memory, pa);
+    return virtio_lduw_phys(vq->vdev, pa);
 }
 
 static inline uint16_t vring_avail_ring(VirtQueue *vq, int i)
 {
     hwaddr pa;
     pa = vq->vring.avail + offsetof(VRingAvail, ring[i]);
-    return lduw_phys(&address_space_memory, pa);
+    return virtio_lduw_phys(vq->vdev, pa);
 }
 
 static inline uint16_t vring_used_event(VirtQueue *vq)
@@ -160,44 +163,44 @@ static inline void vring_used_ring_id(VirtQueue *vq, int i, uint32_t val)
 {
     hwaddr pa;
     pa = vq->vring.used + offsetof(VRingUsed, ring[i].id);
-    stl_phys(&address_space_memory, pa, val);
+    virtio_stl_phys(vq->vdev, pa, val);
 }
 
 static inline void vring_used_ring_len(VirtQueue *vq, int i, uint32_t val)
 {
     hwaddr pa;
     pa = vq->vring.used + offsetof(VRingUsed, ring[i].len);
-    stl_phys(&address_space_memory, pa, val);
+    virtio_stl_phys(vq->vdev, pa, val);
 }
 
 static uint16_t vring_used_idx(VirtQueue *vq)
 {
     hwaddr pa;
     pa = vq->vring.used + offsetof(VRingUsed, idx);
-    return lduw_phys(&address_space_memory, pa);
+    return virtio_lduw_phys(vq->vdev, pa);
 }
 
 static inline void vring_used_idx_set(VirtQueue *vq, uint16_t val)
 {
     hwaddr pa;
     pa = vq->vring.used + offsetof(VRingUsed, idx);
-    stw_phys(&address_space_memory, pa, val);
+    virtio_stw_phys(vq->vdev, pa, val);
 }
 
 static inline void vring_used_flags_set_bit(VirtQueue *vq, int mask)
 {
+    VirtIODevice *vdev = vq->vdev;
     hwaddr pa;
     pa = vq->vring.used + offsetof(VRingUsed, flags);
-    stw_phys(&address_space_memory,
-             pa, lduw_phys(&address_space_memory, pa) | mask);
+    virtio_stw_phys(vdev, pa, virtio_lduw_phys(vdev, pa) | mask);
 }
 
 static inline void vring_used_flags_unset_bit(VirtQueue *vq, int mask)
 {
+    VirtIODevice *vdev = vq->vdev;
     hwaddr pa;
     pa = vq->vring.used + offsetof(VRingUsed, flags);
-    stw_phys(&address_space_memory,
-             pa, lduw_phys(&address_space_memory, pa) & ~mask);
+    virtio_stw_phys(vdev, pa, virtio_lduw_phys(vdev, pa) & ~mask);
 }
 
 static inline void vring_avail_event(VirtQueue *vq, uint16_t val)
@@ -207,7 +210,7 @@ static inline void vring_avail_event(VirtQueue *vq, uint16_t val)
         return;
     }
     pa = vq->vring.used + offsetof(VRingUsed, ring[vq->vring.num]);
-    stw_phys(&address_space_memory, pa, val);
+    virtio_stw_phys(vq->vdev, pa, val);
 }
 
 void virtio_queue_set_notification(VirtQueue *vq, int enable)
@@ -324,17 +327,18 @@ static unsigned int virtqueue_get_head(VirtQueue *vq, unsigned int idx)
     return head;
 }
 
-static unsigned virtqueue_next_desc(hwaddr desc_pa,
+static unsigned virtqueue_next_desc(VirtIODevice *vdev, hwaddr desc_pa,
                                     unsigned int i, unsigned int max)
 {
     unsigned int next;
 
     /* If this descriptor says it doesn't chain, we're done. */
-    if (!(vring_desc_flags(desc_pa, i) & VRING_DESC_F_NEXT))
+    if (!(vring_desc_flags(vdev, desc_pa, i) & VRING_DESC_F_NEXT)) {
         return max;
+    }
 
     /* Check they're not leading us off end of descriptors. */
-    next = vring_desc_next(desc_pa, i);
+    next = vring_desc_next(vdev, desc_pa, i);
     /* Make sure compiler knows to grab that: we don't want it changing! */
     smp_wmb();
 
@@ -357,6 +361,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
 
     total_bufs = in_total = out_total = 0;
     while (virtqueue_num_heads(vq, idx)) {
+        VirtIODevice *vdev = vq->vdev;
         unsigned int max, num_bufs, indirect = 0;
         hwaddr desc_pa;
         int i;
@@ -366,8 +371,8 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
         i = virtqueue_get_head(vq, idx++);
         desc_pa = vq->vring.desc;
 
-        if (vring_desc_flags(desc_pa, i) & VRING_DESC_F_INDIRECT) {
-            if (vring_desc_len(desc_pa, i) % sizeof(VRingDesc)) {
+        if (vring_desc_flags(vdev, desc_pa, i) & VRING_DESC_F_INDIRECT) {
+            if (vring_desc_len(vdev, desc_pa, i) % sizeof(VRingDesc)) {
                 error_report("Invalid size for indirect buffer table");
                 exit(1);
             }
@@ -380,8 +385,8 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
 
             /* loop over the indirect descriptor table */
             indirect = 1;
-            max = vring_desc_len(desc_pa, i) / sizeof(VRingDesc);
-            desc_pa = vring_desc_addr(desc_pa, i);
+            max = vring_desc_len(vdev, desc_pa, i) / sizeof(VRingDesc);
+            desc_pa = vring_desc_addr(vdev, desc_pa, i);
             num_bufs = i = 0;
         }
 
@@ -392,15 +397,15 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
                 exit(1);
             }
 
-            if (vring_desc_flags(desc_pa, i) & VRING_DESC_F_WRITE) {
-                in_total += vring_desc_len(desc_pa, i);
+            if (vring_desc_flags(vdev, desc_pa, i) & VRING_DESC_F_WRITE) {
+                in_total += vring_desc_len(vdev, desc_pa, i);
             } else {
-                out_total += vring_desc_len(desc_pa, i);
+                out_total += vring_desc_len(vdev, desc_pa, i);
             }
             if (in_total >= max_in_bytes && out_total >= max_out_bytes) {
                 goto done;
             }
-        } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
+        } while ((i = virtqueue_next_desc(vdev, desc_pa, i, max)) != max);
 
         if (!indirect)
             total_bufs = num_bufs;
@@ -445,6 +450,7 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
 {
     unsigned int i, head, max;
     hwaddr desc_pa = vq->vring.desc;
+    VirtIODevice *vdev = vq->vdev;
 
     if (!virtqueue_num_heads(vq, vq->last_avail_idx))
         return 0;
@@ -455,19 +461,19 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
     max = vq->vring.num;
 
     i = head = virtqueue_get_head(vq, vq->last_avail_idx++);
-    if (vq->vdev->guest_features & (1 << VIRTIO_RING_F_EVENT_IDX)) {
+    if (vdev->guest_features & (1 << VIRTIO_RING_F_EVENT_IDX)) {
         vring_avail_event(vq, vring_avail_idx(vq));
     }
 
-    if (vring_desc_flags(desc_pa, i) & VRING_DESC_F_INDIRECT) {
-        if (vring_desc_len(desc_pa, i) % sizeof(VRingDesc)) {
+    if (vring_desc_flags(vdev, desc_pa, i) & VRING_DESC_F_INDIRECT) {
+        if (vring_desc_len(vdev, desc_pa, i) % sizeof(VRingDesc)) {
             error_report("Invalid size for indirect buffer table");
             exit(1);
         }
 
         /* loop over the indirect descriptor table */
-        max = vring_desc_len(desc_pa, i) / sizeof(VRingDesc);
-        desc_pa = vring_desc_addr(desc_pa, i);
+        max = vring_desc_len(vdev, desc_pa, i) / sizeof(VRingDesc);
+        desc_pa = vring_desc_addr(vdev, desc_pa, i);
         i = 0;
     }
 
@@ -475,30 +481,30 @@
     do {
         struct iovec *sg;
 
-        if (vring_desc_flags(desc_pa, i) & VRING_DESC_F_WRITE) {
+        if (vring_desc_flags(vdev, desc_pa, i) & VRING_DESC_F_WRITE) {
             if (elem->in_num >= ARRAY_SIZE(elem->in_sg)) {
                 error_report("Too many write descriptors in indirect table");
                 exit(1);
             }
-            elem->in_addr[elem->in_num] = vring_desc_addr(desc_pa, i);
+            elem->in_addr[elem->in_num] = vring_desc_addr(vdev, desc_pa, i);
             sg = &elem->in_sg[elem->in_num++];
         } else {
             if (elem->out_num >= ARRAY_SIZE(elem->out_sg)) {
                 error_report("Too many read descriptors in indirect table");
                 exit(1);
             }
-            elem->out_addr[elem->out_num] = vring_desc_addr(desc_pa, i);
+            elem->out_addr[elem->out_num] = vring_desc_addr(vdev, desc_pa, i);
             sg = &elem->out_sg[elem->out_num++];
         }
 
-        sg->iov_len = vring_desc_len(desc_pa, i);
+        sg->iov_len = vring_desc_len(vdev, desc_pa, i);
 
         /* If we've got too many, that implies a descriptor loop. */
         if ((elem->in_num + elem->out_num) > max) {
            error_report("Looped descriptor");
            exit(1);
        }
-    } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
+    } while ((i = virtqueue_next_desc(vdev, desc_pa, i, max)) != max);
 
     /* Now map what we have collected */
     virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
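
For readers following the series: the virtio_ld*_phys()/virtio_st*_phys() accessors used above are
introduced earlier in this patch set; their job is to read and write vring and config fields in the
byte order the guest expects for this particular device, rather than the target's default. A rough
sketch of the shape such a helper could take, assuming a per-device virtio_is_big_endian() predicate
and the existing lduw_le_phys()/lduw_be_phys()/stw_le_phys()/stw_be_phys() accessors (names and
placement here are illustrative, not quoted from this series):

/* Illustrative sketch, not part of this patch: load a 16-bit vring/config
 * field in the byte order the guest expects for this device. */
static inline uint16_t virtio_lduw_phys(VirtIODevice *vdev, hwaddr pa)
{
    if (virtio_is_big_endian(vdev)) {   /* assumed per-device predicate */
        return lduw_be_phys(&address_space_memory, pa);
    }
    return lduw_le_phys(&address_space_memory, pa);
}

/* Matching store helper, same assumptions as above. */
static inline void virtio_stw_phys(VirtIODevice *vdev, hwaddr pa, uint16_t value)
{
    if (virtio_is_big_endian(vdev)) {
        stw_be_phys(&address_space_memory, pa, value);
    } else {
        stw_le_phys(&address_space_memory, pa, value);
    }
}

Structured this way, virtio.c stays free of target-endian #ifdefs: every accessor takes the
VirtIODevice *, and any byte swap happens in one place.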