From patchwork Wed Dec 4 03:21:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexey Kardashevskiy X-Patchwork-Id: 1203965 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47SPKn3rDqz9sPh for ; Wed, 4 Dec 2019 14:22:21 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKn20v1zDqSR for ; Wed, 4 Dec 2019 14:22:21 +1100 (AEDT) X-Original-To: slof@lists.ozlabs.org Delivered-To: slof@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.ru (client-ip=107.174.27.60; helo=ozlabs.ru; envelope-from=aik@ozlabs.ru; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from ozlabs.ru (unknown [107.174.27.60]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKg0xfxzDqSJ for ; Wed, 4 Dec 2019 14:22:15 +1100 (AEDT) Received: from fstn1-p1.ozlabs.ibm.com (localhost [IPv6:::1]) by ozlabs.ru (Postfix) with ESMTP id 8177CAE80045; Tue, 3 Dec 2019 22:20:40 -0500 (EST) From: Alexey Kardashevskiy To: slof@lists.ozlabs.org Date: Wed, 4 Dec 2019 14:21:34 +1100 Message-Id: <20191204032138.127624-2-aik@ozlabs.ru> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191204032138.127624-1-aik@ozlabs.ru> References: <20191204032138.127624-1-aik@ozlabs.ru> Subject: [SLOF] [PATCH slof v4 1/5] pci-phb: Reimplement dma-map-in/out X-BeenThere: slof@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: "Patches for https://github.com/aik/SLOF" List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Errors-To: slof-bounces+incoming=patchwork.ozlabs.org@lists.ozlabs.org Sender: "SLOF" The immediate problem with the code is that it relies on memory allocator aligning addresses to the size. This is true for SLOF but not for GRUB and in unaligned situations we end up mapping more pages than bm-alloc allocated. This fixes the problem by calculating aligned DMA size before calling bm-alloc. While at this, simplify the code by removing global variables. Also replace 1000/fff (the default 4K IOMMU page size) with tce-ps/mask. Signed-off-by: Alexey Kardashevskiy Reviewed-by: Michael Roth --- Changes: v4: * fixed code comments, tab/spaces * fixed bm-alloc failure handling --- board-qemu/slof/pci-phb.fs | 95 ++++++++++++++++---------------------- 1 file changed, 39 insertions(+), 56 deletions(-) diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs index 06729bcf77a0..2a003fca4a30 100644 --- a/board-qemu/slof/pci-phb.fs +++ b/board-qemu/slof/pci-phb.fs @@ -14,6 +14,8 @@ 0 VALUE phb-debug? +1000 CONSTANT tce-ps \ Default TCE page size is 4K +tce-ps 1- CONSTANT tce-mask ." Populating " pwd cr @@ -86,17 +88,17 @@ setup-puid : dma-alloc ( size -- virt ) phb-debug? IF cr ." dma-alloc called: " .s cr THEN - fff + fff not and \ Align size to next 4k boundary + tce-ps #aligned alloc-mem \ alloc-mem always returns aligned memory - double check just to be sure - dup fff and IF + dup tce-mask and IF ." 
Warning: dma-alloc got unaligned memory!" cr THEN ; : dma-free ( virt size -- ) phb-debug? IF cr ." dma-free called: " .s cr THEN - fff + fff not and \ Align size to next 4k boundary + tce-ps #aligned free-mem ; @@ -107,10 +109,6 @@ setup-puid 0 VALUE dma-window-size \ Size of the window 0 VALUE bm-handle \ Bitmap allocator handle -0 VALUE my-virt -0 VALUE my-size -0 VALUE dev-addr -0 VALUE tmp-dev-addr \ Read helper variables (LIOBN, DMA window base and size) from the \ "ibm,dma-window" property. This property can be either located @@ -130,11 +128,11 @@ setup-puid decode-64 TO dma-window-size 2drop bm-handle 0= IF - dma-window-base dma-window-size 1000 bm-allocator-init to bm-handle + dma-window-base dma-window-size tce-ps bm-allocator-init to bm-handle \ Sometimes the window-base appears as zero, that does not \ go well with NULL pointers. So block this address dma-window-base 0= IF - bm-handle 1000 bm-alloc drop + bm-handle tce-ps bm-alloc drop THEN THEN ; @@ -145,69 +143,54 @@ setup-puid 0 TO dma-window-size ; -\ We assume that firmware never maps more than the whole dma-window-size -\ so we cheat by calculating the remainder of addr/windowsize instead -\ of taking care to maintain a list of assigned device addresses -: dma-virt2dev ( virt -- devaddr ) - dma-window-size mod dma-window-base + -; +\ grub does not align allocated addresses to the size so when mapping, +\ we might need to ask bm-alloc for an extra IOMMU page +: dma-align ( size virt -- aligned-size ) tce-mask and + tce-ps #aligned ; +: dma-trunc ( addr -- addr&~fff ) tce-mask not and ; : dma-map-in ( virt size cachable? -- devaddr ) phb-debug? IF cr ." dma-map-in called: " .s cr THEN (init-dma-window-vars) - drop ( virt size ) - - to my-size - to my-virt - bm-handle my-size bm-alloc - to dev-addr - dev-addr 0 < IF - ." Bitmap allocation Failed " dev-addr . - FALSE EXIT + drop + over dma-align ( virt size ) \ size is aligned now + tuck ( size virt size ) + bm-handle swap bm-alloc ( size virt dev-addr ) \ dev-addr is aligned + dup 0 < IF + ." Bitmap allocation Failed " + 3drop + 0 EXIT THEN - dev-addr to tmp-dev-addr - my-virt my-size - bounds dup >r ( v+s virt R: virt ) - swap fff + fff not and \ Align end to next 4k boundary - swap fff not and ( v+s' virt' R: virt ) + swap ( size dev-addr virt ) + 2dup tce-mask and or >r \ add page offset to the return value + + dma-trunc 3 OR \ Truncate and add read and write perm + rot ( dev-addr virt size r: dev-addr ) + 0 ?DO - \ ." mapping " i . cr - dma-window-liobn \ liobn - tmp-dev-addr \ ioba - i 3 OR \ Make a read- & writeable TCE - ( liobn ioba tce R: virt ) + 2dup dma-window-liobn -rot ( dev-addr virt liobn dev-addr virt r: dev-addr ) hv-put-tce ABORT" H_PUT_TCE failed" - tmp-dev-addr 1000 + to tmp-dev-addr - 1000 +LOOP - r> drop - my-virt FFF and dev-addr or + tce-ps + swap tce-ps + swap ( dev-addr' virt' r: dev-addr ) + tce-ps +LOOP (clear-dma-window-vars) + 2drop + r> ; : dma-map-out ( virt devaddr size -- ) phb-debug? IF cr ." dma-map-out called: " .s cr THEN (init-dma-window-vars) - to my-size - to dev-addr - to my-virt - dev-addr fff not and to dev-addr - dev-addr to tmp-dev-addr - - my-virt my-size ( virt size ) - bounds ( v+s virt ) - swap fff + fff not and \ Align end to next 4k boundary - swap fff not and ( v+s' virt' ) + rot drop ( devaddr size ) + over dma-align + swap dma-trunc swap ( devaddr-trunc size-extended ) + 2dup bm-handle -rot bm-free + 0 ?DO - \ ." unmapping " i . 
cr - dma-window-liobn \ liobn - tmp-dev-addr \ ioba - i \ Lowest bits not set => invalid TCE - ( liobn ioba tce ) + dup 0 dma-window-liobn -rot hv-put-tce ABORT" H_PUT_TCE failed" - tmp-dev-addr 1000 + to tmp-dev-addr - 1000 +LOOP - bm-handle dev-addr my-size bm-free + tce-ps + + tce-ps +LOOP + drop (clear-dma-window-vars) ; From patchwork Wed Dec 4 03:21:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexey Kardashevskiy X-Patchwork-Id: 1203966 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47SPKt12ccz9sPh for ; Wed, 4 Dec 2019 14:22:26 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKs548lzDqTJ for ; Wed, 4 Dec 2019 14:22:25 +1100 (AEDT) X-Original-To: slof@lists.ozlabs.org Delivered-To: slof@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.ru (client-ip=107.174.27.60; helo=ozlabs.ru; envelope-from=aik@ozlabs.ru; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from ozlabs.ru (unknown [107.174.27.60]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKh43kjzDqSj for ; Wed, 4 Dec 2019 14:22:16 +1100 (AEDT) Received: from fstn1-p1.ozlabs.ibm.com (localhost [IPv6:::1]) by ozlabs.ru (Postfix) with ESMTP id 0BC22AE800EC; Tue, 3 Dec 2019 22:20:41 -0500 (EST) From: Alexey Kardashevskiy To: slof@lists.ozlabs.org Date: Wed, 4 Dec 2019 14:21:35 +1100 Message-Id: <20191204032138.127624-3-aik@ozlabs.ru> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191204032138.127624-1-aik@ozlabs.ru> References: <20191204032138.127624-1-aik@ozlabs.ru> Subject: [SLOF] [PATCH slof v4 2/5] virtio: Store queue descriptors in virtio_device X-BeenThere: slof@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: "Patches for https://github.com/aik/SLOF" List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Errors-To: slof-bounces+incoming=patchwork.ozlabs.org@lists.ozlabs.org Sender: "SLOF" At the moment desc/avail/used pointers are read from the device every time we need them. This works for now unless iommu_platform=on is used, desc/avail/used stored in the config space are bus addresses while SLOF should keep using the guest physical addresses. virtio-net stores queue descriptors already, virtio-serial does it in global statics, move them into virtio_device. The next patch will use this to allow IOMMU. While at this, move repeating avail->flags/idx setup into virtio_queue_init_vq() except virtio-serial which vq_rx->avail->idx is setup differently. 
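A minimal sketch (not part of the patch) of the driver pattern that results from this change, assuming the struct vqs/virtio_device layout and helper signatures shown in the diff below; example_init_and_post() is a hypothetical name, and the barrier/byte-swap helpers (mb(), virtio_modern16_to_cpu() and friends) come from SLOF's existing headers as in the current libvirtio sources:

  #include <stdint.h>
  #include "virtio.h"
  #include "virtio-internal.h"

  static int example_init_and_post(struct virtio_device *dev, void *buf, uint32_t len)
  {
          struct vqs *vq = virtio_queue_init_vq(dev, 0); /* returns &dev->vq[0] or NULL */
          uint16_t avail_idx;
          int id;

          if (!vq)
                  return -1;      /* allocation failed or queue index out of range */

          /* Data path: the cached guest-physical pointers are used directly,
           * no more re-reading desc/avail/used from the device config space. */
          avail_idx = virtio_modern16_to_cpu(dev, vq->avail->idx);
          id = avail_idx % vq->size;
          virtio_fill_desc(vq, id, dev->features, (uint64_t)buf, len,
                           VRING_DESC_F_WRITE, 0);
          vq->avail->ring[avail_idx % vq->size] = virtio_cpu_to_modern16(dev, id);
          mb();
          vq->avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1);
          virtio_queue_notify(dev, 0);
          return 0;
  }

Note that virtio_queue_init_vq() now presets avail->flags/idx itself, which is why the per-driver setup lines disappear in the hunks below.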
Signed-off-by: Alexey Kardashevskiy Reviewed-by: Michael Roth --- Changes: v4: * removed vqs::id as it is not really used * replaced vq_size with vq->size in virtio-serial.c --- lib/libvirtio/virtio-internal.h | 12 +-- lib/libvirtio/virtio-net.h | 2 - lib/libvirtio/virtio.h | 25 +++---- lib/libvirtio/virtio-9p.c | 37 ++++------ lib/libvirtio/virtio-blk.c | 52 ++++++------- lib/libvirtio/virtio-net.c | 27 ++++--- lib/libvirtio/virtio-scsi.c | 67 ++++++----------- lib/libvirtio/virtio-serial.c | 78 +++++++++----------- lib/libvirtio/virtio.c | 125 +++++++++++++------------------- 9 files changed, 174 insertions(+), 251 deletions(-) diff --git a/lib/libvirtio/virtio-internal.h b/lib/libvirtio/virtio-internal.h index 08662eab7014..fe59c6b9909d 100644 --- a/lib/libvirtio/virtio-internal.h +++ b/lib/libvirtio/virtio-internal.h @@ -17,32 +17,32 @@ static inline uint16_t virtio_cpu_to_modern16(struct virtio_device *dev, uint16_t val) { - return dev->is_modern ? cpu_to_le16(val) : val; + return (dev->features & VIRTIO_F_VERSION_1) ? cpu_to_le16(val) : val; } static inline uint32_t virtio_cpu_to_modern32(struct virtio_device *dev, uint32_t val) { - return dev->is_modern ? cpu_to_le32(val) : val; + return (dev->features & VIRTIO_F_VERSION_1) ? cpu_to_le32(val) : val; } static inline uint64_t virtio_cpu_to_modern64(struct virtio_device *dev, uint64_t val) { - return dev->is_modern ? cpu_to_le64(val) : val; + return (dev->features & VIRTIO_F_VERSION_1) ? cpu_to_le64(val) : val; } static inline uint16_t virtio_modern16_to_cpu(struct virtio_device *dev, uint16_t val) { - return dev->is_modern ? le16_to_cpu(val) : val; + return (dev->features & VIRTIO_F_VERSION_1) ? le16_to_cpu(val) : val; } static inline uint32_t virtio_modern32_to_cpu(struct virtio_device *dev, uint32_t val) { - return dev->is_modern ? le32_to_cpu(val) : val; + return (dev->features & VIRTIO_F_VERSION_1) ? le32_to_cpu(val) : val; } static inline uint64_t virtio_modern64_to_cpu(struct virtio_device *dev, uint64_t val) { - return dev->is_modern ? le64_to_cpu(val) : val; + return (dev->features & VIRTIO_F_VERSION_1) ? 
le64_to_cpu(val) : val; } #endif /* _LIBVIRTIO_INTERNAL_H */ diff --git a/lib/libvirtio/virtio-net.h b/lib/libvirtio/virtio-net.h index f72d435564bb..c71fbded0bf1 100644 --- a/lib/libvirtio/virtio-net.h +++ b/lib/libvirtio/virtio-net.h @@ -27,8 +27,6 @@ enum { struct virtio_net { net_driver_t driver; struct virtio_device vdev; - struct vqs vq_rx; - struct vqs vq_tx; }; /* VIRTIO_NET Feature bits */ diff --git a/lib/libvirtio/virtio.h b/lib/libvirtio/virtio.h index b65c716e88c9..7efc1e524d77 100644 --- a/lib/libvirtio/virtio.h +++ b/lib/libvirtio/virtio.h @@ -14,7 +14,6 @@ #define _LIBVIRTIO_H #include -#include /* Device status bits */ #define VIRTIO_STAT_ACKNOWLEDGE 1 @@ -78,8 +77,16 @@ struct virtio_cap { uint8_t cap_id; }; +struct vqs { + uint32_t size; + void *buf_mem; + struct vring_desc *desc; + struct vring_avail *avail; + struct vring_used *used; +}; + struct virtio_device { - uint32_t is_modern; /* Indicates whether to use virtio 1.0 */ + uint64_t features; struct virtio_cap legacy; struct virtio_cap common; struct virtio_cap notify; @@ -87,15 +94,7 @@ struct virtio_device { struct virtio_cap device; struct virtio_cap pci; uint32_t notify_off_mul; -}; - -struct vqs { - uint64_t id; /* Queue ID */ - uint32_t size; - void *buf_mem; - struct vring_desc *desc; - struct vring_avail *avail; - struct vring_used *used; + struct vqs vq[3]; }; /* Parts of the virtqueue are aligned on a 4096 byte page boundary */ @@ -106,10 +105,10 @@ extern unsigned int virtio_get_qsize(struct virtio_device *dev, int queue); extern struct vring_desc *virtio_get_vring_desc(struct virtio_device *dev, int queue); extern struct vring_avail *virtio_get_vring_avail(struct virtio_device *dev, int queue); extern struct vring_used *virtio_get_vring_used(struct virtio_device *dev, int queue); -extern void virtio_fill_desc(struct vring_desc *desc, bool is_modern, +extern void virtio_fill_desc(struct vqs *vq, int id, uint64_t features, uint64_t addr, uint32_t len, uint16_t flags, uint16_t next); -extern int virtio_queue_init_vq(struct virtio_device *dev, struct vqs *vq, unsigned int id); +extern struct vqs *virtio_queue_init_vq(struct virtio_device *dev, unsigned int id); extern void virtio_queue_term_vq(struct virtio_device *dev, struct vqs *vq, unsigned int id); extern struct virtio_device *virtio_setup_vd(void); diff --git a/lib/libvirtio/virtio-9p.c b/lib/libvirtio/virtio-9p.c index fb329b3fa637..426069fe9509 100644 --- a/lib/libvirtio/virtio-9p.c +++ b/lib/libvirtio/virtio-9p.c @@ -89,25 +89,19 @@ static int virtio_9p_transact(void *opaque, uint8_t *tx, int tx_size, uint8_t *r uint32_t *rx_size) { struct virtio_device *dev = opaque; - struct vring_desc *desc; int id, i; uint32_t vq_size; - struct vring_desc *vq_desc; - struct vring_avail *vq_avail; - struct vring_used *vq_used; volatile uint16_t *current_used_idx; uint16_t last_used_idx, avail_idx; + struct vqs *vq = &dev->vq[0]; /* Virt IO queues. */ vq_size = virtio_get_qsize(dev, 0); - vq_desc = virtio_get_vring_desc(dev, 0); - vq_avail = virtio_get_vring_avail(dev, 0); - vq_used = virtio_get_vring_used(dev, 0); - last_used_idx = vq_used->idx; - current_used_idx = &vq_used->idx; + last_used_idx = vq->used->idx; + current_used_idx = &vq->used->idx; - avail_idx = virtio_modern16_to_cpu(dev, vq_avail->idx); + avail_idx = virtio_modern16_to_cpu(dev, vq->avail->idx); /* Determine descriptor index */ id = (avail_idx * 3) % vq_size; @@ -115,19 +109,18 @@ static int virtio_9p_transact(void *opaque, uint8_t *tx, int tx_size, uint8_t *r /* TX in first queue item. 
*/ dprint_buffer("TX", tx, tx_size); - desc = &vq_desc[id]; - virtio_fill_desc(desc, dev->is_modern, (uint64_t)tx, tx_size, - VRING_DESC_F_NEXT, (id + 1) % vq_size); + virtio_fill_desc(vq, id, dev->features, (uint64_t)tx, tx_size, + VRING_DESC_F_NEXT, id + 1); /* RX in the second queue item. */ - desc = &vq_desc[(id + 1) % vq_size]; - virtio_fill_desc(desc, dev->is_modern, (uint64_t)rx, *rx_size, + virtio_fill_desc(vq, id + 1, dev->features, (uint64_t)rx, + *rx_size, VRING_DESC_F_WRITE, 0); /* Tell HV that the queue is ready */ - vq_avail->ring[avail_idx % vq_size] = virtio_cpu_to_modern16 (dev, id); + vq->avail->ring[avail_idx % vq_size] = virtio_cpu_to_modern16 (dev, id); mb(); - vq_avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); + vq->avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); virtio_queue_notify(dev, 0); /* Receive the response. */ @@ -161,7 +154,7 @@ static int virtio_9p_transact(void *opaque, uint8_t *tx, int tx_size, uint8_t *r int virtio_9p_init(struct virtio_device *dev, void *tx_buf, void *rx_buf, int buf_size) { - struct vqs vq; + struct vqs *vq; int status = VIRTIO_STAT_ACKNOWLEDGE; /* Check for double open */ @@ -182,7 +175,7 @@ int virtio_9p_init(struct virtio_device *dev, void *tx_buf, void *rx_buf, virtio_set_status(dev, status); /* Device specific setup - we do not support special features */ - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { if (virtio_negotiate_guest_features(dev, VIRTIO_F_VERSION_1)) goto dev_error; virtio_get_status(dev, &status); @@ -190,12 +183,10 @@ int virtio_9p_init(struct virtio_device *dev, void *tx_buf, void *rx_buf, virtio_set_guest_features(dev, 0); } - if (virtio_queue_init_vq(dev, &vq, 0)) + vq = virtio_queue_init_vq(dev, 0); + if (!vq) goto dev_error; - vq.avail->flags = virtio_cpu_to_modern16(dev, VRING_AVAIL_F_NO_INTERRUPT); - vq.avail->idx = 0; - /* Tell HV that setup succeeded */ status |= VIRTIO_STAT_DRIVER_OK; virtio_set_status(dev, status); diff --git a/lib/libvirtio/virtio-blk.c b/lib/libvirtio/virtio-blk.c index 9eea99d564f1..a0dadbb0d6a8 100644 --- a/lib/libvirtio/virtio-blk.c +++ b/lib/libvirtio/virtio-blk.c @@ -11,6 +11,7 @@ *****************************************************************************/ #include +#include #include #include #include @@ -28,7 +29,7 @@ int virtioblk_init(struct virtio_device *dev) { - struct vqs vq; + struct vqs *vq; int blk_size = DEFAULT_SECTOR_SIZE; uint64_t features; int status = VIRTIO_STAT_ACKNOWLEDGE; @@ -43,7 +44,7 @@ virtioblk_init(struct virtio_device *dev) status |= VIRTIO_STAT_DRIVER; virtio_set_status(dev, status); - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { /* Negotiate features and sets FEATURES_OK if successful */ if (virtio_negotiate_guest_features(dev, DRIVER_FEATURE_SUPPORT)) goto dev_error; @@ -54,12 +55,10 @@ virtioblk_init(struct virtio_device *dev) virtio_set_guest_features(dev, VIRTIO_BLK_F_BLK_SIZE); } - if (virtio_queue_init_vq(dev, &vq, 0)) + vq = virtio_queue_init_vq(dev, 0); + if (!vq) goto dev_error; - vq.avail->flags = virtio_cpu_to_modern16(dev, VRING_AVAIL_F_NO_INTERRUPT); - vq.avail->idx = 0; - /* Tell HV that setup succeeded */ status |= VIRTIO_STAT_DRIVER_OK; virtio_set_status(dev, status); @@ -121,15 +120,12 @@ int virtioblk_transfer(struct virtio_device *dev, char *buf, uint64_t blocknum, long cnt, unsigned int type) { - struct vring_desc *desc; int id; static struct virtio_blk_req blkhdr; //struct virtio_blk_config *blkconf; uint64_t capacity; - uint32_t vq_size, time; - struct vring_desc 
*vq_desc; /* Descriptor vring */ - struct vring_avail *vq_avail; /* "Available" vring */ - struct vring_used *vq_used; /* "Used" vring */ + uint32_t time; + struct vqs *vq = &dev->vq[0]; volatile uint8_t status = -1; volatile uint16_t *current_used_idx; uint16_t last_used_idx, avail_idx; @@ -155,43 +151,37 @@ virtioblk_transfer(struct virtio_device *dev, char *buf, uint64_t blocknum, return 0; } - vq_size = virtio_get_qsize(dev, 0); - vq_desc = virtio_get_vring_desc(dev, 0); - vq_avail = virtio_get_vring_avail(dev, 0); - vq_used = virtio_get_vring_used(dev, 0); + avail_idx = virtio_modern16_to_cpu(dev, vq->avail->idx); - avail_idx = virtio_modern16_to_cpu(dev, vq_avail->idx); - - last_used_idx = vq_used->idx; - current_used_idx = &vq_used->idx; + last_used_idx = vq->used->idx; + current_used_idx = &vq->used->idx; /* Set up header */ - fill_blk_hdr(&blkhdr, dev->is_modern, type | VIRTIO_BLK_T_BARRIER, + fill_blk_hdr(&blkhdr, dev->features, type | VIRTIO_BLK_T_BARRIER, 1, blocknum * blk_size / DEFAULT_SECTOR_SIZE); /* Determine descriptor index */ - id = (avail_idx * 3) % vq_size; + id = (avail_idx * 3) % vq->size; /* Set up virtqueue descriptor for header */ - desc = &vq_desc[id]; - virtio_fill_desc(desc, dev->is_modern, (uint64_t)&blkhdr, + virtio_fill_desc(vq, id, dev->features, (uint64_t)&blkhdr, sizeof(struct virtio_blk_req), - VRING_DESC_F_NEXT, (id + 1) % vq_size); + VRING_DESC_F_NEXT, id + 1); /* Set up virtqueue descriptor for data */ - desc = &vq_desc[(id + 1) % vq_size]; - virtio_fill_desc(desc, dev->is_modern, (uint64_t)buf, cnt * blk_size, + virtio_fill_desc(vq, id + 1, dev->features, (uint64_t)buf, + cnt * blk_size, VRING_DESC_F_NEXT | ((type & 1) ? 0 : VRING_DESC_F_WRITE), - (id + 2) % vq_size); + id + 2); /* Set up virtqueue descriptor for status */ - desc = &vq_desc[(id + 2) % vq_size]; - virtio_fill_desc(desc, dev->is_modern, (uint64_t)&status, 1, + virtio_fill_desc(vq, id + 2, dev->features, + (uint64_t)&status, 1, VRING_DESC_F_WRITE, 0); - vq_avail->ring[avail_idx % vq_size] = virtio_cpu_to_modern16 (dev, id); + vq->avail->ring[avail_idx % vq->size] = virtio_cpu_to_modern16 (dev, id); mb(); - vq_avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); + vq->avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); /* Tell HV that the queue is ready */ virtio_queue_notify(dev, 0); diff --git a/lib/libvirtio/virtio-net.c b/lib/libvirtio/virtio-net.c index 337eb77d5d9d..2290b2d74765 100644 --- a/lib/libvirtio/virtio-net.c +++ b/lib/libvirtio/virtio-net.c @@ -89,8 +89,8 @@ static int virtionet_init_pci(struct virtio_net *vnet, struct virtio_device *dev * second the transmit queue, and the forth is the control queue for * networking options. * We are only interested in the receive and transmit queue here. 
*/ - if (virtio_queue_init_vq(vdev, &vnet->vq_rx, VQ_RX) || - virtio_queue_init_vq(vdev, &vnet->vq_tx, VQ_TX)) { + if (!virtio_queue_init_vq(vdev, VQ_RX) || + !virtio_queue_init_vq(vdev, VQ_TX)) { virtio_set_status(vdev, VIRTIO_STAT_ACKNOWLEDGE|VIRTIO_STAT_DRIVER |VIRTIO_STAT_FAILED); return -1; @@ -113,8 +113,7 @@ static int virtionet_init(struct virtio_net *vnet) int status = VIRTIO_STAT_ACKNOWLEDGE | VIRTIO_STAT_DRIVER; struct virtio_device *vdev = &vnet->vdev; net_driver_t *driver = &vnet->driver; - struct vqs *vq_tx = &vnet->vq_tx; - struct vqs *vq_rx = &vnet->vq_rx; + struct vqs *vq_tx = &vdev->vq[VQ_TX], *vq_rx = &vdev->vq[VQ_RX]; dprintf("virtionet_init(%02x:%02x:%02x:%02x:%02x:%02x)\n", driver->mac_addr[0], driver->mac_addr[1], @@ -128,7 +127,7 @@ static int virtionet_init(struct virtio_net *vnet) virtio_set_status(vdev, status); /* Device specific setup */ - if (vdev->is_modern) { + if (vdev->features & VIRTIO_F_VERSION_1) { if (virtio_negotiate_guest_features(vdev, DRIVER_FEATURE_SUPPORT)) goto dev_error; net_hdr_size = sizeof(struct virtio_net_hdr_v1); @@ -152,11 +151,11 @@ static int virtionet_init(struct virtio_net *vnet) + i * (BUFFER_ENTRY_SIZE+net_hdr_size); uint32_t id = i*2; /* Descriptor for net_hdr: */ - virtio_fill_desc(&vq_rx->desc[id], vdev->is_modern, addr, net_hdr_size, + virtio_fill_desc(vq_rx, id, vdev->features, addr, net_hdr_size, VRING_DESC_F_NEXT | VRING_DESC_F_WRITE, id + 1); /* Descriptor for data: */ - virtio_fill_desc(&vq_rx->desc[id+1], vdev->is_modern, addr + net_hdr_size, + virtio_fill_desc(vq_rx, id + 1, vdev->features, addr + net_hdr_size, BUFFER_ENTRY_SIZE, VRING_DESC_F_WRITE, 0); vq_rx->avail->ring[i] = virtio_cpu_to_modern16(vdev, id); @@ -200,8 +199,8 @@ static int virtionet_term(struct virtio_net *vnet) { struct virtio_device *vdev = &vnet->vdev; net_driver_t *driver = &vnet->driver; - struct vqs *vq_rx = &vnet->vq_rx; - struct vqs *vq_tx = &vnet->vq_tx; + struct vqs *vq_tx = &vnet->vdev.vq[VQ_TX]; + struct vqs *vq_rx = &vnet->vdev.vq[VQ_RX]; dprintf("virtionet_term()\n"); @@ -237,7 +236,7 @@ static int virtionet_xmit(struct virtio_net *vnet, char *buf, int len) static struct virtio_net_hdr nethdr_legacy; void *nethdr = &nethdr_legacy; struct virtio_device *vdev = &vnet->vdev; - struct vqs *vq_tx = &vnet->vq_tx; + struct vqs *vq_tx = &vdev->vq[VQ_TX]; if (len > BUFFER_ENTRY_SIZE) { printf("virtionet: Packet too big!\n"); @@ -246,7 +245,7 @@ static int virtionet_xmit(struct virtio_net *vnet, char *buf, int len) dprintf("\nvirtionet_xmit(packet at %p, %d bytes)\n", buf, len); - if (vdev->is_modern) + if (vdev->features & VIRTIO_F_VERSION_1) nethdr = &nethdr_v1; memset(nethdr, 0, net_hdr_size); @@ -256,11 +255,11 @@ static int virtionet_xmit(struct virtio_net *vnet, char *buf, int len) id = (idx * 2) % vq_tx->size; /* Set up virtqueue descriptor for header */ - virtio_fill_desc(&vq_tx->desc[id], vdev->is_modern, (uint64_t)nethdr, + virtio_fill_desc(vq_tx, id, vdev->features, (uint64_t)nethdr, net_hdr_size, VRING_DESC_F_NEXT, id + 1); /* Set up virtqueue descriptor for data */ - virtio_fill_desc(&vq_tx->desc[id+1], vdev->is_modern, (uint64_t)buf, len, 0, 0); + virtio_fill_desc(vq_tx, id + 1, vdev->features, (uint64_t)buf, len, 0, 0); vq_tx->avail->ring[idx % vq_tx->size] = virtio_cpu_to_modern16(vdev, id); sync(); @@ -283,7 +282,7 @@ static int virtionet_receive(struct virtio_net *vnet, char *buf, int maxlen) uint32_t id, idx; uint16_t avail_idx; struct virtio_device *vdev = &vnet->vdev; - struct vqs *vq_rx = &vnet->vq_rx; + struct vqs *vq_rx = 
&vnet->vdev.vq[VQ_RX]; idx = virtio_modern16_to_cpu(vdev, vq_rx->used->idx); diff --git a/lib/libvirtio/virtio-scsi.c b/lib/libvirtio/virtio-scsi.c index e95352da8191..ae87e97e7330 100644 --- a/lib/libvirtio/virtio-scsi.c +++ b/lib/libvirtio/virtio-scsi.c @@ -23,63 +23,54 @@ int virtioscsi_send(struct virtio_device *dev, struct virtio_scsi_resp_cmd *resp, int is_read, void *buf, uint64_t buf_len) { - struct vring_desc *vq_desc; /* Descriptor vring */ - struct vring_avail *vq_avail; /* "Available" vring */ - struct vring_used *vq_used; /* "Used" vring */ volatile uint16_t *current_used_idx; uint16_t last_used_idx, avail_idx; int id; - uint32_t vq_size, time; + uint32_t time; + struct vqs *vq = &dev->vq[VIRTIO_SCSI_REQUEST_VQ]; - int vq = VIRTIO_SCSI_REQUEST_VQ; + avail_idx = virtio_modern16_to_cpu(dev, vq->avail->idx); - vq_size = virtio_get_qsize(dev, vq); - vq_desc = virtio_get_vring_desc(dev, vq); - vq_avail = virtio_get_vring_avail(dev, vq); - vq_used = virtio_get_vring_used(dev, vq); - - avail_idx = virtio_modern16_to_cpu(dev, vq_avail->idx); - - last_used_idx = vq_used->idx; - current_used_idx = &vq_used->idx; + last_used_idx = vq->used->idx; + current_used_idx = &vq->used->idx; /* Determine descriptor index */ - id = (avail_idx * 3) % vq_size; - virtio_fill_desc(&vq_desc[id], dev->is_modern, (uint64_t)req, sizeof(*req), VRING_DESC_F_NEXT, - (id + 1) % vq_size); + id = (avail_idx * 3) % vq->size; + virtio_fill_desc(vq, id, dev->features, (uint64_t)req, sizeof(*req), VRING_DESC_F_NEXT, + id + 1); if (buf == NULL || buf_len == 0) { /* Set up descriptor for response information */ - virtio_fill_desc(&vq_desc[(id + 1) % vq_size], dev->is_modern, + virtio_fill_desc(vq, id + 1, dev->features, (uint64_t)resp, sizeof(*resp), VRING_DESC_F_WRITE, 0); } else if (is_read) { /* Set up descriptor for response information */ - virtio_fill_desc(&vq_desc[(id + 1) % vq_size], dev->is_modern, + virtio_fill_desc(vq, id + 1, dev->features, (uint64_t)resp, sizeof(*resp), VRING_DESC_F_NEXT | VRING_DESC_F_WRITE, - (id + 2) % vq_size); + id + 2); /* Set up virtqueue descriptor for data from device */ - virtio_fill_desc(&vq_desc[(id + 2) % vq_size], dev->is_modern, + virtio_fill_desc(vq, id + 2, dev->features, (uint64_t)buf, buf_len, VRING_DESC_F_WRITE, 0); } else { /* Set up virtqueue descriptor for data to device */ - virtio_fill_desc(&vq_desc[(id + 1) % vq_size], dev->is_modern, + virtio_fill_desc(vq, id + 1, dev->features, (uint64_t)buf, buf_len, VRING_DESC_F_NEXT, - (id + 2) % vq_size); + id + 2); /* Set up descriptor for response information */ - virtio_fill_desc(&vq_desc[(id + 2) % vq_size], dev->is_modern, + virtio_fill_desc(vq, id + 2, dev->features, (uint64_t)resp, sizeof(*resp), VRING_DESC_F_WRITE, 0); } - vq_avail->ring[avail_idx % vq_size] = virtio_cpu_to_modern16(dev, id); + vq->avail->ring[avail_idx % vq->size] = virtio_cpu_to_modern16(dev, id); mb(); - vq_avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); + vq->avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); /* Tell HV that the vq is ready */ - virtio_queue_notify(dev, vq); + virtio_queue_notify(dev, VIRTIO_SCSI_REQUEST_VQ); /* Wait for host to consume the descriptor */ time = SLOF_GetTimer() + VIRTIO_TIMEOUT; @@ -99,9 +90,8 @@ int virtioscsi_send(struct virtio_device *dev, */ int virtioscsi_init(struct virtio_device *dev) { - struct vqs vq_ctrl, vq_event, vq_request; + struct vqs *vq_ctrl, *vq_event, *vq_request; int status = VIRTIO_STAT_ACKNOWLEDGE; - uint16_t flags; /* Reset device */ // XXX That will clear the virtq base. 
We need to move @@ -117,7 +107,7 @@ int virtioscsi_init(struct virtio_device *dev) virtio_set_status(dev, status); /* Device specific setup - we do not support special features right now */ - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { if (virtio_negotiate_guest_features(dev, VIRTIO_F_VERSION_1)) goto dev_error; virtio_get_status(dev, &status); @@ -125,21 +115,12 @@ int virtioscsi_init(struct virtio_device *dev) virtio_set_guest_features(dev, 0); } - if (virtio_queue_init_vq(dev, &vq_ctrl, VIRTIO_SCSI_CONTROL_VQ) || - virtio_queue_init_vq(dev, &vq_event, VIRTIO_SCSI_EVENT_VQ) || - virtio_queue_init_vq(dev, &vq_request, VIRTIO_SCSI_REQUEST_VQ)) + vq_ctrl = virtio_queue_init_vq(dev, VIRTIO_SCSI_CONTROL_VQ); + vq_event = virtio_queue_init_vq(dev, VIRTIO_SCSI_EVENT_VQ); + vq_request = virtio_queue_init_vq(dev, VIRTIO_SCSI_REQUEST_VQ); + if (!vq_ctrl || !vq_event || !vq_request) goto dev_error; - flags = virtio_cpu_to_modern16(dev, VRING_AVAIL_F_NO_INTERRUPT); - vq_ctrl.avail->flags = flags; - vq_ctrl.avail->idx = 0; - - vq_event.avail->flags = flags; - vq_event.avail->idx = 0; - - vq_request.avail->flags = flags; - vq_request.avail->idx = 0; - /* Tell HV that setup succeeded */ status |= VIRTIO_STAT_DRIVER_OK; virtio_set_status(dev, status); diff --git a/lib/libvirtio/virtio-serial.c b/lib/libvirtio/virtio-serial.c index d2eac63d0835..b8b898fc8bea 100644 --- a/lib/libvirtio/virtio-serial.c +++ b/lib/libvirtio/virtio-serial.c @@ -30,13 +30,11 @@ #define RX_Q 0 #define TX_Q 1 -static struct vqs vq_rx; -static struct vqs vq_tx; static uint16_t last_rx_idx; /* Last index in RX "used" ring */ int virtio_serial_init(struct virtio_device *dev) { - struct vring_avail *vq_avail; + struct vqs *vq_rx, *vq_tx; int status = VIRTIO_STAT_ACKNOWLEDGE; int i; @@ -50,7 +48,7 @@ int virtio_serial_init(struct virtio_device *dev) status |= VIRTIO_STAT_DRIVER; virtio_set_status(dev, status); - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { /* Negotiate features and sets FEATURES_OK if successful */ if (virtio_negotiate_guest_features(dev, DRIVER_FEATURE_SUPPORT)) goto dev_error; @@ -58,36 +56,34 @@ int virtio_serial_init(struct virtio_device *dev) virtio_get_status(dev, &status); } - if (virtio_queue_init_vq(dev, &vq_rx, RX_Q)) + vq_rx = virtio_queue_init_vq(dev, RX_Q); + if (!vq_rx) goto dev_error; /* Allocate memory for multiple receive buffers */ - vq_rx.buf_mem = SLOF_alloc_mem(RX_ELEM_SIZE * RX_NUM_ELEMS); - if (!vq_rx.buf_mem) { + vq_rx->buf_mem = SLOF_alloc_mem(RX_ELEM_SIZE * RX_NUM_ELEMS); + if (!vq_rx->buf_mem) { printf("virtio-serial: Failed to allocate buffers!\n"); goto dev_error; } /* Prepare receive buffer queue */ for (i = 0; i < RX_NUM_ELEMS; i++) { - uint64_t addr = (uint64_t)vq_rx.buf_mem + i * RX_ELEM_SIZE; + uint64_t addr = (uint64_t)vq_rx->buf_mem + i * RX_ELEM_SIZE; /* Descriptor for data: */ - virtio_fill_desc(&vq_rx.desc[i], dev->is_modern, addr, 1, VRING_DESC_F_WRITE, 0); - vq_rx.avail->ring[i] = virtio_cpu_to_modern16(dev, i); + virtio_fill_desc(vq_rx, i, dev->features, addr, 1, VRING_DESC_F_WRITE, 0); + vq_rx->avail->ring[i] = virtio_cpu_to_modern16(dev, i); } - vq_rx.avail->flags = virtio_cpu_to_modern16(dev, VRING_AVAIL_F_NO_INTERRUPT); - vq_rx.avail->idx = virtio_cpu_to_modern16(dev, RX_NUM_ELEMS); + vq_rx->avail->idx = virtio_cpu_to_modern16(dev, RX_NUM_ELEMS); sync(); - last_rx_idx = virtio_modern16_to_cpu(dev, vq_rx.used->idx); + last_rx_idx = virtio_modern16_to_cpu(dev, vq_rx->used->idx); - if (virtio_queue_init_vq(dev, &vq_tx, TX_Q)) + 
vq_tx = virtio_queue_init_vq(dev, TX_Q); + if (vq_tx) goto dev_error; - vq_avail = virtio_get_vring_avail(dev, TX_Q); - vq_avail->flags = virtio_cpu_to_modern16(dev, VRING_AVAIL_F_NO_INTERRUPT); - vq_avail->idx = 0; /* Tell HV that setup succeeded */ status |= VIRTIO_STAT_DRIVER_OK; @@ -112,35 +108,26 @@ void virtio_serial_shutdown(struct virtio_device *dev) int virtio_serial_putchar(struct virtio_device *dev, char c) { - struct vring_desc *desc; int id; - uint32_t vq_size, time; - struct vring_desc *vq_desc; - struct vring_avail *vq_avail; - struct vring_used *vq_used; + uint32_t time; volatile uint16_t *current_used_idx; uint16_t last_used_idx, avail_idx; + struct vqs *vq = &dev->vq[TX_Q]; - vq_size = virtio_get_qsize(dev, TX_Q); - vq_desc = virtio_get_vring_desc(dev, TX_Q); - vq_avail = virtio_get_vring_avail(dev, TX_Q); - vq_used = virtio_get_vring_used(dev, TX_Q); + avail_idx = virtio_modern16_to_cpu(dev, vq->avail->idx); - avail_idx = virtio_modern16_to_cpu(dev, vq_avail->idx); - - last_used_idx = vq_used->idx; - current_used_idx = &vq_used->idx; + last_used_idx = vq->used->idx; + current_used_idx = &vq->used->idx; /* Determine descriptor index */ - id = avail_idx % vq_size; + id = avail_idx % vq->size; /* Set up virtqueue descriptor for header */ - desc = &vq_desc[id]; - virtio_fill_desc(desc, dev->is_modern, (uint64_t)&c, 1, 0, 0); + virtio_fill_desc(vq, id, dev->features, (uint64_t)&c, 1, 0, 0); - vq_avail->ring[avail_idx % vq_size] = virtio_cpu_to_modern16 (dev, id); + vq->avail->ring[avail_idx % vq->size] = virtio_cpu_to_modern16 (dev, id); mb(); - vq_avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); + vq->avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); /* Tell HV that the queue is ready */ virtio_queue_notify(dev, TX_Q); @@ -159,33 +146,32 @@ int virtio_serial_putchar(struct virtio_device *dev, char c) return 1; } -static uint16_t last_rx_idx; /* Last index in RX "used" ring */ - char virtio_serial_getchar(struct virtio_device *dev) { int id, idx; char buf[RX_NUM_ELEMS] = {0}; uint16_t avail_idx; + struct vqs *vq_rx = &dev->vq[RX_Q]; - idx = virtio_modern16_to_cpu(dev, vq_rx.used->idx); + idx = virtio_modern16_to_cpu(dev, vq_rx->used->idx); if (last_rx_idx == idx) { /* Nothing received yet */ return 0; } - id = (virtio_modern32_to_cpu(dev, vq_rx.used->ring[last_rx_idx % vq_rx.size].id) + 1) - % vq_rx.size; + id = (virtio_modern32_to_cpu(dev, vq_rx->used->ring[last_rx_idx % vq_rx->size].id) + 1) + % vq_rx->size; /* Copy data to destination buffer */ - memcpy(buf, (void *)virtio_modern64_to_cpu(dev, vq_rx.desc[id - 1].addr), RX_ELEM_SIZE); + memcpy(buf, (void *)virtio_modern64_to_cpu(dev, vq_rx->desc[id - 1].addr), RX_ELEM_SIZE); /* Move indices to next entries */ last_rx_idx = last_rx_idx + 1; - avail_idx = virtio_modern16_to_cpu(dev, vq_rx.avail->idx); - vq_rx.avail->ring[avail_idx % vq_rx.size] = virtio_cpu_to_modern16(dev, id - 1); + avail_idx = virtio_modern16_to_cpu(dev, vq_rx->avail->idx); + vq_rx->avail->ring[avail_idx % vq_rx->size] = virtio_cpu_to_modern16(dev, id - 1); sync(); - vq_rx.avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); + vq_rx->avail->idx = virtio_cpu_to_modern16(dev, avail_idx + 1); sync(); /* Tell HV that RX queue entry is ready */ @@ -196,7 +182,9 @@ char virtio_serial_getchar(struct virtio_device *dev) int virtio_serial_haschar(struct virtio_device *dev) { - if (last_rx_idx == virtio_modern16_to_cpu(dev, vq_rx.used->idx)) + struct vqs *vq_rx = &dev->vq[RX_Q]; + + if (last_rx_idx == virtio_modern16_to_cpu(dev, vq_rx->used->idx)) 
return 0; else return 1; diff --git a/lib/libvirtio/virtio.c b/lib/libvirtio/virtio.c index 4b9457cc7093..3e615c65fc2c 100644 --- a/lib/libvirtio/virtio.c +++ b/lib/libvirtio/virtio.c @@ -11,7 +11,6 @@ *****************************************************************************/ #include -#include #include #include #include @@ -20,6 +19,7 @@ #include #include "virtio.h" #include "helpers.h" +#include "virtio-internal.h" /* PCI virtio header offsets */ #define VIRTIOHDR_DEVICE_FEATURES 0 @@ -98,15 +98,6 @@ static void virtio_pci_write64(void *addr, uint64_t val) ci_write_32(addr + 4, cpu_to_le32(hi)); } -static uint64_t virtio_pci_read64(void *addr) -{ - uint64_t hi, lo; - - lo = le32_to_cpu(ci_read_32(addr)); - hi = le32_to_cpu(ci_read_32(addr + 4)); - return (hi << 32) | lo; -} - static void virtio_cap_set_base_addr(struct virtio_cap *cap, uint32_t offset) { uint64_t addr; @@ -183,9 +174,9 @@ struct virtio_device *virtio_setup_vd(void) if (dev->common.cap_id && dev->notify.cap_id && dev->isr.cap_id && dev->device.cap_id) { - dev->is_modern = 1; + dev->features = VIRTIO_F_VERSION_1; } else { - dev->is_modern = 0; + dev->features = 0; dev->legacy.cap_id = 0; dev->legacy.bar = 0; virtio_cap_set_base_addr(&dev->legacy, 0); @@ -215,7 +206,7 @@ unsigned int virtio_get_qsize(struct virtio_device *dev, int queue) { unsigned int size = 0; - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { void *addr = dev->common.addr + offset_of(struct virtio_dev_common, q_select); ci_write_16(addr, cpu_to_le16(queue)); eieio(); @@ -241,24 +232,7 @@ unsigned int virtio_get_qsize(struct virtio_device *dev, int queue) */ struct vring_desc *virtio_get_vring_desc(struct virtio_device *dev, int queue) { - struct vring_desc *desc = 0; - - if (dev->is_modern) { - void *q_sel = dev->common.addr + offset_of(struct virtio_dev_common, q_select); - void *q_desc = dev->common.addr + offset_of(struct virtio_dev_common, q_desc); - - ci_write_16(q_sel, cpu_to_le16(queue)); - eieio(); - desc = (void *)(virtio_pci_read64(q_desc)); - } else { - ci_write_16(dev->legacy.addr+VIRTIOHDR_QUEUE_SELECT, - cpu_to_le16(queue)); - eieio(); - desc = (void*)(4096L * - le32_to_cpu(ci_read_32(dev->legacy.addr+VIRTIOHDR_QUEUE_ADDRESS))); - } - - return desc; + return dev->vq[queue].desc; } @@ -270,18 +244,7 @@ struct vring_desc *virtio_get_vring_desc(struct virtio_device *dev, int queue) */ struct vring_avail *virtio_get_vring_avail(struct virtio_device *dev, int queue) { - if (dev->is_modern) { - void *q_sel = dev->common.addr + offset_of(struct virtio_dev_common, q_select); - void *q_avail = dev->common.addr + offset_of(struct virtio_dev_common, q_avail); - - ci_write_16(q_sel, cpu_to_le16(queue)); - eieio(); - return (void *)(virtio_pci_read64(q_avail)); - } - else { - return (void*)((uint64_t)virtio_get_vring_desc(dev, queue) + - virtio_get_qsize(dev, queue) * sizeof(struct vring_desc)); - } + return dev->vq[queue].avail; } @@ -293,28 +256,23 @@ struct vring_avail *virtio_get_vring_avail(struct virtio_device *dev, int queue) */ struct vring_used *virtio_get_vring_used(struct virtio_device *dev, int queue) { - if (dev->is_modern) { - void *q_sel = dev->common.addr + offset_of(struct virtio_dev_common, q_select); - void *q_used = dev->common.addr + offset_of(struct virtio_dev_common, q_used); - - ci_write_16(q_sel, cpu_to_le16(queue)); - eieio(); - return (void *)(virtio_pci_read64(q_used)); - } else { - return (void*)VQ_ALIGN((uint64_t)virtio_get_vring_avail(dev, queue) - + virtio_get_qsize(dev, queue) - * sizeof(struct 
vring_avail)); - } + return dev->vq[queue].used; } /** * Fill the virtio ring descriptor depending on the legacy mode or virtio 1.0 */ -void virtio_fill_desc(struct vring_desc *desc, bool is_modern, +void virtio_fill_desc(struct vqs *vq, int id, uint64_t features, uint64_t addr, uint32_t len, uint16_t flags, uint16_t next) { - if (is_modern) { + struct vring_desc *desc; + + id %= vq->size; + desc = &vq->desc[id]; + next %= vq->size; + + if (features & VIRTIO_F_VERSION_1) { desc->addr = cpu_to_le64(addr); desc->len = cpu_to_le32(len); desc->flags = cpu_to_le16(flags); @@ -341,7 +299,7 @@ void virtio_reset_device(struct virtio_device *dev) */ void virtio_queue_notify(struct virtio_device *dev, int queue) { - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { void *q_sel = dev->common.addr + offset_of(struct virtio_dev_common, q_select); void *q_ntfy = dev->common.addr + offset_of(struct virtio_dev_common, q_notify_off); void *addr; @@ -362,7 +320,7 @@ void virtio_queue_notify(struct virtio_device *dev, int queue) */ static void virtio_set_qaddr(struct virtio_device *dev, int queue, unsigned long qaddr) { - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { uint64_t q_desc = qaddr; uint64_t q_avail; uint64_t q_used; @@ -385,20 +343,37 @@ static void virtio_set_qaddr(struct virtio_device *dev, int queue, unsigned long } } -int virtio_queue_init_vq(struct virtio_device *dev, struct vqs *vq, unsigned int id) +struct vqs *virtio_queue_init_vq(struct virtio_device *dev, unsigned int id) { + struct vqs *vq; + + if (id >= sizeof(dev->vq)/sizeof(dev->vq[0])) { + printf("Queue index is too big!\n"); + return NULL; + } + vq = &dev->vq[id]; + + memset(vq, 0, sizeof(*vq)); + vq->size = virtio_get_qsize(dev, id); vq->desc = SLOF_alloc_mem_aligned(virtio_vring_size(vq->size), 4096); if (!vq->desc) { printf("memory allocation failed!\n"); - return -1; + return NULL; } + + vq->avail = (void *) vq->desc + vq->size * sizeof(struct vring_desc); + vq->used = (void *) VQ_ALIGN((unsigned long) vq->avail + + sizeof(struct vring_avail) + + sizeof(uint16_t) * vq->size); + memset(vq->desc, 0, virtio_vring_size(vq->size)); virtio_set_qaddr(dev, id, (unsigned long)vq->desc); - vq->avail = virtio_get_vring_avail(dev, id); - vq->used = virtio_get_vring_used(dev, id); - vq->id = id; - return 0; + + vq->avail->flags = virtio_cpu_to_modern16(dev, VRING_AVAIL_F_NO_INTERRUPT); + vq->avail->idx = 0; + + return vq; } void virtio_queue_term_vq(struct virtio_device *dev, struct vqs *vq, unsigned int id) @@ -413,7 +388,7 @@ void virtio_queue_term_vq(struct virtio_device *dev, struct vqs *vq, unsigned in */ void virtio_set_status(struct virtio_device *dev, int status) { - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { ci_write_8(dev->common.addr + offset_of(struct virtio_dev_common, dev_status), status); } else { @@ -426,7 +401,7 @@ void virtio_set_status(struct virtio_device *dev, int status) */ void virtio_get_status(struct virtio_device *dev, int *status) { - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { *status = ci_read_8(dev->common.addr + offset_of(struct virtio_dev_common, dev_status)); } else { @@ -440,7 +415,7 @@ void virtio_get_status(struct virtio_device *dev, int *status) void virtio_set_guest_features(struct virtio_device *dev, uint64_t features) { - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { uint32_t f1 = (features >> 32) & 0xFFFFFFFF; uint32_t f0 = features & 0xFFFFFFFF; void *addr = dev->common.addr; @@ -466,7 +441,7 @@ 
uint64_t virtio_get_host_features(struct virtio_device *dev) { uint64_t features = 0; - if (dev->is_modern) { + if (dev->features & VIRTIO_F_VERSION_1) { uint32_t f0 = 0, f1 = 0; void *addr = dev->common.addr; @@ -514,6 +489,8 @@ int virtio_negotiate_guest_features(struct virtio_device *dev, uint64_t features if ((status & VIRTIO_STAT_FEATURES_OK) != VIRTIO_STAT_FEATURES_OK) return -1; + dev->features = features; + return 0; } @@ -526,7 +503,7 @@ uint64_t virtio_get_config(struct virtio_device *dev, int offset, int size) uint32_t hi, lo; void *confbase; - if (dev->is_modern) + if (dev->features & VIRTIO_F_VERSION_1) confbase = dev->device.addr; else confbase = dev->legacy.addr+VIRTIOHDR_DEVICE_CONFIG; @@ -537,12 +514,12 @@ uint64_t virtio_get_config(struct virtio_device *dev, int offset, int size) break; case 2: val = ci_read_16(confbase+offset); - if (dev->is_modern) + if (dev->features & VIRTIO_F_VERSION_1) val = le16_to_cpu(val); break; case 4: val = ci_read_32(confbase+offset); - if (dev->is_modern) + if (dev->features & VIRTIO_F_VERSION_1) val = le32_to_cpu(val); break; case 8: @@ -551,7 +528,7 @@ uint64_t virtio_get_config(struct virtio_device *dev, int offset, int size) */ lo = ci_read_32(confbase+offset); hi = ci_read_32(confbase+offset+4); - if (dev->is_modern) + if (dev->features & VIRTIO_F_VERSION_1) val = (uint64_t)le32_to_cpu(hi) << 32 | le32_to_cpu(lo); else val = (uint64_t)hi << 32 | lo; @@ -571,7 +548,7 @@ int __virtio_read_config(struct virtio_device *dev, void *dst, unsigned char *buf = dst; int i; - if (dev->is_modern) + if (dev->features & VIRTIO_F_VERSION_1) confbase = dev->device.addr; else confbase = dev->legacy.addr+VIRTIOHDR_DEVICE_CONFIG; From patchwork Wed Dec 4 03:21:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexey Kardashevskiy X-Patchwork-Id: 1203962 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47SPKJ6f96z9sPL for ; Wed, 4 Dec 2019 14:21:56 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKJ45yrzDqSj for ; Wed, 4 Dec 2019 14:21:56 +1100 (AEDT) X-Original-To: slof@lists.ozlabs.org Delivered-To: slof@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.ru (client-ip=107.174.27.60; helo=ozlabs.ru; envelope-from=aik@ozlabs.ru; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from ozlabs.ru (unknown [107.174.27.60]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKD3lXkzDqSJ for ; Wed, 4 Dec 2019 14:21:51 +1100 (AEDT) Received: from fstn1-p1.ozlabs.ibm.com (localhost [IPv6:::1]) by ozlabs.ru (Postfix) with ESMTP id 8DEECAE801F8; Tue, 3 Dec 2019 22:20:43 -0500 (EST) From: Alexey Kardashevskiy To: slof@lists.ozlabs.org Date: Wed, 4 Dec 2019 14:21:36 +1100 Message-Id: <20191204032138.127624-4-aik@ozlabs.ru> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191204032138.127624-1-aik@ozlabs.ru> References: <20191204032138.127624-1-aik@ozlabs.ru> Subject: [SLOF] [PATCH 
slof v4 3/5] virtio-net: Init queues after features negotiation X-BeenThere: slof@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: "Patches for https://github.com/aik/SLOF" List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Errors-To: slof-bounces+incoming=patchwork.ozlabs.org@lists.ozlabs.org Sender: "SLOF" Every virtio device negotiates virtio protocol features before setting up internal queue descriptors with one exception which is virtio-net. This moves virtio_queue_init_vq() later to have feature negotiation happened sooner. This is going to be used for IOMMU setup later. Signed-off-by: Alexey Kardashevskiy Reviewed-by: Michael Roth --- lib/libvirtio/virtio-net.c | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/lib/libvirtio/virtio-net.c b/lib/libvirtio/virtio-net.c index 2290b2d74765..ae67883020ef 100644 --- a/lib/libvirtio/virtio-net.c +++ b/lib/libvirtio/virtio-net.c @@ -84,18 +84,6 @@ static int virtionet_init_pci(struct virtio_net *vnet, struct virtio_device *dev /* Reset device */ virtio_reset_device(vdev); - /* The queue information can be retrieved via the virtio header that - * can be found in the I/O BAR. First queue is the receive queue, - * second the transmit queue, and the forth is the control queue for - * networking options. - * We are only interested in the receive and transmit queue here. */ - if (!virtio_queue_init_vq(vdev, VQ_RX) || - !virtio_queue_init_vq(vdev, VQ_TX)) { - virtio_set_status(vdev, VIRTIO_STAT_ACKNOWLEDGE|VIRTIO_STAT_DRIVER - |VIRTIO_STAT_FAILED); - return -1; - } - /* Acknowledge device. */ virtio_set_status(vdev, VIRTIO_STAT_ACKNOWLEDGE); @@ -113,7 +101,7 @@ static int virtionet_init(struct virtio_net *vnet) int status = VIRTIO_STAT_ACKNOWLEDGE | VIRTIO_STAT_DRIVER; struct virtio_device *vdev = &vnet->vdev; net_driver_t *driver = &vnet->driver; - struct vqs *vq_tx = &vdev->vq[VQ_TX], *vq_rx = &vdev->vq[VQ_RX]; + struct vqs *vq_tx, *vq_rx; dprintf("virtionet_init(%02x:%02x:%02x:%02x:%02x:%02x)\n", driver->mac_addr[0], driver->mac_addr[1], @@ -137,6 +125,19 @@ static int virtionet_init(struct virtio_net *vnet) virtio_set_guest_features(vdev, 0); } + /* The queue information can be retrieved via the virtio header that + * can be found in the I/O BAR. First queue is the receive queue, + * second the transmit queue, and the forth is the control queue for + * networking options. + * We are only interested in the receive and transmit queue here. 
*/ + vq_rx = virtio_queue_init_vq(vdev, VQ_RX); + vq_tx = virtio_queue_init_vq(vdev, VQ_TX); + if (!vq_rx || !vq_tx) { + virtio_set_status(vdev, VIRTIO_STAT_ACKNOWLEDGE|VIRTIO_STAT_DRIVER + |VIRTIO_STAT_FAILED); + return -1; + } + /* Allocate memory for one transmit an multiple receive buffers */ vq_rx->buf_mem = SLOF_alloc_mem((BUFFER_ENTRY_SIZE+net_hdr_size) * RX_QUEUE_SIZE); From patchwork Wed Dec 4 03:21:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexey Kardashevskiy X-Patchwork-Id: 1203967 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47SPKy05nNz9sPh for ; Wed, 4 Dec 2019 14:22:30 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKx4403zDqSL for ; Wed, 4 Dec 2019 14:22:29 +1100 (AEDT) X-Original-To: slof@lists.ozlabs.org Delivered-To: slof@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.ru (client-ip=107.174.27.60; helo=ozlabs.ru; envelope-from=aik@ozlabs.ru; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from ozlabs.ru (unknown [107.174.27.60]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKl2HF2zDqT7 for ; Wed, 4 Dec 2019 14:22:19 +1100 (AEDT) Received: from fstn1-p1.ozlabs.ibm.com (localhost [IPv6:::1]) by ozlabs.ru (Postfix) with ESMTP id DBBCBAE80570; Tue, 3 Dec 2019 22:20:44 -0500 (EST) From: Alexey Kardashevskiy To: slof@lists.ozlabs.org Date: Wed, 4 Dec 2019 14:21:37 +1100 Message-Id: <20191204032138.127624-5-aik@ozlabs.ru> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191204032138.127624-1-aik@ozlabs.ru> References: <20191204032138.127624-1-aik@ozlabs.ru> Subject: [SLOF] [PATCH slof v4 4/5] dma: Define default dma methods for using by client/package instances X-BeenThere: slof@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: "Patches for https://github.com/aik/SLOF" List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Errors-To: slof-bounces+incoming=patchwork.ozlabs.org@lists.ozlabs.org Sender: "SLOF" They call parent node (which is a device) methods. 
Signed-off-by: Michael Roth Signed-off-by: Alexey Kardashevskiy --- board-qemu/slof/Makefile | 1 + board-qemu/slof/OF.fs | 3 +++ slof/fs/dma-instance-function.fs | 28 ++++++++++++++++++++++++++++ 3 files changed, 32 insertions(+) create mode 100644 slof/fs/dma-instance-function.fs diff --git a/board-qemu/slof/Makefile b/board-qemu/slof/Makefile index 2263e751bde9..d7ed2d7a6f18 100644 --- a/board-qemu/slof/Makefile +++ b/board-qemu/slof/Makefile @@ -99,6 +99,7 @@ OF_FFS_FILES = \ $(SLOFCMNDIR)/fs/graphics.fs \ $(SLOFCMNDIR)/fs/generic-disk.fs \ $(SLOFCMNDIR)/fs/dma-function.fs \ + $(SLOFCMNDIR)/fs/dma-instance-function.fs \ $(SLOFCMNDIR)/fs/pci-device.fs \ $(SLOFCMNDIR)/fs/pci-bridge.fs \ $(SLOFCMNDIR)/fs/pci-properties.fs \ diff --git a/board-qemu/slof/OF.fs b/board-qemu/slof/OF.fs index a85f6c558e67..3e117ad03e09 100644 --- a/board-qemu/slof/OF.fs +++ b/board-qemu/slof/OF.fs @@ -143,6 +143,9 @@ check-for-nvramrc 8a0 cp +\ For DMA functions used by client/package instances. +#include "dma-instance-function.fs" + \ The client interface. #include "client.fs" \ ELF binary file format. diff --git a/slof/fs/dma-instance-function.fs b/slof/fs/dma-instance-function.fs new file mode 100644 index 000000000000..6b8f8a06fcba --- /dev/null +++ b/slof/fs/dma-instance-function.fs @@ -0,0 +1,28 @@ +\ ****************************************************************************/ +\ * Copyright (c) 2019 IBM Corporation +\ * All rights reserved. +\ * This program and the accompanying materials +\ * are made available under the terms of the BSD License +\ * which accompanies this distribution, and is available at +\ * http://www.opensource.org/licenses/bsd-license.php +\ * +\ * Contributors: +\ * IBM Corporation - initial implementation +\ ****************************************************************************/ + +\ DMA memory allocation functions +: dma-alloc ( size -- virt ) + s" dma-alloc" $call-parent +; + +: dma-free ( virt size -- ) + s" dma-free" $call-parent +; + +: dma-map-in ( virt size cacheable? 
-- devaddr ) + s" dma-map-in" $call-parent +; + +: dma-map-out ( virt devaddr size -- ) + s" dma-map-out" $call-parent +; From patchwork Wed Dec 4 03:21:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexey Kardashevskiy X-Patchwork-Id: 1203963 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 47SPKQ1ppNz9sPh for ; Wed, 4 Dec 2019 14:22:02 +1100 (AEDT) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKP5S6MzDqSR for ; Wed, 4 Dec 2019 14:22:01 +1100 (AEDT) X-Original-To: slof@lists.ozlabs.org Delivered-To: slof@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.ru (client-ip=107.174.27.60; helo=ozlabs.ru; envelope-from=aik@ozlabs.ru; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=ozlabs.ru Received: from ozlabs.ru (unknown [107.174.27.60]) by lists.ozlabs.org (Postfix) with ESMTP id 47SPKG6cBjzDqSJ for ; Wed, 4 Dec 2019 14:21:54 +1100 (AEDT) Received: from fstn1-p1.ozlabs.ibm.com (localhost [IPv6:::1]) by ozlabs.ru (Postfix) with ESMTP id 39FB7AE807DD; Tue, 3 Dec 2019 22:20:46 -0500 (EST) From: Alexey Kardashevskiy To: slof@lists.ozlabs.org Date: Wed, 4 Dec 2019 14:21:38 +1100 Message-Id: <20191204032138.127624-6-aik@ozlabs.ru> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191204032138.127624-1-aik@ozlabs.ru> References: <20191204032138.127624-1-aik@ozlabs.ru> Subject: [SLOF] [PATCH slof v4 5/5] virtio: Enable IOMMU X-BeenThere: slof@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: "Patches for https://github.com/aik/SLOF" List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Errors-To: slof-bounces+incoming=patchwork.ozlabs.org@lists.ozlabs.org Sender: "SLOF" When QEMU is started with iommu_platform=on, the guest driver must accept it or the device will fail. This enables IOMMU support for virtio-net, -scsi, -block, -serial, -9p devices. -serial and -9p are only compile tested though. For virtio-net we map all RX buffers once and TX when xmit() is called and unmap older pages when we are about to reuse the VQ descriptor. As all other devices are synchronous, we unmap IOMMU pages right after completion of a transaction. 
This depends on QEMU's: https://patchwork.ozlabs.org/patch/1194067/ Signed-off-by: Alexey Kardashevskiy Tested-by: Michael Roth --- Changes: v4: * ditched vqs->id in virtio_queue_init_vq v2: * added Mike's fs/dma-instance-function.fs * total rework --- lib/libvirtio/virtio.h | 5 +++ lib/libvirtio/virtio-9p.c | 4 ++ lib/libvirtio/virtio-blk.c | 4 ++ lib/libvirtio/virtio-net.c | 5 ++- lib/libvirtio/virtio-scsi.c | 5 +++ lib/libvirtio/virtio-serial.c | 12 +++-- lib/libvirtio/virtio.c | 82 ++++++++++++++++++++++++++++++++++- 7 files changed, 111 insertions(+), 6 deletions(-) diff --git a/lib/libvirtio/virtio.h b/lib/libvirtio/virtio.h index 7efc1e524d77..c4eafe40dd31 100644 --- a/lib/libvirtio/virtio.h +++ b/lib/libvirtio/virtio.h @@ -29,6 +29,7 @@ #define VIRTIO_F_RING_INDIRECT_DESC BIT(28) #define VIRTIO_F_RING_EVENT_IDX BIT(29) #define VIRTIO_F_VERSION_1 BIT(32) +#define VIRTIO_F_IOMMU_PLATFORM BIT(33) #define VIRTIO_TIMEOUT 5000 /* 5 sec timeout */ @@ -83,6 +84,8 @@ struct vqs { struct vring_desc *desc; struct vring_avail *avail; struct vring_used *used; + void **desc_gpas; /* to get gpa from desc->addr (which is ioba) */ + uint64_t bus_desc; }; struct virtio_device { @@ -108,6 +111,8 @@ extern struct vring_used *virtio_get_vring_used(struct virtio_device *dev, int q extern void virtio_fill_desc(struct vqs *vq, int id, uint64_t features, uint64_t addr, uint32_t len, uint16_t flags, uint16_t next); +extern void virtio_free_desc(struct vqs *vq, int id, uint64_t features); +void *virtio_desc_addr(struct virtio_device *vdev, int queue, int id); extern struct vqs *virtio_queue_init_vq(struct virtio_device *dev, unsigned int id); extern void virtio_queue_term_vq(struct virtio_device *dev, struct vqs *vq, unsigned int id); diff --git a/lib/libvirtio/virtio-9p.c b/lib/libvirtio/virtio-9p.c index 426069fe9509..76078612b06e 100644 --- a/lib/libvirtio/virtio-9p.c +++ b/lib/libvirtio/virtio-9p.c @@ -129,6 +129,10 @@ static int virtio_9p_transact(void *opaque, uint8_t *tx, int tx_size, uint8_t *r // do something better mb(); } + + virtio_free_desc(vq, id, dev->features); + virtio_free_desc(vq, id + 1, dev->features); + if (i == 0) { return -1; } diff --git a/lib/libvirtio/virtio-blk.c b/lib/libvirtio/virtio-blk.c index a0dadbb0d6a8..0363038e559d 100644 --- a/lib/libvirtio/virtio-blk.c +++ b/lib/libvirtio/virtio-blk.c @@ -195,6 +195,10 @@ virtioblk_transfer(struct virtio_device *dev, char *buf, uint64_t blocknum, break; } + virtio_free_desc(vq, id, dev->features); + virtio_free_desc(vq, id + 1, dev->features); + virtio_free_desc(vq, id + 2, dev->features); + if (status == 0) return cnt; diff --git a/lib/libvirtio/virtio-net.c b/lib/libvirtio/virtio-net.c index ae67883020ef..5a0d19088527 100644 --- a/lib/libvirtio/virtio-net.c +++ b/lib/libvirtio/virtio-net.c @@ -255,6 +255,9 @@ static int virtionet_xmit(struct virtio_net *vnet, char *buf, int len) idx = virtio_modern16_to_cpu(vdev, vq_tx->avail->idx); id = (idx * 2) % vq_tx->size; + virtio_free_desc(vq_tx, id, vdev->features); + virtio_free_desc(vq_tx, id + 1, vdev->features); + /* Set up virtqueue descriptor for header */ virtio_fill_desc(vq_tx, id, vdev->features, (uint64_t)nethdr, net_hdr_size, VRING_DESC_F_NEXT, id + 1); @@ -317,7 +320,7 @@ static int virtionet_receive(struct virtio_net *vnet, char *buf, int maxlen) #endif /* Copy data to destination buffer */ - memcpy(buf, (void *)virtio_modern64_to_cpu(vdev, vq_rx->desc[id].addr), len); + memcpy(buf, virtio_desc_addr(vdev, VQ_RX, id), len); /* Move indices to next entries */ last_rx_idx = 
last_rx_idx + 1; diff --git a/lib/libvirtio/virtio-scsi.c b/lib/libvirtio/virtio-scsi.c index ae87e97e7330..96285e3891af 100644 --- a/lib/libvirtio/virtio-scsi.c +++ b/lib/libvirtio/virtio-scsi.c @@ -81,6 +81,11 @@ int virtioscsi_send(struct virtio_device *dev, break; } + virtio_free_desc(vq, id, dev->features); + virtio_free_desc(vq, id + 1, dev->features); + if (!(buf == NULL || buf_len == 0)) + virtio_free_desc(vq, id + 2, dev->features); + return 0; } diff --git a/lib/libvirtio/virtio-serial.c b/lib/libvirtio/virtio-serial.c index b8b898fc8bea..8826be96c24e 100644 --- a/lib/libvirtio/virtio-serial.c +++ b/lib/libvirtio/virtio-serial.c @@ -108,7 +108,7 @@ void virtio_serial_shutdown(struct virtio_device *dev) int virtio_serial_putchar(struct virtio_device *dev, char c) { - int id; + int id, ret; uint32_t time; volatile uint16_t *current_used_idx; uint16_t last_used_idx, avail_idx; @@ -133,17 +133,21 @@ int virtio_serial_putchar(struct virtio_device *dev, char c) virtio_queue_notify(dev, TX_Q); /* Wait for host to consume the descriptor */ + ret = 1; time = SLOF_GetTimer() + VIRTIO_TIMEOUT; while (*current_used_idx == last_used_idx) { // do something better mb(); if (time < SLOF_GetTimer()) { printf("virtio_serial_putchar failed! \n"); - return 0; + ret = 0; + break; } } - return 1; + virtio_free_desc(vq, id, dev->features); + + return ret; } char virtio_serial_getchar(struct virtio_device *dev) @@ -163,7 +167,7 @@ char virtio_serial_getchar(struct virtio_device *dev) % vq_rx->size; /* Copy data to destination buffer */ - memcpy(buf, (void *)virtio_modern64_to_cpu(dev, vq_rx->desc[id - 1].addr), RX_ELEM_SIZE); + memcpy(buf, virtio_desc_addr(dev, RX_Q, id - 1), RX_ELEM_SIZE); /* Move indices to next entries */ last_rx_idx = last_rx_idx + 1; diff --git a/lib/libvirtio/virtio.c b/lib/libvirtio/virtio.c index 3e615c65fc2c..9a0c3a96371a 100644 --- a/lib/libvirtio/virtio.c +++ b/lib/libvirtio/virtio.c @@ -273,6 +273,17 @@ void virtio_fill_desc(struct vqs *vq, int id, uint64_t features, next %= vq->size; if (features & VIRTIO_F_VERSION_1) { + if (features & VIRTIO_F_IOMMU_PLATFORM) { + void *gpa = (void *) addr; + + if (!vq->desc_gpas) { + fprintf(stderr, "IOMMU setup has not been done!\n"); + return; + } + + addr = SLOF_dma_map_in(gpa, len, 0); + vq->desc_gpas[id] = gpa; + } desc->addr = cpu_to_le64(addr); desc->len = cpu_to_le32(len); desc->flags = cpu_to_le16(flags); @@ -285,6 +296,32 @@ void virtio_fill_desc(struct vqs *vq, int id, uint64_t features, } } +void virtio_free_desc(struct vqs *vq, int id, uint64_t features) +{ + struct vring_desc *desc; + + id %= vq->size; + desc = &vq->desc[id]; + + if (features & VIRTIO_F_VERSION_1) { + if (features & VIRTIO_F_IOMMU_PLATFORM) { + SLOF_dma_map_out(le64_to_cpu(desc->addr), + 0, le32_to_cpu(desc->len)); + vq->desc_gpas[id] = NULL; + } + } +} + +void *virtio_desc_addr(struct virtio_device *vdev, int queue, int id) +{ + struct vqs *vq = &vdev->vq[queue]; + + if (vq->desc_gpas) + return vq->desc_gpas[id]; + + return (void *) virtio_modern64_to_cpu(vdev, vq->desc[id].addr); +} + /** * Reset virtio device */ @@ -326,6 +363,19 @@ static void virtio_set_qaddr(struct virtio_device *dev, int queue, unsigned long uint64_t q_used; uint32_t q_size = virtio_get_qsize(dev, queue); + if (dev->features & VIRTIO_F_IOMMU_PLATFORM) { + unsigned long cb; + + cb = q_size * sizeof(struct vring_desc); + cb += sizeof(struct vring_avail) + sizeof(uint16_t) * q_size; + cb = VQ_ALIGN(cb); + cb += sizeof(struct vring_used) + sizeof(uint16_t) * q_size; + cb = VQ_ALIGN(cb); + 
q_desc = SLOF_dma_map_in((void *)q_desc, cb, 0); + + dev->vq[queue].bus_desc = q_desc; + } + virtio_pci_write64(dev->common.addr + offset_of(struct virtio_dev_common, q_desc), q_desc); q_avail = q_desc + q_size * sizeof(struct vring_desc); virtio_pci_write64(dev->common.addr + offset_of(struct virtio_dev_common, q_avail), q_avail); @@ -372,14 +422,41 @@ struct vqs *virtio_queue_init_vq(struct virtio_device *dev, unsigned int id) vq->avail->flags = virtio_cpu_to_modern16(dev, VRING_AVAIL_F_NO_INTERRUPT); vq->avail->idx = 0; + if (dev->features & VIRTIO_F_IOMMU_PLATFORM) + vq->desc_gpas = SLOF_alloc_mem_aligned( + vq->size * sizeof(vq->desc_gpas[0]), 4096); return vq; } void virtio_queue_term_vq(struct virtio_device *dev, struct vqs *vq, unsigned int id) { - if (vq->desc) + if (vq->desc_gpas) { + int i; + + for (i = 0; i < vq->size; ++i) + virtio_free_desc(vq, i, dev->features); + + memset(vq->desc_gpas, 0, vq->size * sizeof(vq->desc_gpas[0])); + SLOF_free_mem(vq->desc_gpas, + vq->size * sizeof(vq->desc_gpas[0])); + } + if (vq->desc) { + if (dev->features & VIRTIO_F_IOMMU_PLATFORM) { + unsigned long cb; + uint32_t q_size = virtio_get_qsize(dev, id); + + cb = q_size * sizeof(struct vring_desc); + cb += sizeof(struct vring_avail) + sizeof(uint16_t) * q_size; + cb = VQ_ALIGN(cb); + cb += sizeof(struct vring_used) + sizeof(uint16_t) * q_size; + cb = VQ_ALIGN(cb); + + SLOF_dma_map_out(vq->bus_desc, 0, cb); + } + SLOF_free_mem(vq->desc, virtio_vring_size(vq->size)); + } memset(vq, 0, sizeof(*vq)); } @@ -473,6 +550,9 @@ int virtio_negotiate_guest_features(struct virtio_device *dev, uint64_t features return -1; } + if (host_features & VIRTIO_F_IOMMU_PLATFORM) + features |= VIRTIO_F_IOMMU_PLATFORM; + virtio_set_guest_features(dev, features); host_features = virtio_get_host_features(dev); if ((host_features & features) != features) {