From patchwork Wed Aug 2 10:35:26 2017
X-Patchwork-Submitter: Nikunj A Dadhania
X-Patchwork-Id: 796598
From: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
To: Thomas Huth, slof@lists.ozlabs.org
Date: Wed, 02 Aug 2017 16:05:26 +0530
References: <20170801063942.29591-1-nikunj@linux.vnet.ibm.com>
Message-Id: <87a83is1e9.fsf@abhimanyu.i-did-not-set--mail-host-address--so-tickle-me>
Subject: Re: [SLOF] [PATCH] virtio-net: use common receive buffer for devices
List-Id: "Patches for https://github.com/aik/SLOF"

Thomas Huth writes:

> On 01.08.2017 08:39, Nikunj A Dadhania wrote:
>> Found that virtio-net is using around a 200K receive buffer per device; if we
>> connect more than 40 virtio-net devices, the heap (8 MB) is exhausted. Because
>> of that, allocations start failing and the VM does not boot.
>>
>> Use a common receive buffer for all the virtio-net devices. This reduces the
>> memory requirement per virtio-net device in SLOF.
>
> Not sure whether this is a really good idea... I think theoretically an
> OF client could open multiple network devices and use them in parallel -
> and then the devices would destroy their shared receive buffers mutually.
> Maybe you could change the code so that the buffers are only allocated
> when the device is opened, and released again when the device is closed
> again?

Roughly something like this; it works, but will need more testing.
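As a quick sanity check of the heap pressure described above, the per-device cost
follows from the constants in virtio-net.h used by the patch below (RX_QUEUE_SIZE
= 128, BUFFER_ENTRY_SIZE = 1514). This is a standalone back-of-the-envelope
sketch, not part of the patch; the 10-byte header is the legacy
struct virtio_net_hdr (the v1 layout is 12 bytes):

#include <stdio.h>

int main(void)
{
        const unsigned long rx_queue_size = 128;      /* RX_QUEUE_SIZE */
        const unsigned long buffer_entry_size = 1514; /* BUFFER_ENTRY_SIZE */
        const unsigned long net_hdr_size = 10;        /* legacy virtio_net_hdr */
        unsigned long per_device = (buffer_entry_size + net_hdr_size) * rx_queue_size;

        /* ~195 KB per device, ~7.8 MB for 40 devices -> close to the 8 MB SLOF heap */
        printf("per device: %lu bytes, 40 devices: %lu bytes\n",
               per_device, 40 * per_device);
        return 0;
}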
virtio-net: rework the driver to support multiple open

not-yet-Signed-off-by: Nikunj A Dadhania

diff --git a/board-qemu/slof/virtio-net.fs b/board-qemu/slof/virtio-net.fs
index 4a4ac05..ff6fcaf 100644
--- a/board-qemu/slof/virtio-net.fs
+++ b/board-qemu/slof/virtio-net.fs
@@ -64,7 +64,7 @@ virtio-setup-vd VALUE virtiodev
 : read  ( buf len -- actual )
    dup IF
-      virtio-net-read
+      virtio-net-priv virtio-net-read
    ELSE
       nip
    THEN
 ;
@@ -72,7 +72,7 @@ virtio-setup-vd VALUE virtiodev
 : write  ( buf len -- actual )
    dup IF
-      virtio-net-write
+      virtio-net-priv virtio-net-write
    ELSE
       nip
    THEN
 ;
diff --git a/lib/libvirtio/virtio-net.c b/lib/libvirtio/virtio-net.c
index 2573031..f59397c 100644
--- a/lib/libvirtio/virtio-net.c
+++ b/lib/libvirtio/virtio-net.c
@@ -24,7 +24,6 @@
 #include <helpers.h>
 #include <cache.h>
 #include <byteorder.h>
-#include "virtio.h"
 #include "virtio-net.h"
 #include "virtio-internal.h"
 
@@ -40,10 +39,6 @@
 
 #define DRIVER_FEATURE_SUPPORT  (VIRTIO_NET_F_MAC | VIRTIO_F_VERSION_1)
 
-struct virtio_device virtiodev;
-static struct vqs vq_rx;        /* Information about receive virtqueues */
-static struct vqs vq_tx;        /* Information about transmit virtqueues */
-
 /* See Virtio Spec, appendix C, "Device Operation" */
 struct virtio_net_hdr {
         uint8_t  flags;
@@ -74,33 +69,35 @@ static uint16_t last_rx_idx;   /* Last index in RX "used" ring */
  * Checks whether we're reponsible for the given device and set up
  * the virtqueue configuration.
  */
-static int virtionet_init_pci(struct virtio_device *dev)
+static int virtionet_init_pci(struct virtio_net *vnet, struct virtio_device *dev)
 {
+        struct virtio_device *vdev = &vnet->vdev;
+
         dprintf("virtionet: doing virtionet_init_pci!\n");
 
         if (!dev)
                 return -1;
 
         /* make a copy of the device structure */
-        memcpy(&virtiodev, dev, sizeof(struct virtio_device));
+        memcpy(vdev, dev, sizeof(struct virtio_device));
 
         /* Reset device */
-        virtio_reset_device(&virtiodev);
+        virtio_reset_device(vdev);
 
         /* The queue information can be retrieved via the virtio header that
          * can be found in the I/O BAR. First queue is the receive queue,
          * second the transmit queue, and the forth is the control queue for
          * networking options.
          * We are only interested in the receive and transmit queue here. */
-        if (virtio_queue_init_vq(dev, &vq_rx, VQ_RX) ||
-            virtio_queue_init_vq(dev, &vq_tx, VQ_TX)) {
-                virtio_set_status(dev, VIRTIO_STAT_ACKNOWLEDGE|VIRTIO_STAT_DRIVER
+        if (virtio_queue_init_vq(vdev, &vnet->vq_rx, VQ_RX) ||
+            virtio_queue_init_vq(vdev, &vnet->vq_tx, VQ_TX)) {
+                virtio_set_status(vdev, VIRTIO_STAT_ACKNOWLEDGE|VIRTIO_STAT_DRIVER
                                   |VIRTIO_STAT_FAILED);
                 return -1;
         }
 
         /* Acknowledge device. */
-        virtio_set_status(&virtiodev, VIRTIO_STAT_ACKNOWLEDGE);
+        virtio_set_status(vdev, VIRTIO_STAT_ACKNOWLEDGE);
 
         return 0;
 }
@@ -110,10 +107,14 @@ static int virtionet_init_pci(struct virtio_device *dev)
  * See the Virtio Spec, chapter 2.2.1 and Appendix C "Device Initialization"
  * for details.
  */
-static int virtionet_init(net_driver_t *driver)
+static int virtionet_init(struct virtio_net *vnet)
 {
         int i;
         int status = VIRTIO_STAT_ACKNOWLEDGE | VIRTIO_STAT_DRIVER;
+        struct virtio_device *vdev = &vnet->vdev;
+        net_driver_t *driver = &vnet->driver;
+        struct vqs *vq_tx = &vnet->vq_tx;
+        struct vqs *vq_rx = &vnet->vq_rx;
 
         dprintf("virtionet_init(%02x:%02x:%02x:%02x:%02x:%02x)\n",
                 driver->mac_addr[0], driver->mac_addr[1],
@@ -124,68 +125,68 @@ static int virtionet_init(net_driver_t *driver)
                 return 0;
 
         /* Tell HV that we know how to drive the device. */
-        virtio_set_status(&virtiodev, status);
+        virtio_set_status(vdev, status);
 
         /* Device specific setup */
-        if (virtiodev.is_modern) {
-                if (virtio_negotiate_guest_features(&virtiodev, DRIVER_FEATURE_SUPPORT))
+        if (vdev->is_modern) {
+                if (virtio_negotiate_guest_features(vdev, DRIVER_FEATURE_SUPPORT))
                         goto dev_error;
                 net_hdr_size = sizeof(struct virtio_net_hdr_v1);
-                virtio_get_status(&virtiodev, &status);
+                virtio_get_status(vdev, &status);
         } else {
                 net_hdr_size = sizeof(struct virtio_net_hdr);
-                virtio_set_guest_features(&virtiodev, 0);
+                virtio_set_guest_features(vdev, 0);
         }
 
         /* Allocate memory for one transmit an multiple receive buffers */
-        vq_rx.buf_mem = SLOF_alloc_mem((BUFFER_ENTRY_SIZE+net_hdr_size)
+        vq_rx->buf_mem = SLOF_alloc_mem((BUFFER_ENTRY_SIZE+net_hdr_size)
                                    * RX_QUEUE_SIZE);
-        if (!vq_rx.buf_mem) {
+        if (!vq_rx->buf_mem) {
                 printf("virtionet: Failed to allocate buffers!\n");
                 goto dev_error;
         }
 
         /* Prepare receive buffer queue */
         for (i = 0; i < RX_QUEUE_SIZE; i++) {
-                uint64_t addr = (uint64_t)vq_rx.buf_mem
+                uint64_t addr = (uint64_t)vq_rx->buf_mem
                         + i * (BUFFER_ENTRY_SIZE+net_hdr_size);
                 uint32_t id = i*2;
                 /* Descriptor for net_hdr: */
-                virtio_fill_desc(&vq_rx.desc[id], virtiodev.is_modern, addr, net_hdr_size,
+                virtio_fill_desc(&vq_rx->desc[id], vdev->is_modern, addr, net_hdr_size,
                                  VRING_DESC_F_NEXT | VRING_DESC_F_WRITE, id + 1);
                 /* Descriptor for data: */
-                virtio_fill_desc(&vq_rx.desc[id+1], virtiodev.is_modern, addr + net_hdr_size,
+                virtio_fill_desc(&vq_rx->desc[id+1], vdev->is_modern, addr + net_hdr_size,
                                  BUFFER_ENTRY_SIZE, VRING_DESC_F_WRITE, 0);
-                vq_rx.avail->ring[i] = virtio_cpu_to_modern16(&virtiodev, id);
+                vq_rx->avail->ring[i] = virtio_cpu_to_modern16(vdev, id);
         }
         sync();
-        vq_rx.avail->flags = virtio_cpu_to_modern16(&virtiodev, VRING_AVAIL_F_NO_INTERRUPT);
-        vq_rx.avail->idx = virtio_cpu_to_modern16(&virtiodev, RX_QUEUE_SIZE);
+        vq_rx->avail->flags = virtio_cpu_to_modern16(vdev, VRING_AVAIL_F_NO_INTERRUPT);
+        vq_rx->avail->idx = virtio_cpu_to_modern16(vdev, RX_QUEUE_SIZE);
 
-        last_rx_idx = virtio_modern16_to_cpu(&virtiodev, vq_rx.used->idx);
+        last_rx_idx = virtio_modern16_to_cpu(vdev, vq_rx->used->idx);
 
-        vq_tx.avail->flags = virtio_cpu_to_modern16(&virtiodev, VRING_AVAIL_F_NO_INTERRUPT);
-        vq_tx.avail->idx = 0;
+        vq_tx->avail->flags = virtio_cpu_to_modern16(vdev, VRING_AVAIL_F_NO_INTERRUPT);
+        vq_tx->avail->idx = 0;
 
         /* Tell HV that setup succeeded */
         status |= VIRTIO_STAT_DRIVER_OK;
-        virtio_set_status(&virtiodev, status);
+        virtio_set_status(vdev, status);
 
         /* Tell HV that RX queues are ready */
-        virtio_queue_notify(&virtiodev, VQ_RX);
+        virtio_queue_notify(vdev, VQ_RX);
 
         driver->running = 1;
         for(i = 0; i < (int)sizeof(driver->mac_addr); i++) {
-                driver->mac_addr[i] = virtio_get_config(&virtiodev, i, 1);
+                driver->mac_addr[i] = virtio_get_config(vdev, i, 1);
         }
         return 0;
 
 dev_error:
         status |= VIRTIO_STAT_FAILED;
-        virtio_set_status(&virtiodev, status);
+        virtio_set_status(vdev, status);
         return -1;
 }
@@ -195,21 +196,29 @@ dev_error:
  * We've got to make sure that the hosts stops all transfers since the buffers
  * in our main memory will become invalid after this module has been terminated.
  */
-static int virtionet_term(net_driver_t *driver)
+static int virtionet_term(struct virtio_net *vnet)
 {
+        struct virtio_device *vdev = &vnet->vdev;
+        net_driver_t *driver = &vnet->driver;
+        struct vqs *vq_rx = &vnet->vq_rx;
+
         dprintf("virtionet_term()\n");
 
         if (driver->running == 0)
                 return 0;
 
         /* Quiesce device */
-        virtio_set_status(&virtiodev, VIRTIO_STAT_FAILED);
+        virtio_set_status(vdev, VIRTIO_STAT_FAILED);
 
         /* Reset device */
-        virtio_reset_device(&virtiodev);
+        virtio_reset_device(vdev);
 
         driver->running = 0;
 
+        SLOF_free_mem(vq_rx->buf_mem,
+                      (BUFFER_ENTRY_SIZE+net_hdr_size) * RX_QUEUE_SIZE);
+        vq_rx->buf_mem = NULL;
+
         return 0;
 }
@@ -217,12 +226,14 @@ static int virtionet_term(net_driver_t *driver)
 /**
  * Transmit a packet
  */
-static int virtionet_xmit(char *buf, int len)
+static int virtionet_xmit(struct virtio_net *vnet, char *buf, int len)
 {
         int id, idx;
         static struct virtio_net_hdr_v1 nethdr_v1;
         static struct virtio_net_hdr nethdr_legacy;
         void *nethdr = &nethdr_legacy;
+        struct virtio_device *vdev = &vnet->vdev;
+        struct vqs *vq_tx = &vnet->vq_tx;
 
         if (len > BUFFER_ENTRY_SIZE) {
                 printf("virtionet: Packet too big!\n");
@@ -231,29 +242,29 @@ static int virtionet_xmit(char *buf, int len)
 
         dprintf("\nvirtionet_xmit(packet at %p, %d bytes)\n", buf, len);
 
-        if (virtiodev.is_modern)
+        if (vdev->is_modern)
                 nethdr = &nethdr_v1;
 
         memset(nethdr, 0, net_hdr_size);
 
         /* Determine descriptor index */
-        idx = virtio_modern16_to_cpu(&virtiodev, vq_tx.avail->idx);
-        id = (idx * 2) % vq_tx.size;
+        idx = virtio_modern16_to_cpu(vdev, vq_tx->avail->idx);
+        id = (idx * 2) % vq_tx->size;
 
         /* Set up virtqueue descriptor for header */
-        virtio_fill_desc(&vq_tx.desc[id], virtiodev.is_modern, (uint64_t)nethdr,
+        virtio_fill_desc(&vq_tx->desc[id], vdev->is_modern, (uint64_t)nethdr,
                          net_hdr_size, VRING_DESC_F_NEXT, id + 1);
 
         /* Set up virtqueue descriptor for data */
-        virtio_fill_desc(&vq_tx.desc[id+1], virtiodev.is_modern, (uint64_t)buf, len, 0, 0);
+        virtio_fill_desc(&vq_tx->desc[id+1], vdev->is_modern, (uint64_t)buf, len, 0, 0);
 
-        vq_tx.avail->ring[idx % vq_tx.size] = virtio_cpu_to_modern16(&virtiodev, id);
+        vq_tx->avail->ring[idx % vq_tx->size] = virtio_cpu_to_modern16(vdev, id);
         sync();
-        vq_tx.avail->idx = virtio_cpu_to_modern16(&virtiodev, idx + 1);
+        vq_tx->avail->idx = virtio_cpu_to_modern16(vdev, idx + 1);
         sync();
 
         /* Tell HV that TX queue is ready */
-        virtio_queue_notify(&virtiodev, VQ_TX);
+        virtio_queue_notify(vdev, VQ_TX);
 
         return len;
 }
@@ -262,25 +273,27 @@ static int virtionet_xmit(char *buf, int len)
 /**
  * Receive a packet
  */
-static int virtionet_receive(char *buf, int maxlen)
+static int virtionet_receive(struct virtio_net *vnet, char *buf, int maxlen)
 {
         uint32_t len = 0;
         uint32_t id, idx;
         uint16_t avail_idx;
+        struct virtio_device *vdev = &vnet->vdev;
+        struct vqs *vq_rx = &vnet->vq_rx;
 
-        idx = virtio_modern16_to_cpu(&virtiodev, vq_rx.used->idx);
+        idx = virtio_modern16_to_cpu(vdev, vq_rx->used->idx);
         if (last_rx_idx == idx) {
                 /* Nothing received yet */
                 return 0;
         }
 
-        id = (virtio_modern32_to_cpu(&virtiodev, vq_rx.used->ring[last_rx_idx % vq_rx.size].id) + 1)
-                % vq_rx.size;
-        len = virtio_modern32_to_cpu(&virtiodev, vq_rx.used->ring[last_rx_idx % vq_rx.size].len)
+        id = (virtio_modern32_to_cpu(vdev, vq_rx->used->ring[last_rx_idx % vq_rx->size].id) + 1)
+                % vq_rx->size;
+        len = virtio_modern32_to_cpu(vdev, vq_rx->used->ring[last_rx_idx % vq_rx->size].len)
                 - net_hdr_size;
 
-        dprintf("virtionet_receive() last_rx_idx=%i, vq_rx.used->idx=%i,"
-                " id=%i len=%i\n", last_rx_idx, vq_rx.used->idx, id, len);
+        dprintf("virtionet_receive() last_rx_idx=%i, vq_rx->used->idx=%i,"
+                " id=%i len=%i\n", last_rx_idx, vq_rx->used->idx, id, len);
 
         if (len > (uint32_t)maxlen) {
                 printf("virtio-net: Receive buffer not big enough!\n");
@@ -292,7 +305,7 @@ static int virtionet_receive(char *buf, int maxlen)
         printf("\n");
         int i;
         for (i=0; i<64; i++) {
-                printf(" %02x", *(uint8_t*)(vq_rx.desc[id].addr+i));
+                printf(" %02x", *(uint8_t*)(vq_rx->desc[id].addr+i));
                 if ((i%16)==15)
                         printf("\n");
         }
@@ -300,64 +313,65 @@ static int virtionet_receive(char *buf, int maxlen)
 #endif
 
         /* Copy data to destination buffer */
-        memcpy(buf, (void *)virtio_modern64_to_cpu(&virtiodev, vq_rx.desc[id].addr), len);
+        memcpy(buf, (void *)virtio_modern64_to_cpu(vdev, vq_rx->desc[id].addr), len);
 
         /* Move indices to next entries */
         last_rx_idx = last_rx_idx + 1;
 
-        avail_idx = virtio_modern16_to_cpu(&virtiodev, vq_rx.avail->idx);
-        vq_rx.avail->ring[avail_idx % vq_rx.size] = virtio_cpu_to_modern16(&virtiodev, id - 1);
+        avail_idx = virtio_modern16_to_cpu(vdev, vq_rx->avail->idx);
+        vq_rx->avail->ring[avail_idx % vq_rx->size] = virtio_cpu_to_modern16(vdev, id - 1);
         sync();
-        vq_rx.avail->idx = virtio_cpu_to_modern16(&virtiodev, avail_idx + 1);
+        vq_rx->avail->idx = virtio_cpu_to_modern16(vdev, avail_idx + 1);
 
         /* Tell HV that RX queue entry is ready */
-        virtio_queue_notify(&virtiodev, VQ_RX);
+        virtio_queue_notify(vdev, VQ_RX);
 
         return len;
 }
 
-net_driver_t *virtionet_open(struct virtio_device *dev)
+struct virtio_net *virtionet_open(struct virtio_device *dev)
 {
-        net_driver_t *driver;
+        struct virtio_net *vnet;
 
-        driver = SLOF_alloc_mem(sizeof(*driver));
-        if (!driver) {
+        vnet = SLOF_alloc_mem(sizeof(*vnet));
+        if (!vnet) {
                 printf("Unable to allocate virtio-net driver\n");
                 return NULL;
         }
 
-        driver->running = 0;
+        vnet->driver.running = 0;
 
-        if (virtionet_init_pci(dev))
+        if (virtionet_init_pci(vnet, dev))
                 goto FAIL;
 
-        if (virtionet_init(driver))
+        if (virtionet_init(vnet))
                 goto FAIL;
 
-        return driver;
+        return vnet;
 
-FAIL:   SLOF_free_mem(driver, sizeof(*driver));
+ FAIL:
+        SLOF_free_mem(vnet, sizeof(*vnet));
         return NULL;
 }
 
-void virtionet_close(net_driver_t *driver)
+void virtionet_close(struct virtio_net *vnet)
 {
-        if (driver) {
-                virtionet_term(driver);
-                SLOF_free_mem(driver, sizeof(*driver));
+        if (vnet) {
+                virtionet_term(vnet);
+                SLOF_free_mem(vnet, sizeof(*vnet));
         }
 }
 
-int virtionet_read(char *buf, int len)
+int virtionet_read(struct virtio_net *vnet, char *buf, int len)
 {
-        if (buf)
-                return virtionet_receive(buf, len);
+        if (vnet && buf)
+                return virtionet_receive(vnet, buf, len);
         return -1;
 }
 
-int virtionet_write(char *buf, int len)
+int virtionet_write(struct virtio_net *vnet, char *buf, int len)
 {
-        if (buf)
-                return virtionet_xmit(buf, len);
+        if (vnet && buf)
+                return virtionet_xmit(vnet, buf, len);
         return -1;
 }
diff --git a/lib/libvirtio/virtio-net.h b/lib/libvirtio/virtio-net.h
index c2d8ee3..f72d435 100644
--- a/lib/libvirtio/virtio-net.h
+++ b/lib/libvirtio/virtio-net.h
@@ -14,6 +14,7 @@
 #define VIRTIO_NET_H
 
 #include <netdriver.h>
+#include "virtio.h"
 
 #define RX_QUEUE_SIZE 128
 #define BUFFER_ENTRY_SIZE 1514
@@ -23,12 +24,19 @@ enum {
         VQ_TX = 1,      /* Transmit Queue */
 };
 
+struct virtio_net {
+        net_driver_t driver;
+        struct virtio_device vdev;
+        struct vqs vq_rx;
+        struct vqs vq_tx;
+};
+
 /* VIRTIO_NET Feature bits */
 #define VIRTIO_NET_F_MAC        (1 << 5)
 
-extern net_driver_t *virtionet_open(struct virtio_device *dev);
-extern void virtionet_close(net_driver_t *driver);
-extern int virtionet_read(char *buf, int len);
-extern int virtionet_write(char *buf, int len);
+extern struct virtio_net *virtionet_open(struct virtio_device *dev);
+extern void virtionet_close(struct virtio_net *vnet);
+extern int virtionet_read(struct virtio_net *vnet, char *buf, int len);
+extern int virtionet_write(struct virtio_net *vnet, char *buf, int len);
 
 #endif
diff --git a/lib/libvirtio/virtio.code b/lib/libvirtio/virtio.code
index b8262ad..467f382 100644
--- a/lib/libvirtio/virtio.code
+++ b/lib/libvirtio/virtio.code
@@ -137,42 +137,44 @@ MIRP
 
 /******** virtio-net ********/
 
-// : virtio-net-open ( dev -- false | [ driver true ] )
+// : virtio-net-open ( dev -- false | [ vnet true ] )
 PRIM(virtio_X2d_net_X2d_open)
 {
         void *dev = TOS.a;
-        net_driver_t *net_driver = virtionet_open(dev);
+        void *vnet = virtionet_open(dev);
 
-        if (net_driver) {
-                TOS.u = (unsigned long)net_driver; PUSH;
+        if (vnet) {
+                TOS.u = (unsigned long)vnet; PUSH;
                 TOS.n = -1;
         } else
                 TOS.n = 0;
 }
 MIRP
 
-// : virtio-net-close ( driver -- )
+// : virtio-net-close ( vnet -- )
 PRIM(virtio_X2d_net_X2d_close)
 {
-        net_driver_t *driver = TOS.a; POP;
-        virtionet_close(driver);
+        void *vnet = TOS.a; POP;
+        virtionet_close(vnet);
 }
 MIRP
 
-// : virtio-net-read ( addr len -- actual )
+// : virtio-net-read ( addr len vnet -- actual )
 PRIM(virtio_X2d_net_X2d_read)
 {
-        int len = TOS.u; POP;
-        TOS.n = virtionet_read(TOS.a, len);
+        void *vnet = TOS.a; POP;
+        int len = TOS.u; POP;
+        TOS.n = virtionet_read(vnet, TOS.a, len);
 }
 MIRP
 
-// : virtio-net-write ( addr len -- actual )
+// : virtio-net-write ( addr len vnet -- actual )
 PRIM(virtio_X2d_net_X2d_write)
 {
+        void *vnet = TOS.a; POP;
         int len = TOS.u; POP;
-        TOS.n = virtionet_write(TOS.a, len);
+        TOS.n = virtionet_write(vnet, TOS.a, len);
 }
 MIRP
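For reference, this is roughly how the reworked interface looks from a C caller's
point of view -- a sketch only, not part of the patch. dev_a and dev_b stand in
for whatever struct virtio_device pointers the caller already has from elsewhere
(e.g. the PCI scan); the point is that each open instance now carries its own
receive ring and buffer memory:

#include "virtio-net.h"

static int exercise_two_devices(struct virtio_device *dev_a,
                                struct virtio_device *dev_b)
{
        char pkt[BUFFER_ENTRY_SIZE];
        struct virtio_net *a = virtionet_open(dev_a);
        struct virtio_net *b = virtionet_open(dev_b);
        int len = 0;

        if (a && b) {
                /* Each handle owns its own vq_rx/vq_tx and buf_mem, so the
                 * two devices no longer clobber a shared receive buffer. */
                len = virtionet_read(a, pkt, sizeof(pkt));
                if (len > 0)
                        virtionet_write(b, pkt, len);
        }

        virtionet_close(a);     /* virtionet_term() frees that device's buf_mem */
        virtionet_close(b);
        return len;
}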