From patchwork Sun Feb 10 11:04:33 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: sjur.brandeland@stericsson.com
X-Patchwork-Id: 219488
X-Patchwork-Delegate: davem@davemloft.net
From: sjur.brandeland@stericsson.com
To: Rusty Russell, "David S.
Miller", Ohad Ben-Cohen
Cc: sjur@brendeland.net, netdev@vger.kernel.org,
 virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
 Dmitry Tarnyagin, Linus Walleij, Erwan Yvin, Vikram ARV,
 Sjur Brændeland, Ido Yariv
Subject: [PATCH vringh 2/2] caif_virtio: Introduce caif over virtio
Date: Sun, 10 Feb 2013 12:04:33 +0100
Message-Id: <1360494273-27889-3-git-send-email-sjur.brandeland@stericsson.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1360494273-27889-1-git-send-email-sjur.brandeland@stericsson.com>
References: <1360494273-27889-1-git-send-email-sjur.brandeland@stericsson.com>
MIME-Version: 1.0
Sender: netdev-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

From: Vikram ARV <vikram.arv@stericsson.com>

Add the Virtio shared memory driver for STE modems. caif_virtio uses
the virtio framework for data transport and is managed with the
remoteproc framework. The virtio queue is used for transmitting data
to the modem, and the new vringh implementation is used for receiving
data over the vring.

Signed-off-by: Vikram ARV <vikram.arv@stericsson.com>
Signed-off-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
To: David S. Miller
cc: Rusty Russell
cc: Ohad Ben-Cohen
cc: Ido Yariv
cc: Erwan Yvin
---
Hi Dave,

Rusty has agreed to take this patch via his tree.
Feedback and review comments are appreciated.

Thanks, Sjur

 drivers/net/caif/Kconfig        |    8 +
 drivers/net/caif/Makefile       |    3 +
 drivers/net/caif/caif_virtio.c  |  568 +++++++++++++++++++++++++++++++++++++++
 include/linux/virtio_caif.h     |   24 ++
 include/uapi/linux/virtio_ids.h |    1 +
 5 files changed, 604 insertions(+), 0 deletions(-)
 create mode 100644 drivers/net/caif/caif_virtio.c
 create mode 100644 include/linux/virtio_caif.h

diff --git a/drivers/net/caif/Kconfig b/drivers/net/caif/Kconfig
index abf4d7a..a8b67e9 100644
--- a/drivers/net/caif/Kconfig
+++ b/drivers/net/caif/Kconfig
@@ -47,3 +47,11 @@ config CAIF_HSI
 	  The caif low level driver for CAIF over HSI.
 	  Be aware that if you enable this then you also need to
 	  enable a low-level HSI driver.
+
+config CAIF_VIRTIO
+	tristate "CAIF virtio transport driver"
+	depends on CAIF
+	depends on REMOTEPROC
+	default n
+	---help---
+	  The caif driver for CAIF over Virtio.
diff --git a/drivers/net/caif/Makefile b/drivers/net/caif/Makefile
index 91dff86..d9ee26a 100644
--- a/drivers/net/caif/Makefile
+++ b/drivers/net/caif/Makefile
@@ -13,3 +13,6 @@ obj-$(CONFIG_CAIF_SHM) += caif_shm.o

 # HSI interface
 obj-$(CONFIG_CAIF_HSI) += caif_hsi.o
+
+# Virtio interface
+obj-$(CONFIG_CAIF_VIRTIO) += caif_virtio.o
diff --git a/drivers/net/caif/caif_virtio.c b/drivers/net/caif/caif_virtio.c
new file mode 100644
index 0000000..e8ea114
--- /dev/null
+++ b/drivers/net/caif/caif_virtio.c
@@ -0,0 +1,568 @@
+/*
+ * Copyright (C) ST-Ericsson AB 2012
+ * Contact: Sjur Brendeland / sjur.brandeland@stericsson.com
+ * Authors: Vicram Arv / vikram.arv@stericsson.com,
+ *	    Dmitry Tarnyagin / dmitry.tarnyagin@stericsson.com
+ *	    Sjur Brendeland / sjur.brandeland@stericsson.com
+ * License terms: GNU General Public License (GPL) version 2
+ */
+
+#include <linux/module.h>
+#include <linux/if_arp.h>
+#include <linux/interrupt.h>
+#include <linux/netdevice.h>
+#include <linux/spinlock.h>
+#include <linux/dma-mapping.h>
+#include <linux/remoteproc.h>
+#include <linux/virtio.h>
+#include <linux/vringh.h>
+#include <linux/virtio_ids.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_caif.h>
+#include <net/caif/caif_dev.h>
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Vicram Arv <vikram.arv@stericsson.com>");
+MODULE_AUTHOR("Sjur Brendeland <sjur.brandeland@stericsson.com>");
+
+static void ctx_prep_iov(struct cfv_napi_context *ctx)
+{
+	if (ctx->riov.allocated) {
+		kfree(ctx->riov.iov);
+		ctx->riov.iov = NULL;
+		ctx->riov.allocated = false;
+	}
+	ctx->riov.iov = NULL;
+	ctx->riov.i = 0;
+	ctx->riov.max = 0;
+}
+
+static void cfv_release_cb(struct virtqueue *vq_tx)
+{
+	struct cfv_info *cfv = vq_tx->vdev->priv;
+	tasklet_schedule(&cfv->tx_release_tasklet);
+}
+
+/* This is invoked whenever the remote processor has completed processing
+ * a TX msg we just sent it, and the buffer is put back in the used ring.
+ */
+static void cfv_release_used_buf(struct virtqueue *vq_tx)
+{
+	struct cfv_info *cfv = vq_tx->vdev->priv;
+	unsigned long flags;
+
+	BUG_ON(vq_tx != cfv->vq_tx);
+	WARN_ON_ONCE(irqs_disabled());
+
+	for (;;) {
+		unsigned int len;
+		struct token_info *buf_info;
+
+		/* Get used buffer from used ring to recycle used descriptors */
+		spin_lock_irqsave(&cfv->tx_lock, flags);
+		buf_info = virtqueue_get_buf(vq_tx, &len);
+
+		if (!buf_info)
+			goto out;
+
+		BUG_ON(!cfv->queued_tx);
+		if (--cfv->queued_tx <= cfv->watermark_tx) {
+			cfv->watermark_tx = 0;
+			netif_tx_wake_all_queues(cfv->ndev);
+		}
+		spin_unlock_irqrestore(&cfv->tx_lock, flags);
+
+		dma_free_coherent(vq_tx->vdev->dev.parent->parent,
+				  buf_info->size, buf_info->vaddr,
+				  buf_info->dma_handle);
+		kfree(buf_info);
+	}
+	return;
+out:
+	spin_unlock_irqrestore(&cfv->tx_lock, flags);
+}
+
+static struct sk_buff *cfv_alloc_and_copy_skb(int *err,
+					      struct cfv_info *cfv,
+					      u8 *frm, u32 frm_len)
+{
+	struct sk_buff *skb;
+	u32 cfpkt_len, pad_len;
+
+	*err = 0;
+	/* Verify the frame length against the MRU and the down-link
+	 * header and tailroom sizes
+	 */
+	if (frm_len > cfv->mru || frm_len <= cfv->rx_hr + cfv->rx_tr) {
+		netdev_err(cfv->ndev,
+			   "Invalid frmlen:%u mtu:%u hr:%d tr:%d\n",
+			   frm_len, cfv->mru, cfv->rx_hr,
+			   cfv->rx_tr);
+		*err = -EPROTO;
+		return NULL;
+	}
+
+	cfpkt_len = frm_len - (cfv->rx_hr + cfv->rx_tr);
+
+	pad_len = (unsigned long)(frm + cfv->rx_hr) & (IP_HDR_ALIGN - 1);
+
+	skb = netdev_alloc_skb(cfv->ndev, frm_len + pad_len);
+	if (!skb) {
+		*err = -ENOMEM;
+		return NULL;
+	}
+	/* Reserve space for headers.
+	 */
+	skb_reserve(skb, cfv->rx_hr + pad_len);
+
+	memcpy(skb_put(skb, cfpkt_len), frm + cfv->rx_hr, cfpkt_len);
+	return skb;
+}
+
+static int cfv_rx_poll(struct napi_struct *napi, int quota)
+{
+	struct cfv_info *cfv = container_of(napi, struct cfv_info, napi);
+	int rxcnt = 0;
+	int err = 0;
+	struct vringh_kiov wiov;
+	void *buf;
+	struct sk_buff *skb;
+	struct vringh_kiov *riov = &cfv->ctx.riov;
+
+	memset(&wiov, 0, sizeof(wiov));
+
+	do {
+		skb = NULL;
+		if (riov->i == riov->max) {
+			if (cfv->ctx.head != USHRT_MAX) {
+				vringh_complete_kern(cfv->vr_rx,
+						     cfv->ctx.head,
+						     0);
+				cfv->ctx.head = USHRT_MAX;
+			}
+
+			ctx_prep_iov(&cfv->ctx);
+			err = vringh_getdesc_kern(cfv->vr_rx, riov, &wiov,
+						  &cfv->ctx.head, GFP_ATOMIC);
+
+			if (err <= 0)
+				goto out;
+
+			if (wiov.max != 0) {
+				/* CAIF does not use write descriptors */
+				err = -EPROTO;
+				goto out;
+			}
+		}
+
+		buf = phys_to_virt((unsigned long)riov->iov[riov->i].iov_base);
+		/* TODO: Add check on valid buffer address */
+
+		skb = cfv_alloc_and_copy_skb(&err, cfv, buf,
+					     riov->iov[riov->i].iov_len);
+		if (unlikely(err))
+			goto out;
+
+		/* Push received packet up the stack.
+		 */
+		skb->protocol = htons(ETH_P_CAIF);
+		skb_reset_mac_header(skb);
+		skb->dev = cfv->ndev;
+		err = netif_receive_skb(skb);
+		if (unlikely(err)) {
+			++cfv->ndev->stats.rx_dropped;
+		} else {
+			++cfv->ndev->stats.rx_packets;
+			cfv->ndev->stats.rx_bytes += skb->len;
+		}
+
+		++riov->i;
+		++rxcnt;
+	} while (rxcnt < quota);
+
+	return rxcnt;
+
+out:
+	switch (err) {
+	case 0:
+		/* Empty ring, enable notifications and stop NAPI polling */
+		if (!vringh_notify_enable_kern(cfv->vr_rx)) {
+			napi_complete(napi);
+			ctx_prep_iov(&cfv->ctx);
+		}
+		return rxcnt;
+
+	case -ENOMEM:
+		dev_kfree_skb(skb);
+		/* Stop NAPI poll on OOM, we hope to be polled later */
+		napi_complete(napi);
+		vringh_notify_enable_kern(cfv->vr_rx);
+		break;
+
+	default:
+		/* We're doomed */
+		netdev_warn(cfv->ndev, "Bad ring, disable device\n");
+		cfv->ndev->stats.rx_dropped = riov->max - riov->i;
+		ctx_prep_iov(&cfv->ctx);
+		napi_complete(napi);
+		vringh_notify_disable_kern(cfv->vr_rx);
+		netif_carrier_off(cfv->ndev);
+		break;
+	}
+
+	return rxcnt;
+}
+
+static irqreturn_t cfv_recv(struct virtio_device *vdev, struct vringh *vr_rx)
+{
+	struct cfv_info *cfv = vdev->priv;
+
+	vringh_notify_disable_kern(cfv->vr_rx);
+	napi_schedule(&cfv->napi);
+	return IRQ_HANDLED;
+}
+
+static int cfv_netdev_open(struct net_device *netdev)
+{
+	struct cfv_info *cfv = netdev_priv(netdev);
+
+	netif_carrier_on(netdev);
+	napi_enable(&cfv->napi);
+	return 0;
+}
+
+static int cfv_netdev_close(struct net_device *netdev)
+{
+	struct cfv_info *cfv = netdev_priv(netdev);
+
+	netif_carrier_off(netdev);
+	napi_disable(&cfv->napi);
+	return 0;
+}
+
+static struct token_info *cfv_alloc_and_copy_to_dmabuf(struct cfv_info *cfv,
+						       struct sk_buff *skb,
+						       struct scatterlist *sg)
+{
+	struct caif_payload_info *info = (void *)&skb->cb;
+	struct token_info *buf_info = NULL;
+	u8 pad_len, hdr_ofs;
+
+	if (unlikely(cfv->tx_hr + skb->len + cfv->tx_tr > cfv->mtu)) {
+		netdev_warn(cfv->ndev, "Invalid packet len (%d > %d)\n",
+			    cfv->tx_hr + skb->len + cfv->tx_tr, cfv->mtu);
+		goto err;
+	}
+
+	buf_info = kmalloc(sizeof(struct token_info), GFP_ATOMIC);
+	if (unlikely(!buf_info))
+		goto err;
+
+	/* Make the IP header aligned in the buffer */
+	hdr_ofs = cfv->tx_hr + info->hdr_len;
+	pad_len = hdr_ofs & (IP_HDR_ALIGN - 1);
+	buf_info->size = cfv->tx_hr + skb->len + cfv->tx_tr + pad_len;
+
+	if (WARN_ON_ONCE(!cfv->vdev->dev.parent))
+		goto err;
+
+	/* Allocate coherent memory for the buffer */
+	buf_info->vaddr =
+		dma_alloc_coherent(cfv->vdev->dev.parent->parent,
+				   buf_info->size, &buf_info->dma_handle,
+				   GFP_ATOMIC);
+	if (unlikely(!buf_info->vaddr)) {
+		netdev_warn(cfv->ndev,
+			    "Out of DMA memory (alloc %zu bytes)\n",
+			    buf_info->size);
+		goto err;
+	}
+
+	/* Copy skb contents into the send buffer */
+	skb_copy_bits(skb, 0, buf_info->vaddr + cfv->tx_hr + pad_len, skb->len);
+	sg_init_one(sg, buf_info->vaddr + pad_len,
+		    skb->len + cfv->tx_hr + cfv->rx_hr);
+
+	return buf_info;
+err:
+	kfree(buf_info);
+	return NULL;
+}
+
+/* This is invoked whenever the host processor application has sent
+ * up-link data. Send it on the TX VQ avail ring.
+ *
+ * CAIF virtio does not use linked descriptors in the TX direction.
+ */
+static int cfv_netdev_tx(struct sk_buff *skb, struct net_device *netdev)
+{
+	struct cfv_info *cfv = netdev_priv(netdev);
+	struct token_info *buf_info;
+	struct scatterlist sg;
+	unsigned long flags;
+	int ret;
+
+	buf_info = cfv_alloc_and_copy_to_dmabuf(cfv, skb, &sg);
+
+	spin_lock_irqsave(&cfv->tx_lock, flags);
+	if (unlikely(!buf_info))
+		goto flow_off;
+
+	/* Add buffer to avail ring.
+	 * Flow control below should ensure that this always succeeds
+	 */
+	ret = virtqueue_add_buf(cfv->vq_tx, &sg, 1, 0,
+				buf_info, GFP_ATOMIC);
+
+	if (unlikely(WARN_ON(ret < 0))) {
+		kfree(buf_info);
+		goto flow_off;
+	}
+
+	/* Update netdev statistics */
+	cfv->queued_tx++;
+	cfv->ndev->stats.tx_packets++;
+	cfv->ndev->stats.tx_bytes += skb->len;
+
+	/* Tell the remote processor it has a pending message to read */
+	virtqueue_kick(cfv->vq_tx);
+
+	/* The flow-off check takes the number of CPUs into account to make
+	 * sure the virtqueue cannot be overfilled under any possible SMP
+	 * condition.
+	 *
+	 * Flow-on is triggered when sufficient buffers are freed
+	 */
+	if (ret <= num_present_cpus()) {
+flow_off:
+		cfv->watermark_tx = cfv->queued_tx >> 1;
+		netif_tx_stop_all_queues(netdev);
+	}
+
+	spin_unlock_irqrestore(&cfv->tx_lock, flags);
+
+	dev_kfree_skb(skb);
+	tasklet_schedule(&cfv->tx_release_tasklet);
+	return NETDEV_TX_OK;
+}
+
+static void cfv_tx_release_tasklet(unsigned long drv)
+{
+	struct cfv_info *cfv = (struct cfv_info *)drv;
+	cfv_release_used_buf(cfv->vq_tx);
+}
+
+static const struct net_device_ops cfv_netdev_ops = {
+	.ndo_open = cfv_netdev_open,
+	.ndo_stop = cfv_netdev_close,
+	.ndo_start_xmit = cfv_netdev_tx,
+};
+
+static void cfv_netdev_setup(struct net_device *netdev)
+{
+	netdev->netdev_ops = &cfv_netdev_ops;
+	netdev->type = ARPHRD_CAIF;
+	netdev->tx_queue_len = 100;
+	netdev->flags = IFF_POINTOPOINT | IFF_NOARP;
+	netdev->mtu = CFV_DEF_MTU_SIZE;
+	netdev->destructor = free_netdev;
+}
+
+static int cfv_probe(struct virtio_device *vdev)
+{
+	vq_callback_t *vq_cbs = cfv_release_cb;
+	const char *names = "output";
+	const char *cfv_netdev_name = "cfvrt";
+	struct net_device *netdev;
+	struct virtqueue *vqs;
+	struct cfv_info *cfv;
+	int err = 0;
+
+	netdev = alloc_netdev(sizeof(struct cfv_info), cfv_netdev_name,
+			      cfv_netdev_setup);
+	if (!netdev)
+		return -ENOMEM;
+
+	cfv = netdev_priv(netdev);
+	cfv->vdev = vdev;
+	cfv->ndev = netdev;
+
+	spin_lock_init(&cfv->tx_lock);
+
+	cfv->vr_rx = rproc_virtio_new_vringh(vdev, RX_RING_INDEX, cfv_recv);
+	if (!cfv->vr_rx)
+		goto free_cfv;
+
+	/* Get the TX (up-link) virtqueue */
+	err = vdev->config->find_vqs(vdev, 1, &vqs, &vq_cbs, &names);
+	if (err)
+		goto free_cfv;
+
+	cfv->vq_tx = vqs;
+
+#define GET_VIRTIO_CONFIG_OPS(_v, _var, _f) \
+	((_v)->config->get(_v, offsetof(struct virtio_caif_transf_config, _f), \
+			   &_var, \
+			   FIELD_SIZEOF(struct virtio_caif_transf_config, _f)))
+
+	if (vdev->config->get) {
+		GET_VIRTIO_CONFIG_OPS(vdev, cfv->tx_hr, headroom);
+		GET_VIRTIO_CONFIG_OPS(vdev, cfv->rx_hr, headroom);
+		GET_VIRTIO_CONFIG_OPS(vdev, cfv->tx_tr, tailroom);
+		GET_VIRTIO_CONFIG_OPS(vdev, cfv->rx_tr, tailroom);
+		GET_VIRTIO_CONFIG_OPS(vdev, cfv->mtu, mtu);
+		GET_VIRTIO_CONFIG_OPS(vdev, cfv->mru, mtu);
+	} else {
+		cfv->tx_hr = CFV_DEF_HEADROOM;
+		cfv->rx_hr = CFV_DEF_HEADROOM;
+		cfv->tx_tr = CFV_DEF_TAILROOM;
+		cfv->rx_tr = CFV_DEF_TAILROOM;
+		cfv->mtu = CFV_DEF_MTU_SIZE;
+		cfv->mru = CFV_DEF_MTU_SIZE;
+	}
+
+	netdev->needed_headroom = cfv->tx_hr;
+	netdev->needed_tailroom = cfv->tx_tr;
+
+	/* Subtract needed tailroom from MTU to ensure enough room */
+	netdev->mtu = cfv->mtu - cfv->tx_tr;
+
+	vdev->priv = cfv;
+	memset(&cfv->ctx, 0, sizeof(cfv->ctx));
+	cfv->ctx.head = USHRT_MAX;
+
+	netif_napi_add(netdev, &cfv->napi, cfv_rx_poll, CFV_DEFAULT_QUOTA);
+	tasklet_init(&cfv->tx_release_tasklet,
+		     cfv_tx_release_tasklet,
+		     (unsigned long)cfv);
+
+	netif_carrier_off(netdev);
+
+	/* Register the netdev */
+	err = register_netdev(netdev);
+	if (err) {
+		dev_err(&vdev->dev, "Unable to register netdev (%d)\n", err);
+		goto vqs_del;
+	}
+
+	/* Tell the remote processor it can start sending messages */
+	rproc_virtio_kick_vringh(vdev, RX_RING_INDEX);
+
+	return 0;
+
+vqs_del:
+	vdev->config->del_vqs(cfv->vdev);
+free_cfv:
+	free_netdev(netdev);
+	return err;
+}
+
+static void cfv_remove(struct virtio_device *vdev)
+{
+	struct cfv_info *cfv = vdev->priv;
+
+	vdev->config->reset(vdev);
+	rproc_virtio_del_vringh(vdev, RX_RING_INDEX);
+	cfv->vr_rx = NULL;
+	vdev->config->del_vqs(cfv->vdev);
+	unregister_netdev(cfv->ndev);
+}
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_CAIF, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static unsigned int features[] = {
+};
+
+static struct virtio_driver caif_virtio_driver = {
+	.feature_table = features,
+	.feature_table_size = ARRAY_SIZE(features),
+	.driver.name = KBUILD_MODNAME,
+	.driver.owner = THIS_MODULE,
+	.id_table = id_table,
+	.probe = cfv_probe,
+	.remove = cfv_remove,
+};
+
+module_driver(caif_virtio_driver, register_virtio_driver,
+	      unregister_virtio_driver);
+MODULE_DEVICE_TABLE(virtio, id_table);
diff --git a/include/linux/virtio_caif.h b/include/linux/virtio_caif.h
new file mode 100644
index 0000000..5d2d312
--- /dev/null
+++ b/include/linux/virtio_caif.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) ST-Ericsson AB 2012
+ * Author: Sjur Brændeland
+ *
+ * This header is BSD licensed so anyone can use the definitions
+ * to implement compatible remote processors
+ */
+
+#ifndef VIRTIO_CAIF_H
+#define VIRTIO_CAIF_H
+
+#include <linux/types.h>
+struct virtio_caif_transf_config {
+	u16 headroom;
+	u16 tailroom;
+	u32 mtu;
+	u8 reserved[4];
+};
+
+struct virtio_caif_config {
+	struct virtio_caif_transf_config uplink, downlink;
+	u8 reserved[8];
+};
+#endif
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index a7630d0..284fc3a 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -38,5 +38,6 @@
 #define VIRTIO_ID_SCSI 8 /* virtio scsi */
 #define VIRTIO_ID_9P 9 /* 9p virtio console */
 #define VIRTIO_ID_RPROC_SERIAL 11 /* virtio remoteproc serial link */
+#define VIRTIO_ID_CAIF 12 /* Virtio caif */

 #endif /* _LINUX_VIRTIO_IDS_H */