{"id":814982,"url":"http://patchwork.ozlabs.org/api/patches/814982/?format=json","web_url":"http://patchwork.ozlabs.org/project/netdev/patch/20170918153049.44185-14-mika.westerberg@linux.intel.com/","project":{"id":7,"url":"http://patchwork.ozlabs.org/api/projects/7/?format=json","name":"Linux network development","link_name":"netdev","list_id":"netdev.vger.kernel.org","list_email":"netdev@vger.kernel.org","web_url":null,"scm_url":null,"webscm_url":null,"list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<20170918153049.44185-14-mika.westerberg@linux.intel.com>","list_archive_url":null,"date":"2017-09-18T15:30:46","name":"[13/16] thunderbolt: Add polling mode for rings","commit_ref":null,"pull_url":null,"state":"not-applicable","archived":true,"hash":"c1c55d5309b7082896f991ba3f42660b8f74cdac","submitter":{"id":14534,"url":"http://patchwork.ozlabs.org/api/people/14534/?format=json","name":"Mika Westerberg","email":"mika.westerberg@linux.intel.com"},"delegate":{"id":34,"url":"http://patchwork.ozlabs.org/api/users/34/?format=json","username":"davem","first_name":"David","last_name":"Miller","email":"davem@davemloft.net"},"mbox":"http://patchwork.ozlabs.org/project/netdev/patch/20170918153049.44185-14-mika.westerberg@linux.intel.com/mbox/","series":[{"id":3664,"url":"http://patchwork.ozlabs.org/api/series/3664/?format=json","web_url":"http://patchwork.ozlabs.org/project/netdev/list/?series=3664","date":"2017-09-18T15:30:47","name":"Thunderbolt networking","version":1,"mbox":"http://patchwork.ozlabs.org/series/3664/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/814982/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/814982/checks/","tags":{},"related":[],"headers":{"Return-Path":"<netdev-owner@vger.kernel.org>","X-Original-To":"patchwork-incoming@ozlabs.org","Delivered-To":"patchwork-incoming@ozlabs.org","Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) 
smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=netdev-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xwqnF2YVBz9s7G\n\tfor <patchwork-incoming@ozlabs.org>;\n\tTue, 19 Sep 2017 01:34:41 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1756052AbdIRPd4 (ORCPT <rfc822;patchwork-incoming@ozlabs.org>);\n\tMon, 18 Sep 2017 11:33:56 -0400","from mga03.intel.com ([134.134.136.65]:52721 \"EHLO mga03.intel.com\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1755418AbdIRPbC (ORCPT <rfc822;netdev@vger.kernel.org>);\n\tMon, 18 Sep 2017 11:31:02 -0400","from fmsmga001.fm.intel.com ([10.253.24.23])\n\tby orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t18 Sep 2017 08:31:01 -0700","from black.fi.intel.com ([10.237.72.28])\n\tby fmsmga001.fm.intel.com with ESMTP; 18 Sep 2017 08:30:58 -0700","by black.fi.intel.com (Postfix, from userid 1001)\n\tid F0D1E6EA; Mon, 18 Sep 2017 18:30:49 +0300 (EEST)"],"X-ExtLoop1":"1","X-IronPort-AV":"E=Sophos;i=\"5.42,413,1500966000\"; d=\"scan'208\";a=\"1196300395\"","From":"Mika Westerberg <mika.westerberg@linux.intel.com>","To":"Greg Kroah-Hartman <gregkh@linuxfoundation.org>,\n\t\"David S . 
Miller\" <davem@davemloft.net>","Cc":"Andreas Noever <andreas.noever@gmail.com>,\n\tMichael Jamet <michael.jamet@intel.com>,\n\tYehezkel Bernat <yehezkel.bernat@intel.com>,\n\tAmir Levy <amir.jer.levy@intel.com>,\n\tMario.Limonciello@dell.com, Lukas Wunner <lukas@wunner.de>,\n\tAndy Shevchenko <andriy.shevchenko@linux.intel.com>,\n\tMika Westerberg <mika.westerberg@linux.intel.com>,\n\tlinux-kernel@vger.kernel.org, netdev@vger.kernel.org","Subject":"[PATCH 13/16] thunderbolt: Add polling mode for rings","Date":"Mon, 18 Sep 2017 18:30:46 +0300","Message-Id":"<20170918153049.44185-14-mika.westerberg@linux.intel.com>","X-Mailer":"git-send-email 2.14.1","In-Reply-To":"<20170918153049.44185-1-mika.westerberg@linux.intel.com>","References":"<20170918153049.44185-1-mika.westerberg@linux.intel.com>","Sender":"netdev-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<netdev.vger.kernel.org>","X-Mailing-List":"netdev@vger.kernel.org"},"content":"In order to support things like networking over Thunderbolt cable, there\nneeds to be a way to switch the ring to a mode where it can be polled\nwith the interrupt masked. We implement such mode so that the caller can\nallocate a ring by passing pointer to a function that is then called\nwhen an interrupt is triggered. 
Completed frames can be fetched using\ntb_ring_poll() and the interrupt can be re-enabled when the caller is\nfinished polling by calling tb_ring_poll_complete().\n\nSigned-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>\nReviewed-by: Michael Jamet <michael.jamet@intel.com>\nReviewed-by: Yehezkel Bernat <yehezkel.bernat@intel.com>\n---\n drivers/thunderbolt/ctl.c   |   2 +-\n drivers/thunderbolt/nhi.c   | 126 ++++++++++++++++++++++++++++++++++++++++----\n include/linux/thunderbolt.h |  23 +++++---\n 3 files changed, 134 insertions(+), 17 deletions(-)","diff":"diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c\nindex dd10789e1dbb..d079dbba2c03 100644\n--- a/drivers/thunderbolt/ctl.c\n+++ b/drivers/thunderbolt/ctl.c\n@@ -619,7 +619,7 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)\n \t\tgoto err;\n \n \tctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0xffff,\n-\t\t\t\t0xffff);\n+\t\t\t\t0xffff, NULL, NULL);\n \tif (!ctl->rx)\n \t\tgoto err;\n \ndiff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c\nindex cf1397afa72f..5bc3f77cc1f3 100644\n--- a/drivers/thunderbolt/nhi.c\n+++ b/drivers/thunderbolt/nhi.c\n@@ -252,7 +252,8 @@ static void ring_work(struct work_struct *work)\n \t\t * Do not hold on to it.\n \t\t */\n \t\tlist_del_init(&frame->list);\n-\t\tframe->callback(ring, frame, canceled);\n+\t\tif (frame->callback)\n+\t\t\tframe->callback(ring, frame, canceled);\n \t}\n }\n \n@@ -273,11 +274,106 @@ int __tb_ring_enqueue(struct tb_ring *ring, struct ring_frame *frame)\n }\n EXPORT_SYMBOL_GPL(__tb_ring_enqueue);\n \n+/**\n+ * tb_ring_poll() - Poll one completed frame from the ring\n+ * @ring: Ring to poll\n+ *\n+ * This function can be called after the @start_poll callback of the\n+ * @ring has been called. It will read one completed frame from the ring\n+ * and return it to the caller. 
Returns %NULL if there are no more completed\n+ * frames.\n+ */\n+struct ring_frame *tb_ring_poll(struct tb_ring *ring)\n+{\n+\tstruct ring_frame *frame = NULL;\n+\tunsigned long flags;\n+\n+\tspin_lock_irqsave(&ring->lock, flags);\n+\tif (!ring->running)\n+\t\tgoto unlock;\n+\tif (ring_empty(ring))\n+\t\tgoto unlock;\n+\n+\tif (ring->descriptors[ring->tail].flags & RING_DESC_COMPLETED) {\n+\t\tframe = list_first_entry(&ring->in_flight, typeof(*frame),\n+\t\t\t\t\t list);\n+\t\tlist_del_init(&frame->list);\n+\n+\t\tif (!ring->is_tx) {\n+\t\t\tframe->size = ring->descriptors[ring->tail].length;\n+\t\t\tframe->eof = ring->descriptors[ring->tail].eof;\n+\t\t\tframe->sof = ring->descriptors[ring->tail].sof;\n+\t\t\tframe->flags = ring->descriptors[ring->tail].flags;\n+\t\t}\n+\n+\t\tring->tail = (ring->tail + 1) % ring->size;\n+\t}\n+\n+unlock:\n+\tspin_unlock_irqrestore(&ring->lock, flags);\n+\treturn frame;\n+}\n+EXPORT_SYMBOL_GPL(tb_ring_poll);\n+\n+static void __ring_interrupt_mask(struct tb_ring *ring, bool mask)\n+{\n+\tint idx = ring_interrupt_index(ring);\n+\tint reg = REG_RING_INTERRUPT_BASE + idx / 32 * 4;\n+\tint bit = idx % 32;\n+\tu32 val;\n+\n+\tval = ioread32(ring->nhi->iobase + reg);\n+\tif (mask)\n+\t\tval &= ~BIT(bit);\n+\telse\n+\t\tval |= BIT(bit);\n+\tiowrite32(val, ring->nhi->iobase + reg);\n+}\n+\n+/* Both @nhi->lock and @ring->lock should be held */\n+static void __ring_interrupt(struct tb_ring *ring)\n+{\n+\tif (!ring->running)\n+\t\treturn;\n+\n+\tif (ring->start_poll) {\n+\t\t__ring_interrupt_mask(ring, true);\n+\t\tring->start_poll(ring->poll_data);\n+\t} else {\n+\t\tschedule_work(&ring->work);\n+\t}\n+}\n+\n+/**\n+ * tb_ring_poll_complete() - Re-start interrupt for the ring\n+ * @ring: Ring to re-start the interrupt\n+ *\n+ * This will re-start (unmask) the ring interrupt once the user is done\n+ * with polling.\n+ */\n+void tb_ring_poll_complete(struct tb_ring *ring)\n+{\n+\tunsigned long flags;\n+\n+\tspin_lock_irqsave(&ring->nhi->lock, 
flags);\n+\tspin_lock(&ring->lock);\n+\tif (ring->start_poll)\n+\t\t__ring_interrupt_mask(ring, false);\n+\tspin_unlock(&ring->lock);\n+\tspin_unlock_irqrestore(&ring->nhi->lock, flags);\n+}\n+EXPORT_SYMBOL_GPL(tb_ring_poll_complete);\n+\n static irqreturn_t ring_msix(int irq, void *data)\n {\n \tstruct tb_ring *ring = data;\n \n-\tschedule_work(&ring->work);\n+\tspin_lock(&ring->nhi->lock);\n+\tspin_lock(&ring->lock);\n+\t__ring_interrupt(ring);\n+\tspin_unlock(&ring->lock);\n+\tspin_unlock(&ring->nhi->lock);\n+\n \treturn IRQ_HANDLED;\n }\n \n@@ -317,7 +413,9 @@ static void ring_release_msix(struct tb_ring *ring)\n \n static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,\n \t\t\t\t     bool transmit, unsigned int flags,\n-\t\t\t\t     u16 sof_mask, u16 eof_mask)\n+\t\t\t\t     u16 sof_mask, u16 eof_mask,\n+\t\t\t\t     void (*start_poll)(void *),\n+\t\t\t\t     void *poll_data)\n {\n \tstruct tb_ring *ring = NULL;\n \tdev_info(&nhi->pdev->dev, \"allocating %s ring %d of size %d\\n\",\n@@ -346,6 +444,8 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,\n \tring->head = 0;\n \tring->tail = 0;\n \tring->running = false;\n+\tring->start_poll = start_poll;\n+\tring->poll_data = poll_data;\n \n \tring->descriptors = dma_alloc_coherent(&ring->nhi->pdev->dev,\n \t\t\tsize * sizeof(*ring->descriptors),\n@@ -399,7 +499,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,\n struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,\n \t\t\t\t unsigned int flags)\n {\n-\treturn tb_ring_alloc(nhi, hop, size, true, flags, 0, 0);\n+\treturn tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, NULL, NULL);\n }\n EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);\n \n@@ -411,11 +511,17 @@ EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);\n  * @flags: Flags for the ring\n  * @sof_mask: Mask of PDF values that start a frame\n  * @eof_mask: Mask of PDF values that end a frame\n+ * @start_poll: If not %NULL the ring will 
call this function when an\n+ *\t\tinterrupt is triggered and masked, instead of invoking\n+ *\t\tthe callback of each Rx frame.\n+ * @poll_data: Optional data passed to @start_poll\n  */\n struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,\n-\t\t\t\t unsigned int flags, u16 sof_mask, u16 eof_mask)\n+\t\t\t\t unsigned int flags, u16 sof_mask, u16 eof_mask,\n+\t\t\t\t void (*start_poll)(void *), void *poll_data)\n {\n-\treturn tb_ring_alloc(nhi, hop, size, false, flags, sof_mask, eof_mask);\n+\treturn tb_ring_alloc(nhi, hop, size, false, flags, sof_mask, eof_mask,\n+\t\t\t     start_poll, poll_data);\n }\n EXPORT_SYMBOL_GPL(tb_ring_alloc_rx);\n \n@@ -556,6 +662,7 @@ void tb_ring_free(struct tb_ring *ring)\n \t\tdev_WARN(&ring->nhi->pdev->dev, \"%s %d still running\\n\",\n \t\t\t RING_TYPE(ring), ring->hop);\n \t}\n+\tspin_unlock_irq(&ring->nhi->lock);\n \n \tring_release_msix(ring);\n \n@@ -572,7 +679,6 @@ void tb_ring_free(struct tb_ring *ring)\n \t\t RING_TYPE(ring),\n \t\t ring->hop);\n \n-\tspin_unlock_irq(&ring->nhi->lock);\n \t/**\n \t * ring->work can no longer be scheduled (it is scheduled only\n \t * by nhi_interrupt_work, ring_stop and ring_msix). 
Wait for it\n@@ -682,8 +788,10 @@ static void nhi_interrupt_work(struct work_struct *work)\n \t\t\t\t hop);\n \t\t\tcontinue;\n \t\t}\n-\t\t/* we do not check ring->running, this is done in ring->work */\n-\t\tschedule_work(&ring->work);\n+\n+\t\tspin_lock(&ring->lock);\n+\t\t__ring_interrupt(ring);\n+\t\tspin_unlock(&ring->lock);\n \t}\n \tspin_unlock_irq(&nhi->lock);\n }\ndiff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h\nindex 62adb97d77f1..9a63210d5084 100644\n--- a/include/linux/thunderbolt.h\n+++ b/include/linux/thunderbolt.h\n@@ -446,6 +446,9 @@ struct tb_nhi {\n  * @flags: Ring specific flags\n  * @sof_mask: Bit mask used to detect start of frame PDF\n  * @eof_mask: Bit mask used to detect end of frame PDF\n+ * @start_poll: Called when ring interrupt is triggered to start\n+ *\t\tpolling. Passing %NULL keeps the ring in interrupt mode.\n+ * @poll_data: Data passed to @start_poll\n  */\n struct tb_ring {\n \tspinlock_t lock;\n@@ -466,6 +469,8 @@ struct tb_ring {\n \tunsigned int flags;\n \tu16 sof_mask;\n \tu16 eof_mask;\n+\tvoid (*start_poll)(void *data);\n+\tvoid *poll_data;\n };\n \n /* Leave ring interrupt enabled on suspend */\n@@ -499,7 +504,7 @@ enum ring_desc_flags {\n /**\n  * struct ring_frame - For use with ring_rx/ring_tx\n  * @buffer_phy: DMA mapped address of the frame\n- * @callback: Callback called when the frame is finished\n+ * @callback: Callback called when the frame is finished (optional)\n  * @list: Frame is linked to a queue using this\n  * @size: Size of the frame in bytes (%0 means %4096)\n  * @flags: Flags for the frame (see &enum ring_desc_flags)\n@@ -522,8 +527,8 @@ struct ring_frame {\n struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,\n \t\t\t\t unsigned int flags);\n struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,\n-\t\t\t\t unsigned int flags, u16 sof_mask,\n-\t\t\t\t u16 eof_mask);\n+\t\t\t\t unsigned int flags, u16 sof_mask, u16 eof_mask,\n+\t\t\t\t void 
(*start_poll)(void *), void *poll_data);\n void tb_ring_start(struct tb_ring *ring);\n void tb_ring_stop(struct tb_ring *ring);\n void tb_ring_free(struct tb_ring *ring);\n@@ -535,8 +540,8 @@ int __tb_ring_enqueue(struct tb_ring *ring, struct ring_frame *frame);\n  * @ring: Ring to enqueue the frame\n  * @frame: Frame to enqueue\n  *\n- * @frame->buffer, @frame->buffer_phy and @frame->callback have to be set. The\n- * buffer must contain at least %TB_FRAME_SIZE bytes.\n+ * @frame->buffer, @frame->buffer_phy have to be set. The buffer must\n+ * contain at least %TB_FRAME_SIZE bytes.\n  *\n  * @frame->callback will be invoked with @frame->size, @frame->flags,\n  * @frame->eof, @frame->sof set once the frame has been received.\n@@ -557,8 +562,8 @@ static inline int tb_ring_rx(struct tb_ring *ring, struct ring_frame *frame)\n  * @ring: Ring the enqueue the frame\n  * @frame: Frame to enqueue\n  *\n- * @frame->buffer, @frame->buffer_phy, @frame->callback, @frame->size,\n- * @frame->eof and @frame->sof have to be set.\n+ * @frame->buffer, @frame->buffer_phy, @frame->size, @frame->eof and\n+ * @frame->sof have to be set.\n  *\n  * @frame->callback will be invoked with once the frame has been transmitted.\n  *\n@@ -573,4 +578,8 @@ static inline int tb_ring_tx(struct tb_ring *ring, struct ring_frame *frame)\n \treturn __tb_ring_enqueue(ring, frame);\n }\n \n+/* Used only when the ring is in polling mode */\n+struct ring_frame *tb_ring_poll(struct tb_ring *ring);\n+void tb_ring_poll_complete(struct tb_ring *ring);\n+\n #endif /* THUNDERBOLT_H_ */\n","prefixes":["13/16"]}