From patchwork Mon Jul 2 11:40:28 2018
X-Patchwork-Id: 937846
From: Mikko Perttunen <mperttunen@nvidia.com>
Subject: [PATCH v3 3/8] mailbox: Add transmit done by blocking option
Date: Mon, 2 Jul 2018 14:40:28 +0300
Message-ID: <20180702114033.15654-4-mperttunen@nvidia.com>
In-Reply-To: <20180702114033.15654-1-mperttunen@nvidia.com>
References: <20180702114033.15654-1-mperttunen@nvidia.com>
X-Mailing-List: linux-tegra@vger.kernel.org

Add a new TXDONE option, TXDONE_BY_BLOCK. With this option, the
send_data function of the mailbox driver is expected to block until
the message has been sent. The new option is used with the Tegra
Combined UART driver to minimize unnecessary overhead when
transmitting data.
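To illustrate the intent (not part of this patch): a controller that
selects TXDONE_BY_BLOCK is expected to have its send_data callback
return only once the message has actually gone out, so no TX-done IRQ
and no polling hrtimer are needed. A minimal hypothetical sketch
follows; struct example_mbox and the EXAMPLE_MBOX_* registers are made
up for illustration, and how a controller advertises TXDONE_BY_BLOCK
to the core is not shown in this hunk.

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/mailbox_controller.h>

#define EXAMPLE_MBOX_DATA		0x0	/* hypothetical data register */
#define EXAMPLE_MBOX_STATUS		0x4	/* hypothetical status register */
#define EXAMPLE_MBOX_STATUS_BUSY	BIT(0)	/* set while the word is in flight */

struct example_mbox {
	struct mbox_controller mbox;
	void __iomem *regs;
};

/* Blocking send_data: returns only once the word has been consumed. */
static int example_mbox_send_data(struct mbox_chan *chan, void *data)
{
	struct example_mbox *emb = chan->con_priv;
	u32 word = *(u32 *)data;

	writel(word, emb->regs + EXAMPLE_MBOX_DATA);

	/* Busy-wait until the hardware reports the word has been sent. */
	while (readl(emb->regs + EXAMPLE_MBOX_STATUS) & EXAMPLE_MBOX_STATUS_BUSY)
		cpu_relax();

	/*
	 * Returning 0 here means the transfer is already complete; with
	 * TXDONE_BY_BLOCK the core ticks the TX itself right after this
	 * callback returns.
	 */
	return 0;
}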
Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
---
 drivers/mailbox/mailbox.c | 30 +++++++++++++++++++++---------
 drivers/mailbox/mailbox.h |  1 +
 2 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
index 674b35f402f5..5c76b70e673c 100644
--- a/drivers/mailbox/mailbox.c
+++ b/drivers/mailbox/mailbox.c
@@ -53,6 +53,8 @@ static int add_to_rbuf(struct mbox_chan *chan, void *mssg)
 	return idx;
 }
 
+static void tx_tick(struct mbox_chan *chan, int r, bool submit_next);
+
 static void msg_submit(struct mbox_chan *chan)
 {
 	unsigned count, idx;
@@ -60,10 +62,13 @@ static void msg_submit(struct mbox_chan *chan)
 	void *data;
 	int err = -EBUSY;
 
+next:
 	spin_lock_irqsave(&chan->lock, flags);
 
-	if (!chan->msg_count || chan->active_req)
-		goto exit;
+	if (!chan->msg_count || chan->active_req) {
+		spin_unlock_irqrestore(&chan->lock, flags);
+		return;
+	}
 
 	count = chan->msg_count;
 	idx = chan->msg_free;
@@ -82,15 +87,21 @@ static void msg_submit(struct mbox_chan *chan)
 		chan->active_req = data;
 		chan->msg_count--;
 	}
-exit:
+
 	spin_unlock_irqrestore(&chan->lock, flags);
 
 	if (!err && (chan->txdone_method & TXDONE_BY_POLL))
 		/* kick start the timer immediately to avoid delays */
 		hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
+
+	if (chan->txdone_method & TXDONE_BY_BLOCK) {
+		tx_tick(chan, err, false);
+		if (!err)
+			goto next;
+	}
 }
 
-static void tx_tick(struct mbox_chan *chan, int r)
+static void tx_tick(struct mbox_chan *chan, int r, bool submit_next)
 {
 	unsigned long flags;
 	void *mssg;
@@ -101,7 +112,8 @@ static void tx_tick(struct mbox_chan *chan, int r)
 	spin_unlock_irqrestore(&chan->lock, flags);
 
 	/* Submit next message */
-	msg_submit(chan);
+	if (submit_next)
+		msg_submit(chan);
 
 	if (!mssg)
 		return;
@@ -127,7 +139,7 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer)
 		if (chan->active_req && chan->cl) {
 			txdone = chan->mbox->ops->last_tx_done(chan);
 			if (txdone)
-				tx_tick(chan, 0);
+				tx_tick(chan, 0, true);
 			else
 				resched = true;
 		}
@@ -176,7 +188,7 @@ void mbox_chan_txdone(struct mbox_chan *chan, int r)
 		return;
 	}
 
-	tx_tick(chan, r);
+	tx_tick(chan, r, true);
 }
 EXPORT_SYMBOL_GPL(mbox_chan_txdone);
 
@@ -196,7 +208,7 @@ void mbox_client_txdone(struct mbox_chan *chan, int r)
 		return;
 	}
 
-	tx_tick(chan, r);
+	tx_tick(chan, r, true);
 }
 EXPORT_SYMBOL_GPL(mbox_client_txdone);
 
@@ -275,7 +287,7 @@ int mbox_send_message(struct mbox_chan *chan, void *mssg)
 		ret = wait_for_completion_timeout(&chan->tx_complete, wait);
 		if (ret == 0) {
 			t = -ETIME;
-			tx_tick(chan, t);
+			tx_tick(chan, t, true);
 		}
 	}
 
diff --git a/drivers/mailbox/mailbox.h b/drivers/mailbox/mailbox.h
index 456ba68513bb..ec68e2e28cd6 100644
--- a/drivers/mailbox/mailbox.h
+++ b/drivers/mailbox/mailbox.h
@@ -10,5 +10,6 @@
 #define TXDONE_BY_IRQ	BIT(0) /* controller has remote RTR irq */
 #define TXDONE_BY_POLL	BIT(1) /* controller can read status of last TX */
 #define TXDONE_BY_ACK	BIT(2) /* S/W ACK recevied by Client ticks the TX */
+#define TXDONE_BY_BLOCK	BIT(3) /* mailbox driver send_data blocks until done */
 
 #endif /* __MAILBOX_H */
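For completeness, the client side keeps using the ordinary API: with
TXDONE_BY_BLOCK, msg_submit() ticks the TX as soon as the controller's
blocking send_data() returns, so for a tx_block client the completion
is already signalled by the time mbox_send_message() waits on it. A
minimal hypothetical client helper (the channel is assumed to have
been obtained elsewhere via mbox_request_channel()):

#include <linux/mailbox_client.h>

/* Hypothetical helper: transmit one word over an already requested channel. */
static int example_tx_word(struct mbox_chan *chan, u32 *word)
{
	/*
	 * No TX-done IRQ or polling hrtimer is involved; the call is
	 * effectively synchronous once the controller's send_data()
	 * has returned.
	 */
	return mbox_send_message(chan, word);
}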