From patchwork Sat Feb 3 07:58:14 2018
X-Patchwork-Submitter: Abhishek Sahu
X-Patchwork-Id: 868867
From: Abhishek Sahu
To: Andy Gross, Wolfram Sang
Cc: David Brown, Sricharan R, linux-arm-msm@vger.kernel.org,
	linux-soc@vger.kernel.org, linux-i2c@vger.kernel.org,
	linux-kernel@vger.kernel.org, Abhishek Sahu
Subject: [PATCH 09/12] i2c: qup: fix buffer overflow for multiple msg of maximum xfer len
Date: Sat, 3 Feb 2018 13:28:14 +0530
Message-Id: <1517644697-30806-10-git-send-email-absahu@codeaurora.org>
In-Reply-To: <1517644697-30806-1-git-send-email-absahu@codeaurora.org>
References: <1517644697-30806-1-git-send-email-absahu@codeaurora.org>
X-Mailing-List: linux-i2c@vger.kernel.org

The BAM mode requires a buffer for the start tag data and for the tx and
rx SG lists. Currently, this buffer is sized for a single transfer of the
maximum length (65K). However, an I2C transfer can contain multiple
messages, each of which can itself be of this maximum length, so a buffer
overflow occurs in that case. Simply increasing the buffer length is not
feasible, since an I2C transfer can contain any number of messages, so
this patch makes the following changes to support transfers with multiple
messages:

1. Calculate the required buffers for 2 maximum length messages
   (65K * 2).
2. Split descriptor formation from descriptor scheduling. The idea is to
   fit as many messages as possible into one DMA transfer, up to the 65K
   threshold value (max_xfer_sg_len). Whenever sg_cnt crosses this
   threshold, schedule the BAM transfer; the subsequent transfer then
   starts again from zero.

Signed-off-by: Abhishek Sahu
Reviewed-by: Andy Gross
---
 drivers/i2c/busses/i2c-qup.c | 199 +++++++++++++++++++++++++------------------
 1 file changed, 118 insertions(+), 81 deletions(-)

diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c
index 6df65ea..ba717bb 100644
--- a/drivers/i2c/busses/i2c-qup.c
+++ b/drivers/i2c/busses/i2c-qup.c
@@ -155,6 +155,7 @@ struct qup_i2c_bam {
 	struct qup_i2c_tag	tag;
 	struct dma_chan		*dma;
 	struct scatterlist	*sg;
+	unsigned int		sg_cnt;
 };
 
 struct qup_i2c_dev {
@@ -195,6 +196,8 @@ struct qup_i2c_dev {
 	bool			use_dma;
 	/* The threshold length above which DMA will be used */
 	unsigned int		dma_threshold;
+	unsigned int		max_xfer_sg_len;
+	unsigned int		tag_buf_pos;
 	struct dma_pool		*dpool;
 	struct qup_i2c_tag	start_tag;
 	struct qup_i2c_bam	brx;
@@ -699,86 +702,86 @@ static int qup_i2c_req_dma(struct qup_i2c_dev *qup)
 	return 0;
 }
 
-static int qup_i2c_bam_do_xfer(struct qup_i2c_dev *qup, struct i2c_msg *msg,
-			       int num)
+static int qup_i2c_bam_make_desc(struct qup_i2c_dev *qup, struct i2c_msg *msg)
+{
+	int ret = 0, limit = QUP_READ_LIMIT;
+	u32 len = 0, blocks, rem;
+	u32 i = 0, tlen, tx_len = 0;
+	u8 *tags;
+
+	qup_i2c_set_blk_data(qup, msg);
+
+	blocks = qup->blk.count;
+	rem = msg->len - (blocks - 1) * limit;
+
+	if (msg->flags & I2C_M_RD) {
+		while (qup->blk.pos < blocks) {
+			tlen = (i == (blocks - 1)) ? rem : limit;
+			tags = &qup->start_tag.start[qup->tag_buf_pos + len];
+			len += qup_i2c_set_tags(tags, qup, msg);
+			qup->blk.data_len -= tlen;
+
+			/* scratch buf to read the start and len tags */
+			ret = qup_sg_set_buf(&qup->brx.sg[qup->brx.sg_cnt++],
+					     &qup->brx.tag.start[0],
+					     2, qup, DMA_FROM_DEVICE);
+
+			if (ret)
+				return ret;
+
+			ret = qup_sg_set_buf(&qup->brx.sg[qup->brx.sg_cnt++],
+					     &msg->buf[limit * i],
+					     tlen, qup,
+					     DMA_FROM_DEVICE);
+			if (ret)
+				return ret;
+
+			i++;
+			qup->blk.pos = i;
+		}
+		ret = qup_sg_set_buf(&qup->btx.sg[qup->btx.sg_cnt++],
+				     &qup->start_tag.start[qup->tag_buf_pos],
+				     len, qup, DMA_TO_DEVICE);
+		if (ret)
+			return ret;
+
+		qup->tag_buf_pos += len;
+	} else {
+		while (qup->blk.pos < blocks) {
+			tlen = (i == (blocks - 1)) ? rem : limit;
+			tags = &qup->start_tag.start[qup->tag_buf_pos + tx_len];
+			len = qup_i2c_set_tags(tags, qup, msg);
+			qup->blk.data_len -= tlen;
+
+			ret = qup_sg_set_buf(&qup->btx.sg[qup->btx.sg_cnt++],
+					     tags, len,
+					     qup, DMA_TO_DEVICE);
+			if (ret)
+				return ret;
+
+			tx_len += len;
+			ret = qup_sg_set_buf(&qup->btx.sg[qup->btx.sg_cnt++],
+					     &msg->buf[limit * i],
+					     tlen, qup, DMA_TO_DEVICE);
+			if (ret)
+				return ret;
+			i++;
+			qup->blk.pos = i;
+		}
+
+		qup->tag_buf_pos += tx_len;
+	}
+
+	return 0;
+}
+
+static int qup_i2c_bam_schedule_desc(struct qup_i2c_dev *qup)
 {
 	struct dma_async_tx_descriptor *txd, *rxd = NULL;
-	int ret = 0, idx = 0, limit = QUP_READ_LIMIT;
+	int ret = 0;
 	dma_cookie_t cookie_rx, cookie_tx;
-	u32 len, blocks, rem;
-	u32 i, tlen, tx_len, tx_buf = 0, rx_buf = 0, off = 0;
-	u8 *tags;
-
-	while (idx < num) {
-		tx_len = 0, len = 0, i = 0;
-
-		qup->is_last = (idx == (num - 1));
-
-		qup_i2c_set_blk_data(qup, msg);
-
-		blocks = qup->blk.count;
-		rem = msg->len - (blocks - 1) * limit;
-
-		if (msg->flags & I2C_M_RD) {
-			while (qup->blk.pos < blocks) {
-				tlen = (i == (blocks - 1)) ? rem : limit;
-				tags = &qup->start_tag.start[off + len];
-				len += qup_i2c_set_tags(tags, qup, msg);
-				qup->blk.data_len -= tlen;
-
-				/* scratch buf to read the start and len tags */
-				ret = qup_sg_set_buf(&qup->brx.sg[rx_buf++],
-						     &qup->brx.tag.start[0],
-						     2, qup, DMA_FROM_DEVICE);
-
-				if (ret)
-					return ret;
-
-				ret = qup_sg_set_buf(&qup->brx.sg[rx_buf++],
-						     &msg->buf[limit * i],
-						     tlen, qup,
-						     DMA_FROM_DEVICE);
-				if (ret)
-					return ret;
-
-				i++;
-				qup->blk.pos = i;
-			}
-			ret = qup_sg_set_buf(&qup->btx.sg[tx_buf++],
-					     &qup->start_tag.start[off],
-					     len, qup, DMA_TO_DEVICE);
-			if (ret)
-				return ret;
-
-			off += len;
-		} else {
-			while (qup->blk.pos < blocks) {
-				tlen = (i == (blocks - 1)) ? rem : limit;
-				tags = &qup->start_tag.start[off + tx_len];
-				len = qup_i2c_set_tags(tags, qup, msg);
-				qup->blk.data_len -= tlen;
-
-				ret = qup_sg_set_buf(&qup->btx.sg[tx_buf++],
-						     tags, len,
-						     qup, DMA_TO_DEVICE);
-				if (ret)
-					return ret;
-
-				tx_len += len;
-				ret = qup_sg_set_buf(&qup->btx.sg[tx_buf++],
-						     &msg->buf[limit * i],
-						     tlen, qup, DMA_TO_DEVICE);
-				if (ret)
-					return ret;
-				i++;
-				qup->blk.pos = i;
-			}
-			off += tx_len;
-
-		}
-		idx++;
-		msg++;
-	}
+	u32 len = 0;
+	u32 tx_buf = qup->btx.sg_cnt, rx_buf = qup->brx.sg_cnt;
 
 	/* schedule the EOT and FLUSH I2C tags */
 	len = 1;
@@ -878,11 +881,19 @@ static int qup_i2c_bam_do_xfer(struct qup_i2c_dev *qup, struct i2c_msg *msg,
 	return ret;
 }
 
+static void qup_i2c_bam_clear_tag_buffers(struct qup_i2c_dev *qup)
+{
+	qup->btx.sg_cnt = 0;
+	qup->brx.sg_cnt = 0;
+	qup->tag_buf_pos = 0;
+}
+
 static int qup_i2c_bam_xfer(struct i2c_adapter *adap, struct i2c_msg *msg,
 			    int num)
 {
 	struct qup_i2c_dev *qup = i2c_get_adapdata(adap);
 	int ret = 0;
+	int idx = 0;
 
 	enable_irq(qup->irq);
 	ret = qup_i2c_req_dma(qup);
@@ -905,9 +916,34 @@ static int qup_i2c_bam_xfer(struct i2c_adapter *adap, struct i2c_msg *msg,
 		goto out;
 
 	writel(qup->clk_ctl, qup->base + QUP_I2C_CLK_CTL);
+	qup_i2c_bam_clear_tag_buffers(qup);
+
+	for (idx = 0; idx < num; idx++) {
+		qup->msg = msg + idx;
+		qup->is_last = idx == (num - 1);
+
+		ret = qup_i2c_bam_make_desc(qup, qup->msg);
+		if (ret)
+			break;
+
+		/*
+		 * Make the DMA descriptor and schedule the BAM transfer
+		 * once the accumulated SG entries cross the maximum
+		 * length. Since the tag buffers are sized for two
+		 * maximum-length transfers, the accumulated descriptors
+		 * can never overrun the actual buffer length.
+		 */
+		if (qup->btx.sg_cnt > qup->max_xfer_sg_len ||
+		    qup->brx.sg_cnt > qup->max_xfer_sg_len ||
+		    qup->is_last) {
+			ret = qup_i2c_bam_schedule_desc(qup);
+			if (ret)
+				break;
+
+			qup_i2c_bam_clear_tag_buffers(qup);
+		}
+	}
 
-	qup->msg = msg;
-	ret = qup_i2c_bam_do_xfer(qup, qup->msg, num);
 out:
 	disable_irq(qup->irq);
@@ -1459,7 +1495,8 @@ static int qup_i2c_probe(struct platform_device *pdev)
 		else if (ret != 0)
 			goto nodma;
 
-	blocks = (MX_BLOCKS << 1) + 1;
+	qup->max_xfer_sg_len = (MX_BLOCKS << 1);
+	blocks = 2 * qup->max_xfer_sg_len + 1;
 	qup->btx.sg = devm_kzalloc(&pdev->dev,
 				   sizeof(*qup->btx.sg) * blocks,
 				   GFP_KERNEL);
@@ -1603,7 +1640,7 @@ static int qup_i2c_probe(struct platform_device *pdev)
 	one_bit_t = (USEC_PER_SEC / clk_freq) + 1;
 	qup->one_byte_t = one_bit_t * 9;
 	qup->xfer_timeout = TOUT_MIN * HZ +
-			    usecs_to_jiffies(MX_TX_RX_LEN * qup->one_byte_t);
+			    usecs_to_jiffies(2 * MX_TX_RX_LEN * qup->one_byte_t);
 
 	dev_dbg(qup->dev, "IN:block:%d, fifo:%d, OUT:block:%d, fifo:%d\n",
 		qup->in_blk_sz, qup->in_fifo_sz,
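
Stripped of the driver details, the fix is a flush-on-threshold batching
loop: each message appends its descriptors to fixed buffers sized for two
maximum-length transfers, and the hardware is kicked whenever the entry
count crosses the single-transfer threshold or the last message is
reached. Below is a minimal standalone sketch of that pattern; the
make_desc()/schedule_desc() helpers and struct xfer_state are hypothetical
stand-ins for illustration, not the driver's actual code.

#include <stdio.h>

#define MAX_XFER_SG_LEN	8			/* threshold: SG entries per HW pass */
#define SG_BUF_LEN	(2 * MAX_XFER_SG_LEN)	/* room for two max-length transfers */

struct xfer_state {
	int sg[SG_BUF_LEN];	/* stand-in for the SG descriptor list */
	unsigned int sg_cnt;	/* mirrors qup->btx.sg_cnt */
};

/*
 * Append descriptors for one message. A single message contributes at
 * most MAX_XFER_SG_LEN entries, which the 2x buffer can always absorb.
 */
static int make_desc(struct xfer_state *s, int msg_entries)
{
	for (int i = 0; i < msg_entries; i++)
		s->sg[s->sg_cnt++] = i;
	return 0;
}

/*
 * Submit everything accumulated so far and reset the buffers, playing
 * the role of qup_i2c_bam_schedule_desc() + qup_i2c_bam_clear_tag_buffers().
 */
static int schedule_desc(struct xfer_state *s)
{
	printf("scheduling %u SG entries\n", s->sg_cnt);
	s->sg_cnt = 0;
	return 0;
}

int main(void)
{
	struct xfer_state s = { .sg_cnt = 0 };
	int msgs[] = { 8, 8, 3, 8 };	/* entries each message contributes */
	int num = sizeof(msgs) / sizeof(msgs[0]);

	for (int idx = 0; idx < num; idx++) {
		int is_last = (idx == num - 1);

		if (make_desc(&s, msgs[idx]))
			return 1;

		/*
		 * Flush once past the single-transfer threshold, or at the
		 * last message, so sg_cnt never exceeds SG_BUF_LEN.
		 */
		if (s.sg_cnt > MAX_XFER_SG_LEN || is_last)
			if (schedule_desc(&s))
				return 1;
	}
	return 0;
}

Sizing the buffer at twice the threshold is what makes the post-append
flush check safe: the backlog is at most one threshold's worth of entries
when make_desc() runs, and one message can add at most another threshold's
worth on top of it.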