[{"id":1775597,"web_url":"http://patchwork.ozlabs.org/comment/1775597/","msgid":"<481add20-9cea-a91a-e72c-45a824362e64@nvidia.com>","list_archive_url":null,"date":"2017-09-26T14:45:22","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":66273,"url":"http://patchwork.ozlabs.org/api/people/66273/","name":"Jon Hunter","email":"jonathanh@nvidia.com"},"content":"Hi Dmitry,\n\nOn 26/09/17 00:22, Dmitry Osipenko wrote:\n> AHB DMA controller presents on Tegra20/30 SoC's, it supports transfers\n> memory <-> AHB bus peripherals as well as mem-to-mem transfers. Driver\n> doesn't yet implement transfers larger than 64K and scatter-gather\n> transfers that have NENT > 1, HW doesn't have native support for these\n> cases.\n> \n> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>\n> ---\n>  drivers/dma/Kconfig           |   9 +\n>  drivers/dma/Makefile          |   1 +\n>  drivers/dma/tegra20-ahb-dma.c | 679 ++++++++++++++++++++++++++++++++++++++++++\n>  3 files changed, 689 insertions(+)\n>  create mode 100644 drivers/dma/tegra20-ahb-dma.c\n\n...\n\n> diff --git a/drivers/dma/tegra20-ahb-dma.c b/drivers/dma/tegra20-ahb-dma.c\n> new file mode 100644\n> index 000000000000..8316d64e35e1\n> --- /dev/null\n> +++ b/drivers/dma/tegra20-ahb-dma.c\n> @@ -0,0 +1,679 @@\n> +/*\n> + * Copyright 2017 Dmitry Osipenko <digetx@gmail.com>\n> + *\n> + * This program is free software; you can redistribute it and/or modify it\n> + * under the terms and conditions of the GNU General Public License,\n> + * version 2, as published by the Free Software Foundation.\n> + *\n> + * This program is distributed in the hope it will be useful, but WITHOUT\n> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for\n> + * more details.\n> + *\n> + * You should have received a copy of the GNU General Public License\n> + * along with this program.  
If not, see <http://www.gnu.org/licenses/>.\n> + */\n> +\n> +#include <linux/clk.h>\n> +#include <linux/delay.h>\n> +#include <linux/interrupt.h>\n> +#include <linux/io.h>\n> +#include <linux/module.h>\n> +#include <linux/of_device.h>\n> +#include <linux/of_dma.h>\n> +#include <linux/platform_device.h>\n> +#include <linux/reset.h>\n> +#include <linux/slab.h>\n> +#include <linux/spinlock.h>\n> +\n> +#include \"dmaengine.h\"\n> +\n> +#define TEGRA_AHBDMA_CMD\t\t\t0x0\n> +#define TEGRA_AHBDMA_CMD_ENABLE\t\t\tBIT(31)\n> +\n> +#define TEGRA_AHBDMA_IRQ_ENB_MASK\t\t0x20\n> +#define TEGRA_AHBDMA_IRQ_ENB_CH(ch)\t\tBIT(ch)\n> +\n> +#define TEGRA_AHBDMA_CHANNEL_BASE(ch)\t\t(0x1000 + (ch) * 0x20)\n> +\n> +#define TEGRA_AHBDMA_CHANNEL_CSR\t\t0x0\n> +#define TEGRA_AHBDMA_CHANNEL_ADDR_WRAP\t\tBIT(18)\n> +#define TEGRA_AHBDMA_CHANNEL_FLOW\t\tBIT(24)\n> +#define TEGRA_AHBDMA_CHANNEL_ONCE\t\tBIT(26)\n> +#define TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB\t\tBIT(27)\n> +#define TEGRA_AHBDMA_CHANNEL_IE_EOC\t\tBIT(30)\n> +#define TEGRA_AHBDMA_CHANNEL_ENABLE\t\tBIT(31)\n> +#define TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT\t16\n> +#define TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK\t0xFFFC\n> +\n> +#define TEGRA_AHBDMA_CHANNEL_STA\t\t0x4\n> +#define TEGRA_AHBDMA_CHANNEL_IS_EOC\t\tBIT(30)\n> +\n> +#define TEGRA_AHBDMA_CHANNEL_AHB_PTR\t\t0x10\n> +\n> +#define TEGRA_AHBDMA_CHANNEL_AHB_SEQ\t\t0x14\n> +#define TEGRA_AHBDMA_CHANNEL_INTR_ENB\t\tBIT(31)\n> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT\t24\n> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_1\t2\n> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_4\t3\n> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_8\t4\n> +\n> +#define TEGRA_AHBDMA_CHANNEL_XMB_PTR\t\t0x18\n> +\n> +#define TEGRA_AHBDMA_BUS_WIDTH\t\t\tBIT(DMA_SLAVE_BUSWIDTH_4_BYTES)\n> +\n> +#define TEGRA_AHBDMA_DIRECTIONS\t\t\tBIT(DMA_DEV_TO_MEM) | \\\n> +\t\t\t\t\t\tBIT(DMA_MEM_TO_DEV)\n> +\n> +struct tegra_ahbdma_tx_desc {\n> +\tstruct dma_async_tx_descriptor desc;\n> +\tstruct tasklet_struct tasklet;\n> +\tstruct list_head 
node;\n\nAny reason why we cannot use the virt-dma framework for this driver? I\nwould hope it would simplify the driver a bit.\n\n> +\tenum dma_transfer_direction dir;\n> +\tdma_addr_t mem_paddr;\n> +\tunsigned long flags;\n> +\tsize_t size;\n> +\tbool in_fly;\n> +\tbool cyclic;\n> +};\n> +\n> +struct tegra_ahbdma_chan {\n> +\tstruct dma_chan dma_chan;\n> +\tstruct list_head active_list;\n> +\tstruct list_head pending_list;\n> +\tstruct completion idling;\n> +\tvoid __iomem *regs;\n> +\tspinlock_t lock;\n> +\tunsigned int id;\n> +};\n> +\n> +struct tegra_ahbdma {\n> +\tstruct tegra_ahbdma_chan channels[4];\n> +\tstruct dma_device dma_dev;\n> +\tstruct reset_control *rst;\n> +\tstruct clk *clk;\n> +\tvoid __iomem *regs;\n> +};\n> +\n> +static inline struct tegra_ahbdma *to_ahbdma(struct dma_device *dev)\n> +{\n> +\treturn container_of(dev, struct tegra_ahbdma, dma_dev);\n> +}\n> +\n> +static inline struct tegra_ahbdma_chan *to_ahbdma_chan(struct dma_chan *chan)\n> +{\n> +\treturn container_of(chan, struct tegra_ahbdma_chan, dma_chan);\n> +}\n> +\n> +static inline struct tegra_ahbdma_tx_desc *to_ahbdma_tx_desc(\n> +\t\t\t\tstruct dma_async_tx_descriptor *tx)\n> +{\n> +\treturn container_of(tx, struct tegra_ahbdma_tx_desc, desc);\n> +}\n> +\n> +static void tegra_ahbdma_submit_tx(struct tegra_ahbdma_chan *chan,\n> +\t\t\t\t   struct tegra_ahbdma_tx_desc *tx)\n> +{\n> +\tu32 csr;\n> +\n> +\twritel_relaxed(tx->mem_paddr,\n> +\t\t       chan->regs + TEGRA_AHBDMA_CHANNEL_XMB_PTR);\n> +\n> +\tcsr = readl_relaxed(chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n> +\n> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK;\n> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB;\n> +\tcsr |= TEGRA_AHBDMA_CHANNEL_ENABLE;\n> +\tcsr |= TEGRA_AHBDMA_CHANNEL_IE_EOC;\n> +\tcsr |= tx->size - sizeof(u32);\n> +\n> +\tif (tx->dir == DMA_DEV_TO_MEM)\n> +\t\tcsr |= TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB;\n> +\n> +\tif (!tx->cyclic)\n> +\t\tcsr |= TEGRA_AHBDMA_CHANNEL_ONCE;\n> +\n> +\twritel_relaxed(csr, chan->regs + 
TEGRA_AHBDMA_CHANNEL_CSR);\n> +\n> +\ttx->in_fly = true;\n> +}\n> +\n> +static void tegra_ahbdma_tasklet(unsigned long data)\n> +{\n> +\tstruct tegra_ahbdma_tx_desc *tx = (struct tegra_ahbdma_tx_desc *)data;\n> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n> +\n> +\tdmaengine_desc_get_callback_invoke(desc, NULL);\n> +\n> +\tif (!tx->cyclic && !dmaengine_desc_test_reuse(desc))\n> +\t\tkfree(tx);\n> +}\n> +\n> +static bool tegra_ahbdma_tx_completed(struct tegra_ahbdma_chan *chan,\n> +\t\t\t\t      struct tegra_ahbdma_tx_desc *tx)\n> +{\n> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n> +\tbool reuse = dmaengine_desc_test_reuse(desc);\n> +\tbool interrupt = tx->flags & DMA_PREP_INTERRUPT;\n> +\tbool completed = !tx->cyclic;\n> +\n> +\tif (completed)\n> +\t\tdma_cookie_complete(desc);\n> +\n> +\tif (interrupt)\n> +\t\ttasklet_schedule(&tx->tasklet);\n> +\n> +\tif (completed) {\n> +\t\tlist_del(&tx->node);\n> +\n> +\t\tif (reuse)\n> +\t\t\ttx->in_fly = false;\n> +\n> +\t\tif (!interrupt && !reuse)\n> +\t\t\tkfree(tx);\n> +\t}\n> +\n> +\treturn completed;\n> +}\n> +\n> +static bool tegra_ahbdma_next_tx_issued(struct tegra_ahbdma_chan *chan)\n> +{\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\n> +\ttx = list_first_entry_or_null(&chan->active_list,\n> +\t\t\t\t      struct tegra_ahbdma_tx_desc,\n> +\t\t\t\t      node);\n> +\tif (tx)\n> +\t\ttegra_ahbdma_submit_tx(chan, tx);\n> +\n> +\treturn !!tx;\n> +}\n> +\n> +static void tegra_ahbdma_handle_channel(struct tegra_ahbdma_chan *chan)\n> +{\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\tunsigned long flags;\n> +\tu32 status;\n> +\n> +\tstatus = readl_relaxed(chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n> +\tif (!(status & TEGRA_AHBDMA_CHANNEL_IS_EOC))\n> +\t\treturn;\n> +\n> +\twritel_relaxed(TEGRA_AHBDMA_CHANNEL_IS_EOC,\n> +\t\t       chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n> +\n> +\tspin_lock_irqsave(&chan->lock, flags);\n> +\n> +\tif (!completion_done(&chan->idling)) {\n> +\t\ttx = 
list_first_entry(&chan->active_list,\n> +\t\t\t\t      struct tegra_ahbdma_tx_desc,\n> +\t\t\t\t      node);\n> +\n> +\t\tif (tegra_ahbdma_tx_completed(chan, tx) &&\n> +\t\t    !tegra_ahbdma_next_tx_issued(chan))\n> +\t\t\tcomplete_all(&chan->idling);\n> +\t}\n> +\n> +\tspin_unlock_irqrestore(&chan->lock, flags);\n> +}\n> +\n> +static irqreturn_t tegra_ahbdma_isr(int irq, void *dev_id)\n> +{\n> +\tstruct tegra_ahbdma *tdma = dev_id;\n> +\tunsigned int i;\n> +\n> +\tfor (i = 0; i < ARRAY_SIZE(tdma->channels); i++)\n> +\t\ttegra_ahbdma_handle_channel(&tdma->channels[i]);\n> +\n> +\treturn IRQ_HANDLED;\n> +}\n> +\n> +static dma_cookie_t tegra_ahbdma_tx_submit(struct dma_async_tx_descriptor *desc)\n> +{\n> +\tstruct tegra_ahbdma_tx_desc *tx = to_ahbdma_tx_desc(desc);\n> +\tstruct tegra_ahbdma_chan *chan = to_ahbdma_chan(desc->chan);\n> +\tdma_cookie_t cookie;\n> +\n> +\tcookie = dma_cookie_assign(desc);\n> +\n> +\tspin_lock_irq(&chan->lock);\n> +\tlist_add_tail(&tx->node, &chan->pending_list);\n> +\tspin_unlock_irq(&chan->lock);\n> +\n> +\treturn cookie;\n> +}\n> +\n> +static int tegra_ahbdma_tx_desc_free(struct dma_async_tx_descriptor *desc)\n> +{\n> +\tkfree(to_ahbdma_tx_desc(desc));\n> +\n> +\treturn 0;\n> +}\n> +\n> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_slave_sg(\n> +\t\t\t\t\tstruct dma_chan *chan,\n> +\t\t\t\t\tstruct scatterlist *sgl,\n> +\t\t\t\t\tunsigned int sg_len,\n> +\t\t\t\t\tenum dma_transfer_direction dir,\n> +\t\t\t\t\tunsigned long flags,\n> +\t\t\t\t\tvoid *context)\n> +{\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\n> +\t/* unimplemented */\n> +\tif (sg_len != 1 || sg_dma_len(sgl) > SZ_64K)\n> +\t\treturn NULL;\n> +\n> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n> +\tif (!tx)\n> +\t\treturn NULL;\n> +\n> +\tdma_async_tx_descriptor_init(&tx->desc, chan);\n> +\n> +\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n> +\ttx->desc.desc_free\t= tegra_ahbdma_tx_desc_free;\n> +\ttx->mem_paddr\t\t= sg_dma_address(sgl);\n> +\ttx->size\t\t= 
sg_dma_len(sgl);\n> +\ttx->flags\t\t= flags;\n> +\ttx->dir\t\t\t= dir;\n> +\n> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n> +\n> +\treturn &tx->desc;\n> +}\n> +\n> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_dma_cyclic(\n> +\t\t\t\t\tstruct dma_chan *chan,\n> +\t\t\t\t\tdma_addr_t buf_addr,\n> +\t\t\t\t\tsize_t buf_len,\n> +\t\t\t\t\tsize_t period_len,\n> +\t\t\t\t\tenum dma_transfer_direction dir,\n> +\t\t\t\t\tunsigned long flags)\n> +{\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\n> +\t/* unimplemented */\n> +\tif (buf_len != period_len || buf_len > SZ_64K)\n> +\t\treturn NULL;\n> +\n> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n> +\tif (!tx)\n> +\t\treturn NULL;\n> +\n> +\tdma_async_tx_descriptor_init(&tx->desc, chan);\n> +\n> +\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n> +\ttx->mem_paddr\t\t= buf_addr;\n> +\ttx->size\t\t= buf_len;\n> +\ttx->flags\t\t= flags;\n> +\ttx->cyclic\t\t= true;\n> +\ttx->dir\t\t\t= dir;\n> +\n> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n> +\n> +\treturn &tx->desc;\n> +}\n> +\n> +static void tegra_ahbdma_issue_pending(struct dma_chan *chan)\n> +{\n> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\tstruct list_head *entry, *tmp;\n> +\tunsigned long flags;\n> +\n> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n> +\n> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list)\n> +\t\tlist_move_tail(entry, &ahbdma_chan->active_list);\n> +\n> +\tif (completion_done(&ahbdma_chan->idling)) {\n> +\t\ttx = list_first_entry_or_null(&ahbdma_chan->active_list,\n> +\t\t\t\t\t      struct tegra_ahbdma_tx_desc,\n> +\t\t\t\t\t      node);\n> +\t\tif (tx) {\n> +\t\t\ttegra_ahbdma_submit_tx(ahbdma_chan, tx);\n> +\t\t\treinit_completion(&ahbdma_chan->idling);\n> +\t\t}\n> +\t}\n> +\n> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n> +}\n> +\n> +static enum dma_status tegra_ahbdma_tx_status(struct dma_chan 
*chan,\n> +\t\t\t\t\t      dma_cookie_t cookie,\n> +\t\t\t\t\t      struct dma_tx_state *state)\n> +{\n> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\tenum dma_status cookie_status;\n> +\tunsigned long flags;\n> +\tsize_t residual;\n> +\tu32 status;\n> +\n> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n> +\n> +\tcookie_status = dma_cookie_status(chan, cookie, state);\n> +\tif (cookie_status != DMA_COMPLETE) {\n> +\t\tlist_for_each_entry(tx, &ahbdma_chan->active_list, node) {\n> +\t\t\tif (tx->desc.cookie == cookie)\n> +\t\t\t\tgoto found;\n> +\t\t}\n> +\t}\n> +\n> +\tgoto unlock;\n> +\n> +found:\n> +\tif (tx->in_fly) {\n> +\t\tstatus = readl_relaxed(\n> +\t\t\tahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n> +\t\tstatus  &= TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK;\n> +\n> +\t\tresidual = status;\n> +\t} else\n> +\t\tresidual = tx->size;\n> +\n> +\tdma_set_residue(state, residual);\n> +\n> +unlock:\n> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n> +\n> +\treturn cookie_status;\n> +}\n> +\n> +static int tegra_ahbdma_terminate_all(struct dma_chan *chan)\n> +{\n> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\tstruct list_head *entry, *tmp;\n> +\tu32 csr;\n> +\n> +\tspin_lock_irq(&ahbdma_chan->lock);\n> +\n> +\tcsr = readl_relaxed(ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_ENABLE;\n> +\n> +\twritel_relaxed(csr, ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n> +\n> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->active_list) {\n> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n> +\t\tlist_del(entry);\n> +\t\tkfree(tx);\n> +\t}\n> +\n> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list) {\n> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n> +\t\tlist_del(entry);\n> +\t\tkfree(tx);\n> +\t}\n> +\n> +\tcomplete_all(&ahbdma_chan->idling);\n> +\n> 
+\tspin_unlock_irq(&ahbdma_chan->lock);\n> +\n> +\treturn 0;\n> +}\n> +\n> +static int tegra_ahbdma_config(struct dma_chan *chan,\n> +\t\t\t       struct dma_slave_config *sconfig)\n> +{\n> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n> +\tenum dma_transfer_direction dir = sconfig->direction;\n> +\tu32 burst, ahb_seq, ahb_addr;\n> +\n> +\tif (sconfig->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||\n> +\t    sconfig->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)\n> +\t\treturn -EINVAL;\n> +\n> +\tif (dir == DMA_DEV_TO_MEM) {\n> +\t\tburst    = sconfig->src_maxburst;\n> +\t\tahb_addr = sconfig->src_addr;\n> +\t} else {\n> +\t\tburst    = sconfig->dst_maxburst;\n> +\t\tahb_addr = sconfig->dst_addr;\n> +\t}\n> +\n> +\tswitch (burst) {\n> +\tcase 1: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_1; break;\n> +\tcase 4: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_4; break;\n> +\tcase 8: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_8; break;\n> +\tdefault:\n> +\t\treturn -EINVAL;\n> +\t}\n> +\n> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n> +\n> +\twritel_relaxed(ahb_seq,\n> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n> +\n> +\twritel_relaxed(ahb_addr,\n> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n> +\n> +\treturn 0;\n> +}\n> +\n> +static void tegra_ahbdma_synchronize(struct dma_chan *chan)\n> +{\n> +\twait_for_completion(&to_ahbdma_chan(chan)->idling);\n> +}\n> +\n> +static void tegra_ahbdma_free_chan_resources(struct dma_chan *chan)\n> +{\n> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\tstruct list_head *entry, *tmp;\n> +\n> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list) {\n> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n> +\t\tlist_del(entry);\n> +\t\tkfree(tx);\n> +\t}\n> +}\n> +\n> +static void 
tegra_ahbdma_init_channel(struct tegra_ahbdma *tdma,\n> +\t\t\t\t      unsigned int chan_id)\n> +{\n> +\tstruct tegra_ahbdma_chan *ahbdma_chan = &tdma->channels[chan_id];\n> +\tstruct dma_chan *dma_chan = &ahbdma_chan->dma_chan;\n> +\tstruct dma_device *dma_dev = &tdma->dma_dev;\n> +\n> +\tINIT_LIST_HEAD(&ahbdma_chan->active_list);\n> +\tINIT_LIST_HEAD(&ahbdma_chan->pending_list);\n> +\tinit_completion(&ahbdma_chan->idling);\n> +\tspin_lock_init(&ahbdma_chan->lock);\n> +\tcomplete(&ahbdma_chan->idling);\n> +\n> +\tahbdma_chan->regs = tdma->regs + TEGRA_AHBDMA_CHANNEL_BASE(chan_id);\n> +\tahbdma_chan->id = chan_id;\n> +\n> +\tdma_cookie_init(dma_chan);\n> +\tdma_chan->device = dma_dev;\n> +\n> +\tlist_add_tail(&dma_chan->device_node, &dma_dev->channels);\n> +}\n> +\n> +static struct dma_chan *tegra_ahbdma_of_xlate(struct of_phandle_args *dma_spec,\n> +\t\t\t\t\t      struct of_dma *ofdma)\n> +{\n> +\tstruct tegra_ahbdma *tdma = ofdma->of_dma_data;\n> +\tstruct dma_chan *chan;\n> +\tu32 csr;\n> +\n> +\tchan = dma_get_any_slave_channel(&tdma->dma_dev);\n> +\tif (!chan)\n> +\t\treturn NULL;\n> +\n> +\t/* enable channels flow control */\n> +\tif (dma_spec->args_count == 1) {\n\nThe DT doc says #dma-cells should be '1' and so if not equal 1, is this\nnot an error?\n\n> +\t\tcsr  = TEGRA_AHBDMA_CHANNEL_FLOW;\n> +\t\tcsr |= dma_spec->args[0] << TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT;\n\nWhat about the TRIG_REQ field?\n\n> +\n> +\t\twritel_relaxed(csr,\n> +\t\t\tto_ahbdma_chan(chan)->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n> +\t}\n> +\t\n> +\treturn chan;\n> +}\n> +\n> +static int tegra_ahbdma_init_hw(struct tegra_ahbdma *tdma, struct device *dev)\n> +{\n> +\tint err;\n> +\n> +\terr = reset_control_assert(tdma->rst);\n> +\tif (err) {\n> +\t\tdev_err(dev, \"Failed to assert reset: %d\\n\", err);\n> +\t\treturn err;\n> +\t}\n> +\n> +\terr = clk_prepare_enable(tdma->clk);\n> +\tif (err) {\n> +\t\tdev_err(dev, \"Failed to enable clock: %d\\n\", err);\n> +\t\treturn err;\n> +\t}\n> +\n> 
+\tusleep_range(1000, 2000);\n> +\n> +\terr = reset_control_deassert(tdma->rst);\n> +\tif (err) {\n> +\t\tdev_err(dev, \"Failed to deassert reset: %d\\n\", err);\n> +\t\treturn err;\n> +\t}\n> +\n> +\twritel_relaxed(TEGRA_AHBDMA_CMD_ENABLE, tdma->regs + TEGRA_AHBDMA_CMD);\n> +\n> +\twritel_relaxed(TEGRA_AHBDMA_IRQ_ENB_CH(0) |\n> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(1) |\n> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(2) |\n> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(3),\n> +\t\t       tdma->regs + TEGRA_AHBDMA_IRQ_ENB_MASK);\n> +\n> +\treturn 0;\n> +}\n\nPersonally I would use the pm_runtime callbacks for this sort of thing\nand ...\n\n> +static int tegra_ahbdma_probe(struct platform_device *pdev)\n> +{\n> +\tstruct dma_device *dma_dev;\n> +\tstruct tegra_ahbdma *tdma;\n> +\tstruct resource *res_regs;\n> +\tunsigned int i;\n> +\tint irq;\n> +\tint err;\n> +\n> +\ttdma = devm_kzalloc(&pdev->dev, sizeof(*tdma), GFP_KERNEL);\n> +\tif (!tdma)\n> +\t\treturn -ENOMEM;\n> +\n> +\tirq = platform_get_irq(pdev, 0);\n> +\tif (irq < 0) {\n> +\t\tdev_err(&pdev->dev, \"Failed to get IRQ\\n\");\n> +\t\treturn irq;\n> +\t}\n> +\n> +\terr = devm_request_irq(&pdev->dev, irq, tegra_ahbdma_isr, 0,\n> +\t\t\t       dev_name(&pdev->dev), tdma);\n> +\tif (err) {\n> +\t\tdev_err(&pdev->dev, \"Failed to request IRQ\\n\");\n> +\t\treturn -ENODEV;\n> +\t}\n> +\n> +\tres_regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);\n> +\tif (!res_regs)\n> +\t\treturn -ENODEV;\n> +\n> +\ttdma->regs = devm_ioremap_resource(&pdev->dev, res_regs);\n> +\tif (IS_ERR(tdma->regs))\n> +\t\treturn PTR_ERR(tdma->regs);\n> +\n> +\ttdma->clk = devm_clk_get(&pdev->dev, NULL);\n> +\tif (IS_ERR(tdma->clk)) {\n> +\t\tdev_err(&pdev->dev, \"Failed to get AHB-DMA clock\\n\");\n> +\t\treturn PTR_ERR(tdma->clk);\n> +\t}\n> +\n> +\ttdma->rst = devm_reset_control_get(&pdev->dev, NULL);\n> +\tif (IS_ERR(tdma->rst)) {\n> +\t\tdev_err(&pdev->dev, \"Failed to get AHB-DMA reset\\n\");\n> +\t\treturn PTR_ERR(tdma->rst);\n> +\t}\n> +\n> +\terr = 
tegra_ahbdma_init_hw(tdma, &pdev->dev);\n> +\tif (err)\n> +\t\treturn err;\n\n... here is looks like we turn the clocks on and leave them on. I would\nrather that we turn them on when the DMA channel is requested and turn\nthem off again when freed. Again would be good to use pm_runtime APIs\nfor this.\n\n> +\tdma_dev = &tdma->dma_dev;\n> +\n> +\tINIT_LIST_HEAD(&dma_dev->channels);\n> +\n> +\tfor (i = 0; i < ARRAY_SIZE(tdma->channels); i++)\n> +\t\ttegra_ahbdma_init_channel(tdma, i);\n> +\n> +\tdma_cap_set(DMA_PRIVATE, dma_dev->cap_mask);\n> +\tdma_cap_set(DMA_CYCLIC, dma_dev->cap_mask);\n> +\tdma_cap_set(DMA_SLAVE, dma_dev->cap_mask);\n> +\n> +\tdma_dev->max_burst\t\t= 8;\n> +\tdma_dev->directions\t\t= TEGRA_AHBDMA_DIRECTIONS;\n> +\tdma_dev->src_addr_widths\t= TEGRA_AHBDMA_BUS_WIDTH;\n> +\tdma_dev->dst_addr_widths\t= TEGRA_AHBDMA_BUS_WIDTH;\n> +\tdma_dev->descriptor_reuse\t= true;\n> +\tdma_dev->residue_granularity\t= DMA_RESIDUE_GRANULARITY_BURST;\n> +\tdma_dev->device_free_chan_resources = tegra_ahbdma_free_chan_resources;\n> +\tdma_dev->device_prep_slave_sg\t= tegra_ahbdma_prep_slave_sg;\n> +\tdma_dev->device_prep_dma_cyclic\t= tegra_ahbdma_prep_dma_cyclic;\n> +\tdma_dev->device_terminate_all\t= tegra_ahbdma_terminate_all;\n> +\tdma_dev->device_issue_pending\t= tegra_ahbdma_issue_pending;\n> +\tdma_dev->device_tx_status\t= tegra_ahbdma_tx_status;\n> +\tdma_dev->device_config\t\t= tegra_ahbdma_config;\n> +\tdma_dev->device_synchronize\t= tegra_ahbdma_synchronize;\n> +\tdma_dev->dev\t\t\t= &pdev->dev;\n> +\n> +\terr = dma_async_device_register(dma_dev);\n> +\tif (err) {\n> +\t\tdev_err(&pdev->dev, \"Device registration failed %d\\n\", err);\n> +\t\treturn err;\n> +\t}\n> +\n> +\terr = of_dma_controller_register(pdev->dev.of_node,\n> +\t\t\t\t\t tegra_ahbdma_of_xlate, tdma);\n> +\tif (err) {\n> +\t\tdev_err(&pdev->dev, \"OF registration failed %d\\n\", err);\n> +\t\tdma_async_device_unregister(dma_dev);\n> +\t\treturn err;\n> +\t}\n> +\n> 
+\tplatform_set_drvdata(pdev, tdma);\n> +\n> +\treturn 0;\n> +}\n> +\n> +static int tegra_ahbdma_remove(struct platform_device *pdev)\n> +{\n> +\tstruct tegra_ahbdma *tdma = platform_get_drvdata(pdev);\n> +\n> +\tof_dma_controller_free(pdev->dev.of_node);\n> +\tdma_async_device_unregister(&tdma->dma_dev);\n> +\tclk_disable_unprepare(tdma->clk);\n> +\n> +\treturn 0;\n> +}\n> +\n> +static const struct of_device_id tegra_ahbdma_of_match[] = {\n> +\t{ .compatible = \"nvidia,tegra20-ahbdma\" },\n> +\t{ },\n> +};\n> +MODULE_DEVICE_TABLE(of, tegra_ahbdma_of_match);\n> +\n> +static struct platform_driver tegra_ahbdma_driver = {\n> +\t.driver = {\n> +\t\t.name\t= \"tegra-ahbdma\",\n> +\t\t.of_match_table = tegra_ahbdma_of_match,\n\nIt would be nice to have suspend/resume handler too. We could do a\nsimilar thing to the APB dma driver.\n\n> +\t},\n> +\t.probe\t= tegra_ahbdma_probe,\n> +\t.remove\t= tegra_ahbdma_remove,\n> +};\n> +module_platform_driver(tegra_ahbdma_driver);\n> +\n> +MODULE_DESCRIPTION(\"NVIDIA Tegra AHB DMA Controller driver\");\n> +MODULE_AUTHOR(\"Dmitry Osipenko <digetx@gmail.com>\");\n> +MODULE_LICENSE(\"GPL\");\n\nCheers\nJon","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y1kLw2mD7z9t6C\n\tfor <incoming@patchwork.ozlabs.org>;\n\tWed, 27 Sep 2017 00:47:20 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S968349AbdIZOrI (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tTue, 26 Sep 2017 10:47:08 -0400","from hqemgate16.nvidia.com ([216.228.121.65]:4018 
\"EHLO\n\thqemgate16.nvidia.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S967046AbdIZOrG (ORCPT\n\t<rfc822;linux-tegra@vger.kernel.org>);\n\tTue, 26 Sep 2017 10:47:06 -0400","from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by\n\thqemgate16.nvidia.com\n\tid <B59ca68480001>; Tue, 26 Sep 2017 07:46:35 -0700","from HQMAIL108.nvidia.com ([172.20.161.6])\n\tby hqpgpgate101.nvidia.com (PGP Universal service);\n\tTue, 26 Sep 2017 07:46:37 -0700","from UKMAIL101.nvidia.com (10.26.138.13) by HQMAIL108.nvidia.com\n\t(172.18.146.13) with Microsoft SMTP Server (TLS) id 15.0.1293.2;\n\tTue, 26 Sep 2017 14:45:27 +0000","from [10.21.132.144] (10.21.132.144) by UKMAIL101.nvidia.com\n\t(10.26.138.13) with Microsoft SMTP Server (TLS) id 15.0.1293.2;\n\tTue, 26 Sep 2017 14:45:23 +0000"],"X-PGP-Universal":"processed;\n\tby hqpgpgate101.nvidia.com on Tue, 26 Sep 2017 07:46:37 -0700","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","To":"Dmitry Osipenko <digetx@gmail.com>,\n\tThierry Reding <thierry.reding@gmail.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\t\"Peter De Schrijver\" <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette <mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>, Rob Herring <robh+dt@kernel.org>,\n\tVinod Koul <vinod.koul@intel.com>","CC":"<linux-tegra@vger.kernel.org>, <devicetree@vger.kernel.org>,\n\t<dmaengine@vger.kernel.org>, <linux-clk@vger.kernel.org>,\n\t<linux-kernel@vger.kernel.org>","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>","From":"Jon Hunter <jonathanh@nvidia.com>","Message-ID":"<481add20-9cea-a91a-e72c-45a824362e64@nvidia.com>","Date":"Tue, 26 Sep 2017 15:45:22 +0100","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) 
Gecko/20100101\n\tThunderbird/52.3.0","MIME-Version":"1.0","In-Reply-To":"<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>","X-Originating-IP":"[10.21.132.144]","X-ClientProxiedBy":"UKMAIL102.nvidia.com (10.26.138.15) To\n\tUKMAIL101.nvidia.com (10.26.138.13)","Content-Type":"text/plain; charset=\"utf-8\"","Content-Language":"en-US","Content-Transfer-Encoding":"7bit","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1775659,"web_url":"http://patchwork.ozlabs.org/comment/1775659/","msgid":"<189ae234-86c4-02ed-698c-5b447e27bf27@gmail.com>","list_archive_url":null,"date":"2017-09-26T16:06:03","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":18124,"url":"http://patchwork.ozlabs.org/api/people/18124/","name":"Dmitry Osipenko","email":"digetx@gmail.com"},"content":"Hi Jon,\n\nOn 26.09.2017 17:45, Jon Hunter wrote:\n> Hi Dmitry,\n> \n> On 26/09/17 00:22, Dmitry Osipenko wrote:\n>> AHB DMA controller presents on Tegra20/30 SoC's, it supports transfers\n>> memory <-> AHB bus peripherals as well as mem-to-mem transfers. 
Driver\n>> doesn't yet implement transfers larger than 64K and scatter-gather\n>> transfers that have NENT > 1, HW doesn't have native support for these\n>> cases.\n>>\n>> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>\n>> ---\n>>  drivers/dma/Kconfig           |   9 +\n>>  drivers/dma/Makefile          |   1 +\n>>  drivers/dma/tegra20-ahb-dma.c | 679 ++++++++++++++++++++++++++++++++++++++++++\n>>  3 files changed, 689 insertions(+)\n>>  create mode 100644 drivers/dma/tegra20-ahb-dma.c\n> \n> ...\n> \n>> diff --git a/drivers/dma/tegra20-ahb-dma.c b/drivers/dma/tegra20-ahb-dma.c\n>> new file mode 100644\n>> index 000000000000..8316d64e35e1\n>> --- /dev/null\n>> +++ b/drivers/dma/tegra20-ahb-dma.c\n>> @@ -0,0 +1,679 @@\n>> +/*\n>> + * Copyright 2017 Dmitry Osipenko <digetx@gmail.com>\n>> + *\n>> + * This program is free software; you can redistribute it and/or modify it\n>> + * under the terms and conditions of the GNU General Public License,\n>> + * version 2, as published by the Free Software Foundation.\n>> + *\n>> + * This program is distributed in the hope it will be useful, but WITHOUT\n>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for\n>> + * more details.\n>> + *\n>> + * You should have received a copy of the GNU General Public License\n>> + * along with this program.  
If not, see <http://www.gnu.org/licenses/>.\n>> + */\n>> +\n>> +#include <linux/clk.h>\n>> +#include <linux/delay.h>\n>> +#include <linux/interrupt.h>\n>> +#include <linux/io.h>\n>> +#include <linux/module.h>\n>> +#include <linux/of_device.h>\n>> +#include <linux/of_dma.h>\n>> +#include <linux/platform_device.h>\n>> +#include <linux/reset.h>\n>> +#include <linux/slab.h>\n>> +#include <linux/spinlock.h>\n>> +\n>> +#include \"dmaengine.h\"\n>> +\n>> +#define TEGRA_AHBDMA_CMD\t\t\t0x0\n>> +#define TEGRA_AHBDMA_CMD_ENABLE\t\t\tBIT(31)\n>> +\n>> +#define TEGRA_AHBDMA_IRQ_ENB_MASK\t\t0x20\n>> +#define TEGRA_AHBDMA_IRQ_ENB_CH(ch)\t\tBIT(ch)\n>> +\n>> +#define TEGRA_AHBDMA_CHANNEL_BASE(ch)\t\t(0x1000 + (ch) * 0x20)\n>> +\n>> +#define TEGRA_AHBDMA_CHANNEL_CSR\t\t0x0\n>> +#define TEGRA_AHBDMA_CHANNEL_ADDR_WRAP\t\tBIT(18)\n>> +#define TEGRA_AHBDMA_CHANNEL_FLOW\t\tBIT(24)\n>> +#define TEGRA_AHBDMA_CHANNEL_ONCE\t\tBIT(26)\n>> +#define TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB\t\tBIT(27)\n>> +#define TEGRA_AHBDMA_CHANNEL_IE_EOC\t\tBIT(30)\n>> +#define TEGRA_AHBDMA_CHANNEL_ENABLE\t\tBIT(31)\n>> +#define TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT\t16\n>> +#define TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK\t0xFFFC\n>> +\n>> +#define TEGRA_AHBDMA_CHANNEL_STA\t\t0x4\n>> +#define TEGRA_AHBDMA_CHANNEL_IS_EOC\t\tBIT(30)\n>> +\n>> +#define TEGRA_AHBDMA_CHANNEL_AHB_PTR\t\t0x10\n>> +\n>> +#define TEGRA_AHBDMA_CHANNEL_AHB_SEQ\t\t0x14\n>> +#define TEGRA_AHBDMA_CHANNEL_INTR_ENB\t\tBIT(31)\n>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT\t24\n>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_1\t2\n>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_4\t3\n>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_8\t4\n>> +\n>> +#define TEGRA_AHBDMA_CHANNEL_XMB_PTR\t\t0x18\n>> +\n>> +#define TEGRA_AHBDMA_BUS_WIDTH\t\t\tBIT(DMA_SLAVE_BUSWIDTH_4_BYTES)\n>> +\n>> +#define TEGRA_AHBDMA_DIRECTIONS\t\t\tBIT(DMA_DEV_TO_MEM) | \\\n>> +\t\t\t\t\t\tBIT(DMA_MEM_TO_DEV)\n>> +\n>> +struct tegra_ahbdma_tx_desc {\n>> +\tstruct dma_async_tx_descriptor desc;\n>> 
+\tstruct tasklet_struct tasklet;\n>> +\tstruct list_head node;\n> \n> Any reason why we cannot use the virt-dma framework for this driver? I\n> would hope it would simplify the driver a bit.\n> \n\nIIUC virt-dma is supposed to provide virtually unlimited number of channels.\nI've looked at it and decided that it would just add unnecessary functionality\nand, as a result, complexity. As I wrote in the cover-letter, it is supposed\nthat this driver would have only one consumer - the host1x. It shouldn't be\ndifficult to implement virt-dma later, if desired.  But again it is very\nunlikely that it would be needed.\n\n>> +\tenum dma_transfer_direction dir;\n>> +\tdma_addr_t mem_paddr;\n>> +\tunsigned long flags;\n>> +\tsize_t size;\n>> +\tbool in_fly;\n>> +\tbool cyclic;\n>> +};\n>> +\n>> +struct tegra_ahbdma_chan {\n>> +\tstruct dma_chan dma_chan;\n>> +\tstruct list_head active_list;\n>> +\tstruct list_head pending_list;\n>> +\tstruct completion idling;\n>> +\tvoid __iomem *regs;\n>> +\tspinlock_t lock;\n>> +\tunsigned int id;\n>> +};\n>> +\n>> +struct tegra_ahbdma {\n>> +\tstruct tegra_ahbdma_chan channels[4];\n>> +\tstruct dma_device dma_dev;\n>> +\tstruct reset_control *rst;\n>> +\tstruct clk *clk;\n>> +\tvoid __iomem *regs;\n>> +};\n>> +\n>> +static inline struct tegra_ahbdma *to_ahbdma(struct dma_device *dev)\n>> +{\n>> +\treturn container_of(dev, struct tegra_ahbdma, dma_dev);\n>> +}\n>> +\n>> +static inline struct tegra_ahbdma_chan *to_ahbdma_chan(struct dma_chan *chan)\n>> +{\n>> +\treturn container_of(chan, struct tegra_ahbdma_chan, dma_chan);\n>> +}\n>> +\n>> +static inline struct tegra_ahbdma_tx_desc *to_ahbdma_tx_desc(\n>> +\t\t\t\tstruct dma_async_tx_descriptor *tx)\n>> +{\n>> +\treturn container_of(tx, struct tegra_ahbdma_tx_desc, desc);\n>> +}\n>> +\n>> +static void tegra_ahbdma_submit_tx(struct tegra_ahbdma_chan *chan,\n>> +\t\t\t\t   struct tegra_ahbdma_tx_desc *tx)\n>> +{\n>> +\tu32 csr;\n>> +\n>> +\twritel_relaxed(tx->mem_paddr,\n>> +\t\t       
chan->regs + TEGRA_AHBDMA_CHANNEL_XMB_PTR);\n>> +\n>> +\tcsr = readl_relaxed(chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>> +\n>> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK;\n>> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB;\n>> +\tcsr |= TEGRA_AHBDMA_CHANNEL_ENABLE;\n>> +\tcsr |= TEGRA_AHBDMA_CHANNEL_IE_EOC;\n>> +\tcsr |= tx->size - sizeof(u32);\n>> +\n>> +\tif (tx->dir == DMA_DEV_TO_MEM)\n>> +\t\tcsr |= TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB;\n>> +\n>> +\tif (!tx->cyclic)\n>> +\t\tcsr |= TEGRA_AHBDMA_CHANNEL_ONCE;\n>> +\n>> +\twritel_relaxed(csr, chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>> +\n>> +\ttx->in_fly = true;\n>> +}\n>> +\n>> +static void tegra_ahbdma_tasklet(unsigned long data)\n>> +{\n>> +\tstruct tegra_ahbdma_tx_desc *tx = (struct tegra_ahbdma_tx_desc *)data;\n>> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n>> +\n>> +\tdmaengine_desc_get_callback_invoke(desc, NULL);\n>> +\n>> +\tif (!tx->cyclic && !dmaengine_desc_test_reuse(desc))\n>> +\t\tkfree(tx);\n>> +}\n>> +\n>> +static bool tegra_ahbdma_tx_completed(struct tegra_ahbdma_chan *chan,\n>> +\t\t\t\t      struct tegra_ahbdma_tx_desc *tx)\n>> +{\n>> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n>> +\tbool reuse = dmaengine_desc_test_reuse(desc);\n>> +\tbool interrupt = tx->flags & DMA_PREP_INTERRUPT;\n>> +\tbool completed = !tx->cyclic;\n>> +\n>> +\tif (completed)\n>> +\t\tdma_cookie_complete(desc);\n>> +\n>> +\tif (interrupt)\n>> +\t\ttasklet_schedule(&tx->tasklet);\n>> +\n>> +\tif (completed) {\n>> +\t\tlist_del(&tx->node);\n>> +\n>> +\t\tif (reuse)\n>> +\t\t\ttx->in_fly = false;\n>> +\n>> +\t\tif (!interrupt && !reuse)\n>> +\t\t\tkfree(tx);\n>> +\t}\n>> +\n>> +\treturn completed;\n>> +}\n>> +\n>> +static bool tegra_ahbdma_next_tx_issued(struct tegra_ahbdma_chan *chan)\n>> +{\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\n>> +\ttx = list_first_entry_or_null(&chan->active_list,\n>> +\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>> +\t\t\t\t      node);\n>> +\tif (tx)\n>> 
+\t\ttegra_ahbdma_submit_tx(chan, tx);\n>> +\n>> +\treturn !!tx;\n>> +}\n>> +\n>> +static void tegra_ahbdma_handle_channel(struct tegra_ahbdma_chan *chan)\n>> +{\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\tunsigned long flags;\n>> +\tu32 status;\n>> +\n>> +\tstatus = readl_relaxed(chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n>> +\tif (!(status & TEGRA_AHBDMA_CHANNEL_IS_EOC))\n>> +\t\treturn;\n>> +\n>> +\twritel_relaxed(TEGRA_AHBDMA_CHANNEL_IS_EOC,\n>> +\t\t       chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n>> +\n>> +\tspin_lock_irqsave(&chan->lock, flags);\n>> +\n>> +\tif (!completion_done(&chan->idling)) {\n>> +\t\ttx = list_first_entry(&chan->active_list,\n>> +\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>> +\t\t\t\t      node);\n>> +\n>> +\t\tif (tegra_ahbdma_tx_completed(chan, tx) &&\n>> +\t\t    !tegra_ahbdma_next_tx_issued(chan))\n>> +\t\t\tcomplete_all(&chan->idling);\n>> +\t}\n>> +\n>> +\tspin_unlock_irqrestore(&chan->lock, flags);\n>> +}\n>> +\n>> +static irqreturn_t tegra_ahbdma_isr(int irq, void *dev_id)\n>> +{\n>> +\tstruct tegra_ahbdma *tdma = dev_id;\n>> +\tunsigned int i;\n>> +\n>> +\tfor (i = 0; i < ARRAY_SIZE(tdma->channels); i++)\n>> +\t\ttegra_ahbdma_handle_channel(&tdma->channels[i]);\n>> +\n>> +\treturn IRQ_HANDLED;\n>> +}\n>> +\n>> +static dma_cookie_t tegra_ahbdma_tx_submit(struct dma_async_tx_descriptor *desc)\n>> +{\n>> +\tstruct tegra_ahbdma_tx_desc *tx = to_ahbdma_tx_desc(desc);\n>> +\tstruct tegra_ahbdma_chan *chan = to_ahbdma_chan(desc->chan);\n>> +\tdma_cookie_t cookie;\n>> +\n>> +\tcookie = dma_cookie_assign(desc);\n>> +\n>> +\tspin_lock_irq(&chan->lock);\n>> +\tlist_add_tail(&tx->node, &chan->pending_list);\n>> +\tspin_unlock_irq(&chan->lock);\n>> +\n>> +\treturn cookie;\n>> +}\n>> +\n>> +static int tegra_ahbdma_tx_desc_free(struct dma_async_tx_descriptor *desc)\n>> +{\n>> +\tkfree(to_ahbdma_tx_desc(desc));\n>> +\n>> +\treturn 0;\n>> +}\n>> +\n>> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_slave_sg(\n>> +\t\t\t\t\tstruct 
dma_chan *chan,\n>> +\t\t\t\t\tstruct scatterlist *sgl,\n>> +\t\t\t\t\tunsigned int sg_len,\n>> +\t\t\t\t\tenum dma_transfer_direction dir,\n>> +\t\t\t\t\tunsigned long flags,\n>> +\t\t\t\t\tvoid *context)\n>> +{\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\n>> +\t/* unimplemented */\n>> +\tif (sg_len != 1 || sg_dma_len(sgl) > SZ_64K)\n>> +\t\treturn NULL;\n>> +\n>> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n>> +\tif (!tx)\n>> +\t\treturn NULL;\n>> +\n>> +\tdma_async_tx_descriptor_init(&tx->desc, chan);\n>> +\n>> +\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n>> +\ttx->desc.desc_free\t= tegra_ahbdma_tx_desc_free;\n>> +\ttx->mem_paddr\t\t= sg_dma_address(sgl);\n>> +\ttx->size\t\t= sg_dma_len(sgl);\n>> +\ttx->flags\t\t= flags;\n>> +\ttx->dir\t\t\t= dir;\n>> +\n>> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n>> +\n>> +\treturn &tx->desc;\n>> +}\n>> +\n>> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_dma_cyclic(\n>> +\t\t\t\t\tstruct dma_chan *chan,\n>> +\t\t\t\t\tdma_addr_t buf_addr,\n>> +\t\t\t\t\tsize_t buf_len,\n>> +\t\t\t\t\tsize_t period_len,\n>> +\t\t\t\t\tenum dma_transfer_direction dir,\n>> +\t\t\t\t\tunsigned long flags)\n>> +{\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\n>> +\t/* unimplemented */\n>> +\tif (buf_len != period_len || buf_len > SZ_64K)\n>> +\t\treturn NULL;\n>> +\n>> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n>> +\tif (!tx)\n>> +\t\treturn NULL;\n>> +\n>> +\tdma_async_tx_descriptor_init(&tx->desc, chan);\n>> +\n>> +\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n>> +\ttx->mem_paddr\t\t= buf_addr;\n>> +\ttx->size\t\t= buf_len;\n>> +\ttx->flags\t\t= flags;\n>> +\ttx->cyclic\t\t= true;\n>> +\ttx->dir\t\t\t= dir;\n>> +\n>> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n>> +\n>> +\treturn &tx->desc;\n>> +}\n>> +\n>> +static void tegra_ahbdma_issue_pending(struct dma_chan *chan)\n>> +{\n>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>> +\tstruct 
tegra_ahbdma_tx_desc *tx;\n>> +\tstruct list_head *entry, *tmp;\n>> +\tunsigned long flags;\n>> +\n>> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n>> +\n>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list)\n>> +\t\tlist_move_tail(entry, &ahbdma_chan->active_list);\n>> +\n>> +\tif (completion_done(&ahbdma_chan->idling)) {\n>> +\t\ttx = list_first_entry_or_null(&ahbdma_chan->active_list,\n>> +\t\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>> +\t\t\t\t\t      node);\n>> +\t\tif (tx) {\n>> +\t\t\ttegra_ahbdma_submit_tx(ahbdma_chan, tx);\n>> +\t\t\treinit_completion(&ahbdma_chan->idling);\n>> +\t\t}\n>> +\t}\n>> +\n>> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n>> +}\n>> +\n>> +static enum dma_status tegra_ahbdma_tx_status(struct dma_chan *chan,\n>> +\t\t\t\t\t      dma_cookie_t cookie,\n>> +\t\t\t\t\t      struct dma_tx_state *state)\n>> +{\n>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\tenum dma_status cookie_status;\n>> +\tunsigned long flags;\n>> +\tsize_t residual;\n>> +\tu32 status;\n>> +\n>> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n>> +\n>> +\tcookie_status = dma_cookie_status(chan, cookie, state);\n>> +\tif (cookie_status != DMA_COMPLETE) {\n>> +\t\tlist_for_each_entry(tx, &ahbdma_chan->active_list, node) {\n>> +\t\t\tif (tx->desc.cookie == cookie)\n>> +\t\t\t\tgoto found;\n>> +\t\t}\n>> +\t}\n>> +\n>> +\tgoto unlock;\n>> +\n>> +found:\n>> +\tif (tx->in_fly) {\n>> +\t\tstatus = readl_relaxed(\n>> +\t\t\tahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n>> +\t\tstatus  &= TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK;\n>> +\n>> +\t\tresidual = status;\n>> +\t} else\n>> +\t\tresidual = tx->size;\n>> +\n>> +\tdma_set_residue(state, residual);\n>> +\n>> +unlock:\n>> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n>> +\n>> +\treturn cookie_status;\n>> +}\n>> +\n>> +static int tegra_ahbdma_terminate_all(struct dma_chan *chan)\n>> +{\n>> +\tstruct tegra_ahbdma_chan 
*ahbdma_chan = to_ahbdma_chan(chan);\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\tstruct list_head *entry, *tmp;\n>> +\tu32 csr;\n>> +\n>> +\tspin_lock_irq(&ahbdma_chan->lock);\n>> +\n>> +\tcsr = readl_relaxed(ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_ENABLE;\n>> +\n>> +\twritel_relaxed(csr, ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>> +\n>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->active_list) {\n>> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n>> +\t\tlist_del(entry);\n>> +\t\tkfree(tx);\n>> +\t}\n>> +\n>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list) {\n>> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n>> +\t\tlist_del(entry);\n>> +\t\tkfree(tx);\n>> +\t}\n>> +\n>> +\tcomplete_all(&ahbdma_chan->idling);\n>> +\n>> +\tspin_unlock_irq(&ahbdma_chan->lock);\n>> +\n>> +\treturn 0;\n>> +}\n>> +\n>> +static int tegra_ahbdma_config(struct dma_chan *chan,\n>> +\t\t\t       struct dma_slave_config *sconfig)\n>> +{\n>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>> +\tenum dma_transfer_direction dir = sconfig->direction;\n>> +\tu32 burst, ahb_seq, ahb_addr;\n>> +\n>> +\tif (sconfig->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||\n>> +\t    sconfig->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)\n>> +\t\treturn -EINVAL;\n>> +\n>> +\tif (dir == DMA_DEV_TO_MEM) {\n>> +\t\tburst    = sconfig->src_maxburst;\n>> +\t\tahb_addr = sconfig->src_addr;\n>> +\t} else {\n>> +\t\tburst    = sconfig->dst_maxburst;\n>> +\t\tahb_addr = sconfig->dst_addr;\n>> +\t}\n>> +\n>> +\tswitch (burst) {\n>> +\tcase 1: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_1; break;\n>> +\tcase 4: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_4; break;\n>> +\tcase 8: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_8; break;\n>> +\tdefault:\n>> +\t\treturn -EINVAL;\n>> +\t}\n>> +\n>> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n>> +\tahb_seq 
|= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n>> +\n>> +\twritel_relaxed(ahb_seq,\n>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n>> +\n>> +\twritel_relaxed(ahb_addr,\n>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n>> +\n>> +\treturn 0;\n>> +}\n>> +\n>> +static void tegra_ahbdma_synchronize(struct dma_chan *chan)\n>> +{\n>> +\twait_for_completion(&to_ahbdma_chan(chan)->idling);\n>> +}\n>> +\n>> +static void tegra_ahbdma_free_chan_resources(struct dma_chan *chan)\n>> +{\n>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\tstruct list_head *entry, *tmp;\n>> +\n>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list) {\n>> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n>> +\t\tlist_del(entry);\n>> +\t\tkfree(tx);\n>> +\t}\n>> +}\n>> +\n>> +static void tegra_ahbdma_init_channel(struct tegra_ahbdma *tdma,\n>> +\t\t\t\t      unsigned int chan_id)\n>> +{\n>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = &tdma->channels[chan_id];\n>> +\tstruct dma_chan *dma_chan = &ahbdma_chan->dma_chan;\n>> +\tstruct dma_device *dma_dev = &tdma->dma_dev;\n>> +\n>> +\tINIT_LIST_HEAD(&ahbdma_chan->active_list);\n>> +\tINIT_LIST_HEAD(&ahbdma_chan->pending_list);\n>> +\tinit_completion(&ahbdma_chan->idling);\n>> +\tspin_lock_init(&ahbdma_chan->lock);\n>> +\tcomplete(&ahbdma_chan->idling);\n>> +\n>> +\tahbdma_chan->regs = tdma->regs + TEGRA_AHBDMA_CHANNEL_BASE(chan_id);\n>> +\tahbdma_chan->id = chan_id;\n>> +\n>> +\tdma_cookie_init(dma_chan);\n>> +\tdma_chan->device = dma_dev;\n>> +\n>> +\tlist_add_tail(&dma_chan->device_node, &dma_dev->channels);\n>> +}\n>> +\n>> +static struct dma_chan *tegra_ahbdma_of_xlate(struct of_phandle_args *dma_spec,\n>> +\t\t\t\t\t      struct of_dma *ofdma)\n>> +{\n>> +\tstruct tegra_ahbdma *tdma = ofdma->of_dma_data;\n>> +\tstruct dma_chan *chan;\n>> +\tu32 csr;\n>> +\n>> +\tchan = dma_get_any_slave_channel(&tdma->dma_dev);\n>> +\tif (!chan)\n>> 
+\t\treturn NULL;\n>> +\n>> +\t/* enable channels flow control */\n>> +\tif (dma_spec->args_count == 1) {\n> \n> The DT doc says #dma-cells should be '1' and so if not equal 1, is this\n> not an error?\n> \n\nI wanted to differentiate slave/master modes here. But if we'd want to add\nTRIG_SEL as another cell, then it would probably be worth implementing custom\nDMA configuration options, as the documentation suggests - wrapping the generic\ndma_slave_config into a custom one. On the other hand, that would probably add\nunused functionality to the driver.\n\n>> +\t\tcsr  = TEGRA_AHBDMA_CHANNEL_FLOW;\n>> +\t\tcsr |= dma_spec->args[0] << TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT;\n> \n> What about the TRIG_REQ field?\n> \n\nNot implemented, there is no test case for it yet.\n\n>> +\n>> +\t\twritel_relaxed(csr,\n>> +\t\t\tto_ahbdma_chan(chan)->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>> +\t}\n>> +\t\n>> +\treturn chan;\n>> +}\n>> +\n>> +static int tegra_ahbdma_init_hw(struct tegra_ahbdma *tdma, struct device *dev)\n>> +{\n>> +\tint err;\n>> +\n>> +\terr = reset_control_assert(tdma->rst);\n>> +\tif (err) {\n>> +\t\tdev_err(dev, \"Failed to assert reset: %d\\n\", err);\n>> +\t\treturn err;\n>> +\t}\n>> +\n>> +\terr = clk_prepare_enable(tdma->clk);\n>> +\tif (err) {\n>> +\t\tdev_err(dev, \"Failed to enable clock: %d\\n\", err);\n>> +\t\treturn err;\n>> +\t}\n>> +\n>> +\tusleep_range(1000, 2000);\n>> +\n>> +\terr = reset_control_deassert(tdma->rst);\n>> +\tif (err) {\n>> +\t\tdev_err(dev, \"Failed to deassert reset: %d\\n\", err);\n>> +\t\treturn err;\n>> +\t}\n>> +\n>> +\twritel_relaxed(TEGRA_AHBDMA_CMD_ENABLE, tdma->regs + TEGRA_AHBDMA_CMD);\n>> +\n>> +\twritel_relaxed(TEGRA_AHBDMA_IRQ_ENB_CH(0) |\n>> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(1) |\n>> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(2) |\n>> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(3),\n>> +\t\t       tdma->regs + TEGRA_AHBDMA_IRQ_ENB_MASK);\n>> +\n>> +\treturn 0;\n>> +}\n> \n> Personally I would use the pm_runtime callbacks for this sort of 
thing\n> and ...\n> \n\nI decided that it would probably be better to implement PM later if needed. I'm\nnot sure whether the DMA controller consumes any substantial amount of power\nwhile idling. If it doesn't, why bother? Unnecessary power management would just\ncause the CPU to waste its cycles (and power) doing PM.\n\n>> +static int tegra_ahbdma_probe(struct platform_device *pdev)\n>> +{\n>> +\tstruct dma_device *dma_dev;\n>> +\tstruct tegra_ahbdma *tdma;\n>> +\tstruct resource *res_regs;\n>> +\tunsigned int i;\n>> +\tint irq;\n>> +\tint err;\n>> +\n>> +\ttdma = devm_kzalloc(&pdev->dev, sizeof(*tdma), GFP_KERNEL);\n>> +\tif (!tdma)\n>> +\t\treturn -ENOMEM;\n>> +\n>> +\tirq = platform_get_irq(pdev, 0);\n>> +\tif (irq < 0) {\n>> +\t\tdev_err(&pdev->dev, \"Failed to get IRQ\\n\");\n>> +\t\treturn irq;\n>> +\t}\n>> +\n>> +\terr = devm_request_irq(&pdev->dev, irq, tegra_ahbdma_isr, 0,\n>> +\t\t\t       dev_name(&pdev->dev), tdma);\n>> +\tif (err) {\n>> +\t\tdev_err(&pdev->dev, \"Failed to request IRQ\\n\");\n>> +\t\treturn -ENODEV;\n>> +\t}\n>> +\n>> +\tres_regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);\n>> +\tif (!res_regs)\n>> +\t\treturn -ENODEV;\n>> +\n>> +\ttdma->regs = devm_ioremap_resource(&pdev->dev, res_regs);\n>> +\tif (IS_ERR(tdma->regs))\n>> +\t\treturn PTR_ERR(tdma->regs);\n>> +\n>> +\ttdma->clk = devm_clk_get(&pdev->dev, NULL);\n>> +\tif (IS_ERR(tdma->clk)) {\n>> +\t\tdev_err(&pdev->dev, \"Failed to get AHB-DMA clock\\n\");\n>> +\t\treturn PTR_ERR(tdma->clk);\n>> +\t}\n>> +\n>> +\ttdma->rst = devm_reset_control_get(&pdev->dev, NULL);\n>> +\tif (IS_ERR(tdma->rst)) {\n>> +\t\tdev_err(&pdev->dev, \"Failed to get AHB-DMA reset\\n\");\n>> +\t\treturn PTR_ERR(tdma->rst);\n>> +\t}\n>> +\n>> +\terr = tegra_ahbdma_init_hw(tdma, &pdev->dev);\n>> +\tif (err)\n>> +\t\treturn err;\n> \n> ... here it looks like we turn the clocks on and leave them on. I would\n> rather that we turn them on when the DMA channel is requested and turn\n> them off again when freed. 
Again would be good to use pm_runtime APIs\n> for this.\n> \n\nAgain not sure about it :)\n\n>> +\tdma_dev = &tdma->dma_dev;\n>> +\n>> +\tINIT_LIST_HEAD(&dma_dev->channels);\n>> +\n>> +\tfor (i = 0; i < ARRAY_SIZE(tdma->channels); i++)\n>> +\t\ttegra_ahbdma_init_channel(tdma, i);\n>> +\n>> +\tdma_cap_set(DMA_PRIVATE, dma_dev->cap_mask);\n>> +\tdma_cap_set(DMA_CYCLIC, dma_dev->cap_mask);\n>> +\tdma_cap_set(DMA_SLAVE, dma_dev->cap_mask);\n>> +\n>> +\tdma_dev->max_burst\t\t= 8;\n>> +\tdma_dev->directions\t\t= TEGRA_AHBDMA_DIRECTIONS;\n>> +\tdma_dev->src_addr_widths\t= TEGRA_AHBDMA_BUS_WIDTH;\n>> +\tdma_dev->dst_addr_widths\t= TEGRA_AHBDMA_BUS_WIDTH;\n>> +\tdma_dev->descriptor_reuse\t= true;\n>> +\tdma_dev->residue_granularity\t= DMA_RESIDUE_GRANULARITY_BURST;\n>> +\tdma_dev->device_free_chan_resources = tegra_ahbdma_free_chan_resources;\n>> +\tdma_dev->device_prep_slave_sg\t= tegra_ahbdma_prep_slave_sg;\n>> +\tdma_dev->device_prep_dma_cyclic\t= tegra_ahbdma_prep_dma_cyclic;\n>> +\tdma_dev->device_terminate_all\t= tegra_ahbdma_terminate_all;\n>> +\tdma_dev->device_issue_pending\t= tegra_ahbdma_issue_pending;\n>> +\tdma_dev->device_tx_status\t= tegra_ahbdma_tx_status;\n>> +\tdma_dev->device_config\t\t= tegra_ahbdma_config;\n>> +\tdma_dev->device_synchronize\t= tegra_ahbdma_synchronize;\n>> +\tdma_dev->dev\t\t\t= &pdev->dev;\n>> +\n>> +\terr = dma_async_device_register(dma_dev);\n>> +\tif (err) {\n>> +\t\tdev_err(&pdev->dev, \"Device registration failed %d\\n\", err);\n>> +\t\treturn err;\n>> +\t}\n>> +\n>> +\terr = of_dma_controller_register(pdev->dev.of_node,\n>> +\t\t\t\t\t tegra_ahbdma_of_xlate, tdma);\n>> +\tif (err) {\n>> +\t\tdev_err(&pdev->dev, \"OF registration failed %d\\n\", err);\n>> +\t\tdma_async_device_unregister(dma_dev);\n>> +\t\treturn err;\n>> +\t}\n>> +\n>> +\tplatform_set_drvdata(pdev, tdma);\n>> +\n>> +\treturn 0;\n>> +}\n>> +\n>> +static int tegra_ahbdma_remove(struct platform_device *pdev)\n>> +{\n>> +\tstruct tegra_ahbdma *tdma = 
platform_get_drvdata(pdev);\n>> +\n>> +\tof_dma_controller_free(pdev->dev.of_node);\n>> +\tdma_async_device_unregister(&tdma->dma_dev);\n>> +\tclk_disable_unprepare(tdma->clk);\n>> +\n>> +\treturn 0;\n>> +}\n>> +\n>> +static const struct of_device_id tegra_ahbdma_of_match[] = {\n>> +\t{ .compatible = \"nvidia,tegra20-ahbdma\" },\n>> +\t{ },\n>> +};\n>> +MODULE_DEVICE_TABLE(of, tegra_ahbdma_of_match);\n>> +\n>> +static struct platform_driver tegra_ahbdma_driver = {\n>> +\t.driver = {\n>> +\t\t.name\t= \"tegra-ahbdma\",\n>> +\t\t.of_match_table = tegra_ahbdma_of_match,\n> \n> It would be nice to have a suspend/resume handler too. We could do a\n> similar thing to the APB dma driver.\n> \n\nIt is not strictly necessary because LP0 isn't implemented by the core arch. I've\ntested LP1 and it works fine that way. I'd prefer to implement suspend/resume\nlater; we can't really test it properly without LP0.\n\n>> +\t},\n>> +\t.probe\t= tegra_ahbdma_probe,\n>> +\t.remove\t= tegra_ahbdma_remove,\n>> +};\n>> +module_platform_driver(tegra_ahbdma_driver);\n>> +\n>> +MODULE_DESCRIPTION(\"NVIDIA Tegra AHB DMA Controller driver\");\n>> +MODULE_AUTHOR(\"Dmitry Osipenko <digetx@gmail.com>\");\n>> +MODULE_LICENSE(\"GPL\");","headers":{"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y1m5w5zSnz9t5C\n\tfor <incoming@patchwork.ozlabs.org>;\n\tWed, 27 Sep 2017 02:06:12 +1000 
(AEST)"],"Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","To":"Jon Hunter <jonathanh@nvidia.com>,\n\tThierry Reding <thierry.reding@gmail.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\tPeter De Schrijver <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette <mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>, Rob Herring <robh+dt@kernel.org>,\n\tVinod Koul <vinod.koul@intel.com>","Cc":"linux-tegra@vger.kernel.org, devicetree@vger.kernel.org,\n\tdmaengine@vger.kernel.org, linux-clk@vger.kernel.org,\n\tlinux-kernel@vger.kernel.org","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>\n\t<481add20-9cea-a91a-e72c-45a824362e64@nvidia.com>","From":"Dmitry Osipenko <digetx@gmail.com>","Message-ID":"<189ae234-86c4-02ed-698c-5b447e27bf27@gmail.com>","Date":"Tue, 26 Sep 2017 19:06:03 +0300","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) 
Gecko/20100101\n\tThunderbird/52.3.0","MIME-Version":"1.0","In-Reply-To":"<481add20-9cea-a91a-e72c-45a824362e64@nvidia.com>","Content-Type":"text/plain; charset=utf-8","Content-Language":"en-US","Content-Transfer-Encoding":"7bit","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1775883,"web_url":"http://patchwork.ozlabs.org/comment/1775883/","msgid":"<8fa6108d-421d-8054-c05c-9681a0e25518@nvidia.com>","list_archive_url":null,"date":"2017-09-26T21:37:32","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":66273,"url":"http://patchwork.ozlabs.org/api/people/66273/","name":"Jon Hunter","email":"jonathanh@nvidia.com"},"content":"Hi Dmitry,\n\nOn 26/09/17 17:06, Dmitry Osipenko wrote:\n> Hi Jon,\n> \n> On 26.09.2017 17:45, Jon Hunter wrote:\n>> Hi Dmitry,\n>>\n>> On 26/09/17 00:22, Dmitry Osipenko wrote:\n>>> AHB DMA controller presents on Tegra20/30 SoC's, it supports transfers\n>>> memory <-> AHB bus peripherals as well as mem-to-mem transfers. 
Driver\n>>> doesn't yet implement transfers larger than 64K and scatter-gather\n>>> transfers that have NENT > 1, HW doesn't have native support for these\n>>> cases.\n>>>\n>>> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>\n>>> ---\n>>>  drivers/dma/Kconfig           |   9 +\n>>>  drivers/dma/Makefile          |   1 +\n>>>  drivers/dma/tegra20-ahb-dma.c | 679 ++++++++++++++++++++++++++++++++++++++++++\n>>>  3 files changed, 689 insertions(+)\n>>>  create mode 100644 drivers/dma/tegra20-ahb-dma.c\n>>\n>> ...\n>>\n>>> diff --git a/drivers/dma/tegra20-ahb-dma.c b/drivers/dma/tegra20-ahb-dma.c\n>>> new file mode 100644\n>>> index 000000000000..8316d64e35e1\n>>> --- /dev/null\n>>> +++ b/drivers/dma/tegra20-ahb-dma.c\n>>> @@ -0,0 +1,679 @@\n>>> +/*\n>>> + * Copyright 2017 Dmitry Osipenko <digetx@gmail.com>\n>>> + *\n>>> + * This program is free software; you can redistribute it and/or modify it\n>>> + * under the terms and conditions of the GNU General Public License,\n>>> + * version 2, as published by the Free Software Foundation.\n>>> + *\n>>> + * This program is distributed in the hope it will be useful, but WITHOUT\n>>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n>>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for\n>>> + * more details.\n>>> + *\n>>> + * You should have received a copy of the GNU General Public License\n>>> + * along with this program.  
If not, see <http://www.gnu.org/licenses/>.\n>>> + */\n>>> +\n>>> +#include <linux/clk.h>\n>>> +#include <linux/delay.h>\n>>> +#include <linux/interrupt.h>\n>>> +#include <linux/io.h>\n>>> +#include <linux/module.h>\n>>> +#include <linux/of_device.h>\n>>> +#include <linux/of_dma.h>\n>>> +#include <linux/platform_device.h>\n>>> +#include <linux/reset.h>\n>>> +#include <linux/slab.h>\n>>> +#include <linux/spinlock.h>\n>>> +\n>>> +#include \"dmaengine.h\"\n>>> +\n>>> +#define TEGRA_AHBDMA_CMD\t\t\t0x0\n>>> +#define TEGRA_AHBDMA_CMD_ENABLE\t\t\tBIT(31)\n>>> +\n>>> +#define TEGRA_AHBDMA_IRQ_ENB_MASK\t\t0x20\n>>> +#define TEGRA_AHBDMA_IRQ_ENB_CH(ch)\t\tBIT(ch)\n>>> +\n>>> +#define TEGRA_AHBDMA_CHANNEL_BASE(ch)\t\t(0x1000 + (ch) * 0x20)\n>>> +\n>>> +#define TEGRA_AHBDMA_CHANNEL_CSR\t\t0x0\n>>> +#define TEGRA_AHBDMA_CHANNEL_ADDR_WRAP\t\tBIT(18)\n>>> +#define TEGRA_AHBDMA_CHANNEL_FLOW\t\tBIT(24)\n>>> +#define TEGRA_AHBDMA_CHANNEL_ONCE\t\tBIT(26)\n>>> +#define TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB\t\tBIT(27)\n>>> +#define TEGRA_AHBDMA_CHANNEL_IE_EOC\t\tBIT(30)\n>>> +#define TEGRA_AHBDMA_CHANNEL_ENABLE\t\tBIT(31)\n>>> +#define TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT\t16\n>>> +#define TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK\t0xFFFC\n>>> +\n>>> +#define TEGRA_AHBDMA_CHANNEL_STA\t\t0x4\n>>> +#define TEGRA_AHBDMA_CHANNEL_IS_EOC\t\tBIT(30)\n>>> +\n>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_PTR\t\t0x10\n>>> +\n>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_SEQ\t\t0x14\n>>> +#define TEGRA_AHBDMA_CHANNEL_INTR_ENB\t\tBIT(31)\n>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT\t24\n>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_1\t2\n>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_4\t3\n>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_8\t4\n>>> +\n>>> +#define TEGRA_AHBDMA_CHANNEL_XMB_PTR\t\t0x18\n>>> +\n>>> +#define TEGRA_AHBDMA_BUS_WIDTH\t\t\tBIT(DMA_SLAVE_BUSWIDTH_4_BYTES)\n>>> +\n>>> +#define TEGRA_AHBDMA_DIRECTIONS\t\t\tBIT(DMA_DEV_TO_MEM) | \\\n>>> +\t\t\t\t\t\tBIT(DMA_MEM_TO_DEV)\n>>> +\n>>> +struct tegra_ahbdma_tx_desc 
{\n>>> +\tstruct dma_async_tx_descriptor desc;\n>>> +\tstruct tasklet_struct tasklet;\n>>> +\tstruct list_head node;\n>>\n>> Any reason why we cannot use the virt-dma framework for this driver? I\n>> would hope it would simplify the driver a bit.\n>>\n> \n> IIUC virt-dma is supposed to provide virtually unlimited number of channels.\n> I've looked at it and decided that it would just add unnecessary functionality\n> and, as a result, complexity. As I wrote in the cover-letter, it is supposed\n> that this driver would have only one consumer - the host1x. It shouldn't be\n> difficult to implement virt-dma later, if desired.  But again it is very\n> unlikely that it would be needed.\n\nI think that the biggest benefit is that is simplifies the linked list\nmanagement. See the tegra210-adma driver.\n\n>>> +\tenum dma_transfer_direction dir;\n>>> +\tdma_addr_t mem_paddr;\n>>> +\tunsigned long flags;\n>>> +\tsize_t size;\n>>> +\tbool in_fly;\n>>> +\tbool cyclic;\n>>> +};\n>>> +\n>>> +struct tegra_ahbdma_chan {\n>>> +\tstruct dma_chan dma_chan;\n>>> +\tstruct list_head active_list;\n>>> +\tstruct list_head pending_list;\n>>> +\tstruct completion idling;\n>>> +\tvoid __iomem *regs;\n>>> +\tspinlock_t lock;\n>>> +\tunsigned int id;\n>>> +};\n>>> +\n>>> +struct tegra_ahbdma {\n>>> +\tstruct tegra_ahbdma_chan channels[4];\n>>> +\tstruct dma_device dma_dev;\n>>> +\tstruct reset_control *rst;\n>>> +\tstruct clk *clk;\n>>> +\tvoid __iomem *regs;\n>>> +};\n>>> +\n>>> +static inline struct tegra_ahbdma *to_ahbdma(struct dma_device *dev)\n>>> +{\n>>> +\treturn container_of(dev, struct tegra_ahbdma, dma_dev);\n>>> +}\n>>> +\n>>> +static inline struct tegra_ahbdma_chan *to_ahbdma_chan(struct dma_chan *chan)\n>>> +{\n>>> +\treturn container_of(chan, struct tegra_ahbdma_chan, dma_chan);\n>>> +}\n>>> +\n>>> +static inline struct tegra_ahbdma_tx_desc *to_ahbdma_tx_desc(\n>>> +\t\t\t\tstruct dma_async_tx_descriptor *tx)\n>>> +{\n>>> +\treturn container_of(tx, struct tegra_ahbdma_tx_desc, 
desc);\n>>> +}\n>>> +\n>>> +static void tegra_ahbdma_submit_tx(struct tegra_ahbdma_chan *chan,\n>>> +\t\t\t\t   struct tegra_ahbdma_tx_desc *tx)\n>>> +{\n>>> +\tu32 csr;\n>>> +\n>>> +\twritel_relaxed(tx->mem_paddr,\n>>> +\t\t       chan->regs + TEGRA_AHBDMA_CHANNEL_XMB_PTR);\n>>> +\n>>> +\tcsr = readl_relaxed(chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>> +\n>>> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK;\n>>> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB;\n>>> +\tcsr |= TEGRA_AHBDMA_CHANNEL_ENABLE;\n>>> +\tcsr |= TEGRA_AHBDMA_CHANNEL_IE_EOC;\n>>> +\tcsr |= tx->size - sizeof(u32);\n>>> +\n>>> +\tif (tx->dir == DMA_DEV_TO_MEM)\n>>> +\t\tcsr |= TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB;\n>>> +\n>>> +\tif (!tx->cyclic)\n>>> +\t\tcsr |= TEGRA_AHBDMA_CHANNEL_ONCE;\n>>> +\n>>> +\twritel_relaxed(csr, chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>> +\n>>> +\ttx->in_fly = true;\n>>> +}\n>>> +\n>>> +static void tegra_ahbdma_tasklet(unsigned long data)\n>>> +{\n>>> +\tstruct tegra_ahbdma_tx_desc *tx = (struct tegra_ahbdma_tx_desc *)data;\n>>> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n>>> +\n>>> +\tdmaengine_desc_get_callback_invoke(desc, NULL);\n>>> +\n>>> +\tif (!tx->cyclic && !dmaengine_desc_test_reuse(desc))\n>>> +\t\tkfree(tx);\n>>> +}\n>>> +\n>>> +static bool tegra_ahbdma_tx_completed(struct tegra_ahbdma_chan *chan,\n>>> +\t\t\t\t      struct tegra_ahbdma_tx_desc *tx)\n>>> +{\n>>> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n>>> +\tbool reuse = dmaengine_desc_test_reuse(desc);\n>>> +\tbool interrupt = tx->flags & DMA_PREP_INTERRUPT;\n>>> +\tbool completed = !tx->cyclic;\n>>> +\n>>> +\tif (completed)\n>>> +\t\tdma_cookie_complete(desc);\n>>> +\n>>> +\tif (interrupt)\n>>> +\t\ttasklet_schedule(&tx->tasklet);\n>>> +\n>>> +\tif (completed) {\n>>> +\t\tlist_del(&tx->node);\n>>> +\n>>> +\t\tif (reuse)\n>>> +\t\t\ttx->in_fly = false;\n>>> +\n>>> +\t\tif (!interrupt && !reuse)\n>>> +\t\t\tkfree(tx);\n>>> +\t}\n>>> +\n>>> +\treturn completed;\n>>> +}\n>>> +\n>>> +static bool 
tegra_ahbdma_next_tx_issued(struct tegra_ahbdma_chan *chan)\n>>> +{\n>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>> +\n>>> +\ttx = list_first_entry_or_null(&chan->active_list,\n>>> +\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>>> +\t\t\t\t      node);\n>>> +\tif (tx)\n>>> +\t\ttegra_ahbdma_submit_tx(chan, tx);\n>>> +\n>>> +\treturn !!tx;\n>>> +}\n>>> +\n>>> +static void tegra_ahbdma_handle_channel(struct tegra_ahbdma_chan *chan)\n>>> +{\n>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>> +\tunsigned long flags;\n>>> +\tu32 status;\n>>> +\n>>> +\tstatus = readl_relaxed(chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n>>> +\tif (!(status & TEGRA_AHBDMA_CHANNEL_IS_EOC))\n>>> +\t\treturn;\n>>> +\n>>> +\twritel_relaxed(TEGRA_AHBDMA_CHANNEL_IS_EOC,\n>>> +\t\t       chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n>>> +\n>>> +\tspin_lock_irqsave(&chan->lock, flags);\n>>> +\n>>> +\tif (!completion_done(&chan->idling)) {\n>>> +\t\ttx = list_first_entry(&chan->active_list,\n>>> +\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>>> +\t\t\t\t      node);\n>>> +\n>>> +\t\tif (tegra_ahbdma_tx_completed(chan, tx) &&\n>>> +\t\t    !tegra_ahbdma_next_tx_issued(chan))\n>>> +\t\t\tcomplete_all(&chan->idling);\n>>> +\t}\n>>> +\n>>> +\tspin_unlock_irqrestore(&chan->lock, flags);\n>>> +}\n>>> +\n>>> +static irqreturn_t tegra_ahbdma_isr(int irq, void *dev_id)\n>>> +{\n>>> +\tstruct tegra_ahbdma *tdma = dev_id;\n>>> +\tunsigned int i;\n>>> +\n>>> +\tfor (i = 0; i < ARRAY_SIZE(tdma->channels); i++)\n>>> +\t\ttegra_ahbdma_handle_channel(&tdma->channels[i]);\n>>> +\n>>> +\treturn IRQ_HANDLED;\n>>> +}\n>>> +\n>>> +static dma_cookie_t tegra_ahbdma_tx_submit(struct dma_async_tx_descriptor *desc)\n>>> +{\n>>> +\tstruct tegra_ahbdma_tx_desc *tx = to_ahbdma_tx_desc(desc);\n>>> +\tstruct tegra_ahbdma_chan *chan = to_ahbdma_chan(desc->chan);\n>>> +\tdma_cookie_t cookie;\n>>> +\n>>> +\tcookie = dma_cookie_assign(desc);\n>>> +\n>>> +\tspin_lock_irq(&chan->lock);\n>>> +\tlist_add_tail(&tx->node, &chan->pending_list);\n>>> 
+\tspin_unlock_irq(&chan->lock);\n>>> +\n>>> +\treturn cookie;\n>>> +}\n>>> +\n>>> +static int tegra_ahbdma_tx_desc_free(struct dma_async_tx_descriptor *desc)\n>>> +{\n>>> +\tkfree(to_ahbdma_tx_desc(desc));\n>>> +\n>>> +\treturn 0;\n>>> +}\n>>> +\n>>> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_slave_sg(\n>>> +\t\t\t\t\tstruct dma_chan *chan,\n>>> +\t\t\t\t\tstruct scatterlist *sgl,\n>>> +\t\t\t\t\tunsigned int sg_len,\n>>> +\t\t\t\t\tenum dma_transfer_direction dir,\n>>> +\t\t\t\t\tunsigned long flags,\n>>> +\t\t\t\t\tvoid *context)\n>>> +{\n>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>> +\n>>> +\t/* unimplemented */\n>>> +\tif (sg_len != 1 || sg_dma_len(sgl) > SZ_64K)\n>>> +\t\treturn NULL;\n>>> +\n>>> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n>>> +\tif (!tx)\n>>> +\t\treturn NULL;\n>>> +\n>>> +\tdma_async_tx_descriptor_init(&tx->desc, chan);\n>>> +\n>>> +\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n>>> +\ttx->desc.desc_free\t= tegra_ahbdma_tx_desc_free;\n>>> +\ttx->mem_paddr\t\t= sg_dma_address(sgl);\n>>> +\ttx->size\t\t= sg_dma_len(sgl);\n>>> +\ttx->flags\t\t= flags;\n>>> +\ttx->dir\t\t\t= dir;\n>>> +\n>>> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n>>> +\n>>> +\treturn &tx->desc;\n>>> +}\n>>> +\n>>> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_dma_cyclic(\n>>> +\t\t\t\t\tstruct dma_chan *chan,\n>>> +\t\t\t\t\tdma_addr_t buf_addr,\n>>> +\t\t\t\t\tsize_t buf_len,\n>>> +\t\t\t\t\tsize_t period_len,\n>>> +\t\t\t\t\tenum dma_transfer_direction dir,\n>>> +\t\t\t\t\tunsigned long flags)\n>>> +{\n>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>> +\n>>> +\t/* unimplemented */\n>>> +\tif (buf_len != period_len || buf_len > SZ_64K)\n>>> +\t\treturn NULL;\n>>> +\n>>> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n>>> +\tif (!tx)\n>>> +\t\treturn NULL;\n>>> +\n>>> +\tdma_async_tx_descriptor_init(&tx->desc, chan);\n>>> +\n>>> +\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n>>> +\ttx->mem_paddr\t\t= buf_addr;\n>>> 
+\ttx->size\t\t= buf_len;\n>>> +\ttx->flags\t\t= flags;\n>>> +\ttx->cyclic\t\t= true;\n>>> +\ttx->dir\t\t\t= dir;\n>>> +\n>>> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n>>> +\n>>> +\treturn &tx->desc;\n>>> +}\n>>> +\n>>> +static void tegra_ahbdma_issue_pending(struct dma_chan *chan)\n>>> +{\n>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>> +\tstruct list_head *entry, *tmp;\n>>> +\tunsigned long flags;\n>>> +\n>>> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n>>> +\n>>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list)\n>>> +\t\tlist_move_tail(entry, &ahbdma_chan->active_list);\n>>> +\n>>> +\tif (completion_done(&ahbdma_chan->idling)) {\n>>> +\t\ttx = list_first_entry_or_null(&ahbdma_chan->active_list,\n>>> +\t\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>>> +\t\t\t\t\t      node);\n>>> +\t\tif (tx) {\n>>> +\t\t\ttegra_ahbdma_submit_tx(ahbdma_chan, tx);\n>>> +\t\t\treinit_completion(&ahbdma_chan->idling);\n>>> +\t\t}\n>>> +\t}\n>>> +\n>>> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n>>> +}\n>>> +\n>>> +static enum dma_status tegra_ahbdma_tx_status(struct dma_chan *chan,\n>>> +\t\t\t\t\t      dma_cookie_t cookie,\n>>> +\t\t\t\t\t      struct dma_tx_state *state)\n>>> +{\n>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>> +\tenum dma_status cookie_status;\n>>> +\tunsigned long flags;\n>>> +\tsize_t residual;\n>>> +\tu32 status;\n>>> +\n>>> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n>>> +\n>>> +\tcookie_status = dma_cookie_status(chan, cookie, state);\n>>> +\tif (cookie_status != DMA_COMPLETE) {\n>>> +\t\tlist_for_each_entry(tx, &ahbdma_chan->active_list, node) {\n>>> +\t\t\tif (tx->desc.cookie == cookie)\n>>> +\t\t\t\tgoto found;\n>>> +\t\t}\n>>> +\t}\n>>> +\n>>> +\tgoto unlock;\n>>> +\n>>> +found:\n>>> +\tif (tx->in_fly) {\n>>> +\t\tstatus = readl_relaxed(\n>>> 
+\t\t\tahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n>>> +\t\tstatus  &= TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK;\n>>> +\n>>> +\t\tresidual = status;\n>>> +\t} else\n>>> +\t\tresidual = tx->size;\n>>> +\n>>> +\tdma_set_residue(state, residual);\n>>> +\n>>> +unlock:\n>>> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n>>> +\n>>> +\treturn cookie_status;\n>>> +}\n>>> +\n>>> +static int tegra_ahbdma_terminate_all(struct dma_chan *chan)\n>>> +{\n>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>> +\tstruct list_head *entry, *tmp;\n>>> +\tu32 csr;\n>>> +\n>>> +\tspin_lock_irq(&ahbdma_chan->lock);\n>>> +\n>>> +\tcsr = readl_relaxed(ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_ENABLE;\n>>> +\n>>> +\twritel_relaxed(csr, ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>> +\n>>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->active_list) {\n>>> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n>>> +\t\tlist_del(entry);\n>>> +\t\tkfree(tx);\n>>> +\t}\n>>> +\n>>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list) {\n>>> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n>>> +\t\tlist_del(entry);\n>>> +\t\tkfree(tx);\n>>> +\t}\n>>> +\n>>> +\tcomplete_all(&ahbdma_chan->idling);\n>>> +\n>>> +\tspin_unlock_irq(&ahbdma_chan->lock);\n>>> +\n>>> +\treturn 0;\n>>> +}\n>>> +\n>>> +static int tegra_ahbdma_config(struct dma_chan *chan,\n>>> +\t\t\t       struct dma_slave_config *sconfig)\n>>> +{\n>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>> +\tenum dma_transfer_direction dir = sconfig->direction;\n>>> +\tu32 burst, ahb_seq, ahb_addr;\n>>> +\n>>> +\tif (sconfig->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||\n>>> +\t    sconfig->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)\n>>> +\t\treturn -EINVAL;\n>>> +\n>>> +\tif (dir == DMA_DEV_TO_MEM) {\n>>> +\t\tburst    = sconfig->src_maxburst;\n>>> +\t\tahb_addr = 
sconfig->src_addr;\n>>> +\t} else {\n>>> +\t\tburst    = sconfig->dst_maxburst;\n>>> +\t\tahb_addr = sconfig->dst_addr;\n>>> +\t}\n>>> +\n>>> +\tswitch (burst) {\n>>> +\tcase 1: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_1; break;\n>>> +\tcase 4: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_4; break;\n>>> +\tcase 8: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_8; break;\n>>> +\tdefault:\n>>> +\t\treturn -EINVAL;\n>>> +\t}\n>>> +\n>>> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n>>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n>>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n>>> +\n>>> +\twritel_relaxed(ahb_seq,\n>>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n>>> +\n>>> +\twritel_relaxed(ahb_addr,\n>>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n>>> +\n>>> +\treturn 0;\n>>> +}\n>>> +\n>>> +static void tegra_ahbdma_synchronize(struct dma_chan *chan)\n>>> +{\n>>> +\twait_for_completion(&to_ahbdma_chan(chan)->idling);\n>>> +}\n>>> +\n>>> +static void tegra_ahbdma_free_chan_resources(struct dma_chan *chan)\n>>> +{\n>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>> +\tstruct list_head *entry, *tmp;\n>>> +\n>>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list) {\n>>> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n>>> +\t\tlist_del(entry);\n>>> +\t\tkfree(tx);\n>>> +\t}\n>>> +}\n>>> +\n>>> +static void tegra_ahbdma_init_channel(struct tegra_ahbdma *tdma,\n>>> +\t\t\t\t      unsigned int chan_id)\n>>> +{\n>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = &tdma->channels[chan_id];\n>>> +\tstruct dma_chan *dma_chan = &ahbdma_chan->dma_chan;\n>>> +\tstruct dma_device *dma_dev = &tdma->dma_dev;\n>>> +\n>>> +\tINIT_LIST_HEAD(&ahbdma_chan->active_list);\n>>> +\tINIT_LIST_HEAD(&ahbdma_chan->pending_list);\n>>> +\tinit_completion(&ahbdma_chan->idling);\n>>> +\tspin_lock_init(&ahbdma_chan->lock);\n>>> +\tcomplete(&ahbdma_chan->idling);\n>>> +\n>>> 
+\tahbdma_chan->regs = tdma->regs + TEGRA_AHBDMA_CHANNEL_BASE(chan_id);\n>>> +\tahbdma_chan->id = chan_id;\n>>> +\n>>> +\tdma_cookie_init(dma_chan);\n>>> +\tdma_chan->device = dma_dev;\n>>> +\n>>> +\tlist_add_tail(&dma_chan->device_node, &dma_dev->channels);\n>>> +}\n>>> +\n>>> +static struct dma_chan *tegra_ahbdma_of_xlate(struct of_phandle_args *dma_spec,\n>>> +\t\t\t\t\t      struct of_dma *ofdma)\n>>> +{\n>>> +\tstruct tegra_ahbdma *tdma = ofdma->of_dma_data;\n>>> +\tstruct dma_chan *chan;\n>>> +\tu32 csr;\n>>> +\n>>> +\tchan = dma_get_any_slave_channel(&tdma->dma_dev);\n>>> +\tif (!chan)\n>>> +\t\treturn NULL;\n>>> +\n>>> +\t/* enable channels flow control */\n>>> +\tif (dma_spec->args_count == 1) {\n>>\n>> The DT doc says #dma-cells should be '1' and so if not equal 1, is this\n>> not an error?\n>>\n> \n> I wanted to differentiate slave/master modes here. But if we'd want to add\n> TRIG_SEL as another cell, then it probably would worth to implement a custom DMA\n> configure options, like documentation suggests - to wrap generic\n> dma_slave_config into the custom one. 
On the other hand that probably would add\n> an unused functionality to the driver.\n> \n>>> +\t\tcsr  = TEGRA_AHBDMA_CHANNEL_FLOW;\n>>> +\t\tcsr |= dma_spec->args[0] << TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT;\n>>\n>> What about the TRIG_REQ field?\n>>\n> \n> Not implemented, there is no test case for it yet.\n> \n>>> +\n>>> +\t\twritel_relaxed(csr,\n>>> +\t\t\tto_ahbdma_chan(chan)->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>> +\t}\n>>> +\t\n>>> +\treturn chan;\n>>> +}\n>>> +\n>>> +static int tegra_ahbdma_init_hw(struct tegra_ahbdma *tdma, struct device *dev)\n>>> +{\n>>> +\tint err;\n>>> +\n>>> +\terr = reset_control_assert(tdma->rst);\n>>> +\tif (err) {\n>>> +\t\tdev_err(dev, \"Failed to assert reset: %d\\n\", err);\n>>> +\t\treturn err;\n>>> +\t}\n>>> +\n>>> +\terr = clk_prepare_enable(tdma->clk);\n>>> +\tif (err) {\n>>> +\t\tdev_err(dev, \"Failed to enable clock: %d\\n\", err);\n>>> +\t\treturn err;\n>>> +\t}\n>>> +\n>>> +\tusleep_range(1000, 2000);\n>>> +\n>>> +\terr = reset_control_deassert(tdma->rst);\n>>> +\tif (err) {\n>>> +\t\tdev_err(dev, \"Failed to deassert reset: %d\\n\", err);\n>>> +\t\treturn err;\n>>> +\t}\n>>> +\n>>> +\twritel_relaxed(TEGRA_AHBDMA_CMD_ENABLE, tdma->regs + TEGRA_AHBDMA_CMD);\n>>> +\n>>> +\twritel_relaxed(TEGRA_AHBDMA_IRQ_ENB_CH(0) |\n>>> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(1) |\n>>> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(2) |\n>>> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(3),\n>>> +\t\t       tdma->regs + TEGRA_AHBDMA_IRQ_ENB_MASK);\n>>> +\n>>> +\treturn 0;\n>>> +}\n>>\n>> Personally I would use the pm_runtime callbacks for this sort of thing\n>> and ...\n>>\n> \n> I decided that it probably would be better to implement PM later if needed. I'm\n> not sure whether DMA controller consumes any substantial amounts of power while\n> idling. If it's not, why bother? Unnecessary power management would just cause\n> CPU to waste its cycles (and power) doing PM.\n\nYes it probably does not but it is easy to do and so even though there\nare probably a ton of other clocks left running, I still think it is\ngood practice.\n\nCheers\nJon","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y1vWT16l6z9t3x\n\tfor <incoming@patchwork.ozlabs.org>;\n\tWed, 27 Sep 2017 07:40:21 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1032318AbdIZVkJ (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tTue, 26 Sep 2017 17:40:09 -0400","from hqemgate14.nvidia.com ([216.228.121.143]:1243 \"EHLO\n\thqemgate14.nvidia.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1031102AbdIZVkH (ORCPT\n\t<rfc822;linux-tegra@vger.kernel.org>);\n\tTue, 26 Sep 2017 17:40:07 -0400","from hqpgpgate102.nvidia.com (Not Verified[216.228.121.13]) by\n\thqemgate14.nvidia.com\n\tid <B59cac91c0000>; Tue, 26 Sep 2017 14:39:41 -0700","from HQMAIL103.nvidia.com ([172.20.161.6])\n\tby hqpgpgate102.nvidia.com (PGP Universal service);\n\tTue, 26 Sep 2017 14:39:45 -0700","from UKMAIL101.nvidia.com (10.26.138.13) by HQMAIL103.nvidia.com\n\t(172.20.187.11) with Microsoft SMTP Server (TLS) id 15.0.1293.2;\n\tTue, 26 Sep 2017 21:37:37 +0000","from [10.26.11.139] (10.26.11.139) by UKMAIL101.nvidia.com\n\t(10.26.138.13) with Microsoft SMTP Server (TLS) id 15.0.1293.2;\n\tTue, 26 Sep 2017 21:37:33 +0000"],"X-PGP-Universal":"processed;\n\tby hqpgpgate102.nvidia.com on Tue, 26 Sep 2017 14:39:45 
-0700","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","To":"Dmitry Osipenko <digetx@gmail.com>,\n\tThierry Reding <thierry.reding@gmail.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\t\"Peter De Schrijver\" <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette <mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>, Rob Herring <robh+dt@kernel.org>,\n\tVinod Koul <vinod.koul@intel.com>","CC":"<linux-tegra@vger.kernel.org>, <devicetree@vger.kernel.org>,\n\t<dmaengine@vger.kernel.org>, <linux-clk@vger.kernel.org>,\n\t<linux-kernel@vger.kernel.org>","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>\n\t<481add20-9cea-a91a-e72c-45a824362e64@nvidia.com>\n\t<189ae234-86c4-02ed-698c-5b447e27bf27@gmail.com>","From":"Jon Hunter <jonathanh@nvidia.com>","Message-ID":"<8fa6108d-421d-8054-c05c-9681a0e25518@nvidia.com>","Date":"Tue, 26 Sep 2017 22:37:32 +0100","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101\n\tThunderbird/52.3.0","MIME-Version":"1.0","In-Reply-To":"<189ae234-86c4-02ed-698c-5b447e27bf27@gmail.com>","X-Originating-IP":"[10.26.11.139]","X-ClientProxiedBy":"UKMAIL101.nvidia.com (10.26.138.13) To\n\tUKMAIL101.nvidia.com (10.26.138.13)","Content-Type":"text/plain; charset=\"utf-8\"","Content-Language":"en-US","Content-Transfer-Encoding":"7bit","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1775946,"web_url":"http://patchwork.ozlabs.org/comment/1775946/","msgid":"<55cd52ab-16b5-8073-0344-8fbdeca22b54@gmail.com>","list_archive_url":null,"date":"2017-09-26T23:00:05","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":18124,"url":"http://patchwork.ozlabs.org/api/people/18124/","name":"Dmitry 
Osipenko","email":"digetx@gmail.com"},"content":"On 27.09.2017 00:37, Jon Hunter wrote:\n> Hi Dmitry,\n> \n> On 26/09/17 17:06, Dmitry Osipenko wrote:\n>> Hi Jon,\n>>\n>> On 26.09.2017 17:45, Jon Hunter wrote:\n>>> Hi Dmitry,\n>>>\n>>> On 26/09/17 00:22, Dmitry Osipenko wrote:\n>>>> AHB DMA controller presents on Tegra20/30 SoC's, it supports transfers\n>>>> memory <-> AHB bus peripherals as well as mem-to-mem transfers. Driver\n>>>> doesn't yet implement transfers larger than 64K and scatter-gather\n>>>> transfers that have NENT > 1, HW doesn't have native support for these\n>>>> cases.\n>>>>\n>>>> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>\n>>>> ---\n>>>>  drivers/dma/Kconfig           |   9 +\n>>>>  drivers/dma/Makefile          |   1 +\n>>>>  drivers/dma/tegra20-ahb-dma.c | 679 ++++++++++++++++++++++++++++++++++++++++++\n>>>>  3 files changed, 689 insertions(+)\n>>>>  create mode 100644 drivers/dma/tegra20-ahb-dma.c\n>>>\n>>> ...\n>>>\n>>>> diff --git a/drivers/dma/tegra20-ahb-dma.c b/drivers/dma/tegra20-ahb-dma.c\n>>>> new file mode 100644\n>>>> index 000000000000..8316d64e35e1\n>>>> --- /dev/null\n>>>> +++ b/drivers/dma/tegra20-ahb-dma.c\n>>>> @@ -0,0 +1,679 @@\n>>>> +/*\n>>>> + * Copyright 2017 Dmitry Osipenko <digetx@gmail.com>\n>>>> + *\n>>>> + * This program is free software; you can redistribute it and/or modify it\n>>>> + * under the terms and conditions of the GNU General Public License,\n>>>> + * version 2, as published by the Free Software Foundation.\n>>>> + *\n>>>> + * This program is distributed in the hope it will be useful, but WITHOUT\n>>>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n>>>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for\n>>>> + * more details.\n>>>> + *\n>>>> + * You should have received a copy of the GNU General Public License\n>>>> + * along with this program.  
If not, see <http://www.gnu.org/licenses/>.\n>>>> + */\n>>>> +\n>>>> +#include <linux/clk.h>\n>>>> +#include <linux/delay.h>\n>>>> +#include <linux/interrupt.h>\n>>>> +#include <linux/io.h>\n>>>> +#include <linux/module.h>\n>>>> +#include <linux/of_device.h>\n>>>> +#include <linux/of_dma.h>\n>>>> +#include <linux/platform_device.h>\n>>>> +#include <linux/reset.h>\n>>>> +#include <linux/slab.h>\n>>>> +#include <linux/spinlock.h>\n>>>> +\n>>>> +#include \"dmaengine.h\"\n>>>> +\n>>>> +#define TEGRA_AHBDMA_CMD\t\t\t0x0\n>>>> +#define TEGRA_AHBDMA_CMD_ENABLE\t\t\tBIT(31)\n>>>> +\n>>>> +#define TEGRA_AHBDMA_IRQ_ENB_MASK\t\t0x20\n>>>> +#define TEGRA_AHBDMA_IRQ_ENB_CH(ch)\t\tBIT(ch)\n>>>> +\n>>>> +#define TEGRA_AHBDMA_CHANNEL_BASE(ch)\t\t(0x1000 + (ch) * 0x20)\n>>>> +\n>>>> +#define TEGRA_AHBDMA_CHANNEL_CSR\t\t0x0\n>>>> +#define TEGRA_AHBDMA_CHANNEL_ADDR_WRAP\t\tBIT(18)\n>>>> +#define TEGRA_AHBDMA_CHANNEL_FLOW\t\tBIT(24)\n>>>> +#define TEGRA_AHBDMA_CHANNEL_ONCE\t\tBIT(26)\n>>>> +#define TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB\t\tBIT(27)\n>>>> +#define TEGRA_AHBDMA_CHANNEL_IE_EOC\t\tBIT(30)\n>>>> +#define TEGRA_AHBDMA_CHANNEL_ENABLE\t\tBIT(31)\n>>>> +#define TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT\t16\n>>>> +#define TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK\t0xFFFC\n>>>> +\n>>>> +#define TEGRA_AHBDMA_CHANNEL_STA\t\t0x4\n>>>> +#define TEGRA_AHBDMA_CHANNEL_IS_EOC\t\tBIT(30)\n>>>> +\n>>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_PTR\t\t0x10\n>>>> +\n>>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_SEQ\t\t0x14\n>>>> +#define TEGRA_AHBDMA_CHANNEL_INTR_ENB\t\tBIT(31)\n>>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT\t24\n>>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_1\t2\n>>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_4\t3\n>>>> +#define TEGRA_AHBDMA_CHANNEL_AHB_BURST_8\t4\n>>>> +\n>>>> +#define TEGRA_AHBDMA_CHANNEL_XMB_PTR\t\t0x18\n>>>> +\n>>>> +#define TEGRA_AHBDMA_BUS_WIDTH\t\t\tBIT(DMA_SLAVE_BUSWIDTH_4_BYTES)\n>>>> +\n>>>> +#define TEGRA_AHBDMA_DIRECTIONS\t\t\tBIT(DMA_DEV_TO_MEM) | \\\n>>>> 
+\t\t\t\t\t\tBIT(DMA_MEM_TO_DEV)\n>>>> +\n>>>> +struct tegra_ahbdma_tx_desc {\n>>>> +\tstruct dma_async_tx_descriptor desc;\n>>>> +\tstruct tasklet_struct tasklet;\n>>>> +\tstruct list_head node;\n>>>\n>>> Any reason why we cannot use the virt-dma framework for this driver? I\n>>> would hope it would simplify the driver a bit.\n>>>\n>>\n>> IIUC virt-dma is supposed to provide virtually unlimited number of channels.\n>> I've looked at it and decided that it would just add unnecessary functionality\n>> and, as a result, complexity. As I wrote in the cover-letter, it is supposed\n>> that this driver would have only one consumer - the host1x. It shouldn't be\n>> difficult to implement virt-dma later, if desired.  But again it is very\n>> unlikely that it would be needed.\n> \n> I think that the biggest benefit is that it simplifies the linked list\n> management. See the tegra210-adma driver.\n> \n\nI'll take a more thorough look at it. Thank you for the suggestion.\n\n>>>> +\tenum dma_transfer_direction dir;\n>>>> +\tdma_addr_t mem_paddr;\n>>>> +\tunsigned long flags;\n>>>> +\tsize_t size;\n>>>> +\tbool in_fly;\n>>>> +\tbool cyclic;\n>>>> +};\n>>>> +\n>>>> +struct tegra_ahbdma_chan {\n>>>> +\tstruct dma_chan dma_chan;\n>>>> +\tstruct list_head active_list;\n>>>> +\tstruct list_head pending_list;\n>>>> +\tstruct completion idling;\n>>>> +\tvoid __iomem *regs;\n>>>> +\tspinlock_t lock;\n>>>> +\tunsigned int id;\n>>>> +};\n>>>> +\n>>>> +struct tegra_ahbdma {\n>>>> +\tstruct tegra_ahbdma_chan channels[4];\n>>>> +\tstruct dma_device dma_dev;\n>>>> +\tstruct reset_control *rst;\n>>>> +\tstruct clk *clk;\n>>>> +\tvoid __iomem *regs;\n>>>> +};\n>>>> +\n>>>> +static inline struct tegra_ahbdma *to_ahbdma(struct dma_device *dev)\n>>>> +{\n>>>> +\treturn container_of(dev, struct tegra_ahbdma, dma_dev);\n>>>> +}\n>>>> +\n>>>> +static inline struct tegra_ahbdma_chan *to_ahbdma_chan(struct dma_chan *chan)\n>>>> +{\n>>>> +\treturn container_of(chan, struct tegra_ahbdma_chan, 
dma_chan);\n>>>> +}\n>>>> +\n>>>> +static inline struct tegra_ahbdma_tx_desc *to_ahbdma_tx_desc(\n>>>> +\t\t\t\tstruct dma_async_tx_descriptor *tx)\n>>>> +{\n>>>> +\treturn container_of(tx, struct tegra_ahbdma_tx_desc, desc);\n>>>> +}\n>>>> +\n>>>> +static void tegra_ahbdma_submit_tx(struct tegra_ahbdma_chan *chan,\n>>>> +\t\t\t\t   struct tegra_ahbdma_tx_desc *tx)\n>>>> +{\n>>>> +\tu32 csr;\n>>>> +\n>>>> +\twritel_relaxed(tx->mem_paddr,\n>>>> +\t\t       chan->regs + TEGRA_AHBDMA_CHANNEL_XMB_PTR);\n>>>> +\n>>>> +\tcsr = readl_relaxed(chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>>> +\n>>>> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK;\n>>>> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB;\n>>>> +\tcsr |= TEGRA_AHBDMA_CHANNEL_ENABLE;\n>>>> +\tcsr |= TEGRA_AHBDMA_CHANNEL_IE_EOC;\n>>>> +\tcsr |= tx->size - sizeof(u32);\n>>>> +\n>>>> +\tif (tx->dir == DMA_DEV_TO_MEM)\n>>>> +\t\tcsr |= TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB;\n>>>> +\n>>>> +\tif (!tx->cyclic)\n>>>> +\t\tcsr |= TEGRA_AHBDMA_CHANNEL_ONCE;\n>>>> +\n>>>> +\twritel_relaxed(csr, chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>>> +\n>>>> +\ttx->in_fly = true;\n>>>> +}\n>>>> +\n>>>> +static void tegra_ahbdma_tasklet(unsigned long data)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx = (struct tegra_ahbdma_tx_desc *)data;\n>>>> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n>>>> +\n>>>> +\tdmaengine_desc_get_callback_invoke(desc, NULL);\n>>>> +\n>>>> +\tif (!tx->cyclic && !dmaengine_desc_test_reuse(desc))\n>>>> +\t\tkfree(tx);\n>>>> +}\n>>>> +\n>>>> +static bool tegra_ahbdma_tx_completed(struct tegra_ahbdma_chan *chan,\n>>>> +\t\t\t\t      struct tegra_ahbdma_tx_desc *tx)\n>>>> +{\n>>>> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n>>>> +\tbool reuse = dmaengine_desc_test_reuse(desc);\n>>>> +\tbool interrupt = tx->flags & DMA_PREP_INTERRUPT;\n>>>> +\tbool completed = !tx->cyclic;\n>>>> +\n>>>> +\tif (completed)\n>>>> +\t\tdma_cookie_complete(desc);\n>>>> +\n>>>> +\tif (interrupt)\n>>>> 
+\t\ttasklet_schedule(&tx->tasklet);\n>>>> +\n>>>> +\tif (completed) {\n>>>> +\t\tlist_del(&tx->node);\n>>>> +\n>>>> +\t\tif (reuse)\n>>>> +\t\t\ttx->in_fly = false;\n>>>> +\n>>>> +\t\tif (!interrupt && !reuse)\n>>>> +\t\t\tkfree(tx);\n>>>> +\t}\n>>>> +\n>>>> +\treturn completed;\n>>>> +}\n>>>> +\n>>>> +static bool tegra_ahbdma_next_tx_issued(struct tegra_ahbdma_chan *chan)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>>> +\n>>>> +\ttx = list_first_entry_or_null(&chan->active_list,\n>>>> +\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>>>> +\t\t\t\t      node);\n>>>> +\tif (tx)\n>>>> +\t\ttegra_ahbdma_submit_tx(chan, tx);\n>>>> +\n>>>> +\treturn !!tx;\n>>>> +}\n>>>> +\n>>>> +static void tegra_ahbdma_handle_channel(struct tegra_ahbdma_chan *chan)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>>> +\tunsigned long flags;\n>>>> +\tu32 status;\n>>>> +\n>>>> +\tstatus = readl_relaxed(chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n>>>> +\tif (!(status & TEGRA_AHBDMA_CHANNEL_IS_EOC))\n>>>> +\t\treturn;\n>>>> +\n>>>> +\twritel_relaxed(TEGRA_AHBDMA_CHANNEL_IS_EOC,\n>>>> +\t\t       chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n>>>> +\n>>>> +\tspin_lock_irqsave(&chan->lock, flags);\n>>>> +\n>>>> +\tif (!completion_done(&chan->idling)) {\n>>>> +\t\ttx = list_first_entry(&chan->active_list,\n>>>> +\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>>>> +\t\t\t\t      node);\n>>>> +\n>>>> +\t\tif (tegra_ahbdma_tx_completed(chan, tx) &&\n>>>> +\t\t    !tegra_ahbdma_next_tx_issued(chan))\n>>>> +\t\t\tcomplete_all(&chan->idling);\n>>>> +\t}\n>>>> +\n>>>> +\tspin_unlock_irqrestore(&chan->lock, flags);\n>>>> +}\n>>>> +\n>>>> +static irqreturn_t tegra_ahbdma_isr(int irq, void *dev_id)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma *tdma = dev_id;\n>>>> +\tunsigned int i;\n>>>> +\n>>>> +\tfor (i = 0; i < ARRAY_SIZE(tdma->channels); i++)\n>>>> +\t\ttegra_ahbdma_handle_channel(&tdma->channels[i]);\n>>>> +\n>>>> +\treturn IRQ_HANDLED;\n>>>> +}\n>>>> +\n>>>> +static dma_cookie_t 
tegra_ahbdma_tx_submit(struct dma_async_tx_descriptor *desc)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx = to_ahbdma_tx_desc(desc);\n>>>> +\tstruct tegra_ahbdma_chan *chan = to_ahbdma_chan(desc->chan);\n>>>> +\tdma_cookie_t cookie;\n>>>> +\n>>>> +\tcookie = dma_cookie_assign(desc);\n>>>> +\n>>>> +\tspin_lock_irq(&chan->lock);\n>>>> +\tlist_add_tail(&tx->node, &chan->pending_list);\n>>>> +\tspin_unlock_irq(&chan->lock);\n>>>> +\n>>>> +\treturn cookie;\n>>>> +}\n>>>> +\n>>>> +static int tegra_ahbdma_tx_desc_free(struct dma_async_tx_descriptor *desc)\n>>>> +{\n>>>> +\tkfree(to_ahbdma_tx_desc(desc));\n>>>> +\n>>>> +\treturn 0;\n>>>> +}\n>>>> +\n>>>> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_slave_sg(\n>>>> +\t\t\t\t\tstruct dma_chan *chan,\n>>>> +\t\t\t\t\tstruct scatterlist *sgl,\n>>>> +\t\t\t\t\tunsigned int sg_len,\n>>>> +\t\t\t\t\tenum dma_transfer_direction dir,\n>>>> +\t\t\t\t\tunsigned long flags,\n>>>> +\t\t\t\t\tvoid *context)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>>> +\n>>>> +\t/* unimplemented */\n>>>> +\tif (sg_len != 1 || sg_dma_len(sgl) > SZ_64K)\n>>>> +\t\treturn NULL;\n>>>> +\n>>>> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n>>>> +\tif (!tx)\n>>>> +\t\treturn NULL;\n>>>> +\n>>>> +\tdma_async_tx_descriptor_init(&tx->desc, chan);\n>>>> +\n>>>> +\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n>>>> +\ttx->desc.desc_free\t= tegra_ahbdma_tx_desc_free;\n>>>> +\ttx->mem_paddr\t\t= sg_dma_address(sgl);\n>>>> +\ttx->size\t\t= sg_dma_len(sgl);\n>>>> +\ttx->flags\t\t= flags;\n>>>> +\ttx->dir\t\t\t= dir;\n>>>> +\n>>>> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n>>>> +\n>>>> +\treturn &tx->desc;\n>>>> +}\n>>>> +\n>>>> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_dma_cyclic(\n>>>> +\t\t\t\t\tstruct dma_chan *chan,\n>>>> +\t\t\t\t\tdma_addr_t buf_addr,\n>>>> +\t\t\t\t\tsize_t buf_len,\n>>>> +\t\t\t\t\tsize_t period_len,\n>>>> +\t\t\t\t\tenum dma_transfer_direction dir,\n>>>> 
+\t\t\t\t\tunsigned long flags)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>>> +\n>>>> +\t/* unimplemented */\n>>>> +\tif (buf_len != period_len || buf_len > SZ_64K)\n>>>> +\t\treturn NULL;\n>>>> +\n>>>> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n>>>> +\tif (!tx)\n>>>> +\t\treturn NULL;\n>>>> +\n>>>> +\tdma_async_tx_descriptor_init(&tx->desc, chan);\n>>>> +\n>>>> +\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n>>>> +\ttx->mem_paddr\t\t= buf_addr;\n>>>> +\ttx->size\t\t= buf_len;\n>>>> +\ttx->flags\t\t= flags;\n>>>> +\ttx->cyclic\t\t= true;\n>>>> +\ttx->dir\t\t\t= dir;\n>>>> +\n>>>> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n>>>> +\n>>>> +\treturn &tx->desc;\n>>>> +}\n>>>> +\n>>>> +static void tegra_ahbdma_issue_pending(struct dma_chan *chan)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>>> +\tstruct list_head *entry, *tmp;\n>>>> +\tunsigned long flags;\n>>>> +\n>>>> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n>>>> +\n>>>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list)\n>>>> +\t\tlist_move_tail(entry, &ahbdma_chan->active_list);\n>>>> +\n>>>> +\tif (completion_done(&ahbdma_chan->idling)) {\n>>>> +\t\ttx = list_first_entry_or_null(&ahbdma_chan->active_list,\n>>>> +\t\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>>>> +\t\t\t\t\t      node);\n>>>> +\t\tif (tx) {\n>>>> +\t\t\ttegra_ahbdma_submit_tx(ahbdma_chan, tx);\n>>>> +\t\t\treinit_completion(&ahbdma_chan->idling);\n>>>> +\t\t}\n>>>> +\t}\n>>>> +\n>>>> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n>>>> +}\n>>>> +\n>>>> +static enum dma_status tegra_ahbdma_tx_status(struct dma_chan *chan,\n>>>> +\t\t\t\t\t      dma_cookie_t cookie,\n>>>> +\t\t\t\t\t      struct dma_tx_state *state)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>>> +\tenum dma_status cookie_status;\n>>>> +\tunsigned long 
flags;\n>>>> +\tsize_t residual;\n>>>> +\tu32 status;\n>>>> +\n>>>> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n>>>> +\n>>>> +\tcookie_status = dma_cookie_status(chan, cookie, state);\n>>>> +\tif (cookie_status != DMA_COMPLETE) {\n>>>> +\t\tlist_for_each_entry(tx, &ahbdma_chan->active_list, node) {\n>>>> +\t\t\tif (tx->desc.cookie == cookie)\n>>>> +\t\t\t\tgoto found;\n>>>> +\t\t}\n>>>> +\t}\n>>>> +\n>>>> +\tgoto unlock;\n>>>> +\n>>>> +found:\n>>>> +\tif (tx->in_fly) {\n>>>> +\t\tstatus = readl_relaxed(\n>>>> +\t\t\tahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_STA);\n>>>> +\t\tstatus  &= TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK;\n>>>> +\n>>>> +\t\tresidual = status;\n>>>> +\t} else\n>>>> +\t\tresidual = tx->size;\n>>>> +\n>>>> +\tdma_set_residue(state, residual);\n>>>> +\n>>>> +unlock:\n>>>> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n>>>> +\n>>>> +\treturn cookie_status;\n>>>> +}\n>>>> +\n>>>> +static int tegra_ahbdma_terminate_all(struct dma_chan *chan)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>>> +\tstruct list_head *entry, *tmp;\n>>>> +\tu32 csr;\n>>>> +\n>>>> +\tspin_lock_irq(&ahbdma_chan->lock);\n>>>> +\n>>>> +\tcsr = readl_relaxed(ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>>> +\tcsr &= ~TEGRA_AHBDMA_CHANNEL_ENABLE;\n>>>> +\n>>>> +\twritel_relaxed(csr, ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>>> +\n>>>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->active_list) {\n>>>> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n>>>> +\t\tlist_del(entry);\n>>>> +\t\tkfree(tx);\n>>>> +\t}\n>>>> +\n>>>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list) {\n>>>> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n>>>> +\t\tlist_del(entry);\n>>>> +\t\tkfree(tx);\n>>>> +\t}\n>>>> +\n>>>> +\tcomplete_all(&ahbdma_chan->idling);\n>>>> +\n>>>> +\tspin_unlock_irq(&ahbdma_chan->lock);\n>>>> +\n>>>> +\treturn 0;\n>>>> +}\n>>>> +\n>>>> 
+static int tegra_ahbdma_config(struct dma_chan *chan,\n>>>> +\t\t\t       struct dma_slave_config *sconfig)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>>> +\tenum dma_transfer_direction dir = sconfig->direction;\n>>>> +\tu32 burst, ahb_seq, ahb_addr;\n>>>> +\n>>>> +\tif (sconfig->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||\n>>>> +\t    sconfig->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)\n>>>> +\t\treturn -EINVAL;\n>>>> +\n>>>> +\tif (dir == DMA_DEV_TO_MEM) {\n>>>> +\t\tburst    = sconfig->src_maxburst;\n>>>> +\t\tahb_addr = sconfig->src_addr;\n>>>> +\t} else {\n>>>> +\t\tburst    = sconfig->dst_maxburst;\n>>>> +\t\tahb_addr = sconfig->dst_addr;\n>>>> +\t}\n>>>> +\n>>>> +\tswitch (burst) {\n>>>> +\tcase 1: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_1; break;\n>>>> +\tcase 4: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_4; break;\n>>>> +\tcase 8: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_8; break;\n>>>> +\tdefault:\n>>>> +\t\treturn -EINVAL;\n>>>> +\t}\n>>>> +\n>>>> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n>>>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n>>>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n>>>> +\n>>>> +\twritel_relaxed(ahb_seq,\n>>>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n>>>> +\n>>>> +\twritel_relaxed(ahb_addr,\n>>>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n>>>> +\n>>>> +\treturn 0;\n>>>> +}\n>>>> +\n>>>> +static void tegra_ahbdma_synchronize(struct dma_chan *chan)\n>>>> +{\n>>>> +\twait_for_completion(&to_ahbdma_chan(chan)->idling);\n>>>> +}\n>>>> +\n>>>> +static void tegra_ahbdma_free_chan_resources(struct dma_chan *chan)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>>>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>>>> +\tstruct list_head *entry, *tmp;\n>>>> +\n>>>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list) {\n>>>> +\t\ttx = list_entry(entry, struct tegra_ahbdma_tx_desc, node);\n>>>> 
+\t\tlist_del(entry);\n>>>> +\t\tkfree(tx);\n>>>> +\t}\n>>>> +}\n>>>> +\n>>>> +static void tegra_ahbdma_init_channel(struct tegra_ahbdma *tdma,\n>>>> +\t\t\t\t      unsigned int chan_id)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = &tdma->channels[chan_id];\n>>>> +\tstruct dma_chan *dma_chan = &ahbdma_chan->dma_chan;\n>>>> +\tstruct dma_device *dma_dev = &tdma->dma_dev;\n>>>> +\n>>>> +\tINIT_LIST_HEAD(&ahbdma_chan->active_list);\n>>>> +\tINIT_LIST_HEAD(&ahbdma_chan->pending_list);\n>>>> +\tinit_completion(&ahbdma_chan->idling);\n>>>> +\tspin_lock_init(&ahbdma_chan->lock);\n>>>> +\tcomplete(&ahbdma_chan->idling);\n>>>> +\n>>>> +\tahbdma_chan->regs = tdma->regs + TEGRA_AHBDMA_CHANNEL_BASE(chan_id);\n>>>> +\tahbdma_chan->id = chan_id;\n>>>> +\n>>>> +\tdma_cookie_init(dma_chan);\n>>>> +\tdma_chan->device = dma_dev;\n>>>> +\n>>>> +\tlist_add_tail(&dma_chan->device_node, &dma_dev->channels);\n>>>> +}\n>>>> +\n>>>> +static struct dma_chan *tegra_ahbdma_of_xlate(struct of_phandle_args *dma_spec,\n>>>> +\t\t\t\t\t      struct of_dma *ofdma)\n>>>> +{\n>>>> +\tstruct tegra_ahbdma *tdma = ofdma->of_dma_data;\n>>>> +\tstruct dma_chan *chan;\n>>>> +\tu32 csr;\n>>>> +\n>>>> +\tchan = dma_get_any_slave_channel(&tdma->dma_dev);\n>>>> +\tif (!chan)\n>>>> +\t\treturn NULL;\n>>>> +\n>>>> +\t/* enable channels flow control */\n>>>> +\tif (dma_spec->args_count == 1) {\n>>>\n>>> The DT doc says #dma-cells should be '1' and so if not equal 1, is this\n>>> not an error?\n>>>\n>>\n>> I wanted to differentiate slave/master modes here. But if we'd want to add\n>> TRIG_SEL as another cell, then it would probably be worth implementing custom DMA\n>> configuration options, as the documentation suggests - wrapping the generic\n>> dma_slave_config in a custom one. 
On the other hand, that would probably add\n>> unused functionality to the driver.\n>>\n>>>> +\t\tcsr  = TEGRA_AHBDMA_CHANNEL_FLOW;\n>>>> +\t\tcsr |= dma_spec->args[0] << TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT;\n>>>\n>>> What about the TRIG_REQ field?\n>>>\n>>\n>> Not implemented, there is no test case for it yet.\n>>\n>>>> +\n>>>> +\t\twritel_relaxed(csr,\n>>>> +\t\t\tto_ahbdma_chan(chan)->regs + TEGRA_AHBDMA_CHANNEL_CSR);\n>>>> +\t}\n>>>> +\t\n>>>> +\treturn chan;\n>>>> +}\n>>>> +\n>>>> +static int tegra_ahbdma_init_hw(struct tegra_ahbdma *tdma, struct device *dev)\n>>>> +{\n>>>> +\tint err;\n>>>> +\n>>>> +\terr = reset_control_assert(tdma->rst);\n>>>> +\tif (err) {\n>>>> +\t\tdev_err(dev, \"Failed to assert reset: %d\\n\", err);\n>>>> +\t\treturn err;\n>>>> +\t}\n>>>> +\n>>>> +\terr = clk_prepare_enable(tdma->clk);\n>>>> +\tif (err) {\n>>>> +\t\tdev_err(dev, \"Failed to enable clock: %d\\n\", err);\n>>>> +\t\treturn err;\n>>>> +\t}\n>>>> +\n>>>> +\tusleep_range(1000, 2000);\n>>>> +\n>>>> +\terr = reset_control_deassert(tdma->rst);\n>>>> +\tif (err) {\n>>>> +\t\tdev_err(dev, \"Failed to deassert reset: %d\\n\", err);\n>>>> +\t\treturn err;\n>>>> +\t}\n>>>> +\n>>>> +\twritel_relaxed(TEGRA_AHBDMA_CMD_ENABLE, tdma->regs + TEGRA_AHBDMA_CMD);\n>>>> +\n>>>> +\twritel_relaxed(TEGRA_AHBDMA_IRQ_ENB_CH(0) |\n>>>> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(1) |\n>>>> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(2) |\n>>>> +\t\t       TEGRA_AHBDMA_IRQ_ENB_CH(3),\n>>>> +\t\t       tdma->regs + TEGRA_AHBDMA_IRQ_ENB_MASK);\n>>>> +\n>>>> +\treturn 0;\n>>>> +}\n>>>\n>>> Personally I would use the pm_runtime callbacks for this sort of thing\n>>> and ...\n>>>\n>>\n>> I decided that it would probably be better to implement PM later if needed. I'm\n>> not sure whether the DMA controller consumes any substantial amount of power while\n>> idling. If it doesn't, why bother? 
Unnecessary power management would just cause the\n>> CPU to waste its cycles (and power) doing PM.\n> \n> Yes it probably does not but it is easy to do and so even though there\n> are probably a ton of other clocks left running, I still think it is\n> good practice.\n> \n\nOkay, I'll take a look into implementing PM. Disabling the AHBDMA clock won't stop\nthe actual clock, but only gate it to the controller.\n\nThank you for the comments!","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n\tunprotected) header.d=gmail.com header.i=@gmail.com\n\theader.b=\"LGNpPJ3c\"; dkim-atps=neutral"],"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y1xHg4XFpz9t3B\n\tfor <incoming@patchwork.ozlabs.org>;\n\tWed, 27 Sep 2017 09:00:15 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1031377AbdIZXAM (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tTue, 26 Sep 2017 19:00:12 -0400","from mail-lf0-f65.google.com ([209.85.215.65]:35126 \"EHLO\n\tmail-lf0-f65.google.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1031323AbdIZXAK (ORCPT\n\t<rfc822;linux-tegra@vger.kernel.org>);\n\tTue, 26 Sep 2017 19:00:10 -0400","by mail-lf0-f65.google.com with SMTP id c8so3423866lfe.2;\n\tTue, 26 Sep 2017 16:00:08 -0700 (PDT)","from [192.168.1.145] (ppp109-252-90-109.pppoe.spdop.ru.\n\t[109.252.90.109]) by smtp.googlemail.com with ESMTPSA id\n\tq70sm2093538lje.58.2017.09.26.16.00.05\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tTue, 26 Sep 2017 16:00:06 -0700 
(PDT)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=gmail.com; s=20161025;\n\th=subject:to:cc:references:from:message-id:date:user-agent\n\t:mime-version:in-reply-to:content-language:content-transfer-encoding; \n\tbh=Nkl81dq5blBcvi747sA/mdYtNDwBvSTfgoYf+E6Hfco=;\n\tb=LGNpPJ3cGX1Ccw8n+M0sYiwOu1/c6eYI8roD7VBH7Ax3wPE0MTPEnAtSSEBpV9r54j\n\t8jnltV+rEQ7T0DqYr4REwKlye0fDv09Ezac7JWUboEhpCrkcsSjmi+ItoS6yUPGU0U+v\n\tSgLw/ayHQr+sQjU/Sdq0viYbVIhTLxmTW+oL2+Q4rHBenSi35ZcartqG7BI0EVagU2xu\n\tnc8HftTt0ZmIXj4UhWyJQBmM4JKOsBGs/MH/dgp2XH11wCojBl8NMduTdWSKsNadz3L/\n\trBHVTg8k3L9XjJZ8e1WLcEzg1ZnRPTdYrWtaJOIdj3QI8Y4gRwpvt/fmgZ22kbyeJEgO\n\tCh3g==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:subject:to:cc:references:from:message-id:date\n\t:user-agent:mime-version:in-reply-to:content-language\n\t:content-transfer-encoding;\n\tbh=Nkl81dq5blBcvi747sA/mdYtNDwBvSTfgoYf+E6Hfco=;\n\tb=h6tJlr0PMtZvoW8a//psdBUPO/K7vzcjW7I8tj9Tspj3f/Z+Gj6VCj9o0WS/jsb0/E\n\t2Zrk7mA+GjyzTmvhzsxW3JyDiP3H3WI7tf6oyNQiKFuOphWNSRjCMc+czYUL77cYNZkn\n\tU8E/MjsVIaYpTXF+3BKj1JymIoM8iGzVRZiGk4UMpQeyDglY76fdOlT0pEsGZJtV/HZX\n\tmz4HAbmn/ziCW16yefUDhnouZzjMa1D+5LgAhVpdr4tUXLROZZ4Ajrklyomvj1u7N0kr\n\thBdHiPAzovTqhqnUAO2fZlfL8eRTmhlfRUZ4ZEproD5pQYzIC0ut0k+KAymOfpgWOxci\n\txn1g==","X-Gm-Message-State":"AHPjjUji4Nhe7gZzqT4F6GNuE11IsODUaWSgia0Lk6tMykvlHIq+YZn9\n\t10HGLlMi9rf7LVya3aVKQai3cPcz","X-Google-Smtp-Source":"AOwi7QAsMd8aqVIchrE4HRoIxqQxjUEvtLcEZddcvZDzIIJwSax/jbZbZy6J+FLt8fQ2Gx6yOpcAIA==","X-Received":"by 10.25.142.132 with SMTP id a4mr3723290lfl.170.1506466807367; \n\tTue, 26 Sep 2017 16:00:07 -0700 (PDT)","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","To":"Jon Hunter <jonathanh@nvidia.com>,\n\tThierry Reding <thierry.reding@gmail.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\tPeter De Schrijver <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette 
<mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>, Rob Herring <robh+dt@kernel.org>,\n\tVinod Koul <vinod.koul@intel.com>","Cc":"linux-tegra@vger.kernel.org, devicetree@vger.kernel.org,\n\tdmaengine@vger.kernel.org, linux-clk@vger.kernel.org,\n\tlinux-kernel@vger.kernel.org","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>\n\t<481add20-9cea-a91a-e72c-45a824362e64@nvidia.com>\n\t<189ae234-86c4-02ed-698c-5b447e27bf27@gmail.com>\n\t<8fa6108d-421d-8054-c05c-9681a0e25518@nvidia.com>","From":"Dmitry Osipenko <digetx@gmail.com>","Message-ID":"<55cd52ab-16b5-8073-0344-8fbdeca22b54@gmail.com>","Date":"Wed, 27 Sep 2017 02:00:05 +0300","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101\n\tThunderbird/52.3.0","MIME-Version":"1.0","In-Reply-To":"<8fa6108d-421d-8054-c05c-9681a0e25518@nvidia.com>","Content-Type":"text/plain; charset=utf-8","Content-Language":"en-US","Content-Transfer-Encoding":"7bit","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1776905,"web_url":"http://patchwork.ozlabs.org/comment/1776905/","msgid":"<20170928092949.GB30097@localhost>","list_archive_url":null,"date":"2017-09-28T09:29:49","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":8232,"url":"http://patchwork.ozlabs.org/api/people/8232/","name":"Vinod Koul","email":"vinod.koul@intel.com"},"content":"On Tue, Sep 26, 2017 at 02:22:05AM +0300, Dmitry Osipenko wrote:\n\n> +config TEGRA20_AHB_DMA\n> +\ttristate \"NVIDIA Tegra20 AHB DMA support\"\n> +\tdepends on ARCH_TEGRA\n\nCan we add COMPILE_TEST, helps me compile drivers\n\n> +#include <linux/clk.h>\n> +#include <linux/delay.h>\n> +#include <linux/interrupt.h>\n> +#include <linux/io.h>\n> +#include <linux/module.h>\n> +#include <linux/of_device.h>\n> +#include 
<linux/of_dma.h>\n> +#include <linux/platform_device.h>\n> +#include <linux/reset.h>\n> +#include <linux/slab.h>\n> +#include <linux/spinlock.h>\n\nno vchan.h, so i presume we are not using that here, any reason why?\n\n> +\n> +#include \"dmaengine.h\"\n> +\n> +#define TEGRA_AHBDMA_CMD\t\t\t0x0\n> +#define TEGRA_AHBDMA_CMD_ENABLE\t\t\tBIT(31)\n> +\n> +#define TEGRA_AHBDMA_IRQ_ENB_MASK\t\t0x20\n> +#define TEGRA_AHBDMA_IRQ_ENB_CH(ch)\t\tBIT(ch)\n> +\n> +#define TEGRA_AHBDMA_CHANNEL_BASE(ch)\t\t(0x1000 + (ch) * 0x20)\n> +\n> +#define TEGRA_AHBDMA_CHANNEL_CSR\t\t0x0\n> +#define TEGRA_AHBDMA_CHANNEL_ADDR_WRAP\t\tBIT(18)\n> +#define TEGRA_AHBDMA_CHANNEL_FLOW\t\tBIT(24)\n> +#define TEGRA_AHBDMA_CHANNEL_ONCE\t\tBIT(26)\n> +#define TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB\t\tBIT(27)\n> +#define TEGRA_AHBDMA_CHANNEL_IE_EOC\t\tBIT(30)\n> +#define TEGRA_AHBDMA_CHANNEL_ENABLE\t\tBIT(31)\n> +#define TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT\t16\n> +#define TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK\t0xFFFC\n\nGENMASK() ?\n\n> +static void tegra_ahbdma_tasklet(unsigned long data)\n> +{\n> +\tstruct tegra_ahbdma_tx_desc *tx = (struct tegra_ahbdma_tx_desc *)data;\n> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n> +\n> +\tdmaengine_desc_get_callback_invoke(desc, NULL);\n> +\n> +\tif (!tx->cyclic && !dmaengine_desc_test_reuse(desc))\n> +\t\tkfree(tx);\n\nlot of code here can be reduced if we use vchan\n\n> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_dma_cyclic(\n> +\t\t\t\t\tstruct dma_chan *chan,\n> +\t\t\t\t\tdma_addr_t buf_addr,\n> +\t\t\t\t\tsize_t buf_len,\n> +\t\t\t\t\tsize_t period_len,\n> +\t\t\t\t\tenum dma_transfer_direction dir,\n> +\t\t\t\t\tunsigned long flags)\n> +{\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\n> +\t/* unimplemented */\n> +\tif (buf_len != period_len || buf_len > SZ_64K)\n> +\t\treturn NULL;\n> +\n> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n> +\tif (!tx)\n> +\t\treturn NULL;\n> +\n> +\tdma_async_tx_descriptor_init(&tx->desc, chan);\n> +\n> 
+\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n> +\ttx->mem_paddr\t\t= buf_addr;\n> +\ttx->size\t\t= buf_len;\n> +\ttx->flags\t\t= flags;\n> +\ttx->cyclic\t\t= true;\n> +\ttx->dir\t\t\t= dir;\n> +\n> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n\nwhy not precalculate the register settings here? While submitting you are in\nthe hot path keeping the dmaengine idle, so the faster you can submit, the better the perf\n\n> +static void tegra_ahbdma_issue_pending(struct dma_chan *chan)\n> +{\n> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\tstruct list_head *entry, *tmp;\n> +\tunsigned long flags;\n> +\n> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n> +\n> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list)\n> +\t\tlist_move_tail(entry, &ahbdma_chan->active_list);\n> +\n> +\tif (completion_done(&ahbdma_chan->idling)) {\n> +\t\ttx = list_first_entry_or_null(&ahbdma_chan->active_list,\n> +\t\t\t\t\t      struct tegra_ahbdma_tx_desc,\n> +\t\t\t\t\t      node);\n> +\t\tif (tx) {\n> +\t\t\ttegra_ahbdma_submit_tx(ahbdma_chan, tx);\n\nwhat if the chan is already running?\n\n> +\t\t\treinit_completion(&ahbdma_chan->idling);\n> +\t\t}\n> +\t}\n> +\n> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n> +}\n> +\n> +static enum dma_status tegra_ahbdma_tx_status(struct dma_chan *chan,\n> +\t\t\t\t\t      dma_cookie_t cookie,\n> +\t\t\t\t\t      struct dma_tx_state *state)\n> +{\n> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n> +\tstruct tegra_ahbdma_tx_desc *tx;\n> +\tenum dma_status cookie_status;\n> +\tunsigned long flags;\n> +\tsize_t residual;\n> +\tu32 status;\n> +\n> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n> +\n> +\tcookie_status = dma_cookie_status(chan, cookie, state);\n> +\tif (cookie_status != DMA_COMPLETE) {\n\nresidue can be NULL so check it before proceeding ahead\n\n> +static int tegra_ahbdma_config(struct dma_chan *chan,\n> +\t\t\t       struct 
dma_slave_config *sconfig)\n> +{\n> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n> +\tenum dma_transfer_direction dir = sconfig->direction;\n> +\tu32 burst, ahb_seq, ahb_addr;\n> +\n> +\tif (sconfig->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||\n> +\t    sconfig->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)\n> +\t\treturn -EINVAL;\n> +\n> +\tif (dir == DMA_DEV_TO_MEM) {\n> +\t\tburst    = sconfig->src_maxburst;\n> +\t\tahb_addr = sconfig->src_addr;\n> +\t} else {\n> +\t\tburst    = sconfig->dst_maxburst;\n> +\t\tahb_addr = sconfig->dst_addr;\n> +\t}\n> +\n> +\tswitch (burst) {\n> +\tcase 1: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_1; break;\n> +\tcase 4: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_4; break;\n> +\tcase 8: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_8; break;\n\npls make this statement and break on subsequent lines, readability matters\n\n> +\tdefault:\n> +\t\treturn -EINVAL;\n> +\t}\n> +\n> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n> +\n> +\twritel_relaxed(ahb_seq,\n> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n> +\n> +\twritel_relaxed(ahb_addr,\n> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n\noh no, you don't write to HW here. This can be called anytime while you have a\ntxn running! You should save these and use them in prep_ calls.\n\n> +static int tegra_ahbdma_remove(struct platform_device *pdev)\n> +{\n> +\tstruct tegra_ahbdma *tdma = platform_get_drvdata(pdev);\n> +\n> +\tof_dma_controller_free(pdev->dev.of_node);\n> +\tdma_async_device_unregister(&tdma->dma_dev);\n> +\tclk_disable_unprepare(tdma->clk);\n\nnot ensuring tasklets are killed and irq is freed so no more tasklets can\nrun? 
I think that needs to be done...\n\n> +MODULE_DESCRIPTION(\"NVIDIA Tegra AHB DMA Controller driver\");\n> +MODULE_AUTHOR(\"Dmitry Osipenko <digetx@gmail.com>\");\n> +MODULE_LICENSE(\"GPL\");\n\nMODULE_ALIAS?","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y2q7M70Y5z9t38\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 28 Sep 2017 19:26:07 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1752528AbdI1JZz (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tThu, 28 Sep 2017 05:25:55 -0400","from mga14.intel.com ([192.55.52.115]:15114 \"EHLO mga14.intel.com\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1752521AbdI1JZy (ORCPT <rfc822;linux-tegra@vger.kernel.org>);\n\tThu, 28 Sep 2017 05:25:54 -0400","from fmsmga002.fm.intel.com ([10.253.24.26])\n\tby fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t28 Sep 2017 02:25:53 -0700","from vkoul-udesk7.iind.intel.com (HELO localhost) ([10.223.84.143])\n\tby fmsmga002.fm.intel.com with ESMTP; 28 Sep 2017 02:25:50 -0700"],"X-ExtLoop1":"1","X-IronPort-AV":"E=Sophos;i=\"5.42,449,1500966000\"; d=\"scan'208\";a=\"1224714116\"","Date":"Thu, 28 Sep 2017 14:59:49 +0530","From":"Vinod Koul <vinod.koul@intel.com>","To":"Dmitry Osipenko <digetx@gmail.com>","Cc":"Thierry Reding <thierry.reding@gmail.com>,\n\tJonathan Hunter <jonathanh@nvidia.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\tPeter De Schrijver <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette 
<mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>,\n\tRob Herring <robh+dt@kernel.org>, linux-tegra@vger.kernel.org,\n\tdevicetree@vger.kernel.org, dmaengine@vger.kernel.org,\n\tlinux-clk@vger.kernel.org, linux-kernel@vger.kernel.org","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","Message-ID":"<20170928092949.GB30097@localhost>","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=us-ascii","Content-Disposition":"inline","In-Reply-To":"<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>","User-Agent":"Mutt/1.5.24 (2015-08-30)","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1776991,"web_url":"http://patchwork.ozlabs.org/comment/1776991/","msgid":"<8893ef75-a4c0-e918-a1f2-f374f318f9e0@gmail.com>","list_archive_url":null,"date":"2017-09-28T12:17:52","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":18124,"url":"http://patchwork.ozlabs.org/api/people/18124/","name":"Dmitry Osipenko","email":"digetx@gmail.com"},"content":"On 28.09.2017 12:29, Vinod Koul wrote:\n> On Tue, Sep 26, 2017 at 02:22:05AM +0300, Dmitry Osipenko wrote:\n> \n>> +config TEGRA20_AHB_DMA\n>> +\ttristate \"NVIDIA Tegra20 AHB DMA support\"\n>> +\tdepends on ARCH_TEGRA\n> \n> Can we add COMPILE_TEST, helps me compile drivers\n> \n\nGood point.\n\n>> +#include <linux/clk.h>\n>> +#include <linux/delay.h>\n>> +#include <linux/interrupt.h>\n>> +#include <linux/io.h>\n>> +#include <linux/module.h>\n>> +#include <linux/of_device.h>\n>> +#include <linux/of_dma.h>\n>> +#include <linux/platform_device.h>\n>> +#include <linux/reset.h>\n>> +#include <linux/slab.h>\n>> +#include <linux/spinlock.h>\n> \n> no vchan.h, so i 
presume we are not using that here, any reason why?\n> \n\nJon Hunter asked the same question, I already reworked driver to use the\nvirt-dma. Turned out it is a really neat helper, -100 lines of driver code.\n\n>> +\n>> +#include \"dmaengine.h\"\n>> +\n>> +#define TEGRA_AHBDMA_CMD\t\t\t0x0\n>> +#define TEGRA_AHBDMA_CMD_ENABLE\t\t\tBIT(31)\n>> +\n>> +#define TEGRA_AHBDMA_IRQ_ENB_MASK\t\t0x20\n>> +#define TEGRA_AHBDMA_IRQ_ENB_CH(ch)\t\tBIT(ch)\n>> +\n>> +#define TEGRA_AHBDMA_CHANNEL_BASE(ch)\t\t(0x1000 + (ch) * 0x20)\n>> +\n>> +#define TEGRA_AHBDMA_CHANNEL_CSR\t\t0x0\n>> +#define TEGRA_AHBDMA_CHANNEL_ADDR_WRAP\t\tBIT(18)\n>> +#define TEGRA_AHBDMA_CHANNEL_FLOW\t\tBIT(24)\n>> +#define TEGRA_AHBDMA_CHANNEL_ONCE\t\tBIT(26)\n>> +#define TEGRA_AHBDMA_CHANNEL_DIR_TO_XMB\t\tBIT(27)\n>> +#define TEGRA_AHBDMA_CHANNEL_IE_EOC\t\tBIT(30)\n>> +#define TEGRA_AHBDMA_CHANNEL_ENABLE\t\tBIT(31)\n>> +#define TEGRA_AHBDMA_CHANNEL_REQ_SEL_SHIFT\t16\n>> +#define TEGRA_AHBDMA_CHANNEL_WCOUNT_MASK\t0xFFFC\n> \n> GENMASK() ?\n> \n\nOkay.\n\n>> +static void tegra_ahbdma_tasklet(unsigned long data)\n>> +{\n>> +\tstruct tegra_ahbdma_tx_desc *tx = (struct tegra_ahbdma_tx_desc *)data;\n>> +\tstruct dma_async_tx_descriptor *desc = &tx->desc;\n>> +\n>> +\tdmaengine_desc_get_callback_invoke(desc, NULL);\n>> +\n>> +\tif (!tx->cyclic && !dmaengine_desc_test_reuse(desc))\n>> +\t\tkfree(tx);\n> \n> lot of code here can be reduced if we use vchan\n> \n\n+1\n\n>> +static struct dma_async_tx_descriptor *tegra_ahbdma_prep_dma_cyclic(\n>> +\t\t\t\t\tstruct dma_chan *chan,\n>> +\t\t\t\t\tdma_addr_t buf_addr,\n>> +\t\t\t\t\tsize_t buf_len,\n>> +\t\t\t\t\tsize_t period_len,\n>> +\t\t\t\t\tenum dma_transfer_direction dir,\n>> +\t\t\t\t\tunsigned long flags)\n>> +{\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\n>> +\t/* unimplemented */\n>> +\tif (buf_len != period_len || buf_len > SZ_64K)\n>> +\t\treturn NULL;\n>> +\n>> +\ttx = kzalloc(sizeof(*tx), GFP_KERNEL);\n>> +\tif (!tx)\n>> +\t\treturn NULL;\n>> +\n>> 
+\tdma_async_tx_descriptor_init(&tx->desc, chan);\n>> +\n>> +\ttx->desc.tx_submit\t= tegra_ahbdma_tx_submit;\n>> +\ttx->mem_paddr\t\t= buf_addr;\n>> +\ttx->size\t\t= buf_len;\n>> +\ttx->flags\t\t= flags;\n>> +\ttx->cyclic\t\t= true;\n>> +\ttx->dir\t\t\t= dir;\n>> +\n>> +\ttasklet_init(&tx->tasklet, tegra_ahbdma_tasklet, (unsigned long)tx);\n> \n> why not precalculate the register settings here? While submitting you are in\n> the hot path keeping the dmaengine idle, so the faster you can submit, the better the perf\n> \n\nI may argue that the perf impact isn't measurable, but I agree that a\nprecalculated register value would be a bit cleaner. Thanks for the suggestion.\n\n>> +static void tegra_ahbdma_issue_pending(struct dma_chan *chan)\n>> +{\n>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\tstruct list_head *entry, *tmp;\n>> +\tunsigned long flags;\n>> +\n>> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n>> +\n>> +\tlist_for_each_safe(entry, tmp, &ahbdma_chan->pending_list)\n>> +\t\tlist_move_tail(entry, &ahbdma_chan->active_list);\n>> +\n>> +\tif (completion_done(&ahbdma_chan->idling)) {\n>> +\t\ttx = list_first_entry_or_null(&ahbdma_chan->active_list,\n>> +\t\t\t\t\t      struct tegra_ahbdma_tx_desc,\n>> +\t\t\t\t\t      node);\n>> +\t\tif (tx) {\n>> +\t\t\ttegra_ahbdma_submit_tx(ahbdma_chan, tx);\n> \n> what if the chan is already running?\n> \n\nIt can't run here; we just checked whether it is idling. 
That would be a HW bug.\n\n>> +\t\t\treinit_completion(&ahbdma_chan->idling);\n>> +\t\t}\n>> +\t}\n>> +\n>> +\tspin_unlock_irqrestore(&ahbdma_chan->lock, flags);\n>> +}\n>> +\n>> +static enum dma_status tegra_ahbdma_tx_status(struct dma_chan *chan,\n>> +\t\t\t\t\t      dma_cookie_t cookie,\n>> +\t\t\t\t\t      struct dma_tx_state *state)\n>> +{\n>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>> +\tstruct tegra_ahbdma_tx_desc *tx;\n>> +\tenum dma_status cookie_status;\n>> +\tunsigned long flags;\n>> +\tsize_t residual;\n>> +\tu32 status;\n>> +\n>> +\tspin_lock_irqsave(&ahbdma_chan->lock, flags);\n>> +\n>> +\tcookie_status = dma_cookie_status(chan, cookie, state);\n>> +\tif (cookie_status != DMA_COMPLETE) {\n> \n> residue can be NULL so check it before proceeding ahead\n> \n\nYeah, I noticed it too and fixed it in the upcoming V2 yesterday.\n\n>> +static int tegra_ahbdma_config(struct dma_chan *chan,\n>> +\t\t\t       struct dma_slave_config *sconfig)\n>> +{\n>> +\tstruct tegra_ahbdma_chan *ahbdma_chan = to_ahbdma_chan(chan);\n>> +\tenum dma_transfer_direction dir = sconfig->direction;\n>> +\tu32 burst, ahb_seq, ahb_addr;\n>> +\n>> +\tif (sconfig->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||\n>> +\t    sconfig->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)\n>> +\t\treturn -EINVAL;\n>> +\n>> +\tif (dir == DMA_DEV_TO_MEM) {\n>> +\t\tburst    = sconfig->src_maxburst;\n>> +\t\tahb_addr = sconfig->src_addr;\n>> +\t} else {\n>> +\t\tburst    = sconfig->dst_maxburst;\n>> +\t\tahb_addr = sconfig->dst_addr;\n>> +\t}\n>> +\n>> +\tswitch (burst) {\n>> +\tcase 1: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_1; break;\n>> +\tcase 4: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_4; break;\n>> +\tcase 8: burst = TEGRA_AHBDMA_CHANNEL_AHB_BURST_8; break;\n> \n> pls make this statement and break on subsequent lines, readability matters\n> \n\nOkay.\n\n>> +\tdefault:\n>> +\t\treturn -EINVAL;\n>> +\t}\n>> +\n>> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n>> 
+\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n>> +\n>> +\twritel_relaxed(ahb_seq,\n>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n>> +\n>> +\twritel_relaxed(ahb_addr,\n>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n> \n> oh no, you don't write to HW here. This can be called anytime while you have a\n> txn running! You should save these and use them in prep_ calls.\n> \n\nOkay.\n\n>> +static int tegra_ahbdma_remove(struct platform_device *pdev)\n>> +{\n>> +\tstruct tegra_ahbdma *tdma = platform_get_drvdata(pdev);\n>> +\n>> +\tof_dma_controller_free(pdev->dev.of_node);\n>> +\tdma_async_device_unregister(&tdma->dma_dev);\n>> +\tclk_disable_unprepare(tdma->clk);\n> \n> not ensuring tasklets are killed and irq is freed so no more tasklets can\n> run? I think that needs to be done...\n> \n\nAlready fixed in V2 by using vchan_synchronize(), which kills the tasklet in\ntegra_ahbdma_synchronize(). The DMA core invokes synchronization when channel\nresources are freed.\n\n>> +MODULE_DESCRIPTION(\"NVIDIA Tegra AHB DMA Controller driver\");\n>> +MODULE_AUTHOR(\"Dmitry Osipenko <digetx@gmail.com>\");\n>> +MODULE_LICENSE(\"GPL\");\n> \n> MODULE_ALIAS?\n> \n\nNot needed, the driver is \"OF-only\". 
Its default alias is \"tegra20-ahb-dma\".","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n\tunprotected) header.d=gmail.com header.i=@gmail.com\n\theader.b=\"tbBiEu1y\"; dkim-atps=neutral"],"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y2txw42gTz9tXN\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 28 Sep 2017 22:18:12 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1752965AbdI1MR7 (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tThu, 28 Sep 2017 08:17:59 -0400","from mail-wr0-f194.google.com ([209.85.128.194]:46224 \"EHLO\n\tmail-wr0-f194.google.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1752910AbdI1MR5 (ORCPT\n\t<rfc822;linux-tegra@vger.kernel.org>);\n\tThu, 28 Sep 2017 08:17:57 -0400","by mail-wr0-f194.google.com with SMTP id o42so2321815wrb.3;\n\tThu, 28 Sep 2017 05:17:56 -0700 (PDT)","from [192.168.1.145] (ppp109-252-90-109.pppoe.spdop.ru.\n\t[109.252.90.109]) by smtp.googlemail.com with ESMTPSA id\n\ti123sm267646lji.92.2017.09.28.05.17.53\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tThu, 28 Sep 2017 05:17:54 -0700 (PDT)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=gmail.com; s=20161025;\n\th=subject:to:cc:references:from:message-id:date:user-agent\n\t:mime-version:in-reply-to:content-language:content-transfer-encoding; 
\n\tbh=sEHUk9UlBcr+iC43FnS6FrNECG2jNzXmz6Ybkhvk3zA=;\n\tb=tbBiEu1y7h5VPVj5/YaACcN2lE/YsNb/ptzXi4NzHyCVK2PprKBxLoP1TesjcTxH1X\n\tFrU88kxHdst90LY1PFrR7XBGb7SMxVPtSxdkjQh7ta84WLf0gUfNYvYnNb+tq3uvRcpu\n\tTySWre0YvK7Xh+mTX5WSWXkIbHXDakPky6w7EAngZfrkpZ7DEujeiwR6ksw0A8cnhXMm\n\t1+5OGySPeHNBDvEfMYDuXPAdZkbE0I8EMLIJLg6c/THL9334uDAgVTnlpgU36CZZa9h5\n\tgQUaQS4Cg6qwiPxDuTb3qe4BJxh/O0de0AQ97MmZ/jFmD9sAqgCzDo5s950TpKK49F1w\n\tA/YA==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:subject:to:cc:references:from:message-id:date\n\t:user-agent:mime-version:in-reply-to:content-language\n\t:content-transfer-encoding;\n\tbh=sEHUk9UlBcr+iC43FnS6FrNECG2jNzXmz6Ybkhvk3zA=;\n\tb=BDMdleVlbbj9GlKNI8jL4//PqwmRGxLzd6l8x4GHnsWhrb5J72e2zpUrNx18p+ZbnY\n\tLWGFeoNZ0j7SVQh9e6qk+otqjjv9EkU5CIh8Zi/DeTrVvpqMnoGZciIjiC0h6Aok3Ria\n\tQwt3G/ExdSXvRbH+pNp304z7eJpu/wDQoqPebXG8g3H/y/hy92u5sepZ9zWNh4nex46f\n\tTOunvWPNuWU8U3jy93/xc37mudp3wMPRdyTNroErGhibtLoCZ92Jlq8hfMAV45/sIdpo\n\t+sbTU7daOH/jA0QwyCHz/H1fY7suFX0AsmvHtMEMVe/hXRvd9kM/EldLrUIQLWQEHY/Q\n\tPRpw==","X-Gm-Message-State":"AHPjjUiNx5Y+d5TxZL94KtetEWfHaaps1W1E57lxibw7scQkpP/5p0iB\n\tiQSRWa4FGcskgAX8gjG0I5y+JCfj","X-Google-Smtp-Source":"AOwi7QDPAKkrqf9pHKlA4vSlgf22Ckh8cvSO/zKuIEvD3/kyOqEdFKwO7Ms+axeH9lMp9mIaAIeUYw==","X-Received":"by 10.25.217.213 with SMTP id s82mr150113lfi.176.1506601075020; \n\tThu, 28 Sep 2017 05:17:55 -0700 (PDT)","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","To":"Vinod Koul <vinod.koul@intel.com>","Cc":"Thierry Reding <thierry.reding@gmail.com>,\n\tJonathan Hunter <jonathanh@nvidia.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\tPeter De Schrijver <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette <mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>,\n\tRob Herring <robh+dt@kernel.org>, linux-tegra@vger.kernel.org,\n\tdevicetree@vger.kernel.org, 
dmaengine@vger.kernel.org,\n\tlinux-clk@vger.kernel.org, linux-kernel@vger.kernel.org","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>\n\t<20170928092949.GB30097@localhost>","From":"Dmitry Osipenko <digetx@gmail.com>","Message-ID":"<8893ef75-a4c0-e918-a1f2-f374f318f9e0@gmail.com>","Date":"Thu, 28 Sep 2017 15:17:52 +0300","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101\n\tThunderbird/52.3.0","MIME-Version":"1.0","In-Reply-To":"<20170928092949.GB30097@localhost>","Content-Type":"text/plain; charset=utf-8","Content-Language":"en-US","Content-Transfer-Encoding":"7bit","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1777046,"web_url":"http://patchwork.ozlabs.org/comment/1777046/","msgid":"<b601b829-d87f-99e0-dcf9-ad3f9a7195df@gmail.com>","list_archive_url":null,"date":"2017-09-28T14:06:03","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":18124,"url":"http://patchwork.ozlabs.org/api/people/18124/","name":"Dmitry Osipenko","email":"digetx@gmail.com"},"content":"On 28.09.2017 12:29, Vinod Koul wrote:\n>> +\tdefault:\n>> +\t\treturn -EINVAL;\n>> +\t}\n>> +\n>> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n>> +\n>> +\twritel_relaxed(ahb_seq,\n>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n>> +\n>> +\twritel_relaxed(ahb_addr,\n>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n> \n> oh no, you don't write to HW here. This can be called anytime when you have\n> txn running! You should save these and use them in prep_ calls.\n> \n\nBTW, some of the DMA drivers have exactly the same problem. 
I now see that it is\nactually documented explicitly in provider.txt, but that's inconsistent across\nthe actual drivers.","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n\tunprotected) header.d=gmail.com header.i=@gmail.com\n\theader.b=\"YfEfP0rR\"; dkim-atps=neutral"],"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y2xLY1SFRz9t3B\n\tfor <incoming@patchwork.ozlabs.org>;\n\tFri, 29 Sep 2017 00:06:13 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1752156AbdI1OGK (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tThu, 28 Sep 2017 10:06:10 -0400","from mail-wr0-f179.google.com ([209.85.128.179]:53787 \"EHLO\n\tmail-wr0-f179.google.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1750759AbdI1OGI (ORCPT\n\t<rfc822;linux-tegra@vger.kernel.org>);\n\tThu, 28 Sep 2017 10:06:08 -0400","by mail-wr0-f179.google.com with SMTP id 54so2885648wrz.10;\n\tThu, 28 Sep 2017 07:06:06 -0700 (PDT)","from [192.168.1.145] (ppp109-252-90-109.pppoe.spdop.ru.\n\t[109.252.90.109]) by smtp.googlemail.com with ESMTPSA id\n\tf87sm220622lfi.89.2017.09.28.07.06.04\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tThu, 28 Sep 2017 07:06:04 -0700 (PDT)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=gmail.com; s=20161025;\n\th=subject:to:cc:references:from:message-id:date:user-agent\n\t:mime-version:in-reply-to:content-language:content-transfer-encoding; 
\n\tbh=evLgY/cOkpnrYVNlE0weqpEZcziu1ipAziszSFsPX2g=;\n\tb=YfEfP0rR+wcCP2ckw4PXlkIzMb7rJ6yuyeMQYVAplohjWywX0jyqgE1ReLcambztRi\n\tGtplJGedLers/gM/hwsm7KLTYiHxBMxJuPXbphtrqn32eCovmGmF2/J7DAwUmB+1mD6g\n\tckc6gnrvdqOvhhqrCB0RwAbxJjicI6oixzgxZUDkpnJcq6tLvaYMz//Vwmo4Upl4zI8R\n\tCiswa66TXCf18ldrvBS2Z1xwzeEYKjg6DQXvkTdrhVVi8XN0EbgjySQ/KUTns57aAIVq\n\tmGujNrr3LxE3Uj5yeixbBwS6bh4IHes1awoVDtLc2w0t004KCj8JjY6hFuiYXHsCunJW\n\tKAbA==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:subject:to:cc:references:from:message-id:date\n\t:user-agent:mime-version:in-reply-to:content-language\n\t:content-transfer-encoding;\n\tbh=evLgY/cOkpnrYVNlE0weqpEZcziu1ipAziszSFsPX2g=;\n\tb=Z9AGJF54EQib3swICi1qgW5M7oPqPop1LCtzSr8HRLNM+E1GtAWucwL6ERkydaQ0Vg\n\tAAXXQtS/UNlsTY3Q6SEa9c1kSGPd+kQE6PyH3Kc10WZEfBaXtbnfTB+CQJcxesoZfMS+\n\t4H4JqM3w+dzU53GPA+rI2d8ctnfQWRbc4QJACZnN1P/ev6E8ukvoJJjKbvaRJCyV4ByB\n\to4kY+TyO7HWuXZs7bf3HVqTqjfDOb3UMdWr8NbCMKCrPBR6ADwjoIMg5G0mcsRUh6h8E\n\tTYDrnadborTs3LEOP1wkeZxXPKI9OKkMAJf3AI9ihu75/DGaIItH3yyYgPhE0nrFscWo\n\tCttQ==","X-Gm-Message-State":"AMCzsaWlKx2c4GGXLxDJ19t7nLVDszloKF5FYALw98BhClqC+cVBMMCm\n\tEZNfwu5ITAokZWm1et0S7+Y8mp5x","X-Google-Smtp-Source":"AOwi7QAz3Qvm4mqAxn2kBpl4/OizIBLTikVQIsUbmMZtUKmh8e2fQBq/ObFw7L9q8fNZuIvPJyYB1g==","X-Received":"by 10.25.149.131 with SMTP id x125mr237981lfd.231.1506607565787; \n\tThu, 28 Sep 2017 07:06:05 -0700 (PDT)","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","To":"Vinod Koul <vinod.koul@intel.com>","Cc":"Thierry Reding <thierry.reding@gmail.com>,\n\tJonathan Hunter <jonathanh@nvidia.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\tPeter De Schrijver <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette <mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>,\n\tRob Herring <robh+dt@kernel.org>, linux-tegra@vger.kernel.org,\n\tdevicetree@vger.kernel.org, 
dmaengine@vger.kernel.org,\n\tlinux-clk@vger.kernel.org, linux-kernel@vger.kernel.org","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>\n\t<20170928092949.GB30097@localhost>","From":"Dmitry Osipenko <digetx@gmail.com>","Message-ID":"<b601b829-d87f-99e0-dcf9-ad3f9a7195df@gmail.com>","Date":"Thu, 28 Sep 2017 17:06:03 +0300","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101\n\tThunderbird/52.3.0","MIME-Version":"1.0","In-Reply-To":"<20170928092949.GB30097@localhost>","Content-Type":"text/plain; charset=utf-8","Content-Language":"en-US","Content-Transfer-Encoding":"7bit","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1777071,"web_url":"http://patchwork.ozlabs.org/comment/1777071/","msgid":"<260fa409-0d07-ec9e-9e3b-fb08255026d8@gmail.com>","list_archive_url":null,"date":"2017-09-28T14:35:59","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":18124,"url":"http://patchwork.ozlabs.org/api/people/18124/","name":"Dmitry Osipenko","email":"digetx@gmail.com"},"content":"On 28.09.2017 17:06, Dmitry Osipenko wrote:\n> On 28.09.2017 12:29, Vinod Koul wrote:\n>>> +\tdefault:\n>>> +\t\treturn -EINVAL;\n>>> +\t}\n>>> +\n>>> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n>>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n>>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n>>> +\n>>> +\twritel_relaxed(ahb_seq,\n>>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n>>> +\n>>> +\twritel_relaxed(ahb_addr,\n>>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n>>\n>> oh no, you don't write to HW here. This can be called anytime when you have\n>> txn running! 
You should save these and use them in prep_ calls.\n>>\n> \n> BTW, some of the DMA drivers have exactly the same problem. I now see that it is\n> actually documented explicitly in provider.txt, but that's inconsistent across\n> the actual drivers.\n> \n\nAlso, shouldn't prep_ and dma_slave_config be protected with locking? I don't\nsee DMA core doing any locking and seems none of the drivers too.","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n\tunprotected) header.d=gmail.com header.i=@gmail.com\n\theader.b=\"VfOAOtug\"; dkim-atps=neutral"],"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y2y1J3wwxz9tXd\n\tfor <incoming@patchwork.ozlabs.org>;\n\tFri, 29 Sep 2017 00:36:20 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1753200AbdI1OgF (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tThu, 28 Sep 2017 10:36:05 -0400","from mail-wr0-f181.google.com ([209.85.128.181]:46917 \"EHLO\n\tmail-wr0-f181.google.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1752385AbdI1OgD (ORCPT\n\t<rfc822;linux-tegra@vger.kernel.org>);\n\tThu, 28 Sep 2017 10:36:03 -0400","by mail-wr0-f181.google.com with SMTP id o42so3076299wrb.3;\n\tThu, 28 Sep 2017 07:36:02 -0700 (PDT)","from [192.168.1.145] (ppp109-252-90-109.pppoe.spdop.ru.\n\t[109.252.90.109]) by smtp.googlemail.com with ESMTPSA id\n\th18sm231925lfl.19.2017.09.28.07.35.59\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tThu, 28 Sep 2017 07:36:00 -0700 (PDT)"],"DKIM-Signature":"v=1; 
a=rsa-sha256; c=relaxed/relaxed;\n\td=gmail.com; s=20161025;\n\th=subject:from:to:cc:references:message-id:date:user-agent\n\t:mime-version:in-reply-to:content-language:content-transfer-encoding; \n\tbh=3S/mE77miJRYR4zA+HeOmt8nYkkM+ut3qXLrnb4mUUc=;\n\tb=VfOAOtug5Qi7XwSL5HLVMT6/ik4zwbeMk2976MSVohcdpQLewQL+dMb4Q94X0f85K6\n\tFJmyHUcjDR20ejNWm24s9IsNqr5ZFGB9ntcTF3ptHKOYY2YJ6djIHhpNZk7AIZfNTsO2\n\t03CrUaoEmRgranV0NAmdEOvDvgo8aDce5DVLGsuKkyiZhV2MTJ28dvzi5z46QaLbjfe3\n\tQ3Z82LDqxwR15igP6AQy2ut7mLBw6+OsKk+JBrA7pE7ZwnIv2lGwfL2GWzwzHbz7eL32\n\tlkoghuPU1kRWrAE0k8rdTI3cISYNSR4GRUG2RDXS+GIiMFXmopArki0BYxDCzTswQnTy\n\tBlaQ==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:subject:from:to:cc:references:message-id:date\n\t:user-agent:mime-version:in-reply-to:content-language\n\t:content-transfer-encoding;\n\tbh=3S/mE77miJRYR4zA+HeOmt8nYkkM+ut3qXLrnb4mUUc=;\n\tb=iHrc4eLfRbCHeG1zsMvc75D42NzNaKWx8XZOt1CNYUPa+HaHhjQHW7v9qXcrIdvkpR\n\tu587Ezbxsk05R3LWgk3VUJccj3Yh7HZ8gD6GwYwsBihUafLo1jw706psj/8kGedWEqB6\n\tEEQwjPe7cHMp5nnmTKIB3e0mDo3w8kxIw1EdnLA+09qrcRaHAuZXgKb9T1K4I9KKuF5r\n\tKQge0freIADfw9qUutGHDKuHAnc7cZUz0KHpOXttAOnuEaV0X1dVaRj1HPz94DqdUJf9\n\tSovO/RhwfVwBz8Trq2u89GomLEM7A2RP6wq2VZrT/oe5XGCwW3J+Q6BEiicxunCk4zBp\n\tsC7A==","X-Gm-Message-State":"AHPjjUiMPnVn6g3rLw2H9Wk3FfPpD5xM3aCjl9zYzeXkw5BAbPjtOB7G\n\t802StJW6ZN2ttoRV1Khosn80HLyy","X-Google-Smtp-Source":"AOwi7QAHP94xGMswLwb409JK8sdPAiGhr9rwZERCtvInjnM7yoOAwMoby9oAQFskgnSY37WQcMCA5A==","X-Received":"by 10.46.93.134 with SMTP id v6mr2209697lje.9.1506609361367;\n\tThu, 28 Sep 2017 07:36:01 -0700 (PDT)","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","From":"Dmitry Osipenko <digetx@gmail.com>","To":"Vinod Koul <vinod.koul@intel.com>","Cc":"Thierry Reding <thierry.reding@gmail.com>,\n\tJonathan Hunter <jonathanh@nvidia.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\tPeter De Schrijver 
<pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette <mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>,\n\tRob Herring <robh+dt@kernel.org>, linux-tegra@vger.kernel.org,\n\tdevicetree@vger.kernel.org, dmaengine@vger.kernel.org,\n\tlinux-clk@vger.kernel.org, linux-kernel@vger.kernel.org","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>\n\t<20170928092949.GB30097@localhost>\n\t<b601b829-d87f-99e0-dcf9-ad3f9a7195df@gmail.com>","Message-ID":"<260fa409-0d07-ec9e-9e3b-fb08255026d8@gmail.com>","Date":"Thu, 28 Sep 2017 17:35:59 +0300","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101\n\tThunderbird/52.3.0","MIME-Version":"1.0","In-Reply-To":"<b601b829-d87f-99e0-dcf9-ad3f9a7195df@gmail.com>","Content-Type":"text/plain; charset=utf-8","Content-Language":"en-US","Content-Transfer-Encoding":"7bit","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1777146,"web_url":"http://patchwork.ozlabs.org/comment/1777146/","msgid":"<20170928162125.GE30097@localhost>","list_archive_url":null,"date":"2017-09-28T16:21:25","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":8232,"url":"http://patchwork.ozlabs.org/api/people/8232/","name":"Vinod Koul","email":"vinod.koul@intel.com"},"content":"On Thu, Sep 28, 2017 at 05:06:03PM +0300, Dmitry Osipenko wrote:\n> On 28.09.2017 12:29, Vinod Koul wrote:\n> >> +\tdefault:\n> >> +\t\treturn -EINVAL;\n> >> +\t}\n> >> +\n> >> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n> >> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n> >> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n> >> +\n> >> +\twritel_relaxed(ahb_seq,\n> >> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n> >> +\n> >> 
+\twritel_relaxed(ahb_addr,\n> >> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n> > \n> > oh no, you don't write to HW here. This can be called anytime when you have\n> > txn running! You should save these and use them in prep_ calls.\n> > \n> \n> BTW, some of the DMA drivers have exactly the same problem. I now see that it is\n> actually documented explicitly in provider.txt, but that's inconsistent across\n> the actual drivers.\n\nyeah they need to be fixed!","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y30Gf6j0rz9t48\n\tfor <incoming@patchwork.ozlabs.org>;\n\tFri, 29 Sep 2017 02:18:02 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751894AbdI1QRg (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tThu, 28 Sep 2017 12:17:36 -0400","from mga14.intel.com ([192.55.52.115]:37375 \"EHLO mga14.intel.com\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1751691AbdI1QRf (ORCPT <rfc822;linux-tegra@vger.kernel.org>);\n\tThu, 28 Sep 2017 12:17:35 -0400","from orsmga002.jf.intel.com ([10.7.209.21])\n\tby fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t28 Sep 2017 09:17:35 -0700","from vkoul-udesk7.iind.intel.com (HELO localhost) ([10.223.84.143])\n\tby orsmga002.jf.intel.com with ESMTP; 28 Sep 2017 09:17:26 -0700"],"X-ExtLoop1":"1","X-IronPort-AV":"E=Sophos;i=\"5.42,450,1500966000\"; d=\"scan'208\";a=\"140557590\"","Date":"Thu, 28 Sep 2017 21:51:25 +0530","From":"Vinod Koul <vinod.koul@intel.com>","To":"Dmitry Osipenko 
<digetx@gmail.com>","Cc":"Thierry Reding <thierry.reding@gmail.com>,\n\tJonathan Hunter <jonathanh@nvidia.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\tPeter De Schrijver <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette <mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>,\n\tRob Herring <robh+dt@kernel.org>, linux-tegra@vger.kernel.org,\n\tdevicetree@vger.kernel.org, dmaengine@vger.kernel.org,\n\tlinux-clk@vger.kernel.org, linux-kernel@vger.kernel.org","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","Message-ID":"<20170928162125.GE30097@localhost>","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>\n\t<20170928092949.GB30097@localhost>\n\t<b601b829-d87f-99e0-dcf9-ad3f9a7195df@gmail.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=us-ascii","Content-Disposition":"inline","In-Reply-To":"<b601b829-d87f-99e0-dcf9-ad3f9a7195df@gmail.com>","User-Agent":"Mutt/1.5.24 (2015-08-30)","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1777150,"web_url":"http://patchwork.ozlabs.org/comment/1777150/","msgid":"<20170928162238.GF30097@localhost>","list_archive_url":null,"date":"2017-09-28T16:22:38","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","submitter":{"id":8232,"url":"http://patchwork.ozlabs.org/api/people/8232/","name":"Vinod Koul","email":"vinod.koul@intel.com"},"content":"On Thu, Sep 28, 2017 at 05:35:59PM +0300, Dmitry Osipenko wrote:\n> On 28.09.2017 17:06, Dmitry Osipenko wrote:\n> > On 28.09.2017 12:29, Vinod Koul wrote:\n> >>> +\tdefault:\n> >>> +\t\treturn -EINVAL;\n> >>> +\t}\n> >>> +\n> >>> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n> >>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n> >>> 
+\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n> >>> +\n> >>> +\twritel_relaxed(ahb_seq,\n> >>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n> >>> +\n> >>> +\twritel_relaxed(ahb_addr,\n> >>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n> >>\n> >> oh no, you don't write to HW here. This can be called anytime when you have\n> >> txn running! You should save these and use them in prep_ calls.\n> >>\n> > \n> > BTW, some of the DMA drivers have exactly the same problem. I now see that it is\n> > actually documented explicitly in provider.txt, but that's inconsistent across\n> > the actual drivers.\n> > \n> \n> Also, shouldn't prep_ and dma_slave_config be protected with locking? I don't\n> see DMA core doing any locking and seems none of the drivers too.\n\nIn prep when you modify the list yes (with vchan I suspect that maybe taken\ncare), but in general yes driver needs to do that","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y30JG186Mz9t30\n\tfor <incoming@patchwork.ozlabs.org>;\n\tFri, 29 Sep 2017 02:19:26 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1752086AbdI1QTG (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tThu, 28 Sep 2017 12:19:06 -0400","from mga11.intel.com ([192.55.52.93]:41876 \"EHLO mga11.intel.com\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1751689AbdI1QTE (ORCPT <rfc822;linux-tegra@vger.kernel.org>);\n\tThu, 28 Sep 2017 12:19:04 -0400","from fmsmga001.fm.intel.com ([10.253.24.23])\n\tby fmsmga102.fm.intel.com 
with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t28 Sep 2017 09:19:03 -0700","from vkoul-udesk7.iind.intel.com (HELO localhost) ([10.223.84.143])\n\tby fmsmga001.fm.intel.com with ESMTP; 28 Sep 2017 09:18:39 -0700"],"X-ExtLoop1":"1","X-IronPort-AV":"E=Sophos;i=\"5.42,450,1500966000\"; d=\"scan'208\";a=\"1200089942\"","Date":"Thu, 28 Sep 2017 21:52:38 +0530","From":"Vinod Koul <vinod.koul@intel.com>","To":"Dmitry Osipenko <digetx@gmail.com>","Cc":"Thierry Reding <thierry.reding@gmail.com>,\n\tJonathan Hunter <jonathanh@nvidia.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\tPeter De Schrijver <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette <mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>,\n\tRob Herring <robh+dt@kernel.org>, linux-tegra@vger.kernel.org,\n\tdevicetree@vger.kernel.org, dmaengine@vger.kernel.org,\n\tlinux-clk@vger.kernel.org, linux-kernel@vger.kernel.org","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","Message-ID":"<20170928162238.GF30097@localhost>","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>\n\t<20170928092949.GB30097@localhost>\n\t<b601b829-d87f-99e0-dcf9-ad3f9a7195df@gmail.com>\n\t<260fa409-0d07-ec9e-9e3b-fb08255026d8@gmail.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=us-ascii","Content-Disposition":"inline","In-Reply-To":"<260fa409-0d07-ec9e-9e3b-fb08255026d8@gmail.com>","User-Agent":"Mutt/1.5.24 (2015-08-30)","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}},{"id":1777155,"web_url":"http://patchwork.ozlabs.org/comment/1777155/","msgid":"<3d7e0b5e-563a-5955-cb06-36ffa1b7e30f@gmail.com>","list_archive_url":null,"date":"2017-09-28T16:37:45","subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB 
DMA\n\tcontroller","submitter":{"id":18124,"url":"http://patchwork.ozlabs.org/api/people/18124/","name":"Dmitry Osipenko","email":"digetx@gmail.com"},"content":"On 28.09.2017 19:22, Vinod Koul wrote:\n> On Thu, Sep 28, 2017 at 05:35:59PM +0300, Dmitry Osipenko wrote:\n>> On 28.09.2017 17:06, Dmitry Osipenko wrote:\n>>> On 28.09.2017 12:29, Vinod Koul wrote:\n>>>>> +\tdefault:\n>>>>> +\t\treturn -EINVAL;\n>>>>> +\t}\n>>>>> +\n>>>>> +\tahb_seq  = burst << TEGRA_AHBDMA_CHANNEL_AHB_BURST_SHIFT;\n>>>>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_ADDR_WRAP;\n>>>>> +\tahb_seq |= TEGRA_AHBDMA_CHANNEL_INTR_ENB;\n>>>>> +\n>>>>> +\twritel_relaxed(ahb_seq,\n>>>>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_SEQ);\n>>>>> +\n>>>>> +\twritel_relaxed(ahb_addr,\n>>>>> +\t\t       ahbdma_chan->regs + TEGRA_AHBDMA_CHANNEL_AHB_PTR);\n>>>>\n>>>> oh no, you don't write to HW here. This can be called anytime when you have\n>>>> txn running! You should save these and use them in prep_ calls.\n>>>>\n>>>\n>>> BTW, some of the DMA drivers have exactly the same problem. I now see that it is\n>>> actually documented explicitly in provider.txt, but that's inconsistent across\n>>> the actual drivers.\n>>>\n>>\n>> Also, shouldn't prep_ and dma_slave_config be protected with locking? I don't\n>> see DMA core doing any locking and seems none of the drivers too.\n> \n> In prep when you modify the list yes (with vchan I suspect that maybe taken\n> care), but in general yes driver needs to do that\n> \n\nI meant that one CPU could modify channels config, while other CPU is preparing\nthe new TX using config that is in process of the modification. 
On the other\nhand, this looks like something that DMA client should take care of.","headers":{"Return-Path":"<linux-tegra-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-tegra-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n\tunprotected) header.d=gmail.com header.i=@gmail.com\n\theader.b=\"WNFyw3jv\"; dkim-atps=neutral"],"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3y30jY2Sl2z9t3R\n\tfor <incoming@patchwork.ozlabs.org>;\n\tFri, 29 Sep 2017 02:37:53 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751595AbdI1Qhv (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tThu, 28 Sep 2017 12:37:51 -0400","from mail-wr0-f172.google.com ([209.85.128.172]:43190 \"EHLO\n\tmail-wr0-f172.google.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1750942AbdI1Qhu (ORCPT\n\t<rfc822;linux-tegra@vger.kernel.org>);\n\tThu, 28 Sep 2017 12:37:50 -0400","by mail-wr0-f172.google.com with SMTP id a43so3753013wrc.0;\n\tThu, 28 Sep 2017 09:37:49 -0700 (PDT)","from [192.168.1.145] (ppp109-252-90-109.pppoe.spdop.ru.\n\t[109.252.90.109]) by smtp.googlemail.com with ESMTPSA id\n\tp9sm370680lja.65.2017.09.28.09.37.46\n\t(version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tThu, 28 Sep 2017 09:37:47 -0700 (PDT)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=gmail.com; s=20161025;\n\th=subject:to:cc:references:from:message-id:date:user-agent\n\t:mime-version:in-reply-to:content-language:content-transfer-encoding; 
\n\tbh=3DwLcKRCwvhP/W7C1VBqdCBH9VglNKp794Yf2WC/9CM=;\n\tb=WNFyw3jvb22y6exWrleInzRLuobwVzZWRb8hfTsRaF+Fd279NoE+s8J2A9k5fmlpGt\n\toamcWxOKJeAxJ5Jf3tU8x9xSlRm3WRaYYgoVKc0UVQ2Aqp4NjTaYVD1tkAr8Bc6tq9Y/\n\tDevRxjWWRuvmsd2XkzfJjSROuT8EPFPKfVdbqf0WBPwssE6U2Zp48Xpmx0O30UFM326J\n\tyncmBkjy8ugfHlkQn9Yqra2NWmVFET8ArH2umgLQPGegmIdI51cIDI46y/gp6Bk4R+H4\n\twnfwo5GmoCNaCWjiMsuVqL35OJio6PPZpLf8rtOJHvhFMjP1Vy4t/D6+Ch0cdK5/tX8H\n\tULGg==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:subject:to:cc:references:from:message-id:date\n\t:user-agent:mime-version:in-reply-to:content-language\n\t:content-transfer-encoding;\n\tbh=3DwLcKRCwvhP/W7C1VBqdCBH9VglNKp794Yf2WC/9CM=;\n\tb=K9cBUXnLPI/Plij9qA1Q/dTq+NAkMANkBcX4hcErl2n053hGuxLtS09DzyBp6Nob6W\n\tVCFEqNpFiboGoHHFk1unSrfAxXkvzCg8cLdETDp2zqh5E9hQ3tg9XWIF62Iki9FYfxzm\n\tnLHeyw4GWmep0C6SffqzHUDcv1Y1sXTGPuR4PivbQlcJWt3oA4zFZoR8IxGxDnmbL/0i\n\t+rayCG9QzZ9Tl+zH9rh4Nfmr3dFg9irvaCLfH1Y1tsOv32mqEwGDZeVkXWqqIYrToiPy\n\thBunXTmRZKj66DvPnPWR5zSYyunC2VrTMwpFLfl/ePrH4Q6FbEQxDF+c2XPspZnXVnOP\n\tIYAw==","X-Gm-Message-State":"AHPjjUh0CIKiqOuXF79CEg2xThCGsdsI1g5txDLH2QSmBL/BUeeyTMrO\n\t3BM/+DFxziOlqxPWeXHe8OxUhJjf","X-Google-Smtp-Source":"AOwi7QBRLlKzPz6ahgNxRtDyPZ57V0Qtg8FCzSHXRGhSJ4yDfTnmbbZ5Tu6foTRNCoqzIVLt1d9pfg==","X-Received":"by 10.25.0.144 with SMTP id 138mr432930lfa.64.1506616667857;\n\tThu, 28 Sep 2017 09:37:47 -0700 (PDT)","Subject":"Re: [PATCH v1 4/5] dmaengine: Add driver for NVIDIA Tegra AHB DMA\n\tcontroller","To":"Vinod Koul <vinod.koul@intel.com>","Cc":"Thierry Reding <thierry.reding@gmail.com>,\n\tJonathan Hunter <jonathanh@nvidia.com>,\n\tLaxman Dewangan <ldewangan@nvidia.com>,\n\tPeter De Schrijver <pdeschrijver@nvidia.com>,\n\tPrashant Gaikwad <pgaikwad@nvidia.com>,\n\tMichael Turquette <mturquette@baylibre.com>,\n\tStephen Boyd <sboyd@codeaurora.org>,\n\tRob Herring <robh+dt@kernel.org>, linux-tegra@vger.kernel.org,\n\tdevicetree@vger.kernel.org, 
dmaengine@vger.kernel.org,\n\tlinux-clk@vger.kernel.org, linux-kernel@vger.kernel.org","References":"<cover.1506380746.git.digetx@gmail.com>\n\t<0a45e058baba72124b91c663ce1d908d275f4044.1506380746.git.digetx@gmail.com>\n\t<20170928092949.GB30097@localhost>\n\t<b601b829-d87f-99e0-dcf9-ad3f9a7195df@gmail.com>\n\t<260fa409-0d07-ec9e-9e3b-fb08255026d8@gmail.com>\n\t<20170928162238.GF30097@localhost>","From":"Dmitry Osipenko <digetx@gmail.com>","Message-ID":"<3d7e0b5e-563a-5955-cb06-36ffa1b7e30f@gmail.com>","Date":"Thu, 28 Sep 2017 19:37:45 +0300","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101\n\tThunderbird/52.3.0","MIME-Version":"1.0","In-Reply-To":"<20170928162238.GF30097@localhost>","Content-Type":"text/plain; charset=utf-8","Content-Language":"en-US","Content-Transfer-Encoding":"7bit","Sender":"linux-tegra-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-tegra.vger.kernel.org>","X-Mailing-List":"linux-tegra@vger.kernel.org"}}]