From patchwork Tue May 3 22:50:39 2022
From: Serge Semin
Subject: [PATCH v2 01/26] dma-direct: take dma-ranges/offsets into account in resource mapping
Date: Wed, 4 May 2022 01:50:39 +0300
Message-ID: <20220503225104.12108-2-Sergey.Semin@baikalelectronics.ru>

A basic device-specific linear memory mapping was introduced back in
commit 25f1e1887088 ("dma: Take into account dma_pfn_offset") as a
single-valued offset preserved in the device.dma_pfn_offset field and
initialized, for instance, from the "dma-ranges" DT property. The
functionality was later extended to support more than one
device-specific region defined in the device.dma_range_map list of maps.
But all of these improvements concerned only the single-pointer, page
and sg DMA-mapping methods, while the system resource mapping function
missed the corresponding modification. Thus the dma_direct_map_resource()
method currently just casts the CPU physical address to the device DMA
address with no dma-ranges-based translation taken into account, which
is obviously wrong. Fix it by using the phys_to_dma_direct() method to
get the device-specific bus address from the passed memory resource for
the directly mapped DMA case.

Fixes: 25f1e1887088 ("dma: Take into account dma_pfn_offset")
Signed-off-by: Serge Semin
---
 kernel/dma/direct.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 9743c6ccce1a..bc06db74dfdb 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -497,7 +497,7 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t dma_addr = paddr;
+	dma_addr_t dma_addr = phys_to_dma_direct(dev, paddr);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, false))) {
 		dev_err_once(dev,
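
For context, here is a minimal consumer-side sketch of what the change affects.
The device, FIFO address and "dma-ranges" offset below are made-up assumptions,
not taken from the patch; the point is only that dma_map_resource(), whose
direct-mapping backend is the function fixed above, now applies the device's
dma_range_map offset (when direct DMA mapping is in use) instead of returning
the CPU physical address unchanged.

#include <linux/dma-mapping.h>

/* Hypothetical platform: the device sits behind a bus described with
 * dma-ranges = <0x0 0x40000000 0x40000000>, i.e. CPU address 0x40000000
 * shows up on the device bus at address 0x0.
 */
static int example_map_fifo(struct device *dev, phys_addr_t fifo_phys)
{
	dma_addr_t dma_addr;

	/* Before the fix: dma_addr == fifo_phys (the offset is ignored).
	 * After the fix: dma_addr == fifo_phys - 0x40000000, derived from
	 * the device's dma_range_map.
	 */
	dma_addr = dma_map_resource(dev, fifo_phys, 4, DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dev, dma_addr))
		return -ENOMEM;

	/* ... hand dma_addr to a DMA engine as the device-visible address ... */

	dma_unmap_resource(dev, dma_addr, 4, DMA_BIDIRECTIONAL, 0);

	return 0;
}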

From patchwork Tue May 3 22:50:40 2022
From: Serge Semin
Subject: [PATCH v2 02/26] dmaengine: Fix dma_slave_config.dst_addr description
Date: Wed, 4 May 2022 01:50:40 +0300
Message-ID: <20220503225104.12108-3-Sergey.Semin@baikalelectronics.ru>

Most likely due to a copy-paste mistake, the dst_addr member of the
dma_slave_config structure has been described as ignored if the *source*
address belongs to memory. That condition is relevant to the src_addr
field, while dst_addr, which holds a destination device address, is
supposed to be ignored if the destination is the CPU memory. Fix the
field description accordingly.

Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
---
 include/linux/dmaengine.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 842d4f7ca752..f204ea16ac1c 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -395,7 +395,7 @@ enum dma_slave_buswidth {
  *	should be read (RX), if the source is memory this argument is
  *	ignored.
  * @dst_addr: this is the physical address where DMA slave data
- *	should be written (TX), if the source is memory this argument
+ *	should be written (TX), if the destination is memory this argument
  *	is ignored.
  * @src_addr_width: this is the width in bytes of the source (RX)
  *	register where DMA data shall be read. If the source
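
To make the corrected semantics concrete, here is a minimal, hypothetical
slave-channel configuration sketch; the FIFO addresses and channel handles are
placeholders, not values from the patch. For DMA_DEV_TO_MEM only src_addr (the
device side) matters, for DMA_MEM_TO_DEV only dst_addr does, exactly as the
fixed comment now states.

#include <linux/dmaengine.h>
#include <linux/string.h>

#define EXAMPLE_RX_FIFO	0x40001000	/* hypothetical device RX FIFO */
#define EXAMPLE_TX_FIFO	0x40001004	/* hypothetical device TX FIFO */

static int example_cfg_channels(struct dma_chan *rx, struct dma_chan *tx)
{
	struct dma_slave_config cfg = { };
	int ret;

	/* Device -> memory: only the source (device) address is used,
	 * dst_addr (the memory side) is ignored.
	 */
	cfg.direction = DMA_DEV_TO_MEM;
	cfg.src_addr = EXAMPLE_RX_FIFO;
	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	ret = dmaengine_slave_config(rx, &cfg);
	if (ret)
		return ret;

	/* Memory -> device: only the destination (device) address is used,
	 * src_addr (the memory side) is ignored.
	 */
	memset(&cfg, 0, sizeof(cfg));
	cfg.direction = DMA_MEM_TO_DEV;
	cfg.dst_addr = EXAMPLE_TX_FIFO;
	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;

	return dmaengine_slave_config(tx, &cfg);
}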

From patchwork Tue May 3 22:50:41 2022
From: Serge Semin
Subject: [PATCH v2 03/26] dmaengine: dw-edma: Release requested IRQs on failure
Date: Wed, 4 May 2022 01:50:41 +0300
Message-ID: <20220503225104.12108-4-Sergey.Semin@baikalelectronics.ru>

From the very beginning of the DW eDMA driver's life in the kernel, the
dw_edma_irq_request() method hasn't been designed quite correctly: if
request_irq() fails to initialize an IRQ handler at some point, the
previously requested IRQs are left set up.
That is prone to errors, up to a system crash. Fix it by releasing the
previously requested IRQs in the cleanup-on-error path of the
dw_edma_irq_request() function.

Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
---
Changelog v2:
- This is a new patch added in v2 iteration of the series.
---
 drivers/dma/dw-edma/dw-edma-core.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 07f756479663..04efcb16d13d 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -899,10 +899,8 @@ static int dw_edma_irq_request(struct dw_edma *dw,
 						  dw_edma_interrupt_read,
 						  IRQF_SHARED, dw->name,
 						  &dw->irq[i]);
-			if (err) {
-				dw->nr_irqs = i;
-				return err;
-			}
+			if (err)
+				goto err_irq_free;
 
 			if (irq_get_msi_desc(irq))
 				get_cached_msi_msg(irq, &dw->irq[i].msi);
@@ -911,6 +909,14 @@ static int dw_edma_irq_request(struct dw_edma *dw,
 		dw->nr_irqs = i;
 	}
 
+	return 0;
+
+err_irq_free:
+	for (i--; i >= 0; i--) {
+		irq = chip->ops->irq_vector(dev, i);
+		free_irq(irq, &dw->irq[i]);
+	}
+
 	return err;
 }

From patchwork Tue May 3 22:50:42 2022
From: Serge Semin
Subject: [PATCH v2 04/26] dmaengine: dw-edma: Convert ll/dt phys-address to PCIe bus/DMA address
Date: Wed, 4 May 2022 01:50:42 +0300
Message-ID: <20220503225104.12108-5-Sergey.Semin@baikalelectronics.ru>

In accordance with the dw_edma_region.paddr field semantics, it is
supposed to be initialized with a memory base address visible to the DW
eDMA controller. If the DMA engine is embedded in the DW PCIe Host/EP
controller, the address should belong to the local CPU/application
memory. If eDMA is remotely accessible across the PCIe bus via PCIe
memory IOs, the address needs to be part of the PCIe bus memory space.
The latter case hasn't been well covered by the corresponding
glue-driver. Since in general the PCIe memory space doesn't have to
match the CPU memory space, and the pci_dev.resource[] array contains
resources defined in the CPU memory space, a proper conversion needs to
be performed, otherwise either the driver won't work properly or, much
worse, memory corruption will happen. The conversion can be done by
means of the pci_bus_address() method. Use it to retrieve the LL, DT and
CSR PCIe memory ranges.

Note that in addition to that the dw_edma_region.paddr field size needs
to be extended. The field normally contains a memory range base address
to be set in the DW eDMA Linked-List pointer register or used as a base
address of the Linked-List data buffer. In accordance with [1] the LL
range is supposed to be created in the local CPU/application memory, but
depending on the DW eDMA utilization the memory can be created as part
of the PCIe bus address space (as in the case of the DW PCIe EP
prototype kit). Thus in the former case the dw_edma_region.paddr field
should have the dma_addr_t type, while in the latter one pci_bus_addr_t.
Seeing that the corresponding CSRs are always 64 bits wide, convert the
dw_edma_region.paddr field type to u64 and let the client code make sure
it holds a valid address visible to the DW eDMA controller. For
instance, the DW eDMA PCIe glue-driver initializes the field with
addresses from the PCIe bus memory space.
[1] DesignWare Cores PCI Express Controller Databook - DWC PCIe Root Port,
    v.5.40a, March 2019, p.1103

Fixes: 41aaff2a2ac0 ("dmaengine: Add Synopsys eDMA IP PCIe glue-logic")
Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
---
 drivers/dma/dw-edma/dw-edma-pcie.c | 8 ++++----
 include/linux/dma/edma.h           | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c
index d6b5e2463884..04c95cba1244 100644
--- a/drivers/dma/dw-edma/dw-edma-pcie.c
+++ b/drivers/dma/dw-edma/dw-edma-pcie.c
@@ -231,7 +231,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
 			return -ENOMEM;
 
 		ll_region->vaddr += ll_block->off;
-		ll_region->paddr = pdev->resource[ll_block->bar].start;
+		ll_region->paddr = pci_bus_address(pdev, ll_block->bar);
 		ll_region->paddr += ll_block->off;
 		ll_region->sz = ll_block->sz;
 
@@ -240,7 +240,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
 			return -ENOMEM;
 
 		dt_region->vaddr += dt_block->off;
-		dt_region->paddr = pdev->resource[dt_block->bar].start;
+		dt_region->paddr = pci_bus_address(pdev, dt_block->bar);
 		dt_region->paddr += dt_block->off;
 		dt_region->sz = dt_block->sz;
 	}
@@ -256,7 +256,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
 			return -ENOMEM;
 
 		ll_region->vaddr += ll_block->off;
-		ll_region->paddr = pdev->resource[ll_block->bar].start;
+		ll_region->paddr = pci_bus_address(pdev, ll_block->bar);
 		ll_region->paddr += ll_block->off;
 		ll_region->sz = ll_block->sz;
 
@@ -265,7 +265,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
 			return -ENOMEM;
 
 		dt_region->vaddr += dt_block->off;
-		dt_region->paddr = pdev->resource[dt_block->bar].start;
+		dt_region->paddr = pci_bus_address(pdev, dt_block->bar);
 		dt_region->paddr += dt_block->off;
 		dt_region->sz = dt_block->sz;
 	}
diff --git a/include/linux/dma/edma.h b/include/linux/dma/edma.h
index 8b134896c9ed..5ec7cf2f14fc 100644
--- a/include/linux/dma/edma.h
+++ b/include/linux/dma/edma.h
@@ -18,7 +18,7 @@
 struct dw_edma;
 
 struct dw_edma_region {
-	phys_addr_t paddr;
+	u64 paddr;
 	void __iomem *vaddr;
 	size_t sz;
 };
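
As a side note, the distinction the patch relies on can be illustrated with a
small, hypothetical probe-time sketch (the BAR number and function name are
illustrative only): pci_resource_start() returns the CPU-visible address of a
BAR, whereas pci_bus_address() returns where that BAR lives on the PCIe bus;
the two only coincide when the host bridge applies no address translation.

#include <linux/pci.h>

/* Hypothetical helper: report both views of a BAR so the difference
 * between the CPU and PCIe bus address spaces is visible in the log.
 */
static void example_show_bar_addresses(struct pci_dev *pdev, int bar)
{
	resource_size_t cpu_addr = pci_resource_start(pdev, bar);
	pci_bus_addr_t bus_addr = pci_bus_address(pdev, bar);

	/* On most x86 systems these match; behind a translating host
	 * bridge (ranges/dma-ranges offsets) they differ, and only the
	 * bus address may be programmed into the eDMA LL/DT registers.
	 */
	pci_info(pdev, "BAR%d: CPU %pa, PCIe bus %#llx\n",
		 bar, &cpu_addr, (unsigned long long)bus_addr);
}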

From patchwork Tue May 3 22:50:43 2022
From: Serge Semin
Subject: [PATCH v2 05/26] dmaengine: dw-edma: Fix missing src/dst address of the interleaved xfers
Date: Wed, 4 May 2022 01:50:43 +0300
Message-ID: <20220503225104.12108-6-Sergey.Semin@baikalelectronics.ru>

Support for interleaved DMA transfers was added in commit 85e7518f42c8
("dmaengine: dw-edma: Add device_prep_interleave_dma() support"), but it
seems to have been broken from the very beginning: depending on the
selected channel, either the source or the destination address is left
uninitialized, which is obviously wrong. It is unclear how the original
modification ever worked for the commit author. Fix it by initializing
the destination address of the eDMA burst descriptors for DEV_TO_MEM
interleaved operations and the source address for MEM_TO_DEV interleaved
operations.
Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
---
 drivers/dma/dw-edma/dw-edma-core.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 04efcb16d13d..f0ef87d75ea9 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -456,6 +456,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 				 * and destination addresses are increased
 				 * by the same portion (data length)
 				 */
+			} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
+				burst->dar = dst_addr;
 			}
 		} else {
 			burst->dar = dst_addr;
@@ -471,6 +473,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 				 * and destination addresses are increased
 				 * by the same portion (data length)
 				 */
+			} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
+				burst->sar = src_addr;
 			}
 		}

From patchwork Tue May 3 22:50:44 2022
From: Serge Semin
Subject: [PATCH v2 06/26] dmaengine: dw-edma: Don't permit non-inc interleaved xfers
Date: Wed, 4 May 2022 01:50:44 +0300
Message-ID: <20220503225104.12108-7-Sergey.Semin@baikalelectronics.ru>

The DW eDMA controller always increments both the source and destination
addresses. Permitting interleaved DMA transfers with the src_inc/dst_inc
flags unset may therefore lead to unexpected behaviour for the device
users. Fix that by rejecting interleaved transfers if at least one of
the dma_interleaved_template.{src_inc,dst_inc} flags is false. Note that
in addition to that the source and destination addresses need to be
advanced accordingly after each iteration.

Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
---
 drivers/dma/dw-edma/dw-edma-core.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index f0ef87d75ea9..225eab58acb7 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -386,6 +386,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 			return NULL;
 		if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0)
 			return NULL;
+		if (!xfer->xfer.il->src_inc || !xfer->xfer.il->dst_inc)
+			return NULL;
 	} else {
 		return NULL;
 	}
@@ -485,15 +487,13 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 			struct dma_interleaved_template *il = xfer->xfer.il;
 			struct data_chunk *dc = &il->sgl[i];
 
-			if (il->src_sgl) {
-				src_addr += burst->sz;
+			src_addr += burst->sz;
+			if (il->src_sgl)
 				src_addr += dmaengine_get_src_icg(il, dc);
-			}
 
-			if (il->dst_sgl) {
-				dst_addr += burst->sz;
+			dst_addr += burst->sz;
+			if (il->dst_sgl)
 				dst_addr += dmaengine_get_dst_icg(il, dc);
-			}
 		}
 	}

From patchwork Tue May 3 22:50:45 2022
From: Serge Semin
Subject: [PATCH v2 07/26] dmaengine: dw-edma: Fix invalid interleaved xfers semantics
Date: Wed, 4 May 2022 01:50:45 +0300
Message-ID: <20220503225104.12108-8-Sergey.Semin@baikalelectronics.ru>

The interleaved DMA transfer support added in commit 85e7518f42c8
("dmaengine: dw-edma: Add device_prep_interleave_dma() support") seems
to contradict what the DMA engine defines. The following conditional
statements:

  if (!xfer->xfer.il->numf)
          return NULL;
  if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0)
          return NULL;

basically mean that numf can't be zero and frame_size must always be
zero, otherwise the transfer won't be executed. But further down the
transfer execution method takes the frame size from the
dma_interleaved_template.sgl[] array for each frame. In accordance with
[1] that array is supposed to be of dma_interleaved_template.frame_size
length, which, as noted above, the code expects to be zero. So judging
by the dw_edma_device_transfer() implementation, the method implies the
dma_interleaved_template.sgl[] array being of
dma_interleaved_template.numf length, which is wrong. Since
dw_edma_device_transfer() doesn't permit
dma_interleaved_template.frame_size to be non-zero, an actual
multi-chunk interleaved transfer turns out to be unsupported even though
the code implies having it supported.

Fix that by adding fully functional support for interleaved DMA
transfers. First of all, dma_interleaved_template.frame_size is supposed
to be greater than or equal to one, thus describing at least simple
linearly chunked frames.
Secondly, we can walk through all the chunks and frames just by
initializing the number of eDMA burst transactions as the product of
dma_interleaved_template.numf and dma_interleaved_template.frame_size
and using the frame_size-modulo of the iteration step as an index into
the dma_interleaved_template.sgl[] array. The rest of the
dw_edma_device_transfer() method can be left unchanged.

[1] include/linux/dmaengine.h: doc struct dma_interleaved_template

Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
---
 drivers/dma/dw-edma/dw-edma-core.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 225eab58acb7..ef49deb5a7f3 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -333,6 +333,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 	struct dw_edma_chunk *chunk;
 	struct dw_edma_burst *burst;
 	struct dw_edma_desc *desc;
+	size_t fsz = 0;
 	u32 cnt = 0;
 	int i;
 
@@ -382,9 +383,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 		if (xfer->xfer.sg.len < 1)
 			return NULL;
 	} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
-		if (!xfer->xfer.il->numf)
-			return NULL;
-		if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0)
+		if (!xfer->xfer.il->numf || xfer->xfer.il->frame_size < 1)
 			return NULL;
 		if (!xfer->xfer.il->src_inc || !xfer->xfer.il->dst_inc)
 			return NULL;
@@ -414,10 +413,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 		cnt = xfer->xfer.sg.len;
 		sg = xfer->xfer.sg.sgl;
 	} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
-		if (xfer->xfer.il->numf > 0)
-			cnt = xfer->xfer.il->numf;
-		else
-			cnt = xfer->xfer.il->frame_size;
+		cnt = xfer->xfer.il->numf * xfer->xfer.il->frame_size;
+		fsz = xfer->xfer.il->frame_size;
 	}
 
 	for (i = 0; i < cnt; i++) {
@@ -439,7 +436,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 		else if (xfer->type == EDMA_XFER_SCATTER_GATHER)
 			burst->sz = sg_dma_len(sg);
 		else if (xfer->type == EDMA_XFER_INTERLEAVED)
-			burst->sz = xfer->xfer.il->sgl[i].size;
+			burst->sz = xfer->xfer.il->sgl[i % fsz].size;
 
 		chunk->ll_region.sz += burst->sz;
 		desc->alloc_sz += burst->sz;
@@ -482,10 +479,9 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 
 		if (xfer->type == EDMA_XFER_SCATTER_GATHER) {
 			sg = sg_next(sg);
-		} else if (xfer->type == EDMA_XFER_INTERLEAVED &&
-			   xfer->xfer.il->frame_size > 0) {
+		} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
 			struct dma_interleaved_template *il = xfer->xfer.il;
-			struct data_chunk *dc = &il->sgl[i];
+			struct data_chunk *dc = &il->sgl[i % fsz];
 
 			src_addr += burst->sz;
 			if (il->src_sgl)
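
For reference, here is a minimal, hypothetical example of how a client would
describe such a transfer after this change; the channel handle, addresses and
sizes are placeholders. It submits numf frames of frame_size chunks described
by sgl[], with both src_inc and dst_inc set as patch 06/26 now requires.

#include <linux/dmaengine.h>
#include <linux/overflow.h>
#include <linux/slab.h>

/* Submit a 4-frame transfer, each frame made of two chunks, following
 * the numf x frame_size semantics the driver now implements.
 */
static int example_submit_interleaved(struct dma_chan *chan,
				      dma_addr_t src, dma_addr_t dst)
{
	struct dma_interleaved_template *xt;
	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;

	xt = kzalloc(struct_size(xt, sgl, 2), GFP_KERNEL);
	if (!xt)
		return -ENOMEM;

	xt->src_start = src;
	xt->dst_start = dst;
	xt->dir = DMA_MEM_TO_DEV;
	xt->src_inc = true;		/* required: eDMA always increments */
	xt->dst_inc = true;
	xt->src_sgl = true;		/* apply the chunk ICGs on the source side */
	xt->dst_sgl = false;
	xt->numf = 4;			/* four frames ... */
	xt->frame_size = 2;		/* ... of two chunks each */
	xt->sgl[0].size = 64;
	xt->sgl[0].icg = 192;		/* skip 192 bytes between chunks */
	xt->sgl[1].size = 128;
	xt->sgl[1].icg = 0;

	desc = dmaengine_prep_interleaved_dma(chan, xt, DMA_PREP_INTERRUPT);
	kfree(xt);
	if (!desc)
		return -EIO;

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);

	return 0;
}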

From patchwork Tue May 3 22:50:46 2022
From: Serge Semin
Subject: [PATCH v2 08/26] dmaengine: dw-edma: Add CPU to PCIe bus address translation
Date: Wed, 4 May 2022 01:50:46 +0300
Message-ID: <20220503225104.12108-9-Sergey.Semin@baikalelectronics.ru>

Starting from commit 9575632052ba ("dmaengine: make slave address
physical"), the source and destination addresses of a DMA slave device
are defined in the CPU address space. It's the DMA-device driver's
responsibility to properly convert them to the reachable DMA bus spaces.
In the case of the DW eDMA device, the source or destination peripheral
(slave) devices reside in the PCIe bus space. Thus we need to perform
the PCIe Host/EP windows-based (i.e. "ranges" DT-property) address
translation, otherwise the eDMA transactions won't work as expected (or
can even be harmful) if the CPU and PCIe address spaces don't match.

Note 1. Even though the DMA interleaved template has both source and
destination addresses declared as dma_addr_t, only the CPU memory range
is supposed to be mapped so as to be seen by the DMA device, since it is
the side of the transfer heading towards the system memory. The device
part must not be mapped since the slave device resides in the PCIe bus
space, which isn't affected by IOMMU or iATU translations.
The DW PCIe eDMA generates the corresponding MWr/MRd TLPs on its own.

Note 2. This functionality is mainly required for the remote eDMA setup,
since there the CPU address must be manually translated into the PCIe
bus space before being written to LLI.{SAR,DAR}. If eDMA is embedded in
a locally accessible DW PCIe RP/EP, software-based translation isn't
required since it will be done in hardware by means of the Outbound
iATU, as long as the DMA_BYPASS flag is cleared. If the latter flag is
set, or no Outbound iATU entry is found into which the SAR or DAR falls
(for the Read and Write channels respectively), no translation is
performed and the DMA proceeds with the corresponding source/destination
address as is.

Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
---
 drivers/dma/dw-edma/dw-edma-core.c | 18 +++++++++++++++++-
 include/linux/dma/edma.h           | 15 +++++++++++++++
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index ef49deb5a7f3..be466b781376 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -40,6 +40,17 @@ struct dw_edma_desc *vd2dw_edma_desc(struct virt_dma_desc *vd)
 	return container_of(vd, struct dw_edma_desc, vd);
 }
 
+static inline
+u64 dw_edma_get_pci_address(struct dw_edma_chan *chan, phys_addr_t cpu_addr)
+{
+	struct dw_edma_chip *chip = chan->dw->chip;
+
+	if (chip->ops->pci_address)
+		return chip->ops->pci_address(chip->dev, cpu_addr);
+
+	return cpu_addr;
+}
+
 static struct dw_edma_burst *dw_edma_alloc_burst(struct dw_edma_chunk *chunk)
 {
 	struct dw_edma_burst *burst;
@@ -328,11 +339,11 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 {
 	struct dw_edma_chan *chan = dchan2dw_edma_chan(xfer->dchan);
 	enum dma_transfer_direction dir = xfer->direction;
-	phys_addr_t src_addr, dst_addr;
 	struct scatterlist *sg = NULL;
 	struct dw_edma_chunk *chunk;
 	struct dw_edma_burst *burst;
 	struct dw_edma_desc *desc;
+	u64 src_addr, dst_addr;
 	size_t fsz = 0;
 	u32 cnt = 0;
 	int i;
@@ -407,6 +418,11 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
 		dst_addr = chan->config.dst_addr;
 	}
 
+	if (dir == DMA_DEV_TO_MEM)
+		src_addr = dw_edma_get_pci_address(chan, (phys_addr_t)src_addr);
+	else
+		dst_addr = dw_edma_get_pci_address(chan, (phys_addr_t)dst_addr);
+
 	if (xfer->type == EDMA_XFER_CYCLIC) {
 		cnt = xfer->xfer.cyclic.cnt;
 	} else if (xfer->type == EDMA_XFER_SCATTER_GATHER) {
diff --git a/include/linux/dma/edma.h b/include/linux/dma/edma.h
index 5ec7cf2f14fc..391db06b74b7 100644
--- a/include/linux/dma/edma.h
+++ b/include/linux/dma/edma.h
@@ -23,8 +23,23 @@ struct dw_edma_region {
 	size_t sz;
 };
 
+/**
+ * struct dw_edma_core_ops - platform-specific eDMA methods
+ * @irq_vector:		Get IRQ number of the passed eDMA channel. Note the
+ *			method accepts the channel id in the end-to-end
+ *			numbering with the eDMA write channels being placed
+ *			first in the row.
+ * @pci_address:	Get PCIe bus address corresponding to the passed CPU
+ *			address. Note there is no need in specifying this
+ *			function if the address translation is performed by
+ *			the DW PCIe RP/EP controller with the DW eDMA device in
+ *			subject and DMA_BYPASS isn't set for all the outbound
+ *			iATU windows. That will be done by the controller
+ *			automatically.
+ */
 struct dw_edma_core_ops {
 	int (*irq_vector)(struct device *dev, unsigned int nr);
+	u64 (*pci_address)(struct device *dev, phys_addr_t cpu_addr);
 };
 
 enum dw_edma_map_format {
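
A minimal sketch of how a hypothetical platform glue-driver with a fixed
CPU-to-PCIe offset might hook into the new callback; the offset value, the
function names and the probe context are assumptions, only struct
dw_edma_core_ops and its two callbacks come from the patch itself.

#include <linux/device.h>
#include <linux/dma/edma.h>

/* Hypothetical platform: PCIe bus address = CPU address - 0x20000000,
 * e.g. as it would be described by a "ranges"/"dma-ranges" offset.
 */
#define EXAMPLE_CPU_TO_PCI_OFFSET	0x20000000ULL

static u64 example_edma_pci_address(struct device *dev, phys_addr_t cpu_addr)
{
	return (u64)cpu_addr - EXAMPLE_CPU_TO_PCI_OFFSET;
}

static int example_edma_irq_vector(struct device *dev, unsigned int nr)
{
	/* Return the Linux IRQ number backing eDMA channel 'nr' here. */
	return -EINVAL;
}

static const struct dw_edma_core_ops example_edma_core_ops = {
	.irq_vector = example_edma_irq_vector,
	.pci_address = example_edma_pci_address,
};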

From patchwork Tue May 3 22:50:47 2022
From: Serge Semin
Subject: [PATCH v2 09/26] dmaengine: dw-edma: Add PCIe bus address getter to the remote EP glue-driver
Date: Wed, 4 May 2022 01:50:47 +0300
Message-ID: <20220503225104.12108-10-Sergey.Semin@baikalelectronics.ru>

In general the Synopsys PCIe EndPoint IP prototype kit can be attached
to a PCIe bus
with any PCIe Host controller, including one whose address space is
distinct from the CPU address space. Due to that we need to make sure
the source and destination addresses of the DMA-slave devices are
properly converted to the PCIe bus address space, otherwise the DMA
transaction will not work as expected and may even cause memory
corruption with a subsequent system crash. Do that by introducing a new
dw_edma_pcie_address() method in dw-edma-pcie.c, which performs the
denoted translation by means of pcibios_resource_to_bus().

Fixes: 41aaff2a2ac0 ("dmaengine: Add Synopsys eDMA IP PCIe glue-logic")
Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
---
Note this patch depends on the patch "dmaengine: dw-edma: Add CPU to
PCIe bus address translation" from this series.
---
 drivers/dma/dw-edma/dw-edma-pcie.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c
index 04c95cba1244..f530bacfd716 100644
--- a/drivers/dma/dw-edma/dw-edma-pcie.c
+++ b/drivers/dma/dw-edma/dw-edma-pcie.c
@@ -95,8 +95,23 @@ static int dw_edma_pcie_irq_vector(struct device *dev, unsigned int nr)
 	return pci_irq_vector(to_pci_dev(dev), nr);
 }
 
+static u64 dw_edma_pcie_address(struct device *dev, phys_addr_t cpu_addr)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct pci_bus_region region;
+	struct resource res = {
+		.flags = IORESOURCE_MEM,
+		.start = cpu_addr,
+		.end = cpu_addr,
+	};
+
+	pcibios_resource_to_bus(pdev->bus, &region, &res);
+	return region.start;
+}
+
 static const struct dw_edma_core_ops dw_edma_pcie_core_ops = {
 	.irq_vector = dw_edma_pcie_irq_vector,
+	.pci_address = dw_edma_pcie_address,
 };
 
 static void dw_edma_pcie_get_vsec_dma_data(struct pci_dev *pdev,

From patchwork Tue May 3 22:50:48 2022
From: Serge Semin
Subject: [PATCH v2 10/26] dmaengine: dw-edma: Drop chancnt initialization
Date: Wed, 4 May 2022 01:50:48 +0300
Message-ID: <20220503225104.12108-11-Sergey.Semin@baikalelectronics.ru>

DMA device drivers aren't supposed to initialize the dma_device.chancnt
field; it is set by the DMA-engine core in accordance with the number of
added virtual DMA channels. Pre-initializing it with some value causes a
wrong number of channels to be printed in the device summary.
Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
Signed-off-by: Serge Semin
Reviewed-by: Manivannan Sadhasivam
---
 drivers/dma/dw-edma/dw-edma-core.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index be466b781376..e9cb3056b6b7 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -823,7 +823,6 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
 	dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
 	dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
 	dma->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
-	dma->chancnt = cnt;
 
 	/* Set DMA channel callbacks */
 	dma->dev = chip->dev;

From patchwork Tue May 3 22:50:49 2022
From: Serge Semin
Subject: [PATCH v2 11/26] dmaengine: dw-edma: Fix DebugFS reg entry type
Date: Wed, 4 May 2022 01:50:49 +0300
Message-ID: <20220503225104.12108-12-Sergey.Semin@baikalelectronics.ru>
<20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The debugfs_entries structure declared in the dw-edma-v0-debugfs.c module contains the DebugFS node's register address. The address is declared as the dma_addr_t type, but it's first assigned a virtual CPU IOMEM address and then cast back to a virtual address. Even though that cast sandwich is unlikely to cause any problem, since a DMA address is normally at least the same size as a CPU virtual address, it's at the very least redundant, if not logically incorrect. Let's fix it by no longer casting the pointer back and forth and instead preserving the address as a pointer to void with the __iomem qualifier. Fixes: 305aebeff879 ("dmaengine: Add Synopsys eDMA IP version 0 debugfs support") Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 5226c9014703..8e61810dea4b 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -14,7 +14,7 @@ #include "dw-edma-core.h" #define REGS_ADDR(name) \ - ((void __force *)&regs->name) + ((void __iomem *)&regs->name) #define REGISTER(name) \ { #name, REGS_ADDR(name) } @@ -48,12 +48,13 @@ static struct { struct debugfs_entries { const char *name; - dma_addr_t *reg; + void __iomem *reg; }; static int dw_edma_debugfs_u32_get(void *data, u64 *val) { - void __iomem *reg = (void __force __iomem *)data; + void __iomem *reg = data; + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && reg >= (void __iomem *)&regs->type.legacy.ch) { void __iomem *ptr = &regs->type.legacy.ch; From patchwork Tue May 3 22:50:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626016 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=XFuY/ynA; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFZg4wzPz9sG2 for ; Wed, 4 May 2022 08:51:55 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243956AbiECWz0 (ORCPT ); Tue, 3 May 2022 18:55:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40436 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244021AbiECWzV (ORCPT ); Tue, 3 May 2022 18:55:21 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP
id 7B84142A1B; Tue, 3 May 2022 15:51:25 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id AE98DC00; Wed, 4 May 2022 01:51:58 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru AE98DC00 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618318; bh=v2tYJrYiAEbZtvDvSvss7D+P15mGrAxcXJq7ntJIjc0=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=XFuY/ynA9ih96fFpK11sgyGOo+RKyt9pNkmPM3HBy6Kq5DXTbVZbgpuZku2fF3+wE sTC+pxrWJmKNxM8EOnI6hgvZz29SIUjwvBmkMR1Dk7pAtybPzHkqgfwm727utTPNdV bPnHY/x7XZwRTAeP197JMMtQK7IWmvrybzO8HtAA= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:24 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 12/26] dmaengine: dw-edma: Stop checking debugfs_create_*() return value Date: Wed, 4 May 2022 01:50:50 +0300 Message-ID: <20220503225104.12108-13-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org First of all, these methods never return NULL, so checking their return value for being non-NULL is pointless. Secondly, the DebugFS subsystem is designed to be as simple to use as possible: if one of the debugfs_create_*() methods in a hierarchy fails, the following methods will just silently return the passed erroneous parental dentry. Finally, the code is supposed to keep working no matter whether anything DebugFS-related fails. So in order to make the code simpler and DebugFS-independent, let's drop the debugfs_create_*() return value checks in the same way as most of the kernel drivers do. Note that in order to save some memory we suggest skipping the DebugFS nodes initialization altogether if the file system is unavailable.
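As a rough sketch of the pattern this patch switches to (the "demo" names are made up for illustration), DebugFS setup can ignore the returned dentries entirely; on failure debugfs_create_dir() hands back an error dentry, which simply turns the follow-up calls into no-ops:

#include <linux/debugfs.h>

static u32 demo_val;	/* made-up value exported for illustration */

static void demo_debugfs_init(void)
{
	struct dentry *dir;

	/* Optional: skip the nodes (and save their memory) if DebugFS is
	 * not available at all.
	 */
	if (!debugfs_initialized())
		return;

	/* No return-value checks: on failure an error dentry is returned
	 * and the follow-up calls silently become no-ops.
	 */
	dir = debugfs_create_dir("demo", NULL);
	debugfs_create_u32("val", 0444, dir, &demo_val);
}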
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 20 +++++--------------- 1 file changed, 5 insertions(+), 15 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 8e61810dea4b..6e7f3ef60ca7 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -100,9 +100,8 @@ static void dw_edma_debugfs_create_x32(const struct debugfs_entries entries[], int i; for (i = 0; i < nr_entries; i++) { - if (!debugfs_create_file_unsafe(entries[i].name, 0444, dir, - entries[i].reg, &fops_x32)) - break; + debugfs_create_file_unsafe(entries[i].name, 0444, dir, + entries[i].reg, &fops_x32); } } @@ -168,8 +167,6 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) char name[16]; regs_dir = debugfs_create_dir(WRITE_STR, dir); - if (!regs_dir) - return; nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); @@ -184,8 +181,6 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); ch_dir = debugfs_create_dir(name, regs_dir); - if (!ch_dir) - return; dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].wr, ch_dir); @@ -237,8 +232,6 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) char name[16]; regs_dir = debugfs_create_dir(READ_STR, dir); - if (!regs_dir) - return; nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); @@ -253,8 +246,6 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); ch_dir = debugfs_create_dir(name, regs_dir); - if (!ch_dir) - return; dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].rd, ch_dir); @@ -273,8 +264,6 @@ static void dw_edma_debugfs_regs(void) int nr_entries; regs_dir = debugfs_create_dir(REGISTERS_STR, dw->debugfs); - if (!regs_dir) - return; nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); @@ -285,6 +274,9 @@ static void dw_edma_debugfs_regs(void) void dw_edma_v0_debugfs_on(struct dw_edma *_dw) { + if (!debugfs_initialized()) + return; + dw = _dw; if (!dw) return; @@ -294,8 +286,6 @@ void dw_edma_v0_debugfs_on(struct dw_edma *_dw) return; dw->debugfs = debugfs_create_dir(dw->name, NULL); - if (!dw->debugfs) - return; debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf); debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt); From patchwork Tue May 3 22:50:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626018 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=HMm9V0Gt; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFZj00vmz9sG2 for ; Wed, 4 May 2022 08:51:56 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244029AbiECWz2 (ORCPT ); Tue, 3 May 2022 18:55:28 
-0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40050 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244024AbiECWzV (ORCPT ); Tue, 3 May 2022 18:55:21 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id A8C0C42EC0; Tue, 3 May 2022 15:51:26 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id A2D58C01; Wed, 4 May 2022 01:51:59 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru A2D58C01 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618319; bh=kZ8SwbBbwC1Q2n/JREaBZgLDz4WqvYav6xI/zwJ1uTQ=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=HMm9V0Gt2W6rbe1sy2xnwnEKdLlUxrIJy0d6W9m7wsHCvMmnlsmD9sWDgMWFjenmV K4pSgWmPLvRs+qht3ZXzyII98kfeF8gTt1o4nadz6B+9XZ3mSG69x/MBk80PRH5KMz pTYyhm1ORlNz4hbg/5615/wNTqjXrFoedIHKH+6g= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:25 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 13/26] dmaengine: dw-edma: Add dw_edma prefix to the DebugFS nodes descriptor Date: Wed, 4 May 2022 01:50:51 +0300 Message-ID: <20220503225104.12108-14-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The rest of the locally defined and used methods and structures have the dw_edma prefix in their names. Following the locally established naming rule is in line with the kernel coding style. Let's add that prefix to the debugfs_entries structure too, especially seeing that its current name may be confusing, as if the structure belonged to the global DebugFS space.
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 6e7f3ef60ca7..2121ffc33cf3 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -46,7 +46,7 @@ static struct { void __iomem *end; } lim[2][EDMA_V0_MAX_NR_CH]; -struct debugfs_entries { +struct dw_edma_debugfs_entry { const char *name; void __iomem *reg; }; @@ -94,7 +94,7 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) } DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); -static void dw_edma_debugfs_create_x32(const struct debugfs_entries entries[], +static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry entries[], int nr_entries, struct dentry *dir) { int i; @@ -108,8 +108,7 @@ static void dw_edma_debugfs_create_x32(const struct debugfs_entries entries[], static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, struct dentry *dir) { - int nr_entries; - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { REGISTER(ch_control1), REGISTER(ch_control2), REGISTER(transfer_size), @@ -120,6 +119,7 @@ static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, REGISTER(llp.lsb), REGISTER(llp.msb), }; + int nr_entries; nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dir); @@ -127,7 +127,7 @@ static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, static void dw_edma_debugfs_regs_wr(struct dentry *dir) { - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ WR_REGISTER(engine_en), WR_REGISTER(doorbell), @@ -148,7 +148,7 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) WR_REGISTER(ch67_imwr_data), WR_REGISTER(linked_list_err_en), }; - const struct debugfs_entries debugfs_unroll_regs[] = { + const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ WR_REGISTER_UNROLL(engine_chgroup), WR_REGISTER_UNROLL(engine_hshake_cnt.lsb), @@ -191,7 +191,7 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) static void dw_edma_debugfs_regs_rd(struct dentry *dir) { - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ RD_REGISTER(engine_en), RD_REGISTER(doorbell), @@ -213,7 +213,7 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) RD_REGISTER(ch45_imwr_data), RD_REGISTER(ch67_imwr_data), }; - const struct debugfs_entries debugfs_unroll_regs[] = { + const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ RD_REGISTER_UNROLL(engine_chgroup), RD_REGISTER_UNROLL(engine_hshake_cnt.lsb), @@ -256,7 +256,7 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) static void dw_edma_debugfs_regs(void) { - const struct debugfs_entries debugfs_regs[] = { + const struct dw_edma_debugfs_entry debugfs_regs[] = { REGISTER(ctrl_data_arb_prior), REGISTER(ctrl), }; From patchwork Tue May 3 22:50:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626019 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: 
patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=B9/WD50r; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFZk4N3Tz9sG2 for ; Wed, 4 May 2022 08:51:58 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244035AbiECWz3 (ORCPT ); Tue, 3 May 2022 18:55:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40440 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244023AbiECWzV (ORCPT ); Tue, 3 May 2022 18:55:21 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id A868542A31; Tue, 3 May 2022 15:51:27 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 94932C02; Wed, 4 May 2022 01:52:00 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 94932C02 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618320; bh=coZ4Tk8Je9NrW0abYtYTSxv57iRG+Q7tf8o0Dvrehag=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=B9/WD50rUO7iWvdrZ8MoL2V49cDmILYLZbXRXfUPNeZet2VmFzXsxjjjcnI2lDnLz TIaXeqEtcyK7O6xhUpSUym9zZNQDJ7wnvaSBU7zCQnGlec1j4exodkFi4zXsLUUZco V3VFisTquc7kUljUHSZZ5UVxF5QQbS0xfDSOZ+hk= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:26 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 14/26] dmaengine: dw-edma: Convert DebugFS descs to being kz-allocated Date: Wed, 4 May 2022 01:50:52 +0300 Message-ID: <20220503225104.12108-15-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Currently all the DW eDMA DebugFS node descriptors are allocated on the stack, while the DW eDMA driver private data and the CSR limits are statically preserved. Such a design won't work for multi-eDMA platforms. In preparation for adding support for multi-eDMA system setups we need to have each DebugFS node separately allocated and described. Afterwards we'll put additional info there, like the Read/Write channel flag, the channel ID and the DW eDMA private data reference.
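A minimal sketch of the allocation pattern being introduced is given below (the "demo" names and the trimmed-down entry layout are illustrative assumptions, not the driver's actual definitions). Heap-allocating the descriptors with devm_kcalloc() ties them to the device's lifetime, so a pointer to each entry can safely be handed to DebugFS as the node's private data, which an on-stack template array would not allow:

#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/slab.h>

struct demo_debugfs_entry {
	const char	*name;
	void __iomem	*reg;
};

static void demo_create_nodes(struct device *dev, struct dentry *dent,
			      const struct demo_debugfs_entry ini[], int n,
			      const struct file_operations *fops)
{
	struct demo_debugfs_entry *entries;
	int i;

	/* devm-allocated, so &entries[i] stays valid for as long as the
	 * DebugFS file may be read, unlike an on-stack template array.
	 */
	entries = devm_kcalloc(dev, n, sizeof(*entries), GFP_KERNEL);
	if (!entries)
		return;

	for (i = 0; i < n; i++) {
		entries[i] = ini[i];
		debugfs_create_file_unsafe(entries[i].name, 0444, dent,
					   &entries[i], fops);
	}
}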
Note this conversion is mainly required due to having the legacy DW eDMA controllers with indirect Read/Write channels context CSRs access. If we didn't need to have a synchronized access to these registers the DebugFS code of the driver would have been much simpler. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- Changelog v2: - Drop __iomem qualifier from the struct dw_edma_debugfs_entry instance definition in the dw_edma_debugfs_u32_get() method. (@Manivannan) --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 2121ffc33cf3..78f15e4b07ac 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -53,7 +53,8 @@ struct dw_edma_debugfs_entry { static int dw_edma_debugfs_u32_get(void *data, u64 *val) { - void __iomem *reg = data; + struct dw_edma_debugfs_entry *entry = data; + void __iomem *reg = entry->reg; if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && reg >= (void __iomem *)®s->type.legacy.ch) { @@ -94,14 +95,22 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) } DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); -static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry entries[], +static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], int nr_entries, struct dentry *dir) { + struct dw_edma_debugfs_entry *entries; int i; + entries = devm_kcalloc(dw->chip->dev, nr_entries, sizeof(*entries), + GFP_KERNEL); + if (!entries) + return; + for (i = 0; i < nr_entries; i++) { + entries[i] = ini[i]; + debugfs_create_file_unsafe(entries[i].name, 0444, dir, - entries[i].reg, &fops_x32); + &entries[i], &fops_x32); } } From patchwork Tue May 3 22:50:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626017 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=WCZ55Ar0; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFZh37lhz9sG3 for ; Wed, 4 May 2022 08:51:56 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244026AbiECWz0 (ORCPT ); Tue, 3 May 2022 18:55:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244027AbiECWzV (ORCPT ); Tue, 3 May 2022 18:55:21 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 9A6CA42ECD; Tue, 3 May 2022 15:51:28 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 9F345C03; Wed, 4 May 2022 01:52:01 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 9F345C03 DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618321; bh=J9UEwT5AW+loEjc7QP5qa3r5o9+Oapk6w8M+2P75iAw=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=WCZ55Ar0F6ZD49G1Cq5J2GJue978w2JBEwRb+0isbsc3eR7JW4oM1JOSluRAdVu2Z 4kTlB1RBQdEBWF+Nq43Frdcf5Y9FTa6Kto/o41OMZB+lGJxxDrGu5AS6qX+OmQSpp4 7Yg+fzsUSFu+BhvugJ8UyWA3970E7lHjy8vuUuv4= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:27 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 15/26] dmaengine: dw-edma: Rename DebugFS dentry variables to 'dent' Date: Wed, 4 May 2022 01:50:53 +0300 Message-ID: <20220503225104.12108-16-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Since we are about to add the eDMA channels direction support to the debugfs module it will be confusing to have both the DebugFS directory and the channels direction short names used in the same code. As a preparation patch let's convert the DebugFS dentry 'dir' variables to having the 'dent' name so to prevent the confusion. Suggested-by: Manivannan Sadhasivam Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- Changelog v2: - This is a new patch added in v2. 
(@Manivannan) --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 46 ++++++++++++------------ 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 78f15e4b07ac..7bb3363b40e4 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -96,7 +96,7 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], - int nr_entries, struct dentry *dir) + int nr_entries, struct dentry *dent) { struct dw_edma_debugfs_entry *entries; int i; @@ -109,13 +109,13 @@ static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], for (i = 0; i < nr_entries; i++) { entries[i] = ini[i]; - debugfs_create_file_unsafe(entries[i].name, 0444, dir, + debugfs_create_file_unsafe(entries[i].name, 0444, dent, &entries[i], &fops_x32); } } static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, - struct dentry *dir) + struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { REGISTER(ch_control1), @@ -131,10 +131,10 @@ static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, int nr_entries; nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dir); + dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dent); } -static void dw_edma_debugfs_regs_wr(struct dentry *dir) +static void dw_edma_debugfs_regs_wr(struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ @@ -171,34 +171,34 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir) WR_REGISTER_UNROLL(ch6_pwr_en), WR_REGISTER_UNROLL(ch7_pwr_en), }; - struct dentry *regs_dir, *ch_dir; + struct dentry *regs_dent, *ch_dent; int nr_entries, i; char name[16]; - regs_dir = debugfs_create_dir(WRITE_STR, dir); + regs_dent = debugfs_create_dir(WRITE_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, - regs_dir); + regs_dent); } for (i = 0; i < dw->wr_ch_cnt; i++) { snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); - ch_dir = debugfs_create_dir(name, regs_dir); + ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].wr, ch_dir); + dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].wr, ch_dent); lim[0][i].start = ®s->type.unroll.ch[i].wr; lim[0][i].end = ®s->type.unroll.ch[i].padding_1[0]; } } -static void dw_edma_debugfs_regs_rd(struct dentry *dir) +static void dw_edma_debugfs_regs_rd(struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ @@ -236,27 +236,27 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir) RD_REGISTER_UNROLL(ch6_pwr_en), RD_REGISTER_UNROLL(ch7_pwr_en), }; - struct dentry *regs_dir, *ch_dir; + struct dentry *regs_dent, *ch_dent; int nr_entries, i; char name[16]; - regs_dir = debugfs_create_dir(READ_STR, dir); + regs_dent = debugfs_create_dir(READ_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); if 
(dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, - regs_dir); + regs_dent); } for (i = 0; i < dw->rd_ch_cnt; i++) { snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i); - ch_dir = debugfs_create_dir(name, regs_dir); + ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].rd, ch_dir); + dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].rd, ch_dent); lim[1][i].start = ®s->type.unroll.ch[i].rd; lim[1][i].end = ®s->type.unroll.ch[i].padding_2[0]; @@ -269,16 +269,16 @@ static void dw_edma_debugfs_regs(void) REGISTER(ctrl_data_arb_prior), REGISTER(ctrl), }; - struct dentry *regs_dir; + struct dentry *regs_dent; int nr_entries; - regs_dir = debugfs_create_dir(REGISTERS_STR, dw->debugfs); + regs_dent = debugfs_create_dir(REGISTERS_STR, dw->debugfs); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir); + dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); - dw_edma_debugfs_regs_wr(regs_dir); - dw_edma_debugfs_regs_rd(regs_dir); + dw_edma_debugfs_regs_wr(regs_dent); + dw_edma_debugfs_regs_rd(regs_dent); } void dw_edma_v0_debugfs_on(struct dw_edma *_dw) From patchwork Tue May 3 22:50:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626020 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=iaSxaLnc; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFZv0TQCz9sG2 for ; Wed, 4 May 2022 08:52:07 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244043AbiECWzh (ORCPT ); Tue, 3 May 2022 18:55:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41080 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244002AbiECWzY (ORCPT ); Tue, 3 May 2022 18:55:24 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 55D4942EF3; Tue, 3 May 2022 15:51:29 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id A00FEC04; Wed, 4 May 2022 01:52:02 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru A00FEC04 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618322; bh=3NGnV4L3yKRDq8qTZRBdv9UvE+k6BF7hNy1V0ERo/1M=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=iaSxaLncR2RpDxfKCoCI5GYgeyYV2dppRHjwrbV4/XIP5jRfVSzAQofBN/8IxwozF 3uxakFetZx0y36/aGR1btXwcR2YZME9cEX31NrtM1z8qANb2TAZVNEIZhtiSGh4AMP BriLbcddP86zZDhys0ECaaj87omCLQzygIVD5HQA= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:28 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo 
Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 16/26] dmaengine: dw-edma: Simplify the DebugFS context CSRs init procedure Date: Wed, 4 May 2022 01:50:54 +0300 Message-ID: <20220503225104.12108-17-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org DW eDMA v4.70a and older have the read and write channel context CSRs only indirectly accessible. It means that CSRs like Channel Control, Xfer size, SAR, DAR and LLP address are accessed at a fixed MMIO address, while the channel they refer to is determined by the Viewport CSR. In order to have coherent access to these registers the CSR IOs are supposed to be protected with a spin-lock. DW eDMA v4.80a and newer normally have unrolled Read/Write channel context registers, that is, all the CSRs denoted before are directly mapped in the controller MMIO space. Since both normal and viewport-based registers are exposed via the DebugFS nodes, the original code author decided to implement an algorithm based on the unrolled CSRs mapping, with the viewport addresses recalculated when required. The problem is that such an implementation turned out to be, first, unscalable (it supports only a single eDMA per platform since a base address is statically preserved) and, second, needlessly overcomplicated (it loops over all Rd/Wr context addresses and recalculates the viewport base address on each DebugFS node access). The algorithm can be greatly simplified just by adding the channel ID and its direction fields to the eDMA DebugFS node descriptor. These new parameters can be used to find a CSR offset within the corresponding channel registers space. The DW eDMA DebugFS node getter will then also use them to activate the respective context CSRs viewport before reading data from the specified register. In case of the unrolled version of the CSRs mapping there won't be any spin-lock taken/released or viewport activation, just as before this modification. Note this modification fixes the REGISTER() macro's use of an externally defined local variable. The same problem with the rest of the macros will be fixed in the next commit.
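To make the legacy access protocol easier to follow, here is a simplified sketch of what the per-entry direction and channel ID enable. It reuses the driver's own types and macros (dw_edma, dw_edma_v0_regs, EDMA_V0_VIEWPORT_MASK), but it is an illustration of the idea rather than the exact code from the diff below:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/io.h>
#include <linux/spinlock.h>

#include "dw-edma-core.h"
#include "dw-edma-v0-regs.h"

/* Sketch: read a single channel-context CSR through the legacy viewport. */
static u32 demo_legacy_ctx_read(struct dw_edma *dw, void __iomem *reg,
				enum dw_edma_dir dir, u16 ch)
{
	struct dw_edma_v0_regs __iomem *regs = dw->chip->reg_base;
	unsigned long flags;
	u32 viewport_sel, val;

	/* BIT(31) selects the read-channel bank, the low bits the channel */
	viewport_sel = dir == EDMA_DIR_READ ? BIT(31) : 0;
	viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, ch);

	/* The viewport write and the register read must not be interleaved
	 * with another CPU re-programming the viewport.
	 */
	raw_spin_lock_irqsave(&dw->lock, flags);
	writel(viewport_sel, &regs->type.legacy.viewport_sel);
	val = readl(reg);
	raw_spin_unlock_irqrestore(&dw->lock, flags);

	return val;
}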
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 84 +++++++++++------------- 1 file changed, 38 insertions(+), 46 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 7bb3363b40e4..1596eedf35c5 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -15,9 +15,27 @@ #define REGS_ADDR(name) \ ((void __iomem *)®s->name) + +#define REGS_CH_ADDR(name, _dir, _ch) \ + ({ \ + struct dw_edma_v0_ch_regs __iomem *__ch_regs; \ + \ + if ((dw)->chip->mf == EDMA_MF_EDMA_LEGACY) \ + __ch_regs = ®s->type.legacy.ch; \ + else if (_dir == EDMA_DIR_READ) \ + __ch_regs = ®s->type.unroll.ch[_ch].rd; \ + else \ + __ch_regs = ®s->type.unroll.ch[_ch].wr; \ + \ + (void __iomem *)&__ch_regs->name; \ + }) + #define REGISTER(name) \ { #name, REGS_ADDR(name) } +#define CTX_REGISTER(name, dir, ch) \ + { #name, REGS_CH_ADDR(name, dir, ch), dir, ch } + #define WR_REGISTER(name) \ { #name, REGS_ADDR(wr_##name) } #define RD_REGISTER(name) \ @@ -41,14 +59,11 @@ static struct dw_edma *dw; static struct dw_edma_v0_regs __iomem *regs; -static struct { - void __iomem *start; - void __iomem *end; -} lim[2][EDMA_V0_MAX_NR_CH]; - struct dw_edma_debugfs_entry { const char *name; void __iomem *reg; + enum dw_edma_dir dir; + u16 ch; }; static int dw_edma_debugfs_u32_get(void *data, u64 *val) @@ -58,33 +73,16 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && reg >= (void __iomem *)®s->type.legacy.ch) { - void __iomem *ptr = ®s->type.legacy.ch; - u32 viewport_sel = 0; unsigned long flags; - u16 ch; - - for (ch = 0; ch < dw->wr_ch_cnt; ch++) - if (lim[0][ch].start >= reg && reg < lim[0][ch].end) { - ptr += (reg - lim[0][ch].start); - goto legacy_sel_wr; - } - - for (ch = 0; ch < dw->rd_ch_cnt; ch++) - if (lim[1][ch].start >= reg && reg < lim[1][ch].end) { - ptr += (reg - lim[1][ch].start); - goto legacy_sel_rd; - } - - return 0; -legacy_sel_rd: - viewport_sel = BIT(31); -legacy_sel_wr: - viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, ch); + u32 viewport_sel; + + viewport_sel = entry->dir == EDMA_DIR_READ ? 
BIT(31) : 0; + viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, entry->ch); raw_spin_lock_irqsave(&dw->lock, flags); writel(viewport_sel, ®s->type.legacy.viewport_sel); - *val = readl(ptr); + *val = readl(reg); raw_spin_unlock_irqrestore(&dw->lock, flags); } else { @@ -114,19 +112,19 @@ static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], } } -static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs, +static void dw_edma_debugfs_regs_ch(enum dw_edma_dir dir, u16 ch, struct dentry *dent) { - const struct dw_edma_debugfs_entry debugfs_regs[] = { - REGISTER(ch_control1), - REGISTER(ch_control2), - REGISTER(transfer_size), - REGISTER(sar.lsb), - REGISTER(sar.msb), - REGISTER(dar.lsb), - REGISTER(dar.msb), - REGISTER(llp.lsb), - REGISTER(llp.msb), + struct dw_edma_debugfs_entry debugfs_regs[] = { + CTX_REGISTER(ch_control1, dir, ch), + CTX_REGISTER(ch_control2, dir, ch), + CTX_REGISTER(transfer_size, dir, ch), + CTX_REGISTER(sar.lsb, dir, ch), + CTX_REGISTER(sar.msb, dir, ch), + CTX_REGISTER(dar.lsb, dir, ch), + CTX_REGISTER(dar.msb, dir, ch), + CTX_REGISTER(llp.lsb, dir, ch), + CTX_REGISTER(llp.msb, dir, ch), }; int nr_entries; @@ -191,10 +189,7 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dent) ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].wr, ch_dent); - - lim[0][i].start = ®s->type.unroll.ch[i].wr; - lim[0][i].end = ®s->type.unroll.ch[i].padding_1[0]; + dw_edma_debugfs_regs_ch(EDMA_DIR_WRITE, i, ch_dent); } } @@ -256,10 +251,7 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dent) ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(®s->type.unroll.ch[i].rd, ch_dent); - - lim[1][i].start = ®s->type.unroll.ch[i].rd; - lim[1][i].end = ®s->type.unroll.ch[i].padding_2[0]; + dw_edma_debugfs_regs_ch(EDMA_DIR_READ, i, ch_dent); } } From patchwork Tue May 3 22:50:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626024 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=LXIQmvs0; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbD4GQ4z9sG3 for ; Wed, 4 May 2022 08:52:24 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244130AbiECWzz (ORCPT ); Tue, 3 May 2022 18:55:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40382 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244098AbiECWzd (ORCPT ); Tue, 3 May 2022 18:55:33 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E24B442ECA; Tue, 3 May 2022 15:51:31 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 9CFC7C05; Wed, 4 May 2022 01:52:03 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 
9CFC7C05 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618323; bh=xggDHztHEuDzNpRnLRMMn7LiHvh8+bnChzQhu9UvjFo=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=LXIQmvs0gR+GstyVOCuiHDxWpZpwjMBZRtR+BQ48PAdyHnK4zzX88Gd0gRdd28rTf Vu6tyrETJ1IfyG0MWvj4/Iarkt6Bf0wWbrIli/Lb8MwCqTyLNxlT7/0xDt37fgHscR XiUkB4ZzF2BYnT/fbcEopmEysLFs7uyh1K5Qky30= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:29 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 17/26] dmaengine: dw-edma: Move eDMA data pointer to DebugFS node descriptor Date: Wed, 4 May 2022 01:50:55 +0300 Message-ID: <20220503225104.12108-18-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The last thing that really stops the DebugFS part of the eDMA driver from supporting multi-eDMA platforms is keeping the eDMA private data pointer in the static area of the DebugFS module. Since the DebugFS node descriptors are now kz-allocated we can freely move that pointer into the descriptors. After the DebugFS initialization procedure that pointer will be used in the DebugFS file getter to access the common CSR space and the context CSRs spin-lock. So the main part of this change concerns the DebugFS node descriptor initialization macros, which, aside from their already defined parameters, now also need the DW eDMA private data pointer passed in.
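Schematically the change boils down to the following sketch (the "demo" names are made up and the getter body is trimmed; the real field names follow the diff below): every node descriptor carries the owning device's private data, so the file getter no longer needs any file-scope static:

#include <linux/errno.h>
#include <linux/io.h>

#include "dw-edma-core.h"

struct demo_debugfs_entry {
	struct dw_edma		*dw;
	const char		*name;
	void __iomem		*reg;
	enum dw_edma_dir	dir;
	u16			ch;
};

static int demo_debugfs_u32_get(void *data, u64 *val)
{
	struct demo_debugfs_entry *entry = data;
	struct dw_edma *dw = entry->dw;	/* per-entry, not a static */

	/* The legacy viewport handling from the diff would live here,
	 * guarded by dw->chip->mf and protected by dw->lock.
	 */
	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY)
		return -EOPNOTSUPP;	/* elided in this sketch */

	*val = readl(entry->reg);
	return 0;
}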
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 242 +++++++++++------------ 1 file changed, 117 insertions(+), 125 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index 1596eedf35c5..e6cf608d121b 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -13,53 +13,55 @@ #include "dw-edma-v0-regs.h" #include "dw-edma-core.h" -#define REGS_ADDR(name) \ - ((void __iomem *)®s->name) +#define REGS_ADDR(dw, name) \ + ({ \ + struct dw_edma_v0_regs __iomem *__regs = (dw)->chip->reg_base; \ + \ + (void __iomem *)&__regs->name; \ + }) -#define REGS_CH_ADDR(name, _dir, _ch) \ +#define REGS_CH_ADDR(dw, name, _dir, _ch) \ ({ \ struct dw_edma_v0_ch_regs __iomem *__ch_regs; \ \ if ((dw)->chip->mf == EDMA_MF_EDMA_LEGACY) \ - __ch_regs = ®s->type.legacy.ch; \ + __ch_regs = REGS_ADDR(dw, type.legacy.ch); \ else if (_dir == EDMA_DIR_READ) \ - __ch_regs = ®s->type.unroll.ch[_ch].rd; \ + __ch_regs = REGS_ADDR(dw, type.unroll.ch[_ch].rd); \ else \ - __ch_regs = ®s->type.unroll.ch[_ch].wr; \ + __ch_regs = REGS_ADDR(dw, type.unroll.ch[_ch].wr); \ \ (void __iomem *)&__ch_regs->name; \ }) -#define REGISTER(name) \ - { #name, REGS_ADDR(name) } +#define REGISTER(dw, name) \ + { dw, #name, REGS_ADDR(dw, name) } -#define CTX_REGISTER(name, dir, ch) \ - { #name, REGS_CH_ADDR(name, dir, ch), dir, ch } +#define CTX_REGISTER(dw, name, dir, ch) \ + { dw, #name, REGS_CH_ADDR(dw, name, dir, ch), dir, ch } -#define WR_REGISTER(name) \ - { #name, REGS_ADDR(wr_##name) } -#define RD_REGISTER(name) \ - { #name, REGS_ADDR(rd_##name) } +#define WR_REGISTER(dw, name) \ + { dw, #name, REGS_ADDR(dw, wr_##name) } +#define RD_REGISTER(dw, name) \ + { dw, #name, REGS_ADDR(dw, rd_##name) } -#define WR_REGISTER_LEGACY(name) \ - { #name, REGS_ADDR(type.legacy.wr_##name) } +#define WR_REGISTER_LEGACY(dw, name) \ + { dw, #name, REGS_ADDR(dw, type.legacy.wr_##name) } #define RD_REGISTER_LEGACY(name) \ - { #name, REGS_ADDR(type.legacy.rd_##name) } + { dw, #name, REGS_ADDR(dw, type.legacy.rd_##name) } -#define WR_REGISTER_UNROLL(name) \ - { #name, REGS_ADDR(type.unroll.wr_##name) } -#define RD_REGISTER_UNROLL(name) \ - { #name, REGS_ADDR(type.unroll.rd_##name) } +#define WR_REGISTER_UNROLL(dw, name) \ + { dw, #name, REGS_ADDR(dw, type.unroll.wr_##name) } +#define RD_REGISTER_UNROLL(dw, name) \ + { dw, #name, REGS_ADDR(dw, type.unroll.rd_##name) } #define WRITE_STR "write" #define READ_STR "read" #define CHANNEL_STR "channel" #define REGISTERS_STR "registers" -static struct dw_edma *dw; -static struct dw_edma_v0_regs __iomem *regs; - struct dw_edma_debugfs_entry { + struct dw_edma *dw; const char *name; void __iomem *reg; enum dw_edma_dir dir; @@ -69,10 +71,11 @@ struct dw_edma_debugfs_entry { static int dw_edma_debugfs_u32_get(void *data, u64 *val) { struct dw_edma_debugfs_entry *entry = data; + struct dw_edma *dw = entry->dw; void __iomem *reg = entry->reg; if (dw->chip->mf == EDMA_MF_EDMA_LEGACY && - reg >= (void __iomem *)®s->type.legacy.ch) { + reg >= REGS_ADDR(dw, type.legacy.ch)) { unsigned long flags; u32 viewport_sel; @@ -81,7 +84,7 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val) raw_spin_lock_irqsave(&dw->lock, flags); - writel(viewport_sel, ®s->type.legacy.viewport_sel); + writel(viewport_sel, REGS_ADDR(dw, type.legacy.viewport_sel)); *val = readl(reg); raw_spin_unlock_irqrestore(&dw->lock, flags); @@ -93,7 +96,8 @@ static int dw_edma_debugfs_u32_get(void 
*data, u64 *val) } DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n"); -static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], +static void dw_edma_debugfs_create_x32(struct dw_edma *dw, + const struct dw_edma_debugfs_entry ini[], int nr_entries, struct dentry *dent) { struct dw_edma_debugfs_entry *entries; @@ -112,62 +116,62 @@ static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[], } } -static void dw_edma_debugfs_regs_ch(enum dw_edma_dir dir, u16 ch, - struct dentry *dent) +static void dw_edma_debugfs_regs_ch(struct dw_edma *dw, enum dw_edma_dir dir, + u16 ch, struct dentry *dent) { struct dw_edma_debugfs_entry debugfs_regs[] = { - CTX_REGISTER(ch_control1, dir, ch), - CTX_REGISTER(ch_control2, dir, ch), - CTX_REGISTER(transfer_size, dir, ch), - CTX_REGISTER(sar.lsb, dir, ch), - CTX_REGISTER(sar.msb, dir, ch), - CTX_REGISTER(dar.lsb, dir, ch), - CTX_REGISTER(dar.msb, dir, ch), - CTX_REGISTER(llp.lsb, dir, ch), - CTX_REGISTER(llp.msb, dir, ch), + CTX_REGISTER(dw, ch_control1, dir, ch), + CTX_REGISTER(dw, ch_control2, dir, ch), + CTX_REGISTER(dw, transfer_size, dir, ch), + CTX_REGISTER(dw, sar.lsb, dir, ch), + CTX_REGISTER(dw, sar.msb, dir, ch), + CTX_REGISTER(dw, dar.lsb, dir, ch), + CTX_REGISTER(dw, dar.msb, dir, ch), + CTX_REGISTER(dw, llp.lsb, dir, ch), + CTX_REGISTER(dw, llp.msb, dir, ch), }; int nr_entries; nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dent); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, dent); } -static void dw_edma_debugfs_regs_wr(struct dentry *dent) +static void dw_edma_debugfs_regs_wr(struct dw_edma *dw, struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ - WR_REGISTER(engine_en), - WR_REGISTER(doorbell), - WR_REGISTER(ch_arb_weight.lsb), - WR_REGISTER(ch_arb_weight.msb), + WR_REGISTER(dw, engine_en), + WR_REGISTER(dw, doorbell), + WR_REGISTER(dw, ch_arb_weight.lsb), + WR_REGISTER(dw, ch_arb_weight.msb), /* eDMA interrupts registers */ - WR_REGISTER(int_status), - WR_REGISTER(int_mask), - WR_REGISTER(int_clear), - WR_REGISTER(err_status), - WR_REGISTER(done_imwr.lsb), - WR_REGISTER(done_imwr.msb), - WR_REGISTER(abort_imwr.lsb), - WR_REGISTER(abort_imwr.msb), - WR_REGISTER(ch01_imwr_data), - WR_REGISTER(ch23_imwr_data), - WR_REGISTER(ch45_imwr_data), - WR_REGISTER(ch67_imwr_data), - WR_REGISTER(linked_list_err_en), + WR_REGISTER(dw, int_status), + WR_REGISTER(dw, int_mask), + WR_REGISTER(dw, int_clear), + WR_REGISTER(dw, err_status), + WR_REGISTER(dw, done_imwr.lsb), + WR_REGISTER(dw, done_imwr.msb), + WR_REGISTER(dw, abort_imwr.lsb), + WR_REGISTER(dw, abort_imwr.msb), + WR_REGISTER(dw, ch01_imwr_data), + WR_REGISTER(dw, ch23_imwr_data), + WR_REGISTER(dw, ch45_imwr_data), + WR_REGISTER(dw, ch67_imwr_data), + WR_REGISTER(dw, linked_list_err_en), }; const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ - WR_REGISTER_UNROLL(engine_chgroup), - WR_REGISTER_UNROLL(engine_hshake_cnt.lsb), - WR_REGISTER_UNROLL(engine_hshake_cnt.msb), - WR_REGISTER_UNROLL(ch0_pwr_en), - WR_REGISTER_UNROLL(ch1_pwr_en), - WR_REGISTER_UNROLL(ch2_pwr_en), - WR_REGISTER_UNROLL(ch3_pwr_en), - WR_REGISTER_UNROLL(ch4_pwr_en), - WR_REGISTER_UNROLL(ch5_pwr_en), - WR_REGISTER_UNROLL(ch6_pwr_en), - WR_REGISTER_UNROLL(ch7_pwr_en), + WR_REGISTER_UNROLL(dw, engine_chgroup), + WR_REGISTER_UNROLL(dw, engine_hshake_cnt.lsb), + WR_REGISTER_UNROLL(dw, engine_hshake_cnt.msb), 
+ WR_REGISTER_UNROLL(dw, ch0_pwr_en), + WR_REGISTER_UNROLL(dw, ch1_pwr_en), + WR_REGISTER_UNROLL(dw, ch2_pwr_en), + WR_REGISTER_UNROLL(dw, ch3_pwr_en), + WR_REGISTER_UNROLL(dw, ch4_pwr_en), + WR_REGISTER_UNROLL(dw, ch5_pwr_en), + WR_REGISTER_UNROLL(dw, ch6_pwr_en), + WR_REGISTER_UNROLL(dw, ch7_pwr_en), }; struct dentry *regs_dent, *ch_dent; int nr_entries, i; @@ -176,11 +180,11 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dent) regs_dent = debugfs_create_dir(WRITE_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); - dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, + dw_edma_debugfs_create_x32(dw, debugfs_unroll_regs, nr_entries, regs_dent); } @@ -189,47 +193,47 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dent) ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(EDMA_DIR_WRITE, i, ch_dent); + dw_edma_debugfs_regs_ch(dw, EDMA_DIR_WRITE, i, ch_dent); } } -static void dw_edma_debugfs_regs_rd(struct dentry *dent) +static void dw_edma_debugfs_regs_rd(struct dw_edma *dw, struct dentry *dent) { const struct dw_edma_debugfs_entry debugfs_regs[] = { /* eDMA global registers */ - RD_REGISTER(engine_en), - RD_REGISTER(doorbell), - RD_REGISTER(ch_arb_weight.lsb), - RD_REGISTER(ch_arb_weight.msb), + RD_REGISTER(dw, engine_en), + RD_REGISTER(dw, doorbell), + RD_REGISTER(dw, ch_arb_weight.lsb), + RD_REGISTER(dw, ch_arb_weight.msb), /* eDMA interrupts registers */ - RD_REGISTER(int_status), - RD_REGISTER(int_mask), - RD_REGISTER(int_clear), - RD_REGISTER(err_status.lsb), - RD_REGISTER(err_status.msb), - RD_REGISTER(linked_list_err_en), - RD_REGISTER(done_imwr.lsb), - RD_REGISTER(done_imwr.msb), - RD_REGISTER(abort_imwr.lsb), - RD_REGISTER(abort_imwr.msb), - RD_REGISTER(ch01_imwr_data), - RD_REGISTER(ch23_imwr_data), - RD_REGISTER(ch45_imwr_data), - RD_REGISTER(ch67_imwr_data), + RD_REGISTER(dw, int_status), + RD_REGISTER(dw, int_mask), + RD_REGISTER(dw, int_clear), + RD_REGISTER(dw, err_status.lsb), + RD_REGISTER(dw, err_status.msb), + RD_REGISTER(dw, linked_list_err_en), + RD_REGISTER(dw, done_imwr.lsb), + RD_REGISTER(dw, done_imwr.msb), + RD_REGISTER(dw, abort_imwr.lsb), + RD_REGISTER(dw, abort_imwr.msb), + RD_REGISTER(dw, ch01_imwr_data), + RD_REGISTER(dw, ch23_imwr_data), + RD_REGISTER(dw, ch45_imwr_data), + RD_REGISTER(dw, ch67_imwr_data), }; const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = { /* eDMA channel context grouping */ - RD_REGISTER_UNROLL(engine_chgroup), - RD_REGISTER_UNROLL(engine_hshake_cnt.lsb), - RD_REGISTER_UNROLL(engine_hshake_cnt.msb), - RD_REGISTER_UNROLL(ch0_pwr_en), - RD_REGISTER_UNROLL(ch1_pwr_en), - RD_REGISTER_UNROLL(ch2_pwr_en), - RD_REGISTER_UNROLL(ch3_pwr_en), - RD_REGISTER_UNROLL(ch4_pwr_en), - RD_REGISTER_UNROLL(ch5_pwr_en), - RD_REGISTER_UNROLL(ch6_pwr_en), - RD_REGISTER_UNROLL(ch7_pwr_en), + RD_REGISTER_UNROLL(dw, engine_chgroup), + RD_REGISTER_UNROLL(dw, engine_hshake_cnt.lsb), + RD_REGISTER_UNROLL(dw, engine_hshake_cnt.msb), + RD_REGISTER_UNROLL(dw, ch0_pwr_en), + RD_REGISTER_UNROLL(dw, ch1_pwr_en), + RD_REGISTER_UNROLL(dw, ch2_pwr_en), + RD_REGISTER_UNROLL(dw, ch3_pwr_en), + RD_REGISTER_UNROLL(dw, ch4_pwr_en), + RD_REGISTER_UNROLL(dw, ch5_pwr_en), + RD_REGISTER_UNROLL(dw, ch6_pwr_en), + RD_REGISTER_UNROLL(dw, ch7_pwr_en), }; struct dentry *regs_dent, *ch_dent; int nr_entries, i; @@ 
-238,11 +242,11 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dent) regs_dent = debugfs_create_dir(READ_STR, dent); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) { nr_entries = ARRAY_SIZE(debugfs_unroll_regs); - dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries, + dw_edma_debugfs_create_x32(dw, debugfs_unroll_regs, nr_entries, regs_dent); } @@ -251,15 +255,15 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dent) ch_dent = debugfs_create_dir(name, regs_dent); - dw_edma_debugfs_regs_ch(EDMA_DIR_READ, i, ch_dent); + dw_edma_debugfs_regs_ch(dw, EDMA_DIR_READ, i, ch_dent); } } -static void dw_edma_debugfs_regs(void) +static void dw_edma_debugfs_regs(struct dw_edma *dw) { const struct dw_edma_debugfs_entry debugfs_regs[] = { - REGISTER(ctrl_data_arb_prior), - REGISTER(ctrl), + REGISTER(dw, ctrl_data_arb_prior), + REGISTER(dw, ctrl), }; struct dentry *regs_dent; int nr_entries; @@ -267,40 +271,28 @@ static void dw_edma_debugfs_regs(void) regs_dent = debugfs_create_dir(REGISTERS_STR, dw->debugfs); nr_entries = ARRAY_SIZE(debugfs_regs); - dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent); + dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); - dw_edma_debugfs_regs_wr(regs_dent); - dw_edma_debugfs_regs_rd(regs_dent); + dw_edma_debugfs_regs_wr(dw, regs_dent); + dw_edma_debugfs_regs_rd(dw, regs_dent); } -void dw_edma_v0_debugfs_on(struct dw_edma *_dw) +void dw_edma_v0_debugfs_on(struct dw_edma *dw) { if (!debugfs_initialized()) return; - dw = _dw; - if (!dw) - return; - - regs = dw->chip->reg_base; - if (!regs) - return; - dw->debugfs = debugfs_create_dir(dw->name, NULL); debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf); debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt); debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, &dw->rd_ch_cnt); - dw_edma_debugfs_regs(); + dw_edma_debugfs_regs(dw); } -void dw_edma_v0_debugfs_off(struct dw_edma *_dw) +void dw_edma_v0_debugfs_off(struct dw_edma *dw) { - dw = _dw; - if (!dw) - return; - debugfs_remove_recursive(dw->debugfs); dw->debugfs = NULL; } From patchwork Tue May 3 22:50:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626021 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=aZAv1IH/; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbC42sDz9sG2 for ; Wed, 4 May 2022 08:52:23 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244109AbiECWzu (ORCPT ); Tue, 3 May 2022 18:55:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39986 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243967AbiECWzd (ORCPT ); Tue, 3 May 2022 18:55:33 -0400 Received: from mail.baikalelectronics.ru 
(mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 4293943395; Tue, 3 May 2022 15:51:31 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id B2EA8C06; Wed, 4 May 2022 01:52:04 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru B2EA8C06 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618324; bh=L/ijhvq/jFe/mifKkaYDmhw4Yb3oJDxQ883Lx2E1faI=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=aZAv1IH/aur7KOqOZ2IejN/mXKx1yl3u5YqQ7NlxF9hKLs0h4gq1XzeEmyLXmk9iN O3zNBUdj5CASBspFKu5pBWBa/23jb0GqkHLt0Gc0dSFe+btSyoY1iEDXdSDcxjqMyr VP3hpFi6oUx7iB/TZkol8BEzsyjsgo1y8McrUiAE= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:30 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 18/26] dmaengine: dw-edma: Join Write/Read channels into a single device Date: Wed, 4 May 2022 01:50:56 +0300 Message-ID: <20220503225104.12108-19-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Indeed, there is no point in such a split-up, for multiple reasons. First of all, the eDMA read and write channels belong to one physical controller, so splitting them up is illogical. Secondly, the channels can be differentiated by means of filtering and the dma_get_slave_caps() method. Finally, having these channels handled separately not only needlessly complicates the code, but also causes a DebugFS error to be printed to the console: >> Debugfs: Directory '1f052000.pcie' with parent 'dmaengine' already present! So let's join the read/write channels into a single DMA device. The client drivers will be able to choose a channel with the required capability by getting the DMA slave direction setting. Its default value is overridden by the dw_edma_device_caps() callback in accordance with the channel nature.
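For illustration only (not part of this patch, and the helper name dw_edma_get_chan_sketch() is made up here), a client of the joined device could pick a channel of the needed direction purely through the generic dmaengine API, relying on the capabilities reported by dw_edma_device_caps():

#include <linux/bits.h>
#include <linux/dmaengine.h>

/*
 * Minimal sketch: request a slave channel and verify it advertises the
 * required transfer direction via dma_get_slave_caps(). A real client
 * would typically iterate over channels or use a filter callback instead
 * of taking the first one that comes along.
 */
static struct dma_chan *dw_edma_get_chan_sketch(enum dma_transfer_direction dir)
{
	struct dma_slave_caps caps;
	struct dma_chan *chan;
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return NULL;

	if (dma_get_slave_caps(chan, &caps) || !(caps.directions & BIT(dir))) {
		dma_release_channel(chan);
		return NULL;
	}

	return chan;
}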
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-core.c | 116 +++++++++++++++-------------- drivers/dma/dw-edma/dw-edma-core.h | 5 +- 2 files changed, 61 insertions(+), 60 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index e9cb3056b6b7..f5124e7b50cf 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -209,6 +209,24 @@ static void dw_edma_start_transfer(struct dw_edma_chan *chan) desc->chunks_alloc--; } +static void dw_edma_device_caps(struct dma_chan *dchan, + struct dma_slave_caps *caps) +{ + struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan); + + if (chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) { + if (chan->dir == EDMA_DIR_READ) + caps->directions = BIT(DMA_DEV_TO_MEM); + else + caps->directions = BIT(DMA_MEM_TO_DEV); + } else { + if (chan->dir == EDMA_DIR_WRITE) + caps->directions = BIT(DMA_DEV_TO_MEM); + else + caps->directions = BIT(DMA_MEM_TO_DEV); + } +} + static int dw_edma_device_config(struct dma_chan *dchan, struct dma_slave_config *config) { @@ -723,8 +741,7 @@ static void dw_edma_free_chan_resources(struct dma_chan *dchan) pm_runtime_put(chan->dw->chip->dev); } -static int dw_edma_channel_setup(struct dw_edma *dw, bool write, - u32 wr_alloc, u32 rd_alloc) +static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc) { struct dw_edma_chip *chip = dw->chip; struct dw_edma_region *dt_region; @@ -732,27 +749,15 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, struct dw_edma_chan *chan; struct dw_edma_irq *irq; struct dma_device *dma; - u32 alloc, off_alloc; - u32 i, j, cnt; - int err = 0; + u32 i, ch_cnt; u32 pos; - if (write) { - i = 0; - cnt = dw->wr_ch_cnt; - dma = &dw->wr_edma; - alloc = wr_alloc; - off_alloc = 0; - } else { - i = dw->wr_ch_cnt; - cnt = dw->rd_ch_cnt; - dma = &dw->rd_edma; - alloc = rd_alloc; - off_alloc = wr_alloc; - } + ch_cnt = dw->wr_ch_cnt + dw->rd_ch_cnt; + dma = &dw->dma; INIT_LIST_HEAD(&dma->channels); - for (j = 0; (alloc || dw->nr_irqs == 1) && j < cnt; j++, i++) { + + for (i = 0; i < ch_cnt; i++) { chan = &dw->chan[i]; dt_region = devm_kzalloc(dev, sizeof(*dt_region), GFP_KERNEL); @@ -762,52 +767,62 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, chan->vc.chan.private = dt_region; chan->dw = dw; - chan->id = j; - chan->dir = write ? EDMA_DIR_WRITE : EDMA_DIR_READ; + + if (i < dw->wr_ch_cnt) { + chan->id = i; + chan->dir = EDMA_DIR_WRITE; + } else { + chan->id = i - dw->wr_ch_cnt; + chan->dir = EDMA_DIR_READ; + } + chan->configured = false; chan->request = EDMA_REQ_NONE; chan->status = EDMA_ST_IDLE; - if (write) - chan->ll_max = (chip->ll_region_wr[j].sz / EDMA_LL_SZ); + if (chan->dir == EDMA_DIR_WRITE) + chan->ll_max = (chip->ll_region_wr[chan->id].sz / EDMA_LL_SZ); else - chan->ll_max = (chip->ll_region_rd[j].sz / EDMA_LL_SZ); + chan->ll_max = (chip->ll_region_rd[chan->id].sz / EDMA_LL_SZ); chan->ll_max -= 1; dev_vdbg(dev, "L. List:\tChannel %s[%u] max_cnt=%u\n", - write ? "write" : "read", j, chan->ll_max); + chan->dir == EDMA_DIR_WRITE ? 
"write" : "read", + chan->id, chan->ll_max); if (dw->nr_irqs == 1) pos = 0; + else if (chan->dir == EDMA_DIR_WRITE) + pos = chan->id % wr_alloc; else - pos = off_alloc + (j % alloc); + pos = wr_alloc + chan->id % rd_alloc; irq = &dw->irq[pos]; - if (write) - irq->wr_mask |= BIT(j); + if (chan->dir == EDMA_DIR_WRITE) + irq->wr_mask |= BIT(chan->id); else - irq->rd_mask |= BIT(j); + irq->rd_mask |= BIT(chan->id); irq->dw = dw; memcpy(&chan->msi, &irq->msi, sizeof(chan->msi)); dev_vdbg(dev, "MSI:\t\tChannel %s[%u] addr=0x%.8x%.8x, data=0x%.8x\n", - write ? "write" : "read", j, + chan->dir == EDMA_DIR_WRITE ? "write" : "read", chan->id, chan->msi.address_hi, chan->msi.address_lo, chan->msi.data); chan->vc.desc_free = vchan_free_desc; vchan_init(&chan->vc, dma); - if (write) { - dt_region->paddr = chip->dt_region_wr[j].paddr; - dt_region->vaddr = chip->dt_region_wr[j].vaddr; - dt_region->sz = chip->dt_region_wr[j].sz; + if (chan->dir == EDMA_DIR_WRITE) { + dt_region->paddr = chip->dt_region_wr[chan->id].paddr; + dt_region->vaddr = chip->dt_region_wr[chan->id].vaddr; + dt_region->sz = chip->dt_region_wr[chan->id].sz; } else { - dt_region->paddr = chip->dt_region_rd[j].paddr; - dt_region->vaddr = chip->dt_region_rd[j].vaddr; - dt_region->sz = chip->dt_region_rd[j].sz; + dt_region->paddr = chip->dt_region_rd[chan->id].paddr; + dt_region->vaddr = chip->dt_region_rd[chan->id].vaddr; + dt_region->sz = chip->dt_region_rd[chan->id].sz; } dw_edma_v0_core_device_config(chan); @@ -819,7 +834,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, dma_cap_set(DMA_CYCLIC, dma->cap_mask); dma_cap_set(DMA_PRIVATE, dma->cap_mask); dma_cap_set(DMA_INTERLEAVE, dma->cap_mask); - dma->directions = BIT(write ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV); + dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); dma->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR; @@ -828,6 +843,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, dma->dev = chip->dev; dma->device_alloc_chan_resources = dw_edma_alloc_chan_resources; dma->device_free_chan_resources = dw_edma_free_chan_resources; + dma->device_caps = dw_edma_device_caps; dma->device_config = dw_edma_device_config; dma->device_pause = dw_edma_device_pause; dma->device_resume = dw_edma_device_resume; @@ -841,9 +857,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write, dma_set_max_seg_size(dma->dev, U32_MAX); /* Register DMA device */ - err = dma_async_device_register(dma); - - return err; + return dma_async_device_register(dma); } static inline void dw_edma_dec_irq_alloc(int *nr_irqs, u32 *alloc, u16 cnt) @@ -988,13 +1002,8 @@ int dw_edma_probe(struct dw_edma_chip *chip) if (err) return err; - /* Setup write channels */ - err = dw_edma_channel_setup(dw, true, wr_alloc, rd_alloc); - if (err) - goto err_irq_free; - - /* Setup read channels */ - err = dw_edma_channel_setup(dw, false, wr_alloc, rd_alloc); + /* Setup write/read channels */ + err = dw_edma_channel_setup(dw, wr_alloc, rd_alloc); if (err) goto err_irq_free; @@ -1034,15 +1043,8 @@ int dw_edma_remove(struct dw_edma_chip *chip) pm_runtime_disable(dev); /* Deregister eDMA device */ - dma_async_device_unregister(&dw->wr_edma); - list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels, - vc.chan.device_node) { - tasklet_kill(&chan->vc.task); - list_del(&chan->vc.chan.device_node); - } - - dma_async_device_unregister(&dw->rd_edma); - 
list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels, + dma_async_device_unregister(&dw->dma); + list_for_each_entry_safe(chan, _chan, &dw->dma.channels, vc.chan.device_node) { tasklet_kill(&chan->vc.task); list_del(&chan->vc.chan.device_node); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index 85df2d511907..b576a8fff45a 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -98,10 +98,9 @@ struct dw_edma_irq { struct dw_edma { char name[20]; - struct dma_device wr_edma; - u16 wr_ch_cnt; + struct dma_device dma; - struct dma_device rd_edma; + u16 wr_ch_cnt; u16 rd_ch_cnt; struct dw_edma_irq *irq; From patchwork Tue May 3 22:50:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626027 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=Wu0khgAC; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbF6Dpxz9sG2 for ; Wed, 4 May 2022 08:52:25 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244024AbiECWz5 (ORCPT ); Tue, 3 May 2022 18:55:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40908 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244134AbiECWzi (ORCPT ); Tue, 3 May 2022 18:55:38 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id DA0C0433BA; Tue, 3 May 2022 15:51:32 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 315FAC07; Wed, 4 May 2022 01:52:06 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 315FAC07 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618326; bh=Tnv3c0nlZBq31YnMEiCDH/uw1APag3tDvsgntFMphxU=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=Wu0khgAC0R09ZRZcfxfLWiroxK0t5i8LunwONqTx3z36tE6xLNT7QxLoV5SK2srmJ 1YnPqxQRoNE9WE8srBiTXA2RcX/lm0p7/VqoH3z1qZBZm/CUEvQHSIahXV9I0cgZ6u sXp34VRVr3fiNSBw3Ksvx1kSH8GqNQxYlEw6u7LQ= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:32 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 19/26] dmaengine: dw-edma: Use DMA-engine device DebugFS subdirectory Date: Wed, 4 May 2022 01:50:57 +0300 Message-ID: <20220503225104.12108-20-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int 
(192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Since all DW eDMA read and write channels are now installed in a framework of a single DMA-engine device, we can freely move all the DW eDMA-specific DebugFS nodes into a ready-to-use DMA-engine DebugFS subdirectory. It's created during the DMA-device registration and can be found in the dma_device.dbg_dev_root field. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-core.c | 3 --- drivers/dma/dw-edma/dw-edma-core.h | 3 --- drivers/dma/dw-edma/dw-edma-v0-core.c | 5 ----- drivers/dma/dw-edma/dw-edma-v0-core.h | 1 - drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 16 ++++------------ drivers/dma/dw-edma/dw-edma-v0-debugfs.h | 5 ----- 6 files changed, 4 insertions(+), 29 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index f5124e7b50cf..7ba3b60c960c 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -1050,9 +1050,6 @@ int dw_edma_remove(struct dw_edma_chip *chip) list_del(&chan->vc.chan.device_node); } - /* Turn debugfs off */ - dw_edma_v0_core_debugfs_off(dw); - return 0; } EXPORT_SYMBOL_GPL(dw_edma_remove); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index b576a8fff45a..e3ad3e372b55 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -111,9 +111,6 @@ struct dw_edma { raw_spinlock_t lock; /* Only for legacy */ struct dw_edma_chip *chip; -#ifdef CONFIG_DEBUG_FS - struct dentry *debugfs; -#endif /* CONFIG_DEBUG_FS */ }; struct dw_edma_sg { diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index 2d3f74ccc340..e6d611176891 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -511,8 +511,3 @@ void dw_edma_v0_core_debugfs_on(struct dw_edma *dw) { dw_edma_v0_debugfs_on(dw); } - -void dw_edma_v0_core_debugfs_off(struct dw_edma *dw) -{ - dw_edma_v0_debugfs_off(dw); -} diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.h b/drivers/dma/dw-edma/dw-edma-v0-core.h index 75aec6d31b21..ab96a1f48080 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.h +++ b/drivers/dma/dw-edma/dw-edma-v0-core.h @@ -23,6 +23,5 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first); int dw_edma_v0_core_device_config(struct dw_edma_chan *chan); /* eDMA debug fs callbacks */ void dw_edma_v0_core_debugfs_on(struct dw_edma *dw); -void dw_edma_v0_core_debugfs_off(struct dw_edma *dw); #endif /* _DW_EDMA_V0_CORE_H */ diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c index e6cf608d121b..d12c607433bf 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c @@ -268,7 +268,7 @@ static void dw_edma_debugfs_regs(struct dw_edma *dw) struct dentry *regs_dent; int nr_entries; - regs_dent = debugfs_create_dir(REGISTERS_STR, dw->debugfs); + regs_dent = debugfs_create_dir(REGISTERS_STR, dw->dma.dbg_dev_root); nr_entries = ARRAY_SIZE(debugfs_regs); dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent); @@ -282,17 +282,9 @@ void 
dw_edma_v0_debugfs_on(struct dw_edma *dw) if (!debugfs_initialized()) return; - dw->debugfs = debugfs_create_dir(dw->name, NULL); - - debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf); - debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt); - debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, &dw->rd_ch_cnt); + debugfs_create_u32("mf", 0444, dw->dma.dbg_dev_root, &dw->chip->mf); + debugfs_create_u16("wr_ch_cnt", 0444, dw->dma.dbg_dev_root, &dw->wr_ch_cnt); + debugfs_create_u16("rd_ch_cnt", 0444, dw->dma.dbg_dev_root, &dw->rd_ch_cnt); dw_edma_debugfs_regs(dw); } - -void dw_edma_v0_debugfs_off(struct dw_edma *dw) -{ - debugfs_remove_recursive(dw->debugfs); - dw->debugfs = NULL; -} diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.h b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h index 3391b86edf5a..fb3342d97d6d 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.h +++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.h @@ -13,15 +13,10 @@ #ifdef CONFIG_DEBUG_FS void dw_edma_v0_debugfs_on(struct dw_edma *dw); -void dw_edma_v0_debugfs_off(struct dw_edma *dw); #else static inline void dw_edma_v0_debugfs_on(struct dw_edma *dw) { } - -static inline void dw_edma_v0_debugfs_off(struct dw_edma *dw) -{ -} #endif /* CONFIG_DEBUG_FS */ #endif /* _DW_EDMA_V0_DEBUG_FS_H */ From patchwork Tue May 3 22:50:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626022 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=c3X+0SoG; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbC6Nv8z9sG3 for ; Wed, 4 May 2022 08:52:23 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243910AbiECWzx (ORCPT ); Tue, 3 May 2022 18:55:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243966AbiECWzd (ORCPT ); Tue, 3 May 2022 18:55:33 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id ED0CB433BE; Tue, 3 May 2022 15:51:33 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 3E5C5C08; Wed, 4 May 2022 01:52:07 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 3E5C5C08 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618327; bh=mlbrREDWnaYRYCU292g215ETXH93C3iQ6RN6HiEltyY=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=c3X+0SoGAyf5cwpKv+JF10ZSGkfGchJGwy97QGr39nfvBUkAGMhL4T6rd7JuLONa7 hGZfHycsBF6zYUWtSokWNsr3MViHuMLOVvBKjM300vF//rGVnAbCTKyaMD+ti00LK0 Ufd9jwM44MzUqUGmoKckLLd/+ZVkDG9UGz7seGi0= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:33 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han 
, Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 20/26] dmaengine: dw-edma: Use non-atomic io-64 methods Date: Wed, 4 May 2022 01:50:58 +0300 Message-ID: <20220503225104.12108-21-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Instead of splitting the 64-bit IOs up into two 32-bit ones, it's possible to use the readily available set of non-atomic readq/writeq methods implemented exactly for such cases. They are defined in the dedicated header files io-64-nonatomic-lo-hi.h/io-64-nonatomic-hi-lo.h. So if the 64-bit readq/writeq methods are unavailable on some of the platforms under consideration, the corresponding drivers can include either of these headers and stop locally re-implementing the 64-bit IO accessors, taking into account the non-atomic nature of the included methods. Let's do that in the DW eDMA driver too. Note that by doing so we can discard the CONFIG_64BIT config ifdefs from the code. Also note that if a platform doesn't support 64-bit DBI IOs, the corresponding accessors will just directly call the lo_hi_readq()/lo_hi_writeq() methods.
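For reference, the non-atomic "lo-hi" helpers pulled in here boil down to the following split accesses. This is only an illustrative rendition (the sketch_* names are made up); the real definitions live in include/linux/io-64-nonatomic-lo-hi.h and its hi-lo counterpart.

#include <linux/io.h>

/* Lower 32 bits are accessed first, then the upper 32 bits. */
static inline u64 sketch_lo_hi_readq(const void __iomem *addr)
{
	u32 low = readl(addr);
	u32 high = readl(addr + 4);

	return low + ((u64)high << 32);
}

static inline void sketch_lo_hi_writeq(u64 val, void __iomem *addr)
{
	writel((u32)val, addr);
	writel((u32)(val >> 32), addr + 4);
}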
Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-v0-core.c | 71 +++++++++------------------ 1 file changed, 24 insertions(+), 47 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c index e6d611176891..4348d2323125 100644 --- a/drivers/dma/dw-edma/dw-edma-v0-core.c +++ b/drivers/dma/dw-edma/dw-edma-v0-core.c @@ -8,6 +8,8 @@ #include +#include + #include "dw-edma-core.h" #include "dw-edma-v0-core.h" #include "dw-edma-v0-regs.h" @@ -53,8 +55,6 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) SET_32(dw, rd_##name, value); \ } while (0) -#ifdef CONFIG_64BIT - #define SET_64(dw, name, value) \ writeq(value, &(__dw_regs(dw)->name)) @@ -80,8 +80,6 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw) SET_64(dw, rd_##name, value); \ } while (0) -#endif /* CONFIG_64BIT */ - #define SET_COMPAT(dw, name, value) \ writel(value, &(__dw_regs(dw)->type.unroll.name)) @@ -164,14 +162,13 @@ static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, #define SET_LL_32(ll, value) \ writel(value, ll) -#ifdef CONFIG_64BIT - static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, u64 value, void __iomem *addr) { + unsigned long flags; + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; - unsigned long flags; raw_spin_lock_irqsave(&dw->lock, flags); @@ -181,22 +178,25 @@ static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, writel(viewport_sel, &(__dw_regs(dw)->type.legacy.viewport_sel)); + } + + if (dw->chip->flags & DW_EDMA_CHIP_32BIT_DBI) + lo_hi_writeq(value, addr); + else writeq(value, addr); + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) raw_spin_unlock_irqrestore(&dw->lock, flags); - } else { - writeq(value, addr); - } } static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, const void __iomem *addr) { - u32 value; + unsigned long flags; + u64 value; if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) { u32 viewport_sel; - unsigned long flags; raw_spin_lock_irqsave(&dw->lock, flags); @@ -206,12 +206,15 @@ static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, writel(viewport_sel, &(__dw_regs(dw)->type.legacy.viewport_sel)); + } + + if (dw->chip->flags & DW_EDMA_CHIP_32BIT_DBI) + value = lo_hi_readq(addr); + else value = readq(addr); + if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) raw_spin_unlock_irqrestore(&dw->lock, flags); - } else { - value = readq(addr); - } return value; } @@ -225,8 +228,6 @@ static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch, #define SET_LL_64(ll, value) \ writeq(value, ll) -#endif /* CONFIG_64BIT */ - /* eDMA management callbacks */ void dw_edma_v0_core_off(struct dw_edma *dw) { @@ -325,19 +326,10 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) /* Transfer size */ SET_LL_32(&lli[i].transfer_size, child->sz); /* SAR */ - #ifdef CONFIG_64BIT - SET_LL_64(&lli[i].sar.reg, child->sar); - #else /* CONFIG_64BIT */ - SET_LL_32(&lli[i].sar.lsb, lower_32_bits(child->sar)); - SET_LL_32(&lli[i].sar.msb, upper_32_bits(child->sar)); - #endif /* CONFIG_64BIT */ + SET_LL_64(&lli[i].sar.reg, child->sar); /* DAR */ - #ifdef CONFIG_64BIT - SET_LL_64(&lli[i].dar.reg, child->dar); - #else /* CONFIG_64BIT */ - SET_LL_32(&lli[i].dar.lsb, lower_32_bits(child->dar)); - SET_LL_32(&lli[i].dar.msb, upper_32_bits(child->dar)); - #endif /* CONFIG_64BIT */ + SET_LL_64(&lli[i].dar.reg, child->dar); + i++; } @@ -349,12 
+341,7 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk) /* Channel control */ SET_LL_32(&llp->control, control); /* Linked list */ - #ifdef CONFIG_64BIT - SET_LL_64(&llp->llp.reg, chunk->ll_region.paddr); - #else /* CONFIG_64BIT */ - SET_LL_32(&llp->llp.lsb, lower_32_bits(chunk->ll_region.paddr)); - SET_LL_32(&llp->llp.msb, upper_32_bits(chunk->ll_region.paddr)); - #endif /* CONFIG_64BIT */ + SET_LL_64(&llp->llp.reg, chunk->ll_region.paddr); } void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) @@ -417,18 +404,8 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) SET_CH_32(dw, chan->dir, chan->id, ch_control1, (DW_EDMA_V0_CCS | DW_EDMA_V0_LLE)); /* Linked list */ - if ((chan->dw->chip->flags & DW_EDMA_CHIP_32BIT_DBI) || - !IS_ENABLED(CONFIG_64BIT)) { - SET_CH_32(dw, chan->dir, chan->id, llp.lsb, - lower_32_bits(chunk->ll_region.paddr)); - SET_CH_32(dw, chan->dir, chan->id, llp.msb, - upper_32_bits(chunk->ll_region.paddr)); - } else { - #ifdef CONFIG_64BIT - SET_CH_64(dw, chan->dir, chan->id, llp.reg, - chunk->ll_region.paddr); - #endif - } + SET_CH_64(dw, chan->dir, chan->id, llp.reg, + chunk->ll_region.paddr); } /* Doorbell */ SET_RW_32(dw, chan->dir, doorbell, From patchwork Tue May 3 22:50:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626023 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=r0X1zvhx; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbD1vp0z9sG2 for ; Wed, 4 May 2022 08:52:24 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244116AbiECWzy (ORCPT ); Tue, 3 May 2022 18:55:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41422 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243947AbiECWzg (ORCPT ); Tue, 3 May 2022 18:55:36 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id F30C043481; Tue, 3 May 2022 15:51:34 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 4303BC09; Wed, 4 May 2022 01:52:08 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 4303BC09 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618328; bh=GBRws/TBOn11+O7ZKQYwDUOw7BY/xdxVcHS/6e+PG2M=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=r0X1zvhxLhqcOa/mxf/b2QeRAICH64ICg1wFRH976iBEFAbdOVPFCCSGBssmCQult 14/xw/xbpJmBcX0iZZLuQNQTTISwcV7CTP3GWXAYBPbEAjidOj/J+whd+UfGRW3zRp zpJAYuYyCTqeIMQgDcY4LitFjj+ogpd6xoZoBf8o= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:34 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , 
Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 21/26] dmaengine: dw-edma: Drop DT-region allocation Date: Wed, 4 May 2022 01:50:59 +0300 Message-ID: <20220503225104.12108-22-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org There is no point in allocating an additional memory for the data target regions passed then to the client drivers. Just use the already available structures defined in the dw_edma_chip instance. Note these regions are unused in normal circumstances since they are specific to the case of eDMA being embedded into the DW PCIe End-point and having it's CSRs accessible over a End-point' BAR. This case is only known to be implemented as a part of the Synopsys PCIe EndPoint IP prototype kit. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-core.c | 21 ++++----------------- 1 file changed, 4 insertions(+), 17 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 7ba3b60c960c..98a94a66fb82 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -744,7 +744,6 @@ static void dw_edma_free_chan_resources(struct dma_chan *dchan) static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc) { struct dw_edma_chip *chip = dw->chip; - struct dw_edma_region *dt_region; struct device *dev = chip->dev; struct dw_edma_chan *chan; struct dw_edma_irq *irq; @@ -760,12 +759,6 @@ static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc) for (i = 0; i < ch_cnt; i++) { chan = &dw->chan[i]; - dt_region = devm_kzalloc(dev, sizeof(*dt_region), GFP_KERNEL); - if (!dt_region) - return -ENOMEM; - - chan->vc.chan.private = dt_region; - chan->dw = dw; if (i < dw->wr_ch_cnt) { @@ -813,17 +806,11 @@ static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc) chan->msi.data); chan->vc.desc_free = vchan_free_desc; - vchan_init(&chan->vc, dma); + chan->vc.chan.private = chan->dir == EDMA_DIR_WRITE ? 
+ &dw->chip->dt_region_wr[chan->id] : + &dw->chip->dt_region_rd[chan->id]; - if (chan->dir == EDMA_DIR_WRITE) { - dt_region->paddr = chip->dt_region_wr[chan->id].paddr; - dt_region->vaddr = chip->dt_region_wr[chan->id].vaddr; - dt_region->sz = chip->dt_region_wr[chan->id].sz; - } else { - dt_region->paddr = chip->dt_region_rd[chan->id].paddr; - dt_region->vaddr = chip->dt_region_rd[chan->id].vaddr; - dt_region->sz = chip->dt_region_rd[chan->id].sz; - } + vchan_init(&chan->vc, dma); dw_edma_v0_core_device_config(chan); } From patchwork Tue May 3 22:51:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626028 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=nwG3+uTT; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbM6l8Qz9sG2 for ; Wed, 4 May 2022 08:52:31 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244251AbiECW4B (ORCPT ); Tue, 3 May 2022 18:56:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41492 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244149AbiECWzi (ORCPT ); Tue, 3 May 2022 18:55:38 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E9A2A42EF1; Tue, 3 May 2022 15:51:35 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 8363FC0A; Wed, 4 May 2022 01:52:09 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 8363FC0A DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618329; bh=A74AOGlvbHPrr444scvLK1UIyhkfbqt0k85xNXXjYKs=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=nwG3+uTTHLwCVbZ5+G8IR/zavjsV7TkhCrShAK2VMmLcYPGKJvUWuqxqrHhOzQfWj ev9x9fTv60PFdvzLksh8I3nZE+yRz1pbLs3IHbeaZqsriaeMEg3gE51qlMt1TEQasC d0fx72aQYjW+D5tntsIDWEFY2XZWeZEB4uL0Vjgc= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:35 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 22/26] dmaengine: dw-edma: Replace chip ID number with device name Date: Wed, 4 May 2022 01:51:00 +0300 Message-ID: <20220503225104.12108-23-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, 
T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Using some abstract number as the DW eDMA chip identifier isn't really practical. First of all there can be more than one DW eDMA controller on the platform some of them can be detected as the PCIe end-points, some of them can be embedded into the DW PCIe Root Port/End-point controllers. Seeing some abstract number in for instance IRQ handlers list doesn't give a notion regarding their reference to the particular DMA controller. Secondly current DW eDMA chip id implementation doesn't provide the multi-eDMA platforms support for same reason of possibly having eDMA detected on different system buses. At the same time re-implementing something ida-based won't give much benefits especially seeing the DW eDMA chip ID is only used in the IRQ request procedure. So to speak in order to preserve the code simplicity and get to have the multi-eDMA platforms support let's just use the parental device name to create the DW eDMA controller name. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- Changelog v2: - Slightly extend the eDMA name array. (@Manivannan) --- drivers/dma/dw-edma/dw-edma-core.c | 3 ++- drivers/dma/dw-edma/dw-edma-core.h | 2 +- drivers/dma/dw-edma/dw-edma-pcie.c | 1 - include/linux/dma/edma.h | 1 - 4 files changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 98a94a66fb82..6a8282eaebaf 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -979,7 +979,8 @@ int dw_edma_probe(struct dw_edma_chip *chip) if (!dw->chan) return -ENOMEM; - snprintf(dw->name, sizeof(dw->name), "dw-edma-core:%d", chip->id); + snprintf(dw->name, sizeof(dw->name), "dw-edma-core:%s", + dev_name(chip->dev)); /* Disable eDMA, only to establish the ideal initial conditions */ dw_edma_v0_core_off(dw); diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h index e3ad3e372b55..0ab2b6dba880 100644 --- a/drivers/dma/dw-edma/dw-edma-core.h +++ b/drivers/dma/dw-edma/dw-edma-core.h @@ -96,7 +96,7 @@ struct dw_edma_irq { }; struct dw_edma { - char name[20]; + char name[32]; struct dma_device dma; diff --git a/drivers/dma/dw-edma/dw-edma-pcie.c b/drivers/dma/dw-edma/dw-edma-pcie.c index f530bacfd716..3f9dadc73854 100644 --- a/drivers/dma/dw-edma/dw-edma-pcie.c +++ b/drivers/dma/dw-edma/dw-edma-pcie.c @@ -222,7 +222,6 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev, /* Data structure initialization */ chip->dev = dev; - chip->id = pdev->devfn; chip->mf = vsec_data.mf; chip->nr_irqs = nr_irqs; diff --git a/include/linux/dma/edma.h b/include/linux/dma/edma.h index 391db06b74b7..346aabf231f1 100644 --- a/include/linux/dma/edma.h +++ b/include/linux/dma/edma.h @@ -76,7 +76,6 @@ enum dw_edma_chip_flags { */ struct dw_edma_chip { struct device *dev; - int id; int nr_irqs; const struct dw_edma_core_ops *ops; u32 flags; From patchwork Tue May 3 22:51:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626025 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru 
header.a=rsa-sha256 header.s=mail header.b=CoeT8kfh; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbD6cnkz9sG2 for ; Wed, 4 May 2022 08:52:24 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244236AbiECWzz (ORCPT ); Tue, 3 May 2022 18:55:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244125AbiECWzh (ORCPT ); Tue, 3 May 2022 18:55:37 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 97E224349C; Tue, 3 May 2022 15:51:37 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 82B3EC0B; Wed, 4 May 2022 01:52:10 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 82B3EC0B DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618330; bh=0ptXtCK1i99miO8i1gK/SouMlHAeFCgwJ3UskvlQy9g=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=CoeT8kfhyMTbwdz1k1coKdWNM57Ubaw6x99Qw2O4k+BkOIfGqD1RLvo2B1l5MunRY Jx3d4X+oFVc/fb2PGWf7sVo0rerAdIUD6tDHN4nG0UXAqVzay9ntFOoErj1kB+7m5I cDuDf28hgxRL+5FZ9laqMBC7a6Hwtn2xRKpKZcdE= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:36 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 23/26] dmaengine: dw-edma: Bypass dma-ranges mapping for the local setup Date: Wed, 4 May 2022 01:51:01 +0300 Message-ID: <20220503225104.12108-24-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org DW eDMA doesn't perform any translation of the traffic generated on the CPU/Application side. It just generates read/write AXI-bus requests with the specified addresses. But if the dma-ranges DT property is specified for a platform device node, Linux will use it to map the CPU memory regions into the DMAable bus ranges. This isn't what we want for the eDMA embedded into the locally accessed DW PCIe Root Port and End-point. In order to work around that, let's set the chan_dma_dev flag for each DW eDMA channel, thus forcing the client drivers to get a custom dma-ranges-less parental device for the mappings.
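As a hypothetical client-side sketch (not part of this patch; the function name and buffer are made up for illustration), a consumer would then perform its streaming mappings against the device returned by dmaengine_get_dma_device() rather than against its own platform device, so the dma-ranges-less channel device set up here is the one actually used:

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Map a buffer for an eDMA channel using the channel's DMA device. */
static dma_addr_t sketch_map_for_edma(struct dma_chan *chan, void *buf,
				      size_t len)
{
	struct device *dma_dev = dmaengine_get_dma_device(chan);
	dma_addr_t addr;

	addr = dma_map_single(dma_dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dma_dev, addr))
		return DMA_MAPPING_ERROR;

	return addr;
}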
Note it will only work for the client drivers using the dmaengine_get_dma_device() method to get the parental DMA device. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- Changelog v2: - Fix the comment a bit to being clearer. (@Manivannan) --- drivers/dma/dw-edma/dw-edma-core.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 6a8282eaebaf..908607785401 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -716,6 +716,21 @@ static int dw_edma_alloc_chan_resources(struct dma_chan *dchan) if (chan->status != EDMA_ST_IDLE) return -EBUSY; + /* Bypass the dma-ranges based memory regions mapping for the eDMA + * controlled from the CPU/Application side since in that case + * the local memory address is left untranslated. + */ + if (chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) { + dchan->dev->chan_dma_dev = true; + + dchan->dev->device.dma_coherent = chan->dw->chip->dev->dma_coherent; + dma_coerce_mask_and_coherent(&dchan->dev->device, + dma_get_mask(chan->dw->chip->dev)); + dchan->dev->device.dma_parms = chan->dw->chip->dev->dma_parms; + } else { + dchan->dev->chan_dma_dev = false; + } + pm_runtime_get(chan->dw->chip->dev); return 0; From patchwork Tue May 3 22:51:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626029 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=Z/dV+e/1; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbP30FSz9sG2 for ; Wed, 4 May 2022 08:52:33 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244067AbiECW4E (ORCPT ); Tue, 3 May 2022 18:56:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41144 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244167AbiECWzo (ORCPT ); Tue, 3 May 2022 18:55:44 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 87DB843AC1; Tue, 3 May 2022 15:51:38 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 7F439C0C; Wed, 4 May 2022 01:52:11 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 7F439C0C DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618331; bh=y5IO3jDpjKtpa6V0+SnC4iOUIBpWgU86hTUe3uaN7w8=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=Z/dV+e/1p5YNY1xa5ja/5kgC3lDjAH0IcdtoAJMAmE7+3THAJJzPth7BpGfhD+rbL 8pUnoDguP2Wh7IRziAsnxAuhI2tfVmIDjGezirsik9Da0yvpix6pj6ZZPzRGPuWnrc NcmcP9iO0VUzjO5/zyPVRdYjdzvMKsYSPxoP/vYc= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:37 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , 
Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= , , , Subject: [PATCH v2 24/26] dmaengine: dw-edma: Skip cleanup procedure if no private data found Date: Wed, 4 May 2022 01:51:02 +0300 Message-ID: <20220503225104.12108-25-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org DW eDMA driver private data is preserved in the passed DW eDMA chip info structure. If either probe procedure failed or for some reason the passed info object doesn't have private data pointer initialized we need to halt the DMA device cleanup procedure in order to prevent possible system crashes. Signed-off-by: Serge Semin Reviewed-by: Manivannan Sadhasivam --- drivers/dma/dw-edma/dw-edma-core.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index 908607785401..561686b51915 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -1035,6 +1035,10 @@ int dw_edma_remove(struct dw_edma_chip *chip) struct dw_edma *dw = chip->dw; int i; + /* Skip removal if no private data found */ + if (!dw) + return -ENODEV; + /* Disable eDMA */ dw_edma_v0_core_off(dw); From patchwork Tue May 3 22:51:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626031 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=WkD51AEW; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbX3bSyz9sG3 for ; Wed, 4 May 2022 08:52:40 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236572AbiECW4K (ORCPT ); Tue, 3 May 2022 18:56:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41796 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244200AbiECWzr (ORCPT ); Tue, 3 May 2022 18:55:47 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id DE82543ACA; Tue, 3 May 2022 15:51:39 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id CC1C9C0D; Wed, 4 May 2022 01:52:12 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru CC1C9C0D 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=baikalelectronics.ru; s=mail; t=1651618332; bh=Rn3qRaOQPZj8s+wlKxCUQb+RVz0erO/Tc5KHt3cNz0s=; h=From:To:CC:Subject:Date:In-Reply-To:References:From; b=WkD51AEWXM8aWlFZsICk+Db4KR/+PC0GVeXEKfBlHgrB56tRHomlcZh4tdXXzeXob LG1wbzcQI/fMFvgOZQ8pUfZtQgipH5mlhqL/Qx9iE+dlbnH74Ek2U+dwMCmN4oFYq+ ppkXndKNe4V4EIZmSTIQ877X63mmPs2YRd3Fti9I= Received: from localhost (192.168.53.207) by mail (192.168.51.25) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 4 May 2022 01:51:38 +0300 From: Serge Semin To: Gustavo Pimentel , Vinod Koul , Jingoo Han , Bjorn Helgaas , Lorenzo Pieralisi , Frank Li , Manivannan Sadhasivam , Rob Herring , =?utf-8?q?Krzysztof_Wilczy=C5=84ski?= CC: Serge Semin , Serge Semin , Alexey Malahov , Pavel Parkhomenko , , , Subject: [PATCH v2 25/26] PCI: dwc: Add generic iATU/eDMA CSRs space detection method Date: Wed, 4 May 2022 01:51:03 +0300 Message-ID: <20220503225104.12108-26-Sergey.Semin@baikalelectronics.ru> In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru> MIME-Version: 1.0 X-ClientProxiedBy: MAIL.baikal.int (192.168.51.25) To mail (192.168.51.25) X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The iATU and eDMA are accessible over the same AXI/DBI interface and have the same access mode (viewport-based or unrolled). Due to that it will be more suitable to perform the iATU and eDMA CSRs space detection in the same coherent function. Since the iATU CSRs space and access mode detection procedure is already available in the driver let's move it into the dedicated method. The eDMA CSRs space detection will be added there in the next commit. This change is a preparation before adding the eDMA support to the DW PCIe controller driver. Note the new dw_pcie_map_detect() method fails if the iATU MMIO region IO-remapping fails. It should have been done in the first place since the failed remapping of the detected reg-space is certainly the erroneous situation. Signed-off-by: Serge Semin --- Changelog v2: - This is a new patch added in v2. 
(@Manivannan) --- .../pci/controller/dwc/pcie-designware-ep.c | 4 + .../pci/controller/dwc/pcie-designware-host.c | 4 + drivers/pci/controller/dwc/pcie-designware.c | 76 +++++++++---------- drivers/pci/controller/dwc/pcie-designware.h | 3 +- 4 files changed, 48 insertions(+), 39 deletions(-) diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c index 9b0540cfa9e8..6cfcfa34e587 100644 --- a/drivers/pci/controller/dwc/pcie-designware-ep.c +++ b/drivers/pci/controller/dwc/pcie-designware-ep.c @@ -717,6 +717,10 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) dw_pcie_version_detect(pci); + ret = dw_pcie_map_detect(pci); + if (ret) + return ret; + dw_pcie_iatu_detect(pci); ep->ib_window_map = devm_kcalloc(dev, diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c index 9cb406f5c185..3cd5b096a427 100644 --- a/drivers/pci/controller/dwc/pcie-designware-host.c +++ b/drivers/pci/controller/dwc/pcie-designware-host.c @@ -409,6 +409,10 @@ int dw_pcie_host_init(struct pcie_port *pp) dw_pcie_version_detect(pci); + ret = dw_pcie_map_detect(pci); + if (ret) + goto err_free_msi; + dw_pcie_iatu_detect(pci); ret = dw_pcie_setup_rc(pp); diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c index 33718ed6c511..68bd3fc66fd7 100644 --- a/drivers/pci/controller/dwc/pcie-designware.c +++ b/drivers/pci/controller/dwc/pcie-designware.c @@ -213,7 +213,7 @@ static inline void __iomem *dw_pcie_select_atu(struct dw_pcie *pci, u32 dir, { void __iomem *base = pci->atu_base; - if (pci->iatu_unroll_enabled) + if (pci->iatu_dma_unrolled) base += PCIE_ATU_UNROLL_BASE(dir, index); else dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, dir | index); @@ -579,24 +579,56 @@ static void dw_pcie_link_set_max_speed(struct dw_pcie *pci, u32 link_gen) } -static bool dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci) +int dw_pcie_map_detect(struct dw_pcie *pci) { + struct platform_device *pdev = to_platform_device(pci->dev); + struct resource *res; u32 val; val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT); if (val == 0xffffffff) - return true; + pci->iatu_dma_unrolled = true; + else + pci->iatu_dma_unrolled = false; + + if (!pci->iatu_dma_unrolled) { + pci->atu_base = pci->dbi_base + PCIE_ATU_VIEWPORT_BASE; + pci->atu_size = PCIE_ATU_VIEWPORT_SIZE; + + dev_info(pci->dev, "iATU/DMA unroll: disabled\n"); + + return 0; + } - return false; + if (!pci->atu_base) { + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "atu"); + if (res) { + pci->atu_base = devm_ioremap_resource(pci->dev, res); + if (IS_ERR(pci->atu_base)) + return PTR_ERR(pci->atu_base); + + pci->atu_size = resource_size(res); + } else { + pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET; + } + } + + /* Pick a minimal default, enough for 8 in and 8 out windows */ + if (!pci->atu_size) + pci->atu_size = SZ_4K; + + dev_info(pci->dev, "iATU/DMA unroll: enabled\n"); + + return 0; } -static void dw_pcie_iatu_detect_regions(struct dw_pcie *pci) +void dw_pcie_iatu_detect(struct dw_pcie *pci) { int max_region, ob, ib; u32 val, min, dir; u64 max; - if (pci->iatu_unroll_enabled) { + if (pci->iatu_dma_unrolled) { max_region = min((int)pci->atu_size / 512, 256); } else { dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, 0xFF); @@ -640,38 +672,6 @@ static void dw_pcie_iatu_detect_regions(struct dw_pcie *pci) pci->num_ib_windows = ib; pci->region_align = 1 << fls(min); pci->region_limit = (max << 32) | (SZ_4G - 1); -} - -void 
dw_pcie_iatu_detect(struct dw_pcie *pci) -{ - struct device *dev = pci->dev; - struct platform_device *pdev = to_platform_device(dev); - - pci->iatu_unroll_enabled = dw_pcie_iatu_unroll_enabled(pci); - if (pci->iatu_unroll_enabled) { - if (!pci->atu_base) { - struct resource *res = - platform_get_resource_byname(pdev, IORESOURCE_MEM, "atu"); - if (res) { - pci->atu_size = resource_size(res); - pci->atu_base = devm_ioremap_resource(dev, res); - } - if (!pci->atu_base || IS_ERR(pci->atu_base)) - pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET; - } - - if (!pci->atu_size) - /* Pick a minimal default, enough for 8 in and 8 out windows */ - pci->atu_size = SZ_4K; - } else { - pci->atu_base = pci->dbi_base + PCIE_ATU_VIEWPORT_BASE; - pci->atu_size = PCIE_ATU_VIEWPORT_SIZE; - } - - dw_pcie_iatu_detect_regions(pci); - - dev_info(pci->dev, "iATU unroll: %s\n", pci->iatu_unroll_enabled ? - "enabled" : "disabled"); dev_info(pci->dev, "iATU regions: %u ob, %u ib, align %uK, limit %lluG\n", pci->num_ob_windows, pci->num_ib_windows, diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h index 765d99d5bfaa..e10647b96c68 100644 --- a/drivers/pci/controller/dwc/pcie-designware.h +++ b/drivers/pci/controller/dwc/pcie-designware.h @@ -310,7 +310,7 @@ struct dw_pcie { int num_lanes; int link_gen; u8 n_fts[2]; - bool iatu_unroll_enabled: 1; + bool iatu_dma_unrolled: 1; bool io_cfg_atu_shared: 1; }; @@ -343,6 +343,7 @@ int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index, int type, u64 cpu_addr, u8 bar); void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index); void dw_pcie_setup(struct dw_pcie *pci); +int dw_pcie_map_detect(struct dw_pcie *pci); void dw_pcie_iatu_detect(struct dw_pcie *pci); static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val) From patchwork Tue May 3 22:51:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Serge Semin X-Patchwork-Id: 1626030 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=baikalelectronics.ru header.i=@baikalelectronics.ru header.a=rsa-sha256 header.s=mail header.b=UJZYtuuw; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KtFbX05XDz9sG2 for ; Wed, 4 May 2022 08:52:40 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244090AbiECW4J (ORCPT ); Tue, 3 May 2022 18:56:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39986 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244191AbiECWzr (ORCPT ); Tue, 3 May 2022 18:55:47 -0400 Received: from mail.baikalelectronics.ru (mail.baikalelectronics.com [87.245.175.226]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id CE32943ADA; Tue, 3 May 2022 15:51:40 -0700 (PDT) Received: from mail.baikalelectronics.ru (unknown [192.168.51.25]) by mail.baikalelectronics.ru (Postfix) with ESMTP id 1D1FEC0E; Wed, 4 May 2022 01:52:14 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 mail.baikalelectronics.ru 1D1FEC0E DKIM-Signature: 
From: Serge Semin
To: Gustavo Pimentel, Vinod Koul, Jingoo Han, Bjorn Helgaas, Lorenzo Pieralisi,
 Frank Li, Manivannan Sadhasivam, Rob Herring, Krzysztof Wilczyński
CC: Serge Semin, Serge Semin, Alexey Malahov, Pavel Parkhomenko
Subject: [PATCH v2 26/26] PCI: dwc: Add DW eDMA engine support
Date: Wed, 4 May 2022 01:51:04 +0300
Message-ID: <20220503225104.12108-27-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru>
References: <20220503225104.12108-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: linux-pci@vger.kernel.org

Since the DW eDMA driver now supports eDMA controllers embedded into the
locally accessible DW PCIe Root Ports and Endpoints, we can use the updated
interface to register the DW eDMA as a DMA engine device if it is available.
In order to do that the DW PCIe core driver needs to perform a few
preparations first. First of all it needs to find the eDMA controller CSRs
base address, which is reachable either via the Port Logic registers or via
the iATU unrolled space. After that it can auto-detect the eDMA controller
availability and the number of its read/write channels. If no eDMA
controller is found, the procedure silently stops with no error returned.
Secondly, the platform is supposed to provide either a combined IRQ or
per-channel IRQ signals. If no valid IRQ set is found, the procedure also
stops with no error returned, so as to stay backward compatible with
platforms where the DW PCIe controller has an embedded eDMA but no IRQs
defined for it. Finally, before actually probing the eDMA device, we need
to allocate buffers for the LLP items. After that the DW eDMA can be
registered. If the registration succeeds, an info message with the number
of detected read/write eDMA channels is printed to the system log, in the
same way as it is done for the iATU settings.

Signed-off-by: Serge Semin

---

Changelog v2:
- Don't fail the eDMA detection procedure if the DW eDMA driver couldn't
  probe the device. That happens if the driver is disabled. (@Manivannan)
- Add the "dma" registers resource mapping procedure. (@Manivannan)
- Move the eDMA CSRs space detection into the dw_pcie_map_detect() method.
- Remove the eDMA device on the dw_pcie_ep_init() internal errors. (@Manivannan)
- Remove the eDMA device in the dw_pcie_ep_exit() method.
- Move the dw_pcie_edma_detect() method execution to the tail of the
  dw_pcie_ep_init() function.
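For reference, the channel auto-detection described above boils down to
decoding the DMA_CTRL register of the eDMA CSR space: the write-channel
count sits in bits [3:0] and the read-channel count in bits [19:16], per
the PCIE_DMA_CTRL/PCIE_DMA_NUM_{WR,RD}_CHAN definitions added by this
patch. The snippet below is a minimal standalone sketch of that decoding
only; the register value 0x00040004 is a made-up example (4 write and 4
read channels), not a readout from real hardware.

  /*
   * Standalone sketch of the PCIE_DMA_CTRL decoding performed by
   * dw_pcie_edma_detect_channels() below. The sample value is hypothetical.
   */
  #include <stdio.h>
  #include <stdint.h>

  #define DMA_NUM_WR_CHAN(val)	((val) & 0xf)		/* GENMASK(3, 0) */
  #define DMA_NUM_RD_CHAN(val)	(((val) >> 16) & 0xf)	/* GENMASK(19, 16) */

  int main(void)
  {
  	uint32_t val = 0x00040004;	/* hypothetical DMA_CTRL readout */

  	/* All-zeros or all-ones means no eDMA controller is present */
  	if (!val || val == 0xffffffff) {
  		printf("no eDMA controller found\n");
  		return 0;
  	}

  	printf("eDMA channels: %u wr, %u rd\n",
  	       DMA_NUM_WR_CHAN(val), DMA_NUM_RD_CHAN(val));

  	return 0;
  }

The in-kernel code additionally rejects counts above
EDMA_MAX_WR_CH/EDMA_MAX_RD_CH and only then proceeds to the IRQ
verification and LLP buffer allocation described above.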
---
 .../pci/controller/dwc/pcie-designware-ep.c   |  12 +-
 .../pci/controller/dwc/pcie-designware-host.c |  13 +-
 drivers/pci/controller/dwc/pcie-designware.c  | 187 ++++++++++++++++++
 drivers/pci/controller/dwc/pcie-designware.h  |  20 ++
 4 files changed, 229 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
index 6cfcfa34e587..a3f3fa15ebe5 100644
--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
+++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
@@ -610,8 +610,11 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,

 void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
 {
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct pci_epc *epc = ep->epc;

+	dw_pcie_edma_remove(pci);
+
 	pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
 			      epc->mem->window.page_size);

@@ -791,6 +794,10 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 		goto err_exit_epc_mem;
 	}

+	ret = dw_pcie_edma_detect(pci);
+	if (ret)
+		goto err_free_epc_mem;
+
 	if (ep->ops->get_features) {
 		epc_features = ep->ops->get_features(ep);
 		if (epc_features->core_init_notifier)
@@ -799,10 +806,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)

 	ret = dw_pcie_ep_init_complete(ep);
 	if (ret)
-		goto err_free_epc_mem;
+		goto err_remove_edma;

 	return 0;

+err_remove_edma:
+	dw_pcie_edma_remove(pci);
+
 err_free_epc_mem:
 	pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
 			      epc->mem->window.page_size);
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 3cd5b096a427..0ffef8526d54 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -415,14 +415,18 @@ int dw_pcie_host_init(struct pcie_port *pp)

 	dw_pcie_iatu_detect(pci);

-	ret = dw_pcie_setup_rc(pp);
+	ret = dw_pcie_edma_detect(pci);
 	if (ret)
 		goto err_free_msi;

+	ret = dw_pcie_setup_rc(pp);
+	if (ret)
+		goto err_remove_edma;
+
 	if (!dw_pcie_link_up(pci) && pci->ops && pci->ops->start_link) {
 		ret = pci->ops->start_link(pci);
 		if (ret)
-			goto err_free_msi;
+			goto err_remove_edma;
 	}

 	/* Ignore errors, the link may come up later */
@@ -440,6 +444,9 @@ int dw_pcie_host_init(struct pcie_port *pp)
 	if (pci->ops && pci->ops->stop_link)
 		pci->ops->stop_link(pci);

+err_remove_edma:
+	dw_pcie_edma_remove(pci);
+
 err_free_msi:
 	if (pp->has_msi_ctrl)
 		dw_pcie_free_msi(pp);
@@ -462,6 +469,8 @@ void dw_pcie_host_deinit(struct pcie_port *pp)
 	if (pci->ops && pci->ops->stop_link)
 		pci->ops->stop_link(pci);

+	dw_pcie_edma_remove(pci);
+
 	if (pp->has_msi_ctrl)
 		dw_pcie_free_msi(pp);
diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 68bd3fc66fd7..aae8b03757f5 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -595,6 +596,8 @@ int dw_pcie_map_detect(struct dw_pcie *pci)
 		pci->atu_base = pci->dbi_base + PCIE_ATU_VIEWPORT_BASE;
 		pci->atu_size = PCIE_ATU_VIEWPORT_SIZE;

+		pci->edma.reg_base = pci->dbi_base + PCIE_DMA_VIEWPORT_BASE;
+
 		dev_info(pci->dev, "iATU/DMA unroll: disabled\n");

 		return 0;
@@ -617,6 +620,17 @@ int dw_pcie_map_detect(struct dw_pcie *pci)
 	if (!pci->atu_size)
 		pci->atu_size = SZ_4K;

+	if (!pci->edma.reg_base) {
+		res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dma");
+		if (res) {
+			pci->edma.reg_base = devm_ioremap_resource(pci->dev, res);
+			if (IS_ERR(pci->edma.reg_base))
+				return PTR_ERR(pci->edma.reg_base);
+		} else if (pci->atu_size >= 2 * DEFAULT_DBI_DMA_OFFSET) {
+			pci->edma.reg_base = pci->atu_base + DEFAULT_DBI_DMA_OFFSET;
+		}
+	}
+
 	dev_info(pci->dev, "iATU/DMA unroll: enabled\n");

 	return 0;
@@ -678,6 +692,179 @@ void dw_pcie_iatu_detect(struct dw_pcie *pci)
 		 pci->region_align / SZ_1K, (pci->region_limit + 1) / SZ_1G);
 }

+static u32 dw_pcie_readl_dma(struct dw_pcie *pci, u32 reg)
+{
+	u32 val = 0;
+	int ret;
+
+	if (pci->ops && pci->ops->read_dbi)
+		return pci->ops->read_dbi(pci, pci->edma.reg_base, reg, 4);
+
+	ret = dw_pcie_read(pci->edma.reg_base + reg, 4, &val);
+	if (ret)
+		dev_err(pci->dev, "Read DMA address failed\n");
+
+	return val;
+}
+
+static int dw_pcie_edma_irq_vector(struct device *dev, unsigned int nr)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	char name[6];
+	int ret;
+
+	if (nr >= EDMA_MAX_WR_CH + EDMA_MAX_RD_CH)
+		return -EINVAL;
+
+	ret = platform_get_irq_byname_optional(pdev, "dma");
+	if (ret > 0)
+		return ret;
+
+	snprintf(name, sizeof(name), "dma%u", nr);
+
+	return platform_get_irq_byname_optional(pdev, name);
+}
+
+static struct dw_edma_core_ops dw_pcie_edma_ops = {
+	.irq_vector = dw_pcie_edma_irq_vector,
+};
+
+static int dw_pcie_edma_detect_channels(struct dw_pcie *pci)
+{
+	u32 val;
+
+	val = dw_pcie_readl_dma(pci, PCIE_DMA_CTRL);
+	if (!val || val == 0xffffffff)
+		return 0;
+
+	pci->edma.ll_wr_cnt = FIELD_GET(PCIE_DMA_NUM_WR_CHAN, val);
+	pci->edma.ll_rd_cnt = FIELD_GET(PCIE_DMA_NUM_RD_CHAN, val);
+
+	if (pci->edma.ll_wr_cnt > EDMA_MAX_WR_CH ||
+	    pci->edma.ll_rd_cnt > EDMA_MAX_RD_CH)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int dw_pcie_edma_irq_verify(struct dw_pcie *pci)
+{
+	struct platform_device *pdev = to_platform_device(pci->dev);
+	u16 ch_cnt = pci->edma.ll_wr_cnt + pci->edma.ll_rd_cnt;
+	char name[6];
+	int ret;
+
+	if (pci->edma.nr_irqs == 1)
+		return 0;
+	else if (pci->edma.nr_irqs > 1)
+		return pci->edma.nr_irqs != ch_cnt ? -EINVAL : 0;
+
+	ret = platform_get_irq_byname_optional(pdev, "dma");
+	if (ret > 0) {
+		pci->edma.nr_irqs = 1;
+		return 0;
+	}
+
+	for (; pci->edma.nr_irqs < ch_cnt; pci->edma.nr_irqs++) {
+		snprintf(name, sizeof(name), "dma%d", pci->edma.nr_irqs);
+
+		ret = platform_get_irq_byname_optional(pdev, name);
+		if (ret <= 0)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int dw_pcie_edma_ll_alloc(struct dw_pcie *pci)
+{
+	struct dw_edma_region *ll;
+	dma_addr_t paddr;
+	int i;
+
+	for (i = 0; i < pci->edma.ll_wr_cnt; i++) {
+		ll = &pci->edma.ll_region_wr[i];
+		ll->sz = DMA_LLP_MEM_SIZE;
+		ll->vaddr = dmam_alloc_coherent(pci->dev, ll->sz,
+						&paddr, GFP_KERNEL);
+		if (!ll->vaddr)
+			return -ENOMEM;
+
+		ll->paddr = paddr;
+	}
+
+	for (i = 0; i < pci->edma.ll_rd_cnt; i++) {
+		ll = &pci->edma.ll_region_rd[i];
+		ll->sz = DMA_LLP_MEM_SIZE;
+		ll->vaddr = dmam_alloc_coherent(pci->dev, ll->sz,
+						&paddr, GFP_KERNEL);
+		if (!ll->vaddr)
+			return -ENOMEM;
+
+		ll->paddr = paddr;
+	}
+
+	return 0;
+}
+
+int dw_pcie_edma_detect(struct dw_pcie *pci)
+{
+	int ret;
+
+	if (!pci->edma.reg_base)
+		return 0;
+
+	pci->edma.dev = pci->dev;
+	if (!pci->edma.ops)
+		pci->edma.ops = &dw_pcie_edma_ops;
+	pci->edma.flags |= DW_EDMA_CHIP_LOCAL;
+
+	if (pci->iatu_dma_unrolled)
+		pci->edma.mf = EDMA_MF_EDMA_UNROLL;
+	else
+		pci->edma.mf = EDMA_MF_EDMA_LEGACY;
+
+	ret = dw_pcie_edma_detect_channels(pci);
+	if (ret) {
+		dev_err(pci->dev, "Unexpected NoF eDMA channels found\n");
+		return ret;
+	}
+
+	/* Skip any further initialization if no eDMA found */
+	if (!pci->edma.ll_wr_cnt && !pci->edma.ll_rd_cnt)
+		return 0;
+
+	/* Don't fail on the IRQs verification for backward compatibility */
+	ret = dw_pcie_edma_irq_verify(pci);
+	if (ret) {
+		dev_err(pci->dev, "Invalid eDMA IRQs found\n");
+		return 0;
+	}
+
+	ret = dw_pcie_edma_ll_alloc(pci);
+	if (ret) {
+		dev_err(pci->dev, "Couldn't allocate LLP memory\n");
+		return ret;
+	}
+
+	/* Don't fail if the DW eDMA driver can't find the device */
+	ret = dw_edma_probe(&pci->edma);
+	if (ret && ret != -ENODEV) {
+		dev_err(pci->dev, "Couldn't register eDMA device\n");
+		return ret;
+	}
+
+	dev_info(pci->dev, "eDMA channels: %hu wr, %hu rd\n",
+		 pci->edma.ll_wr_cnt, pci->edma.ll_rd_cnt);
+
+	return 0;
+}
+
+void dw_pcie_edma_remove(struct dw_pcie *pci)
+{
+	dw_edma_remove(&pci->edma);
+}
+
 void dw_pcie_setup(struct dw_pcie *pci)
 {
 	u32 val;
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index e10647b96c68..87add1920f0d 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -13,6 +13,7 @@

 #include
 #include
+#include
 #include
 #include
 #include
@@ -145,6 +146,18 @@
 #define PCIE_MSIX_DOORBELL		0x948
 #define PCIE_MSIX_DOORBELL_PF_SHIFT	24

+/*
+ * eDMA CSRs. DW PCIe IP-core v4.70a and older had the eDMA registers
+ * accessible over the Port Logic registers space. Afterwards the unrolled
+ * mapping was introduced so eDMA and iATU could be accessed via a dedicated
+ * registers space.
+ */
+#define PCIE_DMA_VIEWPORT_BASE		0x970
+#define PCIE_DMA_UNROLL_BASE		0x80000
+#define PCIE_DMA_CTRL			0x008
+#define PCIE_DMA_NUM_WR_CHAN		GENMASK(3, 0)
+#define PCIE_DMA_NUM_RD_CHAN		GENMASK(19, 16)
+
 #define PCIE_PL_CHK_REG_CONTROL_STATUS			0xB20
 #define PCIE_PL_CHK_REG_CHK_REG_START			BIT(0)
 #define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS		BIT(1)
@@ -161,6 +174,7 @@
  * this offset, if atu_base not set.
 */
 #define DEFAULT_DBI_ATU_OFFSET	(0x3 << 20)
+#define DEFAULT_DBI_DMA_OFFSET	(0x1 << 19)

 #define MAX_MSI_IRQS			256
 #define MAX_MSI_IRQS_PER_CTRL		32
@@ -172,6 +186,9 @@
 #define MAX_IATU_IN			256
 #define MAX_IATU_OUT			256

+/* Default eDMA LLP memory size */
+#define DMA_LLP_MEM_SIZE		PAGE_SIZE
+
 struct pcie_port;
 struct dw_pcie;
 struct dw_pcie_ep;
@@ -310,6 +327,7 @@ struct dw_pcie {
 	int			num_lanes;
 	int			link_gen;
 	u8			n_fts[2];
+	struct dw_edma_chip	edma;
 	bool			iatu_dma_unrolled: 1;
 	bool			io_cfg_atu_shared: 1;
 };
@@ -345,6 +363,8 @@ void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index);
 void dw_pcie_setup(struct dw_pcie *pci);
 int dw_pcie_map_detect(struct dw_pcie *pci);
 void dw_pcie_iatu_detect(struct dw_pcie *pci);
+int dw_pcie_edma_detect(struct dw_pcie *pci);
+void dw_pcie_edma_remove(struct dw_pcie *pci);

 static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
 {