From patchwork Wed Jan 26 19:50:39 2022
X-Patchwork-Submitter: Lad Prabhakar
X-Patchwork-Id: 1584648
From: Lad Prabhakar
To: Kishon Vijay Abraham I, Bjorn Helgaas, Lorenzo Pieralisi,
    Krzysztof Wilczyński, Arnd Bergmann, Greg Kroah-Hartman, Marek Vasut,
    Yoshihiro Shimoda, Rob Herring, linux-pci@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Prabhakar, Biju Das, Lad Prabhakar
Subject: [RFC PATCH 1/5] PCI: endpoint: Add ops and flag to support internal DMAC
Date: Wed, 26 Jan 2022 19:50:39 +0000
Message-Id: <20220126195043.28376-2-prabhakar.mahadev-lad.rj@bp.renesas.com>
In-Reply-To: <20220126195043.28376-1-prabhakar.mahadev-lad.rj@bp.renesas.com>
References: <20220126195043.28376-1-prabhakar.mahadev-lad.rj@bp.renesas.com>
X-Mailing-List: linux-pci@vger.kernel.org

Add a flag to indicate whether the PCIe EP controller has an internal DMAC,
and add a wrapper function that invokes the dmac_transfer() callback
implemented by the PCIe EP controller driver.

Signed-off-by: Lad Prabhakar
---
 drivers/pci/endpoint/pci-epf-core.c | 32 +++++++++++++++++++++++++++++
 include/linux/pci-epc.h             |  8 ++++++++
 include/linux/pci-epf.h             |  7 +++++++
 3 files changed, 47 insertions(+)

diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
index 9ed556936f48..f70576d0d4b2 100644
--- a/drivers/pci/endpoint/pci-epf-core.c
+++ b/drivers/pci/endpoint/pci-epf-core.c
@@ -239,6 +239,38 @@ void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
 }
 EXPORT_SYMBOL_GPL(pci_epf_remove_vepf);
 
+/**
+ * pci_epf_internal_dmac_xfr() - transfer data between EPC and remote PCIe RC
+ * @epf: the EPF device that performs the data transfer operation
+ * @dma_dst: The destination address of the data transfer. It can be a physical
+ *           address given by pci_epc_mem_alloc_addr or DMA mapping APIs.
+ * @dma_src: The source address of the data transfer. It can be a physical
+ *           address given by pci_epc_mem_alloc_addr or DMA mapping APIs.
+ * @len: The size of the data transfer
+ *
+ * Invoke to transfer data between EPC and remote PCIe RC using internal DMAC.
+ */
+int pci_epf_internal_dmac_xfr(struct pci_epf *epf, dma_addr_t dma_dst,
+                              dma_addr_t dma_src, size_t len,
+                              enum pci_epf_xfr_direction dir)
+{
+        struct pci_epc *epc = epf->epc;
+        int ret;
+
+        if (IS_ERR_OR_NULL(epc) || IS_ERR_OR_NULL(epf))
+                return -EINVAL;
+
+        if (!epc->ops->dmac_transfer)
+                return -EINVAL;
+
+        mutex_lock(&epf->lock);
+        ret = epc->ops->dmac_transfer(epc, epf, dma_dst, dma_src, len, dir);
+        mutex_unlock(&epf->lock);
+
+        return ret;
+}
+EXPORT_SYMBOL_GPL(pci_epf_internal_dmac_xfr);
+
 /**
  * pci_epf_free_space() - free the allocated PCI EPF register space
  * @epf: the EPF device from whom to free the memory
diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h
index a48778e1a4ee..b55dacd09e1e 100644
--- a/include/linux/pci-epc.h
+++ b/include/linux/pci-epc.h
@@ -58,6 +58,7 @@ pci_epc_interface_string(enum pci_epc_interface_type type)
  * @map_msi_irq: ops to map physical address to MSI address and return MSI data
  * @start: ops to start the PCI link
  * @stop: ops to stop the PCI link
+ * @dmac_transfer: ops to transfer data using internal DMAC
  * @get_features: ops to get the features supported by the EPC
  * @owner: the module owner containing the ops
  */
@@ -86,6 +87,9 @@ struct pci_epc_ops {
                            u32 *msi_addr_offset);
         int (*start)(struct pci_epc *epc);
         void (*stop)(struct pci_epc *epc);
+        int (*dmac_transfer)(struct pci_epc *epc, struct pci_epf *epf,
+                             dma_addr_t dma_dst, dma_addr_t dma_src,
+                             size_t len, enum pci_epf_xfr_direction dir);
         const struct pci_epc_features* (*get_features)(struct pci_epc *epc,
                                                        u8 func_no, u8 vfunc_no);
         struct module *owner;
@@ -159,6 +163,8 @@ struct pci_epc {
  *      for initialization
  * @msi_capable: indicate if the endpoint function has MSI capability
  * @msix_capable: indicate if the endpoint function has MSI-X capability
+ * @internal_dmac: indicate if the endpoint function has internal DMAC
+ * @internal_dmac_mask: indicates the DMA mask to be applied for the device
  * @reserved_bar: bitmap to indicate reserved BAR unavailable to function driver
  * @bar_fixed_64bit: bitmap to indicate fixed 64bit BARs
  * @bar_fixed_size: Array specifying the size supported by each BAR
@@ -169,6 +175,8 @@ struct pci_epc_features {
         unsigned int core_init_notifier : 1;
         unsigned int msi_capable : 1;
         unsigned int msix_capable : 1;
+        unsigned int internal_dmac : 1;
+        u64 internal_dmac_mask;
         u8 reserved_bar;
         u8 bar_fixed_64bit;
         u64 bar_fixed_size[PCI_STD_NUM_BARS];
diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h
index 009a07147c61..78d661db085d 100644
--- a/include/linux/pci-epf.h
+++ b/include/linux/pci-epf.h
@@ -32,6 +32,11 @@ enum pci_barno {
         BAR_5,
 };
 
+enum pci_epf_xfr_direction {
+        PCIE_TO_INTERNAL,
+        INTERNAL_TO_PCIE,
+};
+
 /**
  * struct pci_epf_header - represents standard configuration header
  * @vendorid: identifies device manufacturer
@@ -209,6 +214,8 @@ void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
                         enum pci_epc_interface_type type);
 int pci_epf_bind(struct pci_epf *epf);
 void pci_epf_unbind(struct pci_epf *epf);
+int pci_epf_internal_dmac_xfr(struct pci_epf *epf, dma_addr_t dma_dst, dma_addr_t dma_src,
+                              size_t len, enum pci_epf_xfr_direction dir);
 struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
                                           struct config_group *group);
 int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf);
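The wrapper added above is a thin dispatch through the controller's ops table: if the controller driver never filled in dmac_transfer, the EPF core returns -EINVAL instead of attempting a transfer. This is a minimal userspace sketch of that dispatch pattern with simplified `sim_*` types and a hypothetical `sim_fake_transfer` stub (the real code also serializes callers with epf->lock, omitted here):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

enum xfr_direction { PCIE_TO_INTERNAL, INTERNAL_TO_PCIE };

typedef unsigned long long sim_dma_addr_t;

struct sim_epc;

/* Mirrors the dmac_transfer member added to struct pci_epc_ops. */
struct sim_epc_ops {
        int (*dmac_transfer)(struct sim_epc *epc, sim_dma_addr_t dst,
                             sim_dma_addr_t src, size_t len,
                             enum xfr_direction dir);
};

struct sim_epc {
        const struct sim_epc_ops *ops;
};

/* Dispatch logic modeled on pci_epf_internal_dmac_xfr(). */
static int sim_dmac_xfr(struct sim_epc *epc, sim_dma_addr_t dst,
                        sim_dma_addr_t src, size_t len, enum xfr_direction dir)
{
        if (!epc || !epc->ops->dmac_transfer)
                return -EINVAL; /* controller has no internal DMAC op */
        return epc->ops->dmac_transfer(epc, dst, src, len, dir);
}

/* Hypothetical controller callback: succeeds for any non-empty transfer. */
static int sim_fake_transfer(struct sim_epc *epc, sim_dma_addr_t dst,
                             sim_dma_addr_t src, size_t len,
                             enum xfr_direction dir)
{
        (void)epc; (void)dst; (void)src; (void)dir;
        return len ? 0 : -EIO;
}

static const struct sim_epc_ops ops_with_dmac = { .dmac_transfer = sim_fake_transfer };
static const struct sim_epc_ops ops_without_dmac = { .dmac_transfer = NULL };
static struct sim_epc epc_with = { .ops = &ops_with_dmac };
static struct sim_epc epc_without = { .ops = &ops_without_dmac };
```

The NULL-op check is what lets function drivers probe for internal DMAC support at runtime without knowing which controller they are bound to.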
From patchwork Wed Jan 26 19:50:40 2022
X-Patchwork-Submitter: Lad Prabhakar
X-Patchwork-Id: 1584649
From: Lad Prabhakar
To: Kishon Vijay Abraham I, Bjorn Helgaas, Lorenzo Pieralisi,
    Krzysztof Wilczyński, Arnd Bergmann, Greg Kroah-Hartman, Marek Vasut,
    Yoshihiro Shimoda, Rob Herring, linux-pci@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Prabhakar, Biju Das, Lad Prabhakar
Subject: [RFC PATCH 2/5] PCI: endpoint: Add support for data transfer using internal DMAC
Date: Wed, 26 Jan 2022 19:50:40 +0000
Message-Id: <20220126195043.28376-3-prabhakar.mahadev-lad.rj@bp.renesas.com>
In-Reply-To: <20220126195043.28376-1-prabhakar.mahadev-lad.rj@bp.renesas.com>

For PCIe EPs with an internal DMAC, use it to transfer data when the -d
option is passed to pcitest.

Signed-off-by: Lad Prabhakar
---
 drivers/pci/endpoint/functions/pci-epf-test.c | 184 ++++++++++++++----
 1 file changed, 141 insertions(+), 43 deletions(-)

diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
index 90d84d3bc868..f792b1a15c44 100644
--- a/drivers/pci/endpoint/functions/pci-epf-test.c
+++ b/drivers/pci/endpoint/functions/pci-epf-test.c
@@ -55,6 +55,7 @@ struct pci_epf_test {
         struct dma_chan *dma_chan;
         struct completion transfer_complete;
         bool dma_supported;
+        bool internal_dmac;
         const struct pci_epc_features *epc_features;
 };
 
@@ -148,6 +149,40 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
         return 0;
 }
 
+/**
+ * pci_epf_test_internal_dmac_data_transfer() - Function that uses internal DMAC
+ *      to transfer data between PCIe EP and remote PCIe RC
+ * @epf_test: the EPF test device that performs the data transfer operation
+ * @dma_dst: The destination address of the data transfer. It can be a physical
+ *           address given by pci_epc_mem_alloc_addr or DMA mapping APIs.
+ * @dma_src: The source address of the data transfer. It can be a physical
+ *           address given by pci_epc_mem_alloc_addr or DMA mapping APIs.
+ * @len: The size of the data transfer
+ * @dir: Direction of data transfer
+ *
+ * Function that uses the internal DMAC supported by the controller to transfer
+ * data between PCIe EP and remote PCIe RC.
+ *
+ * The function returns '0' on success and a negative value on failure.
+ */
+static int
+pci_epf_test_internal_dmac_data_transfer(struct pci_epf_test *epf_test,
+                                         dma_addr_t dma_dst, dma_addr_t dma_src,
+                                         size_t len, enum pci_epf_xfr_direction dir)
+{
+        struct pci_epf *epf = epf_test->epf;
+        int ret;
+
+        if (!epf_test->internal_dmac)
+                return -EINVAL;
+
+        ret = pci_epf_internal_dmac_xfr(epf, dma_dst, dma_src, len, dir);
+        if (ret)
+                return -EIO;
+
+        return 0;
+}
+
 /**
  * pci_epf_test_init_dma_chan() - Function to initialize EPF test DMA channel
  * @epf_test: the EPF test device that performs data transfer operation
@@ -238,6 +273,14 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
         struct pci_epc *epc = epf->epc;
         enum pci_barno test_reg_bar = epf_test->test_reg_bar;
         struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
+        bool internal_dmac = epf_test->internal_dmac;
+
+        use_dma = !!(reg->flags & FLAG_USE_DMA);
+
+        if (use_dma && internal_dmac) {
+                dev_err(dev, "Operation not supported\n");
+                return -EINVAL;
+        }
 
         src_addr = pci_epc_mem_alloc_addr(epc, &src_phys_addr, reg->size);
         if (!src_addr) {
@@ -272,7 +315,6 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
         }
 
         ktime_get_ts64(&start);
-        use_dma = !!(reg->flags & FLAG_USE_DMA);
         if (use_dma) {
                 if (!epf_test->dma_supported) {
                         dev_err(dev, "Cannot transfer data using DMA\n");
@@ -322,31 +364,49 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
         struct device *dma_dev = epf->epc->dev.parent;
         enum pci_barno test_reg_bar = epf_test->test_reg_bar;
         struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
+        bool internal_dmac = epf_test->internal_dmac;
 
-        src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
-        if (!src_addr) {
-                dev_err(dev, "Failed to allocate address\n");
-                reg->status = STATUS_SRC_ADDR_INVALID;
-                ret = -ENOMEM;
-                goto err;
-        }
+        use_dma = !!(reg->flags & FLAG_USE_DMA);
 
-        ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr,
-                               reg->src_addr, reg->size);
-        if (ret) {
-                dev_err(dev, "Failed to map address\n");
-                reg->status = STATUS_SRC_ADDR_INVALID;
-                goto err_addr;
+        if (use_dma && internal_dmac) {
+                phys_addr = reg->src_addr;
+                src_addr = NULL;
+        } else {
+                src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
+                if (!src_addr) {
+                        dev_err(dev, "Failed to allocate address\n");
+                        reg->status = STATUS_SRC_ADDR_INVALID;
+                        ret = -ENOMEM;
+                        goto err;
+                }
+
+                ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr,
+                                       reg->src_addr, reg->size);
+                if (ret) {
+                        dev_err(dev, "Failed to map address\n");
+                        reg->status = STATUS_SRC_ADDR_INVALID;
+                        goto err_addr;
+                }
         }
 
-        buf = kzalloc(reg->size, GFP_KERNEL);
+        if (use_dma && internal_dmac)
+                buf = dma_alloc_coherent(dev, reg->size, &dst_phys_addr, GFP_KERNEL | GFP_DMA);
+        else
+                buf = kzalloc(reg->size, GFP_KERNEL);
         if (!buf) {
                 ret = -ENOMEM;
                 goto err_map_addr;
         }
 
-        use_dma = !!(reg->flags & FLAG_USE_DMA);
-        if (use_dma) {
+        if (use_dma && internal_dmac) {
+                ktime_get_ts64(&start);
+                ret = pci_epf_test_internal_dmac_data_transfer(epf_test, dst_phys_addr,
+                                                               phys_addr, reg->size,
+                                                               PCIE_TO_INTERNAL);
+                if (ret)
+                        dev_err(dev, "Data transfer failed\n");
+                ktime_get_ts64(&end);
+        } else if (use_dma) {
                 if (!epf_test->dma_supported) {
                         dev_err(dev, "Cannot transfer data using DMA\n");
                         ret = -EINVAL;
@@ -383,13 +443,18 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
                 ret = -EIO;
 
 err_dma_map:
-        kfree(buf);
+        if (use_dma && internal_dmac)
+                dma_free_coherent(dev, reg->size, buf, dst_phys_addr);
+        else
+                kfree(buf);
 
 err_map_addr:
-        pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr);
+        if (!(use_dma && internal_dmac))
+                pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr);
 
 err_addr:
-        pci_epc_mem_free_addr(epc, phys_addr, src_addr, reg->size);
+        if (!(use_dma && internal_dmac))
+                pci_epc_mem_free_addr(epc, phys_addr, src_addr, reg->size);
 
 err:
         return ret;
@@ -410,24 +475,36 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
         struct device *dma_dev = epf->epc->dev.parent;
         enum pci_barno test_reg_bar = epf_test->test_reg_bar;
         struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
+        bool internal_dmac = epf_test->internal_dmac;
 
-        dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
-        if (!dst_addr) {
-                dev_err(dev, "Failed to allocate address\n");
-                reg->status = STATUS_DST_ADDR_INVALID;
-                ret = -ENOMEM;
-                goto err;
-        }
+        use_dma = !!(reg->flags & FLAG_USE_DMA);
 
-        ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr,
-                               reg->dst_addr, reg->size);
-        if (ret) {
-                dev_err(dev, "Failed to map address\n");
-                reg->status = STATUS_DST_ADDR_INVALID;
-                goto err_addr;
+        if (use_dma && internal_dmac) {
+                phys_addr = reg->dst_addr;
+                dst_addr = NULL;
+        } else {
+                dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
+                if (!dst_addr) {
+                        dev_err(dev, "Failed to allocate address\n");
+                        reg->status = STATUS_DST_ADDR_INVALID;
+                        ret = -ENOMEM;
+                        goto err;
+                }
+
+                ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr,
+                                       reg->dst_addr, reg->size);
+                if (ret) {
+                        dev_err(dev, "Failed to map address\n");
+                        reg->status = STATUS_DST_ADDR_INVALID;
+                        goto err_addr;
+                }
         }
 
-        buf = kzalloc(reg->size, GFP_KERNEL);
+        if (use_dma && internal_dmac)
+                buf = dma_alloc_coherent(dev, reg->size,
+                                         &src_phys_addr, GFP_KERNEL | GFP_DMA);
+        else
+                buf = kzalloc(reg->size, GFP_KERNEL);
         if (!buf) {
                 ret = -ENOMEM;
                 goto err_map_addr;
@@ -436,8 +513,15 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
         get_random_bytes(buf, reg->size);
         reg->checksum = crc32_le(~0, buf, reg->size);
 
-        use_dma = !!(reg->flags & FLAG_USE_DMA);
-        if (use_dma) {
+        if (use_dma && internal_dmac) {
+                ktime_get_ts64(&start);
+                ret = pci_epf_test_internal_dmac_data_transfer(epf_test, phys_addr,
+                                                               src_phys_addr, reg->size,
+                                                               INTERNAL_TO_PCIE);
+                if (ret)
+                        dev_err(dev, "Data transfer failed\n");
+                ktime_get_ts64(&end);
+        } else if (use_dma) {
                 if (!epf_test->dma_supported) {
                         dev_err(dev, "Cannot transfer data using DMA\n");
                         ret = -EINVAL;
@@ -476,13 +560,18 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
         usleep_range(1000, 2000);
 
 err_dma_map:
-        kfree(buf);
+        if (use_dma && internal_dmac)
+                dma_free_coherent(dev, reg->size, buf, src_phys_addr);
+        else
+                kfree(buf);
 
 err_map_addr:
-        pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr);
+        if (!(use_dma && internal_dmac))
+                pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr);
 
 err_addr:
-        pci_epc_mem_free_addr(epc, phys_addr, dst_addr, reg->size);
+        if (!(use_dma && internal_dmac))
+                pci_epc_mem_free_addr(epc, phys_addr, dst_addr, reg->size);
 
 err:
         return ret;
@@ -838,6 +927,7 @@ static int pci_epf_test_bind(struct pci_epf *epf)
         struct pci_epc *epc = epf->epc;
         bool linkup_notifier = false;
         bool core_init_notifier = false;
+        struct device *dev = &epf->dev;
 
         if (WARN_ON_ONCE(!epc))
                 return -EINVAL;
@@ -857,6 +947,12 @@ static int pci_epf_test_bind(struct pci_epf *epf)
         epf_test->test_reg_bar = test_reg_bar;
         epf_test->epc_features = epc_features;
+        epf_test->internal_dmac = epc_features->internal_dmac;
+        if (epf_test->internal_dmac && epc_features->internal_dmac_mask) {
+                ret = dma_set_coherent_mask(dev, epc_features->internal_dmac_mask);
+                if (ret)
+                        return ret;
+        }
 
         ret = pci_epf_test_alloc_space(epf);
         if (ret)
@@ -868,11 +964,13 @@ static int pci_epf_test_bind(struct pci_epf *epf)
                 return ret;
         }
 
-        epf_test->dma_supported = true;
+        epf_test->dma_supported = false;
 
-        ret = pci_epf_test_init_dma_chan(epf_test);
-        if (ret)
-                epf_test->dma_supported = false;
+        if (!epf_test->internal_dmac) {
+                ret = pci_epf_test_init_dma_chan(epf_test);
+                if (!ret)
+                        epf_test->dma_supported = true;
+        }
 
         if (linkup_notifier) {
                 epf->nb.notifier_call = pci_epf_test_notifier;
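In both the read and write paths, the patch picks one of two buffer strategies up front: with the internal DMAC the remote bus address from the test registers is used directly and a coherent local buffer is allocated, while the existing paths keep kzalloc() plus an outbound window mapping. A small standalone model of that selection (the `sim_*` names are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

enum sim_buf_strategy {
        SIM_KZALLOC_AND_MAP,     /* kzalloc() + pci_epc_map_addr() path */
        SIM_DMA_COHERENT_DIRECT, /* dma_alloc_coherent(), no window mapping */
};

/* Mirrors the "use_dma && internal_dmac" tests in pci_epf_test_read/write(). */
static enum sim_buf_strategy sim_pick_strategy(bool use_dma, bool internal_dmac)
{
        if (use_dma && internal_dmac)
                return SIM_DMA_COHERENT_DIRECT;
        return SIM_KZALLOC_AND_MAP;
}
```

Keeping the decision in one predicate is what lets the error-unwind labels (err_dma_map, err_map_addr, err_addr) free the right resource for whichever path was taken.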
From patchwork Wed Jan 26 19:50:41 2022
X-Patchwork-Submitter: Lad Prabhakar
X-Patchwork-Id: 1584650
From: Lad Prabhakar
To: Kishon Vijay Abraham I, Bjorn Helgaas, Lorenzo Pieralisi,
    Krzysztof Wilczyński, Arnd Bergmann, Greg Kroah-Hartman, Marek Vasut,
    Yoshihiro Shimoda, Rob Herring, linux-pci@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Prabhakar, Biju Das, Lad Prabhakar
Subject: [RFC PATCH 3/5] misc: pci_endpoint_test: Add driver data for Renesas RZ/G2{EHMN}
Date: Wed, 26 Jan 2022 19:50:41 +0000
Message-Id: <20220126195043.28376-4-prabhakar.mahadev-lad.rj@bp.renesas.com>
In-Reply-To: <20220126195043.28376-1-prabhakar.mahadev-lad.rj@bp.renesas.com>

Add a "dmac_data_alignment" member (indicating the internal DMAC's alignment
requirement for data transfers) to struct pci_endpoint_test_data, and add
driver_data for the Renesas RZ/G2{EHMN} devices.

Signed-off-by: Lad Prabhakar
---
 drivers/misc/pci_endpoint_test.c | 40 ++++++++++++++++++++++++++------
 1 file changed, 33 insertions(+), 7 deletions(-)

diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
index 8f786a225dcf..0a00d45830e9 100644
--- a/drivers/misc/pci_endpoint_test.c
+++ b/drivers/misc/pci_endpoint_test.c
@@ -116,6 +116,7 @@ struct pci_endpoint_test {
         struct miscdevice miscdev;
         enum pci_barno test_reg_bar;
         size_t alignment;
+        size_t dmac_data_alignment;
         const char *name;
 };
 
@@ -123,6 +124,7 @@ struct pci_endpoint_test_data {
         enum pci_barno test_reg_bar;
         size_t alignment;
         int irq_type;
+        size_t dmac_data_alignment;
 };
 
@@ -368,8 +370,11 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
                 goto err;
 
         use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
-        if (use_dma)
+        if (use_dma) {
                 flags |= FLAG_USE_DMA;
+                if (test->dmac_data_alignment)
+                        size = ALIGN(size, test->dmac_data_alignment);
+        }
 
         if (irq_type < IRQ_TYPE_LEGACY || irq_type > IRQ_TYPE_MSIX) {
                 dev_err(dev, "Invalid IRQ type option\n");
@@ -502,8 +507,11 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test,
                 goto err;
 
         use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
-        if (use_dma)
+        if (use_dma) {
                 flags |= FLAG_USE_DMA;
+                if (test->dmac_data_alignment)
+                        size = ALIGN(size, test->dmac_data_alignment);
+        }
 
         if (irq_type < IRQ_TYPE_LEGACY || irq_type > IRQ_TYPE_MSIX) {
                 dev_err(dev, "Invalid IRQ type option\n");
@@ -600,8 +608,11 @@ static bool pci_endpoint_test_read(struct pci_endpoint_test *test,
                 goto err;
 
         use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
-        if (use_dma)
+        if (use_dma) {
                 flags |= FLAG_USE_DMA;
+                if (test->dmac_data_alignment)
+                        size = ALIGN(size, test->dmac_data_alignment);
+        }
 
         if (irq_type < IRQ_TYPE_LEGACY || irq_type > IRQ_TYPE_MSIX) {
                 dev_err(dev, "Invalid IRQ type option\n");
@@ -787,6 +798,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
                 test->test_reg_bar = test_reg_bar;
                 test->alignment = data->alignment;
                 irq_type = data->irq_type;
+                test->dmac_data_alignment = data->dmac_data_alignment;
         }
 
         init_completion(&test->irq_raised);
@@ -948,6 +960,12 @@ static const struct pci_endpoint_test_data j721e_data = {
         .irq_type = IRQ_TYPE_MSI,
 };
 
+static const struct pci_endpoint_test_data renesas_rzg2x_data = {
+        .test_reg_bar = BAR_0,
+        .irq_type = IRQ_TYPE_MSI,
+        .dmac_data_alignment = 8,
+};
+
 static const struct pci_device_id pci_endpoint_test_tbl[] = {
         { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x),
           .driver_data = (kernel_ulong_t)&default_data,
@@ -965,10 +983,18 @@ static const struct pci_device_id pci_endpoint_test_tbl[] = {
         { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654),
           .driver_data = (kernel_ulong_t)&am654_data
         },
-        { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774A1),},
-        { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774B1),},
-        { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774C0),},
-        { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774E1),},
+        { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774A1),
+          .driver_data = (kernel_ulong_t)&renesas_rzg2x_data,
+        },
+        { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774B1),
+          .driver_data = (kernel_ulong_t)&renesas_rzg2x_data,
+        },
+        { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774C0),
+          .driver_data = (kernel_ulong_t)&renesas_rzg2x_data,
+        },
+        { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774E1),
+          .driver_data = (kernel_ulong_t)&renesas_rzg2x_data,
+        },
         { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
           .driver_data = (kernel_ulong_t)&j721e_data,
         },
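The host-side driver rounds the transfer size up to the 8-byte DMAC granularity with the kernel's ALIGN() macro. For reference, this sketch reproduces the same arithmetic in plain C:

```c
#include <assert.h>
#include <stddef.h>

/* Same arithmetic as the kernel's ALIGN(x, a); 'a' must be a power of two,
 * which holds for the 8-byte dmac_data_alignment set for RZ/G2 above. */
static size_t sim_align_up(size_t x, size_t a)
{
        return (x + a - 1) & ~(a - 1);
}
```

So a 13-byte request becomes a 16-byte transfer, while sizes already on an 8-byte boundary pass through unchanged; this matches the RCAR_PCIE_DMAC_BYTE_COUNT_MULTIPLE of 8 defined in the controller patch.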
From patchwork Wed Jan 26 19:50:42 2022
X-Patchwork-Submitter: Lad Prabhakar
X-Patchwork-Id: 1584651
From: Lad Prabhakar
To: Kishon Vijay Abraham I, Bjorn Helgaas, Lorenzo Pieralisi,
    Krzysztof Wilczyński, Arnd Bergmann, Greg Kroah-Hartman, Marek Vasut,
    Yoshihiro Shimoda, Rob Herring, linux-pci@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Prabhakar, Biju Das, Lad Prabhakar
Subject: [RFC PATCH 4/5] misc: pci_endpoint_test: Add support to pass flags for buffer allocation
Date: Wed, 26 Jan 2022 19:50:42 +0000
Message-Id: <20220126195043.28376-5-prabhakar.mahadev-lad.rj@bp.renesas.com>
In-Reply-To: <20220126195043.28376-1-prabhakar.mahadev-lad.rj@bp.renesas.com>

By default, the GFP_KERNEL flag is used for buffer allocation in the read,
write and copy tests, and the buffers are later mapped using the streaming
DMA API. On Renesas RZ/G2{EHMN} platforms, using the default flag causes the
tests to fail; allocating the buffers from the DMA zone (using the GFP_DMA
flag) makes the test cases pass. To handle such cases, add a flags member to
struct pci_endpoint_test_data so that platforms can pass the required flags
as needed.

Signed-off-by: Lad Prabhakar
---
Hi All,

This patch is based on the conversation where switching to the streaming DMA
API causes the read/write/copy tests to fail on Renesas RZ/G2 platforms when
buffers are allocated using GFP_KERNEL.

[0] https://www.spinics.net/lists/linux-pci/msg92385.html

Cheers,
Prabhakar
---
 drivers/misc/pci_endpoint_test.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
index 0a00d45830e9..974546992c5e 100644
--- a/drivers/misc/pci_endpoint_test.c
+++ b/drivers/misc/pci_endpoint_test.c
@@ -117,6 +117,7 @@ struct pci_endpoint_test {
         enum pci_barno test_reg_bar;
         size_t alignment;
         size_t dmac_data_alignment;
+        gfp_t flags;
         const char *name;
 };
 
@@ -125,6 +126,7 @@ struct pci_endpoint_test_data {
         size_t alignment;
         int irq_type;
         size_t dmac_data_alignment;
+        gfp_t flags;
 };
 
@@ -381,7 +383,7 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
                 goto err;
         }
 
-        orig_src_addr = kzalloc(size + alignment, GFP_KERNEL);
+        orig_src_addr = kzalloc(size + alignment, test->flags);
         if (!orig_src_addr) {
                 dev_err(dev, "Failed to allocate source buffer\n");
                 ret = false;
@@ -414,7 +416,7 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
         src_crc32 = crc32_le(~0, src_addr, size);
 
-        orig_dst_addr = kzalloc(size + alignment, GFP_KERNEL);
+        orig_dst_addr = kzalloc(size + alignment, test->flags);
         if (!orig_dst_addr) {
                 dev_err(dev, "Failed to allocate destination address\n");
                 ret = false;
@@ -518,7 +520,7 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test,
                 goto err;
         }
 
-        orig_addr = kzalloc(size + alignment, GFP_KERNEL);
+        orig_addr = kzalloc(size + alignment, test->flags);
         if (!orig_addr) {
                 dev_err(dev, "Failed to allocate address\n");
                 ret = false;
@@ -619,7 +621,7 @@ static bool pci_endpoint_test_read(struct pci_endpoint_test *test,
                 goto err;
         }
 
-        orig_addr = kzalloc(size + alignment, GFP_KERNEL);
+        orig_addr = kzalloc(size + alignment, test->flags);
         if (!orig_addr) {
                 dev_err(dev, "Failed to allocate destination address\n");
                 ret = false;
@@ -788,6 +790,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
         test->alignment = 0;
         test->pdev = pdev;
         test->irq_type = IRQ_TYPE_UNDEFINED;
+        test->flags = GFP_KERNEL;
 
         if (no_msi)
                 irq_type = IRQ_TYPE_LEGACY;
@@ -799,6 +802,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
                 test->alignment = data->alignment;
                 irq_type = data->irq_type;
                 test->dmac_data_alignment = data->dmac_data_alignment;
+                test->flags = data->flags;
         }
 
         init_completion(&test->irq_raised);
@@ -947,23 +951,27 @@ static const struct pci_endpoint_test_data default_data = {
         .test_reg_bar = BAR_0,
         .alignment = SZ_4K,
         .irq_type = IRQ_TYPE_MSI,
+        .flags = GFP_KERNEL,
 };
 
 static const struct pci_endpoint_test_data am654_data = {
         .test_reg_bar = BAR_2,
         .alignment = SZ_64K,
         .irq_type = IRQ_TYPE_MSI,
+        .flags = GFP_KERNEL,
 };
 
 static const struct pci_endpoint_test_data j721e_data = {
         .alignment = 256,
         .irq_type = IRQ_TYPE_MSI,
+        .flags = GFP_KERNEL,
 };
 
 static const struct pci_endpoint_test_data renesas_rzg2x_data = {
         .test_reg_bar = BAR_0,
         .irq_type = IRQ_TYPE_MSI,
         .dmac_data_alignment = 8,
+        .flags = GFP_KERNEL | GFP_DMA,
 };
 
 static const struct pci_device_id pci_endpoint_test_tbl[] = {
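The probe path keeps GFP_KERNEL as the default and only overrides it when the matched PCI ID carries driver_data, so devices without a pci_endpoint_test_data entry behave exactly as before. A model of that fallback (the `SIM_GFP_*` values are stand-ins, not the kernel's real gfp_t bit encoding):

```c
#include <assert.h>
#include <stddef.h>

#define SIM_GFP_KERNEL 0x1u
#define SIM_GFP_DMA    0x2u

struct sim_test_data {
        unsigned int flags;
};

/* Mirrors pci_endpoint_test_probe(): set the default first, then let any
 * per-device driver_data override it. */
static unsigned int sim_effective_flags(const struct sim_test_data *data)
{
        unsigned int flags = SIM_GFP_KERNEL;

        if (data)
                flags = data->flags;
        return flags;
}

/* Per-device table entry modeled on renesas_rzg2x_data in this patch. */
static const struct sim_test_data sim_rzg2x = {
        .flags = SIM_GFP_KERNEL | SIM_GFP_DMA,
};
```

Note that with this scheme every data table that omits .flags would get a zero gfp mask, which is presumably why the patch adds an explicit .flags = GFP_KERNEL to each existing entry.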
From patchwork Wed Jan 26 19:50:43 2022
X-Patchwork-Submitter: Lad Prabhakar
X-Patchwork-Id: 1584652
From: Lad Prabhakar
To: Kishon Vijay Abraham I, Bjorn Helgaas, Lorenzo Pieralisi,
    Krzysztof Wilczyński, Arnd Bergmann, Greg Kroah-Hartman, Marek Vasut,
    Yoshihiro Shimoda, Rob Herring, linux-pci@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Prabhakar, Biju Das, Lad Prabhakar
Subject: [RFC PATCH 5/5] PCI: rcar-ep: Add support for DMAC
Date: Wed, 26 Jan 2022 19:50:43 +0000
Message-Id: <20220126195043.28376-6-prabhakar.mahadev-lad.rj@bp.renesas.com>
In-Reply-To: <20220126195043.28376-1-prabhakar.mahadev-lad.rj@bp.renesas.com>

The R-Car PCIe controller has an internal DMAC that supports data transfer
between the internal bus and PCI Express in either direction. Fill in the
required flags and ops for the PCIe EP to support DMAC transfers.

Signed-off-by: Lad Prabhakar
---
 drivers/pci/controller/pcie-rcar-ep.c | 227 ++++++++++++++++++++++++++
 drivers/pci/controller/pcie-rcar.h    |  23 +++
 2 files changed, 250 insertions(+)

diff --git a/drivers/pci/controller/pcie-rcar-ep.c b/drivers/pci/controller/pcie-rcar-ep.c
index f9682df1da61..c49b25069328 100644
--- a/drivers/pci/controller/pcie-rcar-ep.c
+++ b/drivers/pci/controller/pcie-rcar-ep.c
@@ -18,6 +18,21 @@
 
 #define RCAR_EPC_MAX_FUNCTIONS 1
 
+#define RCAR_PCIE_MAX_DMAC_BYTE_COUNT           0x7FFFFFFU
+#define RCAR_PCIE_DMAC_BYTE_COUNT_MULTIPLE      8
+#define RCAR_PCIE_DMAC_TIMEOUT                  (msecs_to_jiffies(3 * 1000))
+#define RCAR_PCIE_DMAC_DEFAULT_CHANNEL          0
+
+enum rcar_pcie_ep_dmac_xfr_status {
+        RCAR_PCIE_DMA_XFR_SUCCESS,
+        RCAR_PCIE_DMA_XFR_ERROR,
+};
+
+struct rcar_pcie_ep_dmac_info {
+        enum rcar_pcie_ep_dmac_xfr_status status;
+        size_t bytes;
+};
+
 /* Structure representing the PCIe interface */
 struct rcar_pcie_endpoint {
         struct rcar_pcie pcie;
@@ -28,8 +43,114 @@ struct rcar_pcie_endpoint {
         unsigned long *ib_window_map;
         u32 num_ib_windows;
         u32 num_ob_windows;
+        struct completion irq_raised;
+        struct mutex dma_operation;
+        spinlock_t lock;
+        struct rcar_pcie_ep_dmac_info xfr;
 };
 
+static inline bool rcar_pcie_ep_is_dmac_active(struct rcar_pcie_endpoint *ep)
+{
+        if (rcar_pci_read_reg(&ep->pcie, PCIEDMAOR) & PCIEDMAOR_DMAACT)
+                return true;
+
+        return false;
+}
+
+static void
+rcar_pcie_ep_setup_dmac_request(struct rcar_pcie_endpoint *ep,
+                                dma_addr_t dma_dst, dma_addr_t dma_src,
+                                size_t len, enum pci_epf_xfr_direction dir, u8 ch)
+{
+        struct rcar_pcie *pcie = &ep->pcie;
+        u32 val;
+
+        ep->xfr.status = RCAR_PCIE_DMA_XFR_ERROR;
+        ep->xfr.bytes = RCAR_PCIE_MAX_DMAC_BYTE_COUNT;
+
+        /* Swap values if xfr is from PCIe to internal */
+        if (dir == PCIE_TO_INTERNAL)
+                swap(dma_dst, dma_src);
+
+        /* Configure the PCI Express lower */
+        rcar_pci_write_reg(pcie, lower_32_bits(dma_dst), PCIEDMPALR(ch));
+
+        /* Configure the PCI Express upper */
+        rcar_pci_write_reg(pcie, upper_32_bits(dma_dst), PCIEDMPAUR(ch));
+
+        /* Configure the internal bus address */
+        rcar_pci_write_reg(pcie, lower_32_bits(dma_src), PCIEDMIAR(ch));
+
+        /* Configure the byte count values */
+        rcar_pci_write_reg(pcie, len, PCIEDMBCNTR(ch));
+
+        /* Enable interrupts */
+        val = rcar_pci_read_reg(pcie, PCIEDMCHSR(ch));
+
+        /* Set enable flags */
+        val |= PCIEDMCHSR_IE;
+        val |= PCIEDMCHSR_IBEE;
+        val |= PCIEDMCHSR_PEEE;
+        val |= PCIEDMCHSR_CHTCE;
+
+        /* Clear error flags */
+        val &= ~PCIEDMCHSR_TE;
+        val &= ~PCIEDMCHSR_PEE;
+        val &= ~PCIEDMCHSR_IBE;
+        val &= ~PCIEDMCHSR_CHTC;
+
+        rcar_pci_write_reg(pcie, val, PCIEDMCHSR(ch));
+
+        wmb(); /* flush the settings */
+}
+
+static void rcar_pcie_ep_execute_dmac_request(struct rcar_pcie_endpoint *ep,
+                                              enum pci_epf_xfr_direction dir, u8 ch)
+{
+        struct rcar_pcie *pcie = &ep->pcie;
+        u32 val;
+
+        /* Enable DMA */
+        val = rcar_pci_read_reg(pcie, PCIEDMAOR);
+        val |= PCIEDMAOR_DMAE;
+        rcar_pci_write_reg(pcie, val, PCIEDMAOR);
+
+        /* Configure the DMA direction */
+        val = rcar_pci_read_reg(pcie, PCIEDMCHCR(ch));
+        if (dir == INTERNAL_TO_PCIE)
+                val |= PCIEDMCHCR_DIR;
+        else
+                val &= ~PCIEDMCHCR_DIR;
+
+        val |= PCIEDMCHCR_CHE;
+        rcar_pci_write_reg(pcie, val, PCIEDMCHCR(ch));
+
+        wmb(); /* flush the settings */
+}
+
+static enum rcar_pcie_ep_dmac_xfr_status
+rcar_pcie_ep_get_dmac_status(struct rcar_pcie_endpoint *ep,
+                             size_t *count, u8 ch)
+{
+        *count = ep->xfr.bytes;
+        return ep->xfr.status;
+}
+
+static void rcar_pcie_ep_stop_dmac_request(struct rcar_pcie_endpoint *ep, u8 ch)
+{
+        struct rcar_pcie *pcie = &ep->pcie;
+        u32
val; + + val = rcar_pci_read_reg(pcie, PCIEDMCHCR(ch)); + val &= ~PCIEDMCHCR_CHE; + rcar_pci_write_reg(pcie, val, PCIEDMCHCR(ch)); + + /* Disable interrupt */ + val = rcar_pci_read_reg(pcie, PCIEDMAOR); + val &= ~PCIEDMAOR_DMAE; + rcar_pci_write_reg(pcie, val, PCIEDMAOR); +} + static void rcar_pcie_ep_hw_init(struct rcar_pcie *pcie) { u32 val; @@ -419,6 +540,44 @@ static int rcar_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, u8 vfn, } } +static int rcar_pcie_ep_data_transfer(struct pci_epc *epc, struct pci_epf *epf, + dma_addr_t dma_dst, dma_addr_t dma_src, + size_t len, enum pci_epf_xfr_direction dir) +{ + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); + u8 ch = RCAR_PCIE_DMAC_DEFAULT_CHANNEL; + enum rcar_pcie_ep_dmac_xfr_status stat; + int ret = -EINVAL; + long wait_status; + size_t count; + + if (len > RCAR_PCIE_MAX_DMAC_BYTE_COUNT || + (len % RCAR_PCIE_DMAC_BYTE_COUNT_MULTIPLE) != 0) + return -EINVAL; + + if (mutex_is_locked(&ep->dma_operation) || rcar_pcie_ep_is_dmac_active(ep)) + return -EBUSY; + + mutex_lock(&ep->dma_operation); + + rcar_pcie_ep_setup_dmac_request(ep, dma_dst, dma_src, len, dir, ch); + + rcar_pcie_ep_execute_dmac_request(ep, dir, ch); + + wait_status = wait_for_completion_interruptible_timeout(&ep->irq_raised, + RCAR_PCIE_DMAC_TIMEOUT); + if (wait_status <= 0) { + rcar_pcie_ep_stop_dmac_request(ep, ch); + } else { + stat = rcar_pcie_ep_get_dmac_status(ep, &count, ch); + if (stat == RCAR_PCIE_DMA_XFR_SUCCESS && !count) + ret = 0; + } + + mutex_unlock(&ep->dma_operation); + return ret; +} + static int rcar_pcie_ep_start(struct pci_epc *epc) { struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); @@ -429,6 +588,55 @@ static int rcar_pcie_ep_start(struct pci_epc *epc) return 0; } +static irqreturn_t rcar_pcie_ep_dmac_irq_handler(int irq, void *arg) +{ + u8 ch = RCAR_PCIE_DMAC_DEFAULT_CHANNEL; + struct rcar_pcie_endpoint *ep = arg; + struct rcar_pcie *pcie = &ep->pcie; + unsigned long flags; + u32 chsr_val; + u32 chcr_val; + u32 bytes; + + 
spin_lock_irqsave(&ep->lock, flags); + + chsr_val = rcar_pci_read_reg(pcie, PCIEDMCHSR(ch)); + + chcr_val = rcar_pci_read_reg(pcie, PCIEDMCHCR(ch)); + + if (mutex_is_locked(&ep->dma_operation)) { + if ((chsr_val & PCIEDMCHSR_PEE) || + (chsr_val & PCIEDMCHSR_IBE) || + (chsr_val & PCIEDMCHSR_CHTC)) + ep->xfr.status = RCAR_PCIE_DMA_XFR_ERROR; + else if (chsr_val & PCIEDMCHSR_TE) + ep->xfr.status = RCAR_PCIE_DMA_XFR_SUCCESS; + + /* get byte count */ + bytes = rcar_pci_read_reg(pcie, PCIEDMBCNTR(ch)); + ep->xfr.bytes = bytes; + + if ((chsr_val & PCIEDMCHSR_PEE) || (chsr_val & PCIEDMCHSR_IBE) || + (chsr_val & PCIEDMCHSR_TE) || (chsr_val & PCIEDMCHSR_CHTC)) { + complete(&ep->irq_raised); + } + } else { + spin_unlock_irqrestore(&ep->lock, flags); + return IRQ_NONE; + } + + if (chcr_val & PCIEDMCHCR_CHE) + chcr_val &= ~PCIEDMCHCR_CHE; + rcar_pci_write_reg(pcie, chcr_val, PCIEDMCHCR(ch)); + + /* Clear DMA interrupt source */ + rcar_pci_write_reg(pcie, chsr_val, PCIEDMCHSR(ch)); + + spin_unlock_irqrestore(&ep->lock, flags); + + return IRQ_HANDLED; +} + static void rcar_pcie_ep_stop(struct pci_epc *epc) { struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); @@ -446,6 +654,8 @@ static const struct pci_epc_features rcar_pcie_epc_features = { .bar_fixed_size[0] = 128, .bar_fixed_size[2] = 256, .bar_fixed_size[4] = 256, + .internal_dmac = true, + .internal_dmac_mask = DMA_BIT_MASK(32), }; static const struct pci_epc_features* @@ -466,6 +676,7 @@ static const struct pci_epc_ops rcar_pcie_epc_ops = { .start = rcar_pcie_ep_start, .stop = rcar_pcie_ep_stop, .get_features = rcar_pcie_ep_get_features, + .dmac_transfer = rcar_pcie_ep_data_transfer, }; static const struct of_device_id rcar_pcie_ep_of_match[] = { @@ -480,6 +691,7 @@ static int rcar_pcie_ep_probe(struct platform_device *pdev) struct rcar_pcie_endpoint *ep; struct rcar_pcie *pcie; struct pci_epc *epc; + int dmac_irq; int err; ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL); @@ -502,6 +714,14 @@ static int 
rcar_pcie_ep_probe(struct platform_device *pdev) goto err_pm_put; } + dmac_irq = platform_get_irq(pdev, 1); + if (dmac_irq < 0) + goto err_pm_put; + + init_completion(&ep->irq_raised); + mutex_init(&ep->dma_operation); + spin_lock_init(&ep->lock); + ep->num_ib_windows = MAX_NR_INBOUND_MAPS; ep->ib_window_map = devm_kcalloc(dev, BITS_TO_LONGS(ep->num_ib_windows), @@ -533,6 +753,13 @@ static int rcar_pcie_ep_probe(struct platform_device *pdev) rcar_pcie_ep_hw_init(pcie); + err = devm_request_irq(dev, dmac_irq, rcar_pcie_ep_dmac_irq_handler, + 0, "pcie-rcar-ep-dmac", pcie); + if (err) { + dev_err(dev, "failed to request dmac irq\n"); + goto err_pm_put; + } + err = pci_epc_multi_mem_init(epc, ep->ob_window, ep->num_ob_windows); if (err < 0) { dev_err(dev, "failed to initialize the epc memory space\n"); diff --git a/drivers/pci/controller/pcie-rcar.h b/drivers/pci/controller/pcie-rcar.h index 9bb125db85c6..874f8a384e6d 100644 --- a/drivers/pci/controller/pcie-rcar.h +++ b/drivers/pci/controller/pcie-rcar.h @@ -54,6 +54,29 @@ #define PAR_ENABLE BIT(31) #define IO_SPACE BIT(8) +/* PCIe DMAC control reg & mask */ +#define PCIEDMAOR 0x04000 +#define PCIEDMAOR_DMAE BIT(31) +#define PCIEDMAOR_DMAACT BIT(16) +#define PCIEDMPALR(x) (0x04100 + ((x) * 0x40)) +#define PCIEDMPAUR(x) (0x04104 + ((x) * 0x40)) +#define PCIEDMIAR(x) (0x04108 + ((x) * 0x40)) +#define PCIEDMBCNTR(x) (0x04110 + ((x) * 0x40)) +#define PCIEDMCCAR(x) (0x04120 + ((x) * 0x40)) +#define PCIEDMCHCR(x) (0x04128 + ((x) * 0x40)) +#define PCIEDMCHCR_CHE BIT(31) +#define PCIEDMCHCR_DIR BIT(30) +#define PCIEDMCHSR(x) (0x0412c + ((x) * 0x40)) +#define PCIEDMCHSR_CHTCE BIT(28) +#define PCIEDMCHSR_PEEE BIT(27) +#define PCIEDMCHSR_IBEE BIT(25) +#define PCIEDMCHSR_CHTC BIT(12) +#define PCIEDMCHSR_PEE BIT(11) +#define PCIEDMCHSR_IBE BIT(9) +#define PCIEDMCHSR_IE BIT(3) +#define PCIEDMCHSR_TE BIT(0) +#define PCIEDMCHC2R(x) (0x04130 + ((x) * 0x40)) + /* Configuration */ #define PCICONF(x) (0x010000 + ((x) * 0x4)) #define 
INTDIS BIT(10)