From patchwork Fri Sep 8 12:23:03 2017
X-Patchwork-Submitter: Ravi Shankar Jonnalagadda
X-Patchwork-Id: 811563
From: Ravi Shankar Jonnalagadda
Subject: [PATCH v2 1/5] PCI:xilinx-nwl: Enable Root DMA
Date: Fri, 8 Sep 2017 17:53:03 +0530
Message-ID: <1504873388-29195-2-git-send-email-vjonnal@xilinx.com>
X-Mailer: git-send-email 2.1.1
In-Reply-To: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com>
References: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com>
X-Mailing-List: linux-pci@vger.kernel.org
Enable Root DMA interrupts and add Root DMA translations to the bridge
for register access.

Signed-off-by: Ravi Shankar Jonnalagadda
Signed-off-by: RaviKiran Gummaluri
---
 drivers/pci/host/pcie-xilinx-nwl.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/pci/host/pcie-xilinx-nwl.c b/drivers/pci/host/pcie-xilinx-nwl.c
index eec641a..5766582 100644
--- a/drivers/pci/host/pcie-xilinx-nwl.c
+++ b/drivers/pci/host/pcie-xilinx-nwl.c
@@ -39,6 +39,11 @@
 #define E_ECAM_CONTROL 0x00000228
 #define E_ECAM_BASE_LO 0x00000230
 #define E_ECAM_BASE_HI 0x00000234
+#define E_DREG_CTRL 0x00000288
+#define E_DREG_BASE_LO 0x00000290
+
+#define DREG_DMA_EN BIT(0)
+#define DREG_DMA_BASE_LO 0xFD0F0000
 
 /* Ingress - address translations */
 #define I_MSII_CAPABILITIES 0x00000300
@@ -57,6 +62,10 @@
 #define MSGF_MSI_STATUS_HI 0x00000444
 #define MSGF_MSI_MASK_LO 0x00000448
 #define MSGF_MSI_MASK_HI 0x0000044C
+/* Root DMA Interrupt register */
+#define MSGF_DMA_MASK 0x00000464
+
+#define MSGF_INTR_EN BIT(0)
 
 /* Msg filter mask bits */
 #define CFG_ENABLE_PM_MSG_FWD BIT(1)
@@ -766,6 +775,12 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
 	nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, MSGF_LEG_STATUS) &
 			  MSGF_LEG_SR_MASKALL, MSGF_LEG_STATUS);
 
+	/* Enabling DREG translations */
+	nwl_bridge_writel(pcie, DREG_DMA_EN, E_DREG_CTRL);
+	nwl_bridge_writel(pcie, DREG_DMA_BASE_LO, E_DREG_BASE_LO);
+	/* Enabling Root DMA interrupts */
+	nwl_bridge_writel(pcie, MSGF_INTR_EN, MSGF_DMA_MASK);
+
 	/* Enable all legacy interrupts */
 	nwl_bridge_writel(pcie, MSGF_LEG_SR_MASKALL, MSGF_LEG_MASK);
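
[Editor's note: for readers following the hunk above, nwl_bridge_readl()/nwl_bridge_writel() are the driver's thin bridge-register accessors. A minimal sketch of how they are defined, assuming struct nwl_pcie carries the ioremapped bridge register base in breg_base as the mainline pcie-xilinx-nwl.c does; the struct stub is trimmed to that one field:]

#include <linux/io.h>

struct nwl_pcie {
	void __iomem *breg_base;	/* ioremapped bridge register base */
	/* other driver state elided */
};

/* Read a bridge register at byte offset @off */
static inline u32 nwl_bridge_readl(struct nwl_pcie *pcie, u32 off)
{
	return readl(pcie->breg_base + off);
}

/* Write @val to the bridge register at byte offset @off */
static inline void nwl_bridge_writel(struct nwl_pcie *pcie, u32 val, u32 off)
{
	writel(val, pcie->breg_base + off);
}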
From patchwork Fri Sep 8 12:23:04 2017
X-Patchwork-Submitter: Ravi Shankar Jonnalagadda
X-Patchwork-Id: 811553
From: Ravi Shankar Jonnalagadda
Subject: [PATCH v2 2/5] PCI:xilinx-nwl: Correcting Styling checks
Date: Fri, 8 Sep 2017 17:53:04 +0530
Message-ID: <1504873388-29195-3-git-send-email-vjonnal@xilinx.com>
X-Mailer: git-send-email 2.1.1
In-Reply-To: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com>
References: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com>
X-Mailing-List: linux-pci@vger.kernel.org
Correct style issues reported by the checkpatch script.

Signed-off-by: Ravi Shankar Jonnalagadda
Signed-off-by: RaviKiran Gummaluri
---
 drivers/pci/host/pcie-xilinx-nwl.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/pci/host/pcie-xilinx-nwl.c b/drivers/pci/host/pcie-xilinx-nwl.c
index 5766582..3c62e3d 100644
--- a/drivers/pci/host/pcie-xilinx-nwl.c
+++ b/drivers/pci/host/pcie-xilinx-nwl.c
@@ -506,15 +506,15 @@ static int nwl_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
 
 	for (i = 0; i < nr_irqs; i++) {
 		irq_domain_set_info(domain, virq + i, bit + i, &nwl_irq_chip,
-				domain->host_data, handle_simple_irq,
-				NULL, NULL);
+				    domain->host_data, handle_simple_irq,
+				    NULL, NULL);
 	}
 	mutex_unlock(&msi->lock);
 	return 0;
 }
 
 static void nwl_irq_domain_free(struct irq_domain *domain, unsigned int virq,
-					unsigned int nr_irqs)
+				unsigned int nr_irqs)
 {
 	struct irq_data *data = irq_domain_get_irq_data(domain, virq);
 	struct nwl_pcie *pcie = irq_data_get_irq_chip_data(data);
@@ -767,7 +767,6 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
 
 	/* Enable all misc interrupts */
 	nwl_bridge_writel(pcie, MSGF_MISC_SR_MASKALL, MSGF_MISC_MASK);
 
-
 	/* Disable all legacy interrupts */
 	nwl_bridge_writel(pcie, (u32)~MSGF_LEG_SR_MASKALL, MSGF_LEG_MASK);
 
@@ -932,4 +931,5 @@ static struct platform_driver nwl_pcie_driver = {
 	},
 	.probe = nwl_pcie_probe,
 };
+
 builtin_platform_driver(nwl_pcie_driver);
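
[Editor's note: builtin_platform_driver(), touched by the last hunk, registers a driver that can only be built into the kernel and therefore has no exit path. As a sketch, it expands to roughly the following initcall wiring:]

/* Rough expansion of builtin_platform_driver(nwl_pcie_driver):
 * register at device initcall time, deliberately with no
 * corresponding unregister/module_exit.
 */
static int __init nwl_pcie_driver_init(void)
{
	return platform_driver_register(&nwl_pcie_driver);
}
device_initcall(nwl_pcie_driver_init);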
From patchwork Fri Sep 8 12:23:05 2017
X-Patchwork-Submitter: Ravi Shankar Jonnalagadda
X-Patchwork-Id: 811556
From: Ravi Shankar Jonnalagadda
Subject: [PATCH v2 3/5] dmaengine: zynqmp_ps_pcie: Adding PS PCIe DMA driver
Date: Fri, 8 Sep 2017 17:53:05 +0530
Message-ID: <1504873388-29195-4-git-send-email-vjonnal@xilinx.com>
X-Mailer: git-send-email 2.1.1
In-Reply-To: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com>
References: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com>
X-Mailing-List: linux-pci@vger.kernel.org
Add support for the ZynqMP PS PCIe EP DMA driver and the ZynqMP PS PCIe
Root DMA driver. Modify Kconfig and Makefile to build the new driver.

Signed-off-by: Ravi Shankar Jonnalagadda
Signed-off-by: RaviKiran Gummaluri
---
 drivers/dma/Kconfig               |  12 +++
 drivers/dma/xilinx/Makefile       |   2 +
 drivers/dma/xilinx/ps_pcie.h      |  44 +++++++++
 drivers/dma/xilinx/ps_pcie_main.c | 200 ++++++++++++++++++++++++++++++++++++++
 include/linux/dma/ps_pcie_dma.h   |  69 +++++++++++++
 5 files changed, 327 insertions(+)
 create mode 100644 drivers/dma/xilinx/ps_pcie.h
 create mode 100644 drivers/dma/xilinx/ps_pcie_main.c
 create mode 100644 include/linux/dma/ps_pcie_dma.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index fa8f9c0..e2fe4e5 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -586,6 +586,18 @@ config XILINX_ZYNQMP_DMA
 	help
 	  Enable support for Xilinx ZynqMP DMA controller.
 
+config XILINX_PS_PCIE_DMA
+	tristate "Xilinx PS PCIe DMA support"
+	depends on (PCI && X86_64 || ARM64)
+	select DMA_ENGINE
+	help
+	  Enable support for the Xilinx PS PCIe DMA engine present
+	  in recent Xilinx ZynqMP chipsets.
+
+	  Say Y here if you have such a chipset.
+
+	  If unsure, say N.
+
 config ZX_DMA
 	tristate "ZTE ZX DMA support"
 	depends on ARCH_ZX || COMPILE_TEST

diff --git a/drivers/dma/xilinx/Makefile b/drivers/dma/xilinx/Makefile
index 9e91f8f..04f6f99 100644
--- a/drivers/dma/xilinx/Makefile
+++ b/drivers/dma/xilinx/Makefile
@@ -1,2 +1,4 @@
 obj-$(CONFIG_XILINX_DMA) += xilinx_dma.o
 obj-$(CONFIG_XILINX_ZYNQMP_DMA) += zynqmp_dma.o
+ps_pcie_dma-objs := ps_pcie_main.o ps_pcie_platform.o
+obj-$(CONFIG_XILINX_PS_PCIE_DMA) += ps_pcie_dma.o

diff --git a/drivers/dma/xilinx/ps_pcie.h b/drivers/dma/xilinx/ps_pcie.h
new file mode 100644
index 0000000..351f051
--- /dev/null
+++ b/drivers/dma/xilinx/ps_pcie.h
@@ -0,0 +1,44 @@
+/*
+ * Xilinx PS PCIe DMA Engine platform header file
+ *
+ * Copyright (C) 2010-2017 Xilinx, Inc. All rights reserved.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation
+ */
+
+#ifndef __XILINX_PS_PCIE_H
+#define __XILINX_PS_PCIE_H
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/**
+ * dma_platform_driver_register - This will be invoked by module init
+ *
+ * Return: returns status of platform_driver_register
+ */
+int dma_platform_driver_register(void);
+/**
+ * dma_platform_driver_unregister - This will be invoked by module exit
+ *
+ * Return: returns void after unregistering platform driver
+ */
+void dma_platform_driver_unregister(void);
+
+#endif

diff --git a/drivers/dma/xilinx/ps_pcie_main.c b/drivers/dma/xilinx/ps_pcie_main.c
new file mode 100644
index 0000000..4ccd8ef
--- /dev/null
+++ b/drivers/dma/xilinx/ps_pcie_main.c
@@ -0,0 +1,200 @@
+/*
+ * XILINX PS PCIe driver
+ *
+ * Copyright (C) 2017 Xilinx, Inc. All rights reserved.
+ *
+ * Description
+ * PS PCIe DMA is memory mapped DMA used to execute PS to PL transfers
+ * on ZynqMP UltraScale+ Devices.
+ * This PCIe driver creates a platform device with specific platform
+ * info enabling creation of DMA device corresponding to the channel
+ * information provided in the properties
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation
+ */
+
+#include "ps_pcie.h"
+#include "../dmaengine.h"
+
+#define DRV_MODULE_NAME "ps_pcie_dma"
+
+static int ps_pcie_dma_probe(struct pci_dev *pdev,
+			     const struct pci_device_id *ent);
+static void ps_pcie_dma_remove(struct pci_dev *pdev);
+
+static u32 channel_properties_pcie_axi[] = {
+	(u32)(PCIE_AXI_DIRECTION), (u32)(NUMBER_OF_BUFFER_DESCRIPTORS),
+	(u32)(DEFAULT_DMA_QUEUES), (u32)(CHANNEL_COAELSE_COUNT),
+	(u32)(CHANNEL_POLL_TIMER_FREQUENCY) };
+
+static u32 channel_properties_axi_pcie[] = {
+	(u32)(AXI_PCIE_DIRECTION), (u32)(NUMBER_OF_BUFFER_DESCRIPTORS),
+	(u32)(DEFAULT_DMA_QUEUES), (u32)(CHANNEL_COAELSE_COUNT),
+	(u32)(CHANNEL_POLL_TIMER_FREQUENCY) };
+
+static struct property_entry generic_pcie_ep_property[] = {
+	PROPERTY_ENTRY_U32("numchannels", (u32)MAX_NUMBER_OF_CHANNELS),
+	PROPERTY_ENTRY_U32_ARRAY("ps_pcie_channel0",
+				 channel_properties_pcie_axi),
+	PROPERTY_ENTRY_U32_ARRAY("ps_pcie_channel1",
+				 channel_properties_axi_pcie),
+	PROPERTY_ENTRY_U32_ARRAY("ps_pcie_channel2",
+				 channel_properties_pcie_axi),
+	PROPERTY_ENTRY_U32_ARRAY("ps_pcie_channel3",
+				 channel_properties_axi_pcie),
+	{ },
+};
+
+static const struct platform_device_info xlnx_std_platform_dev_info = {
+	.name = XLNX_PLATFORM_DRIVER_NAME,
+	.properties = generic_pcie_ep_property,
+};
+
+/**
+ * ps_pcie_dma_probe - Driver probe function
+ * @pdev: Pointer to the pci_dev structure
+ * @ent: pci device id
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int ps_pcie_dma_probe(struct pci_dev *pdev,
+			     const struct pci_device_id *ent)
+{
+	int err;
+	struct platform_device *platform_dev;
+	struct platform_device_info platform_dev_info;
+
+	dev_info(&pdev->dev, "PS PCIe DMA Driver probe\n");
+
+	err = pcim_enable_device(pdev);
+	if (err) {
+		dev_err(&pdev->dev, "Cannot enable PCI device, aborting\n");
+		return err;
+	}
+
+	err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+	if (err) {
+		dev_info(&pdev->dev, "Cannot set 64 bit DMA mask\n");
+		err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+		if (err) {
+			dev_err(&pdev->dev, "DMA mask set error\n");
+			return err;
+		}
+	}
+
+	err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+	if (err) {
+		dev_info(&pdev->dev, "Cannot set 64 bit consistent DMA mask\n");
+		err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+		if (err) {
+			dev_err(&pdev->dev, "Cannot set consistent DMA mask\n");
+			return err;
+		}
+	}
+
+	pci_set_master(pdev);
+
+	/* For Root DMA platform device will be created through device tree */
+	if (pdev->vendor == PCI_VENDOR_ID_XILINX &&
+	    pdev->device == ZYNQMP_RC_DMA_DEVID)
+		return 0;
+
+	memcpy(&platform_dev_info, &xlnx_std_platform_dev_info,
+	       sizeof(xlnx_std_platform_dev_info));
+
+	/* Do device specific channel configuration changes to
+	 * platform_dev_info.properties if required
+	 * More information on channel properties can be found
+	 * at Documentation/devicetree/bindings/dma/xilinx/ps-pcie-dma.txt
+	 */
+
+	platform_dev_info.parent = &pdev->dev;
+	platform_dev_info.data = &pdev;
+	platform_dev_info.size_data = sizeof(struct pci_dev **);
+
+	platform_dev = platform_device_register_full(&platform_dev_info);
+	if (IS_ERR(platform_dev)) {
+		dev_err(&pdev->dev,
+			"Cannot create platform device, aborting\n");
+		return PTR_ERR(platform_dev);
+	}
+
+	pci_set_drvdata(pdev, platform_dev);
+
+	dev_info(&pdev->dev, "PS PCIe DMA driver successfully probed\n");
+
+	return 0;
+}
+
+static struct pci_device_id ps_pcie_dma_tbl[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_XILINX, ZYNQMP_DMA_DEVID) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_XILINX, ZYNQMP_RC_DMA_DEVID) },
+	{ }
+};
+
+static struct pci_driver ps_pcie_dma_driver = {
+	.name = DRV_MODULE_NAME,
+	.id_table = ps_pcie_dma_tbl,
+	.probe = ps_pcie_dma_probe,
+	.remove = ps_pcie_dma_remove,
+};
+
+/**
+ * ps_pcie_init - Driver init function
+ *
+ * Return: 0 on success. Non zero on failure
+ */
+static int __init ps_pcie_init(void)
+{
+	int ret;
+
+	pr_info("%s init()\n", DRV_MODULE_NAME);
+
+	ret = pci_register_driver(&ps_pcie_dma_driver);
+	if (ret)
+		return ret;
+
+	ret = dma_platform_driver_register();
+	if (ret)
+		pci_unregister_driver(&ps_pcie_dma_driver);
+
+	return ret;
+}
+
+/**
+ * ps_pcie_dma_remove - Driver remove function
+ * @pdev: Pointer to the pci_dev structure
+ *
+ * Return: void
+ */
+static void ps_pcie_dma_remove(struct pci_dev *pdev)
+{
+	struct platform_device *platform_dev;
+
+	platform_dev = (struct platform_device *)pci_get_drvdata(pdev);
+
+	if (platform_dev)
+		platform_device_unregister(platform_dev);
+}
+
+/**
+ * ps_pcie_exit - Driver exit function
+ *
+ * Return: void
+ */
+static void __exit ps_pcie_exit(void)
+{
+	pr_info("%s exit()\n", DRV_MODULE_NAME);
+
+	dma_platform_driver_unregister();
+	pci_unregister_driver(&ps_pcie_dma_driver);
+}
+
+module_init(ps_pcie_init);
+module_exit(ps_pcie_exit);
+
+MODULE_AUTHOR("Xilinx Inc");
+MODULE_DESCRIPTION("Xilinx PS PCIe DMA Driver");
+MODULE_LICENSE("GPL v2");

diff --git a/include/linux/dma/ps_pcie_dma.h b/include/linux/dma/ps_pcie_dma.h
new file mode 100644
index 0000000..d11323a
--- /dev/null
+++ b/include/linux/dma/ps_pcie_dma.h
@@ -0,0 +1,69 @@
+/*
+ * Xilinx PS PCIe DMA Engine support header file
+ *
+ * Copyright (C) 2017 Xilinx, Inc. All rights reserved.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation
+ */
+
+#ifndef __DMA_XILINX_PS_PCIE_H
+#define __DMA_XILINX_PS_PCIE_H
+
+#include
+#include
+
+#define XLNX_PLATFORM_DRIVER_NAME "xlnx-platform-dma-driver"
+
+#define ZYNQMP_DMA_DEVID (0xD024)
+#define ZYNQMP_RC_DMA_DEVID (0xD021)
+
+#define MAX_ALLOWED_CHANNELS_IN_HW 4
+
+#define MAX_NUMBER_OF_CHANNELS MAX_ALLOWED_CHANNELS_IN_HW
+
+#define DEFAULT_DMA_QUEUES 4
+#define TWO_DMA_QUEUES 2
+
+#define NUMBER_OF_BUFFER_DESCRIPTORS 1999
+#define MAX_DESCRIPTORS 65536
+
+#define CHANNEL_COAELSE_COUNT 0
+
+#define CHANNEL_POLL_TIMER_FREQUENCY 1000 /* in milli seconds */
+
+#define PCIE_AXI_DIRECTION DMA_TO_DEVICE
+#define AXI_PCIE_DIRECTION DMA_FROM_DEVICE
+
+/**
+ * struct BAR_PARAMS - PCIe Bar Parameters
+ * @BAR_PHYS_ADDR: PCIe BAR Physical address
+ * @BAR_LENGTH: Length of PCIe BAR
+ * @BAR_VIRT_ADDR: Virtual Address to access PCIe BAR
+ */
+struct BAR_PARAMS {
+	dma_addr_t BAR_PHYS_ADDR; /**< Base physical address of BAR memory */
+	unsigned long BAR_LENGTH; /**< Length of BAR memory window */
+	void *BAR_VIRT_ADDR; /**< Virtual Address of mapped BAR memory */
+};
+
+/**
+ * struct ps_pcie_dma_channel_match - Match structure for dma clients
+ * @pci_vendorid: PCIe Vendor id of PS PCIe DMA device
+ * @pci_deviceid: PCIe Device id of PS PCIe DMA device
+ * @board_number: Unique id to identify individual device in a system
+ * @channel_number: Unique channel number of the device
+ * @direction: DMA channel direction
+ * @bar_params: Pointer to BAR_PARAMS for accessing application specific data
+ */
+struct ps_pcie_dma_channel_match {
+	u16 pci_vendorid;
+	u16 pci_deviceid;
+	u16 board_number;
+	u16 channel_number;
+	enum dma_data_direction direction;
+	struct BAR_PARAMS *bar_params;
+};
+
+#endif
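
[Editor's note: to make the intended use of ps_pcie_dma_channel_match concrete, here is a client-side sketch built on the generic dmaengine filter API. That the channel provider publishes its match data through chan->private is an assumption (the header suggests it, but the provider patch defines it), and the channel/direction values are illustrative:]

#include <linux/dmaengine.h>
#include <linux/pci_ids.h>
#include <linux/dma/ps_pcie_dma.h>

/* Accept only the channel whose advertised match data agrees with what
 * the client asked for; assumes the provider stores a
 * ps_pcie_dma_channel_match pointer in chan->private.
 */
static bool ps_pcie_dma_filter(struct dma_chan *chan, void *param)
{
	struct ps_pcie_dma_channel_match *want = param;
	struct ps_pcie_dma_channel_match *have = chan->private;

	if (!have)
		return false;

	return have->pci_vendorid == want->pci_vendorid &&
	       have->pci_deviceid == want->pci_deviceid &&
	       have->channel_number == want->channel_number &&
	       have->direction == want->direction;
}

static struct dma_chan *ps_pcie_request_channel(void)
{
	struct ps_pcie_dma_channel_match match = {
		.pci_vendorid = PCI_VENDOR_ID_XILINX,
		.pci_deviceid = ZYNQMP_DMA_DEVID,
		.channel_number = 0,
		.direction = PCIE_AXI_DIRECTION, /* DMA_TO_DEVICE */
	};
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	/* Returns NULL if no matching channel is available */
	return dma_request_channel(mask, ps_pcie_dma_filter, &match);
}

[dma_request_channel() offers each free channel on every registered DMA device to the filter callback; this filter-based lookup is the usual route for PCIe endpoint DMA, which has no of_node to resolve channels from.]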
From patchwork Fri Sep 8 12:23:06 2017
X-Patchwork-Submitter: Ravi Shankar Jonnalagadda
X-Patchwork-Id: 811561
From: Ravi Shankar Jonnalagadda
Subject: [PATCH v2 4/5] dmaengine: zynqmp_ps_pcie: Adding PS PCIe platform DMA driver
Date: Fri, 8 Sep 2017 17:53:06 +0530
Message-ID: <1504873388-29195-5-git-send-email-vjonnal@xilinx.com>
X-Mailer: git-send-email 2.1.1
In-Reply-To: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com>
References: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com>
X-Mailing-List: linux-pci@vger.kernel.org
The platform driver handles DMA transactions for both the PCIe EP DMA
and the Root DMA.

Signed-off-by: Ravi Shankar Jonnalagadda
Signed-off-by: RaviKiran Gummaluri
---
 drivers/dma/xilinx/ps_pcie_platform.c | 3055 +++++++++++++++++++++++++++++++++
 1 file changed, 3055 insertions(+)
 create mode 100644 drivers/dma/xilinx/ps_pcie_platform.c
100644 drivers/dma/xilinx/ps_pcie_platform.c diff --git a/drivers/dma/xilinx/ps_pcie_platform.c b/drivers/dma/xilinx/ps_pcie_platform.c new file mode 100644 index 0000000..79f324a --- /dev/null +++ b/drivers/dma/xilinx/ps_pcie_platform.c @@ -0,0 +1,3055 @@ +/* + * XILINX PS PCIe DMA driver + * + * Copyright (C) 2017 Xilinx, Inc. All rights reserved. + * + * Description + * PS PCIe DMA is memory mapped DMA used to execute PS to PL transfers + * on ZynqMP UltraScale+ Devices + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation + */ + +#include "ps_pcie.h" +#include "../dmaengine.h" + +#define PLATFORM_DRIVER_NAME "ps_pcie_pform_dma" +#define MAX_BARS 6 + +#define DMA_BAR_NUMBER 0 + +#define MIN_SW_INTR_TRANSACTIONS 2 + +#define CHANNEL_PROPERTY_LENGTH 50 +#define WORKQ_NAME_SIZE 100 +#define INTR_HANDLR_NAME_SIZE 100 + +#define PS_PCIE_DMA_IRQ_NOSHARE 0 + +#define MAX_COALESCE_COUNT 255 + +#define DMA_CHANNEL_REGS_SIZE 0x80 + +#define DMA_SRCQPTRLO_REG_OFFSET (0x00) /* Source Q pointer Lo */ +#define DMA_SRCQPTRHI_REG_OFFSET (0x04) /* Source Q pointer Hi */ +#define DMA_SRCQSZ_REG_OFFSET (0x08) /* Source Q size */ +#define DMA_SRCQLMT_REG_OFFSET (0x0C) /* Source Q limit */ +#define DMA_DSTQPTRLO_REG_OFFSET (0x10) /* Destination Q pointer Lo */ +#define DMA_DSTQPTRHI_REG_OFFSET (0x14) /* Destination Q pointer Hi */ +#define DMA_DSTQSZ_REG_OFFSET (0x18) /* Destination Q size */ +#define DMA_DSTQLMT_REG_OFFSET (0x1C) /* Destination Q limit */ +#define DMA_SSTAQPTRLO_REG_OFFSET (0x20) /* Source Status Q pointer Lo */ +#define DMA_SSTAQPTRHI_REG_OFFSET (0x24) /* Source Status Q pointer Hi */ +#define DMA_SSTAQSZ_REG_OFFSET (0x28) /* Source Status Q size */ +#define DMA_SSTAQLMT_REG_OFFSET (0x2C) /* Source Status Q limit */ +#define DMA_DSTAQPTRLO_REG_OFFSET (0x30) /* Destination Status Q pointer Lo */ +#define DMA_DSTAQPTRHI_REG_OFFSET (0x34) /* Destination Status Q pointer Hi */ +#define DMA_DSTAQSZ_REG_OFFSET (0x38) /* Destination Status Q size */ +#define DMA_DSTAQLMT_REG_OFFSET (0x3C) /* Destination Status Q limit */ +#define DMA_SRCQNXT_REG_OFFSET (0x40) /* Source Q next */ +#define DMA_DSTQNXT_REG_OFFSET (0x44) /* Destination Q next */ +#define DMA_SSTAQNXT_REG_OFFSET (0x48) /* Source Status Q next */ +#define DMA_DSTAQNXT_REG_OFFSET (0x4C) /* Destination Status Q next */ +#define DMA_SCRATCH0_REG_OFFSET (0x50) /* Scratch pad register 0 */ + +#define DMA_PCIE_INTR_CNTRL_REG_OFFSET (0x60) /* DMA PCIe intr control reg */ +#define DMA_PCIE_INTR_STATUS_REG_OFFSET (0x64) /* DMA PCIe intr status reg */ +#define DMA_AXI_INTR_CNTRL_REG_OFFSET (0x68) /* DMA AXI intr control reg */ +#define DMA_AXI_INTR_STATUS_REG_OFFSET (0x6C) /* DMA AXI intr status reg */ +#define DMA_PCIE_INTR_ASSRT_REG_OFFSET (0x70) /* PCIe intr assert reg */ +#define DMA_AXI_INTR_ASSRT_REG_OFFSET (0x74) /* AXI intr assert register */ +#define DMA_CNTRL_REG_OFFSET (0x78) /* DMA control register */ +#define DMA_STATUS_REG_OFFSET (0x7C) /* DMA status register */ + +#define DMA_CNTRL_RST_BIT BIT(1) +#define DMA_CNTRL_64BIT_STAQ_ELEMSZ_BIT BIT(2) +#define DMA_CNTRL_ENABL_BIT BIT(0) +#define DMA_STATUS_DMA_PRES_BIT BIT(15) +#define DMA_STATUS_DMA_RUNNING_BIT BIT(0) +#define DMA_QPTRLO_QLOCAXI_BIT BIT(0) +#define DMA_QPTRLO_Q_ENABLE_BIT BIT(1) +#define DMA_INTSTATUS_DMAERR_BIT BIT(1) +#define DMA_INTSTATUS_SGLINTR_BIT BIT(2) +#define DMA_INTSTATUS_SWINTR_BIT BIT(3) +#define 
DMA_INTCNTRL_ENABLINTR_BIT BIT(0) +#define DMA_INTCNTRL_DMAERRINTR_BIT BIT(1) +#define DMA_INTCNTRL_DMASGINTR_BIT BIT(2) +#define DMA_SW_INTR_ASSRT_BIT BIT(3) + +#define SOURCE_CONTROL_BD_BYTE_COUNT_MASK GENMASK(23, 0) +#define SOURCE_CONTROL_BD_LOC_AXI BIT(24) +#define SOURCE_CONTROL_BD_EOP_BIT BIT(25) +#define SOURCE_CONTROL_BD_INTR_BIT BIT(26) +#define SOURCE_CONTROL_BACK_TO_BACK_PACK_BIT BIT(25) +#define SOURCE_CONTROL_ATTRIBUTES_MASK GENMASK(31, 28) +#define SRC_CTL_ATTRIB_BIT_SHIFT (29) + +#define STA_BD_COMPLETED_BIT BIT(0) +#define STA_BD_SOURCE_ERROR_BIT BIT(1) +#define STA_BD_DESTINATION_ERROR_BIT BIT(2) +#define STA_BD_INTERNAL_ERROR_BIT BIT(3) +#define STA_BD_UPPER_STATUS_NONZERO_BIT BIT(31) +#define STA_BD_BYTE_COUNT_MASK GENMASK(30, 4) + +#define STA_BD_BYTE_COUNT_SHIFT 4 + +#define DMA_INTCNTRL_SGCOLSCCNT_BIT_SHIFT (16) + +#define DMA_SRC_Q_LOW_BIT_SHIFT GENMASK(5, 0) + +#define MAX_TRANSFER_LENGTH 0x1000000 + +#define AXI_ATTRIBUTE 0x3 +#define PCI_ATTRIBUTE 0x2 + +#define ROOTDMA_Q_READ_ATTRIBUTE 0x8 + +/* + * User Id programmed into Source Q will be copied into Status Q of Destination + */ +#define DEFAULT_UID 1 + +/* + * DMA channel registers + */ +struct DMA_ENGINE_REGISTERS { + u32 src_q_low; /* 0x00 */ + u32 src_q_high; /* 0x04 */ + u32 src_q_size; /* 0x08 */ + u32 src_q_limit; /* 0x0C */ + u32 dst_q_low; /* 0x10 */ + u32 dst_q_high; /* 0x14 */ + u32 dst_q_size; /* 0x18 */ + u32 dst_q_limit; /* 0x1c */ + u32 stas_q_low; /* 0x20 */ + u32 stas_q_high; /* 0x24 */ + u32 stas_q_size; /* 0x28 */ + u32 stas_q_limit; /* 0x2C */ + u32 stad_q_low; /* 0x30 */ + u32 stad_q_high; /* 0x34 */ + u32 stad_q_size; /* 0x38 */ + u32 stad_q_limit; /* 0x3C */ + u32 src_q_next; /* 0x40 */ + u32 dst_q_next; /* 0x44 */ + u32 stas_q_next; /* 0x48 */ + u32 stad_q_next; /* 0x4C */ + u32 scrathc0; /* 0x50 */ + u32 scrathc1; /* 0x54 */ + u32 scrathc2; /* 0x58 */ + u32 scrathc3; /* 0x5C */ + u32 pcie_intr_cntrl; /* 0x60 */ + u32 pcie_intr_status; /* 0x64 */ + u32 axi_intr_cntrl; /* 0x68 */ + u32 axi_intr_status; /* 0x6C */ + u32 pcie_intr_assert; /* 0x70 */ + u32 axi_intr_assert; /* 0x74 */ + u32 dma_channel_ctrl; /* 0x78 */ + u32 dma_channel_status; /* 0x7C */ +} __attribute__((__packed__)); + +/** + * struct SOURCE_DMA_DESCRIPTOR - Source Hardware Descriptor + * @system_address: 64 bit buffer physical address + * @control_byte_count: Byte count/buffer length and control flags + * @user_handle: User handle gets copied to status q on completion + * @user_id: User id gets copied to status q of destination + */ +struct SOURCE_DMA_DESCRIPTOR { + u64 system_address; + u32 control_byte_count; + u16 user_handle; + u16 user_id; +} __attribute__((__packed__)); + +/** + * struct DEST_DMA_DESCRIPTOR - Destination Hardware Descriptor + * @system_address: 64 bit buffer physical address + * @control_byte_count: Byte count/buffer length and control flags + * @user_handle: User handle gets copied to status q on completion + * @reserved: Reserved field + */ +struct DEST_DMA_DESCRIPTOR { + u64 system_address; + u32 control_byte_count; + u16 user_handle; + u16 reserved; +} __attribute__((__packed__)); + +/** + * struct STATUS_DMA_DESCRIPTOR - Status Hardware Descriptor + * @status_flag_byte_count: Byte count/buffer length and status flags + * @user_handle: User handle gets copied from src/dstq on completion + * @user_id: User id gets copied from srcq + */ +struct STATUS_DMA_DESCRIPTOR { + u32 status_flag_byte_count; + u16 user_handle; + u16 user_id; +} __attribute__((__packed__)); + +enum PACKET_CONTEXT_AVAILABILITY 
{ + FREE = 0, /*Packet transfer Parameter context is free.*/ + IN_USE /*Packet transfer Parameter context is in use.*/ +}; + +struct ps_pcie_transfer_elements { + struct scatterlist *src_sgl; + unsigned int srcq_num_elemets; + struct scatterlist *dst_sgl; + unsigned int dstq_num_elemets; +}; + +struct ps_pcie_tx_segment { + struct list_head node; + struct dma_async_tx_descriptor async_tx; + struct ps_pcie_transfer_elements tx_elements; +}; + +struct ps_pcie_intr_segment { + struct list_head node; + struct dma_async_tx_descriptor async_intr_tx; +}; + +/* + * The context structure stored for each DMA transaction + * This structure is maintained separately for Src Q and Destination Q + * @availability_status: Indicates whether packet context is available + * @idx_sop: Indicates starting index of buffer descriptor for a transfer + * @idx_eop: Indicates ending index of buffer descriptor for a transfer + * @sgl: Indicates either src or dst sglist for the transaction + */ +struct PACKET_TRANSFER_PARAMS { + enum PACKET_CONTEXT_AVAILABILITY availability_status; + u16 idx_sop; + u16 idx_eop; + struct scatterlist *sgl; + struct ps_pcie_tx_segment *seg; + u32 requested_bytes; +}; + +enum CHANNEL_STATE { + CHANNEL_RESOURCE_UNALLOCATED = 0, /* Channel resources not allocated */ + CHANNEL_UNAVIALBLE, /* Channel inactive */ + CHANNEL_AVAILABLE, /* Channel available for transfers */ + CHANNEL_ERROR /* Channel encountered errors */ +}; + +enum BUFFER_LOCATION { + BUFFER_LOC_PCI = 0, + BUFFER_LOC_AXI, + BUFFER_LOC_INVALID +}; + +enum dev_channel_properties { + DMA_CHANNEL_DIRECTION = 0, + NUM_DESCRIPTORS, + NUM_QUEUES, + COALESE_COUNT, + POLL_TIMER_FREQUENCY +}; + +/* + * struct ps_pcie_dma_chan - Driver specific DMA channel structure + * @xdev: Driver specific device structure + * @dev: The dma device + * @common: DMA common channel + * @chan_base: Pointer to Channel registers + * @channel_number: DMA channel number in the device + * @num_queues: Number of queues per channel. 
+ *		It should be four for memory mapped case and
+ *		two for Streaming case
+ * @direction: Transfer direction
+ * @state: Indicates channel state
+ * @channel_lock: Spin lock to be used before changing channel state
+ * @cookie_lock: Spin lock to be used before assigning cookie for a transaction
+ * @coalesce_count: Indicates number of packet transfers before interrupts
+ * @poll_timer_freq: Indicates frequency of polling for completed transactions
+ * @poll_timer: Timer to poll dma buffer descriptors if coalesce count is > 0
+ * @src_avail_descriptors: Available sgl source descriptors
+ * @src_desc_lock: Lock for synchronizing src_avail_descriptors
+ * @dst_avail_descriptors: Available sgl destination descriptors
+ * @dst_desc_lock: Lock for synchronizing dst_avail_descriptors
+ * @src_sgl_bd_pa: Physical address of Source SGL buffer Descriptors
+ * @psrc_sgl_bd: Virtual address of Source SGL buffer Descriptors
+ * @src_sgl_freeidx: Holds index of Source SGL buffer descriptor to be filled
+ * @dst_sgl_bd_pa: Physical address of Dst SGL buffer Descriptors
+ * @pdst_sgl_bd: Virtual address of Dst SGL buffer Descriptors
+ * @dst_sgl_freeidx: Holds index of Destination SGL buffer descriptor
+ *		to be filled
+ * @src_sta_bd_pa: Physical address of StatusQ buffer Descriptors
+ * @psrc_sta_bd: Virtual address of Src StatusQ buffer Descriptors
+ * @src_staprobe_idx: Holds index of Status Q to be examined for SrcQ updates
+ * @src_sta_hw_probe_idx: Holds index of maximum limit of Status Q for hardware
+ * @dst_sta_bd_pa: Physical address of Dst StatusQ buffer Descriptor
+ * @pdst_sta_bd: Virtual address of Dst Status Q buffer Descriptors
+ * @dst_staprobe_idx: Holds index of Status Q to be examined for updates
+ * @dst_sta_hw_probe_idx: Holds index of max limit of Dst Status Q for hardware
+ * @read_attribute: Describes the attributes of buffer in srcq
+ * @write_attribute: Describes the attributes of buffer in dstq
+ * @intr_status_offset: Register offset to be checked on receiving interrupt
+ * @intr_control_offset: Register offset to be used to control interrupts
+ * @ppkt_ctx_srcq: Virtual address of packet context to Src Q updates
+ * @idx_ctx_srcq_head: Holds index of packet context to be filled for Source Q
+ * @idx_ctx_srcq_tail: Holds index of packet context to be examined for Source Q
+ * @ppkt_ctx_dstq: Virtual address of packet context to Dst Q updates
+ * @idx_ctx_dstq_head: Holds index of packet context to be filled for Dst Q
+ * @idx_ctx_dstq_tail: Holds index of packet context to be examined for Dst Q
+ * @pending_list_lock: Lock to be taken before updating pending transfers list
+ * @pending_list: List of transactions submitted to channel
+ * @active_list_lock: Lock to be taken before transferring transactions from
+ *		pending list to active list which will be subsequently
+ *		submitted to hardware
+ * @active_list: List of transactions that will be submitted to hardware
+ * @pending_interrupts_lock: Lock to be taken before updating pending Intr list
+ * @pending_interrupts_list: List of interrupt transactions submitted to channel
+ * @active_interrupts_lock: Lock to be taken before transferring transactions
+ *		from pending interrupt list to active interrupt list
+ * @active_interrupts_list: List of interrupt transactions that are active
+ * @transactions_pool: Mem pool to allocate dma transactions quickly
+ * @intr_transactions_pool: Mem pool to allocate interrupt transactions quickly
+ * @sw_intrs_wrkq: Work Q which performs handling of
software intrs + * @handle_sw_intrs:Work function handling software interrupts + * @maintenance_workq: Work Q to perform maintenance tasks during stop or error + * @handle_chan_reset: Work that invokes channel reset function + * @handle_chan_shutdown: Work that invokes channel shutdown function + * @handle_chan_terminate: Work that invokes channel transactions termination + * @chan_shutdown_complt: Completion variable which says shutdown is done + * @chan_terminate_complete: Completion variable which says terminate is done + * @primary_desc_cleanup: Work Q which performs work related to sgl handling + * @handle_primary_desc_cleanup: Work that invokes src Q, dst Q cleanup + * and programming + * @chan_programming: Work Q which performs work related to channel programming + * @handle_chan_programming: Work that invokes channel programming function + * @srcq_desc_cleanup: Work Q which performs src Q descriptor cleanup + * @handle_srcq_desc_cleanup: Work function handling Src Q completions + * @dstq_desc_cleanup: Work Q which performs dst Q descriptor cleanup + * @handle_dstq_desc_cleanup: Work function handling Dst Q completions + * @srcq_work_complete: Src Q Work completion variable for primary work + * @dstq_work_complete: Dst Q Work completion variable for primary work + */ +struct ps_pcie_dma_chan { + struct xlnx_pcie_dma_device *xdev; + struct device *dev; + + struct dma_chan common; + + struct DMA_ENGINE_REGISTERS *chan_base; + u16 channel_number; + + u32 num_queues; + enum dma_data_direction direction; + enum BUFFER_LOCATION srcq_buffer_location; + enum BUFFER_LOCATION dstq_buffer_location; + + u32 total_descriptors; + + enum CHANNEL_STATE state; + spinlock_t channel_lock; /* For changing channel state */ + + spinlock_t cookie_lock; /* For acquiring cookie from dma framework*/ + + u32 coalesce_count; + u32 poll_timer_freq; + + struct timer_list poll_timer; + + u32 src_avail_descriptors; + spinlock_t src_desc_lock; /* For handling srcq available descriptors */ + + u32 dst_avail_descriptors; + spinlock_t dst_desc_lock; /* For handling dstq available descriptors */ + + dma_addr_t src_sgl_bd_pa; + struct SOURCE_DMA_DESCRIPTOR *psrc_sgl_bd; + u32 src_sgl_freeidx; + + dma_addr_t dst_sgl_bd_pa; + struct DEST_DMA_DESCRIPTOR *pdst_sgl_bd; + u32 dst_sgl_freeidx; + + dma_addr_t src_sta_bd_pa; + struct STATUS_DMA_DESCRIPTOR *psrc_sta_bd; + u32 src_staprobe_idx; + u32 src_sta_hw_probe_idx; + + dma_addr_t dst_sta_bd_pa; + struct STATUS_DMA_DESCRIPTOR *pdst_sta_bd; + u32 dst_staprobe_idx; + u32 dst_sta_hw_probe_idx; + + u32 read_attribute; + u32 write_attribute; + + u32 intr_status_offset; + u32 intr_control_offset; + + struct PACKET_TRANSFER_PARAMS *ppkt_ctx_srcq; + u16 idx_ctx_srcq_head; + u16 idx_ctx_srcq_tail; + + struct PACKET_TRANSFER_PARAMS *ppkt_ctx_dstq; + u16 idx_ctx_dstq_head; + u16 idx_ctx_dstq_tail; + + spinlock_t pending_list_lock; /* For handling dma pending_list */ + struct list_head pending_list; + spinlock_t active_list_lock; /* For handling dma active_list */ + struct list_head active_list; + + spinlock_t pending_interrupts_lock; /* For dma pending interrupts list*/ + struct list_head pending_interrupts_list; + spinlock_t active_interrupts_lock; /* For dma active interrupts list*/ + struct list_head active_interrupts_list; + + mempool_t *transactions_pool; + mempool_t *intr_transactions_pool; + + struct workqueue_struct *sw_intrs_wrkq; + struct work_struct handle_sw_intrs; + + struct workqueue_struct *maintenance_workq; + struct work_struct handle_chan_reset; + struct work_struct 
handle_chan_shutdown; + struct work_struct handle_chan_terminate; + + struct completion chan_shutdown_complt; + struct completion chan_terminate_complete; + + struct workqueue_struct *primary_desc_cleanup; + struct work_struct handle_primary_desc_cleanup; + + struct workqueue_struct *chan_programming; + struct work_struct handle_chan_programming; + + struct workqueue_struct *srcq_desc_cleanup; + struct work_struct handle_srcq_desc_cleanup; + struct completion srcq_work_complete; + + struct workqueue_struct *dstq_desc_cleanup; + struct work_struct handle_dstq_desc_cleanup; + struct completion dstq_work_complete; +}; + +/* + * struct xlnx_pcie_dma_device - Driver specific platform device structure + * @is_rootdma: Indicates whether the dma instance is root port dma + * @dma_buf_ext_addr: Indicates whether target system is 32 bit or 64 bit + * @bar_mask: Indicates available pcie bars + * @board_number: Count value of platform device + * @dev: Device structure pointer for pcie device + * @channels: Pointer to device DMA channels structure + * @common: DMA device structure + * @num_channels: Number of channels active for the device + * @reg_base: Base address of first DMA channel of the device + * @irq_vecs: Number of irq vectors allocated to pci device + * @pci_dev: Parent pci device which created this platform device + * @bar_info: PCIe bar related information + * @platform_irq_vec: Platform irq vector number for root dma + * @rootdma_vendor: PCI Vendor id for root dma + * @rootdma_device: PCI Device id for root dma + */ +struct xlnx_pcie_dma_device { + bool is_rootdma; + bool dma_buf_ext_addr; + u32 bar_mask; + u16 board_number; + struct device *dev; + struct ps_pcie_dma_chan *channels; + struct dma_device common; + int num_channels; + int irq_vecs; + void __iomem *reg_base; + struct pci_dev *pci_dev; + struct BAR_PARAMS bar_info[MAX_BARS]; + int platform_irq_vec; + u16 rootdma_vendor; + u16 rootdma_device; +}; + +#define to_xilinx_chan(chan) \ + container_of(chan, struct ps_pcie_dma_chan, common) +#define to_ps_pcie_dma_tx_descriptor(tx) \ + container_of(tx, struct ps_pcie_tx_segment, async_tx) +#define to_ps_pcie_dma_tx_intr_descriptor(tx) \ + container_of(tx, struct ps_pcie_intr_segment, async_intr_tx) + +/* Function Protypes */ +static u32 ps_pcie_dma_read(struct ps_pcie_dma_chan *chan, u32 reg); +static void ps_pcie_dma_write(struct ps_pcie_dma_chan *chan, u32 reg, + u32 value); +static void ps_pcie_dma_clr_mask(struct ps_pcie_dma_chan *chan, u32 reg, + u32 mask); +static void ps_pcie_dma_set_mask(struct ps_pcie_dma_chan *chan, u32 reg, + u32 mask); +static int irq_setup(struct xlnx_pcie_dma_device *xdev); +static int platform_irq_setup(struct xlnx_pcie_dma_device *xdev); +static int chan_intr_setup(struct xlnx_pcie_dma_device *xdev); +static int device_intr_setup(struct xlnx_pcie_dma_device *xdev); +static int irq_probe(struct xlnx_pcie_dma_device *xdev); +static int ps_pcie_check_intr_status(struct ps_pcie_dma_chan *chan); +static irqreturn_t ps_pcie_dma_dev_intr_handler(int irq, void *data); +static irqreturn_t ps_pcie_dma_chan_intr_handler(int irq, void *data); +static int init_hw_components(struct ps_pcie_dma_chan *chan); +static int init_sw_components(struct ps_pcie_dma_chan *chan); +static void update_channel_read_attribute(struct ps_pcie_dma_chan *chan); +static void update_channel_write_attribute(struct ps_pcie_dma_chan *chan); +static void ps_pcie_chan_reset(struct ps_pcie_dma_chan *chan); +static void poll_completed_transactions(unsigned long arg); +static bool 
check_descriptors_for_two_queues(struct ps_pcie_dma_chan *chan, + struct ps_pcie_tx_segment *seg); +static bool check_descriptors_for_all_queues(struct ps_pcie_dma_chan *chan, + struct ps_pcie_tx_segment *seg); +static bool check_descriptor_availability(struct ps_pcie_dma_chan *chan, + struct ps_pcie_tx_segment *seg); +static void handle_error(struct ps_pcie_dma_chan *chan); +static void xlnx_ps_pcie_update_srcq(struct ps_pcie_dma_chan *chan, + struct ps_pcie_tx_segment *seg); +static void xlnx_ps_pcie_update_dstq(struct ps_pcie_dma_chan *chan, + struct ps_pcie_tx_segment *seg); +static void ps_pcie_chan_program_work(struct work_struct *work); +static void dst_cleanup_work(struct work_struct *work); +static void src_cleanup_work(struct work_struct *work); +static void ps_pcie_chan_primary_work(struct work_struct *work); +static int probe_channel_properties(struct platform_device *platform_dev, + struct xlnx_pcie_dma_device *xdev, + u16 channel_number); +static void xlnx_ps_pcie_destroy_mempool(struct ps_pcie_dma_chan *chan); +static void xlnx_ps_pcie_free_worker_queues(struct ps_pcie_dma_chan *chan); +static void xlnx_ps_pcie_free_pkt_ctxts(struct ps_pcie_dma_chan *chan); +static void xlnx_ps_pcie_free_descriptors(struct ps_pcie_dma_chan *chan); +static int xlnx_ps_pcie_channel_activate(struct ps_pcie_dma_chan *chan); +static void xlnx_ps_pcie_channel_quiesce(struct ps_pcie_dma_chan *chan); +static void ivk_cbk_for_pending(struct ps_pcie_dma_chan *chan); +static void xlnx_ps_pcie_reset_channel(struct ps_pcie_dma_chan *chan); +static void xlnx_ps_pcie_free_poll_timer(struct ps_pcie_dma_chan *chan); +static int xlnx_ps_pcie_alloc_poll_timer(struct ps_pcie_dma_chan *chan); +static void terminate_transactions_work(struct work_struct *work); +static void chan_shutdown_work(struct work_struct *work); +static void chan_reset_work(struct work_struct *work); +static int xlnx_ps_pcie_alloc_worker_threads(struct ps_pcie_dma_chan *chan); +static int xlnx_ps_pcie_alloc_mempool(struct ps_pcie_dma_chan *chan); +static int xlnx_ps_pcie_alloc_pkt_contexts(struct ps_pcie_dma_chan *chan); +static int dma_alloc_descriptors_two_queues(struct ps_pcie_dma_chan *chan); +static int dma_alloc_decriptors_all_queues(struct ps_pcie_dma_chan *chan); +static void xlnx_ps_pcie_dma_free_chan_resources(struct dma_chan *dchan); +static int xlnx_ps_pcie_dma_alloc_chan_resources(struct dma_chan *dchan); +static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx); +static dma_cookie_t xilinx_intr_tx_submit(struct dma_async_tx_descriptor *tx); +static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_dma_sg( + struct dma_chan *channel, struct scatterlist *dst_sg, + unsigned int dst_nents, struct scatterlist *src_sg, + unsigned int src_nents, unsigned long flags); +static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_slave_sg( + struct dma_chan *channel, struct scatterlist *sgl, + unsigned int sg_len, enum dma_transfer_direction direction, + unsigned long flags, void *context); +static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_interrupt( + struct dma_chan *channel, unsigned long flags); +static void xlnx_ps_pcie_dma_issue_pending(struct dma_chan *channel); +static int xlnx_ps_pcie_dma_terminate_all(struct dma_chan *channel); +static int read_rootdma_config(struct platform_device *platform_dev, + struct xlnx_pcie_dma_device *xdev); +static int read_epdma_config(struct platform_device *platform_dev, + struct xlnx_pcie_dma_device *xdev); +static int xlnx_pcie_dma_driver_probe(struct 
platform_device *platform_dev); +static int xlnx_pcie_dma_driver_remove(struct platform_device *platform_dev); + +/* IO accessors */ +static inline u32 ps_pcie_dma_read(struct ps_pcie_dma_chan *chan, u32 reg) +{ + return ioread32((void __iomem *)((char *)(chan->chan_base) + reg)); +} + +static inline void ps_pcie_dma_write(struct ps_pcie_dma_chan *chan, u32 reg, + u32 value) +{ + iowrite32(value, (void __iomem *)((char *)(chan->chan_base) + reg)); +} + +static inline void ps_pcie_dma_clr_mask(struct ps_pcie_dma_chan *chan, u32 reg, + u32 mask) +{ + ps_pcie_dma_write(chan, reg, ps_pcie_dma_read(chan, reg) & ~mask); +} + +static inline void ps_pcie_dma_set_mask(struct ps_pcie_dma_chan *chan, u32 reg, + u32 mask) +{ + ps_pcie_dma_write(chan, reg, ps_pcie_dma_read(chan, reg) | mask); +} + +/** + * ps_pcie_dma_dev_intr_handler - This will be invoked for MSI/Legacy interrupts + * + * @irq: IRQ number + * @data: Pointer to the PS PCIe DMA channel structure + * + * Return: IRQ_HANDLED/IRQ_NONE + */ +static irqreturn_t ps_pcie_dma_dev_intr_handler(int irq, void *data) +{ + struct xlnx_pcie_dma_device *xdev = + (struct xlnx_pcie_dma_device *)data; + struct ps_pcie_dma_chan *chan = NULL; + int i; + int err = -1; + int ret = -1; + + for (i = 0; i < xdev->num_channels; i++) { + chan = &xdev->channels[i]; + err = ps_pcie_check_intr_status(chan); + if (err == 0) + ret = 0; + } + + return (ret == 0) ? IRQ_HANDLED : IRQ_NONE; +} + +/** + * ps_pcie_dma_chan_intr_handler - This will be invoked for MSI-X interrupts + * + * @irq: IRQ number + * @data: Pointer to the PS PCIe DMA channel structure + * + * Return: IRQ_HANDLED + */ +static irqreturn_t ps_pcie_dma_chan_intr_handler(int irq, void *data) +{ + struct ps_pcie_dma_chan *chan = (struct ps_pcie_dma_chan *)data; + + ps_pcie_check_intr_status(chan); + + return IRQ_HANDLED; +} + +/** + * chan_intr_setup - Requests Interrupt handler for individual channels + * + * @xdev: Driver specific data for device + * + * Return: 0 on success and non zero value on failure. + */ +static int chan_intr_setup(struct xlnx_pcie_dma_device *xdev) +{ + struct ps_pcie_dma_chan *chan; + int i; + int err = 0; + + for (i = 0; i < xdev->num_channels; i++) { + chan = &xdev->channels[i]; + err = devm_request_irq(xdev->dev, + pci_irq_vector(xdev->pci_dev, i), + ps_pcie_dma_chan_intr_handler, + PS_PCIE_DMA_IRQ_NOSHARE, + "PS PCIe DMA Chan Intr handler", chan); + if (err) { + dev_err(xdev->dev, + "Irq %d for chan %d error %d\n", + pci_irq_vector(xdev->pci_dev, i), + chan->channel_number, err); + break; + } + } + + if (err) { + while (--i >= 0) { + chan = &xdev->channels[i]; + devm_free_irq(xdev->dev, + pci_irq_vector(xdev->pci_dev, i), chan); + } + } + + return err; +} + +/** + * device_intr_setup - Requests interrupt handler for DMA device + * + * @xdev: Driver specific data for device + * + * Return: 0 on success and non zero value on failure. 
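 + *
 + * Note: the vector is requested exclusively when MSI/MSI-X is enabled and
 + * with IRQF_SHARED for legacy line interrupts, matching the flags set below.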
+ */ +static int device_intr_setup(struct xlnx_pcie_dma_device *xdev) +{ + int err; + unsigned long intr_flags = IRQF_SHARED; + + if (xdev->pci_dev->msix_enabled || xdev->pci_dev->msi_enabled) + intr_flags = PS_PCIE_DMA_IRQ_NOSHARE; + + err = devm_request_irq(xdev->dev, + pci_irq_vector(xdev->pci_dev, 0), + ps_pcie_dma_dev_intr_handler, + intr_flags, + "PS PCIe DMA Intr Handler", xdev); + if (err) + dev_err(xdev->dev, "Couldn't request irq %d\n", + pci_irq_vector(xdev->pci_dev, 0)); + + return err; +} + +/** + * irq_setup - Requests interrupts based on the interrupt type detected + * + * @xdev: Driver specific data for device + * + * Return: 0 on success and non zero value on failure. + */ +static int irq_setup(struct xlnx_pcie_dma_device *xdev) +{ + int err; + + if (xdev->irq_vecs == xdev->num_channels) + err = chan_intr_setup(xdev); + else + err = device_intr_setup(xdev); + + return err; +} + +static int platform_irq_setup(struct xlnx_pcie_dma_device *xdev) +{ + int err; + + err = devm_request_irq(xdev->dev, + xdev->platform_irq_vec, + ps_pcie_dma_dev_intr_handler, + IRQF_SHARED, + "PS PCIe Root DMA Handler", xdev); + if (err) + dev_err(xdev->dev, "Couldn't request irq %d\n", + xdev->platform_irq_vec); + + return err; +} + +/** + * irq_probe - Checks which interrupt types can be serviced by hardware + * + * @xdev: Driver specific data for device + * + * Return: Number of interrupt vectors when successful or -ENOSPC on failure + */ +static int irq_probe(struct xlnx_pcie_dma_device *xdev) +{ + struct pci_dev *pdev; + + pdev = xdev->pci_dev; + + xdev->irq_vecs = pci_alloc_irq_vectors(pdev, 1, xdev->num_channels, + PCI_IRQ_ALL_TYPES); + return xdev->irq_vecs; +} + +/** + * ps_pcie_check_intr_status - Checks channel interrupt status + * + * @chan: Pointer to the PS PCIe DMA channel structure + * + * Return: 0 if interrupt is pending on channel + * -1 if no interrupt is pending on channel + */ +static int ps_pcie_check_intr_status(struct ps_pcie_dma_chan *chan) +{ + int err = -1; + u32 status; + + if (chan->state != CHANNEL_AVAILABLE) + return err; + + status = ps_pcie_dma_read(chan, chan->intr_status_offset); + + if (status & DMA_INTSTATUS_SGLINTR_BIT) { + if (chan->primary_desc_cleanup) { + queue_work(chan->primary_desc_cleanup, + &chan->handle_primary_desc_cleanup); + } + /* Clearing Persistent bit */ + ps_pcie_dma_set_mask(chan, chan->intr_status_offset, + DMA_INTSTATUS_SGLINTR_BIT); + err = 0; + } + + if (status & DMA_INTSTATUS_SWINTR_BIT) { + if (chan->sw_intrs_wrkq) + queue_work(chan->sw_intrs_wrkq, &chan->handle_sw_intrs); + /* Clearing Persistent bit */ + ps_pcie_dma_set_mask(chan, chan->intr_status_offset, + DMA_INTSTATUS_SWINTR_BIT); + err = 0; + } + + if (status & DMA_INTSTATUS_DMAERR_BIT) { + dev_err(chan->dev, + "DMA Channel %d ControlStatus Reg: 0x%x", + chan->channel_number, status); + dev_err(chan->dev, + "Chn %d SrcQLmt = %d SrcQSz = %d SrcQNxt = %d", + chan->channel_number, + chan->chan_base->src_q_limit, + chan->chan_base->src_q_size, + chan->chan_base->src_q_next); + dev_err(chan->dev, + "Chn %d SrcStaLmt = %d SrcStaSz = %d SrcStaNxt = %d", + chan->channel_number, + chan->chan_base->stas_q_limit, + chan->chan_base->stas_q_size, + chan->chan_base->stas_q_next); + dev_err(chan->dev, + "Chn %d DstQLmt = %d DstQSz = %d DstQNxt = %d", + chan->channel_number, + chan->chan_base->dst_q_limit, + chan->chan_base->dst_q_size, + chan->chan_base->dst_q_next); + dev_err(chan->dev, + "Chan %d DstStaLmt = %d DstStaSz = %d DstStaNxt = %d", + chan->channel_number, + 
chan->chan_base->stad_q_limit, + chan->chan_base->stad_q_size, + chan->chan_base->stad_q_next); + /* Clearing Persistent bit */ + ps_pcie_dma_set_mask(chan, chan->intr_status_offset, + DMA_INTSTATUS_DMAERR_BIT); + + handle_error(chan); + + err = 0; + } + + return err; +} + +static int init_hw_components(struct ps_pcie_dma_chan *chan) +{ + if (chan->psrc_sgl_bd && chan->psrc_sta_bd) { + /* Programming SourceQ and StatusQ bd addresses */ + chan->chan_base->src_q_next = 0; + chan->chan_base->src_q_high = + upper_32_bits(chan->src_sgl_bd_pa); + chan->chan_base->src_q_size = chan->total_descriptors; + chan->chan_base->src_q_limit = 0; + if (chan->xdev->is_rootdma) { + chan->chan_base->src_q_low = ROOTDMA_Q_READ_ATTRIBUTE + | DMA_QPTRLO_QLOCAXI_BIT; + } else { + chan->chan_base->src_q_low = 0; + } + chan->chan_base->src_q_low |= + (lower_32_bits((chan->src_sgl_bd_pa)) + & ~(DMA_SRC_Q_LOW_BIT_SHIFT)) + | DMA_QPTRLO_Q_ENABLE_BIT; + + chan->chan_base->stas_q_next = 0; + chan->chan_base->stas_q_high = + upper_32_bits(chan->src_sta_bd_pa); + chan->chan_base->stas_q_size = chan->total_descriptors; + chan->chan_base->stas_q_limit = chan->total_descriptors - 1; + if (chan->xdev->is_rootdma) { + chan->chan_base->stas_q_low = ROOTDMA_Q_READ_ATTRIBUTE + | DMA_QPTRLO_QLOCAXI_BIT; + } else { + chan->chan_base->stas_q_low = 0; + } + chan->chan_base->stas_q_low |= + (lower_32_bits(chan->src_sta_bd_pa) + & ~(DMA_SRC_Q_LOW_BIT_SHIFT)) + | DMA_QPTRLO_Q_ENABLE_BIT; + } + + if (chan->pdst_sgl_bd && chan->pdst_sta_bd) { + /* Programming DestinationQ and StatusQ buffer descriptors */ + chan->chan_base->dst_q_next = 0; + chan->chan_base->dst_q_high = + upper_32_bits(chan->dst_sgl_bd_pa); + chan->chan_base->dst_q_size = chan->total_descriptors; + chan->chan_base->dst_q_limit = 0; + if (chan->xdev->is_rootdma) { + chan->chan_base->dst_q_low = ROOTDMA_Q_READ_ATTRIBUTE + | DMA_QPTRLO_QLOCAXI_BIT; + } else { + chan->chan_base->dst_q_low = 0; + } + chan->chan_base->dst_q_low |= + (lower_32_bits(chan->dst_sgl_bd_pa) + & ~(DMA_SRC_Q_LOW_BIT_SHIFT)) + | DMA_QPTRLO_Q_ENABLE_BIT; + + chan->chan_base->stad_q_next = 0; + chan->chan_base->stad_q_high = + upper_32_bits(chan->dst_sta_bd_pa); + chan->chan_base->stad_q_size = chan->total_descriptors; + chan->chan_base->stad_q_limit = chan->total_descriptors - 1; + if (chan->xdev->is_rootdma) { + chan->chan_base->stad_q_low = ROOTDMA_Q_READ_ATTRIBUTE + | DMA_QPTRLO_QLOCAXI_BIT; + } else { + chan->chan_base->stad_q_low = 0; + } + chan->chan_base->stad_q_low |= + (lower_32_bits(chan->dst_sta_bd_pa) + & ~(DMA_SRC_Q_LOW_BIT_SHIFT)) + | DMA_QPTRLO_Q_ENABLE_BIT; + } + + return 0; +} + +static void update_channel_read_attribute(struct ps_pcie_dma_chan *chan) +{ + if (chan->xdev->is_rootdma) { + /* For Root DMA, Host Memory and Buffer Descriptors + * will be on AXI side + */ + if (chan->srcq_buffer_location == BUFFER_LOC_PCI) { + chan->read_attribute = (AXI_ATTRIBUTE << + SRC_CTL_ATTRIB_BIT_SHIFT) | + SOURCE_CONTROL_BD_LOC_AXI; + } else if (chan->srcq_buffer_location == BUFFER_LOC_AXI) { + chan->read_attribute = AXI_ATTRIBUTE << + SRC_CTL_ATTRIB_BIT_SHIFT; + } + } else { + if (chan->srcq_buffer_location == BUFFER_LOC_PCI) { + chan->read_attribute = PCI_ATTRIBUTE << + SRC_CTL_ATTRIB_BIT_SHIFT; + } else if (chan->srcq_buffer_location == BUFFER_LOC_AXI) { + chan->read_attribute = (AXI_ATTRIBUTE << + SRC_CTL_ATTRIB_BIT_SHIFT) | + SOURCE_CONTROL_BD_LOC_AXI; + } + } +} + +static void update_channel_write_attribute(struct ps_pcie_dma_chan *chan) +{ + if (chan->xdev->is_rootdma) { + /* For Root DMA, 
Host Memory and Buffer Descriptors
+		 * will be on AXI side
+		 */
+		if (chan->dstq_buffer_location == BUFFER_LOC_PCI) {
+			chan->write_attribute = (AXI_ATTRIBUTE <<
+						SRC_CTL_ATTRIB_BIT_SHIFT) |
+						SOURCE_CONTROL_BD_LOC_AXI;
+		} else if (chan->dstq_buffer_location == BUFFER_LOC_AXI) {
+			chan->write_attribute = AXI_ATTRIBUTE <<
+						SRC_CTL_ATTRIB_BIT_SHIFT;
+		}
+	} else {
+		if (chan->dstq_buffer_location == BUFFER_LOC_PCI) {
+			chan->write_attribute = PCI_ATTRIBUTE <<
+						SRC_CTL_ATTRIB_BIT_SHIFT;
+		} else if (chan->dstq_buffer_location == BUFFER_LOC_AXI) {
+			chan->write_attribute = (AXI_ATTRIBUTE <<
+						SRC_CTL_ATTRIB_BIT_SHIFT) |
+						SOURCE_CONTROL_BD_LOC_AXI;
+		}
+	}
+	chan->write_attribute |= SOURCE_CONTROL_BACK_TO_BACK_PACK_BIT;
+}
+
+static int init_sw_components(struct ps_pcie_dma_chan *chan)
+{
+	if ((chan->ppkt_ctx_srcq) && (chan->psrc_sgl_bd) &&
+	    (chan->psrc_sta_bd)) {
+		memset(chan->ppkt_ctx_srcq, 0,
+		       sizeof(struct PACKET_TRANSFER_PARAMS)
+		       * chan->total_descriptors);
+
+		memset(chan->psrc_sgl_bd, 0,
+		       sizeof(struct SOURCE_DMA_DESCRIPTOR)
+		       * chan->total_descriptors);
+
+		memset(chan->psrc_sta_bd, 0,
+		       sizeof(struct STATUS_DMA_DESCRIPTOR)
+		       * chan->total_descriptors);
+
+		chan->src_avail_descriptors = chan->total_descriptors;
+
+		chan->src_sgl_freeidx = 0;
+		chan->src_staprobe_idx = 0;
+		chan->src_sta_hw_probe_idx = chan->total_descriptors - 1;
+		chan->idx_ctx_srcq_head = 0;
+		chan->idx_ctx_srcq_tail = 0;
+	}
+
+	if ((chan->ppkt_ctx_dstq) && (chan->pdst_sgl_bd) &&
+	    (chan->pdst_sta_bd)) {
+		memset(chan->ppkt_ctx_dstq, 0,
+		       sizeof(struct PACKET_TRANSFER_PARAMS)
+		       * chan->total_descriptors);
+
+		memset(chan->pdst_sgl_bd, 0,
+		       sizeof(struct DEST_DMA_DESCRIPTOR)
+		       * chan->total_descriptors);
+
+		memset(chan->pdst_sta_bd, 0,
+		       sizeof(struct STATUS_DMA_DESCRIPTOR)
+		       * chan->total_descriptors);
+
+		chan->dst_avail_descriptors = chan->total_descriptors;
+
+		chan->dst_sgl_freeidx = 0;
+		chan->dst_staprobe_idx = 0;
+		chan->dst_sta_hw_probe_idx = chan->total_descriptors - 1;
+		chan->idx_ctx_dstq_head = 0;
+		chan->idx_ctx_dstq_tail = 0;
+	}
+
+	return 0;
+}
+
+/**
+ * ps_pcie_chan_reset - Resets channel by programming relevant registers
+ *
+ * @chan: PS PCIe DMA channel information holder
+ * Return: void
+ */
+static void ps_pcie_chan_reset(struct ps_pcie_dma_chan *chan)
+{
+	/* Enable channel reset */
+	ps_pcie_dma_set_mask(chan, DMA_CNTRL_REG_OFFSET, DMA_CNTRL_RST_BIT);
+
+	mdelay(10);
+
+	/* Disable channel reset */
+	ps_pcie_dma_clr_mask(chan, DMA_CNTRL_REG_OFFSET, DMA_CNTRL_RST_BIT);
+}
+
+/**
+ * poll_completed_transactions - Function invoked by poll timer
+ *
+ * @arg: Pointer to PS PCIe DMA channel information
+ * Return: void
+ */
+static void poll_completed_transactions(unsigned long arg)
+{
+	struct ps_pcie_dma_chan *chan = (struct ps_pcie_dma_chan *)arg;
+
+	if (chan->state == CHANNEL_AVAILABLE) {
+		queue_work(chan->primary_desc_cleanup,
+			   &chan->handle_primary_desc_cleanup);
+	}
+
+	mod_timer(&chan->poll_timer, jiffies + chan->poll_timer_freq);
+}
+
+static bool check_descriptors_for_two_queues(struct ps_pcie_dma_chan *chan,
+					     struct ps_pcie_tx_segment *seg)
+{
+	if (seg->tx_elements.src_sgl) {
+		if (chan->src_avail_descriptors >=
+		    seg->tx_elements.srcq_num_elemets) {
+			return true;
+		}
+	} else if (seg->tx_elements.dst_sgl) {
+		if (chan->dst_avail_descriptors >=
+		    seg->tx_elements.dstq_num_elemets) {
+			return true;
+		}
+	}
+
+	return false;
+}
+
+static bool check_descriptors_for_all_queues(struct ps_pcie_dma_chan *chan,
+					     struct ps_pcie_tx_segment *seg)
+{
+	if
((chan->src_avail_descriptors >= + seg->tx_elements.srcq_num_elemets) && + (chan->dst_avail_descriptors >= + seg->tx_elements.dstq_num_elemets)) { + return true; + } + + return false; +} + +static bool check_descriptor_availability(struct ps_pcie_dma_chan *chan, + struct ps_pcie_tx_segment *seg) +{ + if (chan->num_queues == DEFAULT_DMA_QUEUES) + return check_descriptors_for_all_queues(chan, seg); + else + return check_descriptors_for_two_queues(chan, seg); +} + +static void handle_error(struct ps_pcie_dma_chan *chan) +{ + if (chan->state != CHANNEL_AVAILABLE) + return; + + spin_lock(&chan->channel_lock); + chan->state = CHANNEL_ERROR; + spin_unlock(&chan->channel_lock); + + if (chan->maintenance_workq) + queue_work(chan->maintenance_workq, &chan->handle_chan_reset); +} + +static void xlnx_ps_pcie_update_srcq(struct ps_pcie_dma_chan *chan, + struct ps_pcie_tx_segment *seg) +{ + struct SOURCE_DMA_DESCRIPTOR *pdesc; + struct PACKET_TRANSFER_PARAMS *pkt_ctx = NULL; + struct scatterlist *sgl_ptr; + unsigned int i; + + pkt_ctx = chan->ppkt_ctx_srcq + chan->idx_ctx_srcq_head; + if (pkt_ctx->availability_status == IN_USE) { + dev_err(chan->dev, + "src pkt context not avail for channel %d\n", + chan->channel_number); + handle_error(chan); + return; + } + + pkt_ctx->availability_status = IN_USE; + pkt_ctx->sgl = seg->tx_elements.src_sgl; + + if (chan->srcq_buffer_location == BUFFER_LOC_PCI) + pkt_ctx->seg = seg; + + /* Get the address of the next available DMA Descriptor */ + pdesc = chan->psrc_sgl_bd + chan->src_sgl_freeidx; + pkt_ctx->idx_sop = chan->src_sgl_freeidx; + + /* Build transactions using information in the scatter gather list */ + for_each_sg(seg->tx_elements.src_sgl, sgl_ptr, + seg->tx_elements.srcq_num_elemets, i) { + if (chan->xdev->dma_buf_ext_addr) { + pdesc->system_address = + (u64)sg_dma_address(sgl_ptr); + } else { + pdesc->system_address = + (u32)sg_dma_address(sgl_ptr); + } + + pdesc->control_byte_count = (sg_dma_len(sgl_ptr) & + SOURCE_CONTROL_BD_BYTE_COUNT_MASK) | + chan->read_attribute; + if (pkt_ctx->seg) + pkt_ctx->requested_bytes += sg_dma_len(sgl_ptr); + + pdesc->user_handle = chan->idx_ctx_srcq_head; + pdesc->user_id = DEFAULT_UID; + /* Check if this is last descriptor */ + if (i == (seg->tx_elements.srcq_num_elemets - 1)) { + pkt_ctx->idx_eop = chan->src_sgl_freeidx; + pdesc->control_byte_count = pdesc->control_byte_count | + SOURCE_CONTROL_BD_EOP_BIT | + SOURCE_CONTROL_BD_INTR_BIT; + } + chan->src_sgl_freeidx++; + if (chan->src_sgl_freeidx == chan->total_descriptors) + chan->src_sgl_freeidx = 0; + pdesc = chan->psrc_sgl_bd + chan->src_sgl_freeidx; + spin_lock(&chan->src_desc_lock); + chan->src_avail_descriptors--; + spin_unlock(&chan->src_desc_lock); + } + + chan->chan_base->src_q_limit = chan->src_sgl_freeidx; + chan->idx_ctx_srcq_head++; + if (chan->idx_ctx_srcq_head == chan->total_descriptors) + chan->idx_ctx_srcq_head = 0; +} + +static void xlnx_ps_pcie_update_dstq(struct ps_pcie_dma_chan *chan, + struct ps_pcie_tx_segment *seg) +{ + struct DEST_DMA_DESCRIPTOR *pdesc; + struct PACKET_TRANSFER_PARAMS *pkt_ctx = NULL; + struct scatterlist *sgl_ptr; + unsigned int i; + + pkt_ctx = chan->ppkt_ctx_dstq + chan->idx_ctx_dstq_head; + if (pkt_ctx->availability_status == IN_USE) { + dev_err(chan->dev, + "dst pkt context not avail for channel %d\n", + chan->channel_number); + handle_error(chan); + + return; + } + + pkt_ctx->availability_status = IN_USE; + pkt_ctx->sgl = seg->tx_elements.dst_sgl; + + if (chan->dstq_buffer_location == BUFFER_LOC_PCI) + pkt_ctx->seg = seg; + + 
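+	/* Get the address of the next available DMA Descriptor */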
pdesc = chan->pdst_sgl_bd + chan->dst_sgl_freeidx; + pkt_ctx->idx_sop = chan->dst_sgl_freeidx; + + /* Build transactions using information in the scatter gather list */ + for_each_sg(seg->tx_elements.dst_sgl, sgl_ptr, + seg->tx_elements.dstq_num_elemets, i) { + if (chan->xdev->dma_buf_ext_addr) { + pdesc->system_address = + (u64)sg_dma_address(sgl_ptr); + } else { + pdesc->system_address = + (u32)sg_dma_address(sgl_ptr); + } + + pdesc->control_byte_count = (sg_dma_len(sgl_ptr) & + SOURCE_CONTROL_BD_BYTE_COUNT_MASK) | + chan->write_attribute; + + if (pkt_ctx->seg) + pkt_ctx->requested_bytes += sg_dma_len(sgl_ptr); + + pdesc->user_handle = chan->idx_ctx_dstq_head; + /* Check if this is last descriptor */ + if (i == (seg->tx_elements.dstq_num_elemets - 1)) + pkt_ctx->idx_eop = chan->dst_sgl_freeidx; + chan->dst_sgl_freeidx++; + if (chan->dst_sgl_freeidx == chan->total_descriptors) + chan->dst_sgl_freeidx = 0; + pdesc = chan->pdst_sgl_bd + chan->dst_sgl_freeidx; + spin_lock(&chan->dst_desc_lock); + chan->dst_avail_descriptors--; + spin_unlock(&chan->dst_desc_lock); + } + + chan->chan_base->dst_q_limit = chan->dst_sgl_freeidx; + chan->idx_ctx_dstq_head++; + if (chan->idx_ctx_dstq_head == chan->total_descriptors) + chan->idx_ctx_dstq_head = 0; +} + +static void ps_pcie_chan_program_work(struct work_struct *work) +{ + struct ps_pcie_dma_chan *chan = + (struct ps_pcie_dma_chan *)container_of(work, + struct ps_pcie_dma_chan, + handle_chan_programming); + struct ps_pcie_tx_segment *seg = NULL; + + while (chan->state == CHANNEL_AVAILABLE) { + spin_lock(&chan->active_list_lock); + seg = list_first_entry_or_null(&chan->active_list, + struct ps_pcie_tx_segment, node); + spin_unlock(&chan->active_list_lock); + + if (!seg) + break; + + if (check_descriptor_availability(chan, seg) == false) + break; + + spin_lock(&chan->active_list_lock); + list_del(&seg->node); + spin_unlock(&chan->active_list_lock); + + if (seg->tx_elements.src_sgl) + xlnx_ps_pcie_update_srcq(chan, seg); + + if (seg->tx_elements.dst_sgl) + xlnx_ps_pcie_update_dstq(chan, seg); + } +} + +/** + * dst_cleanup_work - Goes through all completed elements in status Q + * and invokes callbacks for the concerned DMA transaction. + * + * @work: Work associated with the task + * + * Return: void + */ +static void dst_cleanup_work(struct work_struct *work) +{ + struct ps_pcie_dma_chan *chan = + (struct ps_pcie_dma_chan *)container_of(work, + struct ps_pcie_dma_chan, handle_dstq_desc_cleanup); + + struct STATUS_DMA_DESCRIPTOR *psta_bd; + struct DEST_DMA_DESCRIPTOR *pdst_bd; + struct PACKET_TRANSFER_PARAMS *ppkt_ctx; + struct dmaengine_result rslt; + u32 completed_bytes; + u32 dstq_desc_idx; + + psta_bd = chan->pdst_sta_bd + chan->dst_staprobe_idx; + + while (psta_bd->status_flag_byte_count & STA_BD_COMPLETED_BIT) { + if (psta_bd->status_flag_byte_count & + STA_BD_DESTINATION_ERROR_BIT) { + dev_err(chan->dev, + "Dst Sts Elmnt %d chan %d has Destination Err", + chan->dst_staprobe_idx + 1, + chan->channel_number); + handle_error(chan); + break; + } + if (psta_bd->status_flag_byte_count & STA_BD_SOURCE_ERROR_BIT) { + dev_err(chan->dev, + "Dst Sts Elmnt %d chan %d has Source Error", + chan->dst_staprobe_idx + 1, + chan->channel_number); + handle_error(chan); + break; + } + if (psta_bd->status_flag_byte_count & + STA_BD_INTERNAL_ERROR_BIT) { + dev_err(chan->dev, + "Dst Sts Elmnt %d chan %d has Internal Error", + chan->dst_staprobe_idx + 1, + chan->channel_number); + handle_error(chan); + break; + } + /* we are using 64 bit USER field. 
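+	 * The source queue user handle/id fields (DEFAULT_UID is non-zero)
+	 * are copied by hardware into the upper status bits, so an element
+	 * with the upper-status-nonzero bit clear is treated as an error.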
*/ + if ((psta_bd->status_flag_byte_count & + STA_BD_UPPER_STATUS_NONZERO_BIT) == 0) { + dev_err(chan->dev, + "Dst Sts Elmnt %d for chan %d has NON ZERO", + chan->dst_staprobe_idx + 1, + chan->channel_number); + handle_error(chan); + break; + } + + chan->idx_ctx_dstq_tail = psta_bd->user_handle; + ppkt_ctx = chan->ppkt_ctx_dstq + chan->idx_ctx_dstq_tail; + completed_bytes = (psta_bd->status_flag_byte_count & + STA_BD_BYTE_COUNT_MASK) >> + STA_BD_BYTE_COUNT_SHIFT; + + memset(psta_bd, 0, sizeof(struct STATUS_DMA_DESCRIPTOR)); + + chan->dst_staprobe_idx++; + + if (chan->dst_staprobe_idx == chan->total_descriptors) + chan->dst_staprobe_idx = 0; + + chan->dst_sta_hw_probe_idx++; + + if (chan->dst_sta_hw_probe_idx == chan->total_descriptors) + chan->dst_sta_hw_probe_idx = 0; + + chan->chan_base->stad_q_limit = chan->dst_sta_hw_probe_idx; + + psta_bd = chan->pdst_sta_bd + chan->dst_staprobe_idx; + + dstq_desc_idx = ppkt_ctx->idx_sop; + + do { + pdst_bd = chan->pdst_sgl_bd + dstq_desc_idx; + memset(pdst_bd, 0, + sizeof(struct DEST_DMA_DESCRIPTOR)); + + spin_lock(&chan->dst_desc_lock); + chan->dst_avail_descriptors++; + spin_unlock(&chan->dst_desc_lock); + + if (dstq_desc_idx == ppkt_ctx->idx_eop) + break; + + dstq_desc_idx++; + + if (dstq_desc_idx == chan->total_descriptors) + dstq_desc_idx = 0; + + } while (1); + + /* Invoking callback */ + if (ppkt_ctx->seg) { + spin_lock(&chan->cookie_lock); + dma_cookie_complete(&ppkt_ctx->seg->async_tx); + spin_unlock(&chan->cookie_lock); + rslt.result = DMA_TRANS_NOERROR; + rslt.residue = ppkt_ctx->requested_bytes - + completed_bytes; + dmaengine_desc_get_callback_invoke(&ppkt_ctx->seg->async_tx, + &rslt); + mempool_free(ppkt_ctx->seg, chan->transactions_pool); + } + memset(ppkt_ctx, 0, sizeof(struct PACKET_TRANSFER_PARAMS)); + } + + complete(&chan->dstq_work_complete); +} + +/** + * src_cleanup_work - Goes through all completed elements in status Q and + * invokes callbacks for the concerned DMA transaction. 
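+ * It also recycles completed Src Q buffer descriptors and advances the
+ * hardware status Q limit register as elements are consumed.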
+ * + * @work: Work associated with the task + * + * Return: void + */ +static void src_cleanup_work(struct work_struct *work) +{ + struct ps_pcie_dma_chan *chan = + (struct ps_pcie_dma_chan *)container_of( + work, struct ps_pcie_dma_chan, handle_srcq_desc_cleanup); + + struct STATUS_DMA_DESCRIPTOR *psta_bd; + struct SOURCE_DMA_DESCRIPTOR *psrc_bd; + struct PACKET_TRANSFER_PARAMS *ppkt_ctx; + struct dmaengine_result rslt; + u32 completed_bytes; + u32 srcq_desc_idx; + + psta_bd = chan->psrc_sta_bd + chan->src_staprobe_idx; + + while (psta_bd->status_flag_byte_count & STA_BD_COMPLETED_BIT) { + if (psta_bd->status_flag_byte_count & + STA_BD_DESTINATION_ERROR_BIT) { + dev_err(chan->dev, + "Src Sts Elmnt %d chan %d has Dst Error", + chan->src_staprobe_idx + 1, + chan->channel_number); + handle_error(chan); + break; + } + if (psta_bd->status_flag_byte_count & STA_BD_SOURCE_ERROR_BIT) { + dev_err(chan->dev, + "Src Sts Elmnt %d chan %d has Source Error", + chan->src_staprobe_idx + 1, + chan->channel_number); + handle_error(chan); + break; + } + if (psta_bd->status_flag_byte_count & + STA_BD_INTERNAL_ERROR_BIT) { + dev_err(chan->dev, + "Src Sts Elmnt %d chan %d has Internal Error", + chan->src_staprobe_idx + 1, + chan->channel_number); + handle_error(chan); + break; + } + if ((psta_bd->status_flag_byte_count + & STA_BD_UPPER_STATUS_NONZERO_BIT) == 0) { + dev_err(chan->dev, + "Src Sts Elmnt %d chan %d has NonZero", + chan->src_staprobe_idx + 1, + chan->channel_number); + handle_error(chan); + break; + } + chan->idx_ctx_srcq_tail = psta_bd->user_handle; + ppkt_ctx = chan->ppkt_ctx_srcq + chan->idx_ctx_srcq_tail; + completed_bytes = (psta_bd->status_flag_byte_count + & STA_BD_BYTE_COUNT_MASK) >> + STA_BD_BYTE_COUNT_SHIFT; + + memset(psta_bd, 0, sizeof(struct STATUS_DMA_DESCRIPTOR)); + + chan->src_staprobe_idx++; + + if (chan->src_staprobe_idx == chan->total_descriptors) + chan->src_staprobe_idx = 0; + + chan->src_sta_hw_probe_idx++; + + if (chan->src_sta_hw_probe_idx == chan->total_descriptors) + chan->src_sta_hw_probe_idx = 0; + + chan->chan_base->stas_q_limit = chan->src_sta_hw_probe_idx; + + psta_bd = chan->psrc_sta_bd + chan->src_staprobe_idx; + + srcq_desc_idx = ppkt_ctx->idx_sop; + + do { + psrc_bd = chan->psrc_sgl_bd + srcq_desc_idx; + memset(psrc_bd, 0, + sizeof(struct SOURCE_DMA_DESCRIPTOR)); + + spin_lock(&chan->src_desc_lock); + chan->src_avail_descriptors++; + spin_unlock(&chan->src_desc_lock); + + if (srcq_desc_idx == ppkt_ctx->idx_eop) + break; + srcq_desc_idx++; + + if (srcq_desc_idx == chan->total_descriptors) + srcq_desc_idx = 0; + + } while (1); + + /* Invoking callback */ + if (ppkt_ctx->seg) { + spin_lock(&chan->cookie_lock); + dma_cookie_complete(&ppkt_ctx->seg->async_tx); + spin_unlock(&chan->cookie_lock); + rslt.result = DMA_TRANS_NOERROR; + rslt.residue = ppkt_ctx->requested_bytes - + completed_bytes; + dmaengine_desc_get_callback_invoke(&ppkt_ctx->seg->async_tx, + &rslt); + mempool_free(ppkt_ctx->seg, chan->transactions_pool); + } + memset(ppkt_ctx, 0, sizeof(struct PACKET_TRANSFER_PARAMS)); + } + + complete(&chan->srcq_work_complete); +} + +/** + * ps_pcie_chan_primary_work - Masks out interrupts, invokes source Q and + * destination Q processing. Waits for source Q and destination Q processing + * and re enables interrupts. 
Same work is invoked by timer if coalesce count
+ * is greater than zero and no interrupt occurs before the timeout period
+ *
+ * @work: Work associated with the task
+ *
+ * Return: void
+ */
+static void ps_pcie_chan_primary_work(struct work_struct *work)
+{
+	struct ps_pcie_dma_chan *chan =
+		(struct ps_pcie_dma_chan *)container_of(
+			work, struct ps_pcie_dma_chan,
+			handle_primary_desc_cleanup);
+
+	/* Disable interrupts for Channel */
+	ps_pcie_dma_clr_mask(chan, chan->intr_control_offset,
+			     DMA_INTCNTRL_ENABLINTR_BIT);
+
+	if (chan->psrc_sgl_bd) {
+		reinit_completion(&chan->srcq_work_complete);
+		if (chan->srcq_desc_cleanup)
+			queue_work(chan->srcq_desc_cleanup,
+				   &chan->handle_srcq_desc_cleanup);
+	}
+	if (chan->pdst_sgl_bd) {
+		reinit_completion(&chan->dstq_work_complete);
+		if (chan->dstq_desc_cleanup)
+			queue_work(chan->dstq_desc_cleanup,
+				   &chan->handle_dstq_desc_cleanup);
+	}
+
+	if (chan->psrc_sgl_bd)
+		wait_for_completion_interruptible(&chan->srcq_work_complete);
+	if (chan->pdst_sgl_bd)
+		wait_for_completion_interruptible(&chan->dstq_work_complete);
+
+	/* Enable interrupts for channel */
+	ps_pcie_dma_set_mask(chan, chan->intr_control_offset,
+			     DMA_INTCNTRL_ENABLINTR_BIT);
+
+	if (chan->chan_programming) {
+		queue_work(chan->chan_programming,
+			   &chan->handle_chan_programming);
+	}
+
+	if (chan->coalesce_count > 0 && chan->poll_timer.function)
+		mod_timer(&chan->poll_timer, jiffies + chan->poll_timer_freq);
+}
+
+static int read_rootdma_config(struct platform_device *platform_dev,
+			       struct xlnx_pcie_dma_device *xdev)
+{
+	int err;
+	struct resource *r;
+
+	err = dma_set_mask(&platform_dev->dev, DMA_BIT_MASK(64));
+	if (err) {
+		dev_info(&platform_dev->dev, "Cannot set 64 bit DMA mask\n");
+		err = dma_set_mask(&platform_dev->dev, DMA_BIT_MASK(32));
+		if (err) {
+			dev_err(&platform_dev->dev, "DMA mask set error\n");
+			return err;
+		}
+	}
+
+	err = dma_set_coherent_mask(&platform_dev->dev, DMA_BIT_MASK(64));
+	if (err) {
+		dev_info(&platform_dev->dev, "Cannot set 64 bit consistent DMA mask\n");
+		err = dma_set_coherent_mask(&platform_dev->dev,
+					    DMA_BIT_MASK(32));
+		if (err) {
+			dev_err(&platform_dev->dev, "Cannot set consistent DMA mask\n");
+			return err;
+		}
+	}
+
+	r = platform_get_resource_byname(platform_dev, IORESOURCE_MEM,
+					 "ps_pcie_regbase");
+	if (!r) {
+		dev_err(&platform_dev->dev,
+			"Unable to find memory resource for root dma\n");
+		/* PTR_ERR() on a NULL pointer would return 0; fail explicitly */
+		return -ENODEV;
+	}
+
+	xdev->reg_base = devm_ioremap_resource(&platform_dev->dev, r);
+	if (IS_ERR(xdev->reg_base)) {
+		dev_err(&platform_dev->dev, "ioresource error for root dma\n");
+		return PTR_ERR(xdev->reg_base);
+	}
+
+	xdev->platform_irq_vec =
+		platform_get_irq_byname(platform_dev,
+					"ps_pcie_rootdma_intr");
+	if (xdev->platform_irq_vec < 0) {
+		dev_err(&platform_dev->dev,
+			"Unable to get interrupt number for root dma\n");
+		return xdev->platform_irq_vec;
+	}
+
+	err = device_property_read_u16(&platform_dev->dev, "dma_vendorid",
+				       &xdev->rootdma_vendor);
+	if (err) {
+		dev_err(&platform_dev->dev,
+			"Unable to find RootDMA PCI Vendor Id\n");
+		return err;
+	}
+
+	err = device_property_read_u16(&platform_dev->dev, "dma_deviceid",
+				       &xdev->rootdma_device);
+	if (err) {
+		dev_err(&platform_dev->dev,
+			"Unable to find RootDMA PCI Device Id\n");
+		return err;
+	}
+
+	xdev->common.dev = xdev->dev;
+
+	return 0;
+}
+
+static int read_epdma_config(struct platform_device *platform_dev,
+			     struct xlnx_pcie_dma_device *xdev)
+{
+	int err;
+	struct pci_dev *pdev;
+	u16 i;
+	void __iomem * const *pci_iomap;
+	unsigned long
pci_bar_length; + + pdev = *((struct pci_dev **)(platform_dev->dev.platform_data)); + xdev->pci_dev = pdev; + + for (i = 0; i < MAX_BARS; i++) { + if (pci_resource_len(pdev, i) == 0) + continue; + xdev->bar_mask = xdev->bar_mask | (1 << (i)); + } + + err = pcim_iomap_regions(pdev, xdev->bar_mask, PLATFORM_DRIVER_NAME); + if (err) { + dev_err(&pdev->dev, "Cannot request PCI regions, aborting\n"); + return err; + } + + pci_iomap = pcim_iomap_table(pdev); + if (!pci_iomap) { + err = -ENOMEM; + return err; + } + + for (i = 0; i < MAX_BARS; i++) { + pci_bar_length = pci_resource_len(pdev, i); + if (pci_bar_length == 0) { + xdev->bar_info[i].BAR_LENGTH = 0; + xdev->bar_info[i].BAR_PHYS_ADDR = 0; + xdev->bar_info[i].BAR_VIRT_ADDR = NULL; + } else { + xdev->bar_info[i].BAR_LENGTH = + pci_bar_length; + xdev->bar_info[i].BAR_PHYS_ADDR = + pci_resource_start(pdev, i); + xdev->bar_info[i].BAR_VIRT_ADDR = + pci_iomap[i]; + } + } + + xdev->reg_base = pci_iomap[DMA_BAR_NUMBER]; + + err = irq_probe(xdev); + if (err < 0) { + dev_err(&pdev->dev, "Cannot probe irq lines for device %d\n", + platform_dev->id); + return err; + } + + xdev->common.dev = &pdev->dev; + + return 0; +} + +static int probe_channel_properties(struct platform_device *platform_dev, + struct xlnx_pcie_dma_device *xdev, + u16 channel_number) +{ + int i; + char propertyname[CHANNEL_PROPERTY_LENGTH]; + int numvals, ret; + u32 *val; + struct ps_pcie_dma_chan *channel; + struct ps_pcie_dma_channel_match *xlnx_match; + + snprintf(propertyname, CHANNEL_PROPERTY_LENGTH, + "ps_pcie_channel%d", channel_number); + + channel = &xdev->channels[channel_number]; + + spin_lock_init(&channel->channel_lock); + spin_lock_init(&channel->cookie_lock); + + INIT_LIST_HEAD(&channel->pending_list); + spin_lock_init(&channel->pending_list_lock); + + INIT_LIST_HEAD(&channel->active_list); + spin_lock_init(&channel->active_list_lock); + + spin_lock_init(&channel->src_desc_lock); + spin_lock_init(&channel->dst_desc_lock); + + INIT_LIST_HEAD(&channel->pending_interrupts_list); + spin_lock_init(&channel->pending_interrupts_lock); + + INIT_LIST_HEAD(&channel->active_interrupts_list); + spin_lock_init(&channel->active_interrupts_lock); + + init_completion(&channel->srcq_work_complete); + init_completion(&channel->dstq_work_complete); + init_completion(&channel->chan_shutdown_complt); + init_completion(&channel->chan_terminate_complete); + + if (device_property_present(&platform_dev->dev, propertyname)) { + numvals = device_property_read_u32_array(&platform_dev->dev, + propertyname, NULL, 0); + + if (numvals < 0) + return numvals; + + val = devm_kzalloc(&platform_dev->dev, sizeof(u32) * numvals, + GFP_KERNEL); + + if (!val) + return -ENOMEM; + + ret = device_property_read_u32_array(&platform_dev->dev, + propertyname, val, + numvals); + if (ret < 0) { + dev_err(&platform_dev->dev, + "Unable to read property %s\n", propertyname); + return ret; + } + + for (i = 0; i < numvals; i++) { + switch (i) { + case DMA_CHANNEL_DIRECTION: + channel->direction = + (val[DMA_CHANNEL_DIRECTION] == + PCIE_AXI_DIRECTION) ? 
+ DMA_TO_DEVICE : DMA_FROM_DEVICE; + break; + case NUM_DESCRIPTORS: + channel->total_descriptors = + val[NUM_DESCRIPTORS]; + if (channel->total_descriptors > + MAX_DESCRIPTORS) { + dev_info(&platform_dev->dev, + "Descriptors > alowd max\n"); + channel->total_descriptors = + MAX_DESCRIPTORS; + } + break; + case NUM_QUEUES: + channel->num_queues = val[NUM_QUEUES]; + switch (channel->num_queues) { + case DEFAULT_DMA_QUEUES: + break; + case TWO_DMA_QUEUES: + break; + default: + dev_info(&platform_dev->dev, + "Incorrect Q number for dma chan\n"); + channel->num_queues = DEFAULT_DMA_QUEUES; + } + break; + case COALESE_COUNT: + channel->coalesce_count = val[COALESE_COUNT]; + + if (channel->coalesce_count > + MAX_COALESCE_COUNT) { + dev_info(&platform_dev->dev, + "Invalid coalesce Count\n"); + channel->coalesce_count = + MAX_COALESCE_COUNT; + } + break; + case POLL_TIMER_FREQUENCY: + channel->poll_timer_freq = + val[POLL_TIMER_FREQUENCY]; + break; + default: + dev_err(&platform_dev->dev, + "Check order of channel properties!\n"); + } + } + } else { + dev_err(&platform_dev->dev, + "Property %s not present. Invalid configuration!\n", + propertyname); + return -ENOTSUPP; + } + + if (channel->direction == DMA_TO_DEVICE) { + if (channel->num_queues == DEFAULT_DMA_QUEUES) { + channel->srcq_buffer_location = BUFFER_LOC_PCI; + channel->dstq_buffer_location = BUFFER_LOC_AXI; + } else { + channel->srcq_buffer_location = BUFFER_LOC_PCI; + channel->dstq_buffer_location = BUFFER_LOC_INVALID; + } + } else { + if (channel->num_queues == DEFAULT_DMA_QUEUES) { + channel->srcq_buffer_location = BUFFER_LOC_AXI; + channel->dstq_buffer_location = BUFFER_LOC_PCI; + } else { + channel->srcq_buffer_location = BUFFER_LOC_INVALID; + channel->dstq_buffer_location = BUFFER_LOC_PCI; + } + } + + channel->xdev = xdev; + channel->channel_number = channel_number; + + if (xdev->is_rootdma) { + channel->dev = xdev->dev; + channel->intr_status_offset = DMA_AXI_INTR_STATUS_REG_OFFSET; + channel->intr_control_offset = DMA_AXI_INTR_CNTRL_REG_OFFSET; + } else { + channel->dev = &xdev->pci_dev->dev; + channel->intr_status_offset = DMA_PCIE_INTR_STATUS_REG_OFFSET; + channel->intr_control_offset = DMA_PCIE_INTR_CNTRL_REG_OFFSET; + } + + channel->chan_base = + (struct DMA_ENGINE_REGISTERS *)((__force char *)(xdev->reg_base) + + (channel_number * DMA_CHANNEL_REGS_SIZE)); + + if (((channel->chan_base->dma_channel_status) & + DMA_STATUS_DMA_PRES_BIT) == 0) { + dev_err(&platform_dev->dev, + "Hardware reports channel not present\n"); + return -ENOTSUPP; + } + + update_channel_read_attribute(channel); + update_channel_write_attribute(channel); + + xlnx_match = devm_kzalloc(&platform_dev->dev, + sizeof(struct ps_pcie_dma_channel_match), + GFP_KERNEL); + + if (!xlnx_match) + return -ENOMEM; + + if (xdev->is_rootdma) { + xlnx_match->pci_vendorid = xdev->rootdma_vendor; + xlnx_match->pci_deviceid = xdev->rootdma_device; + } else { + xlnx_match->pci_vendorid = xdev->pci_dev->vendor; + xlnx_match->pci_deviceid = xdev->pci_dev->device; + xlnx_match->bar_params = xdev->bar_info; + } + + xlnx_match->board_number = xdev->board_number; + xlnx_match->channel_number = channel_number; + xlnx_match->direction = xdev->channels[channel_number].direction; + + channel->common.private = (void *)xlnx_match; + + channel->common.device = &xdev->common; + list_add_tail(&channel->common.device_node, &xdev->common.channels); + + return 0; +} + +static void xlnx_ps_pcie_destroy_mempool(struct ps_pcie_dma_chan *chan) +{ + mempool_destroy(chan->transactions_pool); + + 
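+	/*
+	 * mempool_destroy() ignores a NULL pool, so this teardown is safe
+	 * even if channel setup failed before both pools were allocated.
+	 */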
mempool_destroy(chan->intr_transactions_pool); +} + +static void xlnx_ps_pcie_free_worker_queues(struct ps_pcie_dma_chan *chan) +{ + if (chan->maintenance_workq) + destroy_workqueue(chan->maintenance_workq); + + if (chan->sw_intrs_wrkq) + destroy_workqueue(chan->sw_intrs_wrkq); + + if (chan->srcq_desc_cleanup) + destroy_workqueue(chan->srcq_desc_cleanup); + + if (chan->dstq_desc_cleanup) + destroy_workqueue(chan->dstq_desc_cleanup); + + if (chan->chan_programming) + destroy_workqueue(chan->chan_programming); + + if (chan->primary_desc_cleanup) + destroy_workqueue(chan->primary_desc_cleanup); +} + +static void xlnx_ps_pcie_free_pkt_ctxts(struct ps_pcie_dma_chan *chan) +{ + kfree(chan->ppkt_ctx_srcq); + + kfree(chan->ppkt_ctx_dstq); +} + +static void xlnx_ps_pcie_free_descriptors(struct ps_pcie_dma_chan *chan) +{ + ssize_t size; + + if (chan->psrc_sgl_bd) { + size = chan->total_descriptors * + sizeof(struct SOURCE_DMA_DESCRIPTOR); + dma_free_coherent(chan->dev, size, chan->psrc_sgl_bd, + chan->src_sgl_bd_pa); + } + + if (chan->pdst_sgl_bd) { + size = chan->total_descriptors * + sizeof(struct DEST_DMA_DESCRIPTOR); + dma_free_coherent(chan->dev, size, chan->pdst_sgl_bd, + chan->dst_sgl_bd_pa); + } + + if (chan->psrc_sta_bd) { + size = chan->total_descriptors * + sizeof(struct STATUS_DMA_DESCRIPTOR); + dma_free_coherent(chan->dev, size, chan->psrc_sta_bd, + chan->src_sta_bd_pa); + } + + if (chan->pdst_sta_bd) { + size = chan->total_descriptors * + sizeof(struct STATUS_DMA_DESCRIPTOR); + dma_free_coherent(chan->dev, size, chan->pdst_sta_bd, + chan->dst_sta_bd_pa); + } +} + +static int xlnx_ps_pcie_channel_activate(struct ps_pcie_dma_chan *chan) +{ + u32 reg = chan->coalesce_count; + + reg = reg << DMA_INTCNTRL_SGCOLSCCNT_BIT_SHIFT; + + /* Enable Interrupts for channel */ + ps_pcie_dma_set_mask(chan, chan->intr_control_offset, + reg | DMA_INTCNTRL_ENABLINTR_BIT | + DMA_INTCNTRL_DMAERRINTR_BIT | + DMA_INTCNTRL_DMASGINTR_BIT); + + /* Enable DMA */ + ps_pcie_dma_set_mask(chan, DMA_CNTRL_REG_OFFSET, + DMA_CNTRL_ENABL_BIT | + DMA_CNTRL_64BIT_STAQ_ELEMSZ_BIT); + + spin_lock(&chan->channel_lock); + chan->state = CHANNEL_AVAILABLE; + spin_unlock(&chan->channel_lock); + + /* Activate timer if required */ + if ((chan->coalesce_count > 0) && !chan->poll_timer.function) + xlnx_ps_pcie_alloc_poll_timer(chan); + + return 0; +} + +static void xlnx_ps_pcie_channel_quiesce(struct ps_pcie_dma_chan *chan) +{ + /* Disable interrupts for Channel */ + ps_pcie_dma_clr_mask(chan, chan->intr_control_offset, + DMA_INTCNTRL_ENABLINTR_BIT); + + /* Delete timer if it is created */ + if ((chan->coalesce_count > 0) && (chan->poll_timer.function)) + xlnx_ps_pcie_free_poll_timer(chan); + + /* Flush descriptor cleaning work queues */ + if (chan->primary_desc_cleanup) + flush_workqueue(chan->primary_desc_cleanup); + + /* Flush channel programming work queue */ + if (chan->chan_programming) + flush_workqueue(chan->chan_programming); + + /* Clear the persistent bits */ + ps_pcie_dma_set_mask(chan, chan->intr_status_offset, + DMA_INTSTATUS_DMAERR_BIT | + DMA_INTSTATUS_SGLINTR_BIT | + DMA_INTSTATUS_SWINTR_BIT); + + /* Disable DMA channel */ + ps_pcie_dma_clr_mask(chan, DMA_CNTRL_REG_OFFSET, DMA_CNTRL_ENABL_BIT); + + spin_lock(&chan->channel_lock); + chan->state = CHANNEL_UNAVIALBLE; + spin_unlock(&chan->channel_lock); +} + +static u32 total_bytes_in_sgl(struct scatterlist *sgl, + unsigned int num_entries) +{ + u32 total_bytes = 0; + struct scatterlist *sgl_ptr; + unsigned int i; + + for_each_sg(sgl, sgl_ptr, num_entries, i) +
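+ /*
+  * Sum the DMA-mapped length of every entry; callers use this
+  * total to report a residue for completed or aborted transfers.
+  */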
total_bytes += sg_dma_len(sgl_ptr); + + return total_bytes; +} + +static void ivk_cbk_intr_seg(struct ps_pcie_intr_segment *intr_seg, + struct ps_pcie_dma_chan *chan, + enum dmaengine_tx_result result) +{ + struct dmaengine_result rslt; + + rslt.result = result; + rslt.residue = 0; + + spin_lock(&chan->cookie_lock); + dma_cookie_complete(&intr_seg->async_intr_tx); + spin_unlock(&chan->cookie_lock); + + dmaengine_desc_get_callback_invoke(&intr_seg->async_intr_tx, &rslt); +} + +static void ivk_cbk_seg(struct ps_pcie_tx_segment *seg, + struct ps_pcie_dma_chan *chan, + enum dmaengine_tx_result result) +{ + struct dmaengine_result rslt, *prslt; + + spin_lock(&chan->cookie_lock); + dma_cookie_complete(&seg->async_tx); + spin_unlock(&chan->cookie_lock); + + rslt.result = result; + if (seg->tx_elements.src_sgl && + chan->srcq_buffer_location == BUFFER_LOC_PCI) { + rslt.residue = + total_bytes_in_sgl(seg->tx_elements.src_sgl, + seg->tx_elements.srcq_num_elemets); + prslt = &rslt; + } else if (seg->tx_elements.dst_sgl && + chan->dstq_buffer_location == BUFFER_LOC_PCI) { + rslt.residue = + total_bytes_in_sgl(seg->tx_elements.dst_sgl, + seg->tx_elements.dstq_num_elemets); + prslt = &rslt; + } else { + prslt = NULL; + } + + dmaengine_desc_get_callback_invoke(&seg->async_tx, prslt); +} + +static void ivk_cbk_ctx(struct PACKET_TRANSFER_PARAMS *ppkt_ctxt, + struct ps_pcie_dma_chan *chan, + enum dmaengine_tx_result result) +{ + if (ppkt_ctxt->availability_status == IN_USE) { + if (ppkt_ctxt->seg) { + ivk_cbk_seg(ppkt_ctxt->seg, chan, result); + mempool_free(ppkt_ctxt->seg, + chan->transactions_pool); + } + } +} + +static void ivk_cbk_for_pending(struct ps_pcie_dma_chan *chan) +{ + int i; + struct PACKET_TRANSFER_PARAMS *ppkt_ctxt; + struct ps_pcie_tx_segment *seg, *seg_nxt; + struct ps_pcie_intr_segment *intr_seg, *intr_seg_next; + + if (chan->ppkt_ctx_srcq) { + if (chan->idx_ctx_srcq_tail != chan->idx_ctx_srcq_head) { + i = chan->idx_ctx_srcq_tail; + while (i != chan->idx_ctx_srcq_head) { + ppkt_ctxt = chan->ppkt_ctx_srcq + i; + ivk_cbk_ctx(ppkt_ctxt, chan, + DMA_TRANS_READ_FAILED); + memset(ppkt_ctxt, 0, + sizeof(struct PACKET_TRANSFER_PARAMS)); + i++; + if (i == chan->total_descriptors) + i = 0; + } + } + } + + if (chan->ppkt_ctx_dstq) { + if (chan->idx_ctx_dstq_tail != chan->idx_ctx_dstq_head) { + i = chan->idx_ctx_dstq_tail; + while (i != chan->idx_ctx_dstq_head) { + ppkt_ctxt = chan->ppkt_ctx_dstq + i; + ivk_cbk_ctx(ppkt_ctxt, chan, + DMA_TRANS_WRITE_FAILED); + memset(ppkt_ctxt, 0, + sizeof(struct PACKET_TRANSFER_PARAMS)); + i++; + if (i == chan->total_descriptors) + i = 0; + } + } + } + + list_for_each_entry_safe(seg, seg_nxt, &chan->active_list, node) { + ivk_cbk_seg(seg, chan, DMA_TRANS_ABORTED); + spin_lock(&chan->active_list_lock); + list_del(&seg->node); + spin_unlock(&chan->active_list_lock); + mempool_free(seg, chan->transactions_pool); + } + + list_for_each_entry_safe(seg, seg_nxt, &chan->pending_list, node) { + ivk_cbk_seg(seg, chan, DMA_TRANS_ABORTED); + spin_lock(&chan->pending_list_lock); + list_del(&seg->node); + spin_unlock(&chan->pending_list_lock); + mempool_free(seg, chan->transactions_pool); + } + + list_for_each_entry_safe(intr_seg, intr_seg_next, + &chan->active_interrupts_list, node) { + ivk_cbk_intr_seg(intr_seg, chan, DMA_TRANS_ABORTED); + spin_lock(&chan->active_interrupts_lock); + list_del(&intr_seg->node); + spin_unlock(&chan->active_interrupts_lock); + mempool_free(intr_seg, chan->intr_transactions_pool); + } + + list_for_each_entry_safe(intr_seg, intr_seg_next, + 
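+ /*
+  * The _safe iterator is required because each segment is
+  * unlinked and returned to its mempool inside the loop body.
+  */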
&chan->pending_interrupts_list, node) { + ivk_cbk_intr_seg(intr_seg, chan, DMA_TRANS_ABORTED); + spin_lock(&chan->pending_interrupts_lock); + list_del(&intr_seg->node); + spin_unlock(&chan->pending_interrupts_lock); + mempool_free(intr_seg, chan->intr_transactions_pool); + } +} + +static void xlnx_ps_pcie_reset_channel(struct ps_pcie_dma_chan *chan) +{ + xlnx_ps_pcie_channel_quiesce(chan); + + ivk_cbk_for_pending(chan); + + ps_pcie_chan_reset(chan); + + init_sw_components(chan); + init_hw_components(chan); + + xlnx_ps_pcie_channel_activate(chan); +} + +static void xlnx_ps_pcie_free_poll_timer(struct ps_pcie_dma_chan *chan) +{ + if (chan->poll_timer.function) { + del_timer_sync(&chan->poll_timer); + chan->poll_timer.function = NULL; + } +} + +static int xlnx_ps_pcie_alloc_poll_timer(struct ps_pcie_dma_chan *chan) +{ + init_timer(&chan->poll_timer); + chan->poll_timer.function = poll_completed_transactions; + chan->poll_timer.expires = jiffies + chan->poll_timer_freq; + chan->poll_timer.data = (unsigned long)chan; + + add_timer(&chan->poll_timer); + + return 0; +} + +static void terminate_transactions_work(struct work_struct *work) +{ + struct ps_pcie_dma_chan *chan = + (struct ps_pcie_dma_chan *)container_of(work, + struct ps_pcie_dma_chan, handle_chan_terminate); + + xlnx_ps_pcie_channel_quiesce(chan); + ivk_cbk_for_pending(chan); + xlnx_ps_pcie_channel_activate(chan); + + complete(&chan->chan_terminate_complete); +} + +static void chan_shutdown_work(struct work_struct *work) +{ + struct ps_pcie_dma_chan *chan = + (struct ps_pcie_dma_chan *)container_of(work, + struct ps_pcie_dma_chan, handle_chan_shutdown); + + xlnx_ps_pcie_channel_quiesce(chan); + + complete(&chan->chan_shutdown_complt); +} + +static void chan_reset_work(struct work_struct *work) +{ + struct ps_pcie_dma_chan *chan = + (struct ps_pcie_dma_chan *)container_of(work, + struct ps_pcie_dma_chan, handle_chan_reset); + + xlnx_ps_pcie_reset_channel(chan); +} + +static void sw_intr_work(struct work_struct *work) +{ + struct ps_pcie_dma_chan *chan = + (struct ps_pcie_dma_chan *)container_of(work, + struct ps_pcie_dma_chan, handle_sw_intrs); + struct ps_pcie_intr_segment *intr_seg, *intr_seg_next; + + list_for_each_entry_safe(intr_seg, intr_seg_next, + &chan->active_interrupts_list, node) { + spin_lock(&chan->cookie_lock); + dma_cookie_complete(&intr_seg->async_intr_tx); + spin_unlock(&chan->cookie_lock); + dmaengine_desc_get_callback_invoke(&intr_seg->async_intr_tx, + NULL); + spin_lock(&chan->active_interrupts_lock); + list_del(&intr_seg->node); + spin_unlock(&chan->active_interrupts_lock); + } +} + +static int xlnx_ps_pcie_alloc_worker_threads(struct ps_pcie_dma_chan *chan) +{ + char wq_name[WORKQ_NAME_SIZE]; + + snprintf(wq_name, WORKQ_NAME_SIZE, + "PS PCIe channel %d descriptor programming wq", + chan->channel_number); + chan->chan_programming = + create_singlethread_workqueue((const char *)wq_name); + if (!chan->chan_programming) { + dev_err(chan->dev, + "Unable to create programming wq for chan %d", + chan->channel_number); + goto err_no_desc_program_wq; + } else { + INIT_WORK(&chan->handle_chan_programming, + ps_pcie_chan_program_work); + } + memset(wq_name, 0, WORKQ_NAME_SIZE); + + snprintf(wq_name, WORKQ_NAME_SIZE, + "PS PCIe channel %d primary cleanup wq", chan->channel_number); + chan->primary_desc_cleanup = + create_singlethread_workqueue((const char *)wq_name); + if (!chan->primary_desc_cleanup) { + dev_err(chan->dev, + "Unable to create primary cleanup wq for channel %d", + chan->channel_number); + goto 
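+ /*
+  * Each channel gets its own single-threaded workqueues so that
+  * descriptor programming and cleanup for a channel are strictly
+  * ordered with respect to each other and run in a context that
+  * may sleep.
+  */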
err_no_primary_clean_wq; + } else { + INIT_WORK(&chan->handle_primary_desc_cleanup, + ps_pcie_chan_primary_work); + } + memset(wq_name, 0, WORKQ_NAME_SIZE); + + snprintf(wq_name, WORKQ_NAME_SIZE, + "PS PCIe channel %d maintenance works wq", + chan->channel_number); + chan->maintenance_workq = + create_singlethread_workqueue((const char *)wq_name); + if (!chan->maintenance_workq) { + dev_err(chan->dev, + "Unable to create maintenance wq for channel %d", + chan->channel_number); + goto err_no_maintenance_wq; + } else { + INIT_WORK(&chan->handle_chan_reset, chan_reset_work); + INIT_WORK(&chan->handle_chan_shutdown, chan_shutdown_work); + INIT_WORK(&chan->handle_chan_terminate, + terminate_transactions_work); + } + memset(wq_name, 0, WORKQ_NAME_SIZE); + + snprintf(wq_name, WORKQ_NAME_SIZE, + "PS PCIe channel %d software Interrupts wq", + chan->channel_number); + chan->sw_intrs_wrkq = + create_singlethread_workqueue((const char *)wq_name); + if (!chan->sw_intrs_wrkq) { + dev_err(chan->dev, + "Unable to create sw interrupts wq for channel %d", + chan->channel_number); + goto err_no_sw_intrs_wq; + } else { + INIT_WORK(&chan->handle_sw_intrs, sw_intr_work); + } + memset(wq_name, 0, WORKQ_NAME_SIZE); + + if (chan->psrc_sgl_bd) { + snprintf(wq_name, WORKQ_NAME_SIZE, + "PS PCIe channel %d srcq handling wq", + chan->channel_number); + chan->srcq_desc_cleanup = + create_singlethread_workqueue((const char *)wq_name); + if (!chan->srcq_desc_cleanup) { + dev_err(chan->dev, + "Unable to create src q completion wq chan %d", + chan->channel_number); + goto err_no_src_q_completion_wq; + } else { + INIT_WORK(&chan->handle_srcq_desc_cleanup, + src_cleanup_work); + } + memset(wq_name, 0, WORKQ_NAME_SIZE); + } + + if (chan->pdst_sgl_bd) { + snprintf(wq_name, WORKQ_NAME_SIZE, + "PS PCIe channel %d dstq handling wq", + chan->channel_number); + chan->dstq_desc_cleanup = + create_singlethread_workqueue((const char *)wq_name); + if (!chan->dstq_desc_cleanup) { + dev_err(chan->dev, + "Unable to create dst q completion wq chan %d", + chan->channel_number); + goto err_no_dst_q_completion_wq; + } else { + INIT_WORK(&chan->handle_dstq_desc_cleanup, + dst_cleanup_work); + } + memset(wq_name, 0, WORKQ_NAME_SIZE); + } + + return 0; +err_no_dst_q_completion_wq: + if (chan->srcq_desc_cleanup) + destroy_workqueue(chan->srcq_desc_cleanup); +err_no_src_q_completion_wq: + if (chan->sw_intrs_wrkq) + destroy_workqueue(chan->sw_intrs_wrkq); +err_no_sw_intrs_wq: + if (chan->maintenance_workq) + destroy_workqueue(chan->maintenance_workq); +err_no_maintenance_wq: + if (chan->primary_desc_cleanup) + destroy_workqueue(chan->primary_desc_cleanup); +err_no_primary_clean_wq: + if (chan->chan_programming) + destroy_workqueue(chan->chan_programming); +err_no_desc_program_wq: + return -ENOMEM; +} + +static int xlnx_ps_pcie_alloc_mempool(struct ps_pcie_dma_chan *chan) +{ + chan->transactions_pool = + mempool_create_kmalloc_pool(chan->total_descriptors, + sizeof(struct ps_pcie_tx_segment)); + + if (!chan->transactions_pool) + goto no_transactions_pool; + + chan->intr_transactions_pool = + mempool_create_kmalloc_pool(MIN_SW_INTR_TRANSACTIONS, + sizeof(struct ps_pcie_intr_segment)); + + if (!chan->intr_transactions_pool) + goto no_intr_transactions_pool; + + return 0; + +no_intr_transactions_pool: + mempool_destroy(chan->transactions_pool); + +no_transactions_pool: + return -ENOMEM; +} + +static int xlnx_ps_pcie_alloc_pkt_contexts(struct ps_pcie_dma_chan *chan) +{ + if (chan->psrc_sgl_bd) { + chan->ppkt_ctx_srcq = + kcalloc(chan->total_descriptors, 
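+ /*
+  * One packet context per buffer descriptor: each slot tracks an
+  * in-flight segment so its callback can be invoked on completion.
+  */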
+ sizeof(struct PACKET_TRANSFER_PARAMS), + GFP_KERNEL); + if (!chan->ppkt_ctx_srcq) { + dev_err(chan->dev, + "Src pkt ctx allocation for chan %d failed\n", + chan->channel_number); + goto err_no_src_pkt_ctx; + } + } + + if (chan->pdst_sgl_bd) { + chan->ppkt_ctx_dstq = + kcalloc(chan->total_descriptors, + sizeof(struct PACKET_TRANSFER_PARAMS), + GFP_KERNEL); + if (!chan->ppkt_ctx_dstq) { + dev_err(chan->dev, + "Dst pkt ctx for chan %d failed\n", + chan->channel_number); + goto err_no_dst_pkt_ctx; + } + } + + return 0; + +err_no_dst_pkt_ctx: + kfree(chan->ppkt_ctx_srcq); + +err_no_src_pkt_ctx: + return -ENOMEM; +} + +static int dma_alloc_descriptors_two_queues(struct ps_pcie_dma_chan *chan) +{ + size_t size; + + void *sgl_base; + void *sta_base; + dma_addr_t phy_addr_sglbase; + dma_addr_t phy_addr_stabase; + + size = chan->total_descriptors * + sizeof(struct SOURCE_DMA_DESCRIPTOR); + + sgl_base = dma_zalloc_coherent(chan->dev, size, &phy_addr_sglbase, + GFP_KERNEL); + + if (!sgl_base) { + dev_err(chan->dev, + "Sgl bds in two channel mode for chan %d failed\n", + chan->channel_number); + goto err_no_sgl_bds; + } + + size = chan->total_descriptors * sizeof(struct STATUS_DMA_DESCRIPTOR); + sta_base = dma_zalloc_coherent(chan->dev, size, &phy_addr_stabase, + GFP_KERNEL); + + if (!sta_base) { + dev_err(chan->dev, + "Sta bds in two channel mode for chan %d failed\n", + chan->channel_number); + goto err_no_sta_bds; + } + + if (chan->direction == DMA_TO_DEVICE) { + chan->psrc_sgl_bd = sgl_base; + chan->src_sgl_bd_pa = phy_addr_sglbase; + + chan->psrc_sta_bd = sta_base; + chan->src_sta_bd_pa = phy_addr_stabase; + + chan->pdst_sgl_bd = NULL; + chan->dst_sgl_bd_pa = 0; + + chan->pdst_sta_bd = NULL; + chan->dst_sta_bd_pa = 0; + + } else if (chan->direction == DMA_FROM_DEVICE) { + chan->psrc_sgl_bd = NULL; + chan->src_sgl_bd_pa = 0; + + chan->psrc_sta_bd = NULL; + chan->src_sta_bd_pa = 0; + + chan->pdst_sgl_bd = sgl_base; + chan->dst_sgl_bd_pa = phy_addr_sglbase; + + chan->pdst_sta_bd = sta_base; + chan->dst_sta_bd_pa = phy_addr_stabase; + + } else { + dev_err(chan->dev, + "%d %s() Unsupported channel direction\n", + __LINE__, __func__); + goto unsupported_channel_direction; + } + + return 0; + +unsupported_channel_direction: + size = chan->total_descriptors * + sizeof(struct STATUS_DMA_DESCRIPTOR); + dma_free_coherent(chan->dev, size, sta_base, phy_addr_stabase); +err_no_sta_bds: + size = chan->total_descriptors * + sizeof(struct SOURCE_DMA_DESCRIPTOR); + dma_free_coherent(chan->dev, size, sgl_base, phy_addr_sglbase); +err_no_sgl_bds: + + return -ENOMEM; +} + +static int dma_alloc_descriptors_all_queues(struct ps_pcie_dma_chan *chan) +{ + size_t size; + + size = chan->total_descriptors * + sizeof(struct SOURCE_DMA_DESCRIPTOR); + chan->psrc_sgl_bd = + dma_zalloc_coherent(chan->dev, size, &chan->src_sgl_bd_pa, + GFP_KERNEL); + + if (!chan->psrc_sgl_bd) { + dev_err(chan->dev, + "Alloc fail src q buffer descriptors for chan %d\n", + chan->channel_number); + goto err_no_src_sgl_descriptors; + } + + size = chan->total_descriptors * sizeof(struct DEST_DMA_DESCRIPTOR); + chan->pdst_sgl_bd = + dma_zalloc_coherent(chan->dev, size, &chan->dst_sgl_bd_pa, + GFP_KERNEL); + + if (!chan->pdst_sgl_bd) { + dev_err(chan->dev, + "Alloc fail dst q buffer descriptors for chan %d\n", + chan->channel_number); + goto err_no_dst_sgl_descriptors; + } + + size = chan->total_descriptors * sizeof(struct STATUS_DMA_DESCRIPTOR); + chan->psrc_sta_bd = + dma_zalloc_coherent(chan->dev, size, &chan->src_sta_bd_pa, + GFP_KERNEL); + + if
(!chan->psrc_sta_bd) { + dev_err(chan->dev, + "Unable to allocate src q status bds for chan %d\n", + chan->channel_number); + goto err_no_src_sta_descriptors; + } + + chan->pdst_sta_bd = + dma_zalloc_coherent(chan->dev, size, &chan->dst_sta_bd_pa, + GFP_KERNEL); + + if (!chan->pdst_sta_bd) { + dev_err(chan->dev, + "Unable to allocate Dst q status bds for chan %d\n", + chan->channel_number); + goto err_no_dst_sta_descriptors; + } + + return 0; + +err_no_dst_sta_descriptors: + size = chan->total_descriptors * + sizeof(struct STATUS_DMA_DESCRIPTOR); + dma_free_coherent(chan->dev, size, chan->psrc_sta_bd, + chan->src_sta_bd_pa); +err_no_src_sta_descriptors: + size = chan->total_descriptors * + sizeof(struct DEST_DMA_DESCRIPTOR); + dma_free_coherent(chan->dev, size, chan->pdst_sgl_bd, + chan->dst_sgl_bd_pa); +err_no_dst_sgl_descriptors: + size = chan->total_descriptors * + sizeof(struct SOURCE_DMA_DESCRIPTOR); + dma_free_coherent(chan->dev, size, chan->psrc_sgl_bd, + chan->src_sgl_bd_pa); + +err_no_src_sgl_descriptors: + return -ENOMEM; +} + +static void xlnx_ps_pcie_dma_free_chan_resources(struct dma_chan *dchan) +{ + struct ps_pcie_dma_chan *chan; + + if (!dchan) + return; + + chan = to_xilinx_chan(dchan); + + if (chan->state == CHANNEL_RESOURCE_UNALLOCATED) + return; + + if (chan->maintenance_workq) { + if (completion_done(&chan->chan_shutdown_complt)) + reinit_completion(&chan->chan_shutdown_complt); + queue_work(chan->maintenance_workq, + &chan->handle_chan_shutdown); + wait_for_completion_interruptible(&chan->chan_shutdown_complt); + + xlnx_ps_pcie_free_worker_queues(chan); + xlnx_ps_pcie_free_pkt_ctxts(chan); + xlnx_ps_pcie_destroy_mempool(chan); + xlnx_ps_pcie_free_descriptors(chan); + + spin_lock(&chan->channel_lock); + chan->state = CHANNEL_RESOURCE_UNALLOCATED; + spin_unlock(&chan->channel_lock); + } +} + +static int xlnx_ps_pcie_dma_alloc_chan_resources(struct dma_chan *dchan) +{ + struct ps_pcie_dma_chan *chan; + + if (!dchan) + return -EINVAL; + + chan = to_xilinx_chan(dchan); + + if (chan->state != CHANNEL_RESOURCE_UNALLOCATED) + return 0; + + if (chan->num_queues == DEFAULT_DMA_QUEUES) { + if (dma_alloc_descriptors_all_queues(chan) != 0) { + dev_err(chan->dev, + "Alloc fail bds for channel %d\n", + chan->channel_number); + goto err_no_descriptors; + } + } else if (chan->num_queues == TWO_DMA_QUEUES) { + if (dma_alloc_descriptors_two_queues(chan) != 0) { + dev_err(chan->dev, + "Alloc fail bds for two queues of channel %d\n", + chan->channel_number); + goto err_no_descriptors; + } + } + + if (xlnx_ps_pcie_alloc_mempool(chan) != 0) { + dev_err(chan->dev, + "Unable to allocate memory pool for channel %d\n", + chan->channel_number); + goto err_no_mempools; + } + + if (xlnx_ps_pcie_alloc_pkt_contexts(chan) != 0) { + dev_err(chan->dev, + "Unable to allocate packet contexts for channel %d\n", + chan->channel_number); + goto err_no_pkt_ctxts; + } + + if (xlnx_ps_pcie_alloc_worker_threads(chan) != 0) { + dev_err(chan->dev, + "Unable to allocate worker queues for channel %d\n", + chan->channel_number); + goto err_no_worker_queues; + } + + xlnx_ps_pcie_reset_channel(chan); + + dma_cookie_init(dchan); + + return 0; + +err_no_worker_queues: + xlnx_ps_pcie_free_pkt_ctxts(chan); +err_no_pkt_ctxts: + xlnx_ps_pcie_destroy_mempool(chan); +err_no_mempools: + xlnx_ps_pcie_free_descriptors(chan); +err_no_descriptors: + return -ENOMEM; +} + +static dma_cookie_t xilinx_intr_tx_submit(struct dma_async_tx_descriptor *tx) +{ + struct ps_pcie_intr_segment *intr_seg = +
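+ /*
+  * tx_submit only queues work: it assigns a cookie under
+  * cookie_lock and parks the descriptor on a pending list.
+  * Nothing reaches the hardware until issue_pending is called.
+  */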
to_ps_pcie_dma_tx_intr_descriptor(tx); + struct ps_pcie_dma_chan *chan = to_xilinx_chan(tx->chan); + dma_cookie_t cookie; + + if (chan->state != CHANNEL_AVAILABLE) + return -EINVAL; + + spin_lock(&chan->cookie_lock); + cookie = dma_cookie_assign(tx); + spin_unlock(&chan->cookie_lock); + + spin_lock(&chan->pending_interrupts_lock); + list_add_tail(&intr_seg->node, &chan->pending_interrupts_list); + spin_unlock(&chan->pending_interrupts_lock); + + return cookie; +} + +static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx) +{ + struct ps_pcie_tx_segment *seg = to_ps_pcie_dma_tx_descriptor(tx); + struct ps_pcie_dma_chan *chan = to_xilinx_chan(tx->chan); + dma_cookie_t cookie; + + if (chan->state != CHANNEL_AVAILABLE) + return -EINVAL; + + spin_lock(&chan->cookie_lock); + cookie = dma_cookie_assign(tx); + spin_unlock(&chan->cookie_lock); + + spin_lock(&chan->pending_list_lock); + list_add_tail(&seg->node, &chan->pending_list); + spin_unlock(&chan->pending_list_lock); + + return cookie; +} + +static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_dma_sg( + struct dma_chan *channel, struct scatterlist *dst_sg, + unsigned int dst_nents, struct scatterlist *src_sg, + unsigned int src_nents, unsigned long flags) +{ + struct ps_pcie_dma_chan *chan = to_xilinx_chan(channel); + struct ps_pcie_tx_segment *seg = NULL; + + if (chan->state != CHANNEL_AVAILABLE) + return NULL; + + if (dst_nents == 0 || src_nents == 0) + return NULL; + + if (!dst_sg || !src_sg) + return NULL; + + if (chan->num_queues != DEFAULT_DMA_QUEUES) { + dev_err(chan->dev, "Only prep_slave_sg for channel %d\n", + chan->channel_number); + return NULL; + } + + seg = mempool_alloc(chan->transactions_pool, GFP_ATOMIC); + if (!seg) { + dev_err(chan->dev, "Tx segment alloc for channel %d\n", + chan->channel_number); + return NULL; + } + + memset(seg, 0, sizeof(*seg)); + + seg->tx_elements.dst_sgl = dst_sg; + seg->tx_elements.dstq_num_elemets = dst_nents; + seg->tx_elements.src_sgl = src_sg; + seg->tx_elements.srcq_num_elemets = src_nents; + + dma_async_tx_descriptor_init(&seg->async_tx, &chan->common); + seg->async_tx.flags = flags; + async_tx_ack(&seg->async_tx); + seg->async_tx.tx_submit = xilinx_dma_tx_submit; + + return &seg->async_tx; +} + +static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_slave_sg( + struct dma_chan *channel, struct scatterlist *sgl, + unsigned int sg_len, enum dma_transfer_direction direction, + unsigned long flags, void *context) +{ + struct ps_pcie_dma_chan *chan = to_xilinx_chan(channel); + struct ps_pcie_tx_segment *seg = NULL; + + if (chan->state != CHANNEL_AVAILABLE) + return NULL; + + if (!(is_slave_direction(direction))) + return NULL; + + if (!sgl || sg_len == 0) + return NULL; + + if (chan->num_queues != TWO_DMA_QUEUES) { + dev_err(chan->dev, "Only prep_dma_sg is supported channel %d\n", + chan->channel_number); + return NULL; + } + + seg = mempool_alloc(chan->transactions_pool, GFP_ATOMIC); + if (!seg) { + dev_err(chan->dev, "Unable to allocate tx segment channel %d\n", + chan->channel_number); + return NULL; + } + + memset(seg, 0, sizeof(*seg)); + + if (chan->direction == DMA_TO_DEVICE) { + seg->tx_elements.src_sgl = sgl; + seg->tx_elements.srcq_num_elemets = sg_len; + seg->tx_elements.dst_sgl = NULL; + seg->tx_elements.dstq_num_elemets = 0; + } else { + seg->tx_elements.src_sgl = NULL; + seg->tx_elements.srcq_num_elemets = 0; + seg->tx_elements.dst_sgl = sgl; + seg->tx_elements.dstq_num_elemets = sg_len; + } + + dma_async_tx_descriptor_init(&seg->async_tx, 
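+ /*
+  * On TWO_DMA_QUEUES channels only the host-side scatterlist is
+  * filled in; the card-side queue pair is managed by user logic,
+  * so the opposite direction is left empty.
+  */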
&chan->common); + seg->async_tx.flags = flags; + async_tx_ack(&seg->async_tx); + seg->async_tx.tx_submit = xilinx_dma_tx_submit; + + return &seg->async_tx; +} + +static void xlnx_ps_pcie_dma_issue_pending(struct dma_chan *channel) +{ + struct ps_pcie_dma_chan *chan; + + if (!channel) + return; + + chan = to_xilinx_chan(channel); + + if (!list_empty(&chan->pending_list)) { + spin_lock(&chan->pending_list_lock); + spin_lock(&chan->active_list_lock); + list_splice_tail_init(&chan->pending_list, + &chan->active_list); + spin_unlock(&chan->active_list_lock); + spin_unlock(&chan->pending_list_lock); + } + + if (!list_empty(&chan->pending_interrupts_list)) { + spin_lock(&chan->pending_interrupts_lock); + spin_lock(&chan->active_interrupts_lock); + list_splice_tail_init(&chan->pending_interrupts_list, + &chan->active_interrupts_list); + spin_unlock(&chan->active_interrupts_lock); + spin_unlock(&chan->pending_interrupts_lock); + } + + if (chan->chan_programming) + queue_work(chan->chan_programming, + &chan->handle_chan_programming); +} + +static int xlnx_ps_pcie_dma_terminate_all(struct dma_chan *channel) +{ + struct ps_pcie_dma_chan *chan; + + if (!channel) + return -EINVAL; + + chan = to_xilinx_chan(channel); + + if (chan->state != CHANNEL_AVAILABLE) + return 1; + + if (chan->maintenance_workq) { + if (completion_done(&chan->chan_terminate_complete)) + reinit_completion(&chan->chan_terminate_complete); + queue_work(chan->maintenance_workq, + &chan->handle_chan_terminate); + wait_for_completion_interruptible( + &chan->chan_terminate_complete); + } + + return 0; +} + +static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_interrupt( + struct dma_chan *channel, unsigned long flags) +{ + struct ps_pcie_dma_chan *chan; + struct ps_pcie_intr_segment *intr_segment = NULL; + + if (!channel) + return NULL; + + chan = to_xilinx_chan(channel); + + if (chan->state != CHANNEL_AVAILABLE) + return NULL; + + intr_segment = mempool_alloc(chan->intr_transactions_pool, GFP_ATOMIC); + if (!intr_segment) + return NULL; + + memset(intr_segment, 0, sizeof(*intr_segment)); + + dma_async_tx_descriptor_init(&intr_segment->async_intr_tx, + &chan->common); + intr_segment->async_intr_tx.flags = flags; + async_tx_ack(&intr_segment->async_intr_tx); + intr_segment->async_intr_tx.tx_submit = xilinx_intr_tx_submit; + + return &intr_segment->async_intr_tx; +} + +static int xlnx_pcie_dma_driver_probe(struct platform_device *platform_dev) +{ + int err, i; + struct xlnx_pcie_dma_device *xdev; + static u16 board_number; + + xdev = devm_kzalloc(&platform_dev->dev, + sizeof(struct xlnx_pcie_dma_device), GFP_KERNEL); + + if (!xdev) + return -ENOMEM; + +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT + xdev->dma_buf_ext_addr = true; +#else + xdev->dma_buf_ext_addr = false; +#endif + + xdev->is_rootdma = device_property_read_bool(&platform_dev->dev, + "rootdma"); + + xdev->dev = &platform_dev->dev; + xdev->board_number = board_number; + + err = device_property_read_u32(&platform_dev->dev, "numchannels", + &xdev->num_channels); + if (err) { + dev_err(&platform_dev->dev, + "Unable to find numchannels property\n"); + goto platform_driver_probe_return; + } + + if (xdev->num_channels == 0 || xdev->num_channels > + MAX_ALLOWED_CHANNELS_IN_HW) { + dev_warn(&platform_dev->dev, + "Invalid xlnx-num_channels property value\n"); + xdev->num_channels = MAX_ALLOWED_CHANNELS_IN_HW; + } + + xdev->channels = + (struct ps_pcie_dma_chan *)devm_kzalloc(&platform_dev->dev, + sizeof(struct ps_pcie_dma_chan) + * xdev->num_channels, + GFP_KERNEL); + if (!xdev->channels) { + err = -ENOMEM; + goto
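+ /*
+  * xdev and the channel array are devm-allocated, so the probe
+  * error paths can return without manual cleanup.
+  */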
platform_driver_probe_return; + } + + if (xdev->is_rootdma) + err = read_rootdma_config(platform_dev, xdev); + else + err = read_epdma_config(platform_dev, xdev); + + if (err) { + dev_err(&platform_dev->dev, + "Unable to initialize dma configuration\n"); + goto platform_driver_probe_return; + } + + /* Initialize the DMA engine */ + INIT_LIST_HEAD(&xdev->common.channels); + + dma_cap_set(DMA_SLAVE, xdev->common.cap_mask); + dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask); + dma_cap_set(DMA_SG, xdev->common.cap_mask); + dma_cap_set(DMA_INTERRUPT, xdev->common.cap_mask); + + xdev->common.src_addr_widths = DMA_SLAVE_BUSWIDTH_UNDEFINED; + xdev->common.dst_addr_widths = DMA_SLAVE_BUSWIDTH_UNDEFINED; + xdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); + xdev->common.device_alloc_chan_resources = + xlnx_ps_pcie_dma_alloc_chan_resources; + xdev->common.device_free_chan_resources = + xlnx_ps_pcie_dma_free_chan_resources; + xdev->common.device_terminate_all = xlnx_ps_pcie_dma_terminate_all; + xdev->common.device_tx_status = dma_cookie_status; + xdev->common.device_issue_pending = xlnx_ps_pcie_dma_issue_pending; + xdev->common.device_prep_dma_interrupt = + xlnx_ps_pcie_dma_prep_interrupt; + xdev->common.device_prep_dma_sg = xlnx_ps_pcie_dma_prep_dma_sg; + xdev->common.device_prep_slave_sg = xlnx_ps_pcie_dma_prep_slave_sg; + xdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT; + + for (i = 0; i < xdev->num_channels; i++) { + err = probe_channel_properties(platform_dev, xdev, i); + + if (err != 0) { + dev_err(xdev->dev, + "Unable to read channel properties\n"); + goto platform_driver_probe_return; + } + } + + if (xdev->is_rootdma) + err = platform_irq_setup(xdev); + else + err = irq_setup(xdev); + if (err) { + dev_err(xdev->dev, "Cannot request irq lines for device %d\n", + xdev->board_number); + goto platform_driver_probe_return; + } + + err = dma_async_device_register(&xdev->common); + if (err) { + dev_err(xdev->dev, + "Unable to register board %d with dma framework\n", + xdev->board_number); + goto platform_driver_probe_return; + } + + platform_set_drvdata(platform_dev, xdev); + + board_number++; + + dev_info(&platform_dev->dev, "PS PCIe Platform driver probed\n"); + return 0; + +platform_driver_probe_return: + return err; +} + +static int xlnx_pcie_dma_driver_remove(struct platform_device *platform_dev) +{ + struct xlnx_pcie_dma_device *xdev = + platform_get_drvdata(platform_dev); + int i; + + for (i = 0; i < xdev->num_channels; i++) + xlnx_ps_pcie_dma_free_chan_resources(&xdev->channels[i].common); + + dma_async_device_unregister(&xdev->common); + + return 0; +} + +#ifdef CONFIG_OF +static const struct of_device_id xlnx_pcie_root_dma_of_ids[] = { + { .compatible = "xlnx,ps_pcie_dma-1.00.a", }, + {} +}; +MODULE_DEVICE_TABLE(of, xlnx_pcie_root_dma_of_ids); +#endif + +static struct platform_driver xlnx_pcie_dma_driver = { + .driver = { + .name = XLNX_PLATFORM_DRIVER_NAME, + .of_match_table = of_match_ptr(xlnx_pcie_root_dma_of_ids), + .owner = THIS_MODULE, + }, + .probe = xlnx_pcie_dma_driver_probe, + .remove = xlnx_pcie_dma_driver_remove, +}; + +int dma_platform_driver_register(void) +{ + return platform_driver_register(&xlnx_pcie_dma_driver); +} + +void dma_platform_driver_unregister(void) +{ + platform_driver_unregister(&xlnx_pcie_dma_driver); +} From patchwork Fri Sep 8 12:23:08 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ravi Shankar Jonnalagadda X-Patchwork-Id: 811554 Return-Path: 
X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org From: Ravi Shankar Jonnalagadda
Subject: [PATCH v2 5/5] devicetree: zynqmp_ps_pcie: Devicetree binding for Root DMA Date: Fri, 8 Sep 2017 17:53:08 +0530 Message-ID: <1504873388-29195-7-git-send-email-vjonnal@xilinx.com> X-Mailer: git-send-email 2.1.1 In-Reply-To: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com> References: <1504873388-29195-1-git-send-email-vjonnal@xilinx.com> MIME-Version: 1.0
Sender: linux-pci-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org

Binding explaining devicetree usage for enabling Root DMA capability

Signed-off-by: Ravi Shankar Jonnalagadda
Signed-off-by: RaviKiran Gummaluri
---
 .../devicetree/bindings/dma/xilinx/ps-pcie-dma.txt | 67 ++++++++++++++++++++++
 1 file changed, 67 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/xilinx/ps-pcie-dma.txt

diff --git a/Documentation/devicetree/bindings/dma/xilinx/ps-pcie-dma.txt b/Documentation/devicetree/bindings/dma/xilinx/ps-pcie-dma.txt
new file mode 100644
index 0000000..1522a49
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/xilinx/ps-pcie-dma.txt
@@ -0,0 +1,67 @@
+* Xilinx PS PCIe Root DMA
+
+Required properties:
+- compatible: Should be "xlnx,ps_pcie_dma-1.00.a"
+- reg: Register offset for Root DMA channels
+- reg-names: Name for the register. Should be "xlnx,ps_pcie_regbase"
+- interrupts: Interrupt pin for Root DMA
+- interrupt-names: Name for the interrupt. Should be "xlnx,ps_pcie_rootdma_intr"
+- interrupt-parent: Should be gic in case of zynqmp
+- xlnx,rootdma: Indicates this platform device is root dma.
+  This is required as the same platform driver will be invoked by PCIe end points too
+- xlnx,dma_vendorid: 16 bit PCIe device vendor id.
+  This can be later used by dma client for matching while using dma_request_channel
+- xlnx,dma_deviceid: 16 bit PCIe device id.
+  This can be later used by dma client for matching while using dma_request_channel
+- xlnx,numchannels: Indicates number of channels to be enabled for the device.
+  Valid values are from 1 to 4 for zynqmp
+- xlnx,ps_pcie_channel: One for each channel to be enabled.
+  This array contains channel specific properties.
+  Index 0: Direction of channel
+    Direction of channel can be either PCIe Memory to AXI memory i.e., Host to Card or
+    AXI Memory to PCIe memory i.e., Card to Host
+    PCIe to AXI Channel Direction is represented as 0x1
+    AXI to PCIe Channel Direction is represented as 0x0
+  Index 1: Number of Buffer Descriptors
+    This number describes the number of buffer descriptors to be allocated for a channel
+  Index 2: Number of Queues
+    Each Channel has four DMA Buffer Descriptor Queues.
+    By default all four Queues will be managed by the Root DMA driver.
+    User may choose to have only two queues, either Source and its Status Queue or
+    Destination and its Status Queue, to be handled by the Driver.
+    The other two queues need to be handled by user logic which will not be part of this driver.
+    All Queues on Host are represented by 0x4
+    Two Queues on Host are represented by 0x2
+  Index 3: Coalesce Count
+    This number indicates the number of transfers after which an interrupt needs to
+    be raised for the particular channel.
+    The allowed range is from 0 to 255
+  Index 4: Coalesce Count Timer frequency
+    This property is used to control the frequency of the poll timer. A poll timer is
+    created for a channel whenever a coalesce count value (>= 1) is programmed for the
+    particular channel. This timer is helpful in draining out completed transactions
+    even when an interrupt is not generated.
+
+Client Usage:
+  DMA clients can request these channels using the dma_request_channel API
+
+
+Xilinx PS PCIe Root DMA node Example
+++++++++++++++++++++++++++++++++++++
+
+	pci_rootdma: rootdma@fd0f0000 {
+		compatible = "xlnx,ps_pcie_dma-1.00.a";
+		reg = <0x0 0xfd0f0000 0x0 0x1000>;
+		reg-names = "xlnx,ps_pcie_regbase";
+		interrupts = <0 117 4>;
+		interrupt-names = "xlnx,ps_pcie_rootdma_intr";
+		interrupt-parent = <&gic>;
+		xlnx,rootdma;
+		xlnx,dma_vendorid = /bits/ 16 <0x10EE>;
+		xlnx,dma_deviceid = /bits/ 16 <0xD021>;
+		xlnx,numchannels = <0x4>;
+		#size-cells = <0x5>;
+		xlnx,ps_pcie_channel0 = <0x1 0x7CF 0x4 0x0 0x3E8>;
+		xlnx,ps_pcie_channel1 = <0x0 0x7CF 0x4 0x0 0x3E8>;
+		xlnx,ps_pcie_channel2 = <0x1 0x7CF 0x4 0x0 0x3E8>;
+		xlnx,ps_pcie_channel3 = <0x0 0x7CF 0x4 0x0 0x3E8>;
+	};
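Client usage sketch (illustrative only)
++++++++++++++++++++++++++++++++++++++++

A client can obtain one of these channels through dma_request_channel with a filter
callback that inspects the ps_pcie_dma_channel_match data the driver publishes in
chan->private. The fragment below is a minimal sketch, not part of this binding: the
filter function name and the "wanted" values are made up for illustration, and the
match fields are assumed to be the ones the platform driver fills in (pci_vendorid,
pci_deviceid, channel_number, direction).

	static bool ps_pcie_dma_filter(struct dma_chan *chan, void *param)
	{
		struct ps_pcie_dma_channel_match *match = chan->private;
		struct ps_pcie_dma_channel_match *wanted = param;

		if (!match)
			return false;

		return match->pci_vendorid == wanted->pci_vendorid &&
		       match->pci_deviceid == wanted->pci_deviceid &&
		       match->channel_number == wanted->channel_number &&
		       match->direction == wanted->direction;
	}

	/* request channel 0 of the root DMA shown in the example node above */
	struct ps_pcie_dma_channel_match wanted = {
		.pci_vendorid = 0x10EE,
		.pci_deviceid = 0xD021,
		.channel_number = 0,
		.direction = DMA_TO_DEVICE,
	};
	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	chan = dma_request_channel(mask, ps_pcie_dma_filter, &wanted);
	if (!chan)
		return -ENODEV;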