get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch (full update).
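
The operations above map onto standard HTTP verbs against the patch detail endpoint. Below is a minimal sketch of driving it from Python with the requests library, assuming token authentication ("Authorization: Token ..."); the token value and the target state are placeholders, and write access (PUT/PATCH) requires maintainer rights on the project.

import requests

BASE = "http://patchwork.ozlabs.org/api"
PATCH_ID = 811561
TOKEN = "<your-api-token>"  # placeholder; a real API token is needed for updates

# get: fetch the patch and read a few fields from the JSON body shown below.
resp = requests.get(f"{BASE}/patches/{PATCH_ID}/")
resp.raise_for_status()
patch = resp.json()
print(patch["name"], patch["state"])

# patch: partial update, e.g. moving the patch to a new state.
# The state name is illustrative; valid states depend on the Patchwork instance.
resp = requests.patch(
    f"{BASE}/patches/{PATCH_ID}/",
    headers={"Authorization": f"Token {TOKEN}"},
    json={"state": "accepted"},
)
resp.raise_for_status()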

GET /api/patches/811561/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 811561,
    "url": "http://patchwork.ozlabs.org/api/patches/811561/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/linux-pci/patch/1504873388-29195-5-git-send-email-vjonnal@xilinx.com/",
    "project": {
        "id": 28,
        "url": "http://patchwork.ozlabs.org/api/projects/28/?format=api",
        "name": "Linux PCI development",
        "link_name": "linux-pci",
        "list_id": "linux-pci.vger.kernel.org",
        "list_email": "linux-pci@vger.kernel.org",
        "web_url": null,
        "scm_url": null,
        "webscm_url": null,
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<1504873388-29195-5-git-send-email-vjonnal@xilinx.com>",
    "list_archive_url": null,
    "date": "2017-09-08T12:23:06",
    "name": "[v2,4/5] dmaengine: zynqmp_ps_pcie: Adding PS PCIe platform DMA driver",
    "commit_ref": null,
    "pull_url": null,
    "state": "changes-requested",
    "archived": false,
    "hash": "0b8db07a077cd3e29a97ac251958fad37838ada5",
    "submitter": {
        "id": 72127,
        "url": "http://patchwork.ozlabs.org/api/people/72127/?format=api",
        "name": "Ravi Shankar Jonnalagadda",
        "email": "venkata.ravi.jonnalagadda@xilinx.com"
    },
    "delegate": null,
    "mbox": "http://patchwork.ozlabs.org/project/linux-pci/patch/1504873388-29195-5-git-send-email-vjonnal@xilinx.com/mbox/",
    "series": [
        {
            "id": 2190,
            "url": "http://patchwork.ozlabs.org/api/series/2190/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/linux-pci/list/?series=2190",
            "date": "2017-09-08T12:23:04",
            "name": "dmaengine: ZynqMP PS PCIe DMA driver",
            "version": 2,
            "mbox": "http://patchwork.ozlabs.org/series/2190/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/811561/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/811561/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<linux-pci-owner@vger.kernel.org>",
        "X-Original-To": "incoming@patchwork.ozlabs.org",
        "Delivered-To": "patchwork-incoming@bilbo.ozlabs.org",
        "Authentication-Results": [
            "ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-pci-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)",
            "ozlabs.org; dkim=pass (1024-bit key;\n\tunprotected) header.d=xilinx.onmicrosoft.com\n\theader.i=@xilinx.onmicrosoft.com header.b=\"wcrjj+pu\"; \n\tdkim-atps=neutral",
            "spf=pass (sender IP is 149.199.60.100)\n\tsmtp.mailfrom=xilinx.com; vger.kernel.org;\n\tdkim=none (message not signed)\n\theader.d=none;vger.kernel.org; dmarc=bestguesspass action=none\n\theader.from=xilinx.com;"
        ],
        "Received": [
            "from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xpcFG2LSPz9tXt\n\tfor <incoming@patchwork.ozlabs.org>;\n\tFri,  8 Sep 2017 22:33:33 +1000 (AEST)",
            "(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S932161AbdIHMYg (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tFri, 8 Sep 2017 08:24:36 -0400",
            "from mail-by2nam03on0082.outbound.protection.outlook.com\n\t([104.47.42.82]:12416\n\t\"EHLO NAM03-BY2-obe.outbound.protection.outlook.com\"\n\trhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP\n\tid S1755159AbdIHMXj (ORCPT <rfc822;linux-pci@vger.kernel.org>);\n\tFri, 8 Sep 2017 08:23:39 -0400",
            "from BN6PR02CA0048.namprd02.prod.outlook.com (10.173.146.162) by\n\tSN1PR0201MB1933.namprd02.prod.outlook.com (10.163.87.155) with\n\tMicrosoft SMTP Server (version=TLS1_2,\n\tcipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256) id\n\t15.20.13.10; Fri, 8 Sep 2017 12:23:35 +0000",
            "from BL2NAM02FT025.eop-nam02.prod.protection.outlook.com\n\t(2a01:111:f400:7e46::205) by BN6PR02CA0048.outlook.office365.com\n\t(2603:10b6:404:5f::34) with Microsoft SMTP Server (version=TLS1_2,\n\tcipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256) id 15.20.35.12 via\n\tFrontend Transport; Fri, 8 Sep 2017 12:23:34 +0000",
            "from xsj-pvapsmtpgw02 (149.199.60.100) by\n\tBL2NAM02FT025.mail.protection.outlook.com (10.152.77.151) with\n\tMicrosoft SMTP Server (version=TLS1_0,\n\tcipher=TLS_RSA_WITH_AES_256_CBC_SHA) id 15.20.13.11\n\tvia Frontend Transport; Fri, 8 Sep 2017 12:23:30 +0000",
            "from unknown-38-66.xilinx.com ([149.199.38.66]:47163\n\thelo=xsj-pvapsmtp01) by xsj-pvapsmtpgw02 with esmtp (Exim 4.63)\n\t(envelope-from <venkata.ravi.jonnalagadda@xilinx.com>)\n\tid 1dqIJl-00057n-8H; Fri, 08 Sep 2017 05:23:29 -0700",
            "from [127.0.0.1] (helo=localhost)\n\tby xsj-pvapsmtp01 with smtp (Exim 4.63)\n\t(envelope-from <venkata.ravi.jonnalagadda@xilinx.com>)\n\tid 1dqIJl-0004O7-5Y; Fri, 08 Sep 2017 05:23:29 -0700",
            "from xsj-pvapsmtp01 (mailman.xilinx.com [149.199.38.66])\n\tby xsj-smtp-dlp2.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id\n\tv88CNGuZ000526; Fri, 8 Sep 2017 05:23:16 -0700",
            "from [172.23.37.80] (helo=xhd-paegbuild40.xilinx.com)\n\tby xsj-pvapsmtp01 with esmtp (Exim 4.63)\n\t(envelope-from <vjonnal@xilinx.com>)\n\tid 1dqIJX-0004MX-4a; Fri, 08 Sep 2017 05:23:16 -0700",
            "by xhd-paegbuild40.xilinx.com (Postfix, from userid 12633)\n\tid B7A12B20857; Fri,  8 Sep 2017 17:53:14 +0530 (IST)"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=xilinx.onmicrosoft.com; s=selector1-xilinx-com;\n\th=From:Date:Subject:Message-ID:Content-Type:MIME-Version;\n\tbh=agcVwnPac0BDVoUa7hfqyww0H0vMnzvA9Ybpt8zKQOQ=;\n\tb=wcrjj+puTQni0ToEHiK65a6Lh7PaSnLXe2ywmGxmTsTy0qHGA5NxV8DnvJPiVQbj6EXUqY8ga1k39tfigKmk+0OZKuS8Tn55MZYvrYlPvoqc7d7k123FwU6z6WWMY2VNkXQSo6rmHtx9pWs/RGzrT3SLNziOd+U3EXn+11O6uXM=",
        "Received-SPF": "Pass (protection.outlook.com: domain of xilinx.com designates\n\t149.199.60.100 as permitted sender)\n\treceiver=protection.outlook.com; \n\tclient-ip=149.199.60.100; helo=xsj-pvapsmtpgw02;",
        "From": "Ravi Shankar Jonnalagadda <venkata.ravi.jonnalagadda@xilinx.com>",
        "To": "<vinod.koul@intel.com>, <robh+dt@kernel.org>,\n\t<mark.rutland@arm.com>, <michal.simek@xilinx.com>,\n\t<soren.brinkmann@xilinx.com>, <dan.j.williams@intel.com>,\n\t<bhelgaas@google.com>, <vjonnal@xilinx.com>,\n\t<lorenzo.pieralisi@arm.com>, <bharat.kumar.gogada@xilinx.com>,\n\t<dmaengine@vger.kernel.org>, <devicetree@vger.kernel.org>,\n\t<linux-arm-kernel@lists.infradead.org>,\n\t<linux-kernel@vger.kernel.org>, <linux-pci@vger.kernel.org>,\n\t<rgummal@xilinx.com>",
        "Subject": "[PATCH v2 4/5] dmaengine: zynqmp_ps_pcie: Adding PS PCIe platform\n\tDMA driver",
        "Date": "Fri, 8 Sep 2017 17:53:06 +0530",
        "Message-ID": "<1504873388-29195-5-git-send-email-vjonnal@xilinx.com>",
        "X-Mailer": "git-send-email 2.1.1",
        "In-Reply-To": "<1504873388-29195-1-git-send-email-vjonnal@xilinx.com>",
        "References": "<1504873388-29195-1-git-send-email-vjonnal@xilinx.com>",
        "X-RCIS-Action": "ALLOW",
        "X-TM-AS-Product-Ver": "IMSS-7.1.0.1224-8.1.0.1062-23314.003",
        "X-TM-AS-User-Approved-Sender": "Yes;Yes",
        "X-EOPAttributedMessage": "0",
        "X-MS-Office365-Filtering-HT": "Tenant",
        "X-Forefront-Antispam-Report": "CIP:149.199.60.100; IPV:NLI; CTRY:US; EFV:NLI;\n\tSFV:NSPM;\n\tSFS:(10009020)(6009001)(39860400002)(2980300002)(438002)(189002)(199003)(50986999)(8936002)(8676002)(103686004)(305945005)(47776003)(33646002)(76176999)(36756003)(5003940100001)(81156014)(50466002)(50226002)(36386004)(356003)(48376002)(90966002)(63266004)(46386002)(2201001)(575784001)(45336002)(478600001)(52956003)(81166006)(42186005)(2906002)(6266002)(7416002)(5660300001)(189998001)(53946003)(2950100002)(16200700003)(6666003)(106466001)(551934003)(6636002)(107986001)(2004002)(921003)(2101003)(83996005)(5001870100001)(1121003)(579004)(559001)(569006);\n\tDIR:OUT; SFP:1101; SCL:1; SRVR:SN1PR0201MB1933;\n\tH:xsj-pvapsmtpgw02; FPR:; SPF:Pass;\n\tPTR:unknown-60-100.xilinx.com,xapps1.xilinx.com; MX:1; A:1;\n\tLANG:en; ",
        "X-Microsoft-Exchange-Diagnostics": [
            "1; BL2NAM02FT025;\n\t1:mAk8yP8x2up4RLmALGguDELIXCZtBqsWsh7gDChhHI5TdjPfFbQYqM7iLcOdueO3cGpi/1oTZhP+9o/0PUGR5JUsoJPCRYQUubFCUpCrH+0N4sMo4TDBcw2fpSbA68l+",
            "1; SN1PR0201MB1933;\n\t3:gHZbdWxy/QEfTfxFPZ/Y7n1vf4UI1SWaMKmb6fxwHuSlo3PYSkD1vlYQ//bnH++uSUAgi/EwzaA0dSiP8qPYq5w/cX+lqFtCc3PS1FIVltr1/n6FVwQUUdQtT/kHnB/CMIWk3eZTkS8BxHkILcH+qJ5BEXgW08PersbQgvqOPK5XhGfn7QZHFTHxelg2vIzMQFBPPVAIZZ1h6MyUvfon/AsNzsJuHXAao9OEvZwiaVvw76M4v5Bqqkqyz/1FDW0G/xk46PlwM4LXn9pzPPk0O1LBs0CT4kNEs1RrU0mHXXWv7SzpCThuSfE9Nh3bx4jA8yy7I8Dsv2DmqK0I6bajOjlSNqmY+8jQnXIe6i4QYTQ=;\n\t25:fWFjFUhQbg+EYAYYDf8E0sz6Q/5T8bJYp71mJuUP4g9lY4EKADXJ5BY9rU/nGl+xnnE8hnuzy+8Tr3FLbTJaZwrJ1sotOZAPFNVe2lz3SLuFijN/OF32bIXm7VoAWFzGH7ra2x9AkHW6Lyg0cMvmcxWLUIemfjM99YyJ48I+rJLrXY+geSr7/8psv1wzvf2+OUOG2L6v4PPJDw5RYP5dkGwzG7dA7t9YzTF9l7rQ8KMGzlr/ijzYc+dbMEeCYh6yOBRVMuTw7UkpXsvHx33eQhvUoS+9CqiDLZWu9TWJcL6IVx9tWkX53aBYaEHU0M1hXFxf4igoLAtQfiIGxNs7iQ==",
            "1; SN1PR0201MB1933;\n\t31:Y7NsIZZP0GmHnnE79VaOmpN2mQdXV3uD1Da4qOaiTMzaE5kEgw1bYPmLz6fKe/v79462F1r/Qr5pHxwFORV41RoCp9LC/n3yR4u6cPKMun+nRnDRHYqzcKglXeNpwMjRcSNujjG74G93OnjH4iJSzxex8+9NiThYVC5rSoqzokL2YV3BOgek/lXnfPCJSy6gUnVLoZqVAtyGmt2N23KetW+HKSkwrDhBL6vtZy/yn48=;\n\t20:USWSytoJmt8tsQVdE2o1Ymx9ruSkMzKw+4BRb9+nLUEjziVvj1nsp3MjfM0wMZRSZELmR1S3dQsHqs15ir1Hzyi5aD2ywXT8GPS6NC1sSHQwBHW4XESJT92iuWi0+sC6d6icjgaqxeh+caOcv8SwcWrdFnPC5FjDXEb2KMCPBoBEthBGOC0pyWyBFfT5xKwdeLVrcLPR6Jzw6ytXBkLQZ20fnR81lLTpJVgiYYykM0SSaH1dzdt0z2clAYvhzifcEb+U0i1sr7XS2JpqvOl9Oh+uUbfmYeBKkJg5s+mDOq2CYyIitOIP/zZhMffiarRRRoOW8CYJ9qKZ//uHhnYTxeFvFr+V1RUB7YlbG5oqO3/d1oeBKRF/cWFf5Swh1r51+F5JFEN9PyuKSBQJD8/PLYvQTGGsIlphTU69m4p95YCqiQpYxN11ei3JZyKdfxsKTz4eWhmt7+4ywjL8e/JMDyOZPRuAOSKQV/FKCsaYzijkzMYtFBJqaqOCM9REbuBK",
            "1; SN1PR0201MB1933;\n\t4:wpXqgFNAmAUWNtMFd/oxlozVmrnzsGFnjuzIY7FqXFI2PLW/Gum9Sq2YVto0RORoDEGk/owG2qA/FUUtnbxEa4dtY5w9l3PvIgGSnCfhttVv/dTdzWYniBnO7qAiAhUQADKWMTPnDPfxp6ujfC/QlhrZSkMDzDl3IJMV28/wdhOn0euKXQPHo881XGDNEn6SMF3s19mFFUMJaEUZuknaqEBdHP4Z6kyxgzkbwcbiJDBnFYyrBsuiSp8c8jaayqEM9SDknVTyQc04HBcOvR6DAyyvccNPK/JjfIkUloXWta+sLn/TSKh/KpYJr5Gg49tvtuY7yYoGlqaSDQmXRkafg37uA/XvQ6h1i0kSGPxA++r2Kp37KmuRWQ18LbxpSV2L",
            "1; SN1PR0201MB1933;\n\t23:6pgGjUCXHREj2JloQFTjnfDlVeMl+nylYdeKDR8VU9uhqIKdNmiKqtg/a3vXk9vf1xz1Ni7MQUN2njZDt1tS40fdjd0774W8yaTtCVxf86qSFC7SHiADsA1zpXVKF64//er//eYnEmHZ8IT0zPSg2h8P3tpxXY3InqJ0OfqkURX2j8QiunKFsZ/ji8KXo1P4K6fFFwvBCGb3goKugghZ5W9NQr70J/OivY6KtnqiC1zyPxk8qh6HiEX82CeyRykgFFO5RfkXdj4qB7U5026WcLI1+46r/9DafEcCXK1lwUf30A0hKuMdSsdxXCXtwnSrbooNfzsSXghX9IWcfF/5RvtGceAuvZBqM7oX0uZs/9IKMQqDYs8ZBb4V8ftaB+CVQ4RkhtRyoaNyNVL8y/5q5Zdmw4YE+jOANnqrc1aJ0QLOK7/4YTKmQ3FRcY2jxolqIyGP7uQplHLnGEO1zlt2Vj1mF89s6SWAFUHo5Egmpdh8DIk0ViEKAu2SI8zVBDeteF2qP9XteiHe7zcttzWU3RbUxVlUdG+kCh5pHJSTGjwwb2PFGgCTjKUXa4LwqkLUwrwsFTA/haMmgHM24VOohap5iosJzR21AWQudNaRSULOIN4VOmXknsJqCsQRBp3mFy930LaYfjRXRBiMcxgLPAuE2+b0yxj77/hQq4rRArsCWrecxZRYNlaqdM0HJbU3dw2rb78jLcL+unJOmUykd9Wz2JwQQi7rE5oYFh+ro/GBFxzGnw55iyHsnXrOcjgl2ZPUAZb6DTMg1Z0I5o8j7APMaVIsohWEE34hAeY9tYCHz1xMSYHlblty1RSlLr0XZRlAe/BmpiV6GtYVd/loGv2TRpSawd8vlHxflMMrJfoH2o0eKB0teDvRsGcr0P57rO1ieS3m0tSRjAUAANehcYozfHRNfQroe27f1BGxTg07Tybb9sSGnmnyS02agPJEVX/CwVNqlS6ETcQ5tKByDds4hhQZYDewiePra8ySCfSslWtdGgwzHYYp4adhE6t9X+xDgeWPtrex45ofulmiqrpbZkNw+xMQYzSmB0SrHZ2DOgAjVqwa9uVwMKsZsuvWv07odBs4b5fl4e/wkFZDLeQEhnFKsnX0tYOaRBxEGKpFTGNu9mkj+OaTEW4h9soCZeZAHtIaaKPQHZheWMZQvIN76lzCgq2bnxZDJGeUvRxrIu9C34DeLO3MuGig7SjcwoG1offMqGePGPJo9VG5znV7QRMu0kRRQBVQqA00vPwVpKv8+W+I6Lc9sukhZ974",
            "1; SN1PR0201MB1933;\n\t6:p9qbDCIv6MtqqDuTKijwvvDGz2aaA4Lx0Dhmr7h2h/y1ttueRZmu9US/qXXKs0+dXAGkmQwtWh/Nl7Hm24ss/QnZ7EmQBgVh7CeI1PUGHk7uN8jTqjUjV+eDaGk3H5q3R6vyi8ocukMAa+7aBXI0zDZ4KFCKp7CFGH04G9y5n8XQ2KH01T6vR30rVMbkdU/7yBZMqguZWfdUuCVd2s2/agMoesaHyebEnU6ADD321a6IRgx/ZLb2pRAkY7H1vcXaFe/Mv38EYiEQl0uGggRFCCwhcXXGqmxutQWn/KzhXXGz2IeC876CzoywTmfHEKrfd1yeSeNvwc9Qdscsq3PHDA==;\n\t5:Fr3y6pZKJEcwlgT1/ViS220ymSXJ/EO7qFUueKH5Rb40CZs7UalW7HhuNIwUjFb+/YIqBZuJMUEgzBSM029h83cfo41CjdMk6y6aMBwst5dpIPFtTI35PoONj02KuZNFVo75jOW0I6lEc2Et34PrrQ==;\n\t24:Uo0gBo41FDKhaaUcE3yu00vfkQ9Px9+3lR7XVUA4rJHFG9ydvwCEX49zO8AE4Y1XSriQ7ybsaKX7Pjg1ujosV+Ba1PVYWEDMjYTDYbMigRA=;\n\t7:GuMjwvBBkm7HnhdRD4N1IzGbZlQ05XOsmhBlatBHmsbDZIHPuE8BFY8nCFawulRB+lOVB2CoRfuTmSfP91M1TcRFJNnFuhi30bdurlISxtK/bOtg0NtgKNJrZG7GhBZZiK3Yh8qVfdrYMFnLMP/oOzeOtAVvsvhciUObYLgiLfkxqXjCRyxeQHE7rA5cjMqiCiIRS7j+kzWXxSCIzIaaBzw0j672XT0BOEqdQK2+hr0="
        ],
        "MIME-Version": "1.0",
        "Content-Type": "text/plain",
        "X-MS-PublicTrafficType": "Email",
        "X-MS-Office365-Filtering-Correlation-Id": "c8afb7cf-422c-40a4-a210-08d4f6b46caf",
        "X-Microsoft-Antispam": "UriScan:; BCL:0; PCL:0;\n\tRULEID:(300000500095)(300135000095)(300000501095)(300135300095)(22001)(300000502095)(300135100095)(2017030254152)(8251501002)(300000503095)(300135400095)(2017052603199)(201703131423075)(201703031133081)(201702281549075)(300000504095)(300135200095)(300000505095)(300135600095)(300000506095)(300135500095);\n\tSRVR:SN1PR0201MB1933; ",
        "X-MS-TrafficTypeDiagnostic": "SN1PR0201MB1933:",
        "X-LD-Processed": "657af505-d5df-48d0-8300-c31994686c5c,ExtAddr",
        "X-Exchange-Antispam-Report-Test": "UriScan:(20558992708506)(131327999870524)(192813158149592); ",
        "X-Microsoft-Antispam-PRVS": "<SN1PR0201MB1933CBE79F6AA40A1D71F851C9950@SN1PR0201MB1933.namprd02.prod.outlook.com>",
        "X-Exchange-Antispam-Report-CFA-Test": "BCL:0; PCL:0;\n\tRULEID:(100000700101)(100105000095)(100000701101)(100105300095)(100000702101)(100105100095)(6040450)(2401047)(8121501046)(5005006)(10201501046)(3002001)(93006095)(93004095)(100000703101)(100105400095)(6055026)(6041248)(20161123562025)(20161123560025)(20161123555025)(20161123558100)(20161123564025)(201703131423075)(201702281528075)(201703061421075)(201703061406153)(6072148)(201708071742011)(100000704101)(100105200095)(100000705101)(100105500095);\n\tSRVR:SN1PR0201MB1933; BCL:0; PCL:0;\n\tRULEID:(100000800101)(100110000095)(100000801101)(100110300095)(100000802101)(100110100095)(100000803101)(100110400095)(100000804101)(100110200095)(100000805101)(100110500095);\n\tSRVR:SN1PR0201MB1933; ",
        "X-Forefront-PRVS": "04244E0DC5",
        "SpamDiagnosticOutput": "1:99",
        "SpamDiagnosticMetadata": "NSPM",
        "X-OriginatorOrg": "xilinx.com",
        "X-MS-Exchange-CrossTenant-OriginalArrivalTime": "08 Sep 2017 12:23:30.1264\n\t(UTC)",
        "X-MS-Exchange-CrossTenant-Id": "657af505-d5df-48d0-8300-c31994686c5c",
        "X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp": "TenantId=657af505-d5df-48d0-8300-c31994686c5c;\n\tIp=[149.199.60.100]; Helo=[xsj-pvapsmtpgw02]",
        "X-MS-Exchange-CrossTenant-FromEntityHeader": "HybridOnPrem",
        "X-MS-Exchange-Transport-CrossTenantHeadersStamped": "SN1PR0201MB1933",
        "Sender": "linux-pci-owner@vger.kernel.org",
        "Precedence": "bulk",
        "List-ID": "<linux-pci.vger.kernel.org>",
        "X-Mailing-List": "linux-pci@vger.kernel.org"
    },
    "content": "Platform driver handles transactions for PCIe EP DMA and Root DMA\n\nSigned-off-by: Ravi Shankar Jonnalagadda <vjonnal@xilinx.com>\nSigned-off-by: RaviKiran Gummaluri <rgummal@xilinx.com>\n---\n drivers/dma/xilinx/ps_pcie_platform.c | 3055 +++++++++++++++++++++++++++++++++\n 1 file changed, 3055 insertions(+)\n create mode 100644 drivers/dma/xilinx/ps_pcie_platform.c",
    "diff": "diff --git a/drivers/dma/xilinx/ps_pcie_platform.c b/drivers/dma/xilinx/ps_pcie_platform.c\nnew file mode 100644\nindex 0000000..79f324a\n--- /dev/null\n+++ b/drivers/dma/xilinx/ps_pcie_platform.c\n@@ -0,0 +1,3055 @@\n+/*\n+ * XILINX PS PCIe DMA driver\n+ *\n+ * Copyright (C) 2017 Xilinx, Inc. All rights reserved.\n+ *\n+ * Description\n+ * PS PCIe DMA is memory mapped DMA used to execute PS to PL transfers\n+ * on ZynqMP UltraScale+ Devices\n+ *\n+ * This program is free software: you can redistribute it and/or modify\n+ * it under the terms of the GNU General Public License version 2 as\n+ * published by the Free Software Foundation\n+ */\n+\n+#include \"ps_pcie.h\"\n+#include \"../dmaengine.h\"\n+\n+#define PLATFORM_DRIVER_NAME\t\t  \"ps_pcie_pform_dma\"\n+#define MAX_BARS 6\n+\n+#define DMA_BAR_NUMBER 0\n+\n+#define MIN_SW_INTR_TRANSACTIONS       2\n+\n+#define CHANNEL_PROPERTY_LENGTH 50\n+#define WORKQ_NAME_SIZE\t\t100\n+#define INTR_HANDLR_NAME_SIZE   100\n+\n+#define PS_PCIE_DMA_IRQ_NOSHARE    0\n+\n+#define MAX_COALESCE_COUNT     255\n+\n+#define DMA_CHANNEL_REGS_SIZE 0x80\n+\n+#define DMA_SRCQPTRLO_REG_OFFSET  (0x00) /* Source Q pointer Lo */\n+#define DMA_SRCQPTRHI_REG_OFFSET  (0x04) /* Source Q pointer Hi */\n+#define DMA_SRCQSZ_REG_OFFSET     (0x08) /* Source Q size */\n+#define DMA_SRCQLMT_REG_OFFSET    (0x0C) /* Source Q limit */\n+#define DMA_DSTQPTRLO_REG_OFFSET  (0x10) /* Destination Q pointer Lo */\n+#define DMA_DSTQPTRHI_REG_OFFSET  (0x14) /* Destination Q pointer Hi */\n+#define DMA_DSTQSZ_REG_OFFSET     (0x18) /* Destination Q size */\n+#define DMA_DSTQLMT_REG_OFFSET    (0x1C) /* Destination Q limit */\n+#define DMA_SSTAQPTRLO_REG_OFFSET (0x20) /* Source Status Q pointer Lo */\n+#define DMA_SSTAQPTRHI_REG_OFFSET (0x24) /* Source Status Q pointer Hi */\n+#define DMA_SSTAQSZ_REG_OFFSET    (0x28) /* Source Status Q size */\n+#define DMA_SSTAQLMT_REG_OFFSET   (0x2C) /* Source Status Q limit */\n+#define DMA_DSTAQPTRLO_REG_OFFSET (0x30) /* Destination Status Q pointer Lo */\n+#define DMA_DSTAQPTRHI_REG_OFFSET (0x34) /* Destination Status Q pointer Hi */\n+#define DMA_DSTAQSZ_REG_OFFSET    (0x38) /* Destination Status Q size */\n+#define DMA_DSTAQLMT_REG_OFFSET   (0x3C) /* Destination Status Q limit */\n+#define DMA_SRCQNXT_REG_OFFSET    (0x40) /* Source Q next */\n+#define DMA_DSTQNXT_REG_OFFSET    (0x44) /* Destination Q next */\n+#define DMA_SSTAQNXT_REG_OFFSET   (0x48) /* Source Status Q next */\n+#define DMA_DSTAQNXT_REG_OFFSET   (0x4C) /* Destination Status Q next */\n+#define DMA_SCRATCH0_REG_OFFSET   (0x50) /* Scratch pad register 0 */\n+\n+#define DMA_PCIE_INTR_CNTRL_REG_OFFSET  (0x60) /* DMA PCIe intr control reg */\n+#define DMA_PCIE_INTR_STATUS_REG_OFFSET (0x64) /* DMA PCIe intr status reg */\n+#define DMA_AXI_INTR_CNTRL_REG_OFFSET   (0x68) /* DMA AXI intr control reg */\n+#define DMA_AXI_INTR_STATUS_REG_OFFSET  (0x6C) /* DMA AXI intr status reg */\n+#define DMA_PCIE_INTR_ASSRT_REG_OFFSET  (0x70) /* PCIe intr assert reg */\n+#define DMA_AXI_INTR_ASSRT_REG_OFFSET   (0x74) /* AXI intr assert register */\n+#define DMA_CNTRL_REG_OFFSET            (0x78) /* DMA control register */\n+#define DMA_STATUS_REG_OFFSET           (0x7C) /* DMA status register */\n+\n+#define DMA_CNTRL_RST_BIT               BIT(1)\n+#define DMA_CNTRL_64BIT_STAQ_ELEMSZ_BIT BIT(2)\n+#define DMA_CNTRL_ENABL_BIT             BIT(0)\n+#define DMA_STATUS_DMA_PRES_BIT         BIT(15)\n+#define DMA_STATUS_DMA_RUNNING_BIT      BIT(0)\n+#define DMA_QPTRLO_QLOCAXI_BIT          
BIT(0)\n+#define DMA_QPTRLO_Q_ENABLE_BIT         BIT(1)\n+#define DMA_INTSTATUS_DMAERR_BIT        BIT(1)\n+#define DMA_INTSTATUS_SGLINTR_BIT       BIT(2)\n+#define DMA_INTSTATUS_SWINTR_BIT        BIT(3)\n+#define DMA_INTCNTRL_ENABLINTR_BIT      BIT(0)\n+#define DMA_INTCNTRL_DMAERRINTR_BIT     BIT(1)\n+#define DMA_INTCNTRL_DMASGINTR_BIT      BIT(2)\n+#define DMA_SW_INTR_ASSRT_BIT           BIT(3)\n+\n+#define SOURCE_CONTROL_BD_BYTE_COUNT_MASK       GENMASK(23, 0)\n+#define SOURCE_CONTROL_BD_LOC_AXI\t\tBIT(24)\n+#define SOURCE_CONTROL_BD_EOP_BIT               BIT(25)\n+#define SOURCE_CONTROL_BD_INTR_BIT              BIT(26)\n+#define SOURCE_CONTROL_BACK_TO_BACK_PACK_BIT    BIT(25)\n+#define SOURCE_CONTROL_ATTRIBUTES_MASK          GENMASK(31, 28)\n+#define SRC_CTL_ATTRIB_BIT_SHIFT                (29)\n+\n+#define STA_BD_COMPLETED_BIT            BIT(0)\n+#define STA_BD_SOURCE_ERROR_BIT         BIT(1)\n+#define STA_BD_DESTINATION_ERROR_BIT    BIT(2)\n+#define STA_BD_INTERNAL_ERROR_BIT       BIT(3)\n+#define STA_BD_UPPER_STATUS_NONZERO_BIT BIT(31)\n+#define STA_BD_BYTE_COUNT_MASK          GENMASK(30, 4)\n+\n+#define STA_BD_BYTE_COUNT_SHIFT         4\n+\n+#define DMA_INTCNTRL_SGCOLSCCNT_BIT_SHIFT (16)\n+\n+#define DMA_SRC_Q_LOW_BIT_SHIFT   GENMASK(5, 0)\n+\n+#define MAX_TRANSFER_LENGTH       0x1000000\n+\n+#define AXI_ATTRIBUTE       0x3\n+#define PCI_ATTRIBUTE       0x2\n+\n+#define ROOTDMA_Q_READ_ATTRIBUTE 0x8\n+\n+/*\n+ * User Id programmed into Source Q will be copied into Status Q of Destination\n+ */\n+#define DEFAULT_UID 1\n+\n+/*\n+ * DMA channel registers\n+ */\n+struct DMA_ENGINE_REGISTERS {\n+\tu32 src_q_low;          /* 0x00 */\n+\tu32 src_q_high;         /* 0x04 */\n+\tu32 src_q_size;         /* 0x08 */\n+\tu32 src_q_limit;        /* 0x0C */\n+\tu32 dst_q_low;          /* 0x10 */\n+\tu32 dst_q_high;         /* 0x14 */\n+\tu32 dst_q_size;         /* 0x18 */\n+\tu32 dst_q_limit;        /* 0x1c */\n+\tu32 stas_q_low;         /* 0x20 */\n+\tu32 stas_q_high;        /* 0x24 */\n+\tu32 stas_q_size;        /* 0x28 */\n+\tu32 stas_q_limit;       /* 0x2C */\n+\tu32 stad_q_low;         /* 0x30 */\n+\tu32 stad_q_high;        /* 0x34 */\n+\tu32 stad_q_size;        /* 0x38 */\n+\tu32 stad_q_limit;       /* 0x3C */\n+\tu32 src_q_next;         /* 0x40 */\n+\tu32 dst_q_next;         /* 0x44 */\n+\tu32 stas_q_next;        /* 0x48 */\n+\tu32 stad_q_next;        /* 0x4C */\n+\tu32 scrathc0;           /* 0x50 */\n+\tu32 scrathc1;           /* 0x54 */\n+\tu32 scrathc2;           /* 0x58 */\n+\tu32 scrathc3;           /* 0x5C */\n+\tu32 pcie_intr_cntrl;    /* 0x60 */\n+\tu32 pcie_intr_status;   /* 0x64 */\n+\tu32 axi_intr_cntrl;     /* 0x68 */\n+\tu32 axi_intr_status;    /* 0x6C */\n+\tu32 pcie_intr_assert;   /* 0x70 */\n+\tu32 axi_intr_assert;    /* 0x74 */\n+\tu32 dma_channel_ctrl;   /* 0x78 */\n+\tu32 dma_channel_status; /* 0x7C */\n+} __attribute__((__packed__));\n+\n+/**\n+ * struct SOURCE_DMA_DESCRIPTOR - Source Hardware Descriptor\n+ * @system_address: 64 bit buffer physical address\n+ * @control_byte_count: Byte count/buffer length and control flags\n+ * @user_handle: User handle gets copied to status q on completion\n+ * @user_id: User id gets copied to status q of destination\n+ */\n+struct SOURCE_DMA_DESCRIPTOR {\n+\tu64 system_address;\n+\tu32 control_byte_count;\n+\tu16 user_handle;\n+\tu16 user_id;\n+} __attribute__((__packed__));\n+\n+/**\n+ * struct DEST_DMA_DESCRIPTOR - Destination Hardware Descriptor\n+ * @system_address: 64 bit buffer physical address\n+ * @control_byte_count: Byte 
count/buffer length and control flags\n+ * @user_handle: User handle gets copied to status q on completion\n+ * @reserved: Reserved field\n+ */\n+struct DEST_DMA_DESCRIPTOR {\n+\tu64 system_address;\n+\tu32 control_byte_count;\n+\tu16 user_handle;\n+\tu16 reserved;\n+} __attribute__((__packed__));\n+\n+/**\n+ * struct STATUS_DMA_DESCRIPTOR - Status Hardware Descriptor\n+ * @status_flag_byte_count: Byte count/buffer length and status flags\n+ * @user_handle: User handle gets copied from src/dstq on completion\n+ * @user_id: User id gets copied from srcq\n+ */\n+struct STATUS_DMA_DESCRIPTOR {\n+\tu32 status_flag_byte_count;\n+\tu16 user_handle;\n+\tu16 user_id;\n+} __attribute__((__packed__));\n+\n+enum PACKET_CONTEXT_AVAILABILITY {\n+\tFREE = 0,    /*Packet transfer Parameter context is free.*/\n+\tIN_USE       /*Packet transfer Parameter context is in use.*/\n+};\n+\n+struct ps_pcie_transfer_elements {\n+\tstruct scatterlist *src_sgl;\n+\tunsigned int srcq_num_elemets;\n+\tstruct scatterlist *dst_sgl;\n+\tunsigned int dstq_num_elemets;\n+};\n+\n+struct  ps_pcie_tx_segment {\n+\tstruct list_head node;\n+\tstruct dma_async_tx_descriptor async_tx;\n+\tstruct ps_pcie_transfer_elements tx_elements;\n+};\n+\n+struct ps_pcie_intr_segment {\n+\tstruct list_head node;\n+\tstruct dma_async_tx_descriptor async_intr_tx;\n+};\n+\n+/*\n+ * The context structure stored for each DMA transaction\n+ * This structure is maintained separately for Src Q and Destination Q\n+ * @availability_status: Indicates whether packet context is available\n+ * @idx_sop: Indicates starting index of buffer descriptor for a transfer\n+ * @idx_eop: Indicates ending index of buffer descriptor for a transfer\n+ * @sgl: Indicates either src or dst sglist for the transaction\n+ */\n+struct PACKET_TRANSFER_PARAMS {\n+\tenum PACKET_CONTEXT_AVAILABILITY availability_status;\n+\tu16 idx_sop;\n+\tu16 idx_eop;\n+\tstruct scatterlist *sgl;\n+\tstruct ps_pcie_tx_segment *seg;\n+\tu32 requested_bytes;\n+};\n+\n+enum CHANNEL_STATE {\n+\tCHANNEL_RESOURCE_UNALLOCATED = 0, /*  Channel resources not allocated */\n+\tCHANNEL_UNAVIALBLE,               /*  Channel inactive */\n+\tCHANNEL_AVAILABLE,                /*  Channel available for transfers */\n+\tCHANNEL_ERROR                     /*  Channel encountered errors */\n+};\n+\n+enum BUFFER_LOCATION {\n+\tBUFFER_LOC_PCI = 0,\n+\tBUFFER_LOC_AXI,\n+\tBUFFER_LOC_INVALID\n+};\n+\n+enum dev_channel_properties {\n+\tDMA_CHANNEL_DIRECTION = 0,\n+\tNUM_DESCRIPTORS,\n+\tNUM_QUEUES,\n+\tCOALESE_COUNT,\n+\tPOLL_TIMER_FREQUENCY\n+};\n+\n+/*\n+ * struct ps_pcie_dma_chan - Driver specific DMA channel structure\n+ * @xdev: Driver specific device structure\n+ * @dev: The dma device\n+ * @common:  DMA common channel\n+ * @chan_base: Pointer to Channel registers\n+ * @channel_number: DMA channel number in the device\n+ * @num_queues: Number of queues per channel.\n+ *\t\tIt should be four for memory mapped case and\n+ *\t\ttwo for Streaming case\n+ * @direction: Transfer direction\n+ * @state: Indicates channel state\n+ * @channel_lock: Spin lock to be used before changing channel state\n+ * @cookie_lock: Spin lock to be used before assigning cookie for a transaction\n+ * @coalesce_count: Indicates number of packet transfers before interrupts\n+ * @poll_timer_freq:Indicates frequency of polling for completed transactions\n+ * @poll_timer: Timer to poll dma buffer descriptors if coalesce count is > 0\n+ * @src_avail_descriptors: Available sgl source descriptors\n+ * @src_desc_lock: Lock for synchronizing 
src_avail_descriptors\n+ * @dst_avail_descriptors: Available sgl destination descriptors\n+ * @dst_desc_lock: Lock for synchronizing\n+ *\t\tdst_avail_descriptors\n+ * @src_sgl_bd_pa: Physical address of Source SGL buffer Descriptors\n+ * @psrc_sgl_bd: Virtual address of Source SGL buffer Descriptors\n+ * @src_sgl_freeidx: Holds index of Source SGL buffer descriptor to be filled\n+ * @sglDestinationQLock:Lock to serialize Destination Q updates\n+ * @dst_sgl_bd_pa: Physical address of Dst SGL buffer Descriptors\n+ * @pdst_sgl_bd: Virtual address of Dst SGL buffer Descriptors\n+ * @dst_sgl_freeidx: Holds index of Destination SGL\n+ * @src_sta_bd_pa: Physical address of StatusQ buffer Descriptors\n+ * @psrc_sta_bd: Virtual address of Src StatusQ buffer Descriptors\n+ * @src_staprobe_idx: Holds index of Status Q to be examined for SrcQ updates\n+ * @src_sta_hw_probe_idx: Holds index of maximum limit of Status Q for hardware\n+ * @dst_sta_bd_pa: Physical address of Dst StatusQ buffer Descriptor\n+ * @pdst_sta_bd: Virtual address of Dst Status Q buffer Descriptors\n+ * @dst_staprobe_idx: Holds index of Status Q to be examined for updates\n+ * @dst_sta_hw_probe_idx: Holds index of max limit of Dst Status Q for hardware\n+ * @@read_attribute: Describes the attributes of buffer in srcq\n+ * @@write_attribute: Describes the attributes of buffer in dstq\n+ * @@intr_status_offset: Register offset to be cheked on receiving interrupt\n+ * @@intr_status_offset: Register offset to be used to control interrupts\n+ * @ppkt_ctx_srcq: Virtual address of packet context to Src Q updates\n+ * @idx_ctx_srcq_head: Holds index of packet context to be filled for Source Q\n+ * @idx_ctx_srcq_tail: Holds index of packet context to be examined for Source Q\n+ * @ppkt_ctx_dstq: Virtual address of packet context to Dst Q updates\n+ * @idx_ctx_dstq_head: Holds index of packet context to be filled for Dst Q\n+ * @idx_ctx_dstq_tail: Holds index of packet context to be examined for Dst Q\n+ * @pending_list_lock: Lock to be taken before updating pending transfers list\n+ * @pending_list: List of transactions submitted to channel\n+ * @active_list_lock: Lock to be taken before transferring transactions from\n+ *\t\t\tpending list to active list which will be subsequently\n+ *\t\t\t\tsubmitted to hardware\n+ * @active_list: List of transactions that will be submitted to hardware\n+ * @pending_interrupts_lock: Lock to be taken before updating pending Intr list\n+ * @pending_interrupts_list: List of interrupt transactions submitted to channel\n+ * @active_interrupts_lock: Lock to be taken before transferring transactions\n+ *\t\t\tfrom pending interrupt list to active interrupt list\n+ * @active_interrupts_list: List of interrupt transactions that are active\n+ * @transactions_pool: Mem pool to allocate dma transactions quickly\n+ * @intr_transactions_pool: Mem pool to allocate interrupt transactions quickly\n+ * @sw_intrs_wrkq: Work Q which performs handling of software intrs\n+ * @handle_sw_intrs:Work function handling software interrupts\n+ * @maintenance_workq: Work Q to perform maintenance tasks during stop or error\n+ * @handle_chan_reset: Work that invokes channel reset function\n+ * @handle_chan_shutdown: Work that invokes channel shutdown function\n+ * @handle_chan_terminate: Work that invokes channel transactions termination\n+ * @chan_shutdown_complt: Completion variable which says shutdown is done\n+ * @chan_terminate_complete: Completion variable which says terminate is done\n+ * @primary_desc_cleanup: Work Q which 
performs work related to sgl handling\n+ * @handle_primary_desc_cleanup: Work that invokes src Q, dst Q cleanup\n+ *\t\t\t\tand programming\n+ * @chan_programming: Work Q which performs work related to channel programming\n+ * @handle_chan_programming: Work that invokes channel programming function\n+ * @srcq_desc_cleanup: Work Q which performs src Q descriptor cleanup\n+ * @handle_srcq_desc_cleanup: Work function handling Src Q completions\n+ * @dstq_desc_cleanup: Work Q which performs dst Q descriptor cleanup\n+ * @handle_dstq_desc_cleanup: Work function handling Dst Q completions\n+ * @srcq_work_complete: Src Q Work completion variable for primary work\n+ * @dstq_work_complete: Dst Q Work completion variable for primary work\n+ */\n+struct ps_pcie_dma_chan {\n+\tstruct xlnx_pcie_dma_device *xdev;\n+\tstruct device *dev;\n+\n+\tstruct dma_chan common;\n+\n+\tstruct DMA_ENGINE_REGISTERS *chan_base;\n+\tu16 channel_number;\n+\n+\tu32 num_queues;\n+\tenum dma_data_direction direction;\n+\tenum BUFFER_LOCATION srcq_buffer_location;\n+\tenum BUFFER_LOCATION dstq_buffer_location;\n+\n+\tu32 total_descriptors;\n+\n+\tenum CHANNEL_STATE state;\n+\tspinlock_t channel_lock; /* For changing channel state */\n+\n+\tspinlock_t cookie_lock;  /* For acquiring cookie from dma framework*/\n+\n+\tu32 coalesce_count;\n+\tu32 poll_timer_freq;\n+\n+\tstruct timer_list poll_timer;\n+\n+\tu32 src_avail_descriptors;\n+\tspinlock_t src_desc_lock; /* For handling srcq available descriptors */\n+\n+\tu32 dst_avail_descriptors;\n+\tspinlock_t dst_desc_lock; /* For handling dstq available descriptors */\n+\n+\tdma_addr_t src_sgl_bd_pa;\n+\tstruct SOURCE_DMA_DESCRIPTOR *psrc_sgl_bd;\n+\tu32 src_sgl_freeidx;\n+\n+\tdma_addr_t dst_sgl_bd_pa;\n+\tstruct DEST_DMA_DESCRIPTOR *pdst_sgl_bd;\n+\tu32 dst_sgl_freeidx;\n+\n+\tdma_addr_t src_sta_bd_pa;\n+\tstruct STATUS_DMA_DESCRIPTOR *psrc_sta_bd;\n+\tu32 src_staprobe_idx;\n+\tu32 src_sta_hw_probe_idx;\n+\n+\tdma_addr_t dst_sta_bd_pa;\n+\tstruct STATUS_DMA_DESCRIPTOR *pdst_sta_bd;\n+\tu32 dst_staprobe_idx;\n+\tu32 dst_sta_hw_probe_idx;\n+\n+\tu32 read_attribute;\n+\tu32 write_attribute;\n+\n+\tu32 intr_status_offset;\n+\tu32 intr_control_offset;\n+\n+\tstruct PACKET_TRANSFER_PARAMS *ppkt_ctx_srcq;\n+\tu16 idx_ctx_srcq_head;\n+\tu16 idx_ctx_srcq_tail;\n+\n+\tstruct PACKET_TRANSFER_PARAMS *ppkt_ctx_dstq;\n+\tu16 idx_ctx_dstq_head;\n+\tu16 idx_ctx_dstq_tail;\n+\n+\tspinlock_t  pending_list_lock; /* For handling dma pending_list */\n+\tstruct list_head pending_list;\n+\tspinlock_t  active_list_lock; /* For handling dma active_list */\n+\tstruct list_head active_list;\n+\n+\tspinlock_t pending_interrupts_lock; /* For dma pending interrupts list*/\n+\tstruct list_head pending_interrupts_list;\n+\tspinlock_t active_interrupts_lock;  /* For dma active interrupts list*/\n+\tstruct list_head active_interrupts_list;\n+\n+\tmempool_t *transactions_pool;\n+\tmempool_t *intr_transactions_pool;\n+\n+\tstruct workqueue_struct *sw_intrs_wrkq;\n+\tstruct work_struct handle_sw_intrs;\n+\n+\tstruct workqueue_struct *maintenance_workq;\n+\tstruct work_struct handle_chan_reset;\n+\tstruct work_struct handle_chan_shutdown;\n+\tstruct work_struct handle_chan_terminate;\n+\n+\tstruct completion chan_shutdown_complt;\n+\tstruct completion chan_terminate_complete;\n+\n+\tstruct workqueue_struct *primary_desc_cleanup;\n+\tstruct work_struct handle_primary_desc_cleanup;\n+\n+\tstruct workqueue_struct *chan_programming;\n+\tstruct work_struct handle_chan_programming;\n+\n+\tstruct workqueue_struct 
*srcq_desc_cleanup;\n+\tstruct work_struct handle_srcq_desc_cleanup;\n+\tstruct completion srcq_work_complete;\n+\n+\tstruct workqueue_struct *dstq_desc_cleanup;\n+\tstruct work_struct handle_dstq_desc_cleanup;\n+\tstruct completion dstq_work_complete;\n+};\n+\n+/*\n+ * struct xlnx_pcie_dma_device - Driver specific platform device structure\n+ * @is_rootdma: Indicates whether the dma instance is root port dma\n+ * @dma_buf_ext_addr: Indicates whether target system is 32 bit or 64 bit\n+ * @bar_mask: Indicates available pcie bars\n+ * @board_number: Count value of platform device\n+ * @dev: Device structure pointer for pcie device\n+ * @channels: Pointer to device DMA channels structure\n+ * @common: DMA device structure\n+ * @num_channels: Number of channels active for the device\n+ * @reg_base: Base address of first DMA channel of the device\n+ * @irq_vecs: Number of irq vectors allocated to pci device\n+ * @pci_dev: Parent pci device which created this platform device\n+ * @bar_info: PCIe bar related information\n+ * @platform_irq_vec: Platform irq vector number for root dma\n+ * @rootdma_vendor: PCI Vendor id for root dma\n+ * @rootdma_device: PCI Device id for root dma\n+ */\n+struct xlnx_pcie_dma_device {\n+\tbool is_rootdma;\n+\tbool dma_buf_ext_addr;\n+\tu32 bar_mask;\n+\tu16 board_number;\n+\tstruct device *dev;\n+\tstruct ps_pcie_dma_chan *channels;\n+\tstruct dma_device common;\n+\tint num_channels;\n+\tint irq_vecs;\n+\tvoid __iomem *reg_base;\n+\tstruct pci_dev *pci_dev;\n+\tstruct BAR_PARAMS bar_info[MAX_BARS];\n+\tint platform_irq_vec;\n+\tu16 rootdma_vendor;\n+\tu16 rootdma_device;\n+};\n+\n+#define to_xilinx_chan(chan) \\\n+\tcontainer_of(chan, struct ps_pcie_dma_chan, common)\n+#define to_ps_pcie_dma_tx_descriptor(tx) \\\n+\tcontainer_of(tx, struct ps_pcie_tx_segment, async_tx)\n+#define to_ps_pcie_dma_tx_intr_descriptor(tx) \\\n+\tcontainer_of(tx, struct ps_pcie_intr_segment, async_intr_tx)\n+\n+/* Function Protypes */\n+static u32 ps_pcie_dma_read(struct ps_pcie_dma_chan *chan, u32 reg);\n+static void ps_pcie_dma_write(struct ps_pcie_dma_chan *chan, u32 reg,\n+\t\t\t      u32 value);\n+static void ps_pcie_dma_clr_mask(struct ps_pcie_dma_chan *chan, u32 reg,\n+\t\t\t\t u32 mask);\n+static void ps_pcie_dma_set_mask(struct ps_pcie_dma_chan *chan, u32 reg,\n+\t\t\t\t u32 mask);\n+static int irq_setup(struct xlnx_pcie_dma_device *xdev);\n+static int platform_irq_setup(struct xlnx_pcie_dma_device *xdev);\n+static int chan_intr_setup(struct xlnx_pcie_dma_device *xdev);\n+static int device_intr_setup(struct xlnx_pcie_dma_device *xdev);\n+static int irq_probe(struct xlnx_pcie_dma_device *xdev);\n+static int ps_pcie_check_intr_status(struct ps_pcie_dma_chan *chan);\n+static irqreturn_t ps_pcie_dma_dev_intr_handler(int irq, void *data);\n+static irqreturn_t ps_pcie_dma_chan_intr_handler(int irq, void *data);\n+static int init_hw_components(struct ps_pcie_dma_chan *chan);\n+static int init_sw_components(struct ps_pcie_dma_chan *chan);\n+static void update_channel_read_attribute(struct ps_pcie_dma_chan *chan);\n+static void update_channel_write_attribute(struct ps_pcie_dma_chan *chan);\n+static void ps_pcie_chan_reset(struct ps_pcie_dma_chan *chan);\n+static void poll_completed_transactions(unsigned long arg);\n+static bool check_descriptors_for_two_queues(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t\t     struct ps_pcie_tx_segment *seg);\n+static bool check_descriptors_for_all_queues(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t\t     struct ps_pcie_tx_segment *seg);\n+static bool 
check_descriptor_availability(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t\t  struct ps_pcie_tx_segment *seg);\n+static void handle_error(struct ps_pcie_dma_chan *chan);\n+static void xlnx_ps_pcie_update_srcq(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t     struct ps_pcie_tx_segment *seg);\n+static void xlnx_ps_pcie_update_dstq(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t     struct ps_pcie_tx_segment *seg);\n+static void ps_pcie_chan_program_work(struct work_struct *work);\n+static void dst_cleanup_work(struct work_struct *work);\n+static void src_cleanup_work(struct work_struct *work);\n+static void ps_pcie_chan_primary_work(struct work_struct *work);\n+static int probe_channel_properties(struct platform_device *platform_dev,\n+\t\t\t\t    struct xlnx_pcie_dma_device *xdev,\n+\t\t\t\t    u16 channel_number);\n+static void xlnx_ps_pcie_destroy_mempool(struct ps_pcie_dma_chan *chan);\n+static void xlnx_ps_pcie_free_worker_queues(struct ps_pcie_dma_chan *chan);\n+static void xlnx_ps_pcie_free_pkt_ctxts(struct ps_pcie_dma_chan *chan);\n+static void xlnx_ps_pcie_free_descriptors(struct ps_pcie_dma_chan *chan);\n+static int xlnx_ps_pcie_channel_activate(struct ps_pcie_dma_chan *chan);\n+static void xlnx_ps_pcie_channel_quiesce(struct ps_pcie_dma_chan *chan);\n+static void ivk_cbk_for_pending(struct ps_pcie_dma_chan *chan);\n+static void xlnx_ps_pcie_reset_channel(struct ps_pcie_dma_chan *chan);\n+static void xlnx_ps_pcie_free_poll_timer(struct ps_pcie_dma_chan *chan);\n+static int xlnx_ps_pcie_alloc_poll_timer(struct ps_pcie_dma_chan *chan);\n+static void terminate_transactions_work(struct work_struct *work);\n+static void chan_shutdown_work(struct work_struct *work);\n+static void chan_reset_work(struct work_struct *work);\n+static int xlnx_ps_pcie_alloc_worker_threads(struct ps_pcie_dma_chan *chan);\n+static int xlnx_ps_pcie_alloc_mempool(struct ps_pcie_dma_chan *chan);\n+static int xlnx_ps_pcie_alloc_pkt_contexts(struct ps_pcie_dma_chan *chan);\n+static int dma_alloc_descriptors_two_queues(struct ps_pcie_dma_chan *chan);\n+static int dma_alloc_decriptors_all_queues(struct ps_pcie_dma_chan *chan);\n+static void xlnx_ps_pcie_dma_free_chan_resources(struct dma_chan *dchan);\n+static int xlnx_ps_pcie_dma_alloc_chan_resources(struct dma_chan *dchan);\n+static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx);\n+static dma_cookie_t xilinx_intr_tx_submit(struct dma_async_tx_descriptor *tx);\n+static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_dma_sg(\n+\t\tstruct dma_chan *channel, struct scatterlist *dst_sg,\n+\t\tunsigned int dst_nents, struct scatterlist *src_sg,\n+\t\tunsigned int src_nents, unsigned long flags);\n+static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_slave_sg(\n+\t\tstruct dma_chan *channel, struct scatterlist *sgl,\n+\t\tunsigned int sg_len, enum dma_transfer_direction direction,\n+\t\tunsigned long flags, void *context);\n+static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_interrupt(\n+\t\tstruct dma_chan *channel, unsigned long flags);\n+static void xlnx_ps_pcie_dma_issue_pending(struct dma_chan *channel);\n+static int xlnx_ps_pcie_dma_terminate_all(struct dma_chan *channel);\n+static int read_rootdma_config(struct platform_device *platform_dev,\n+\t\t\t       struct xlnx_pcie_dma_device *xdev);\n+static int read_epdma_config(struct platform_device *platform_dev,\n+\t\t\t     struct xlnx_pcie_dma_device *xdev);\n+static int xlnx_pcie_dma_driver_probe(struct platform_device *platform_dev);\n+static int xlnx_pcie_dma_driver_remove(struct 
platform_device *platform_dev);\n+\n+/* IO accessors */\n+static inline u32 ps_pcie_dma_read(struct ps_pcie_dma_chan *chan, u32 reg)\n+{\n+\treturn ioread32((void __iomem *)((char *)(chan->chan_base) + reg));\n+}\n+\n+static inline void ps_pcie_dma_write(struct ps_pcie_dma_chan *chan, u32 reg,\n+\t\t\t\t     u32 value)\n+{\n+\tiowrite32(value, (void __iomem *)((char *)(chan->chan_base) + reg));\n+}\n+\n+static inline void ps_pcie_dma_clr_mask(struct ps_pcie_dma_chan *chan, u32 reg,\n+\t\t\t\t\tu32 mask)\n+{\n+\tps_pcie_dma_write(chan, reg, ps_pcie_dma_read(chan, reg) & ~mask);\n+}\n+\n+static inline void ps_pcie_dma_set_mask(struct ps_pcie_dma_chan *chan, u32 reg,\n+\t\t\t\t\tu32 mask)\n+{\n+\tps_pcie_dma_write(chan, reg, ps_pcie_dma_read(chan, reg) | mask);\n+}\n+\n+/**\n+ * ps_pcie_dma_dev_intr_handler - This will be invoked for MSI/Legacy interrupts\n+ *\n+ * @irq: IRQ number\n+ * @data: Pointer to the PS PCIe DMA channel structure\n+ *\n+ * Return: IRQ_HANDLED/IRQ_NONE\n+ */\n+static irqreturn_t ps_pcie_dma_dev_intr_handler(int irq, void *data)\n+{\n+\tstruct xlnx_pcie_dma_device *xdev =\n+\t\t(struct xlnx_pcie_dma_device *)data;\n+\tstruct ps_pcie_dma_chan *chan = NULL;\n+\tint i;\n+\tint err = -1;\n+\tint ret = -1;\n+\n+\tfor (i = 0; i < xdev->num_channels; i++) {\n+\t\tchan = &xdev->channels[i];\n+\t\terr = ps_pcie_check_intr_status(chan);\n+\t\tif (err == 0)\n+\t\t\tret = 0;\n+\t}\n+\n+\treturn (ret == 0) ? IRQ_HANDLED : IRQ_NONE;\n+}\n+\n+/**\n+ * ps_pcie_dma_chan_intr_handler - This will be invoked for MSI-X interrupts\n+ *\n+ * @irq: IRQ number\n+ * @data: Pointer to the PS PCIe DMA channel structure\n+ *\n+ * Return: IRQ_HANDLED\n+ */\n+static irqreturn_t ps_pcie_dma_chan_intr_handler(int irq, void *data)\n+{\n+\tstruct ps_pcie_dma_chan *chan = (struct ps_pcie_dma_chan *)data;\n+\n+\tps_pcie_check_intr_status(chan);\n+\n+\treturn IRQ_HANDLED;\n+}\n+\n+/**\n+ * chan_intr_setup - Requests Interrupt handler for individual channels\n+ *\n+ * @xdev: Driver specific data for device\n+ *\n+ * Return: 0 on success and non zero value on failure.\n+ */\n+static int chan_intr_setup(struct xlnx_pcie_dma_device *xdev)\n+{\n+\tstruct ps_pcie_dma_chan *chan;\n+\tint i;\n+\tint err = 0;\n+\n+\tfor (i = 0; i < xdev->num_channels; i++) {\n+\t\tchan = &xdev->channels[i];\n+\t\terr = devm_request_irq(xdev->dev,\n+\t\t\t\t       pci_irq_vector(xdev->pci_dev, i),\n+\t\t\t\t       ps_pcie_dma_chan_intr_handler,\n+\t\t\t\t       PS_PCIE_DMA_IRQ_NOSHARE,\n+\t\t\t\t       \"PS PCIe DMA Chan Intr handler\", chan);\n+\t\tif (err) {\n+\t\t\tdev_err(xdev->dev,\n+\t\t\t\t\"Irq %d for chan %d error %d\\n\",\n+\t\t\t\tpci_irq_vector(xdev->pci_dev, i),\n+\t\t\t\tchan->channel_number, err);\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\tif (err) {\n+\t\twhile (--i >= 0) {\n+\t\t\tchan = &xdev->channels[i];\n+\t\t\tdevm_free_irq(xdev->dev,\n+\t\t\t\t      pci_irq_vector(xdev->pci_dev, i), chan);\n+\t\t}\n+\t}\n+\n+\treturn err;\n+}\n+\n+/**\n+ * device_intr_setup - Requests interrupt handler for DMA device\n+ *\n+ * @xdev: Driver specific data for device\n+ *\n+ * Return: 0 on success and non zero value on failure.\n+ */\n+static int device_intr_setup(struct xlnx_pcie_dma_device *xdev)\n+{\n+\tint err;\n+\tunsigned long intr_flags = IRQF_SHARED;\n+\n+\tif (xdev->pci_dev->msix_enabled || xdev->pci_dev->msi_enabled)\n+\t\tintr_flags = PS_PCIE_DMA_IRQ_NOSHARE;\n+\n+\terr = devm_request_irq(xdev->dev,\n+\t\t\t       pci_irq_vector(xdev->pci_dev, 0),\n+\t\t\t       ps_pcie_dma_dev_intr_handler,\n+\t\t\t       
intr_flags,\n+\t\t\t       \"PS PCIe DMA Intr Handler\", xdev);\n+\tif (err)\n+\t\tdev_err(xdev->dev, \"Couldn't request irq %d\\n\",\n+\t\t\tpci_irq_vector(xdev->pci_dev, 0));\n+\n+\treturn err;\n+}\n+\n+/**\n+ * irq_setup - Requests interrupts based on the interrupt type detected\n+ *\n+ * @xdev: Driver specific data for device\n+ *\n+ * Return: 0 on success and non zero value on failure.\n+ */\n+static int irq_setup(struct xlnx_pcie_dma_device *xdev)\n+{\n+\tint err;\n+\n+\tif (xdev->irq_vecs == xdev->num_channels)\n+\t\terr = chan_intr_setup(xdev);\n+\telse\n+\t\terr = device_intr_setup(xdev);\n+\n+\treturn err;\n+}\n+\n+static int platform_irq_setup(struct xlnx_pcie_dma_device *xdev)\n+{\n+\tint err;\n+\n+\terr = devm_request_irq(xdev->dev,\n+\t\t\t       xdev->platform_irq_vec,\n+\t\t\t       ps_pcie_dma_dev_intr_handler,\n+\t\t\t       IRQF_SHARED,\n+\t\t\t       \"PS PCIe Root DMA Handler\", xdev);\n+\tif (err)\n+\t\tdev_err(xdev->dev, \"Couldn't request irq %d\\n\",\n+\t\t\txdev->platform_irq_vec);\n+\n+\treturn err;\n+}\n+\n+/**\n+ * irq_probe - Checks which interrupt types can be serviced by hardware\n+ *\n+ * @xdev: Driver specific data for device\n+ *\n+ * Return: Number of interrupt vectors when successful or -ENOSPC on failure\n+ */\n+static int irq_probe(struct xlnx_pcie_dma_device *xdev)\n+{\n+\tstruct pci_dev *pdev;\n+\n+\tpdev = xdev->pci_dev;\n+\n+\txdev->irq_vecs = pci_alloc_irq_vectors(pdev, 1, xdev->num_channels,\n+\t\t\t\t\t       PCI_IRQ_ALL_TYPES);\n+\treturn xdev->irq_vecs;\n+}\n+\n+/**\n+ * ps_pcie_check_intr_status - Checks channel interrupt status\n+ *\n+ * @chan: Pointer to the PS PCIe DMA channel structure\n+ *\n+ * Return: 0 if interrupt is pending on channel\n+ *\t\t   -1 if no interrupt is pending on channel\n+ */\n+static int ps_pcie_check_intr_status(struct ps_pcie_dma_chan *chan)\n+{\n+\tint err = -1;\n+\tu32 status;\n+\n+\tif (chan->state != CHANNEL_AVAILABLE)\n+\t\treturn err;\n+\n+\tstatus = ps_pcie_dma_read(chan, chan->intr_status_offset);\n+\n+\tif (status & DMA_INTSTATUS_SGLINTR_BIT) {\n+\t\tif (chan->primary_desc_cleanup) {\n+\t\t\tqueue_work(chan->primary_desc_cleanup,\n+\t\t\t\t   &chan->handle_primary_desc_cleanup);\n+\t\t}\n+\t\t/* Clearing Persistent bit */\n+\t\tps_pcie_dma_set_mask(chan, chan->intr_status_offset,\n+\t\t\t\t     DMA_INTSTATUS_SGLINTR_BIT);\n+\t\terr = 0;\n+\t}\n+\n+\tif (status & DMA_INTSTATUS_SWINTR_BIT) {\n+\t\tif (chan->sw_intrs_wrkq)\n+\t\t\tqueue_work(chan->sw_intrs_wrkq, &chan->handle_sw_intrs);\n+\t\t/* Clearing Persistent bit */\n+\t\tps_pcie_dma_set_mask(chan, chan->intr_status_offset,\n+\t\t\t\t     DMA_INTSTATUS_SWINTR_BIT);\n+\t\terr = 0;\n+\t}\n+\n+\tif (status & DMA_INTSTATUS_DMAERR_BIT) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"DMA Channel %d ControlStatus Reg: 0x%x\",\n+\t\t\tchan->channel_number, status);\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Chn %d SrcQLmt = %d SrcQSz = %d SrcQNxt = %d\",\n+\t\t\tchan->channel_number,\n+\t\t\tchan->chan_base->src_q_limit,\n+\t\t\tchan->chan_base->src_q_size,\n+\t\t\tchan->chan_base->src_q_next);\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Chn %d SrcStaLmt = %d SrcStaSz = %d SrcStaNxt = %d\",\n+\t\t\tchan->channel_number,\n+\t\t\tchan->chan_base->stas_q_limit,\n+\t\t\tchan->chan_base->stas_q_size,\n+\t\t\tchan->chan_base->stas_q_next);\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Chn %d DstQLmt = %d DstQSz = %d DstQNxt = 
%d\",\n+\t\t\tchan->channel_number,\n+\t\t\tchan->chan_base->dst_q_limit,\n+\t\t\tchan->chan_base->dst_q_size,\n+\t\t\tchan->chan_base->dst_q_next);\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Chan %d DstStaLmt = %d DstStaSz = %d DstStaNxt = %d\",\n+\t\t\tchan->channel_number,\n+\t\t\tchan->chan_base->stad_q_limit,\n+\t\t\tchan->chan_base->stad_q_size,\n+\t\t\tchan->chan_base->stad_q_next);\n+\t\t/* Clearing Persistent bit */\n+\t\tps_pcie_dma_set_mask(chan, chan->intr_status_offset,\n+\t\t\t\t     DMA_INTSTATUS_DMAERR_BIT);\n+\n+\t\thandle_error(chan);\n+\n+\t\terr = 0;\n+\t}\n+\n+\treturn err;\n+}\n+\n+static int init_hw_components(struct ps_pcie_dma_chan *chan)\n+{\n+\tif (chan->psrc_sgl_bd && chan->psrc_sta_bd) {\n+\t\t/*  Programming SourceQ and StatusQ bd addresses */\n+\t\tchan->chan_base->src_q_next = 0;\n+\t\tchan->chan_base->src_q_high =\n+\t\t\tupper_32_bits(chan->src_sgl_bd_pa);\n+\t\tchan->chan_base->src_q_size = chan->total_descriptors;\n+\t\tchan->chan_base->src_q_limit = 0;\n+\t\tif (chan->xdev->is_rootdma) {\n+\t\t\tchan->chan_base->src_q_low = ROOTDMA_Q_READ_ATTRIBUTE\n+\t\t\t\t\t\t     | DMA_QPTRLO_QLOCAXI_BIT;\n+\t\t} else {\n+\t\t\tchan->chan_base->src_q_low = 0;\n+\t\t}\n+\t\tchan->chan_base->src_q_low |=\n+\t\t\t(lower_32_bits((chan->src_sgl_bd_pa))\n+\t\t\t & ~(DMA_SRC_Q_LOW_BIT_SHIFT))\n+\t\t\t| DMA_QPTRLO_Q_ENABLE_BIT;\n+\n+\t\tchan->chan_base->stas_q_next = 0;\n+\t\tchan->chan_base->stas_q_high =\n+\t\t\tupper_32_bits(chan->src_sta_bd_pa);\n+\t\tchan->chan_base->stas_q_size = chan->total_descriptors;\n+\t\tchan->chan_base->stas_q_limit = chan->total_descriptors - 1;\n+\t\tif (chan->xdev->is_rootdma) {\n+\t\t\tchan->chan_base->stas_q_low = ROOTDMA_Q_READ_ATTRIBUTE\n+\t\t\t\t\t\t      | DMA_QPTRLO_QLOCAXI_BIT;\n+\t\t} else {\n+\t\t\tchan->chan_base->stas_q_low = 0;\n+\t\t}\n+\t\tchan->chan_base->stas_q_low |=\n+\t\t\t(lower_32_bits(chan->src_sta_bd_pa)\n+\t\t\t & ~(DMA_SRC_Q_LOW_BIT_SHIFT))\n+\t\t\t| DMA_QPTRLO_Q_ENABLE_BIT;\n+\t}\n+\n+\tif (chan->pdst_sgl_bd && chan->pdst_sta_bd) {\n+\t\t/*  Programming DestinationQ and StatusQ buffer descriptors */\n+\t\tchan->chan_base->dst_q_next = 0;\n+\t\tchan->chan_base->dst_q_high =\n+\t\t\tupper_32_bits(chan->dst_sgl_bd_pa);\n+\t\tchan->chan_base->dst_q_size = chan->total_descriptors;\n+\t\tchan->chan_base->dst_q_limit = 0;\n+\t\tif (chan->xdev->is_rootdma) {\n+\t\t\tchan->chan_base->dst_q_low = ROOTDMA_Q_READ_ATTRIBUTE\n+\t\t\t\t\t\t     | DMA_QPTRLO_QLOCAXI_BIT;\n+\t\t} else {\n+\t\t\tchan->chan_base->dst_q_low = 0;\n+\t\t}\n+\t\tchan->chan_base->dst_q_low |=\n+\t\t\t(lower_32_bits(chan->dst_sgl_bd_pa)\n+\t\t\t & ~(DMA_SRC_Q_LOW_BIT_SHIFT))\n+\t\t\t| DMA_QPTRLO_Q_ENABLE_BIT;\n+\n+\t\tchan->chan_base->stad_q_next = 0;\n+\t\tchan->chan_base->stad_q_high =\n+\t\t\tupper_32_bits(chan->dst_sta_bd_pa);\n+\t\tchan->chan_base->stad_q_size = chan->total_descriptors;\n+\t\tchan->chan_base->stad_q_limit = chan->total_descriptors - 1;\n+\t\tif (chan->xdev->is_rootdma) {\n+\t\t\tchan->chan_base->stad_q_low = ROOTDMA_Q_READ_ATTRIBUTE\n+\t\t\t\t\t\t      | DMA_QPTRLO_QLOCAXI_BIT;\n+\t\t} else {\n+\t\t\tchan->chan_base->stad_q_low = 0;\n+\t\t}\n+\t\tchan->chan_base->stad_q_low |=\n+\t\t\t(lower_32_bits(chan->dst_sta_bd_pa)\n+\t\t\t & ~(DMA_SRC_Q_LOW_BIT_SHIFT))\n+\t\t\t| DMA_QPTRLO_Q_ENABLE_BIT;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static void update_channel_read_attribute(struct ps_pcie_dma_chan *chan)\n+{\n+\tif (chan->xdev->is_rootdma) {\n+\t\t/* For Root DMA, Host Memory and Buffer Descriptors\n+\t\t * will be on AXI side\n+\t\t */\n+\t\tif 
(chan->srcq_buffer_location == BUFFER_LOC_PCI) {\n+\t\t\tchan->read_attribute = (AXI_ATTRIBUTE <<\n+\t\t\t\t\t\tSRC_CTL_ATTRIB_BIT_SHIFT) |\n+\t\t\t\t\t\tSOURCE_CONTROL_BD_LOC_AXI;\n+\t\t} else if (chan->srcq_buffer_location == BUFFER_LOC_AXI) {\n+\t\t\tchan->read_attribute = AXI_ATTRIBUTE <<\n+\t\t\t\t\t       SRC_CTL_ATTRIB_BIT_SHIFT;\n+\t\t}\n+\t} else {\n+\t\tif (chan->srcq_buffer_location == BUFFER_LOC_PCI) {\n+\t\t\tchan->read_attribute = PCI_ATTRIBUTE <<\n+\t\t\t\t\t       SRC_CTL_ATTRIB_BIT_SHIFT;\n+\t\t} else if (chan->srcq_buffer_location == BUFFER_LOC_AXI) {\n+\t\t\tchan->read_attribute = (AXI_ATTRIBUTE <<\n+\t\t\t\t\t\tSRC_CTL_ATTRIB_BIT_SHIFT) |\n+\t\t\t\t\t\tSOURCE_CONTROL_BD_LOC_AXI;\n+\t\t}\n+\t}\n+}\n+\n+static void update_channel_write_attribute(struct ps_pcie_dma_chan *chan)\n+{\n+\tif (chan->xdev->is_rootdma) {\n+\t\t/* For Root DMA, Host Memory and Buffer Descriptors\n+\t\t * will be on AXI side\n+\t\t */\n+\t\tif (chan->dstq_buffer_location == BUFFER_LOC_PCI) {\n+\t\t\tchan->write_attribute = (AXI_ATTRIBUTE <<\n+\t\t\t\t\t\t SRC_CTL_ATTRIB_BIT_SHIFT) |\n+\t\t\t\t\t\tSOURCE_CONTROL_BD_LOC_AXI;\n+\t\t} else if (chan->srcq_buffer_location == BUFFER_LOC_AXI) {\n+\t\t\tchan->write_attribute = AXI_ATTRIBUTE <<\n+\t\t\t\t\t\tSRC_CTL_ATTRIB_BIT_SHIFT;\n+\t\t}\n+\t} else {\n+\t\tif (chan->dstq_buffer_location == BUFFER_LOC_PCI) {\n+\t\t\tchan->write_attribute = PCI_ATTRIBUTE <<\n+\t\t\t\t\t\tSRC_CTL_ATTRIB_BIT_SHIFT;\n+\t\t} else if (chan->dstq_buffer_location == BUFFER_LOC_AXI) {\n+\t\t\tchan->write_attribute = (AXI_ATTRIBUTE <<\n+\t\t\t\t\t\t SRC_CTL_ATTRIB_BIT_SHIFT) |\n+\t\t\t\t\t\tSOURCE_CONTROL_BD_LOC_AXI;\n+\t\t}\n+\t}\n+\tchan->write_attribute |= SOURCE_CONTROL_BACK_TO_BACK_PACK_BIT;\n+}\n+\n+static int init_sw_components(struct ps_pcie_dma_chan *chan)\n+{\n+\tif ((chan->ppkt_ctx_srcq) && (chan->psrc_sgl_bd) &&\n+\t    (chan->psrc_sta_bd)) {\n+\t\tmemset(chan->ppkt_ctx_srcq, 0,\n+\t\t       sizeof(struct PACKET_TRANSFER_PARAMS)\n+\t\t       * chan->total_descriptors);\n+\n+\t\tmemset(chan->psrc_sgl_bd, 0,\n+\t\t       sizeof(struct SOURCE_DMA_DESCRIPTOR)\n+\t\t       * chan->total_descriptors);\n+\n+\t\tmemset(chan->psrc_sta_bd, 0,\n+\t\t       sizeof(struct STATUS_DMA_DESCRIPTOR)\n+\t\t       * chan->total_descriptors);\n+\n+\t\tchan->src_avail_descriptors = chan->total_descriptors;\n+\n+\t\tchan->src_sgl_freeidx = 0;\n+\t\tchan->src_staprobe_idx = 0;\n+\t\tchan->src_sta_hw_probe_idx = chan->total_descriptors - 1;\n+\t\tchan->idx_ctx_srcq_head = 0;\n+\t\tchan->idx_ctx_srcq_tail = 0;\n+\t}\n+\n+\tif ((chan->ppkt_ctx_dstq) && (chan->pdst_sgl_bd) &&\n+\t    (chan->pdst_sta_bd)) {\n+\t\tmemset(chan->ppkt_ctx_dstq, 0,\n+\t\t       sizeof(struct PACKET_TRANSFER_PARAMS)\n+\t\t       * chan->total_descriptors);\n+\n+\t\tmemset(chan->pdst_sgl_bd, 0,\n+\t\t       sizeof(struct DEST_DMA_DESCRIPTOR)\n+\t\t       * chan->total_descriptors);\n+\n+\t\tmemset(chan->pdst_sta_bd, 0,\n+\t\t       sizeof(struct STATUS_DMA_DESCRIPTOR)\n+\t\t       * chan->total_descriptors);\n+\n+\t\tchan->dst_avail_descriptors = chan->total_descriptors;\n+\n+\t\tchan->dst_sgl_freeidx = 0;\n+\t\tchan->dst_staprobe_idx = 0;\n+\t\tchan->dst_sta_hw_probe_idx = chan->total_descriptors - 1;\n+\t\tchan->idx_ctx_dstq_head = 0;\n+\t\tchan->idx_ctx_dstq_tail = 0;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * ps_pcie_chan_reset - Resets channel, by programming relevant registers\n+ *\n+ * @chan: PS PCIe DMA channel information holder\n+ * Return: void\n+ */\n+static void ps_pcie_chan_reset(struct ps_pcie_dma_chan 
*chan)\n+{\n+\t/* Enable channel reset */\n+\tps_pcie_dma_set_mask(chan, DMA_CNTRL_REG_OFFSET, DMA_CNTRL_RST_BIT);\n+\n+\tmdelay(10);\n+\n+\t/* Disable channel reset */\n+\tps_pcie_dma_clr_mask(chan, DMA_CNTRL_REG_OFFSET, DMA_CNTRL_RST_BIT);\n+}\n+\n+/**\n+ * poll_completed_transactions - Function invoked by poll timer\n+ *\n+ * @arg: Pointer to PS PCIe DMA channel information\n+ * Return: void\n+ */\n+static void poll_completed_transactions(unsigned long arg)\n+{\n+\tstruct ps_pcie_dma_chan *chan = (struct ps_pcie_dma_chan *)arg;\n+\n+\tif (chan->state == CHANNEL_AVAILABLE) {\n+\t\tqueue_work(chan->primary_desc_cleanup,\n+\t\t\t   &chan->handle_primary_desc_cleanup);\n+\t}\n+\n+\tmod_timer(&chan->poll_timer, jiffies + chan->poll_timer_freq);\n+}\n+\n+static bool check_descriptors_for_two_queues(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t\t     struct ps_pcie_tx_segment *seg)\n+{\n+\tif (seg->tx_elements.src_sgl) {\n+\t\tif (chan->src_avail_descriptors >=\n+\t\t    seg->tx_elements.srcq_num_elemets) {\n+\t\t\treturn true;\n+\t\t}\n+\t} else if (seg->tx_elements.dst_sgl) {\n+\t\tif (chan->dst_avail_descriptors >=\n+\t\t    seg->tx_elements.dstq_num_elemets) {\n+\t\t\treturn true;\n+\t\t}\n+\t}\n+\n+\treturn false;\n+}\n+\n+static bool check_descriptors_for_all_queues(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t\t     struct ps_pcie_tx_segment *seg)\n+{\n+\tif ((chan->src_avail_descriptors >=\n+\t\tseg->tx_elements.srcq_num_elemets) &&\n+\t    (chan->dst_avail_descriptors >=\n+\t\tseg->tx_elements.dstq_num_elemets)) {\n+\t\treturn true;\n+\t}\n+\n+\treturn false;\n+}\n+\n+static bool check_descriptor_availability(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t\t  struct ps_pcie_tx_segment *seg)\n+{\n+\tif (chan->num_queues == DEFAULT_DMA_QUEUES)\n+\t\treturn check_descriptors_for_all_queues(chan, seg);\n+\telse\n+\t\treturn check_descriptors_for_two_queues(chan, seg);\n+}\n+\n+static void handle_error(struct ps_pcie_dma_chan *chan)\n+{\n+\tif (chan->state != CHANNEL_AVAILABLE)\n+\t\treturn;\n+\n+\tspin_lock(&chan->channel_lock);\n+\tchan->state = CHANNEL_ERROR;\n+\tspin_unlock(&chan->channel_lock);\n+\n+\tif (chan->maintenance_workq)\n+\t\tqueue_work(chan->maintenance_workq, &chan->handle_chan_reset);\n+}\n+\n+static void xlnx_ps_pcie_update_srcq(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t     struct ps_pcie_tx_segment *seg)\n+{\n+\tstruct SOURCE_DMA_DESCRIPTOR *pdesc;\n+\tstruct PACKET_TRANSFER_PARAMS *pkt_ctx = NULL;\n+\tstruct scatterlist *sgl_ptr;\n+\tunsigned int i;\n+\n+\tpkt_ctx = chan->ppkt_ctx_srcq + chan->idx_ctx_srcq_head;\n+\tif (pkt_ctx->availability_status == IN_USE) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"src pkt context not avail for channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\thandle_error(chan);\n+\t\treturn;\n+\t}\n+\n+\tpkt_ctx->availability_status = IN_USE;\n+\tpkt_ctx->sgl = seg->tx_elements.src_sgl;\n+\n+\tif (chan->srcq_buffer_location == BUFFER_LOC_PCI)\n+\t\tpkt_ctx->seg = seg;\n+\n+\t/*  Get the address of the next available DMA Descriptor */\n+\tpdesc = chan->psrc_sgl_bd + chan->src_sgl_freeidx;\n+\tpkt_ctx->idx_sop = chan->src_sgl_freeidx;\n+\n+\t/* Build transactions using information in the scatter gather list */\n+\tfor_each_sg(seg->tx_elements.src_sgl, sgl_ptr,\n+\t\t    seg->tx_elements.srcq_num_elemets, i) {\n+\t\tif (chan->xdev->dma_buf_ext_addr) {\n+\t\t\tpdesc->system_address =\n+\t\t\t\t(u64)sg_dma_address(sgl_ptr);\n+\t\t} else {\n+\t\t\tpdesc->system_address =\n+\t\t\t\t(u32)sg_dma_address(sgl_ptr);\n+\t\t}\n+\n+\t\tpdesc->control_byte_count = 
(sg_dma_len(sgl_ptr) &\n+\t\t\t\t\t    SOURCE_CONTROL_BD_BYTE_COUNT_MASK) |\n+\t\t\t\t\t    chan->read_attribute;\n+\t\tif (pkt_ctx->seg)\n+\t\t\tpkt_ctx->requested_bytes += sg_dma_len(sgl_ptr);\n+\n+\t\tpdesc->user_handle = chan->idx_ctx_srcq_head;\n+\t\tpdesc->user_id = DEFAULT_UID;\n+\t\t/* Check if this is last descriptor */\n+\t\tif (i == (seg->tx_elements.srcq_num_elemets - 1)) {\n+\t\t\tpkt_ctx->idx_eop = chan->src_sgl_freeidx;\n+\t\t\tpdesc->control_byte_count = pdesc->control_byte_count |\n+\t\t\t\t\t\tSOURCE_CONTROL_BD_EOP_BIT |\n+\t\t\t\t\t\tSOURCE_CONTROL_BD_INTR_BIT;\n+\t\t}\n+\t\tchan->src_sgl_freeidx++;\n+\t\tif (chan->src_sgl_freeidx == chan->total_descriptors)\n+\t\t\tchan->src_sgl_freeidx = 0;\n+\t\tpdesc = chan->psrc_sgl_bd + chan->src_sgl_freeidx;\n+\t\tspin_lock(&chan->src_desc_lock);\n+\t\tchan->src_avail_descriptors--;\n+\t\tspin_unlock(&chan->src_desc_lock);\n+\t}\n+\n+\tchan->chan_base->src_q_limit = chan->src_sgl_freeidx;\n+\tchan->idx_ctx_srcq_head++;\n+\tif (chan->idx_ctx_srcq_head == chan->total_descriptors)\n+\t\tchan->idx_ctx_srcq_head = 0;\n+}\n+\n+static void xlnx_ps_pcie_update_dstq(struct ps_pcie_dma_chan *chan,\n+\t\t\t\t     struct ps_pcie_tx_segment *seg)\n+{\n+\tstruct DEST_DMA_DESCRIPTOR *pdesc;\n+\tstruct PACKET_TRANSFER_PARAMS *pkt_ctx = NULL;\n+\tstruct scatterlist *sgl_ptr;\n+\tunsigned int i;\n+\n+\tpkt_ctx = chan->ppkt_ctx_dstq + chan->idx_ctx_dstq_head;\n+\tif (pkt_ctx->availability_status == IN_USE) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"dst pkt context not avail for channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\thandle_error(chan);\n+\n+\t\treturn;\n+\t}\n+\n+\tpkt_ctx->availability_status = IN_USE;\n+\tpkt_ctx->sgl = seg->tx_elements.dst_sgl;\n+\n+\tif (chan->dstq_buffer_location == BUFFER_LOC_PCI)\n+\t\tpkt_ctx->seg = seg;\n+\n+\tpdesc = chan->pdst_sgl_bd + chan->dst_sgl_freeidx;\n+\tpkt_ctx->idx_sop = chan->dst_sgl_freeidx;\n+\n+\t/* Build transactions using information in the scatter gather list */\n+\tfor_each_sg(seg->tx_elements.dst_sgl, sgl_ptr,\n+\t\t    seg->tx_elements.dstq_num_elemets, i) {\n+\t\tif (chan->xdev->dma_buf_ext_addr) {\n+\t\t\tpdesc->system_address =\n+\t\t\t\t(u64)sg_dma_address(sgl_ptr);\n+\t\t} else {\n+\t\t\tpdesc->system_address =\n+\t\t\t\t(u32)sg_dma_address(sgl_ptr);\n+\t\t}\n+\n+\t\tpdesc->control_byte_count = (sg_dma_len(sgl_ptr) &\n+\t\t\t\t\tSOURCE_CONTROL_BD_BYTE_COUNT_MASK) |\n+\t\t\t\t\t\tchan->write_attribute;\n+\n+\t\tif (pkt_ctx->seg)\n+\t\t\tpkt_ctx->requested_bytes += sg_dma_len(sgl_ptr);\n+\n+\t\tpdesc->user_handle = chan->idx_ctx_dstq_head;\n+\t\t/* Check if this is last descriptor */\n+\t\tif (i == (seg->tx_elements.dstq_num_elemets - 1))\n+\t\t\tpkt_ctx->idx_eop = chan->dst_sgl_freeidx;\n+\t\tchan->dst_sgl_freeidx++;\n+\t\tif (chan->dst_sgl_freeidx == chan->total_descriptors)\n+\t\t\tchan->dst_sgl_freeidx = 0;\n+\t\tpdesc = chan->pdst_sgl_bd + chan->dst_sgl_freeidx;\n+\t\tspin_lock(&chan->dst_desc_lock);\n+\t\tchan->dst_avail_descriptors--;\n+\t\tspin_unlock(&chan->dst_desc_lock);\n+\t}\n+\n+\tchan->chan_base->dst_q_limit = chan->dst_sgl_freeidx;\n+\tchan->idx_ctx_dstq_head++;\n+\tif (chan->idx_ctx_dstq_head == chan->total_descriptors)\n+\t\tchan->idx_ctx_dstq_head = 0;\n+}\n+\n+static void ps_pcie_chan_program_work(struct work_struct *work)\n+{\n+\tstruct ps_pcie_dma_chan *chan =\n+\t\t(struct ps_pcie_dma_chan *)container_of(work,\n+\t\t\t\tstruct ps_pcie_dma_chan,\n+\t\t\t\thandle_chan_programming);\n+\tstruct ps_pcie_tx_segment *seg = NULL;\n+\n+\twhile (chan->state == 
CHANNEL_AVAILABLE) {\n+\t\tspin_lock(&chan->active_list_lock);\n+\t\tseg = list_first_entry_or_null(&chan->active_list,\n+\t\t\t\t\t       struct ps_pcie_tx_segment, node);\n+\t\tspin_unlock(&chan->active_list_lock);\n+\n+\t\tif (!seg)\n+\t\t\tbreak;\n+\n+\t\tif (check_descriptor_availability(chan, seg) == false)\n+\t\t\tbreak;\n+\n+\t\tspin_lock(&chan->active_list_lock);\n+\t\tlist_del(&seg->node);\n+\t\tspin_unlock(&chan->active_list_lock);\n+\n+\t\tif (seg->tx_elements.src_sgl)\n+\t\t\txlnx_ps_pcie_update_srcq(chan, seg);\n+\n+\t\tif (seg->tx_elements.dst_sgl)\n+\t\t\txlnx_ps_pcie_update_dstq(chan, seg);\n+\t}\n+}\n+\n+/**\n+ * dst_cleanup_work - Goes through all completed elements in status Q\n+ * and invokes callbacks for the concerned DMA transaction.\n+ *\n+ * @work: Work associated with the task\n+ *\n+ * Return: void\n+ */\n+static void dst_cleanup_work(struct work_struct *work)\n+{\n+\tstruct ps_pcie_dma_chan *chan =\n+\t\t(struct ps_pcie_dma_chan *)container_of(work,\n+\t\t\tstruct ps_pcie_dma_chan, handle_dstq_desc_cleanup);\n+\n+\tstruct STATUS_DMA_DESCRIPTOR *psta_bd;\n+\tstruct DEST_DMA_DESCRIPTOR *pdst_bd;\n+\tstruct PACKET_TRANSFER_PARAMS *ppkt_ctx;\n+\tstruct dmaengine_result rslt;\n+\tu32 completed_bytes;\n+\tu32 dstq_desc_idx;\n+\n+\tpsta_bd = chan->pdst_sta_bd + chan->dst_staprobe_idx;\n+\n+\twhile (psta_bd->status_flag_byte_count & STA_BD_COMPLETED_BIT) {\n+\t\tif (psta_bd->status_flag_byte_count &\n+\t\t\t\tSTA_BD_DESTINATION_ERROR_BIT) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Dst Sts Elmnt %d chan %d has Destination Err\",\n+\t\t\t\tchan->dst_staprobe_idx + 1,\n+\t\t\t\tchan->channel_number);\n+\t\t\thandle_error(chan);\n+\t\t\tbreak;\n+\t\t}\n+\t\tif (psta_bd->status_flag_byte_count & STA_BD_SOURCE_ERROR_BIT) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Dst Sts Elmnt %d chan %d has Source Error\",\n+\t\t\t\tchan->dst_staprobe_idx + 1,\n+\t\t\t\tchan->channel_number);\n+\t\t\thandle_error(chan);\n+\t\t\tbreak;\n+\t\t}\n+\t\tif (psta_bd->status_flag_byte_count &\n+\t\t\t\tSTA_BD_INTERNAL_ERROR_BIT) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Dst Sts Elmnt %d chan %d has Internal Error\",\n+\t\t\t\tchan->dst_staprobe_idx + 1,\n+\t\t\t\tchan->channel_number);\n+\t\t\thandle_error(chan);\n+\t\t\tbreak;\n+\t\t}\n+\t\t/* we are using 64 bit USER field. 
*/\n+\t\tif ((psta_bd->status_flag_byte_count &\n+\t\t\t\t\tSTA_BD_UPPER_STATUS_NONZERO_BIT) == 0) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Dst Sts Elmnt %d for chan %d has NON ZERO\",\n+\t\t\t\tchan->dst_staprobe_idx + 1,\n+\t\t\t\tchan->channel_number);\n+\t\t\thandle_error(chan);\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tchan->idx_ctx_dstq_tail = psta_bd->user_handle;\n+\t\tppkt_ctx = chan->ppkt_ctx_dstq + chan->idx_ctx_dstq_tail;\n+\t\tcompleted_bytes = (psta_bd->status_flag_byte_count &\n+\t\t\t\t\tSTA_BD_BYTE_COUNT_MASK) >>\n+\t\t\t\t\t\tSTA_BD_BYTE_COUNT_SHIFT;\n+\n+\t\tmemset(psta_bd, 0, sizeof(struct STATUS_DMA_DESCRIPTOR));\n+\n+\t\tchan->dst_staprobe_idx++;\n+\n+\t\tif (chan->dst_staprobe_idx == chan->total_descriptors)\n+\t\t\tchan->dst_staprobe_idx = 0;\n+\n+\t\tchan->dst_sta_hw_probe_idx++;\n+\n+\t\tif (chan->dst_sta_hw_probe_idx == chan->total_descriptors)\n+\t\t\tchan->dst_sta_hw_probe_idx = 0;\n+\n+\t\tchan->chan_base->stad_q_limit = chan->dst_sta_hw_probe_idx;\n+\n+\t\tpsta_bd = chan->pdst_sta_bd + chan->dst_staprobe_idx;\n+\n+\t\tdstq_desc_idx = ppkt_ctx->idx_sop;\n+\n+\t\tdo {\n+\t\t\tpdst_bd = chan->pdst_sgl_bd + dstq_desc_idx;\n+\t\t\tmemset(pdst_bd, 0,\n+\t\t\t       sizeof(struct DEST_DMA_DESCRIPTOR));\n+\n+\t\t\tspin_lock(&chan->dst_desc_lock);\n+\t\t\tchan->dst_avail_descriptors++;\n+\t\t\tspin_unlock(&chan->dst_desc_lock);\n+\n+\t\t\tif (dstq_desc_idx == ppkt_ctx->idx_eop)\n+\t\t\t\tbreak;\n+\n+\t\t\tdstq_desc_idx++;\n+\n+\t\t\tif (dstq_desc_idx == chan->total_descriptors)\n+\t\t\t\tdstq_desc_idx = 0;\n+\n+\t\t} while (1);\n+\n+\t\t/* Invoking callback */\n+\t\tif (ppkt_ctx->seg) {\n+\t\t\tspin_lock(&chan->cookie_lock);\n+\t\t\tdma_cookie_complete(&ppkt_ctx->seg->async_tx);\n+\t\t\tspin_unlock(&chan->cookie_lock);\n+\t\t\trslt.result = DMA_TRANS_NOERROR;\n+\t\t\trslt.residue = ppkt_ctx->requested_bytes -\n+\t\t\t\t\tcompleted_bytes;\n+\t\t\tdmaengine_desc_get_callback_invoke(&ppkt_ctx->seg->async_tx,\n+\t\t\t\t\t\t\t   &rslt);\n+\t\t\tmempool_free(ppkt_ctx->seg, chan->transactions_pool);\n+\t\t}\n+\t\tmemset(ppkt_ctx, 0, sizeof(struct PACKET_TRANSFER_PARAMS));\n+\t}\n+\n+\tcomplete(&chan->dstq_work_complete);\n+}\n+\n+/**\n+ * src_cleanup_work - Goes through all completed elements in status Q and\n+ * invokes callbacks for the concerned DMA transaction.\n+ *\n+ * @work: Work associated with the task\n+ *\n+ * Return: void\n+ */\n+static void src_cleanup_work(struct work_struct *work)\n+{\n+\tstruct ps_pcie_dma_chan *chan =\n+\t\t(struct ps_pcie_dma_chan *)container_of(\n+\t\twork, struct ps_pcie_dma_chan, handle_srcq_desc_cleanup);\n+\n+\tstruct STATUS_DMA_DESCRIPTOR *psta_bd;\n+\tstruct SOURCE_DMA_DESCRIPTOR *psrc_bd;\n+\tstruct PACKET_TRANSFER_PARAMS *ppkt_ctx;\n+\tstruct dmaengine_result rslt;\n+\tu32 completed_bytes;\n+\tu32 srcq_desc_idx;\n+\n+\tpsta_bd = chan->psrc_sta_bd + chan->src_staprobe_idx;\n+\n+\twhile (psta_bd->status_flag_byte_count & STA_BD_COMPLETED_BIT) {\n+\t\tif (psta_bd->status_flag_byte_count &\n+\t\t\t\tSTA_BD_DESTINATION_ERROR_BIT) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Src Sts Elmnt %d chan %d has Dst Error\",\n+\t\t\t\tchan->src_staprobe_idx + 1,\n+\t\t\t\tchan->channel_number);\n+\t\t\thandle_error(chan);\n+\t\t\tbreak;\n+\t\t}\n+\t\tif (psta_bd->status_flag_byte_count & STA_BD_SOURCE_ERROR_BIT) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Src Sts Elmnt %d chan %d has Source Error\",\n+\t\t\t\tchan->src_staprobe_idx + 1,\n+\t\t\t\tchan->channel_number);\n+\t\t\thandle_error(chan);\n+\t\t\tbreak;\n+\t\t}\n+\t\tif (psta_bd->status_flag_byte_count 
&\n+\t\t\t\tSTA_BD_INTERNAL_ERROR_BIT) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Src Sts Elmnt %d chan %d has Internal Error\",\n+\t\t\t\tchan->src_staprobe_idx + 1,\n+\t\t\t\tchan->channel_number);\n+\t\t\thandle_error(chan);\n+\t\t\tbreak;\n+\t\t}\n+\t\tif ((psta_bd->status_flag_byte_count\n+\t\t\t\t& STA_BD_UPPER_STATUS_NONZERO_BIT) == 0) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Src Sts Elmnt %d chan %d has NonZero\",\n+\t\t\t\tchan->src_staprobe_idx + 1,\n+\t\t\t\tchan->channel_number);\n+\t\t\thandle_error(chan);\n+\t\t\tbreak;\n+\t\t}\n+\t\tchan->idx_ctx_srcq_tail = psta_bd->user_handle;\n+\t\tppkt_ctx = chan->ppkt_ctx_srcq + chan->idx_ctx_srcq_tail;\n+\t\tcompleted_bytes = (psta_bd->status_flag_byte_count\n+\t\t\t\t\t& STA_BD_BYTE_COUNT_MASK) >>\n+\t\t\t\t\t\tSTA_BD_BYTE_COUNT_SHIFT;\n+\n+\t\tmemset(psta_bd, 0, sizeof(struct STATUS_DMA_DESCRIPTOR));\n+\n+\t\tchan->src_staprobe_idx++;\n+\n+\t\tif (chan->src_staprobe_idx == chan->total_descriptors)\n+\t\t\tchan->src_staprobe_idx = 0;\n+\n+\t\tchan->src_sta_hw_probe_idx++;\n+\n+\t\tif (chan->src_sta_hw_probe_idx == chan->total_descriptors)\n+\t\t\tchan->src_sta_hw_probe_idx = 0;\n+\n+\t\tchan->chan_base->stas_q_limit = chan->src_sta_hw_probe_idx;\n+\n+\t\tpsta_bd = chan->psrc_sta_bd + chan->src_staprobe_idx;\n+\n+\t\tsrcq_desc_idx = ppkt_ctx->idx_sop;\n+\n+\t\tdo {\n+\t\t\tpsrc_bd = chan->psrc_sgl_bd + srcq_desc_idx;\n+\t\t\tmemset(psrc_bd, 0,\n+\t\t\t       sizeof(struct SOURCE_DMA_DESCRIPTOR));\n+\n+\t\t\tspin_lock(&chan->src_desc_lock);\n+\t\t\tchan->src_avail_descriptors++;\n+\t\t\tspin_unlock(&chan->src_desc_lock);\n+\n+\t\t\tif (srcq_desc_idx == ppkt_ctx->idx_eop)\n+\t\t\t\tbreak;\n+\t\t\tsrcq_desc_idx++;\n+\n+\t\t\tif (srcq_desc_idx == chan->total_descriptors)\n+\t\t\t\tsrcq_desc_idx = 0;\n+\n+\t\t} while (1);\n+\n+\t\t/* Invoking callback */\n+\t\tif (ppkt_ctx->seg) {\n+\t\t\tspin_lock(&chan->cookie_lock);\n+\t\t\tdma_cookie_complete(&ppkt_ctx->seg->async_tx);\n+\t\t\tspin_unlock(&chan->cookie_lock);\n+\t\t\trslt.result = DMA_TRANS_NOERROR;\n+\t\t\trslt.residue = ppkt_ctx->requested_bytes -\n+\t\t\t\t\tcompleted_bytes;\n+\t\t\tdmaengine_desc_get_callback_invoke(&ppkt_ctx->seg->async_tx,\n+\t\t\t\t\t\t\t   &rslt);\n+\t\t\tmempool_free(ppkt_ctx->seg, chan->transactions_pool);\n+\t\t}\n+\t\tmemset(ppkt_ctx, 0, sizeof(struct PACKET_TRANSFER_PARAMS));\n+\t}\n+\n+\tcomplete(&chan->srcq_work_complete);\n+}\n+\n+/**\n+ * ps_pcie_chan_primary_work - Masks out interrupts, invokes source Q and\n+ * destination Q processing. Waits for source Q and destination Q processing\n+ * and re enables interrupts. 
Same work is invoked by timer if coalesce count\n+ * is greater than zero and interrupts are not invoked before the timeout period\n+ *\n+ * @work: Work associated with the task\n+ *\n+ * Return: void\n+ */\n+static void ps_pcie_chan_primary_work(struct work_struct *work)\n+{\n+\tstruct ps_pcie_dma_chan *chan =\n+\t\t(struct ps_pcie_dma_chan *)container_of(\n+\t\t\t\twork, struct ps_pcie_dma_chan,\n+\t\t\t\thandle_primary_desc_cleanup);\n+\n+\t/* Disable interrupts for Channel */\n+\tps_pcie_dma_clr_mask(chan, chan->intr_control_offset,\n+\t\t\t     DMA_INTCNTRL_ENABLINTR_BIT);\n+\n+\tif (chan->psrc_sgl_bd) {\n+\t\treinit_completion(&chan->srcq_work_complete);\n+\t\tif (chan->srcq_desc_cleanup)\n+\t\t\tqueue_work(chan->srcq_desc_cleanup,\n+\t\t\t\t   &chan->handle_srcq_desc_cleanup);\n+\t}\n+\tif (chan->pdst_sgl_bd) {\n+\t\treinit_completion(&chan->dstq_work_complete);\n+\t\tif (chan->dstq_desc_cleanup)\n+\t\t\tqueue_work(chan->dstq_desc_cleanup,\n+\t\t\t\t   &chan->handle_dstq_desc_cleanup);\n+\t}\n+\n+\tif (chan->psrc_sgl_bd)\n+\t\twait_for_completion_interruptible(&chan->srcq_work_complete);\n+\tif (chan->pdst_sgl_bd)\n+\t\twait_for_completion_interruptible(&chan->dstq_work_complete);\n+\n+\t/* Enable interrupts for channel */\n+\tps_pcie_dma_set_mask(chan, chan->intr_control_offset,\n+\t\t\t     DMA_INTCNTRL_ENABLINTR_BIT);\n+\n+\tif (chan->chan_programming) {\n+\t\tqueue_work(chan->chan_programming,\n+\t\t\t   &chan->handle_chan_programming);\n+\t}\n+\n+\tif (chan->coalesce_count > 0 && chan->poll_timer.function)\n+\t\tmod_timer(&chan->poll_timer, jiffies + chan->poll_timer_freq);\n+}\n+\n+static int read_rootdma_config(struct platform_device *platform_dev,\n+\t\t\t       struct xlnx_pcie_dma_device *xdev)\n+{\n+\tint err;\n+\tstruct resource *r;\n+\n+\terr = dma_set_mask(&platform_dev->dev, DMA_BIT_MASK(64));\n+\tif (err) {\n+\t\tdev_info(&platform_dev->dev, \"Cannot set 64 bit DMA mask\\n\");\n+\t\terr = dma_set_mask(&platform_dev->dev, DMA_BIT_MASK(32));\n+\t\tif (err) {\n+\t\t\tdev_err(&platform_dev->dev, \"DMA mask set error\\n\");\n+\t\t\treturn err;\n+\t\t}\n+\t}\n+\n+\terr = dma_set_coherent_mask(&platform_dev->dev, DMA_BIT_MASK(64));\n+\tif (err) {\n+\t\tdev_info(&platform_dev->dev, \"Cannot set 64 bit consistent DMA mask\\n\");\n+\t\terr = dma_set_coherent_mask(&platform_dev->dev,\n+\t\t\t\t\t    DMA_BIT_MASK(32));\n+\t\tif (err) {\n+\t\t\tdev_err(&platform_dev->dev, \"Cannot set consistent DMA mask\\n\");\n+\t\t\treturn err;\n+\t\t}\n+\t}\n+\n+\tr = platform_get_resource_byname(platform_dev, IORESOURCE_MEM,\n+\t\t\t\t\t \"ps_pcie_regbase\");\n+\tif (!r) {\n+\t\tdev_err(&platform_dev->dev,\n+\t\t\t\"Unable to find memory resource for root dma\\n\");\n+\t\treturn PTR_ERR(r);\n+\t}\n+\n+\txdev->reg_base = devm_ioremap_resource(&platform_dev->dev, r);\n+\tif (IS_ERR(xdev->reg_base)) {\n+\t\tdev_err(&platform_dev->dev, \"ioresource error for root dma\\n\");\n+\t\treturn PTR_ERR(xdev->reg_base);\n+\t}\n+\n+\txdev->platform_irq_vec =\n+\t\tplatform_get_irq_byname(platform_dev,\n+\t\t\t\t\t\"ps_pcie_rootdma_intr\");\n+\tif (xdev->platform_irq_vec < 0) {\n+\t\tdev_err(&platform_dev->dev,\n+\t\t\t\"Unable to get interrupt number for root dma\\n\");\n+\t\treturn xdev->platform_irq_vec;\n+\t}\n+\n+\terr = device_property_read_u16(&platform_dev->dev, \"dma_vendorid\",\n+\t\t\t\t       &xdev->rootdma_vendor);\n+\tif (err) {\n+\t\tdev_err(&platform_dev->dev,\n+\t\t\t\"Unable to find RootDMA PCI Vendor Id\\n\");\n+\t\treturn err;\n+\t}\n+\n+\terr = 
device_property_read_u16(&platform_dev->dev, \"dma_deviceid\",\n+\t\t\t\t       &xdev->rootdma_device);\n+\tif (err) {\n+\t\tdev_err(&platform_dev->dev,\n+\t\t\t\"Unable to find RootDMA PCI Device Id\\n\");\n+\t\treturn err;\n+\t}\n+\n+\txdev->common.dev = xdev->dev;\n+\n+\treturn 0;\n+}\n+\n+static int read_epdma_config(struct platform_device *platform_dev,\n+\t\t\t     struct xlnx_pcie_dma_device *xdev)\n+{\n+\tint err;\n+\tstruct pci_dev *pdev;\n+\tu16 i;\n+\tvoid __iomem * const *pci_iomap;\n+\tunsigned long pci_bar_length;\n+\n+\tpdev = *((struct pci_dev **)(platform_dev->dev.platform_data));\n+\txdev->pci_dev = pdev;\n+\n+\tfor (i = 0; i < MAX_BARS; i++) {\n+\t\tif (pci_resource_len(pdev, i) == 0)\n+\t\t\tcontinue;\n+\t\txdev->bar_mask = xdev->bar_mask | (1 << (i));\n+\t}\n+\n+\terr = pcim_iomap_regions(pdev, xdev->bar_mask, PLATFORM_DRIVER_NAME);\n+\tif (err) {\n+\t\tdev_err(&pdev->dev, \"Cannot request PCI regions, aborting\\n\");\n+\t\treturn err;\n+\t}\n+\n+\tpci_iomap = pcim_iomap_table(pdev);\n+\tif (!pci_iomap) {\n+\t\terr = -ENOMEM;\n+\t\treturn err;\n+\t}\n+\n+\tfor (i = 0; i < MAX_BARS; i++) {\n+\t\tpci_bar_length = pci_resource_len(pdev, i);\n+\t\tif (pci_bar_length == 0) {\n+\t\t\txdev->bar_info[i].BAR_LENGTH = 0;\n+\t\t\txdev->bar_info[i].BAR_PHYS_ADDR = 0;\n+\t\t\txdev->bar_info[i].BAR_VIRT_ADDR = NULL;\n+\t\t} else {\n+\t\t\txdev->bar_info[i].BAR_LENGTH =\n+\t\t\t\tpci_bar_length;\n+\t\t\txdev->bar_info[i].BAR_PHYS_ADDR =\n+\t\t\t\tpci_resource_start(pdev, i);\n+\t\t\txdev->bar_info[i].BAR_VIRT_ADDR =\n+\t\t\t\tpci_iomap[i];\n+\t\t}\n+\t}\n+\n+\txdev->reg_base = pci_iomap[DMA_BAR_NUMBER];\n+\n+\terr = irq_probe(xdev);\n+\tif (err < 0) {\n+\t\tdev_err(&pdev->dev, \"Cannot probe irq lines for device %d\\n\",\n+\t\t\tplatform_dev->id);\n+\t\treturn err;\n+\t}\n+\n+\txdev->common.dev = &pdev->dev;\n+\n+\treturn 0;\n+}\n+\n+static int probe_channel_properties(struct platform_device *platform_dev,\n+\t\t\t\t    struct xlnx_pcie_dma_device *xdev,\n+\t\t\t\t    u16 channel_number)\n+{\n+\tint i;\n+\tchar propertyname[CHANNEL_PROPERTY_LENGTH];\n+\tint numvals, ret;\n+\tu32 *val;\n+\tstruct ps_pcie_dma_chan *channel;\n+\tstruct ps_pcie_dma_channel_match *xlnx_match;\n+\n+\tsnprintf(propertyname, CHANNEL_PROPERTY_LENGTH,\n+\t\t \"ps_pcie_channel%d\", channel_number);\n+\n+\tchannel = &xdev->channels[channel_number];\n+\n+\tspin_lock_init(&channel->channel_lock);\n+\tspin_lock_init(&channel->cookie_lock);\n+\n+\tINIT_LIST_HEAD(&channel->pending_list);\n+\tspin_lock_init(&channel->pending_list_lock);\n+\n+\tINIT_LIST_HEAD(&channel->active_list);\n+\tspin_lock_init(&channel->active_list_lock);\n+\n+\tspin_lock_init(&channel->src_desc_lock);\n+\tspin_lock_init(&channel->dst_desc_lock);\n+\n+\tINIT_LIST_HEAD(&channel->pending_interrupts_list);\n+\tspin_lock_init(&channel->pending_interrupts_lock);\n+\n+\tINIT_LIST_HEAD(&channel->active_interrupts_list);\n+\tspin_lock_init(&channel->active_interrupts_lock);\n+\n+\tinit_completion(&channel->srcq_work_complete);\n+\tinit_completion(&channel->dstq_work_complete);\n+\tinit_completion(&channel->chan_shutdown_complt);\n+\tinit_completion(&channel->chan_terminate_complete);\n+\n+\tif (device_property_present(&platform_dev->dev, propertyname)) {\n+\t\tnumvals = device_property_read_u32_array(&platform_dev->dev,\n+\t\t\t\t\t\t\t propertyname, NULL, 0);\n+\n+\t\tif (numvals < 0)\n+\t\t\treturn numvals;\n+\n+\t\tval = devm_kzalloc(&platform_dev->dev, sizeof(u32) * numvals,\n+\t\t\t\t   GFP_KERNEL);\n+\n+\t\tif (!val)\n+\t\t\treturn 
-ENOMEM;\n+\n+\t\tret = device_property_read_u32_array(&platform_dev->dev,\n+\t\t\t\t\t\t     propertyname, val,\n+\t\t\t\t\t\t     numvals);\n+\t\tif (ret < 0) {\n+\t\t\tdev_err(&platform_dev->dev,\n+\t\t\t\t\"Unable to read property %s\\n\", propertyname);\n+\t\t\treturn ret;\n+\t\t}\n+\n+\t\tfor (i = 0; i < numvals; i++) {\n+\t\t\tswitch (i) {\n+\t\t\tcase DMA_CHANNEL_DIRECTION:\n+\t\t\t\tchannel->direction =\n+\t\t\t\t\t(val[DMA_CHANNEL_DIRECTION] ==\n+\t\t\t\t\t\tPCIE_AXI_DIRECTION) ?\n+\t\t\t\t\t\tDMA_TO_DEVICE : DMA_FROM_DEVICE;\n+\t\t\t\tbreak;\n+\t\t\tcase NUM_DESCRIPTORS:\n+\t\t\t\tchannel->total_descriptors =\n+\t\t\t\t\t\tval[NUM_DESCRIPTORS];\n+\t\t\t\tif (channel->total_descriptors >\n+\t\t\t\t\tMAX_DESCRIPTORS) {\n+\t\t\t\t\tdev_info(&platform_dev->dev,\n+\t\t\t\t\t\t \"Descriptors > alowd max\\n\");\n+\t\t\t\t\tchannel->total_descriptors =\n+\t\t\t\t\t\t\tMAX_DESCRIPTORS;\n+\t\t\t\t}\n+\t\t\t\tbreak;\n+\t\t\tcase NUM_QUEUES:\n+\t\t\t\tchannel->num_queues = val[NUM_QUEUES];\n+\t\t\t\tswitch (channel->num_queues) {\n+\t\t\t\tcase DEFAULT_DMA_QUEUES:\n+\t\t\t\t\t\tbreak;\n+\t\t\t\tcase TWO_DMA_QUEUES:\n+\t\t\t\t\t\tbreak;\n+\t\t\t\tdefault:\n+\t\t\t\tdev_info(&platform_dev->dev,\n+\t\t\t\t\t \"Incorrect Q number for dma chan\\n\");\n+\t\t\t\tchannel->num_queues = DEFAULT_DMA_QUEUES;\n+\t\t\t\t}\n+\t\t\t\tbreak;\n+\t\t\tcase COALESE_COUNT:\n+\t\t\t\tchannel->coalesce_count = val[COALESE_COUNT];\n+\n+\t\t\t\tif (channel->coalesce_count >\n+\t\t\t\t\tMAX_COALESCE_COUNT) {\n+\t\t\t\t\tdev_info(&platform_dev->dev,\n+\t\t\t\t\t\t \"Invalid coalesce Count\\n\");\n+\t\t\t\t\tchannel->coalesce_count =\n+\t\t\t\t\t\tMAX_COALESCE_COUNT;\n+\t\t\t\t}\n+\t\t\t\tbreak;\n+\t\t\tcase POLL_TIMER_FREQUENCY:\n+\t\t\t\tchannel->poll_timer_freq =\n+\t\t\t\t\tval[POLL_TIMER_FREQUENCY];\n+\t\t\t\tbreak;\n+\t\t\tdefault:\n+\t\t\t\tdev_err(&platform_dev->dev,\n+\t\t\t\t\t\"Check order of channel properties!\\n\");\n+\t\t\t}\n+\t\t}\n+\t} else {\n+\t\tdev_err(&platform_dev->dev,\n+\t\t\t\"Property %s not present. 
Invalid configuration!\\n\",\n+\t\t\t\tpropertyname);\n+\t\treturn -ENOTSUPP;\n+\t}\n+\n+\tif (channel->direction == DMA_TO_DEVICE) {\n+\t\tif (channel->num_queues == DEFAULT_DMA_QUEUES) {\n+\t\t\tchannel->srcq_buffer_location = BUFFER_LOC_PCI;\n+\t\t\tchannel->dstq_buffer_location = BUFFER_LOC_AXI;\n+\t\t} else {\n+\t\t\tchannel->srcq_buffer_location = BUFFER_LOC_PCI;\n+\t\t\tchannel->dstq_buffer_location = BUFFER_LOC_INVALID;\n+\t\t}\n+\t} else {\n+\t\tif (channel->num_queues == DEFAULT_DMA_QUEUES) {\n+\t\t\tchannel->srcq_buffer_location = BUFFER_LOC_AXI;\n+\t\t\tchannel->dstq_buffer_location = BUFFER_LOC_PCI;\n+\t\t} else {\n+\t\t\tchannel->srcq_buffer_location = BUFFER_LOC_INVALID;\n+\t\t\tchannel->dstq_buffer_location = BUFFER_LOC_PCI;\n+\t\t}\n+\t}\n+\n+\tchannel->xdev = xdev;\n+\tchannel->channel_number = channel_number;\n+\n+\tif (xdev->is_rootdma) {\n+\t\tchannel->dev = xdev->dev;\n+\t\tchannel->intr_status_offset = DMA_AXI_INTR_STATUS_REG_OFFSET;\n+\t\tchannel->intr_control_offset = DMA_AXI_INTR_CNTRL_REG_OFFSET;\n+\t} else {\n+\t\tchannel->dev = &xdev->pci_dev->dev;\n+\t\tchannel->intr_status_offset = DMA_PCIE_INTR_STATUS_REG_OFFSET;\n+\t\tchannel->intr_control_offset = DMA_PCIE_INTR_CNTRL_REG_OFFSET;\n+\t}\n+\n+\tchannel->chan_base =\n+\t(struct DMA_ENGINE_REGISTERS *)((__force char *)(xdev->reg_base) +\n+\t\t\t\t (channel_number * DMA_CHANNEL_REGS_SIZE));\n+\n+\tif (((channel->chan_base->dma_channel_status) &\n+\t\t\t\tDMA_STATUS_DMA_PRES_BIT) == 0) {\n+\t\tdev_err(&platform_dev->dev,\n+\t\t\t\"Hardware reports channel not present\\n\");\n+\t\treturn -ENOTSUPP;\n+\t}\n+\n+\tupdate_channel_read_attribute(channel);\n+\tupdate_channel_write_attribute(channel);\n+\n+\txlnx_match = devm_kzalloc(&platform_dev->dev,\n+\t\t\t\t  sizeof(struct ps_pcie_dma_channel_match),\n+\t\t\t\t  GFP_KERNEL);\n+\n+\tif (!xlnx_match)\n+\t\treturn -ENOMEM;\n+\n+\tif (xdev->is_rootdma) {\n+\t\txlnx_match->pci_vendorid = xdev->rootdma_vendor;\n+\t\txlnx_match->pci_deviceid = xdev->rootdma_device;\n+\t} else {\n+\t\txlnx_match->pci_vendorid = xdev->pci_dev->vendor;\n+\t\txlnx_match->pci_deviceid = xdev->pci_dev->device;\n+\t\txlnx_match->bar_params = xdev->bar_info;\n+\t}\n+\n+\txlnx_match->board_number = xdev->board_number;\n+\txlnx_match->channel_number = channel_number;\n+\txlnx_match->direction = xdev->channels[channel_number].direction;\n+\n+\tchannel->common.private = (void *)xlnx_match;\n+\n+\tchannel->common.device = &xdev->common;\n+\tlist_add_tail(&channel->common.device_node, &xdev->common.channels);\n+\n+\treturn 0;\n+}\n+\n+static void xlnx_ps_pcie_destroy_mempool(struct ps_pcie_dma_chan *chan)\n+{\n+\tmempool_destroy(chan->transactions_pool);\n+\n+\tmempool_destroy(chan->intr_transactions_pool);\n+}\n+\n+static void xlnx_ps_pcie_free_worker_queues(struct ps_pcie_dma_chan *chan)\n+{\n+\tif (chan->maintenance_workq)\n+\t\tdestroy_workqueue(chan->maintenance_workq);\n+\n+\tif (chan->sw_intrs_wrkq)\n+\t\tdestroy_workqueue(chan->sw_intrs_wrkq);\n+\n+\tif (chan->srcq_desc_cleanup)\n+\t\tdestroy_workqueue(chan->srcq_desc_cleanup);\n+\n+\tif (chan->dstq_desc_cleanup)\n+\t\tdestroy_workqueue(chan->dstq_desc_cleanup);\n+\n+\tif (chan->chan_programming)\n+\t\tdestroy_workqueue(chan->chan_programming);\n+\n+\tif (chan->primary_desc_cleanup)\n+\t\tdestroy_workqueue(chan->primary_desc_cleanup);\n+}\n+\n+static void xlnx_ps_pcie_free_pkt_ctxts(struct ps_pcie_dma_chan *chan)\n+{\n+\tkfree(chan->ppkt_ctx_srcq);\n+\n+\tkfree(chan->ppkt_ctx_dstq);\n+}\n+\n+static void xlnx_ps_pcie_free_descriptors(struct 
ps_pcie_dma_chan *chan)\n+{\n+\tssize_t size;\n+\n+\tif (chan->psrc_sgl_bd) {\n+\t\tsize = chan->total_descriptors *\n+\t\t\tsizeof(struct SOURCE_DMA_DESCRIPTOR);\n+\t\tdma_free_coherent(chan->dev, size, chan->psrc_sgl_bd,\n+\t\t\t\t  chan->src_sgl_bd_pa);\n+\t}\n+\n+\tif (chan->pdst_sgl_bd) {\n+\t\tsize = chan->total_descriptors *\n+\t\t\tsizeof(struct DEST_DMA_DESCRIPTOR);\n+\t\tdma_free_coherent(chan->dev, size, chan->pdst_sgl_bd,\n+\t\t\t\t  chan->dst_sgl_bd_pa);\n+\t}\n+\n+\tif (chan->psrc_sta_bd) {\n+\t\tsize = chan->total_descriptors *\n+\t\t\tsizeof(struct STATUS_DMA_DESCRIPTOR);\n+\t\tdma_free_coherent(chan->dev, size, chan->psrc_sta_bd,\n+\t\t\t\t  chan->src_sta_bd_pa);\n+\t}\n+\n+\tif (chan->pdst_sta_bd) {\n+\t\tsize = chan->total_descriptors *\n+\t\t\tsizeof(struct STATUS_DMA_DESCRIPTOR);\n+\t\tdma_free_coherent(chan->dev, size, chan->pdst_sta_bd,\n+\t\t\t\t  chan->dst_sta_bd_pa);\n+\t}\n+}\n+\n+static int xlnx_ps_pcie_channel_activate(struct ps_pcie_dma_chan *chan)\n+{\n+\tu32 reg = chan->coalesce_count;\n+\n+\treg = reg << DMA_INTCNTRL_SGCOLSCCNT_BIT_SHIFT;\n+\n+\t/* Enable Interrupts for channel */\n+\tps_pcie_dma_set_mask(chan, chan->intr_control_offset,\n+\t\t\t     reg | DMA_INTCNTRL_ENABLINTR_BIT |\n+\t\t\t     DMA_INTCNTRL_DMAERRINTR_BIT |\n+\t\t\t     DMA_INTCNTRL_DMASGINTR_BIT);\n+\n+\t/* Enable DMA */\n+\tps_pcie_dma_set_mask(chan, DMA_CNTRL_REG_OFFSET,\n+\t\t\t     DMA_CNTRL_ENABL_BIT |\n+\t\t\t     DMA_CNTRL_64BIT_STAQ_ELEMSZ_BIT);\n+\n+\tspin_lock(&chan->channel_lock);\n+\tchan->state = CHANNEL_AVAILABLE;\n+\tspin_unlock(&chan->channel_lock);\n+\n+\t/* Activate timer if required */\n+\tif ((chan->coalesce_count > 0) && !chan->poll_timer.function)\n+\t\txlnx_ps_pcie_alloc_poll_timer(chan);\n+\n+\treturn 0;\n+}\n+\n+static void xlnx_ps_pcie_channel_quiesce(struct ps_pcie_dma_chan *chan)\n+{\n+\t/* Disable interrupts for Channel */\n+\tps_pcie_dma_clr_mask(chan, chan->intr_control_offset,\n+\t\t\t     DMA_INTCNTRL_ENABLINTR_BIT);\n+\n+\t/* Delete timer if it is created */\n+\tif ((chan->coalesce_count > 0) && (!chan->poll_timer.function))\n+\t\txlnx_ps_pcie_free_poll_timer(chan);\n+\n+\t/* Flush descriptor cleaning work queues */\n+\tif (chan->primary_desc_cleanup)\n+\t\tflush_workqueue(chan->primary_desc_cleanup);\n+\n+\t/* Flush channel programming work queue */\n+\tif (chan->chan_programming)\n+\t\tflush_workqueue(chan->chan_programming);\n+\n+\t/*  Clear the persistent bits */\n+\tps_pcie_dma_set_mask(chan, chan->intr_status_offset,\n+\t\t\t     DMA_INTSTATUS_DMAERR_BIT |\n+\t\t\t     DMA_INTSTATUS_SGLINTR_BIT |\n+\t\t\t     DMA_INTSTATUS_SWINTR_BIT);\n+\n+\t/* Disable DMA channel */\n+\tps_pcie_dma_clr_mask(chan, DMA_CNTRL_REG_OFFSET, DMA_CNTRL_ENABL_BIT);\n+\n+\tspin_lock(&chan->channel_lock);\n+\tchan->state = CHANNEL_UNAVIALBLE;\n+\tspin_unlock(&chan->channel_lock);\n+}\n+\n+static u32 total_bytes_in_sgl(struct scatterlist *sgl,\n+\t\t\t      unsigned int num_entries)\n+{\n+\tu32 total_bytes = 0;\n+\tstruct scatterlist *sgl_ptr;\n+\tunsigned int i;\n+\n+\tfor_each_sg(sgl, sgl_ptr, num_entries, i)\n+\t\ttotal_bytes += sg_dma_len(sgl_ptr);\n+\n+\treturn total_bytes;\n+}\n+\n+static void ivk_cbk_intr_seg(struct ps_pcie_intr_segment *intr_seg,\n+\t\t\t     struct ps_pcie_dma_chan *chan,\n+\t\t\t     enum dmaengine_tx_result result)\n+{\n+\tstruct dmaengine_result rslt;\n+\n+\trslt.result = result;\n+\trslt.residue = 
0;\n+\n+\tspin_lock(&chan->cookie_lock);\n+\tdma_cookie_complete(&intr_seg->async_intr_tx);\n+\tspin_unlock(&chan->cookie_lock);\n+\n+\tdmaengine_desc_get_callback_invoke(&intr_seg->async_intr_tx, &rslt);\n+}\n+\n+static void ivk_cbk_seg(struct ps_pcie_tx_segment *seg,\n+\t\t\tstruct ps_pcie_dma_chan *chan,\n+\t\t\tenum dmaengine_tx_result result)\n+{\n+\tstruct dmaengine_result rslt, *prslt;\n+\n+\tspin_lock(&chan->cookie_lock);\n+\tdma_cookie_complete(&seg->async_tx);\n+\tspin_unlock(&chan->cookie_lock);\n+\n+\trslt.result = result;\n+\tif (seg->tx_elements.src_sgl &&\n+\t    chan->srcq_buffer_location == BUFFER_LOC_PCI) {\n+\t\trslt.residue =\n+\t\t\ttotal_bytes_in_sgl(seg->tx_elements.src_sgl,\n+\t\t\t\t\t   seg->tx_elements.srcq_num_elemets);\n+\t\tprslt = &rslt;\n+\t} else if (seg->tx_elements.dst_sgl &&\n+\t\t   chan->dstq_buffer_location == BUFFER_LOC_PCI) {\n+\t\trslt.residue =\n+\t\t\ttotal_bytes_in_sgl(seg->tx_elements.dst_sgl,\n+\t\t\t\t\t   seg->tx_elements.dstq_num_elemets);\n+\t\tprslt = &rslt;\n+\t} else {\n+\t\tprslt = NULL;\n+\t}\n+\n+\tdmaengine_desc_get_callback_invoke(&seg->async_tx, prslt);\n+}\n+\n+static void ivk_cbk_ctx(struct PACKET_TRANSFER_PARAMS *ppkt_ctxt,\n+\t\t\tstruct ps_pcie_dma_chan *chan,\n+\t\t\tenum dmaengine_tx_result result)\n+{\n+\tif (ppkt_ctxt->availability_status == IN_USE) {\n+\t\tif (ppkt_ctxt->seg) {\n+\t\t\tivk_cbk_seg(ppkt_ctxt->seg, chan, result);\n+\t\t\tmempool_free(ppkt_ctxt->seg,\n+\t\t\t\t     chan->transactions_pool);\n+\t\t}\n+\t}\n+}\n+\n+static void ivk_cbk_for_pending(struct ps_pcie_dma_chan *chan)\n+{\n+\tint i;\n+\tstruct PACKET_TRANSFER_PARAMS *ppkt_ctxt;\n+\tstruct ps_pcie_tx_segment *seg, *seg_nxt;\n+\tstruct ps_pcie_intr_segment *intr_seg, *intr_seg_next;\n+\n+\tif (chan->ppkt_ctx_srcq) {\n+\t\tif (chan->idx_ctx_srcq_tail != chan->idx_ctx_srcq_head) {\n+\t\t\ti = chan->idx_ctx_srcq_tail;\n+\t\t\twhile (i != chan->idx_ctx_srcq_head) {\n+\t\t\t\tppkt_ctxt = chan->ppkt_ctx_srcq + i;\n+\t\t\t\tivk_cbk_ctx(ppkt_ctxt, chan,\n+\t\t\t\t\t    DMA_TRANS_READ_FAILED);\n+\t\t\t\tmemset(ppkt_ctxt, 0,\n+\t\t\t\t       sizeof(struct PACKET_TRANSFER_PARAMS));\n+\t\t\t\ti++;\n+\t\t\t\tif (i == chan->total_descriptors)\n+\t\t\t\t\ti = 0;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\tif (chan->ppkt_ctx_dstq) {\n+\t\tif (chan->idx_ctx_dstq_tail != chan->idx_ctx_dstq_head) {\n+\t\t\ti = chan->idx_ctx_dstq_tail;\n+\t\t\twhile (i != chan->idx_ctx_dstq_head) {\n+\t\t\t\tppkt_ctxt = chan->ppkt_ctx_dstq + i;\n+\t\t\t\tivk_cbk_ctx(ppkt_ctxt, chan,\n+\t\t\t\t\t    DMA_TRANS_WRITE_FAILED);\n+\t\t\t\tmemset(ppkt_ctxt, 0,\n+\t\t\t\t       sizeof(struct PACKET_TRANSFER_PARAMS));\n+\t\t\t\ti++;\n+\t\t\t\tif (i == chan->total_descriptors)\n+\t\t\t\t\ti = 0;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\tlist_for_each_entry_safe(seg, seg_nxt, &chan->active_list, node) {\n+\t\tivk_cbk_seg(seg, chan, DMA_TRANS_ABORTED);\n+\t\tspin_lock(&chan->active_list_lock);\n+\t\tlist_del(&seg->node);\n+\t\tspin_unlock(&chan->active_list_lock);\n+\t\tmempool_free(seg, chan->transactions_pool);\n+\t}\n+\n+\tlist_for_each_entry_safe(seg, seg_nxt, &chan->pending_list, node) {\n+\t\tivk_cbk_seg(seg, chan, DMA_TRANS_ABORTED);\n+\t\tspin_lock(&chan->pending_list_lock);\n+\t\tlist_del(&seg->node);\n+\t\tspin_unlock(&chan->pending_list_lock);\n+\t\tmempool_free(seg, chan->transactions_pool);\n+\t}\n+\n+\tlist_for_each_entry_safe(intr_seg, intr_seg_next,\n+\t\t\t\t &chan->active_interrupts_list, node) {\n+\t\tivk_cbk_intr_seg(intr_seg, chan, 
DMA_TRANS_ABORTED);\n+\t\tspin_lock(&chan->active_interrupts_lock);\n+\t\tlist_del(&intr_seg->node);\n+\t\tspin_unlock(&chan->active_interrupts_lock);\n+\t\tmempool_free(intr_seg, chan->intr_transactions_pool);\n+\t}\n+\n+\tlist_for_each_entry_safe(intr_seg, intr_seg_next,\n+\t\t\t\t &chan->pending_interrupts_list, node) {\n+\t\tivk_cbk_intr_seg(intr_seg, chan, DMA_TRANS_ABORTED);\n+\t\tspin_lock(&chan->pending_interrupts_lock);\n+\t\tlist_del(&intr_seg->node);\n+\t\tspin_unlock(&chan->pending_interrupts_lock);\n+\t\tmempool_free(intr_seg, chan->intr_transactions_pool);\n+\t}\n+}\n+\n+static void xlnx_ps_pcie_reset_channel(struct ps_pcie_dma_chan *chan)\n+{\n+\txlnx_ps_pcie_channel_quiesce(chan);\n+\n+\tivk_cbk_for_pending(chan);\n+\n+\tps_pcie_chan_reset(chan);\n+\n+\tinit_sw_components(chan);\n+\tinit_hw_components(chan);\n+\n+\txlnx_ps_pcie_channel_activate(chan);\n+}\n+\n+static void xlnx_ps_pcie_free_poll_timer(struct ps_pcie_dma_chan *chan)\n+{\n+\tif (chan->poll_timer.function) {\n+\t\tdel_timer_sync(&chan->poll_timer);\n+\t\tchan->poll_timer.function = NULL;\n+\t}\n+}\n+\n+static int xlnx_ps_pcie_alloc_poll_timer(struct ps_pcie_dma_chan *chan)\n+{\n+\tinit_timer(&chan->poll_timer);\n+\tchan->poll_timer.function = poll_completed_transactions;\n+\tchan->poll_timer.expires = jiffies + chan->poll_timer_freq;\n+\tchan->poll_timer.data = (unsigned long)chan;\n+\n+\tadd_timer(&chan->poll_timer);\n+\n+\treturn 0;\n+}\n+\n+static void terminate_transactions_work(struct work_struct *work)\n+{\n+\tstruct ps_pcie_dma_chan *chan =\n+\t\t(struct ps_pcie_dma_chan *)container_of(work,\n+\t\t\tstruct ps_pcie_dma_chan, handle_chan_terminate);\n+\n+\txlnx_ps_pcie_channel_quiesce(chan);\n+\tivk_cbk_for_pending(chan);\n+\txlnx_ps_pcie_channel_activate(chan);\n+\n+\tcomplete(&chan->chan_terminate_complete);\n+}\n+\n+static void chan_shutdown_work(struct work_struct *work)\n+{\n+\tstruct ps_pcie_dma_chan *chan =\n+\t\t(struct ps_pcie_dma_chan *)container_of(work,\n+\t\t\t\tstruct ps_pcie_dma_chan, handle_chan_shutdown);\n+\n+\txlnx_ps_pcie_channel_quiesce(chan);\n+\n+\tcomplete(&chan->chan_shutdown_complt);\n+}\n+\n+static void chan_reset_work(struct work_struct *work)\n+{\n+\tstruct ps_pcie_dma_chan *chan =\n+\t\t(struct ps_pcie_dma_chan *)container_of(work,\n+\t\t\t\tstruct ps_pcie_dma_chan, handle_chan_reset);\n+\n+\txlnx_ps_pcie_reset_channel(chan);\n+}\n+\n+static void sw_intr_work(struct work_struct *work)\n+{\n+\tstruct ps_pcie_dma_chan *chan =\n+\t\t(struct ps_pcie_dma_chan *)container_of(work,\n+\t\t\t\tstruct ps_pcie_dma_chan, handle_sw_intrs);\n+\tstruct ps_pcie_intr_segment *intr_seg, *intr_seg_next;\n+\n+\tlist_for_each_entry_safe(intr_seg, intr_seg_next,\n+\t\t\t\t &chan->active_interrupts_list, node) {\n+\t\tspin_lock(&chan->cookie_lock);\n+\t\tdma_cookie_complete(&intr_seg->async_intr_tx);\n+\t\tspin_unlock(&chan->cookie_lock);\n+\t\tdmaengine_desc_get_callback_invoke(&intr_seg->async_intr_tx,\n+\t\t\t\t\t\t   NULL);\n+\t\tspin_lock(&chan->active_interrupts_lock);\n+\t\tlist_del(&intr_seg->node);\n+\t\tspin_unlock(&chan->active_interrupts_lock);\n+\t}\n+}\n+\n+static int xlnx_ps_pcie_alloc_worker_threads(struct ps_pcie_dma_chan *chan)\n+{\n+\tchar wq_name[WORKQ_NAME_SIZE];\n+\n+\tsnprintf(wq_name, WORKQ_NAME_SIZE,\n+\t\t \"PS PCIe channel %d descriptor programming wq\",\n+\t\t chan->channel_number);\n+\tchan->chan_programming =\n+\t\tcreate_singlethread_workqueue((const char *)wq_name);\n+\tif (!chan->chan_programming) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Unable to create programming wq 
for chan %d\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_desc_program_wq;\n+\t} else {\n+\t\tINIT_WORK(&chan->handle_chan_programming,\n+\t\t\t  ps_pcie_chan_program_work);\n+\t}\n+\tmemset(wq_name, 0, WORKQ_NAME_SIZE);\n+\n+\tsnprintf(wq_name, WORKQ_NAME_SIZE,\n+\t\t \"PS PCIe channel %d primary cleanup wq\", chan->channel_number);\n+\tchan->primary_desc_cleanup =\n+\t\tcreate_singlethread_workqueue((const char *)wq_name);\n+\tif (!chan->primary_desc_cleanup) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Unable to create primary cleanup wq for channel %d\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_primary_clean_wq;\n+\t} else {\n+\t\tINIT_WORK(&chan->handle_primary_desc_cleanup,\n+\t\t\t  ps_pcie_chan_primary_work);\n+\t}\n+\tmemset(wq_name, 0, WORKQ_NAME_SIZE);\n+\n+\tsnprintf(wq_name, WORKQ_NAME_SIZE,\n+\t\t \"PS PCIe channel %d maintenance works wq\",\n+\t\t chan->channel_number);\n+\tchan->maintenance_workq =\n+\t\tcreate_singlethread_workqueue((const char *)wq_name);\n+\tif (!chan->maintenance_workq) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Unable to create maintenance wq for channel %d\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_maintenance_wq;\n+\t} else {\n+\t\tINIT_WORK(&chan->handle_chan_reset, chan_reset_work);\n+\t\tINIT_WORK(&chan->handle_chan_shutdown, chan_shutdown_work);\n+\t\tINIT_WORK(&chan->handle_chan_terminate,\n+\t\t\t  terminate_transactions_work);\n+\t}\n+\tmemset(wq_name, 0, WORKQ_NAME_SIZE);\n+\n+\tsnprintf(wq_name, WORKQ_NAME_SIZE,\n+\t\t \"PS PCIe channel %d software Interrupts wq\",\n+\t\t chan->channel_number);\n+\tchan->sw_intrs_wrkq =\n+\t\tcreate_singlethread_workqueue((const char *)wq_name);\n+\tif (!chan->sw_intrs_wrkq) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Unable to create sw interrupts wq for channel %d\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_sw_intrs_wq;\n+\t} else {\n+\t\tINIT_WORK(&chan->handle_sw_intrs, sw_intr_work);\n+\t}\n+\tmemset(wq_name, 0, WORKQ_NAME_SIZE);\n+\n+\tif (chan->psrc_sgl_bd) {\n+\t\tsnprintf(wq_name, WORKQ_NAME_SIZE,\n+\t\t\t \"PS PCIe channel %d srcq handling wq\",\n+\t\t\t chan->channel_number);\n+\t\tchan->srcq_desc_cleanup =\n+\t\t\tcreate_singlethread_workqueue((const char *)wq_name);\n+\t\tif (!chan->srcq_desc_cleanup) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Unable to create src q completion wq chan %d\",\n+\t\t\t\tchan->channel_number);\n+\t\t\tgoto err_no_src_q_completion_wq;\n+\t\t} else {\n+\t\t\tINIT_WORK(&chan->handle_srcq_desc_cleanup,\n+\t\t\t\t  src_cleanup_work);\n+\t\t}\n+\t\tmemset(wq_name, 0, WORKQ_NAME_SIZE);\n+\t}\n+\n+\tif (chan->pdst_sgl_bd) {\n+\t\tsnprintf(wq_name, WORKQ_NAME_SIZE,\n+\t\t\t \"PS PCIe channel %d dstq handling wq\",\n+\t\t\t chan->channel_number);\n+\t\tchan->dstq_desc_cleanup =\n+\t\t\tcreate_singlethread_workqueue((const char *)wq_name);\n+\t\tif (!chan->dstq_desc_cleanup) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Unable to create dst q completion wq chan %d\",\n+\t\t\t\tchan->channel_number);\n+\t\t\tgoto err_no_dst_q_completion_wq;\n+\t\t} else {\n+\t\t\tINIT_WORK(&chan->handle_dstq_desc_cleanup,\n+\t\t\t\t  dst_cleanup_work);\n+\t\t}\n+\t\tmemset(wq_name, 0, WORKQ_NAME_SIZE);\n+\t}\n+\n+\treturn 0;\n+err_no_dst_q_completion_wq:\n+\tif (chan->srcq_desc_cleanup)\n+\t\tdestroy_workqueue(chan->srcq_desc_cleanup);\n+err_no_src_q_completion_wq:\n+\tif (chan->sw_intrs_wrkq)\n+\t\tdestroy_workqueue(chan->sw_intrs_wrkq);\n+err_no_sw_intrs_wq:\n+\tif (chan->maintenance_workq)\n+\t\tdestroy_workqueue(chan->maintenance_workq);\n+err_no_maintenance_wq:\n+\tif 
(chan->primary_desc_cleanup)\n+\t\tdestroy_workqueue(chan->primary_desc_cleanup);\n+err_no_primary_clean_wq:\n+\tif (chan->chan_programming)\n+\t\tdestroy_workqueue(chan->chan_programming);\n+err_no_desc_program_wq:\n+\treturn -ENOMEM;\n+}\n+\n+static int xlnx_ps_pcie_alloc_mempool(struct ps_pcie_dma_chan *chan)\n+{\n+\tchan->transactions_pool =\n+\t\tmempool_create_kmalloc_pool(chan->total_descriptors,\n+\t\t\t\t\t    sizeof(struct ps_pcie_tx_segment));\n+\n+\tif (!chan->transactions_pool)\n+\t\tgoto no_transactions_pool;\n+\n+\tchan->intr_transactions_pool =\n+\tmempool_create_kmalloc_pool(MIN_SW_INTR_TRANSACTIONS,\n+\t\t\t\t    sizeof(struct ps_pcie_intr_segment));\n+\n+\tif (!chan->intr_transactions_pool)\n+\t\tgoto no_intr_transactions_pool;\n+\n+\treturn 0;\n+\n+no_intr_transactions_pool:\n+\tmempool_destroy(chan->transactions_pool);\n+\n+no_transactions_pool:\n+\treturn -ENOMEM;\n+}\n+\n+static int xlnx_ps_pcie_alloc_pkt_contexts(struct ps_pcie_dma_chan *chan)\n+{\n+\tif (chan->psrc_sgl_bd) {\n+\t\tchan->ppkt_ctx_srcq =\n+\t\t\tkcalloc(chan->total_descriptors,\n+\t\t\t\tsizeof(struct PACKET_TRANSFER_PARAMS),\n+\t\t\t\tGFP_KERNEL);\n+\t\tif (!chan->ppkt_ctx_srcq) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Src pkt cxt allocation for chan %d failed\\n\",\n+\t\t\t\tchan->channel_number);\n+\t\t\tgoto err_no_src_pkt_ctx;\n+\t\t}\n+\t}\n+\n+\tif (chan->pdst_sgl_bd) {\n+\t\tchan->ppkt_ctx_dstq =\n+\t\t\tkcalloc(chan->total_descriptors,\n+\t\t\t\tsizeof(struct PACKET_TRANSFER_PARAMS),\n+\t\t\t\tGFP_KERNEL);\n+\t\tif (!chan->ppkt_ctx_dstq) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Dst pkt cxt for chan %d failed\\n\",\n+\t\t\t\tchan->channel_number);\n+\t\t\tgoto err_no_dst_pkt_ctx;\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+\n+err_no_dst_pkt_ctx:\n+\tkfree(chan->ppkt_ctx_srcq);\n+\n+err_no_src_pkt_ctx:\n+\treturn -ENOMEM;\n+}\n+\n+static int dma_alloc_descriptors_two_queues(struct ps_pcie_dma_chan *chan)\n+{\n+\tsize_t size;\n+\n+\tvoid *sgl_base;\n+\tvoid *sta_base;\n+\tdma_addr_t phy_addr_sglbase;\n+\tdma_addr_t phy_addr_stabase;\n+\n+\tsize = chan->total_descriptors *\n+\t\tsizeof(struct SOURCE_DMA_DESCRIPTOR);\n+\n+\tsgl_base = dma_zalloc_coherent(chan->dev, size, &phy_addr_sglbase,\n+\t\t\t\t       GFP_KERNEL);\n+\n+\tif (!sgl_base) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Sgl bds in two channel mode for chan %d failed\\n\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_sgl_bds;\n+\t}\n+\n+\tsize = chan->total_descriptors * sizeof(struct STATUS_DMA_DESCRIPTOR);\n+\tsta_base = dma_zalloc_coherent(chan->dev, size, &phy_addr_stabase,\n+\t\t\t\t       GFP_KERNEL);\n+\n+\tif (!sta_base) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Sta bds in two channel mode for chan %d failed\\n\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_sta_bds;\n+\t}\n+\n+\tif (chan->direction == DMA_TO_DEVICE) {\n+\t\tchan->psrc_sgl_bd = sgl_base;\n+\t\tchan->src_sgl_bd_pa = phy_addr_sglbase;\n+\n+\t\tchan->psrc_sta_bd = sta_base;\n+\t\tchan->src_sta_bd_pa = phy_addr_stabase;\n+\n+\t\tchan->pdst_sgl_bd = NULL;\n+\t\tchan->dst_sgl_bd_pa = 0;\n+\n+\t\tchan->pdst_sta_bd = NULL;\n+\t\tchan->dst_sta_bd_pa = 0;\n+\n+\t} else if (chan->direction == DMA_FROM_DEVICE) {\n+\t\tchan->psrc_sgl_bd = NULL;\n+\t\tchan->src_sgl_bd_pa = 0;\n+\n+\t\tchan->psrc_sta_bd = NULL;\n+\t\tchan->src_sta_bd_pa = 0;\n+\n+\t\tchan->pdst_sgl_bd = sgl_base;\n+\t\tchan->dst_sgl_bd_pa = phy_addr_sglbase;\n+\n+\t\tchan->pdst_sta_bd = sta_base;\n+\t\tchan->dst_sta_bd_pa = phy_addr_stabase;\n+\n+\t} else {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"%d %s() Unsupported channel 
direction\\n\",\n+\t\t\t__LINE__, __func__);\n+\t\tgoto unsupported_channel_direction;\n+\t}\n+\n+\treturn 0;\n+\n+unsupported_channel_direction:\n+\tsize = chan->total_descriptors *\n+\t\tsizeof(struct STATUS_DMA_DESCRIPTOR);\n+\tdma_free_coherent(chan->dev, size, sta_base, phy_addr_stabase);\n+err_no_sta_bds:\n+\tsize = chan->total_descriptors *\n+\t\tsizeof(struct SOURCE_DMA_DESCRIPTOR);\n+\tdma_free_coherent(chan->dev, size, sgl_base, phy_addr_sglbase);\n+err_no_sgl_bds:\n+\n+\treturn -ENOMEM;\n+}\n+\n+static int dma_alloc_decriptors_all_queues(struct ps_pcie_dma_chan *chan)\n+{\n+\tsize_t size;\n+\n+\tsize = chan->total_descriptors *\n+\t\tsizeof(struct SOURCE_DMA_DESCRIPTOR);\n+\tchan->psrc_sgl_bd =\n+\t\tdma_zalloc_coherent(chan->dev, size, &chan->src_sgl_bd_pa,\n+\t\t\t\t    GFP_KERNEL);\n+\n+\tif (!chan->psrc_sgl_bd) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Alloc fail src q buffer descriptors for chan %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_src_sgl_descriptors;\n+\t}\n+\n+\tsize = chan->total_descriptors * sizeof(struct DEST_DMA_DESCRIPTOR);\n+\tchan->pdst_sgl_bd =\n+\t\tdma_zalloc_coherent(chan->dev, size, &chan->dst_sgl_bd_pa,\n+\t\t\t\t    GFP_KERNEL);\n+\n+\tif (!chan->pdst_sgl_bd) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Alloc fail dst q buffer descriptors for chan %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_dst_sgl_descriptors;\n+\t}\n+\n+\tsize = chan->total_descriptors * sizeof(struct STATUS_DMA_DESCRIPTOR);\n+\tchan->psrc_sta_bd =\n+\t\tdma_zalloc_coherent(chan->dev, size, &chan->src_sta_bd_pa,\n+\t\t\t\t    GFP_KERNEL);\n+\n+\tif (!chan->psrc_sta_bd) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Unable to allocate src q status bds for chan %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_src_sta_descriptors;\n+\t}\n+\n+\tchan->pdst_sta_bd =\n+\t\tdma_zalloc_coherent(chan->dev, size, &chan->dst_sta_bd_pa,\n+\t\t\t\t    GFP_KERNEL);\n+\n+\tif (!chan->pdst_sta_bd) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Unable to allocate Dst q status bds for chan %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_dst_sta_descriptors;\n+\t}\n+\n+\treturn 0;\n+\n+err_no_dst_sta_descriptors:\n+\tsize = chan->total_descriptors *\n+\t\tsizeof(struct STATUS_DMA_DESCRIPTOR);\n+\tdma_free_coherent(chan->dev, size, chan->psrc_sta_bd,\n+\t\t\t  chan->src_sta_bd_pa);\n+err_no_src_sta_descriptors:\n+\tsize = chan->total_descriptors *\n+\t\tsizeof(struct DEST_DMA_DESCRIPTOR);\n+\tdma_free_coherent(chan->dev, size, chan->pdst_sgl_bd,\n+\t\t\t  chan->dst_sgl_bd_pa);\n+err_no_dst_sgl_descriptors:\n+\tsize = chan->total_descriptors *\n+\t\tsizeof(struct SOURCE_DMA_DESCRIPTOR);\n+\tdma_free_coherent(chan->dev, size, chan->psrc_sgl_bd,\n+\t\t\t  chan->src_sgl_bd_pa);\n+\n+err_no_src_sgl_descriptors:\n+\treturn -ENOMEM;\n+}\n+\n+static void xlnx_ps_pcie_dma_free_chan_resources(struct dma_chan *dchan)\n+{\n+\tstruct ps_pcie_dma_chan *chan;\n+\n+\tif (!dchan)\n+\t\treturn;\n+\n+\tchan = to_xilinx_chan(dchan);\n+\n+\tif (chan->state == CHANNEL_RESOURCE_UNALLOCATED)\n+\t\treturn;\n+\n+\tif (chan->maintenance_workq) {\n+\t\tif (completion_done(&chan->chan_shutdown_complt))\n+\t\t\treinit_completion(&chan->chan_shutdown_complt);\n+\t\tqueue_work(chan->maintenance_workq,\n+\t\t\t   
&chan->handle_chan_shutdown);\n+\t\twait_for_completion_interruptible(&chan->chan_shutdown_complt);\n+\n+\t\txlnx_ps_pcie_free_worker_queues(chan);\n+\t\txlnx_ps_pcie_free_pkt_ctxts(chan);\n+\t\txlnx_ps_pcie_destroy_mempool(chan);\n+\t\txlnx_ps_pcie_free_descriptors(chan);\n+\n+\t\tspin_lock(&chan->channel_lock);\n+\t\tchan->state = CHANNEL_RESOURCE_UNALLOCATED;\n+\t\tspin_unlock(&chan->channel_lock);\n+\t}\n+}\n+\n+static int xlnx_ps_pcie_dma_alloc_chan_resources(struct dma_chan *dchan)\n+{\n+\tstruct ps_pcie_dma_chan *chan;\n+\n+\tif (!dchan)\n+\t\treturn PTR_ERR(dchan);\n+\n+\tchan = to_xilinx_chan(dchan);\n+\n+\tif (chan->state != CHANNEL_RESOURCE_UNALLOCATED)\n+\t\treturn 0;\n+\n+\tif (chan->num_queues == DEFAULT_DMA_QUEUES) {\n+\t\tif (dma_alloc_decriptors_all_queues(chan) != 0) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Alloc fail bds for channel %d\\n\",\n+\t\t\t\tchan->channel_number);\n+\t\t\tgoto err_no_descriptors;\n+\t\t}\n+\t} else if (chan->num_queues == TWO_DMA_QUEUES) {\n+\t\tif (dma_alloc_descriptors_two_queues(chan) != 0) {\n+\t\t\tdev_err(chan->dev,\n+\t\t\t\t\"Alloc fail bds for two queues of channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\t\tgoto err_no_descriptors;\n+\t\t}\n+\t}\n+\n+\tif (xlnx_ps_pcie_alloc_mempool(chan) != 0) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Unable to allocate memory pool for channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_mempools;\n+\t}\n+\n+\tif (xlnx_ps_pcie_alloc_pkt_contexts(chan) != 0) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Unable to allocate packet contexts for channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_pkt_ctxts;\n+\t}\n+\n+\tif (xlnx_ps_pcie_alloc_worker_threads(chan) != 0) {\n+\t\tdev_err(chan->dev,\n+\t\t\t\"Unable to allocate worker queues for channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\tgoto err_no_worker_queues;\n+\t}\n+\n+\txlnx_ps_pcie_reset_channel(chan);\n+\n+\tdma_cookie_init(dchan);\n+\n+\treturn 0;\n+\n+err_no_worker_queues:\n+\txlnx_ps_pcie_free_pkt_ctxts(chan);\n+err_no_pkt_ctxts:\n+\txlnx_ps_pcie_destroy_mempool(chan);\n+err_no_mempools:\n+\txlnx_ps_pcie_free_descriptors(chan);\n+err_no_descriptors:\n+\treturn -ENOMEM;\n+}\n+\n+static dma_cookie_t xilinx_intr_tx_submit(struct dma_async_tx_descriptor *tx)\n+{\n+\tstruct ps_pcie_intr_segment *intr_seg =\n+\t\tto_ps_pcie_dma_tx_intr_descriptor(tx);\n+\tstruct ps_pcie_dma_chan *chan = to_xilinx_chan(tx->chan);\n+\tdma_cookie_t cookie;\n+\n+\tif (chan->state != CHANNEL_AVAILABLE)\n+\t\treturn -EINVAL;\n+\n+\tspin_lock(&chan->cookie_lock);\n+\tcookie = dma_cookie_assign(tx);\n+\tspin_unlock(&chan->cookie_lock);\n+\n+\tspin_lock(&chan->pending_interrupts_lock);\n+\tlist_add_tail(&intr_seg->node, &chan->pending_interrupts_list);\n+\tspin_unlock(&chan->pending_interrupts_lock);\n+\n+\treturn cookie;\n+}\n+\n+static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)\n+{\n+\tstruct ps_pcie_tx_segment *seg = to_ps_pcie_dma_tx_descriptor(tx);\n+\tstruct ps_pcie_dma_chan *chan = to_xilinx_chan(tx->chan);\n+\tdma_cookie_t cookie;\n+\n+\tif (chan->state != CHANNEL_AVAILABLE)\n+\t\treturn -EINVAL;\n+\n+\tspin_lock(&chan->cookie_lock);\n+\tcookie = dma_cookie_assign(tx);\n+\tspin_unlock(&chan->cookie_lock);\n+\n+\tspin_lock(&chan->pending_list_lock);\n+\tlist_add_tail(&seg->node, &chan->pending_list);\n+\tspin_unlock(&chan->pending_list_lock);\n+\n+\treturn cookie;\n+}\n+\n+static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_dma_sg(\n+\t\tstruct dma_chan *channel, struct scatterlist *dst_sg,\n+\t\tunsigned int 
dst_nents, struct scatterlist *src_sg,\n+\t\tunsigned int src_nents, unsigned long flags)\n+{\n+\tstruct ps_pcie_dma_chan *chan = to_xilinx_chan(channel);\n+\tstruct ps_pcie_tx_segment *seg = NULL;\n+\n+\tif (chan->state != CHANNEL_AVAILABLE)\n+\t\treturn NULL;\n+\n+\tif (dst_nents == 0 || src_nents == 0)\n+\t\treturn NULL;\n+\n+\tif (!dst_sg || !src_sg)\n+\t\treturn NULL;\n+\n+\tif (chan->num_queues != DEFAULT_DMA_QUEUES) {\n+\t\tdev_err(chan->dev, \"Only prep_slave_sg for channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\treturn NULL;\n+\t}\n+\n+\tseg = mempool_alloc(chan->transactions_pool, GFP_ATOMIC);\n+\tif (!seg) {\n+\t\tdev_err(chan->dev, \"Tx segment alloc for channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\treturn NULL;\n+\t}\n+\n+\tmemset(seg, 0, sizeof(*seg));\n+\n+\tseg->tx_elements.dst_sgl = dst_sg;\n+\tseg->tx_elements.dstq_num_elemets = dst_nents;\n+\tseg->tx_elements.src_sgl = src_sg;\n+\tseg->tx_elements.srcq_num_elemets = src_nents;\n+\n+\tdma_async_tx_descriptor_init(&seg->async_tx, &chan->common);\n+\tseg->async_tx.flags = flags;\n+\tasync_tx_ack(&seg->async_tx);\n+\tseg->async_tx.tx_submit = xilinx_dma_tx_submit;\n+\n+\treturn &seg->async_tx;\n+}\n+\n+static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_slave_sg(\n+\t\tstruct dma_chan *channel, struct scatterlist *sgl,\n+\t\tunsigned int sg_len, enum dma_transfer_direction direction,\n+\t\tunsigned long flags, void *context)\n+{\n+\tstruct ps_pcie_dma_chan *chan = to_xilinx_chan(channel);\n+\tstruct ps_pcie_tx_segment *seg = NULL;\n+\n+\tif (chan->state != CHANNEL_AVAILABLE)\n+\t\treturn NULL;\n+\n+\tif (!(is_slave_direction(direction)))\n+\t\treturn NULL;\n+\n+\tif (!sgl || sg_len == 0)\n+\t\treturn NULL;\n+\n+\tif (chan->num_queues != TWO_DMA_QUEUES) {\n+\t\tdev_err(chan->dev, \"Only prep_dma_sg is supported channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\treturn NULL;\n+\t}\n+\n+\tseg = mempool_alloc(chan->transactions_pool, GFP_ATOMIC);\n+\tif (!seg) {\n+\t\tdev_err(chan->dev, \"Unable to allocate tx segment channel %d\\n\",\n+\t\t\tchan->channel_number);\n+\t\treturn NULL;\n+\t}\n+\n+\tmemset(seg, 0, sizeof(*seg));\n+\n+\tif (chan->direction == DMA_TO_DEVICE) {\n+\t\tseg->tx_elements.src_sgl = sgl;\n+\t\tseg->tx_elements.srcq_num_elemets = sg_len;\n+\t\tseg->tx_elements.dst_sgl = NULL;\n+\t\tseg->tx_elements.dstq_num_elemets = 0;\n+\t} else {\n+\t\tseg->tx_elements.src_sgl = NULL;\n+\t\tseg->tx_elements.srcq_num_elemets = 0;\n+\t\tseg->tx_elements.dst_sgl = sgl;\n+\t\tseg->tx_elements.dstq_num_elemets = sg_len;\n+\t}\n+\n+\tdma_async_tx_descriptor_init(&seg->async_tx, &chan->common);\n+\tseg->async_tx.flags = flags;\n+\tasync_tx_ack(&seg->async_tx);\n+\tseg->async_tx.tx_submit = xilinx_dma_tx_submit;\n+\n+\treturn &seg->async_tx;\n+}\n+\n+static void xlnx_ps_pcie_dma_issue_pending(struct dma_chan *channel)\n+{\n+\tstruct ps_pcie_dma_chan *chan;\n+\n+\tif (!channel)\n+\t\treturn;\n+\n+\tchan = to_xilinx_chan(channel);\n+\n+\tif (!list_empty(&chan->pending_list)) {\n+\t\tspin_lock(&chan->pending_list_lock);\n+\t\tspin_lock(&chan->active_list_lock);\n+\t\tlist_splice_tail_init(&chan->pending_list,\n+\t\t\t\t      &chan->active_list);\n+\t\tspin_unlock(&chan->active_list_lock);\n+\t\tspin_unlock(&chan->pending_list_lock);\n+\t}\n+\n+\tif (!list_empty(&chan->pending_interrupts_list)) {\n+\t\tspin_lock(&chan->pending_interrupts_lock);\n+\t\tspin_lock(&chan->active_interrupts_lock);\n+\t\tlist_splice_tail_init(&chan->pending_interrupts_list,\n+\t\t\t\t      
&chan->active_interrupts_list);\n+\t\tspin_unlock(&chan->active_interrupts_lock);\n+\t\tspin_unlock(&chan->pending_interrupts_lock);\n+\t}\n+\n+\tif (chan->chan_programming)\n+\t\tqueue_work(chan->chan_programming,\n+\t\t\t   &chan->handle_chan_programming);\n+}\n+\n+static int xlnx_ps_pcie_dma_terminate_all(struct dma_chan *channel)\n+{\n+\tstruct ps_pcie_dma_chan *chan;\n+\n+\tif (!channel)\n+\t\treturn PTR_ERR(channel);\n+\n+\tchan = to_xilinx_chan(channel);\n+\n+\tif (chan->state != CHANNEL_AVAILABLE)\n+\t\treturn 1;\n+\n+\tif (chan->maintenance_workq) {\n+\t\tif (completion_done(&chan->chan_terminate_complete))\n+\t\t\treinit_completion(&chan->chan_terminate_complete);\n+\t\tqueue_work(chan->maintenance_workq,\n+\t\t\t   &chan->handle_chan_terminate);\n+\t\twait_for_completion_interruptible(\n+\t\t\t   &chan->chan_terminate_complete);\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static struct dma_async_tx_descriptor *xlnx_ps_pcie_dma_prep_interrupt(\n+\t\tstruct dma_chan *channel, unsigned long flags)\n+{\n+\tstruct ps_pcie_dma_chan *chan;\n+\tstruct ps_pcie_intr_segment *intr_segment = NULL;\n+\n+\tif (!channel)\n+\t\treturn NULL;\n+\n+\tchan = to_xilinx_chan(channel);\n+\n+\tif (chan->state != CHANNEL_AVAILABLE)\n+\t\treturn NULL;\n+\n+\tintr_segment = mempool_alloc(chan->intr_transactions_pool, GFP_ATOMIC);\n+\n+\tmemset(intr_segment, 0, sizeof(*intr_segment));\n+\n+\tdma_async_tx_descriptor_init(&intr_segment->async_intr_tx,\n+\t\t\t\t     &chan->common);\n+\tintr_segment->async_intr_tx.flags = flags;\n+\tasync_tx_ack(&intr_segment->async_intr_tx);\n+\tintr_segment->async_intr_tx.tx_submit = xilinx_intr_tx_submit;\n+\n+\treturn &intr_segment->async_intr_tx;\n+}\n+\n+static int xlnx_pcie_dma_driver_probe(struct platform_device *platform_dev)\n+{\n+\tint err, i;\n+\tstruct xlnx_pcie_dma_device *xdev;\n+\tstatic u16 board_number;\n+\n+\txdev = devm_kzalloc(&platform_dev->dev,\n+\t\t\t    sizeof(struct xlnx_pcie_dma_device), GFP_KERNEL);\n+\n+\tif (!xdev)\n+\t\treturn -ENOMEM;\n+\n+#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT\n+\txdev->dma_buf_ext_addr = true;\n+#else\n+\txdev->dma_buf_ext_addr = false;\n+#endif\n+\n+\txdev->is_rootdma = device_property_read_bool(&platform_dev->dev,\n+\t\t\t\t\t\t     \"rootdma\");\n+\n+\txdev->dev = &platform_dev->dev;\n+\txdev->board_number = board_number;\n+\n+\terr = device_property_read_u32(&platform_dev->dev, \"numchannels\",\n+\t\t\t\t       &xdev->num_channels);\n+\tif (err) {\n+\t\tdev_err(&platform_dev->dev,\n+\t\t\t\"Unable to find numchannels property\\n\");\n+\t\tgoto platform_driver_probe_return;\n+\t}\n+\n+\tif (xdev->num_channels == 0 || xdev->num_channels >\n+\t\tMAX_ALLOWED_CHANNELS_IN_HW) {\n+\t\tdev_warn(&platform_dev->dev,\n+\t\t\t \"Invalid xlnx-num_channels property value\\n\");\n+\t\txdev->num_channels = MAX_ALLOWED_CHANNELS_IN_HW;\n+\t}\n+\n+\txdev->channels =\n+\t(struct ps_pcie_dma_chan *)devm_kzalloc(&platform_dev->dev,\n+\t\t\t\t\t\tsizeof(struct ps_pcie_dma_chan)\n+\t\t\t\t\t\t\t* xdev->num_channels,\n+\t\t\t\t\t\tGFP_KERNEL);\n+\tif (!xdev->channels) {\n+\t\terr = -ENOMEM;\n+\t\tgoto platform_driver_probe_return;\n+\t}\n+\n+\tif (xdev->is_rootdma)\n+\t\terr = read_rootdma_config(platform_dev, xdev);\n+\telse\n+\t\terr = read_epdma_config(platform_dev, xdev);\n+\n+\tif (err) {\n+\t\tdev_err(&platform_dev->dev,\n+\t\t\t\"Unable to initialize dma configuration\\n\");\n+\t\tgoto platform_driver_probe_return;\n+\t}\n+\n+\t/* Initialize the DMA engine */\n+\tINIT_LIST_HEAD(&xdev->common.channels);\n+\n+\tdma_cap_set(DMA_SLAVE, 
xdev->common.cap_mask);\n+\tdma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);\n+\tdma_cap_set(DMA_SG, xdev->common.cap_mask);\n+\tdma_cap_set(DMA_INTERRUPT, xdev->common.cap_mask);\n+\n+\txdev->common.src_addr_widths = DMA_SLAVE_BUSWIDTH_UNDEFINED;\n+\txdev->common.dst_addr_widths = DMA_SLAVE_BUSWIDTH_UNDEFINED;\n+\txdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);\n+\txdev->common.device_alloc_chan_resources =\n+\t\txlnx_ps_pcie_dma_alloc_chan_resources;\n+\txdev->common.device_free_chan_resources =\n+\t\txlnx_ps_pcie_dma_free_chan_resources;\n+\txdev->common.device_terminate_all = xlnx_ps_pcie_dma_terminate_all;\n+\txdev->common.device_tx_status =  dma_cookie_status;\n+\txdev->common.device_issue_pending = xlnx_ps_pcie_dma_issue_pending;\n+\txdev->common.device_prep_dma_interrupt =\n+\t\txlnx_ps_pcie_dma_prep_interrupt;\n+\txdev->common.device_prep_dma_sg = xlnx_ps_pcie_dma_prep_dma_sg;\n+\txdev->common.device_prep_slave_sg = xlnx_ps_pcie_dma_prep_slave_sg;\n+\txdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;\n+\n+\tfor (i = 0; i < xdev->num_channels; i++) {\n+\t\terr = probe_channel_properties(platform_dev, xdev, i);\n+\n+\t\tif (err != 0) {\n+\t\t\tdev_err(xdev->dev,\n+\t\t\t\t\"Unable to read channel properties\\n\");\n+\t\t\tgoto platform_driver_probe_return;\n+\t\t}\n+\t}\n+\n+\tif (xdev->is_rootdma)\n+\t\terr = platform_irq_setup(xdev);\n+\telse\n+\t\terr = irq_setup(xdev);\n+\tif (err) {\n+\t\tdev_err(xdev->dev, \"Cannot request irq lines for device %d\\n\",\n+\t\t\txdev->board_number);\n+\t\tgoto platform_driver_probe_return;\n+\t}\n+\n+\terr = dma_async_device_register(&xdev->common);\n+\tif (err) {\n+\t\tdev_err(xdev->dev,\n+\t\t\t\"Unable to register board %d with dma framework\\n\",\n+\t\t\txdev->board_number);\n+\t\tgoto platform_driver_probe_return;\n+\t}\n+\n+\tplatform_set_drvdata(platform_dev, xdev);\n+\n+\tboard_number++;\n+\n+\tdev_info(&platform_dev->dev, \"PS PCIe Platform driver probed\\n\");\n+\treturn 0;\n+\n+platform_driver_probe_return:\n+\treturn err;\n+}\n+\n+static int xlnx_pcie_dma_driver_remove(struct platform_device *platform_dev)\n+{\n+\tstruct xlnx_pcie_dma_device *xdev =\n+\t\tplatform_get_drvdata(platform_dev);\n+\tint i;\n+\n+\tfor (i = 0; i < xdev->num_channels; i++)\n+\t\txlnx_ps_pcie_dma_free_chan_resources(&xdev->channels[i].common);\n+\n+\tdma_async_device_unregister(&xdev->common);\n+\n+\treturn 0;\n+}\n+\n+#ifdef CONFIG_OF\n+static const struct of_device_id xlnx_pcie_root_dma_of_ids[] = {\n+\t{ .compatible = \"xlnx,ps_pcie_dma-1.00.a\", },\n+\t{}\n+};\n+MODULE_DEVICE_TABLE(of, xlnx_pcie_root_dma_of_ids);\n+#endif\n+\n+static struct platform_driver xlnx_pcie_dma_driver = {\n+\t.driver = {\n+\t\t.name = XLNX_PLATFORM_DRIVER_NAME,\n+\t\t.of_match_table = of_match_ptr(xlnx_pcie_root_dma_of_ids),\n+\t\t.owner = THIS_MODULE,\n+\t},\n+\t.probe =  xlnx_pcie_dma_driver_probe,\n+\t.remove = xlnx_pcie_dma_driver_remove,\n+};\n+\n+int dma_platform_driver_register(void)\n+{\n+\treturn platform_driver_register(&xlnx_pcie_dma_driver);\n+}\n+\n+void dma_platform_driver_unregister(void)\n+{\n+\tplatform_driver_unregister(&xlnx_pcie_dma_driver);\n+}\n",
    "prefixes": [
        "v2",
        "4/5"
    ]
}
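
For context on how a consumer would drive the channels this patch registers: the driver advertises DMA_SLAVE, DMA_PRIVATE, DMA_SG and DMA_INTERRUPT capabilities, stores a struct ps_pcie_dma_channel_match pointer in chan->private (board number, channel number, direction, and, for endpoint DMA, BAR parameters), and expects prep_slave_sg on TWO_DMA_QUEUES channels and prep_dma_sg on DEFAULT_DMA_QUEUES channels. The sketch below is illustrative only and is not part of the patch: it assumes the series' header declaring struct ps_pcie_dma_channel_match is visible to the client, and the chosen board/channel, DMA_TO_DEVICE direction, 4 KiB buffer, and error codes are all assumptions made for the example.

/*
 * Minimal consumer sketch (not part of the patch): request one PS PCIe
 * DMA channel exported by this driver and run a single mem-to-device
 * slave scatter-gather transfer through the standard dmaengine API.
 */
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/sizes.h>
#include <linux/slab.h>

/*
 * Filter on the per-channel match data the driver places in
 * chan->private; the match structure itself comes from the series'
 * header (assumed available here).
 */
static bool ps_pcie_chan_filter(struct dma_chan *chan, void *param)
{
	struct ps_pcie_dma_channel_match *want = param;
	struct ps_pcie_dma_channel_match *have = chan->private;

	if (!have)
		return false;

	return have->board_number == want->board_number &&
	       have->channel_number == want->channel_number &&
	       have->direction == want->direction;
}

static int ps_pcie_dma_example(void)
{
	struct ps_pcie_dma_channel_match want = {
		.board_number = 0,		/* assumed: first probed board */
		.channel_number = 0,		/* assumed: channel 0 */
		.direction = DMA_TO_DEVICE,	/* AXI memory -> PCIe, per the patch */
	};
	struct dma_async_tx_descriptor *txd;
	struct dma_chan *chan;
	struct scatterlist sg;
	dma_cap_mask_t mask;
	dma_cookie_t cookie;
	void *buf;
	int ret = 0;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	dma_cap_set(DMA_PRIVATE, mask);

	/* Let the filter pick the exact PS PCIe channel we want. */
	chan = dma_request_channel(mask, ps_pcie_chan_filter, &want);
	if (!chan)
		return -ENODEV;

	buf = kzalloc(SZ_4K, GFP_KERNEL);
	if (!buf) {
		ret = -ENOMEM;
		goto out_release;
	}

	sg_init_one(&sg, buf, SZ_4K);
	if (!dma_map_sg(chan->device->dev, &sg, 1, DMA_TO_DEVICE)) {
		ret = -EIO;
		goto out_free;
	}

	/* TWO_DMA_QUEUES channels accept prep_slave_sg in this driver. */
	txd = dmaengine_prep_slave_sg(chan, &sg, 1, DMA_MEM_TO_DEV,
				      DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!txd) {
		ret = -EIO;
		goto out_unmap;
	}

	cookie = dmaengine_submit(txd);
	if (dma_submit_error(cookie)) {
		ret = -EIO;
		goto out_unmap;
	}

	/* Splices pending work to the active list and queues channel programming. */
	dma_async_issue_pending(chan);

	/* A real client would wait for the transaction callback before teardown. */

out_unmap:
	dma_unmap_sg(chan->device->dev, &sg, 1, DMA_TO_DEVICE);
out_free:
	kfree(buf);
out_release:
	dma_release_channel(chan);
	return ret;
}

A DEFAULT_DMA_QUEUES channel would instead go through device_prep_dma_sg with both source and destination scatterlists, since the patch rejects prep_slave_sg on those channels.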