Show a cover letter.

GET /api/covers/964077/?format=api
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 964077,
    "url": "http://patchwork.ozlabs.org/api/covers/964077/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/linux-pci/cover/20180830185352.3369-1-logang@deltatee.com/",
    "project": {
        "id": 28,
        "url": "http://patchwork.ozlabs.org/api/projects/28/?format=api",
        "name": "Linux PCI development",
        "link_name": "linux-pci",
        "list_id": "linux-pci.vger.kernel.org",
        "list_email": "linux-pci@vger.kernel.org",
        "web_url": null,
        "scm_url": null,
        "webscm_url": null,
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20180830185352.3369-1-logang@deltatee.com>",
    "list_archive_url": null,
    "date": "2018-08-30T18:53:39",
    "name": "[v5,00/13] Copy Offload in NVMe Fabrics with P2P PCI Memory",
    "submitter": {
        "id": 70191,
        "url": "http://patchwork.ozlabs.org/api/people/70191/?format=api",
        "name": "Logan Gunthorpe",
        "email": "logang@deltatee.com"
    },
    "mbox": "http://patchwork.ozlabs.org/project/linux-pci/cover/20180830185352.3369-1-logang@deltatee.com/mbox/",
    "series": [
        {
            "id": 63352,
            "url": "http://patchwork.ozlabs.org/api/series/63352/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/linux-pci/list/?series=63352",
            "date": "2018-08-30T18:53:41",
            "name": "Copy Offload in NVMe Fabrics with P2P PCI Memory",
            "version": 5,
            "mbox": "http://patchwork.ozlabs.org/series/63352/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/covers/964077/comments/",
    "headers": {
        "Return-Path": "<linux-pci-owner@vger.kernel.org>",
        "X-Original-To": "incoming@patchwork.ozlabs.org",
        "Delivered-To": "patchwork-incoming@bilbo.ozlabs.org",
        "Authentication-Results": [
            "ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-pci-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)",
            "ozlabs.org; dmarc=none (p=none dis=none)\n\theader.from=deltatee.com"
        ],
        "Received": [
            "from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 421Wrw2RLxz9s1x\n\tfor <incoming@patchwork.ozlabs.org>;\n\tFri, 31 Aug 2018 04:55:12 +1000 (AEST)",
            "(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1728067AbeH3W6e (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tThu, 30 Aug 2018 18:58:34 -0400",
            "from ale.deltatee.com ([207.54.116.67]:40058 \"EHLO\n\tale.deltatee.com\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1727340AbeH3W5l (ORCPT <rfc822;linux-pci@vger.kernel.org>);\n\tThu, 30 Aug 2018 18:57:41 -0400",
            "from cgy1-donard.priv.deltatee.com ([172.16.1.31])\n\tby ale.deltatee.com with esmtps\n\t(TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)\n\t(envelope-from <gunthorp@deltatee.com>)\n\tid 1fvS4t-0006Of-Nc; Thu, 30 Aug 2018 12:54:02 -0600",
            "from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim\n\t4.89) (envelope-from <gunthorp@deltatee.com>)\n\tid 1fvS4n-0000tF-W1; Thu, 30 Aug 2018 12:53:54 -0600"
        ],
        "From": "Logan Gunthorpe <logang@deltatee.com>",
        "To": "linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org,\n\tlinux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,\n\tlinux-nvdimm@lists.01.org, linux-block@vger.kernel.org",
        "Cc": "Stephen Bates <sbates@raithlin.com>, Christoph Hellwig <hch@lst.de>,\n\tKeith Busch <keith.busch@intel.com>, Sagi Grimberg <sagi@grimberg.me>,\n\tBjorn Helgaas <bhelgaas@google.com>, Jason Gunthorpe <jgg@mellanox.com>, \n\tMax Gurtovoy <maxg@mellanox.com>,\n\tDan Williams <dan.j.williams@intel.com>, =?utf-8?b?SsOpcsO0bWUgR2xp?=\n\t=?utf-8?q?sse?= <jglisse@redhat.com>,\n\tBenjamin Herrenschmidt <benh@kernel.crashing.org>, Alex Williamson\n\t<alex.williamson@redhat.com>, =?utf-8?q?Christian_K=C3=B6nig?=\n\t<christian.koenig@amd.com>, Logan Gunthorpe <logang@deltatee.com>",
        "Date": "Thu, 30 Aug 2018 12:53:39 -0600",
        "Message-Id": "<20180830185352.3369-1-logang@deltatee.com>",
        "X-Mailer": "git-send-email 2.11.0",
        "X-SA-Exim-Connect-IP": "172.16.1.31",
        "X-SA-Exim-Rcpt-To": "linux-nvme@lists.infradead.org, linux-nvdimm@lists.01.org,\n\tlinux-kernel@vger.kernel.org, linux-pci@vger.kernel.org,\n\tlinux-rdma@vger.kernel.org, linux-block@vger.kernel.org,\n\tsbates@raithlin.com, hch@lst.de, sagi@grimberg.me,\n\tbhelgaas@google.com, jgg@mellanox.com, maxg@mellanox.com,\n\tkeith.busch@intel.com, dan.j.williams@intel.com,\n\tbenh@kernel.crashing.org, jglisse@redhat.com,\n\talex.williamson@redhat.com, christian.koenig@amd.com,\n\tlogang@deltatee.com",
        "X-SA-Exim-Mail-From": "gunthorp@deltatee.com",
        "X-Spam-Checker-Version": "SpamAssassin 3.4.1 (2015-04-28) on ale.deltatee.com",
        "X-Spam-Level": "",
        "X-Spam-Status": "No, score=-8.5 required=5.0 tests=ALL_TRUSTED,BAYES_00,\n\tGREYLIST_ISWHITE,MYRULES_FREE,MYRULES_NO_TEXT autolearn=ham\n\tautolearn_force=no version=3.4.1",
        "Subject": "[PATCH v5 00/13] Copy Offload in NVMe Fabrics with P2P PCI Memory",
        "X-SA-Exim-Version": "4.2.1 (built Tue, 02 Aug 2016 21:08:31 +0000)",
        "X-SA-Exim-Scanned": "Yes (on ale.deltatee.com)",
        "Sender": "linux-pci-owner@vger.kernel.org",
        "Precedence": "bulk",
        "List-ID": "<linux-pci.vger.kernel.org>",
        "X-Mailing-List": "linux-pci@vger.kernel.org"
    },
    "content": "Hi Everyone,\n\nNow that the patchset which creates a command line option to disable\nACS redirection has landed it's time to revisit the P2P patchset for\ncopy offload in NVMe fabrics.\n\nI present version 5 which no longer does any magic with the ACS bits and\ninstead will reject P2P transactions between devices that would be affected\nby them. A few other cleanups were done which are described in the\nchangelog below.\n\nThis version is based on v4.19-rc1 and a git repo is here:\n\nhttps://github.com/sbates130272/linux-p2pmem pci-p2p-v5\n\nThanks,\n\nLogan\n\n--\n\nChanges in v5:\n\n* Rebased on v4.19-rc1\n\n* Drop changing ACS settings in this patchset. Now, the code\n  will only allow P2P transactions between devices whose\n  downstream ports do not restrict P2P TLPs.\n\n* Drop the REQ_PCI_P2PDMA block flag and instead use\n  is_pci_p2pdma_page() to tell if a request is P2P or not. In that\n  case we check for queue support and enforce using REQ_NOMERGE.\n  Per feedback from Christoph.\n\n* Drop the pci_p2pdma_unmap_sg() function as it was empty and only\n  there for symmetry and compatibility with dma_unmap_sg. Per feedback\n  from Christoph.\n\n* Split off the logic to handle enabling P2P in NVMe fabrics' configfs\n  into specific helpers in the p2pdma code. Per feedback from Christoph.\n\n* A number of other minor cleanups and fixes as pointed out by\n  Christoph and others.\n\nChanges in v4:\n\n* Change the original upstream_bridges_match() function to\n  upstream_bridge_distance() which calculates the distance between two\n  devices as long as they are behind the same root port. This should\n  address Bjorn's concerns that the code was too focused on\n  being behind a single switch.\n\n* The disable ACS function now disables ACS for all bridge ports instead\n  of switch ports (ie. 
those that had two upstream_bridge ports).\n\n* Change the pci_p2pmem_alloc_sgl() and pci_p2pmem_free_sgl()\n  API to be more like sgl_alloc() in that the alloc function returns\n  the allocated scatterlist and nents is not required by the free\n  function.\n\n* Moved the new documentation into the driver-api tree as requested\n  by Jonathan\n\n* Add SGL alloc and free helpers in the nvmet code so that the\n  individual drivers can share the code that allocates P2P memory.\n  As requested by Christoph.\n\n* Cleanup the nvmet_p2pmem_store() function as Christoph\n  thought my first attempt was ugly.\n\n* Numerous commit message and comment fix-ups\n\nChanges in v3:\n\n* Many more fixes and minor cleanups that were spotted by Bjorn\n\n* Additional explanation of the ACS change in both the commit message\n  and Kconfig doc. Also, the code that disables the ACS bits is surrounded\n  explicitly by an #ifdef\n\n* Removed the flag we added to rdma_rw_ctx() in favour of using\n  is_pci_p2pdma_page(), as suggested by Sagi.\n\n* Adjust pci_p2pmem_find() so that it prefers P2P providers that\n  are closest to (or the same as) the clients using them. 
In cases\n  of ties, the provider is randomly chosen.\n\n* Modify the NVMe Target code so that the PCI device name of the provider\n  may be explicitly specified, bypassing the logic in pci_p2pmem_find().\n  (Note: it's still enforced that the provider must be behind the\n   same switch as the clients).\n\n* As requested by Bjorn, added documentation for driver writers.\n\n\nChanges in v2:\n\n* Renamed everything to 'p2pdma' per the suggestion from Bjorn as well\n  as a bunch of cleanup and spelling fixes he pointed out in the last\n  series.\n\n* To address Alex's ACS concerns, we change to a simpler method of\n  just disabling ACS behind switches for any kernel that has\n  CONFIG_PCI_P2PDMA.\n\n* We also reject using devices that employ 'dma_virt_ops' which should\n  fairly simply handle Jason's concerns that this work might break with\n  the HFI, QIB and rxe drivers that use the virtual ops to implement\n  their own special DMA operations.\n\n--\n\nThis is a continuation of our work to enable using Peer-to-Peer PCI\nmemory in the kernel with initial support for the NVMe fabrics target\nsubsystem. Many thanks go to Christoph Hellwig who provided valuable\nfeedback to get these patches to where they are today.\n\nThe concept here is to use memory that's exposed on a PCI BAR as\ndata buffers in the NVMe target code such that data can be transferred\nfrom an RDMA NIC to the special memory and then directly to an NVMe\ndevice avoiding system memory entirely. The upside of this is better\nQoS for applications running on the CPU utilizing memory and lower\nPCI bandwidth required to the CPU (such that systems could be designed\nwith fewer lanes connected to the CPU).\n\nDue to these trade-offs we've designed the system to only enable using\nthe PCI memory in cases where the NIC, NVMe devices and memory are all\nbehind the same PCI switch hierarchy. 
This will mean many setups that\ncould likely work well will not be supported so that we can be more\nconfident it will work and not place any responsibility on the user to\nunderstand their topology. (We chose to go this route based on feedback\nwe received at the last LSF). Future work may enable these transfers\nusing a white list of known good root complexes. However, at this time,\nthere is no reliable way to ensure that Peer-to-Peer transactions are\npermitted between PCI Root Ports.\n\nIn order to enable this functionality, we introduce a few new PCI\nfunctions such that a driver can register P2P memory with the system.\nStruct pages are created for this memory using devm_memremap_pages()\nand the PCI bus offset is stored in the corresponding pagemap structure.\n\nWhen the PCI P2PDMA config option is selected the ACS bits in every\nbridge port in the system are turned off to allow traffic to\npass freely behind the root port. At this time, the bit must be disabled\nat boot so the IOMMU subsystem can correctly create the groups, though\nthis could be addressed in the future. There is no way to dynamically\ndisable the bit and alter the groups.\n\nAnother set of functions allows a client driver to create a list of\nclient devices that will be used in a given P2P transaction and then\nuse that list to find any P2P memory that is supported by all the\nclient devices.\n\nIn the block layer, we also introduce a P2P request flag to indicate a\ngiven request targets P2P memory as well as a flag for a request queue\nto indicate a given queue supports targeting P2P memory. P2P requests\nwill only be accepted by queues that support it. Also, P2P requests\nare marked to not be merged since a non-homogeneous request would\ncomplicate the DMA mapping requirements.\n\nIn the PCI NVMe driver, we modify the existing CMB support to utilize\nthe new PCI P2P memory infrastructure and also add support for P2P\nmemory in its request queue. 
When a P2P request is received it uses the\npci_p2pmem_map_sg() function which applies the necessary transformation\nto get the correct pci_bus_addr_t for the DMA transactions.\n\nIn the RDMA core, we also adjust rdma_rw_ctx_init() and\nrdma_rw_ctx_destroy() to take a flags argument which indicates whether\nto use the PCI P2P mapping functions or not. To avoid odd RDMA devices\nthat don't use the proper DMA infrastructure this code rejects using\nany device that employs the dma_virt_ops implementation.\n\nFinally, in the NVMe fabrics target port we introduce a new\nconfiguration boolean: 'allow_p2pmem'. When set, the port will attempt\nto find P2P memory supported by the RDMA NIC and all namespaces. If\nsupported memory is found, it will be used in all IO transfers. And if\na port is using P2P memory, adding new namespaces that are not supported\nby that memory will fail.\n\nThese patches have been tested on a number of Intel based systems and\nfor a variety of RDMA NICs (Mellanox, Broadcom, Chelsio) and NVMe\nSSDs (Intel, Seagate, Samsung) and p2pdma devices (Eideticom,\nMicrosemi, Chelsio and Everspin) using switches from both Microsemi\nand Broadcom.\n\nLogan Gunthorpe (13):\n  PCI/P2PDMA: Support peer-to-peer memory\n  PCI/P2PDMA: Add sysfs group to display p2pmem stats\n  PCI/P2PDMA: Add PCI p2pmem DMA mappings to adjust the bus offset\n  PCI/P2PDMA: Introduce configfs/sysfs enable attribute helpers\n  docs-rst: Add a new directory for PCI documentation\n  PCI/P2PDMA: Add P2P DMA driver writer's documentation\n  block: Add PCI P2P flag for request queue and check support for\n    requests\n  IB/core: Ensure we map P2P memory correctly in\n    rdma_rw_ctx_[init|destroy]()\n  nvme-pci: Use PCI p2pmem subsystem to manage the CMB\n  nvme-pci: Add support for P2P memory in requests\n  nvme-pci: Add a quirk for a pseudo CMB\n  nvmet: Introduce helper functions to allocate and free request SGLs\n  nvmet: Optionally use PCI P2P memory\n\n 
Documentation/ABI/testing/sysfs-bus-pci    |  25 +\n Documentation/driver-api/index.rst         |   2 +-\n Documentation/driver-api/pci/index.rst     |  21 +\n Documentation/driver-api/pci/p2pdma.rst    | 170 ++++++\n Documentation/driver-api/{ => pci}/pci.rst |   0\n block/blk-core.c                           |  14 +\n drivers/infiniband/core/rw.c               |  11 +-\n drivers/nvme/host/core.c                   |   4 +\n drivers/nvme/host/nvme.h                   |   8 +\n drivers/nvme/host/pci.c                    | 121 ++--\n drivers/nvme/target/configfs.c             |  36 ++\n drivers/nvme/target/core.c                 | 149 +++++\n drivers/nvme/target/nvmet.h                |  15 +\n drivers/nvme/target/rdma.c                 |  22 +-\n drivers/pci/Kconfig                        |  17 +\n drivers/pci/Makefile                       |   1 +\n drivers/pci/p2pdma.c                       | 941 +++++++++++++++++++++++++++++\n include/linux/blkdev.h                     |   3 +\n include/linux/memremap.h                   |   6 +\n include/linux/mm.h                         |  18 +\n include/linux/pci-p2pdma.h                 | 124 ++++\n include/linux/pci.h                        |   4 +\n 22 files changed, 1658 insertions(+), 54 deletions(-)\n create mode 100644 Documentation/driver-api/pci/index.rst\n create mode 100644 Documentation/driver-api/pci/p2pdma.rst\n rename Documentation/driver-api/{ => pci}/pci.rst (100%)\n create mode 100644 drivers/pci/p2pdma.c\n create mode 100644 include/linux/pci-p2pdma.h\n\n--\n2.11.0"
}
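
The response above can be consumed with any JSON-aware client. Below is a minimal sketch in Python, using only the standard library and a trimmed inline copy of the fields shown above (rather than a live request, so it assumes only the field names documented in this response; the `series_mbox_urls` helper is illustrative, not part of the Patchwork API):

```python
import json

# A trimmed sample of the cover-letter response shown above.
sample = """{
    "id": 964077,
    "msgid": "<20180830185352.3369-1-logang@deltatee.com>",
    "name": "[v5,00/13] Copy Offload in NVMe Fabrics with P2P PCI Memory",
    "submitter": {"name": "Logan Gunthorpe", "email": "logang@deltatee.com"},
    "series": [{"id": 63352, "version": 5,
                "mbox": "http://patchwork.ozlabs.org/series/63352/mbox/"}]
}"""

cover = json.loads(sample)

def series_mbox_urls(cover):
    """Return the mbox URL for each series this cover letter belongs to.

    A cover letter's "series" field is a list, so a cover can in
    principle belong to more than one series.
    """
    return [s["mbox"] for s in cover.get("series", [])]

print(cover["submitter"]["name"])   # Logan Gunthorpe
print(series_mbox_urls(cover))      # the series mbox URL(s)
```

Against the live server, the same structure comes back from `GET http://patchwork.ozlabs.org/api/covers/964077/` with `Content-Type: application/json`, and the series `mbox` URL can then be fetched to download the whole patch series.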