[{"id":1760325,"web_url":"http://patchwork.ozlabs.org/comment/1760325/","msgid":"<20170830164020.GC18250@bhelgaas-glaptop.roam.corp.google.com>","list_archive_url":null,"date":"2017-08-30T16:40:20","subject":"Re: [PATCH] vmd: Remove IRQ affinity","submitter":{"id":67298,"url":"http://patchwork.ozlabs.org/api/people/67298/","name":"Bjorn Helgaas","email":"helgaas@kernel.org"},"content":"[+cc Christoph]\n\nOn Wed, Aug 30, 2017 at 12:15:04PM -0400, Keith Busch wrote:\n> VMD hardware has to share its vectors among child devices in its PCI\n> domain so we should allocate as many as possible rather than just ones\n> that can be affinitized.\n\nI don't understand this changelog.  It suggests that\npci_alloc_irq_vectors() will allocate more vectors than\npci_alloc_irq_vectors_affinity() would.\n\nBut my understanding was that pci_alloc_irq_vectors_affinity() doesn't have\nanything to do with the number of vectors allocated, but that it only\nprovided more fine-grained control of affinity.\n\n  commit 402723ad5c62\n  Author: Christoph Hellwig <hch@lst.de>\n  Date:   Tue Nov 8 17:15:05 2016 -0800\n\n    PCI/MSI: Provide pci_alloc_irq_vectors_affinity()\n    \n    This is a variant of pci_alloc_irq_vectors() that allows passing a struct\n    irq_affinity to provide fine-grained IRQ affinity control.\n    \n    For now this means being able to exclude vectors at the beginning or end of\n    the MSI vector space, but it could also be used for any other quirks needed\n    in the future (e.g. more vectors than CPUs, or excluding CPUs from the\n    spreading).\n\nSo IIUC, this patch does not change the number of vectors allocated.  
It\ndoes remove PCI_IRQ_AFFINITY, which I suppose means all the vectors target\nthe same CPU instead of being spread across CPUs.\n\n> Reported-by: Brad Goodman <Bradley.Goodman@dell.com>\n> Signed-off-by: Keith Busch <keith.busch@intel.com>\n> ---\n>  drivers/pci/host/vmd.c | 12 ++----------\n>  1 file changed, 2 insertions(+), 10 deletions(-)\n> \n> diff --git a/drivers/pci/host/vmd.c b/drivers/pci/host/vmd.c\n> index 4fe1756..509893b 100644\n> --- a/drivers/pci/host/vmd.c\n> +++ b/drivers/pci/host/vmd.c\n> @@ -671,14 +671,6 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)\n>  \tstruct vmd_dev *vmd;\n>  \tint i, err;\n>  \n> -\t/*\n> -\t * The first vector is reserved for special use, so start affinity at\n> -\t * the second vector\n> -\t */\n> -\tstruct irq_affinity affd = {\n> -\t\t.pre_vectors = 1,\n> -\t};\n> -\n>  \tif (resource_size(&dev->resource[VMD_CFGBAR]) < (1 << 20))\n>  \t\treturn -ENOMEM;\n>  \n> @@ -704,8 +696,8 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)\n>  \tif (vmd->msix_count < 0)\n>  \t\treturn -ENODEV;\n>  \n> -\tvmd->msix_count = pci_alloc_irq_vectors_affinity(dev, 1, vmd->msix_count,\n> -\t\t\t\t\tPCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);\n> +\tvmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count,\n> +\t\t\t\t\tPCI_IRQ_MSIX);\n>  \tif (vmd->msix_count < 0)\n>  \t\treturn vmd->msix_count;\n>  \n> -- \n> 2.5.5\n>","headers":{"Return-Path":"<linux-pci-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-pci-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","mail.kernel.org;\n\tdmarc=none (p=none dis=none) header.from=kernel.org","mail.kernel.org;\n\tspf=none smtp.mailfrom=helgaas@kernel.org"],"Received":["from vger.kernel.org 
(vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xjB7r6mxcz9sPt\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 31 Aug 2017 02:40:24 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751329AbdH3QkX (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tWed, 30 Aug 2017 12:40:23 -0400","from mail.kernel.org ([198.145.29.99]:45664 \"EHLO mail.kernel.org\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1750972AbdH3QkX (ORCPT <rfc822;linux-pci@vger.kernel.org>);\n\tWed, 30 Aug 2017 12:40:23 -0400","from localhost (unknown [69.55.156.165])\n\t(using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits))\n\t(No client certificate requested)\n\tby mail.kernel.org (Postfix) with ESMTPSA id A16962133E;\n\tWed, 30 Aug 2017 16:40:22 +0000 (UTC)"],"DMARC-Filter":"OpenDMARC Filter v1.3.2 mail.kernel.org A16962133E","Date":"Wed, 30 Aug 2017 11:40:20 -0500","From":"Bjorn Helgaas <helgaas@kernel.org>","To":"Keith Busch <keith.busch@intel.com>","Cc":"linux-pci@vger.kernel.org, Bjorn Helgaas <bhelgaas@google.com>,\n\tJon Derrick <jonathan.derrick@intel.com>, Christoph Hellwig <hch@lst.de>","Subject":"Re: [PATCH] vmd: Remove IRQ affinity","Message-ID":"<20170830164020.GC18250@bhelgaas-glaptop.roam.corp.google.com>","References":"<1504109704-17033-1-git-send-email-keith.busch@intel.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=us-ascii","Content-Disposition":"inline","In-Reply-To":"<1504109704-17033-1-git-send-email-keith.busch@intel.com>","User-Agent":"Mutt/1.5.21 (2010-09-15)","Sender":"linux-pci-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-pci.vger.kernel.org>","X-Mailing-List":"linux-pci@vger.kernel.org"}},{"id":1760462,"web_url":"http://patchwork.ozlabs.org/comment/1760462/","msgid":"<20170830202340.GA17331@localhost.localdomain>","list_archive_url":null,"date":"2017-08-30T20:23:40","subject":"Re: [PATCH] vmd: Remove IRQ 
affinity","submitter":{"id":19950,"url":"http://patchwork.ozlabs.org/api/people/19950/","name":"Keith Busch","email":"keith.busch@intel.com"},"content":"On Wed, Aug 30, 2017 at 09:40:20AM -0700, Bjorn Helgaas wrote:\n> [+cc Christoph]\n> \n> On Wed, Aug 30, 2017 at 12:15:04PM -0400, Keith Busch wrote:\n> > VMD hardware has to share its vectors among child devices in its PCI\n> > domain so we should allocate as many as possible rather than just ones\n> > that can be affinitized.\n> \n> I don't understand this changelog.  It suggests that\n> pci_alloc_irq_vectors() will allocate more vectors than\n> pci_alloc_irq_vectors_affinity() would.\n> \n> But my understanding was that pci_alloc_irq_vectors_affinity() does have\n> anything to do with the number of vectors allocated, but that it only\n> provided more fine-grained control of affinity.\n> \n>   commit 402723ad5c62\n>   Author: Christoph Hellwig <hch@lst.de>\n>   Date:   Tue Nov 8 17:15:05 2016 -0800\n> \n>     PCI/MSI: Provide pci_alloc_irq_vectors_affinity()\n>     \n>     This is a variant of pci_alloc_irq_vectors() that allows passing a struct\n>     irq_affinity to provide fine-grained IRQ affinity control.\n>     \n>     For now this means being able to exclude vectors at the beginning or end of\n>     the MSI vector space, but it could also be used for any other quirks needed\n>     in the future (e.g. more vectors than CPUs, or excluding CPUs from the\n>     spreading).\n> \n> So IIUC, this patch does not change the number of vectors allocated.  
It\n> does remove PCI_IRQ_AFFINITY, which I suppose means all the vectors target\n> the same CPU instead of being spread across CPUs.\n\nVMD has to divvy interrupt vectors up among potentially many devices,\nso we want to always get the maximum vectors possible.\n\nBy default, PCI_IRQ_AFFINITY flag will have 'nvecs' capped by\nirq_calc_affinity_vectors, which is the number of present CPUs and\npotentially lower than the available vectors.\n\nWe could use the struct irq_affinity to define pre/post vectors to be\nexcluded from affinity consideration so that we can get more vectors\nthan CPUs, but it would be weird to have some of these general purpose\nvectors affinity set by the kernel and others set by the user.","headers":{"Return-Path":"<linux-pci-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-pci-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xjGz56JBWz9sN7\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 31 Aug 2017 06:18:09 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1750793AbdH3USI (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tWed, 30 Aug 2017 16:18:08 -0400","from mga02.intel.com ([134.134.136.20]:58568 \"EHLO mga02.intel.com\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1750757AbdH3USI (ORCPT <rfc822;linux-pci@vger.kernel.org>);\n\tWed, 30 Aug 2017 16:18:08 -0400","from orsmga005.jf.intel.com ([10.7.209.41])\n\tby orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t30 Aug 2017 13:18:07 -0700","from unknown (HELO localhost.localdomain) ([10.232.112.96])\n\tby orsmga005.jf.intel.com with ESMTP; 30 Aug 2017 13:18:02 
-0700"],"X-ExtLoop1":"1","X-IronPort-AV":"E=Sophos;i=\"5.41,449,1498546800\"; d=\"scan'208\";a=\"143829916\"","Date":"Wed, 30 Aug 2017 16:23:40 -0400","From":"Keith Busch <keith.busch@intel.com>","To":"Bjorn Helgaas <helgaas@kernel.org>","Cc":"\"linux-pci@vger.kernel.org\" <linux-pci@vger.kernel.org>,\n\tBjorn Helgaas <bhelgaas@google.com>,\n\t\"Derrick, Jonathan\" <jonathan.derrick@intel.com>,\n\tChristoph Hellwig <hch@lst.de>","Subject":"Re: [PATCH] vmd: Remove IRQ affinity","Message-ID":"<20170830202340.GA17331@localhost.localdomain>","References":"<1504109704-17033-1-git-send-email-keith.busch@intel.com>\n\t<20170830164020.GC18250@bhelgaas-glaptop.roam.corp.google.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=us-ascii","Content-Disposition":"inline","In-Reply-To":"<20170830164020.GC18250@bhelgaas-glaptop.roam.corp.google.com>","User-Agent":"Mutt/1.7.1 (2016-10-04)","Sender":"linux-pci-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-pci.vger.kernel.org>","X-Mailing-List":"linux-pci@vger.kernel.org"}},{"id":1760517,"web_url":"http://patchwork.ozlabs.org/comment/1760517/","msgid":"<20170830214139.GY8154@bhelgaas-glaptop.roam.corp.google.com>","list_archive_url":null,"date":"2017-08-30T21:41:39","subject":"Re: [PATCH] vmd: Remove IRQ affinity","submitter":{"id":67298,"url":"http://patchwork.ozlabs.org/api/people/67298/","name":"Bjorn Helgaas","email":"helgaas@kernel.org"},"content":"On Wed, Aug 30, 2017 at 04:23:40PM -0400, Keith Busch wrote:\n> On Wed, Aug 30, 2017 at 09:40:20AM -0700, Bjorn Helgaas wrote:\n> > [+cc Christoph]\n> > \n> > On Wed, Aug 30, 2017 at 12:15:04PM -0400, Keith Busch wrote:\n> > > VMD hardware has to share its vectors among child devices in its PCI\n> > > domain so we should allocate as many as possible rather than just ones\n> > > that can be affinitized.\n> > \n> > I don't understand this changelog.  
It suggests that\n> > pci_alloc_irq_vectors() will allocate more vectors than\n> > pci_alloc_irq_vectors_affinity() would.\n> > \n> > But my understanding was that pci_alloc_irq_vectors_affinity() does have\n> > anything to do with the number of vectors allocated, but that it only\n> > provided more fine-grained control of affinity.\n> > \n> >   commit 402723ad5c62\n> >   Author: Christoph Hellwig <hch@lst.de>\n> >   Date:   Tue Nov 8 17:15:05 2016 -0800\n> > \n> >     PCI/MSI: Provide pci_alloc_irq_vectors_affinity()\n> >     \n> >     This is a variant of pci_alloc_irq_vectors() that allows passing a struct\n> >     irq_affinity to provide fine-grained IRQ affinity control.\n> >     \n> >     For now this means being able to exclude vectors at the beginning or end of\n> >     the MSI vector space, but it could also be used for any other quirks needed\n> >     in the future (e.g. more vectors than CPUs, or excluding CPUs from the\n> >     spreading).\n> > \n> > So IIUC, this patch does not change the number of vectors allocated.  It\n> > does remove PCI_IRQ_AFFINITY, which I suppose means all the vectors target\n> > the same CPU instead of being spread across CPUs.\n> \n> VMD has to divvy interrupt vectors up among potentially many devices,\n> so we want to always get the maximum vectors possible.\n> \n> By default, PCI_IRQ_AFFINITY flag will have 'nvecs' capped by\n> irq_calc_affinity_vectors, which is the number of present CPUs and\n> potentially lower than the available vectors.\n\nMmmm, OK.  I guess there's a hint in the changelog above, but it\nwasn't obvious from the pci_alloc_irq_vectors_affinity() comment that\nit caps to the number of CPUs.  
\n\n> We could use the struct irq_affinity to define pre/post vectors to be\n> excluded from affinity consideration so that we can get more vectors\n> than CPUs, but it would be weird to have some of these general purpose\n> vectors affinity set by the kernel and others set by the user.\n\nI added some breadcrumbs to the changelog about this connection\nbetween affinity and limiting the number of IRQs.  Did I get this\nright?\n\nThis is on pci/host-vmd for v4.14.\n\n\ncommit be85af02e1b00d49cd678d8f2ea6f391bdbaca19\nAuthor: Keith Busch <keith.busch@intel.com>\nDate:   Wed Aug 30 12:15:04 2017 -0400\n\n    PCI: vmd: Remove IRQ affinity so we can allocate more IRQs\n    \n    VMD hardware has to share its vectors among child devices in its PCI\n    domain so we should allocate as many as possible rather than just ones\n    that can be affinitized.\n    \n    pci_alloc_irq_vectors_affinity() limits the number of affinitized IRQs to\n    the number of present CPUs (see irq_calc_affinity_vectors()).  
But we'd\n    prefer to have more vectors, even if they aren't distributed across the\n    CPUs, so use pci_alloc_irq_vectors() instead.\n    \n    Reported-by: Brad Goodman <Bradley.Goodman@dell.com>\n    Signed-off-by: Keith Busch <keith.busch@intel.com>\n    [bhelgaas: add irq_calc_affinity_vectors() reference to changelog]\n    Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>\n\ndiff --git a/drivers/pci/host/vmd.c b/drivers/pci/host/vmd.c\nindex 4fe1756af010..509893bc3e63 100644\n--- a/drivers/pci/host/vmd.c\n+++ b/drivers/pci/host/vmd.c\n@@ -671,14 +671,6 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)\n \tstruct vmd_dev *vmd;\n \tint i, err;\n \n-\t/*\n-\t * The first vector is reserved for special use, so start affinity at\n-\t * the second vector\n-\t */\n-\tstruct irq_affinity affd = {\n-\t\t.pre_vectors = 1,\n-\t};\n-\n \tif (resource_size(&dev->resource[VMD_CFGBAR]) < (1 << 20))\n \t\treturn -ENOMEM;\n \n@@ -704,8 +696,8 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)\n \tif (vmd->msix_count < 0)\n \t\treturn -ENODEV;\n \n-\tvmd->msix_count = pci_alloc_irq_vectors_affinity(dev, 1, vmd->msix_count,\n-\t\t\t\t\tPCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);\n+\tvmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count,\n+\t\t\t\t\tPCI_IRQ_MSIX);\n \tif (vmd->msix_count < 0)\n \t\treturn vmd->msix_count;","headers":{"Return-Path":"<linux-pci-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-pci-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","mail.kernel.org;\n\tdmarc=none (p=none dis=none) header.from=kernel.org","mail.kernel.org;\n\tspf=none smtp.mailfrom=helgaas@kernel.org"],"Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org 
(Postfix) with ESMTP id 3xjJqZ3lXqz9s8P\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 31 Aug 2017 07:41:46 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751330AbdH3Vln (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tWed, 30 Aug 2017 17:41:43 -0400","from mail.kernel.org ([198.145.29.99]:41120 \"EHLO mail.kernel.org\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1750885AbdH3Vlm (ORCPT <rfc822;linux-pci@vger.kernel.org>);\n\tWed, 30 Aug 2017 17:41:42 -0400","from localhost (unknown [64.22.249.253])\n\t(using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits))\n\t(No client certificate requested)\n\tby mail.kernel.org (Postfix) with ESMTPSA id 5751F214AB;\n\tWed, 30 Aug 2017 21:41:41 +0000 (UTC)"],"DMARC-Filter":"OpenDMARC Filter v1.3.2 mail.kernel.org 5751F214AB","Date":"Wed, 30 Aug 2017 16:41:39 -0500","From":"Bjorn Helgaas <helgaas@kernel.org>","To":"Keith Busch <keith.busch@intel.com>","Cc":"\"linux-pci@vger.kernel.org\" <linux-pci@vger.kernel.org>,\n\tBjorn Helgaas <bhelgaas@google.com>,\n\t\"Derrick, Jonathan\" <jonathan.derrick@intel.com>,\n\tChristoph Hellwig <hch@lst.de>","Subject":"Re: [PATCH] vmd: Remove IRQ affinity","Message-ID":"<20170830214139.GY8154@bhelgaas-glaptop.roam.corp.google.com>","References":"<1504109704-17033-1-git-send-email-keith.busch@intel.com>\n\t<20170830164020.GC18250@bhelgaas-glaptop.roam.corp.google.com>\n\t<20170830202340.GA17331@localhost.localdomain>","MIME-Version":"1.0","Content-Type":"text/plain; charset=us-ascii","Content-Disposition":"inline","In-Reply-To":"<20170830202340.GA17331@localhost.localdomain>","User-Agent":"Mutt/1.5.21 
(2010-09-15)","Sender":"linux-pci-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-pci.vger.kernel.org>","X-Mailing-List":"linux-pci@vger.kernel.org"}},{"id":1760519,"web_url":"http://patchwork.ozlabs.org/comment/1760519/","msgid":"<20170830215002.GB17331@localhost.localdomain>","list_archive_url":null,"date":"2017-08-30T21:50:02","subject":"Re: [PATCH] vmd: Remove IRQ affinity","submitter":{"id":19950,"url":"http://patchwork.ozlabs.org/api/people/19950/","name":"Keith Busch","email":"keith.busch@intel.com"},"content":"On Wed, Aug 30, 2017 at 04:41:39PM -0500, Bjorn Helgaas wrote:\n> I added some breadcrumbs to the changelog about this connection\n> between affinity and limiting the number of IRQs.  Did I get this\n> right?\n> \n> This is on pci/host-vmd for v4.14.\n\nAwesome, sounds good to me! \n \n> commit be85af02e1b00d49cd678d8f2ea6f391bdbaca19\n> Author: Keith Busch <keith.busch@intel.com>\n> Date:   Wed Aug 30 12:15:04 2017 -0400\n> \n>     PCI: vmd: Remove IRQ affinity so we can allocate more IRQs\n>     \n>     VMD hardware has to share its vectors among child devices in its PCI\n>     domain so we should allocate as many as possible rather than just ones\n>     that can be affinitized.\n>     \n>     pci_alloc_irq_vectors_affinity() limits the number of affinitized IRQs to\n>     the number of present CPUs (see irq_calc_affinity_vectors()).  
But we'd\n>     prefer to have more vectors, even if they aren't distributed across the\n>     CPUs, so use pci_alloc_irq_vectors() instead.\n>     \n>     Reported-by: Brad Goodman <Bradley.Goodman@dell.com>\n>     Signed-off-by: Keith Busch <keith.busch@intel.com>\n>     [bhelgaas: add irq_calc_affinity_vectors() reference to changelog]\n>     Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>","headers":{"Return-Path":"<linux-pci-owner@vger.kernel.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=linux-pci-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)","Received":["from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xjJtf5Skkz9s8P\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 31 Aug 2017 07:44:26 +1000 (AEST)","(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1750828AbdH3VoZ (ORCPT <rfc822;incoming@patchwork.ozlabs.org>);\n\tWed, 30 Aug 2017 17:44:25 -0400","from mga05.intel.com ([192.55.52.43]:5170 \"EHLO mga05.intel.com\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1750814AbdH3VoY (ORCPT <rfc822;linux-pci@vger.kernel.org>);\n\tWed, 30 Aug 2017 17:44:24 -0400","from fmsmga002.fm.intel.com ([10.253.24.26])\n\tby fmsmga105.fm.intel.com with ESMTP; 30 Aug 2017 14:44:24 -0700","from unknown (HELO localhost.localdomain) ([10.232.112.96])\n\tby fmsmga002.fm.intel.com with ESMTP; 30 Aug 2017 14:44:24 -0700"],"X-ExtLoop1":"1","X-IronPort-AV":"E=Sophos;i=\"5.41,450,1498546800\"; d=\"scan'208\";a=\"1212699884\"","Date":"Wed, 30 Aug 2017 17:50:02 -0400","From":"Keith Busch <keith.busch@intel.com>","To":"Bjorn Helgaas <helgaas@kernel.org>","Cc":"\"linux-pci@vger.kernel.org\" <linux-pci@vger.kernel.org>,\n\tBjorn Helgaas <bhelgaas@google.com>,\n\t\"Derrick, Jonathan\" 
<jonathan.derrick@intel.com>,\n\tChristoph Hellwig <hch@lst.de>","Subject":"Re: [PATCH] vmd: Remove IRQ affinity","Message-ID":"<20170830215002.GB17331@localhost.localdomain>","References":"<1504109704-17033-1-git-send-email-keith.busch@intel.com>\n\t<20170830164020.GC18250@bhelgaas-glaptop.roam.corp.google.com>\n\t<20170830202340.GA17331@localhost.localdomain>\n\t<20170830214139.GY8154@bhelgaas-glaptop.roam.corp.google.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=us-ascii","Content-Disposition":"inline","In-Reply-To":"<20170830214139.GY8154@bhelgaas-glaptop.roam.corp.google.com>","User-Agent":"Mutt/1.7.1 (2016-10-04)","Sender":"linux-pci-owner@vger.kernel.org","Precedence":"bulk","List-ID":"<linux-pci.vger.kernel.org>","X-Mailing-List":"linux-pci@vger.kernel.org"}}]