From patchwork Fri Jan 13 20:07:51 2017
X-Patchwork-Submitter: Dan Streetman
X-Patchwork-Id: 715235
From: Dan Streetman
To: Stefano Stabellini, jgross@suse.com, Konrad Rzeszutek Wilk,
 Boris Ostrovsky
Cc: Dan Streetman, Bjorn Helgaas, xen-devel@lists.xenproject.org,
 linux-kernel, linux-pci@vger.kernel.org
Subject: [PATCH] xen: do not re-use pirq number cached in pci device msi msg data
Date: Fri, 13 Jan 2017 15:07:51 -0500
Message-Id: <20170113200751.20125-1-ddstreet@ieee.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <04f2a09f-59be-a720-bc98-4afb53171790@oracle.com>
X-Mailing-List: linux-pci@vger.kernel.org

Revert the main part of commit:
af42b8d12f8a ("xen: fix MSI setup and teardown for PV on HVM guests")

That commit introduced reading the pci device's msi message data to see
whether a pirq had previously been configured for the device's msi/msix
and, if so, re-using that pirq.
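Concretely, the logic being reverted looked roughly like the following
sketch; all identifiers are taken from the removed lines of the diff
below, and the surrounding declarations are omitted:

	/* sketch: recover a pirq previously stashed in the MSI message */
	__pci_read_msi_msg(msidesc, &msg);
	pirq = MSI_ADDR_EXT_DEST_ID(msg.address_hi) |
	       ((msg.address_lo >> MSI_ADDR_DEST_ID_SHIFT) & 0xff);
	if (msg.data == XEN_PIRQ_MSI_DATA && xen_irq_from_pirq(pirq) >= 0) {
		/* marker matched and pirq still mapped: re-use it */
		dev_dbg(&dev->dev, "xen: msi already bound to pirq=%d\n", pirq);
	} else {
		/* otherwise fall back to allocating a fresh pirq */
		pirq = xen_allocate_pirq_msi(dev, msidesc);
	}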
At the time, that was the correct behavior. However, a later change to
Qemu caused it to call into the Xen hypervisor to unmap all pirqs for a
pci device when the device disables its MSI/MSIX vectors; specifically
the Qemu commit:
c976437c7dba9c7444fb41df45468968aaa326ad
("qemu-xen: free all the pirqs for msi/msix when driver unload")

Once Qemu added this pirq unmapping, it was no longer correct for the
kernel to re-use the pirq number cached in the pci device's msi message
data. All Qemu releases since 2.1.0 contain the patch that unmaps the
pirqs when the pci device disables its MSI/MSIX vectors.

This bug causes failures to initialize multiple NVMe controllers under
Xen, because the NVMe driver sets up a single MSIX vector for each
controller (concurrently), and then, after using that vector to read
some configuration data from the controller, it disables the single
MSIX vector and re-configures all the MSIX vectors it needs. So the
MSIX setup code tries to re-use the cached pirq from the first vector
for each controller, but the hypervisor has already given that pirq
away to another controller, and initialization fails.

This is discussed in more detail at:
https://lists.xen.org/archives/html/xen-devel/2017-01/msg00447.html

Fixes: af42b8d12f8a ("xen: fix MSI setup and teardown for PV on HVM guests")
Signed-off-by: Dan Streetman
Reviewed-by: Stefano Stabellini
Reviewed-by: Boris Ostrovsky
Acked-by: Konrad Rzeszutek Wilk
---
 arch/x86/pci/xen.c | 23 +++++++----------------
 1 file changed, 7 insertions(+), 16 deletions(-)

diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index bedfab9..a00a6c0 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -234,23 +234,14 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 		return 1;
 
 	for_each_pci_msi_entry(msidesc, dev) {
-		__pci_read_msi_msg(msidesc, &msg);
-		pirq = MSI_ADDR_EXT_DEST_ID(msg.address_hi) |
-			((msg.address_lo >> MSI_ADDR_DEST_ID_SHIFT) & 0xff);
-		if (msg.data != XEN_PIRQ_MSI_DATA ||
-		    xen_irq_from_pirq(pirq) < 0) {
-			pirq = xen_allocate_pirq_msi(dev, msidesc);
-			if (pirq < 0) {
-				irq = -ENODEV;
-				goto error;
-			}
-			xen_msi_compose_msg(dev, pirq, &msg);
-			__pci_write_msi_msg(msidesc, &msg);
-			dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
-		} else {
-			dev_dbg(&dev->dev,
-				"xen: msi already bound to pirq=%d\n", pirq);
+		pirq = xen_allocate_pirq_msi(dev, msidesc);
+		if (pirq < 0) {
+			irq = -ENODEV;
+			goto error;
 		}
+		xen_msi_compose_msg(dev, pirq, &msg);
+		__pci_write_msi_msg(msidesc, &msg);
+		dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
 		irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
 					       (type == PCI_CAP_ID_MSI) ? nvec : 1,
 					       (type == PCI_CAP_ID_MSIX) ?
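For readability, the net result of the hunk above is roughly the
following; this is a paraphrase of the '+' and context lines, not new
code, and the tail of the bind call is cut off in the hunk above so it
is elided here:

	for_each_pci_msi_entry(msidesc, dev) {
		/* always ask Xen for a fresh pirq; never trust the
		 * (possibly stale) pirq cached in the MSI message data
		 */
		pirq = xen_allocate_pirq_msi(dev, msidesc);
		if (pirq < 0) {
			irq = -ENODEV;
			goto error;
		}
		xen_msi_compose_msg(dev, pirq, &msg);
		__pci_write_msi_msg(msidesc, &msg);
		dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
		/* ... then bind the pirq to an irq via
		 * xen_bind_pirq_msi_to_irq(), whose trailing arguments
		 * are truncated in the hunk above
		 */
	}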