From patchwork Tue Mar  6 18:21:56 2018
X-Patchwork-Submitter: Dexuan Cui
X-Patchwork-Id: 882178
X-Patchwork-Delegate: lorenzo.pieralisi@arm.com
From: Dexuan Cui
To: bhelgaas@google.com, linux-pci@vger.kernel.org, KY Srinivasan,
    Stephen Hemminger, olaf@aepfle.de, apw@canonical.com, jasowang@redhat.com
Cc: linux-kernel@vger.kernel.org, driverdev-devel@linuxdriverproject.org,
    Haiyang Zhang, vkuznets@redhat.com, marcelo.cerri@canonical.com,
    Michael Kelley (EOSG), Dexuan Cui, stable@vger.kernel.org, Jack Morgenstein
Subject: [PATCH v3 6/6] PCI: hv: fix 2 hang issues in hv_compose_msi_msg()
Date: Tue, 6 Mar 2018 18:21:56 +0000
Message-ID: <20180306182128.23281-7-decui@microsoft.com>
In-Reply-To: <20180306182128.23281-1-decui@microsoft.com>
References: <20180306182128.23281-1-decui@microsoft.com>
X-Mailer: git-send-email 2.15.1
X-Mailing-List: linux-pci@vger.kernel.org
1. With the patch "x86/vector/msi: Switch to global reservation mode"
(4900be8360), recent v4.15 and newer kernels always hang for a 1-vCPU
Hyper-V VM with SR-IOV. This is because when we reach hv_compose_msi_msg()
via request_irq() -> request_threaded_irq() -> __setup_irq() ->
irq_startup() -> __irq_startup() -> irq_domain_activate_irq() -> ... ->
msi_domain_activate() -> ... -> hv_compose_msi_msg(), the local irq is
disabled in __setup_irq(). Fix this by polling the channel.

2. If the host is ejecting the VF device before we reach
hv_compose_msi_msg(), in a UP VM we can hang in hv_compose_msi_msg()
forever, because at this point the host does not respond to the
CREATE_INTERRUPT request.
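Both hangs come down to the same constraint: hv_compose_msi_msg() can be
entered with local interrupts disabled, so it must not sleep waiting for the
host's reply, and the reply can only be delivered through the VMBus channel
callback, which on a 1-vCPU guest has to run on the very CPU that is waiting.
The rough shape of the polling approach is sketched below; this is a
simplified illustration with a hypothetical helper name (wait_for_host_reply),
not the driver's exact code -- the real change is in the diff further down.

/*
 * Simplified sketch (hypothetical helper, not the driver's code): poll the
 * completion instead of sleeping on it, and drive the channel callback by
 * hand when we are running on the channel's target CPU.
 */
static void wait_for_host_reply(struct hv_pcibus_device *hbus,
				struct completion *host_event)
{
	/* wait_for_completion(host_event) could hang here with IRQs off. */
	while (!try_wait_for_completion(host_event)) {
		/*
		 * The host's reply arrives via the VMBus channel callback;
		 * on a 1-vCPU guest nobody else can run it for us, so call
		 * it directly when we are on the channel's target CPU.
		 */
		if (hbus->hdev->channel->target_cpu == smp_processor_id())
			hv_pci_onchannelcallback(hbus);

		udelay(100);
	}
}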
The second issue also happens to old kernels like v4.14, v4.13, etc.

Fix this by polling the channel for the PCI_EJECT message and
hpdev->state, and by checking the PCI vendor ID.

Note: the above issues also happen to an SMP VM, if
"hbus->hdev->channel->target_cpu == smp_processor_id()" is true.

Signed-off-by: Dexuan Cui
Tested-by: Adrian Suhov
Tested-by: Chris Valean
Cc: stable@vger.kernel.org
Cc: Stephen Hemminger
Cc: K. Y. Srinivasan
Cc: Vitaly Kuznetsov
Cc: Jack Morgenstein
Reviewed-by: Michael Kelley
Acked-by: Haiyang Zhang
---
 drivers/pci/host/pci-hyperv.c | 58 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 57 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
index 265ba11e53e2..50cdefe3f6d3 100644
--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -521,6 +521,8 @@ struct hv_pci_compl {
 	s32 completion_status;
 };
 
+static void hv_pci_onchannelcallback(void *context);
+
 /**
  * hv_pci_generic_compl() - Invoked for a completion packet
  * @context:	Set up by the sender of the packet.
@@ -665,6 +667,31 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where,
 	}
 }
 
+static u16 hv_pcifront_get_vendor_id(struct hv_pci_dev *hpdev)
+{
+	u16 ret;
+	unsigned long flags;
+	void __iomem *addr = hpdev->hbus->cfg_addr + CFG_PAGE_OFFSET +
+			     PCI_VENDOR_ID;
+
+	spin_lock_irqsave(&hpdev->hbus->config_lock, flags);
+
+	/* Choose the function to be read. (See comment above) */
+	writel(hpdev->desc.win_slot.slot, hpdev->hbus->cfg_addr);
+	/* Make sure the function was chosen before we start reading. */
+	mb();
+	/* Read from that function's config space. */
+	ret = readw(addr);
+	/*
+	 * mb() is not required here, because the spin_unlock_irqrestore()
+	 * is a barrier.
+	 */
+
+	spin_unlock_irqrestore(&hpdev->hbus->config_lock, flags);
+
+	return ret;
+}
+
 /**
  * _hv_pcifront_write_config() - Internal PCI config write
  * @hpdev:	The PCI driver's representation of the device
@@ -1107,8 +1134,37 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 	 * Since this function is called with IRQ locks held, can't
 	 * do normal wait for completion; instead poll.
 	 */
-	while (!try_wait_for_completion(&comp.comp_pkt.host_event))
+	while (!try_wait_for_completion(&comp.comp_pkt.host_event)) {
+		/* 0xFFFF means an invalid PCI VENDOR ID. */
+		if (hv_pcifront_get_vendor_id(hpdev) == 0xFFFF) {
+			dev_err_once(&hbus->hdev->device,
+				     "the device has gone\n");
+			goto free_int_desc;
+		}
+
+		/*
+		 * When the higher level interrupt code calls us with
+		 * interrupt disabled, we must poll the channel by calling
+		 * the channel callback directly when channel->target_cpu is
+		 * the current CPU. When the higher level interrupt code
+		 * calls us with interrupt enabled, let's add the
+		 * local_bh_disable()/enable() to avoid race.
+		 */
+		local_bh_disable();
+
+		if (hbus->hdev->channel->target_cpu == smp_processor_id())
+			hv_pci_onchannelcallback(hbus);
+
+		local_bh_enable();
+
+		if (hpdev->state == hv_pcichild_ejecting) {
+			dev_err_once(&hbus->hdev->device,
+				     "the device is being ejected\n");
+			goto free_int_desc;
+		}
+
 		udelay(100);
+	}
 
 	if (comp.comp_pkt.completion_status < 0) {
 		dev_err(&hbus->hdev->device,