From patchwork Mon Dec 13 02:07:50 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kai-Heng Feng
X-Patchwork-Id: 1567110
From: Kai-Heng Feng
To: kernel-team@lists.ubuntu.com
Subject: [Unstable/OEM-5.14] [PATCH 1/1] UBUNTU: SAUCE: PCI: vmd: Honor ACPI _OSC on PCIe features
Date: Mon, 13 Dec 2021 10:07:50 +0800
Message-Id: <20211213020750.155279-2-kai.heng.feng@canonical.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211213020750.155279-1-kai.heng.feng@canonical.com>
References: <20211213020750.155279-1-kai.heng.feng@canonical.com>
List-Id: Kernel team discussions
Sender: "kernel-team"

BugLink: https://bugs.launchpad.net/bugs/1954611

When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
combination causes an AER message flood and drags the system
performance down. The issue doesn't happen when VMD mode is disabled
in BIOS, since AER isn't enabled by acpi_pci_root_create().
When VMD mode is enabled, AER is enabled regardless of _OSC:

[    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
...
[    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146

Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
disable PCIe features accordingly to resolve the issue.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215027
Suggested-by: Rafael J. Wysocki
Reviewed-by: Rafael J. Wysocki
Signed-off-by: Kai-Heng Feng
[Reviewed by ACPI maintainer but not yet merged to PCI tree]
---
 drivers/pci/controller/vmd.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index a5987e52700e3..274e42e967f63 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -658,6 +658,21 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
 	return 0;
 }
 
+/*
+ * Since VMD is an aperture to regular PCIe root ports, only allow it to
+ * control features that the OS is allowed to control on the physical PCI bus.
+ */
+static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
+				       struct pci_host_bridge *vmd_bridge)
+{
+	vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
+	vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
+	vmd_bridge->native_aer = root_bridge->native_aer;
+	vmd_bridge->native_pme = root_bridge->native_pme;
+	vmd_bridge->native_ltr = root_bridge->native_ltr;
+	vmd_bridge->native_dpc = root_bridge->native_dpc;
+}
+
 static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 {
 	struct pci_sysdata *sd = &vmd->sysdata;
@@ -794,6 +809,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 		return -ENODEV;
 	}
 
+	vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
+				   to_pci_host_bridge(vmd->bus->bridge));
+
 	vmd_attach_resources(vmd);
 	if (vmd->irq_domain)
 		dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);