From patchwork Mon Dec 6 22:27:25 2021
From: Thomas Gleixner
To: LKML
Subject: [patch V2 01/23] powerpc/4xx: Remove MSI support which never worked
Message-ID: <20211206210223.872249537@linutronix.de>
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:27:25 +0100 (CET)

This code is broken since day one.
ppc4xx_setup_msi_irqs() has the following gems: 1) The handling of the result of msi_bitmap_alloc_hwirqs() is completely broken: When the result is greater than or equal 0 (bitmap allocation successful) then the loop terminates and the function returns 0 (success) despite not having installed an interrupt. When the result is less than 0 (bitmap allocation fails), it prints an error message and continues to "work" with that error code which would eventually end up in the MSI message data. 2) On every invocation the file global pp4xx_msi::msi_virqs bitmap is allocated thereby leaking the previous one. IOW, this has never worked and for more than 10 years nobody cared. Remove the gunk. Fixes: 3fb7933850fa ("powerpc/4xx: Adding PCIe MSI support") Fixes: 247540b03bfc ("powerpc/44x: Fix PCI MSI support for Maui APM821xx SoC and Bluestone board") Signed-off-by: Thomas Gleixner Reviewed-by: Jason Gunthorpe Cc: Michael Ellerman Cc: Paul Mackerras Cc: Benjamin Herrenschmidt Cc: linuxppc-dev@lists.ozlabs.org --- arch/powerpc/platforms/4xx/Makefile | 1 arch/powerpc/platforms/4xx/msi.c | 281 ------------------------------------ arch/powerpc/sysdev/Kconfig | 6 3 files changed, 288 deletions(-) --- a/arch/powerpc/platforms/4xx/Makefile +++ b/arch/powerpc/platforms/4xx/Makefile @@ -3,6 +3,5 @@ obj-y += uic.o machine_check.o obj-$(CONFIG_4xx_SOC) += soc.o obj-$(CONFIG_PCI) += pci.o obj-$(CONFIG_PPC4xx_HSTA_MSI) += hsta_msi.o -obj-$(CONFIG_PPC4xx_MSI) += msi.o obj-$(CONFIG_PPC4xx_CPM) += cpm.o obj-$(CONFIG_PPC4xx_GPIO) += gpio.o --- a/arch/powerpc/platforms/4xx/msi.c +++ /dev/null @@ -1,281 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Adding PCI-E MSI support for PPC4XX SoCs. - * - * Copyright (c) 2010, Applied Micro Circuits Corporation - * Authors: Tirumala R Marri - * Feng Kan - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#define PEIH_TERMADH 0x00 -#define PEIH_TERMADL 0x08 -#define PEIH_MSIED 0x10 -#define PEIH_MSIMK 0x18 -#define PEIH_MSIASS 0x20 -#define PEIH_FLUSH0 0x30 -#define PEIH_FLUSH1 0x38 -#define PEIH_CNTRST 0x48 - -static int msi_irqs; - -struct ppc4xx_msi { - u32 msi_addr_lo; - u32 msi_addr_hi; - void __iomem *msi_regs; - int *msi_virqs; - struct msi_bitmap bitmap; - struct device_node *msi_dev; -}; - -static struct ppc4xx_msi ppc4xx_msi; - -static int ppc4xx_msi_init_allocator(struct platform_device *dev, - struct ppc4xx_msi *msi_data) -{ - int err; - - err = msi_bitmap_alloc(&msi_data->bitmap, msi_irqs, - dev->dev.of_node); - if (err) - return err; - - err = msi_bitmap_reserve_dt_hwirqs(&msi_data->bitmap); - if (err < 0) { - msi_bitmap_free(&msi_data->bitmap); - return err; - } - - return 0; -} - -static int ppc4xx_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) -{ - int int_no = -ENOMEM; - unsigned int virq; - struct msi_msg msg; - struct msi_desc *entry; - struct ppc4xx_msi *msi_data = &ppc4xx_msi; - - dev_dbg(&dev->dev, "PCIE-MSI:%s called. 
vec %x type %d\n", - __func__, nvec, type); - if (type == PCI_CAP_ID_MSIX) - pr_debug("ppc4xx msi: MSI-X untested, trying anyway.\n"); - - msi_data->msi_virqs = kmalloc_array(msi_irqs, sizeof(int), GFP_KERNEL); - if (!msi_data->msi_virqs) - return -ENOMEM; - - for_each_pci_msi_entry(entry, dev) { - int_no = msi_bitmap_alloc_hwirqs(&msi_data->bitmap, 1); - if (int_no >= 0) - break; - if (int_no < 0) { - pr_debug("%s: fail allocating msi interrupt\n", - __func__); - } - virq = irq_of_parse_and_map(msi_data->msi_dev, int_no); - if (!virq) { - dev_err(&dev->dev, "%s: fail mapping irq\n", __func__); - msi_bitmap_free_hwirqs(&msi_data->bitmap, int_no, 1); - return -ENOSPC; - } - dev_dbg(&dev->dev, "%s: virq = %d\n", __func__, virq); - - /* Setup msi address space */ - msg.address_hi = msi_data->msi_addr_hi; - msg.address_lo = msi_data->msi_addr_lo; - - irq_set_msi_desc(virq, entry); - msg.data = int_no; - pci_write_msi_msg(virq, &msg); - } - return 0; -} - -void ppc4xx_teardown_msi_irqs(struct pci_dev *dev) -{ - struct msi_desc *entry; - struct ppc4xx_msi *msi_data = &ppc4xx_msi; - irq_hw_number_t hwirq; - - dev_dbg(&dev->dev, "PCIE-MSI: tearing down msi irqs\n"); - - for_each_pci_msi_entry(entry, dev) { - if (!entry->irq) - continue; - hwirq = virq_to_hw(entry->irq); - irq_set_msi_desc(entry->irq, NULL); - irq_dispose_mapping(entry->irq); - msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1); - } -} - -static int ppc4xx_setup_pcieh_hw(struct platform_device *dev, - struct resource res, struct ppc4xx_msi *msi) -{ - const u32 *msi_data; - const u32 *msi_mask; - const u32 *sdr_addr; - dma_addr_t msi_phys; - void *msi_virt; - int err; - - sdr_addr = of_get_property(dev->dev.of_node, "sdr-base", NULL); - if (!sdr_addr) - return -EINVAL; - - msi_data = of_get_property(dev->dev.of_node, "msi-data", NULL); - if (!msi_data) - return -EINVAL; - - msi_mask = of_get_property(dev->dev.of_node, "msi-mask", NULL); - if (!msi_mask) - return -EINVAL; - - msi->msi_dev = of_find_node_by_name(NULL, "ppc4xx-msi"); - if (!msi->msi_dev) - return -ENODEV; - - msi->msi_regs = of_iomap(msi->msi_dev, 0); - if (!msi->msi_regs) { - dev_err(&dev->dev, "of_iomap failed\n"); - err = -ENOMEM; - goto node_put; - } - dev_dbg(&dev->dev, "PCIE-MSI: msi register mapped 0x%x 0x%x\n", - (u32) (msi->msi_regs + PEIH_TERMADH), (u32) (msi->msi_regs)); - - msi_virt = dma_alloc_coherent(&dev->dev, 64, &msi_phys, GFP_KERNEL); - if (!msi_virt) { - err = -ENOMEM; - goto iounmap; - } - msi->msi_addr_hi = upper_32_bits(msi_phys); - msi->msi_addr_lo = lower_32_bits(msi_phys & 0xffffffff); - dev_dbg(&dev->dev, "PCIE-MSI: msi address high 0x%x, low 0x%x\n", - msi->msi_addr_hi, msi->msi_addr_lo); - - mtdcri(SDR0, *sdr_addr, upper_32_bits(res.start)); /*HIGH addr */ - mtdcri(SDR0, *sdr_addr + 1, lower_32_bits(res.start)); /* Low addr */ - - /* Progam the Interrupt handler Termination addr registers */ - out_be32(msi->msi_regs + PEIH_TERMADH, msi->msi_addr_hi); - out_be32(msi->msi_regs + PEIH_TERMADL, msi->msi_addr_lo); - - /* Program MSI Expected data and Mask bits */ - out_be32(msi->msi_regs + PEIH_MSIED, *msi_data); - out_be32(msi->msi_regs + PEIH_MSIMK, *msi_mask); - - dma_free_coherent(&dev->dev, 64, msi_virt, msi_phys); - - return 0; - -iounmap: - iounmap(msi->msi_regs); -node_put: - of_node_put(msi->msi_dev); - return err; -} - -static int ppc4xx_of_msi_remove(struct platform_device *dev) -{ - struct ppc4xx_msi *msi = dev->dev.platform_data; - int i; - int virq; - - for (i = 0; i < msi_irqs; i++) { - virq = msi->msi_virqs[i]; - if (virq) - 
irq_dispose_mapping(virq); - } - - if (msi->bitmap.bitmap) - msi_bitmap_free(&msi->bitmap); - iounmap(msi->msi_regs); - of_node_put(msi->msi_dev); - - return 0; -} - -static int ppc4xx_msi_probe(struct platform_device *dev) -{ - struct ppc4xx_msi *msi; - struct resource res; - int err = 0; - struct pci_controller *phb; - - dev_dbg(&dev->dev, "PCIE-MSI: Setting up MSI support...\n"); - - msi = devm_kzalloc(&dev->dev, sizeof(*msi), GFP_KERNEL); - if (!msi) - return -ENOMEM; - dev->dev.platform_data = msi; - - /* Get MSI ranges */ - err = of_address_to_resource(dev->dev.of_node, 0, &res); - if (err) { - dev_err(&dev->dev, "%pOF resource error!\n", dev->dev.of_node); - return err; - } - - msi_irqs = of_irq_count(dev->dev.of_node); - if (!msi_irqs) - return -ENODEV; - - err = ppc4xx_setup_pcieh_hw(dev, res, msi); - if (err) - return err; - - err = ppc4xx_msi_init_allocator(dev, msi); - if (err) { - dev_err(&dev->dev, "Error allocating MSI bitmap\n"); - goto error_out; - } - ppc4xx_msi = *msi; - - list_for_each_entry(phb, &hose_list, list_node) { - phb->controller_ops.setup_msi_irqs = ppc4xx_setup_msi_irqs; - phb->controller_ops.teardown_msi_irqs = ppc4xx_teardown_msi_irqs; - } - return 0; - -error_out: - ppc4xx_of_msi_remove(dev); - return err; -} -static const struct of_device_id ppc4xx_msi_ids[] = { - { - .compatible = "amcc,ppc4xx-msi", - }, - {} -}; -static struct platform_driver ppc4xx_msi_driver = { - .probe = ppc4xx_msi_probe, - .remove = ppc4xx_of_msi_remove, - .driver = { - .name = "ppc4xx-msi", - .of_match_table = ppc4xx_msi_ids, - }, - -}; - -static __init int ppc4xx_msi_init(void) -{ - return platform_driver_register(&ppc4xx_msi_driver); -} - -subsys_initcall(ppc4xx_msi_init); --- a/arch/powerpc/sysdev/Kconfig +++ b/arch/powerpc/sysdev/Kconfig @@ -12,17 +12,11 @@ config PPC4xx_HSTA_MSI depends on PCI_MSI depends on PCI && 4xx -config PPC4xx_MSI - bool - depends on PCI_MSI - depends on PCI && 4xx - config PPC_MSI_BITMAP bool depends on PCI_MSI default y if MPIC default y if FSL_PCI - default y if PPC4xx_MSI default y if PPC_POWERNV source "arch/powerpc/sysdev/xics/Kconfig" From patchwork Mon Dec 6 22:27:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564255 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=cLSzakHM; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=AC1UyRei; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J2s3WYNz9sCD for ; Tue, 7 Dec 2021 09:27:33 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356601AbhLFWbA (ORCPT ); Mon, 6 Dec 2021 17:31:00 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45442 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242901AbhLFWa6 (ORCPT ); Mon, 6 Dec 2021 17:30:58 -0500 Message-ID: <20211206210223.929792157@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
From: Thomas Gleixner
To: LKML
Subject: [patch V2 02/23] PCI/MSI: Fix pci_irq_vector()/pci_irq_get_affinity()
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:27:26 +0100 (CET)

pci_irq_vector() and pci_irq_get_affinity() use the list position to find
the MSI-X descriptor at a given index. That's correct for the normal case
where the entry number is the same as the list position.

But it's wrong for cases where MSI-X was allocated with an entries array
describing sparse entry numbers into the hardware message descriptor
table. That's inconsistent at best.

Make it always check the entry number because that's what the zero-based
index really means.

This change won't break existing users which use a sparse entries array
for allocation because these users retrieve the Linux interrupt number
from the entries array after allocation and none of them uses
pci_irq_vector() or pci_irq_get_affinity().

Fixes: aff171641d18 ("PCI: Provide sensible IRQ vector alloc/free routines")
Signed-off-by: Thomas Gleixner
Tested-by: Juergen Gross
Reviewed-by: Jason Gunthorpe
Acked-by: Bjorn Helgaas
---
V2: Fix typo in subject - Jason
---
 drivers/pci/msi.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1187,19 +1187,24 @@ EXPORT_SYMBOL(pci_free_irq_vectors);
 /**
  * pci_irq_vector - return Linux IRQ number of a device vector
- * @dev: PCI device to operate on
- * @nr: device-relative interrupt vector index (0-based).
+ * @dev:	PCI device to operate on
+ * @nr:		Interrupt vector index (0-based)
+ *
+ * @nr has the following meanings depending on the interrupt mode:
+ *	MSI-X:	The index in the MSI-X vector table
+ *	MSI:	The index of the enabled MSI vectors
+ *	INTx:	Must be 0
+ *
+ * Return: The Linux interrupt number or -EINVAL if @nr is out of range.
*/ int pci_irq_vector(struct pci_dev *dev, unsigned int nr) { if (dev->msix_enabled) { struct msi_desc *entry; - int i = 0; for_each_pci_msi_entry(entry, dev) { - if (i == nr) + if (entry->msi_attrib.entry_nr == nr) return entry->irq; - i++; } WARN_ON_ONCE(1); return -EINVAL; @@ -1223,17 +1228,22 @@ EXPORT_SYMBOL(pci_irq_vector); * pci_irq_get_affinity - return the affinity of a particular MSI vector * @dev: PCI device to operate on * @nr: device-relative interrupt vector index (0-based). + * + * @nr has the following meanings depending on the interrupt mode: + * MSI-X: The index in the MSI-X vector table + * MSI: The index of the enabled MSI vectors + * INTx: Must be 0 + * + * Return: A cpumask pointer or NULL if @nr is out of range */ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr) { if (dev->msix_enabled) { struct msi_desc *entry; - int i = 0; for_each_pci_msi_entry(entry, dev) { - if (i == nr) + if (entry->msi_attrib.entry_nr == nr) return &entry->affinity->mask; - i++; } WARN_ON_ONCE(1); return NULL; From patchwork Mon Dec 6 22:27:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564256 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=dtv/3kpi; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=ezCUc6zc; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J2t2FJjz9s1l for ; Tue, 7 Dec 2021 09:27:34 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356669AbhLFWbB (ORCPT ); Mon, 6 Dec 2021 17:31:01 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356541AbhLFWa7 (ORCPT ); Mon, 6 Dec 2021 17:30:59 -0500 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2F9E2C061746; Mon, 6 Dec 2021 14:27:30 -0800 (PST) Message-ID: <20211206210223.985907940@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829648; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=id3JgIbjZgRZL0ofCnsEvlAPkFN569FAc5JY13SwTCA=; b=dtv/3kpiYCPMOT1Cw9v7Go2jJd37JGwlcLZUvP4q4CZcykY4ErTRRN6blBqFVnD+UMlUBq IbSPQ6DvtU6NnokqTJsz/cifnF9TB+UGigT98bUbP9QgHqbfvg52eg506MBfCI6r0vmnKg MjmihgmNqzycJJpD74maNXj+PIYFNlMBjC2jDqEuC5sJzApTh0mV0k/vM21wv4lw/G5jbY c+q6mtr7VmBYGvYZ1YxwZqR2FwhcE+UxV1sory9IQr1hjBCtnqVHcLm1yLHb4mf4LwL7lv 6hxQ11J1Pdt/jtTptuAe42u7/7PQhmsoi6nY65ueu367lfi5FaYMMDOataiBBg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829648; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; 
From: Thomas Gleixner
To: LKML
Subject: [patch V2 03/23] genirq/msi: Guard sysfs code
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:27:28 +0100 (CET)

No point in building unused code when CONFIG_SYSFS=n.

Signed-off-by: Thomas Gleixner
Tested-by: Juergen Gross
Reviewed-by: Jason Gunthorpe
Reviewed-by: Greg Kroah-Hartman
---
 include/linux/msi.h | 10 ++++++++++
 kernel/irq/msi.c    |  2 ++
 2 files changed, 12 insertions(+)

--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -239,9 +239,19 @@ void __pci_write_msi_msg(struct msi_desc
 void pci_msi_mask_irq(struct irq_data *data);
 void pci_msi_unmask_irq(struct irq_data *data);
 
+#ifdef CONFIG_SYSFS
 const struct attribute_group **msi_populate_sysfs(struct device *dev);
 void msi_destroy_sysfs(struct device *dev,
 		       const struct attribute_group **msi_irq_groups);
+#else
+static inline const struct attribute_group **msi_populate_sysfs(struct device *dev)
+{
+	return NULL;
+}
+static inline void msi_destroy_sysfs(struct device *dev, const struct attribute_group **msi_irq_groups)
+{
+}
+#endif
 
 /*
  * The arch hooks to setup up msi irqs.
Default functions are implemented --- a/kernel/irq/msi.c +++ b/kernel/irq/msi.c @@ -72,6 +72,7 @@ void get_cached_msi_msg(unsigned int irq } EXPORT_SYMBOL_GPL(get_cached_msi_msg); +#ifdef CONFIG_SYSFS static ssize_t msi_mode_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -204,6 +205,7 @@ void msi_destroy_sysfs(struct device *de kfree(msi_irq_groups); } } +#endif #ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN static inline void irq_chip_write_msi_msg(struct irq_data *data, From patchwork Mon Dec 6 22:27:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564260 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=VUkO3W8P; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=kAM4q1j8; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J2z6X8pz9s1l for ; Tue, 7 Dec 2021 09:27:39 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356910AbhLFWbH (ORCPT ); Mon, 6 Dec 2021 17:31:07 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45516 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356553AbhLFWbB (ORCPT ); Mon, 6 Dec 2021 17:31:01 -0500 Message-ID: <20211206210224.041777889@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829650; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=tZSxCAPD1C/72AGbeyH6kqtxyoXqz8oMI6L3bIBgwo8=; b=VUkO3W8Pm9/sA/VUkP33Z1m5XdVqjjFScjNp9xzbJHH6VrzKV611tW5ns/aEvdBtDB5hhY 4sUYsRTR30anXGxzbOb0K9PMdNFmLOz3BtlIbLryJFmRON/QnRUAVR/tUTC6oCBXwHrZNU z+uQJjknPy98RzoGlyywBCTzDA5LQQ/GGEJBDUQgMdOGLnUvWgS3MwvOzMBbN3MyH24wyE wloaWdPwNovCCcXh3rks0tF7L7gx6H5HzNZzGmXGzeG4yaX/ZJz5jQYq+o9N1e9u0u4iqL y3XBSQqFZdf+aTZPjClD1dgm+ky3LS8iWYvusLGtZCCQqQWdojsreK8LMBSkSg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829650; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=tZSxCAPD1C/72AGbeyH6kqtxyoXqz8oMI6L3bIBgwo8=; b=kAM4q1j8ks4DVQFHaEciD0kyWWmX1az4hnbSd/G606ypPv2fKA8RKlEiRFpQ/EsaY+4iOs 9cLVCE7hHJDUqHDA== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 04/23] genirq/msi: Remove unused domain callbacks References: 
<20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:29 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org No users and there is no need to grow them. Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Link: https://lore.kernel.org/r/20211126223824.322987915@linutronix.de --- include/linux/msi.h | 11 ++++------- kernel/irq/msi.c | 5 ----- 2 files changed, 4 insertions(+), 12 deletions(-) --- a/include/linux/msi.h +++ b/include/linux/msi.h @@ -304,7 +304,6 @@ struct msi_domain_info; * @msi_free: Domain specific function to free a MSI interrupts * @msi_check: Callback for verification of the domain/info/dev data * @msi_prepare: Prepare the allocation of the interrupts in the domain - * @msi_finish: Optional callback to finalize the allocation * @set_desc: Set the msi descriptor for an interrupt * @handle_error: Optional error handler if the allocation fails * @domain_alloc_irqs: Optional function to override the default allocation @@ -312,12 +311,11 @@ struct msi_domain_info; * @domain_free_irqs: Optional function to override the default free * function. * - * @get_hwirq, @msi_init and @msi_free are callbacks used by - * msi_create_irq_domain() and related interfaces + * @get_hwirq, @msi_init and @msi_free are callbacks used by the underlying + * irqdomain. * - * @msi_check, @msi_prepare, @msi_finish, @set_desc and @handle_error - * are callbacks used by msi_domain_alloc_irqs() and related - * interfaces which are based on msi_desc. + * @msi_check, @msi_prepare, @handle_error and @set_desc are callbacks used by + * msi_domain_alloc/free_irqs(). * * @domain_alloc_irqs, @domain_free_irqs can be used to override the * default allocation/free functions (__msi_domain_alloc/free_irqs). 
This @@ -351,7 +349,6 @@ struct msi_domain_ops { int (*msi_prepare)(struct irq_domain *domain, struct device *dev, int nvec, msi_alloc_info_t *arg); - void (*msi_finish)(msi_alloc_info_t *arg, int retval); void (*set_desc)(msi_alloc_info_t *arg, struct msi_desc *desc); int (*handle_error)(struct irq_domain *domain, --- a/kernel/irq/msi.c +++ b/kernel/irq/msi.c @@ -562,8 +562,6 @@ int __msi_domain_alloc_irqs(struct irq_d ret = -ENOSPC; if (ops->handle_error) ret = ops->handle_error(domain, desc, ret); - if (ops->msi_finish) - ops->msi_finish(&arg, ret); return ret; } @@ -573,9 +571,6 @@ int __msi_domain_alloc_irqs(struct irq_d } } - if (ops->msi_finish) - ops->msi_finish(&arg, 0); - can_reserve = msi_check_reservation_mode(domain, info, dev); /* From patchwork Mon Dec 6 22:27:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564258 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=tdR1Yo1b; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=rM6g+Z9D; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J2y3cxxz9s1l for ; Tue, 7 Dec 2021 09:27:38 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356723AbhLFWbE (ORCPT ); Mon, 6 Dec 2021 17:31:04 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45548 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242901AbhLFWbC (ORCPT ); Mon, 6 Dec 2021 17:31:02 -0500 Message-ID: <20211206210224.103502021@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829652; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=aL/VIL4bKAOk4G3UXRaDbYDCQCUHomOCFOLYCw7n0T4=; b=tdR1Yo1b4L979PReLtRwSr9KoFKCjaxkig8tEe2FBIPXHXjOv7Mc2jnpFy5d5ujcRG9zf2 YANt+QuUbpnUYaWhnBtSHaKqGvAj3EA/vWxXcC93VD/psiez55uaqQuMXahn3k2wcp7TIs dzvgsa3MguKW/H/7qISbFdepvIsonMp+pcJQTxLy0Tq8xqNv4AzIHYrBlQjpabukLMoAQH +8WoxFZhtgRRZ0BM+5uKEmfxHrc+cg3hLFITAABCGn+9ztS/P7IusvhBD0BzMAiM5V91it NZB9Ix02RBqpBYkaQPqR4J0ny70Kpv7PWUnkultE1cGpFyC/CxWO/mhIGaySJg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829652; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=aL/VIL4bKAOk4G3UXRaDbYDCQCUHomOCFOLYCw7n0T4=; b=rM6g+Z9DTE4BADbMORMbZP61UuPqp6XkB7syFu8ZEotroMrF5V2TzTS9qjDZR3MfVEmuj2 pW0Kwnn6e9i/AjBg== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , 
sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 05/23] genirq/msi: Fixup includes References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:31 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Remove the kobject.h include from msi.h as it's not required and add a sysfs.h include to the core code instead. Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Reviewed-by: Greg Kroah-Hartman --- include/linux/msi.h | 2 +- kernel/irq/msi.c | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) --- a/include/linux/msi.h +++ b/include/linux/msi.h @@ -2,7 +2,7 @@ #ifndef LINUX_MSI_H #define LINUX_MSI_H -#include +#include #include #include --- a/kernel/irq/msi.c +++ b/kernel/irq/msi.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include "internals.h" From patchwork Mon Dec 6 22:27:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564262 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=faGOpwdX; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=KcSH+DBC; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J326rZCz9s1l for ; Tue, 7 Dec 2021 09:27:42 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356968AbhLFWbK (ORCPT ); Mon, 6 Dec 2021 17:31:10 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45612 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356707AbhLFWbE (ORCPT ); Mon, 6 Dec 2021 17:31:04 -0500 Message-ID: <20211206210224.157070464@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829653; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=TDFYojyJUnT1Pr+HD2qBTTTO8BHHsqWcbvlgAiLi1mo=; b=faGOpwdXacJz+kDCBSBkEhVaO4tWIE9P4Qj5ETLfpZZFSN40Jc6KRKvtUjKqzbcgr/ztKa qbs3AQMJ5YqDzSZ/LBJnHdYuDtL3UTT2HjTS9uDFx0+ryP9JO6T8QJqki3Nd//oVR/X9VW e1FjY69NA0GKaGyYgqvsTrBfJX5lElCMCyT9FX6YoUuOuuEqqOc/GLtzdgl3zj7Vtoiv7E DlxF1pyTqCvYRbWG64ZI7oDzx/Ar7iOdm474EVal1taH8l1XMh6ZbffomzLXz0G5zILmHX sQC0TNZYbr8j+7dwqKqYgzrivx/aEdj/MXodn28TIjm3ZJ1lp6FCpndDXoNOSQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829653; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=TDFYojyJUnT1Pr+HD2qBTTTO8BHHsqWcbvlgAiLi1mo=; b=KcSH+DBC8ml3AxFwN340Wlqo3YBI/t/phMRlVFFD4wwY62x/NNn0qh9e3mmURC/L3bKGnm CtUH2opZXJZR53Dw== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason 
Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 06/23] PCI/MSI: Make pci_msi_domain_write_msg() static References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:33 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org There is no point to have this function public as it is set by the PCI core anyway when a PCI/MSI irqdomain is created. Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Acked-by: Bjorn Helgaas # PCI --- drivers/irqchip/irq-gic-v2m.c | 1 - drivers/irqchip/irq-gic-v3-its-pci-msi.c | 1 - drivers/irqchip/irq-gic-v3-mbi.c | 1 - drivers/pci/msi.c | 2 +- include/linux/msi.h | 1 - 5 files changed, 1 insertion(+), 5 deletions(-) --- a/drivers/irqchip/irq-gic-v2m.c +++ b/drivers/irqchip/irq-gic-v2m.c @@ -88,7 +88,6 @@ static struct irq_chip gicv2m_msi_irq_ch .irq_mask = gicv2m_mask_msi_irq, .irq_unmask = gicv2m_unmask_msi_irq, .irq_eoi = irq_chip_eoi_parent, - .irq_write_msi_msg = pci_msi_domain_write_msg, }; static struct msi_domain_info gicv2m_msi_domain_info = { --- a/drivers/irqchip/irq-gic-v3-its-pci-msi.c +++ b/drivers/irqchip/irq-gic-v3-its-pci-msi.c @@ -28,7 +28,6 @@ static struct irq_chip its_msi_irq_chip .irq_unmask = its_unmask_msi_irq, .irq_mask = its_mask_msi_irq, .irq_eoi = irq_chip_eoi_parent, - .irq_write_msi_msg = pci_msi_domain_write_msg, }; static int its_pci_msi_vec_count(struct pci_dev *pdev, void *data) --- a/drivers/irqchip/irq-gic-v3-mbi.c +++ b/drivers/irqchip/irq-gic-v3-mbi.c @@ -171,7 +171,6 @@ static struct irq_chip mbi_msi_irq_chip .irq_unmask = mbi_unmask_msi_irq, .irq_eoi = irq_chip_eoi_parent, .irq_compose_msi_msg = mbi_compose_msi_msg, - .irq_write_msi_msg = pci_msi_domain_write_msg, }; static struct msi_domain_info mbi_msi_domain_info = { --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -1281,7 +1281,7 @@ EXPORT_SYMBOL_GPL(msi_desc_to_pci_sysdat * @irq_data: Pointer to interrupt data of the MSI interrupt * @msg: Pointer to the message */ -void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg) +static void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg) { struct msi_desc *desc = irq_data_get_msi_desc(irq_data); --- a/include/linux/msi.h +++ b/include/linux/msi.h @@ -455,7 +455,6 @@ void *platform_msi_get_host_data(struct #endif /* CONFIG_GENERIC_MSI_IRQ_DOMAIN */ #ifdef CONFIG_PCI_MSI_IRQ_DOMAIN -void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg); struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode, struct msi_domain_info *info, struct irq_domain *parent); From patchwork Mon Dec 6 22:27:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564265 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=GUOH6mWh; dkim=pass 
From: Thomas Gleixner
To: LKML
Subject: [patch V2 07/23] PCI/MSI: Remove msi_desc_to_pci_sysdata()
Message-ID: <20211206210224.210768199@linutronix.de>
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:27:34 +0100 (CET)

Last user is gone long ago.
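Should a user ever reappear, the helper is trivial to open-code on top of
msi_desc_to_pci_dev(); a minimal sketch mirroring the body removed below
(the function name here is made up):

static void *pci_sysdata_from_msi_desc(struct msi_desc *desc)
{
	/* Same as the removed msi_desc_to_pci_sysdata() */
	struct pci_dev *pdev = msi_desc_to_pci_dev(desc);

	return pdev->bus->sysdata;
}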
Signed-off-by: Thomas Gleixner
Tested-by: Juergen Gross
Reviewed-by: Jason Gunthorpe
Acked-by: Bjorn Helgaas
---
 drivers/pci/msi.c   | 8 --------
 include/linux/msi.h | 5 -----
 2 files changed, 13 deletions(-)

--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1267,14 +1267,6 @@ struct pci_dev *msi_desc_to_pci_dev(stru
 }
 EXPORT_SYMBOL(msi_desc_to_pci_dev);
 
-void *msi_desc_to_pci_sysdata(struct msi_desc *desc)
-{
-	struct pci_dev *dev = msi_desc_to_pci_dev(desc);
-
-	return dev->bus->sysdata;
-}
-EXPORT_SYMBOL_GPL(msi_desc_to_pci_sysdata);
-
 #ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
 /**
  * pci_msi_domain_write_msg - Helper to write MSI message to PCI config space
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -218,13 +218,8 @@ static inline void msi_desc_set_iommu_co
 	for_each_msi_entry((desc), &(pdev)->dev)
 
 struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc);
-void *msi_desc_to_pci_sysdata(struct msi_desc *desc);
 void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg);
 #else /* CONFIG_PCI_MSI */
-static inline void *msi_desc_to_pci_sysdata(struct msi_desc *desc)
-{
-	return NULL;
-}
 static inline void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg)
 {
 }
From patchwork Mon Dec 6 22:27:36 2021
From: Thomas Gleixner
To: LKML
Subject: [patch V2 08/23] PCI/sysfs: Use pci_irq_vector()
Message-ID: <20211206210224.265589103@linutronix.de>
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:27:36 +0100 (CET)

instead of fiddling with msi descriptors.

Signed-off-by: Thomas Gleixner
Tested-by: Juergen Gross
Reviewed-by: Jason Gunthorpe
Reviewed-by: Greg Kroah-Hartman
Acked-by: Bjorn Helgaas
---
 drivers/pci/pci-sysfs.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

--- a/drivers/pci/pci-sysfs.c
+++ b/drivers/pci/pci-sysfs.c
@@ -62,11 +62,8 @@ static ssize_t irq_show(struct device *d
 	 * For MSI, show the first MSI IRQ; for all other cases including
 	 * MSI-X, show the legacy INTx IRQ.
 	 */
-	if (pdev->msi_enabled) {
-		struct msi_desc *desc = first_pci_msi_entry(pdev);
-
-		return sysfs_emit(buf, "%u\n", desc->irq);
-	}
+	if (pdev->msi_enabled)
+		return sysfs_emit(buf, "%u\n", pci_irq_vector(pdev, 0));
 #endif
 
 	return sysfs_emit(buf, "%u\n", pdev->irq);
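The index argument of pci_irq_vector() follows the per-mode meaning spelled
out in patch 02 of this series; the sysfs change above relies on index 0
naming the first enabled MSI vector. A minimal sketch of the usual
driver-side pattern, with hypothetical device/handler names:

#include <linux/interrupt.h>
#include <linux/pci.h>

static irqreturn_t demo_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int demo_setup_irq(struct pci_dev *pdev)
{
	int nvec, irq;

	/* Up to 4 vectors, MSI-X preferred, MSI or legacy INTx as fallback */
	nvec = pci_alloc_irq_vectors(pdev, 1, 4, PCI_IRQ_ALL_TYPES);
	if (nvec < 0)
		return nvec;

	/*
	 * Index 0 is valid in every mode: MSI-X table entry 0, the first
	 * enabled MSI vector, or the INTx line.
	 */
	irq = pci_irq_vector(pdev, 0);
	if (irq < 0)
		return irq;

	return request_irq(irq, demo_handler, 0, "demo", pdev);
}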
From patchwork Mon Dec 6 22:27:38 2021
From: Thomas Gleixner
To: LKML
Subject: [patch V2 09/23] MIPS: Octeon: Use arch_setup_msi_irq()
Message-ID: <20211206210224.319201379@linutronix.de>
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:27:38 +0100 (CET)

The core code provides the same loop code except for the MSI-X reject.
Move that to arch_setup_msi_irq() and remove the duplicated code.

No functional change.

Signed-off-by: Thomas Gleixner
Reviewed-by: Jason Gunthorpe
Acked-by: Thomas Bogendoerfer
Cc: linux-mips@vger.kernel.org
---
 arch/mips/pci/msi-octeon.c | 32 +++-----------------------------
 1 file changed, 3 insertions(+), 29 deletions(-)

--- a/arch/mips/pci/msi-octeon.c
+++ b/arch/mips/pci/msi-octeon.c
@@ -68,6 +68,9 @@ int arch_setup_msi_irq(struct pci_dev *d
 	u64 search_mask;
 	int index;
 
+	if (desc->pci.msi_attrib.is_msix)
+		return -EINVAL;
+
 	/*
	 * Read the MSI config to figure out how many IRQs this device
	 * wants. Most devices only want 1, which will give
@@ -182,35 +185,6 @@ int arch_setup_msi_irq(struct pci_dev *d
 	return 0;
 }
 
-int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
-{
-	struct msi_desc *entry;
-	int ret;
-
-	/*
-	 * MSI-X is not supported.
-	 */
-	if (type == PCI_CAP_ID_MSIX)
-		return -EINVAL;
-
-	/*
-	 * If an architecture wants to support multiple MSI, it needs to
-	 * override arch_setup_msi_irqs()
-	 */
-	if (type == PCI_CAP_ID_MSI && nvec > 1)
-		return 1;
-
-	for_each_pci_msi_entry(entry, dev) {
-		ret = arch_setup_msi_irq(dev, entry);
-		if (ret < 0)
-			return ret;
-		if (ret > 0)
-			return -ENOSPC;
-	}
-
-	return 0;
-}
-
 /**
  * Called when a device no longer needs its MSI interrupts. All
  * MSI interrupts for the device are freed.
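For reference, the duplicated loop removed above has the same shape as the
weak generic fallback in the PCI/MSI core that Octeon now relies on; only
the MSI-X reject had to move into arch_setup_msi_irq(). A rough sketch of
that shape (the function name is illustrative, not the exact core
implementation):

static int sketch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
{
	struct msi_desc *entry;
	int ret;

	/* Multiple MSI still requires an architecture override */
	if (type == PCI_CAP_ID_MSI && nvec > 1)
		return 1;

	for_each_pci_msi_entry(entry, dev) {
		ret = arch_setup_msi_irq(dev, entry);
		if (ret < 0)
			return ret;
		if (ret > 0)
			/* No more hardware vectors available */
			return -ENOSPC;
	}

	return 0;
}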
From patchwork Mon Dec 6 22:27:39 2021
From: Thomas Gleixner
To: LKML
Subject: [patch V2 10/23] genirq/msi, treewide: Use a named struct for PCI/MSI attributes
Message-ID: <20211206210224.374863119@linutronix.de>
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:27:39 +0100 (CET)

The unnamed struct sucks and is in the way of further cleanups. Stick the
PCI related MSI data into a real data structure and cleanup all users.

No functional change.
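The conversion below is mechanical: every access to the formerly unnamed
PCI fields of struct msi_desc now goes through a named pci member, e.g.
desc->msi_attrib.is_64 becomes desc->pci.msi_attrib.is_64. A rough sketch
of the shape implied by the users in this patch (the field names are the
ones actually accessed below; grouping and exact types are illustrative,
not the verbatim kernel definition):

struct pci_msi_desc_sketch {
	struct {
		u8	is_msix	   : 1;	/* MSI-X descriptor */
		u8	is_64	   : 1;	/* 64-bit address capable */
		u8	is_virtual : 1;	/* Virtual MSI-X entry */
		u8	can_mask   : 1;	/* Entry is maskable */
		u8	multi_cap  : 3;	/* log2 of supported multi-MSI */
		u8	multiple   : 3;	/* log2 of enabled multi-MSI */
		u16	entry_nr;	/* Entry in the MSI-X vector table */
	} msi_attrib;
	u32		msi_mask;	/* MSI: cached mask bits */
	u32		msix_ctrl;	/* MSI-X: cached vector control word */
	u8		mask_pos;	/* MSI: mask register config offset */
	void __iomem	*mask_base;	/* MSI-X: mapped vector table base */
};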
Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Acked-by: Kalle Valo Cc: Greg Kroah-Hartman Cc: sparclinux@vger.kernel.org Cc: x86@kernel.org Cc: xen-devel@lists.xenproject.org Cc: ath11k@lists.infradead.org Reviewed-by: Greg Kroah-Hartman --- arch/powerpc/platforms/cell/axon_msi.c | 2 arch/powerpc/platforms/powernv/pci-ioda.c | 4 - arch/powerpc/platforms/pseries/msi.c | 6 - arch/sparc/kernel/pci_msi.c | 4 - arch/x86/kernel/apic/msi.c | 2 arch/x86/pci/xen.c | 6 - drivers/net/wireless/ath/ath11k/pci.c | 2 drivers/pci/msi.c | 116 +++++++++++++++--------------- drivers/pci/xen-pcifront.c | 2 include/linux/msi.h | 84 ++++++++++----------- kernel/irq/msi.c | 4 - 11 files changed, 115 insertions(+), 117 deletions(-) --- a/arch/powerpc/platforms/cell/axon_msi.c +++ b/arch/powerpc/platforms/cell/axon_msi.c @@ -212,7 +212,7 @@ static int setup_msi_msg_address(struct entry = first_pci_msi_entry(dev); for (; dn; dn = of_get_next_parent(dn)) { - if (entry->msi_attrib.is_64) { + if (entry->pci.msi_attrib.is_64) { prop = of_get_property(dn, "msi-address-64", &len); if (prop) break; --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -2154,10 +2154,10 @@ static void pnv_msi_compose_msg(struct i int rc; rc = __pnv_pci_ioda_msi_setup(phb, pdev, d->hwirq, - entry->msi_attrib.is_64, msg); + entry->pci.msi_attrib.is_64, msg); if (rc) dev_err(&pdev->dev, "Failed to setup %s-bit MSI #%ld : %d\n", - entry->msi_attrib.is_64 ? "64" : "32", d->hwirq, rc); + entry->pci.msi_attrib.is_64 ? "64" : "32", d->hwirq, rc); } /* --- a/arch/powerpc/platforms/pseries/msi.c +++ b/arch/powerpc/platforms/pseries/msi.c @@ -332,7 +332,7 @@ static int check_msix_entries(struct pci expected = 0; for_each_pci_msi_entry(entry, pdev) { - if (entry->msi_attrib.entry_nr != expected) { + if (entry->pci.msi_attrib.entry_nr != expected) { pr_debug("rtas_msi: bad MSI-X entries.\n"); return -EINVAL; } @@ -449,7 +449,7 @@ static int pseries_msi_ops_prepare(struc { struct pci_dev *pdev = to_pci_dev(dev); struct msi_desc *desc = first_pci_msi_entry(pdev); - int type = desc->msi_attrib.is_msix ? PCI_CAP_ID_MSIX : PCI_CAP_ID_MSI; + int type = desc->pci.msi_attrib.is_msix ? PCI_CAP_ID_MSIX : PCI_CAP_ID_MSI; return rtas_prepare_msi_irqs(pdev, nvec, type, arg); } @@ -580,7 +580,7 @@ static int pseries_irq_domain_alloc(stru int hwirq; int i, ret; - hwirq = rtas_query_irq_number(pci_get_pdn(pdev), desc->msi_attrib.entry_nr); + hwirq = rtas_query_irq_number(pci_get_pdn(pdev), desc->pci.msi_attrib.entry_nr); if (hwirq < 0) { dev_err(&pdev->dev, "Failed to query HW IRQ: %d\n", hwirq); return hwirq; --- a/arch/sparc/kernel/pci_msi.c +++ b/arch/sparc/kernel/pci_msi.c @@ -146,13 +146,13 @@ static int sparc64_setup_msi_irq(unsigne msiqid = pick_msiq(pbm); err = ops->msi_setup(pbm, msiqid, msi, - (entry->msi_attrib.is_64 ? 1 : 0)); + (entry->pci.msi_attrib.is_64 ? 
1 : 0)); if (err) goto out_msi_free; pbm->msi_irq_table[msi - pbm->msi_first] = *irq_p; - if (entry->msi_attrib.is_64) { + if (entry->pci.msi_attrib.is_64) { msg.address_hi = pbm->msi64_start >> 32; msg.address_lo = pbm->msi64_start & 0xffffffff; } else { --- a/arch/x86/kernel/apic/msi.c +++ b/arch/x86/kernel/apic/msi.c @@ -163,7 +163,7 @@ int pci_msi_prepare(struct irq_domain *d struct msi_desc *desc = first_pci_msi_entry(pdev); init_irq_alloc_info(arg, NULL); - if (desc->msi_attrib.is_msix) { + if (desc->pci.msi_attrib.is_msix) { arg->type = X86_IRQ_ALLOC_TYPE_PCI_MSIX; } else { arg->type = X86_IRQ_ALLOC_TYPE_PCI_MSI; --- a/arch/x86/pci/xen.c +++ b/arch/x86/pci/xen.c @@ -306,7 +306,7 @@ static int xen_initdom_setup_msi_irqs(st return -EINVAL; map_irq.table_base = pci_resource_start(dev, bir); - map_irq.entry_nr = msidesc->msi_attrib.entry_nr; + map_irq.entry_nr = msidesc->pci.msi_attrib.entry_nr; } ret = -EINVAL; @@ -398,7 +398,7 @@ static void xen_pv_teardown_msi_irqs(str { struct msi_desc *msidesc = first_pci_msi_entry(dev); - if (msidesc->msi_attrib.is_msix) + if (msidesc->pci.msi_attrib.is_msix) xen_pci_frontend_disable_msix(dev); else xen_pci_frontend_disable_msi(dev); @@ -414,7 +414,7 @@ static int xen_msi_domain_alloc_irqs(str if (WARN_ON_ONCE(!dev_is_pci(dev))) return -EINVAL; - if (first_msi_entry(dev)->msi_attrib.is_msix) + if (first_msi_entry(dev)->pci.msi_attrib.is_msix) type = PCI_CAP_ID_MSIX; else type = PCI_CAP_ID_MSI; --- a/drivers/net/wireless/ath/ath11k/pci.c +++ b/drivers/net/wireless/ath/ath11k/pci.c @@ -911,7 +911,7 @@ static int ath11k_pci_alloc_msi(struct a } ab_pci->msi_ep_base_data = msi_desc->msg.data; - if (msi_desc->msi_attrib.is_64) + if (msi_desc->pci.msi_attrib.is_64) set_bit(ATH11K_PCI_FLAG_IS_MSI_64, &ab_pci->flags); ath11k_dbg(ab, ATH11K_DBG_PCI, "msi base data is %d\n", ab_pci->msi_ep_base_data); --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -138,9 +138,9 @@ void __weak arch_restore_msi_irqs(struct static inline __attribute_const__ u32 msi_multi_mask(struct msi_desc *desc) { /* Don't shift by >= width of type */ - if (desc->msi_attrib.multi_cap >= 5) + if (desc->pci.msi_attrib.multi_cap >= 5) return 0xffffffff; - return (1 << (1 << desc->msi_attrib.multi_cap)) - 1; + return (1 << (1 << desc->pci.msi_attrib.multi_cap)) - 1; } static noinline void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set) @@ -148,14 +148,14 @@ static noinline void pci_msi_update_mask raw_spinlock_t *lock = &desc->dev->msi_lock; unsigned long flags; - if (!desc->msi_attrib.can_mask) + if (!desc->pci.msi_attrib.can_mask) return; raw_spin_lock_irqsave(lock, flags); - desc->msi_mask &= ~clear; - desc->msi_mask |= set; - pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->mask_pos, - desc->msi_mask); + desc->pci.msi_mask &= ~clear; + desc->pci.msi_mask |= set; + pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->pci.mask_pos, + desc->pci.msi_mask); raw_spin_unlock_irqrestore(lock, flags); } @@ -171,7 +171,7 @@ static inline void pci_msi_unmask(struct static inline void __iomem *pci_msix_desc_addr(struct msi_desc *desc) { - return desc->mask_base + desc->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE; + return desc->pci.mask_base + desc->pci.msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE; } /* @@ -184,27 +184,27 @@ static void pci_msix_write_vector_ctrl(s { void __iomem *desc_addr = pci_msix_desc_addr(desc); - if (desc->msi_attrib.can_mask) + if (desc->pci.msi_attrib.can_mask) writel(ctrl, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL); } static inline void 
pci_msix_mask(struct msi_desc *desc) { - desc->msix_ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT; - pci_msix_write_vector_ctrl(desc, desc->msix_ctrl); + desc->pci.msix_ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT; + pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl); /* Flush write to device */ - readl(desc->mask_base); + readl(desc->pci.mask_base); } static inline void pci_msix_unmask(struct msi_desc *desc) { - desc->msix_ctrl &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT; - pci_msix_write_vector_ctrl(desc, desc->msix_ctrl); + desc->pci.msix_ctrl &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT; + pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl); } static void __pci_msi_mask_desc(struct msi_desc *desc, u32 mask) { - if (desc->msi_attrib.is_msix) + if (desc->pci.msi_attrib.is_msix) pci_msix_mask(desc); else pci_msi_mask(desc, mask); @@ -212,7 +212,7 @@ static void __pci_msi_mask_desc(struct m static void __pci_msi_unmask_desc(struct msi_desc *desc, u32 mask) { - if (desc->msi_attrib.is_msix) + if (desc->pci.msi_attrib.is_msix) pci_msix_unmask(desc); else pci_msi_unmask(desc, mask); @@ -256,10 +256,10 @@ void __pci_read_msi_msg(struct msi_desc BUG_ON(dev->current_state != PCI_D0); - if (entry->msi_attrib.is_msix) { + if (entry->pci.msi_attrib.is_msix) { void __iomem *base = pci_msix_desc_addr(entry); - if (WARN_ON_ONCE(entry->msi_attrib.is_virtual)) + if (WARN_ON_ONCE(entry->pci.msi_attrib.is_virtual)) return; msg->address_lo = readl(base + PCI_MSIX_ENTRY_LOWER_ADDR); @@ -271,7 +271,7 @@ void __pci_read_msi_msg(struct msi_desc pci_read_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, &msg->address_lo); - if (entry->msi_attrib.is_64) { + if (entry->pci.msi_attrib.is_64) { pci_read_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, &msg->address_hi); pci_read_config_word(dev, pos + PCI_MSI_DATA_64, &data); @@ -289,12 +289,12 @@ void __pci_write_msi_msg(struct msi_desc if (dev->current_state != PCI_D0 || pci_dev_is_disconnected(dev)) { /* Don't touch the hardware now */ - } else if (entry->msi_attrib.is_msix) { + } else if (entry->pci.msi_attrib.is_msix) { void __iomem *base = pci_msix_desc_addr(entry); - u32 ctrl = entry->msix_ctrl; + u32 ctrl = entry->pci.msix_ctrl; bool unmasked = !(ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT); - if (entry->msi_attrib.is_virtual) + if (entry->pci.msi_attrib.is_virtual) goto skip; /* @@ -323,12 +323,12 @@ void __pci_write_msi_msg(struct msi_desc pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl); msgctl &= ~PCI_MSI_FLAGS_QSIZE; - msgctl |= entry->msi_attrib.multiple << 4; + msgctl |= entry->pci.msi_attrib.multiple << 4; pci_write_config_word(dev, pos + PCI_MSI_FLAGS, msgctl); pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, msg->address_lo); - if (entry->msi_attrib.is_64) { + if (entry->pci.msi_attrib.is_64) { pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, msg->address_hi); pci_write_config_word(dev, pos + PCI_MSI_DATA_64, @@ -376,9 +376,9 @@ static void free_msi_irqs(struct pci_dev pci_msi_teardown_msi_irqs(dev); list_for_each_entry_safe(entry, tmp, msi_list, list) { - if (entry->msi_attrib.is_msix) { + if (entry->pci.msi_attrib.is_msix) { if (list_is_last(&entry->list, msi_list)) - iounmap(entry->mask_base); + iounmap(entry->pci.mask_base); } list_del(&entry->list); @@ -420,7 +420,7 @@ static void __pci_restore_msi_state(stru pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); pci_msi_update_mask(entry, 0, 0); control &= ~PCI_MSI_FLAGS_QSIZE; - control |= (entry->msi_attrib.multiple << 4) | PCI_MSI_FLAGS_ENABLE; + control |= (entry->pci.msi_attrib.multiple << 4) | PCI_MSI_FLAGS_ENABLE; 
pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); } @@ -449,7 +449,7 @@ static void __pci_restore_msix_state(str arch_restore_msi_irqs(dev); for_each_pci_msi_entry(entry, dev) - pci_msix_write_vector_ctrl(entry, entry->msix_ctrl); + pci_msix_write_vector_ctrl(entry, entry->pci.msix_ctrl); pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); } @@ -481,24 +481,24 @@ msi_setup_entry(struct pci_dev *dev, int if (dev->dev_flags & PCI_DEV_FLAGS_HAS_MSI_MASKING) control |= PCI_MSI_FLAGS_MASKBIT; - entry->msi_attrib.is_msix = 0; - entry->msi_attrib.is_64 = !!(control & PCI_MSI_FLAGS_64BIT); - entry->msi_attrib.is_virtual = 0; - entry->msi_attrib.entry_nr = 0; - entry->msi_attrib.can_mask = !pci_msi_ignore_mask && + entry->pci.msi_attrib.is_msix = 0; + entry->pci.msi_attrib.is_64 = !!(control & PCI_MSI_FLAGS_64BIT); + entry->pci.msi_attrib.is_virtual = 0; + entry->pci.msi_attrib.entry_nr = 0; + entry->pci.msi_attrib.can_mask = !pci_msi_ignore_mask && !!(control & PCI_MSI_FLAGS_MASKBIT); - entry->msi_attrib.default_irq = dev->irq; /* Save IOAPIC IRQ */ - entry->msi_attrib.multi_cap = (control & PCI_MSI_FLAGS_QMASK) >> 1; - entry->msi_attrib.multiple = ilog2(__roundup_pow_of_two(nvec)); + entry->pci.msi_attrib.default_irq = dev->irq; /* Save IOAPIC IRQ */ + entry->pci.msi_attrib.multi_cap = (control & PCI_MSI_FLAGS_QMASK) >> 1; + entry->pci.msi_attrib.multiple = ilog2(__roundup_pow_of_two(nvec)); if (control & PCI_MSI_FLAGS_64BIT) - entry->mask_pos = dev->msi_cap + PCI_MSI_MASK_64; + entry->pci.mask_pos = dev->msi_cap + PCI_MSI_MASK_64; else - entry->mask_pos = dev->msi_cap + PCI_MSI_MASK_32; + entry->pci.mask_pos = dev->msi_cap + PCI_MSI_MASK_32; /* Save the initial mask status */ - if (entry->msi_attrib.can_mask) - pci_read_config_dword(dev, entry->mask_pos, &entry->msi_mask); + if (entry->pci.msi_attrib.can_mask) + pci_read_config_dword(dev, entry->pci.mask_pos, &entry->pci.msi_mask); out: kfree(masks); @@ -630,26 +630,26 @@ static int msix_setup_entries(struct pci goto out; } - entry->msi_attrib.is_msix = 1; - entry->msi_attrib.is_64 = 1; + entry->pci.msi_attrib.is_msix = 1; + entry->pci.msi_attrib.is_64 = 1; if (entries) - entry->msi_attrib.entry_nr = entries[i].entry; + entry->pci.msi_attrib.entry_nr = entries[i].entry; else - entry->msi_attrib.entry_nr = i; + entry->pci.msi_attrib.entry_nr = i; - entry->msi_attrib.is_virtual = - entry->msi_attrib.entry_nr >= vec_count; + entry->pci.msi_attrib.is_virtual = + entry->pci.msi_attrib.entry_nr >= vec_count; - entry->msi_attrib.can_mask = !pci_msi_ignore_mask && - !entry->msi_attrib.is_virtual; + entry->pci.msi_attrib.can_mask = !pci_msi_ignore_mask && + !entry->pci.msi_attrib.is_virtual; - entry->msi_attrib.default_irq = dev->irq; - entry->mask_base = base; + entry->pci.msi_attrib.default_irq = dev->irq; + entry->pci.mask_base = base; - if (entry->msi_attrib.can_mask) { + if (entry->pci.msi_attrib.can_mask) { addr = pci_msix_desc_addr(entry); - entry->msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL); + entry->pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL); } list_add_tail(&entry->list, dev_to_msi_list(&dev->dev)); @@ -874,7 +874,7 @@ static void pci_msi_shutdown(struct pci_ pci_msi_unmask(desc, msi_multi_mask(desc)); /* Restore dev->irq to its default pin-assertion IRQ */ - dev->irq = desc->msi_attrib.default_irq; + dev->irq = desc->pci.msi_attrib.default_irq; pcibios_alloc_irq(dev); } @@ -1203,7 +1203,7 @@ int pci_irq_vector(struct pci_dev *dev, struct msi_desc *entry; for_each_pci_msi_entry(entry, dev) { - if 
(entry->msi_attrib.entry_nr == nr) + if (entry->pci.msi_attrib.entry_nr == nr) return entry->irq; } WARN_ON_ONCE(1); @@ -1242,7 +1242,7 @@ const struct cpumask *pci_irq_get_affini struct msi_desc *entry; for_each_pci_msi_entry(entry, dev) { - if (entry->msi_attrib.entry_nr == nr) + if (entry->pci.msi_attrib.entry_nr == nr) return &entry->affinity->mask; } WARN_ON_ONCE(1); @@ -1295,14 +1295,14 @@ static irq_hw_number_t pci_msi_domain_ca { struct pci_dev *dev = msi_desc_to_pci_dev(desc); - return (irq_hw_number_t)desc->msi_attrib.entry_nr | + return (irq_hw_number_t)desc->pci.msi_attrib.entry_nr | pci_dev_id(dev) << 11 | (pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27; } static inline bool pci_msi_desc_is_multi_msi(struct msi_desc *desc) { - return !desc->msi_attrib.is_msix && desc->nvec_used > 1; + return !desc->pci.msi_attrib.is_msix && desc->nvec_used > 1; } /** @@ -1326,7 +1326,7 @@ int pci_msi_domain_check_cap(struct irq_ if (pci_msi_desc_is_multi_msi(desc) && !(info->flags & MSI_FLAG_MULTI_PCI_MSI)) return 1; - else if (desc->msi_attrib.is_msix && !(info->flags & MSI_FLAG_PCI_MSIX)) + else if (desc->pci.msi_attrib.is_msix && !(info->flags & MSI_FLAG_PCI_MSIX)) return -ENOTSUPP; return 0; --- a/drivers/pci/xen-pcifront.c +++ b/drivers/pci/xen-pcifront.c @@ -263,7 +263,7 @@ static int pci_frontend_enable_msix(stru i = 0; for_each_pci_msi_entry(entry, dev) { - op.msix_entries[i].entry = entry->msi_attrib.entry_nr; + op.msix_entries[i].entry = entry->pci.msi_attrib.entry_nr; /* Vector is useless at this point. */ op.msix_entries[i].vector = -1; i++; --- a/include/linux/msi.h +++ b/include/linux/msi.h @@ -69,6 +69,42 @@ typedef void (*irq_write_msi_msg_t)(stru struct msi_msg *msg); /** + * pci_msi_desc - PCI/MSI specific MSI descriptor data + * + * @msi_mask: [PCI MSI] MSI cached mask bits + * @msix_ctrl: [PCI MSI-X] MSI-X cached per vector control bits + * @is_msix: [PCI MSI/X] True if MSI-X + * @multiple: [PCI MSI/X] log2 num of messages allocated + * @multi_cap: [PCI MSI/X] log2 num of messages supported + * @can_mask: [PCI MSI/X] Masking supported? + * @is_64: [PCI MSI/X] Address size: 0=32bit 1=64bit + * @entry_nr: [PCI MSI/X] Entry which is described by this descriptor + * @default_irq:[PCI MSI/X] The default pre-assigned non-MSI irq + * @mask_pos: [PCI MSI] Mask register position + * @mask_base: [PCI MSI-X] Mask register base address + */ +struct pci_msi_desc { + union { + u32 msi_mask; + u32 msix_ctrl; + }; + struct { + u8 is_msix : 1; + u8 multiple : 3; + u8 multi_cap : 3; + u8 can_mask : 1; + u8 is_64 : 1; + u8 is_virtual : 1; + u16 entry_nr; + unsigned default_irq; + } msi_attrib; + union { + u8 mask_pos; + void __iomem *mask_base; + }; +}; + +/** * platform_msi_desc - Platform device specific msi descriptor data * @msi_priv_data: Pointer to platform private data * @msi_index: The index of the MSI descriptor for multi MSI @@ -107,17 +143,7 @@ struct ti_sci_inta_msi_desc { * address or data changes * @write_msi_msg_data: Data parameter for the callback. * - * @msi_mask: [PCI MSI] MSI cached mask bits - * @msix_ctrl: [PCI MSI-X] MSI-X cached per vector control bits - * @is_msix: [PCI MSI/X] True if MSI-X - * @multiple: [PCI MSI/X] log2 num of messages allocated - * @multi_cap: [PCI MSI/X] log2 num of messages supported - * @maskbit: [PCI MSI/X] Mask-Pending bit supported? 
- * @is_64: [PCI MSI/X] Address size: 0=32bit 1=64bit - * @entry_nr: [PCI MSI/X] Entry which is described by this descriptor - * @default_irq:[PCI MSI/X] The default pre-assigned non-MSI irq - * @mask_pos: [PCI MSI] Mask register position - * @mask_base: [PCI MSI-X] Mask register base address + * @pci: [PCI] PCI speficic msi descriptor data * @platform: [platform] Platform device specific msi descriptor data * @fsl_mc: [fsl-mc] FSL MC device specific msi descriptor data * @inta: [INTA] TISCI based INTA specific msi descriptor data @@ -138,38 +164,10 @@ struct msi_desc { void *write_msi_msg_data; union { - /* PCI MSI/X specific data */ - struct { - union { - u32 msi_mask; - u32 msix_ctrl; - }; - struct { - u8 is_msix : 1; - u8 multiple : 3; - u8 multi_cap : 3; - u8 can_mask : 1; - u8 is_64 : 1; - u8 is_virtual : 1; - u16 entry_nr; - unsigned default_irq; - } msi_attrib; - union { - u8 mask_pos; - void __iomem *mask_base; - }; - }; - - /* - * Non PCI variants add their data structure here. New - * entries need to use a named structure. We want - * proper name spaces for this. The PCI part is - * anonymous for now as it would require an immediate - * tree wide cleanup. - */ - struct platform_msi_desc platform; - struct fsl_mc_msi_desc fsl_mc; - struct ti_sci_inta_msi_desc inta; + struct pci_msi_desc pci; + struct platform_msi_desc platform; + struct fsl_mc_msi_desc fsl_mc; + struct ti_sci_inta_msi_desc inta; }; }; --- a/kernel/irq/msi.c +++ b/kernel/irq/msi.c @@ -91,7 +91,7 @@ static ssize_t msi_mode_show(struct devi return -ENODEV; if (dev_is_pci(dev)) - is_msix = entry->msi_attrib.is_msix; + is_msix = entry->pci.msi_attrib.is_msix; return sysfs_emit(buf, "%s\n", is_msix ? "msix" : "msi"); } @@ -535,7 +535,7 @@ static bool msi_check_reservation_mode(s * masking and MSI does so when the can_mask attribute is set. 
*/ desc = first_msi_entry(dev); - return desc->msi_attrib.is_msix || desc->msi_attrib.can_mask; + return desc->pci.msi_attrib.is_msix || desc->pci.msi_attrib.can_mask; } int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev, From patchwork Mon Dec 6 22:27:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564285 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=hq7ndPqC; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=QKsxVt2/; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J3r2pgVz9sCD for ; Tue, 7 Dec 2021 09:28:24 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357171AbhLFWbs (ORCPT ); Mon, 6 Dec 2021 17:31:48 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45658 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356744AbhLFWbM (ORCPT ); Mon, 6 Dec 2021 17:31:12 -0500 Message-ID: <20211206210224.429625690@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829661; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=0nldqX+BIdCQSZe/qG73eGbIrco+/ubuHfsykwvcOLw=; b=hq7ndPqCIkYDSqc3aknpP0229cYCDRhoC9zQSCdYk5vM3i10WTZ7av+6rlEAZ6u05mIkbr XQDS6b4vGsXFqVoC8oP4KMoWd1eM06kWded8LIrvh6OjRqXQSape8awI79GsZVK8u9eX85 aD0FDvGiqwvqK/Cp05pH8gDNLFxywYdDcwRzqFrfA9dJF6ZcS2GW43vxTzQkYENzlCGZky BY7obUIDVSkFmUZxkFtoCauQiQu+i1InDE5QQr8JmsWmr3vI1aYNQ3VOuIyKzaHGlqfRLM 9QjgPuj7fL5uEDouHN+Uw0zGzemm1NJG4B4f588z9m3byoVyCwYZWW8vxb8wOA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829661; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=0nldqX+BIdCQSZe/qG73eGbIrco+/ubuHfsykwvcOLw=; b=QKsxVt2/sl1syiew/XZKhTW9zsEcW7PxttjhPWSCOKgIZScBCrlr47vPlSJl0Ltkx0XjvV If8/ySJn0kVVjADg== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Wei Liu , x86@kernel.org, linux-hyperv@vger.kernel.org, Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Juergen Gross , Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 11/23] x86/hyperv: Refactor hv_msi_domain_free_irqs() References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:41 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org No point in looking up things over and over. 
Just look up the associated irq data and work from there. No functional change. Signed-off-by: Thomas Gleixner Reviewed-by: Jason Gunthorpe Acked-by: Wei Liu Cc: x86@kernel.org Cc: linux-hyperv@vger.kernel.org --- arch/x86/hyperv/irqdomain.c | 55 +++++++++++++------------------------------- 1 file changed, 17 insertions(+), 38 deletions(-) --- a/arch/x86/hyperv/irqdomain.c +++ b/arch/x86/hyperv/irqdomain.c @@ -253,64 +253,43 @@ static int hv_unmap_msi_interrupt(struct return hv_unmap_interrupt(hv_build_pci_dev_id(dev).as_uint64, old_entry); } -static void hv_teardown_msi_irq_common(struct pci_dev *dev, struct msi_desc *msidesc, int irq) +static void hv_teardown_msi_irq(struct pci_dev *dev, struct irq_data *irqd) { - u64 status; struct hv_interrupt_entry old_entry; - struct irq_desc *desc; - struct irq_data *data; struct msi_msg msg; + u64 status; - desc = irq_to_desc(irq); - if (!desc) { - pr_debug("%s: no irq desc\n", __func__); - return; - } - - data = &desc->irq_data; - if (!data) { - pr_debug("%s: no irq data\n", __func__); - return; - } - - if (!data->chip_data) { + if (!irqd->chip_data) { pr_debug("%s: no chip data\n!", __func__); return; } - old_entry = *(struct hv_interrupt_entry *)data->chip_data; + old_entry = *(struct hv_interrupt_entry *)irqd->chip_data; entry_to_msi_msg(&old_entry, &msg); - kfree(data->chip_data); - data->chip_data = NULL; + kfree(irqd->chip_data); + irqd->chip_data = NULL; status = hv_unmap_msi_interrupt(dev, &old_entry); - if (status != HV_STATUS_SUCCESS) { + if (status != HV_STATUS_SUCCESS) pr_err("%s: hypercall failed, status %lld\n", __func__, status); - return; - } } -static void hv_msi_domain_free_irqs(struct irq_domain *domain, struct device *dev) +static void hv_msi_free_irq(struct irq_domain *domain, + struct msi_domain_info *info, unsigned int virq) { - int i; - struct msi_desc *entry; - struct pci_dev *pdev; + struct irq_data *irqd = irq_get_irq_data(virq); + struct msi_desc *desc; - if (WARN_ON_ONCE(!dev_is_pci(dev))) + if (!irqd) return; - pdev = to_pci_dev(dev); + desc = irq_data_get_msi_desc(irqd); + if (!desc || !desc->irq || WARN_ON_ONCE(!dev_is_pci(desc->dev))) + return; - for_each_pci_msi_entry(entry, pdev) { - if (entry->irq) { - for (i = 0; i < entry->nvec_used; i++) { - hv_teardown_msi_irq_common(pdev, entry, entry->irq + i); - irq_domain_free_irqs(entry->irq + i, 1); - } - } - } + hv_teardown_msi_irq(to_pci_dev(desc->dev), irqd); } /* @@ -329,7 +308,7 @@ static struct irq_chip hv_pci_msi_contro }; static struct msi_domain_ops pci_msi_domain_ops = { - .domain_free_irqs = hv_msi_domain_free_irqs, + .msi_free = hv_msi_free_irq, .msi_prepare = pci_msi_prepare, }; From patchwork Mon Dec 6 22:27:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564279 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=MTZjX5pN; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=TNvJwf8M; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J3g1VRDz9s1l for ; Tue, 7 Dec 2021 09:28:15 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357925AbhLFWbi (ORCPT ); Mon, 6 Dec 2021 17:31:38 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45684 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356945AbhLFWbO (ORCPT ); Mon, 6 Dec 2021 17:31:14 -0500 Message-ID: <20211206210224.485668098@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829663; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=72+QGnlfGFYAah1BUmp4cO4oz9eONu1lmUkDHP/6Oes=; b=MTZjX5pNCpaLIFSorwc1uUDuJV8DuheJ4/PGRwS1V0RSzJLru3QK+UoDdKktWtJ56DE+pU XMpfnvWnX2DR2KWbp528BWcbOq9/jHSo0OnwKIJ8pTjKPXuby1tve6XYSQnjznzx4p1fBU w9WdmdFGIspyqZDGRSwYdx6WrLg3PlS+GMjjb4rlRHfSZXzCU3kz76eVMP7TXFSr/bxAqB hknvBA5hjCwrysbiAFo8k4a+IMj8BlgkaRUzgsM4eFkl42Z3vu1UzIfASxszOhpin7rnmz 04sigZF0h+k3QB7F9/sJJGfxNvzCN++bfCAbCCj0/SAXu469vIUDg9AhAd93Ag== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829663; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=72+QGnlfGFYAah1BUmp4cO4oz9eONu1lmUkDHP/6Oes=; b=TNvJwf8MfIZ37eFOPxUxrqpZTcApvnb0v/Ziy52Uyn6Q/lFKZMvbhi8K0X/S9Y8AgRN6JC cp1HJaq7RkboAqDA== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , x86@kernel.org, xen-devel@lists.xenproject.org, Christian Borntraeger , Heiko Carstens , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org Subject: [patch V2 12/23] PCI/MSI: Make arch_restore_msi_irqs() less horrible. References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:42 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Make arch_restore_msi_irqs() return a boolean which indicates whether the core code should restore the MSI message or not. Get rid of the indirection in x86. 
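As a rough sketch of the new contract (abridged from the hunks below): the weak default returns true so the core code keeps writing the cached MSI message itself, while an architecture that restores the messages by other means (e.g. Xen initial domain) returns false and the core skips the write.

	#include <linux/msi.h>
	#include <linux/pci.h>

	/* Weak default: the core should write the cached MSI message. */
	bool __weak arch_restore_msi_irqs(struct pci_dev *dev)
	{
		return true;
	}

	/*
	 * Core side, abridged from __pci_restore_msix_state() as changed
	 * below; the function name here is only for the sketch.
	 */
	static void example_restore_msix(struct pci_dev *dev)
	{
		struct msi_desc *entry;
		bool write_msg = arch_restore_msi_irqs(dev);

		for_each_pci_msi_entry(entry, dev) {
			if (write_msg)
				__pci_write_msi_msg(entry, &entry->msg);
		}
	}
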
Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Cc: x86@kernel.org Cc: xen-devel@lists.xenproject.org Cc: Christian Borntraeger Cc: Heiko Carstens Acked-by: Bjorn Helgaas # PCI --- arch/s390/pci/pci_irq.c | 4 +- arch/x86/include/asm/x86_init.h | 6 --- arch/x86/include/asm/xen/hypervisor.h | 8 +++++ arch/x86/kernel/apic/msi.c | 6 +++ arch/x86/kernel/x86_init.c | 12 ------- arch/x86/pci/xen.c | 13 ++++---- drivers/pci/msi.c | 54 +++++++++++----------------------- include/linux/msi.h | 7 +--- 8 files changed, 45 insertions(+), 65 deletions(-) --- a/arch/s390/pci/pci_irq.c +++ b/arch/s390/pci/pci_irq.c @@ -387,13 +387,13 @@ void arch_teardown_msi_irqs(struct pci_d airq_iv_free(zpci_ibv[0], zdev->msi_first_bit, zdev->msi_nr_irqs); } -void arch_restore_msi_irqs(struct pci_dev *pdev) +bool arch_restore_msi_irqs(struct pci_dev *pdev) { struct zpci_dev *zdev = to_zpci(pdev); if (!zdev->irqs_registered) zpci_set_irq(zdev); - default_restore_msi_irqs(pdev); + return true; } static struct airq_struct zpci_airq = { --- a/arch/x86/include/asm/x86_init.h +++ b/arch/x86/include/asm/x86_init.h @@ -289,12 +289,6 @@ struct x86_platform_ops { struct x86_hyper_runtime hyper; }; -struct pci_dev; - -struct x86_msi_ops { - void (*restore_msi_irqs)(struct pci_dev *dev); -}; - struct x86_apic_ops { unsigned int (*io_apic_read) (unsigned int apic, unsigned int reg); void (*restore)(void); --- a/arch/x86/include/asm/xen/hypervisor.h +++ b/arch/x86/include/asm/xen/hypervisor.h @@ -57,6 +57,14 @@ static inline bool __init xen_x2apic_par } #endif +struct pci_dev; + +#ifdef CONFIG_XEN_DOM0 +bool xen_initdom_restore_msi(struct pci_dev *dev); +#else +static inline bool xen_initdom_restore_msi(struct pci_dev *dev) { return true; } +#endif + #ifdef CONFIG_HOTPLUG_CPU void xen_arch_register_cpu(int num); void xen_arch_unregister_cpu(int num); --- a/arch/x86/kernel/apic/msi.c +++ b/arch/x86/kernel/apic/msi.c @@ -19,6 +19,7 @@ #include #include #include +#include struct irq_domain *x86_pci_msi_default_domain __ro_after_init; @@ -345,3 +346,8 @@ void dmar_free_hwirq(int irq) irq_domain_free_irqs(irq, 1); } #endif + +bool arch_restore_msi_irqs(struct pci_dev *dev) +{ + return xen_initdom_restore_msi(dev); +} --- a/arch/x86/kernel/x86_init.c +++ b/arch/x86/kernel/x86_init.c @@ -145,18 +145,6 @@ struct x86_platform_ops x86_platform __r EXPORT_SYMBOL_GPL(x86_platform); -#if defined(CONFIG_PCI_MSI) -struct x86_msi_ops x86_msi __ro_after_init = { - .restore_msi_irqs = default_restore_msi_irqs, -}; - -/* MSI arch specific hooks */ -void arch_restore_msi_irqs(struct pci_dev *dev) -{ - x86_msi.restore_msi_irqs(dev); -} -#endif - struct x86_apic_ops x86_apic_ops __ro_after_init = { .io_apic_read = native_io_apic_read, .restore = native_restore_boot_irq_mode, --- a/arch/x86/pci/xen.c +++ b/arch/x86/pci/xen.c @@ -351,10 +351,13 @@ static int xen_initdom_setup_msi_irqs(st return ret; } -static void xen_initdom_restore_msi_irqs(struct pci_dev *dev) +bool xen_initdom_restore_msi(struct pci_dev *dev) { int ret = 0; + if (!xen_initial_domain()) + return true; + if (pci_seg_supported) { struct physdev_pci_device restore_ext; @@ -375,10 +378,10 @@ static void xen_initdom_restore_msi_irqs ret = HYPERVISOR_physdev_op(PHYSDEVOP_restore_msi, &restore); WARN(ret && ret != -ENOSYS, "restore_msi -> %d\n", ret); } + return false; } #else /* CONFIG_XEN_PV_DOM0 */ #define xen_initdom_setup_msi_irqs NULL -#define xen_initdom_restore_msi_irqs NULL #endif /* !CONFIG_XEN_PV_DOM0 */ static void xen_teardown_msi_irqs(struct 
pci_dev *dev) @@ -466,12 +469,10 @@ static __init struct irq_domain *xen_cre static __init void xen_setup_pci_msi(void) { if (xen_pv_domain()) { - if (xen_initial_domain()) { + if (xen_initial_domain()) xen_msi_ops.setup_msi_irqs = xen_initdom_setup_msi_irqs; - x86_msi.restore_msi_irqs = xen_initdom_restore_msi_irqs; - } else { + else xen_msi_ops.setup_msi_irqs = xen_setup_msi_irqs; - } xen_msi_ops.teardown_msi_irqs = xen_pv_teardown_msi_irqs; pci_msi_ignore_mask = 1; } else if (xen_hvm_domain()) { --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -106,29 +106,6 @@ void __weak arch_teardown_msi_irqs(struc } #endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */ -static void default_restore_msi_irq(struct pci_dev *dev, int irq) -{ - struct msi_desc *entry; - - entry = NULL; - if (dev->msix_enabled) { - for_each_pci_msi_entry(entry, dev) { - if (irq == entry->irq) - break; - } - } else if (dev->msi_enabled) { - entry = irq_get_msi_desc(irq); - } - - if (entry) - __pci_write_msi_msg(entry, &entry->msg); -} - -void __weak arch_restore_msi_irqs(struct pci_dev *dev) -{ - return default_restore_msi_irqs(dev); -} - /* * PCI 2.3 does not specify mask bits for each MSI interrupt. Attempting to * mask all MSI interrupts by clearing the MSI enable bit does not work @@ -242,14 +219,6 @@ void pci_msi_unmask_irq(struct irq_data } EXPORT_SYMBOL_GPL(pci_msi_unmask_irq); -void default_restore_msi_irqs(struct pci_dev *dev) -{ - struct msi_desc *entry; - - for_each_pci_msi_entry(entry, dev) - default_restore_msi_irq(dev, entry->irq); -} - void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg) { struct pci_dev *dev = msi_desc_to_pci_dev(entry); @@ -403,10 +372,19 @@ static void pci_msi_set_enable(struct pc pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); } +/* + * Architecture override returns true when the PCI MSI message should be + * written by the generic restore function. + */ +bool __weak arch_restore_msi_irqs(struct pci_dev *dev) +{ + return true; +} + static void __pci_restore_msi_state(struct pci_dev *dev) { - u16 control; struct msi_desc *entry; + u16 control; if (!dev->msi_enabled) return; @@ -415,7 +393,8 @@ static void __pci_restore_msi_state(stru pci_intx_for_msi(dev, 0); pci_msi_set_enable(dev, 0); - arch_restore_msi_irqs(dev); + if (arch_restore_msi_irqs(dev)) + __pci_write_msi_msg(entry, &entry->msg); pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); pci_msi_update_mask(entry, 0, 0); @@ -437,6 +416,7 @@ static void pci_msix_clear_and_set_ctrl( static void __pci_restore_msix_state(struct pci_dev *dev) { struct msi_desc *entry; + bool write_msg; if (!dev->msix_enabled) return; @@ -447,9 +427,13 @@ static void __pci_restore_msix_state(str pci_msix_clear_and_set_ctrl(dev, 0, PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL); - arch_restore_msi_irqs(dev); - for_each_pci_msi_entry(entry, dev) + write_msg = arch_restore_msi_irqs(dev); + + for_each_pci_msi_entry(entry, dev) { + if (write_msg) + __pci_write_msi_msg(entry, &entry->msg); pci_msix_write_vector_ctrl(entry, entry->pci.msix_ctrl); + } pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); } --- a/include/linux/msi.h +++ b/include/linux/msi.h @@ -273,11 +273,10 @@ static inline void arch_teardown_msi_irq #endif /* - * The restore hooks are still available as they are useful even - * for fully irq domain based setups. Courtesy to XEN/X86. + * The restore hook is still available even for fully irq domain based + * setups. Courtesy to XEN/X86. 
*/ -void arch_restore_msi_irqs(struct pci_dev *dev); -void default_restore_msi_irqs(struct pci_dev *dev); +bool arch_restore_msi_irqs(struct pci_dev *dev); #ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN From patchwork Mon Dec 6 22:27:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564273 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=pnGIDteY; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=RuOHGkoA; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J3K3MYjz9sCD for ; Tue, 7 Dec 2021 09:27:57 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357483AbhLFWbX (ORCPT ); Mon, 6 Dec 2021 17:31:23 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45918 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1357095AbhLFWbP (ORCPT ); Mon, 6 Dec 2021 17:31:15 -0500 Message-ID: <20211206210224.539281124@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829665; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=fpyg7awX8dijQzBRp+WIS78Jmos3C/8mTMn+TSrhxD8=; b=pnGIDteYP84F0igC2k+ACUhibYQ3881OhRiiuaVIOBjTshXhjGeVdgZAA/lcK5sjvLEMoj ROvDFYYNJX0JhR2QOKUEjXyWb04lLq7PP9xV81Z0D1FytvGg8OABck9JmC/ylB+YU2AvRJ PZVaY5Ia3WuXqX49dyC//GQiLcA+5A6GZy5/1+M3d4XVPgxEHo1T/dGhvFuUZ1a4SJvQjn ICbirvZcyT0c9k1KJC4jGdKAhnGZprc4Xqsi6XNBDU2/0hwWf55EV2aY3VwLmgUvxv9n/d Nfe/m+FudbplubW4O/Khqp8m6T0J7JnGYcrRL/PejELQssL7PtUMHvHQVkjmhA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829665; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=fpyg7awX8dijQzBRp+WIS78Jmos3C/8mTMn+TSrhxD8=; b=RuOHGkoAA1Vj7KuipSnVapOZhwsUvKElxjyliNOso5sc0g+RXjuUG8oJb8SOmIunMIh0Pt uWzxF181jxyPh3AQ== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 13/23] PCI/MSI: Cleanup include zoo References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:44 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Get rid of the pile of unneeded includes which accumulated over time. 
Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Acked-by: Bjorn Helgaas --- V2: Address build fail on powerpc - Cedric --- drivers/pci/msi.c | 16 ++++------------ 1 file changed, 4 insertions(+), 12 deletions(-) --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -7,22 +7,14 @@ * Copyright (C) 2016 Christoph Hellwig. */ +#include #include -#include -#include -#include #include -#include -#include -#include -#include -#include -#include -#include -#include -#include +#include #include +#include #include +#include #include "pci.h" From patchwork Mon Dec 6 22:27:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564271 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=o92QiOvG; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=1tAEn67I; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J3H4VPXz9s1l for ; Tue, 7 Dec 2021 09:27:55 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357460AbhLFWbW (ORCPT ); Mon, 6 Dec 2021 17:31:22 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45612 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356824AbhLFWbR (ORCPT ); Mon, 6 Dec 2021 17:31:17 -0500 Message-ID: <20211206210224.600351129@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829666; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=C4aqXfAyJVKaI/8MMq5VzC8OB/kEeYXdYvUuhO7tMOc=; b=o92QiOvGzQZzHfMpzPjv7r4xBNXDoXGJJvsICsdexRSxr3AhYBn+Fzw5huMDLf3tG2MlDP WPSsWv+ttwoXNCO+kVRoYtBXd2056cZn+SXvjveHEIGW4wLgMSNBAD+EM1KCcz3isX+VnT 7oA+S94AQMnlPWXc6OAJe8Vh+beStrB/fVp0eysUbozportvEaGqCfzk+tSiRYaAb9ydmC tUqitzzV2rJq9he1vqAIx8of13iw8IY3uTwV+VW+qoDZuhKvoTgvmfg6uyCTUe4lGxMUHD D8um59x3FnVAH2aKssNe992rOhVmHhCOdInM7HatkhXksCk91x8Z5YuFrRqVxQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829666; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=C4aqXfAyJVKaI/8MMq5VzC8OB/kEeYXdYvUuhO7tMOc=; b=1tAEn67IPG8WXGNZydrODGd+7PMJ5gxXdDqFiaHVsGUGG9TZGgvwu2VK3mH3VYJ+XD3fmr KDMRzr5BxWyF64CA== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens 
Subject: [patch V2 14/23] PCI/MSI: Make msix_update_entries() smarter References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:46 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org No need to walk the descriptors and check for each one whether the entries pointer function argument is NULL. Do it once. Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Reviewed-by: Greg Kroah-Hartman Acked-by: Bjorn Helgaas --- drivers/pci/msi.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -642,8 +642,8 @@ static void msix_update_entries(struct p { struct msi_desc *entry; - for_each_pci_msi_entry(entry, dev) { - if (entries) { + if (entries) { + for_each_pci_msi_entry(entry, dev) { entries->vector = entry->irq; entries++; } From patchwork Mon Dec 6 22:27:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564277 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=DD7Z+qzS; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=J/QydmuG; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J3Y2Bb1z9s1l for ; Tue, 7 Dec 2021 09:28:09 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357807AbhLFWbg (ORCPT ); Mon, 6 Dec 2021 17:31:36 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45658 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356734AbhLFWbU (ORCPT ); Mon, 6 Dec 2021 17:31:20 -0500 Message-ID: <20211206210224.655043033@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829669; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=hpRKFVK8c6sDVWghKmB03eN3IVYFpF158rP8c0DN+zU=; b=DD7Z+qzS1P/gjNNk2znBKgd9uRGSqVR88yvssnFC0CZU/afuXistxfer8K0hwXIoB1c4Jn PqRrEAglAqx367yBrLe6dup6ML0/Id/g9Ml0/rdwgF1xSYCegVtQCz7W3sUZ0ffbFQ2DJn /lj6lp1L1s3O5iAnCOo6SpNl5SAcpT1t5/bUm/1Tuo2g2WeUKVYgK2cFKSl+DAquVOKovj 6K0NSK5RUuvpJ+E4VaPJ7BMrc3L4ka5XNf7kc+g9QTNbi9fHQ0tRV7tqx7g7xh1c4NYYMw q80yD/5r4rTIjRE+pFImF2AtyTutmLOm8jVVOo4zggySWH9XUTdZwWs4NL56mA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829669; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=hpRKFVK8c6sDVWghKmB03eN3IVYFpF158rP8c0DN+zU=; b=J/QydmuG4kUWmq0qp7FRGdfW6TFqRC4+PJUo1op2aZStVi39MK/MNRVQ3vj1evJCECiHKX YSFYnk5fQDetOUCw== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin 
Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 15/23] PCI/MSI: Move code into a separate directory References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:47 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org msi.c is getting larger and really could do with a splitup. Move it into it's own directory to prepare for that. Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Acked-by: Bjorn Helgaas --- Documentation/driver-api/pci/pci.rst | 2 drivers/pci/Makefile | 3 drivers/pci/msi.c | 1532 ----------------------------------- drivers/pci/msi/Makefile | 4 drivers/pci/msi/msi.c | 1532 +++++++++++++++++++++++++++++++++++ 5 files changed, 1539 insertions(+), 1534 deletions(-) --- a/Documentation/driver-api/pci/pci.rst +++ b/Documentation/driver-api/pci/pci.rst @@ -13,7 +13,7 @@ PCI Support Library .. kernel-doc:: drivers/pci/search.c :export: -.. kernel-doc:: drivers/pci/msi.c +.. kernel-doc:: drivers/pci/msi/msi.c :export: .. kernel-doc:: drivers/pci/bus.c --- a/drivers/pci/Makefile +++ b/drivers/pci/Makefile @@ -5,8 +5,9 @@ obj-$(CONFIG_PCI) += access.o bus.o probe.o host-bridge.o \ remove.o pci.o pci-driver.o search.o \ pci-sysfs.o rom.o setup-res.o irq.o vpd.o \ - setup-bus.o vc.o mmap.o setup-irq.o msi.o + setup-bus.o vc.o mmap.o setup-irq.o +obj-$(CONFIG_PCI) += msi/ obj-$(CONFIG_PCI) += pcie/ ifdef CONFIG_PCI --- a/drivers/pci/msi.c +++ /dev/null @@ -1,1532 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * PCI Message Signaled Interrupt (MSI) - * - * Copyright (C) 2003-2004 Intel - * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com) - * Copyright (C) 2016 Christoph Hellwig. 
- */ - -#include -#include -#include -#include -#include -#include -#include -#include - -#include "pci.h" - -#ifdef CONFIG_PCI_MSI - -static int pci_msi_enable = 1; -int pci_msi_ignore_mask; - -#define msix_table_size(flags) ((flags & PCI_MSIX_FLAGS_QSIZE) + 1) - -#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN -static int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) -{ - struct irq_domain *domain; - - domain = dev_get_msi_domain(&dev->dev); - if (domain && irq_domain_is_hierarchy(domain)) - return msi_domain_alloc_irqs(domain, &dev->dev, nvec); - - return arch_setup_msi_irqs(dev, nvec, type); -} - -static void pci_msi_teardown_msi_irqs(struct pci_dev *dev) -{ - struct irq_domain *domain; - - domain = dev_get_msi_domain(&dev->dev); - if (domain && irq_domain_is_hierarchy(domain)) - msi_domain_free_irqs(domain, &dev->dev); - else - arch_teardown_msi_irqs(dev); -} -#else -#define pci_msi_setup_msi_irqs arch_setup_msi_irqs -#define pci_msi_teardown_msi_irqs arch_teardown_msi_irqs -#endif - -#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS -/* Arch hooks */ -int __weak arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc) -{ - return -EINVAL; -} - -void __weak arch_teardown_msi_irq(unsigned int irq) -{ -} - -int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) -{ - struct msi_desc *entry; - int ret; - - /* - * If an architecture wants to support multiple MSI, it needs to - * override arch_setup_msi_irqs() - */ - if (type == PCI_CAP_ID_MSI && nvec > 1) - return 1; - - for_each_pci_msi_entry(entry, dev) { - ret = arch_setup_msi_irq(dev, entry); - if (ret < 0) - return ret; - if (ret > 0) - return -ENOSPC; - } - - return 0; -} - -void __weak arch_teardown_msi_irqs(struct pci_dev *dev) -{ - int i; - struct msi_desc *entry; - - for_each_pci_msi_entry(entry, dev) - if (entry->irq) - for (i = 0; i < entry->nvec_used; i++) - arch_teardown_msi_irq(entry->irq + i); -} -#endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */ - -/* - * PCI 2.3 does not specify mask bits for each MSI interrupt. Attempting to - * mask all MSI interrupts by clearing the MSI enable bit does not work - * reliably as devices without an INTx disable bit will then generate a - * level IRQ which will never be cleared. - */ -static inline __attribute_const__ u32 msi_multi_mask(struct msi_desc *desc) -{ - /* Don't shift by >= width of type */ - if (desc->pci.msi_attrib.multi_cap >= 5) - return 0xffffffff; - return (1 << (1 << desc->pci.msi_attrib.multi_cap)) - 1; -} - -static noinline void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set) -{ - raw_spinlock_t *lock = &desc->dev->msi_lock; - unsigned long flags; - - if (!desc->pci.msi_attrib.can_mask) - return; - - raw_spin_lock_irqsave(lock, flags); - desc->pci.msi_mask &= ~clear; - desc->pci.msi_mask |= set; - pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->pci.mask_pos, - desc->pci.msi_mask); - raw_spin_unlock_irqrestore(lock, flags); -} - -static inline void pci_msi_mask(struct msi_desc *desc, u32 mask) -{ - pci_msi_update_mask(desc, 0, mask); -} - -static inline void pci_msi_unmask(struct msi_desc *desc, u32 mask) -{ - pci_msi_update_mask(desc, mask, 0); -} - -static inline void __iomem *pci_msix_desc_addr(struct msi_desc *desc) -{ - return desc->pci.mask_base + desc->pci.msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE; -} - -/* - * This internal function does not flush PCI writes to the device. All - * users must ensure that they read from the device before either assuming - * that the device state is up to date, or returning out of this file. 
- * It does not affect the msi_desc::msix_ctrl cache either. Use with care! - */ -static void pci_msix_write_vector_ctrl(struct msi_desc *desc, u32 ctrl) -{ - void __iomem *desc_addr = pci_msix_desc_addr(desc); - - if (desc->pci.msi_attrib.can_mask) - writel(ctrl, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL); -} - -static inline void pci_msix_mask(struct msi_desc *desc) -{ - desc->pci.msix_ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT; - pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl); - /* Flush write to device */ - readl(desc->pci.mask_base); -} - -static inline void pci_msix_unmask(struct msi_desc *desc) -{ - desc->pci.msix_ctrl &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT; - pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl); -} - -static void __pci_msi_mask_desc(struct msi_desc *desc, u32 mask) -{ - if (desc->pci.msi_attrib.is_msix) - pci_msix_mask(desc); - else - pci_msi_mask(desc, mask); -} - -static void __pci_msi_unmask_desc(struct msi_desc *desc, u32 mask) -{ - if (desc->pci.msi_attrib.is_msix) - pci_msix_unmask(desc); - else - pci_msi_unmask(desc, mask); -} - -/** - * pci_msi_mask_irq - Generic IRQ chip callback to mask PCI/MSI interrupts - * @data: pointer to irqdata associated to that interrupt - */ -void pci_msi_mask_irq(struct irq_data *data) -{ - struct msi_desc *desc = irq_data_get_msi_desc(data); - - __pci_msi_mask_desc(desc, BIT(data->irq - desc->irq)); -} -EXPORT_SYMBOL_GPL(pci_msi_mask_irq); - -/** - * pci_msi_unmask_irq - Generic IRQ chip callback to unmask PCI/MSI interrupts - * @data: pointer to irqdata associated to that interrupt - */ -void pci_msi_unmask_irq(struct irq_data *data) -{ - struct msi_desc *desc = irq_data_get_msi_desc(data); - - __pci_msi_unmask_desc(desc, BIT(data->irq - desc->irq)); -} -EXPORT_SYMBOL_GPL(pci_msi_unmask_irq); - -void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg) -{ - struct pci_dev *dev = msi_desc_to_pci_dev(entry); - - BUG_ON(dev->current_state != PCI_D0); - - if (entry->pci.msi_attrib.is_msix) { - void __iomem *base = pci_msix_desc_addr(entry); - - if (WARN_ON_ONCE(entry->pci.msi_attrib.is_virtual)) - return; - - msg->address_lo = readl(base + PCI_MSIX_ENTRY_LOWER_ADDR); - msg->address_hi = readl(base + PCI_MSIX_ENTRY_UPPER_ADDR); - msg->data = readl(base + PCI_MSIX_ENTRY_DATA); - } else { - int pos = dev->msi_cap; - u16 data; - - pci_read_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, - &msg->address_lo); - if (entry->pci.msi_attrib.is_64) { - pci_read_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, - &msg->address_hi); - pci_read_config_word(dev, pos + PCI_MSI_DATA_64, &data); - } else { - msg->address_hi = 0; - pci_read_config_word(dev, pos + PCI_MSI_DATA_32, &data); - } - msg->data = data; - } -} - -void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg) -{ - struct pci_dev *dev = msi_desc_to_pci_dev(entry); - - if (dev->current_state != PCI_D0 || pci_dev_is_disconnected(dev)) { - /* Don't touch the hardware now */ - } else if (entry->pci.msi_attrib.is_msix) { - void __iomem *base = pci_msix_desc_addr(entry); - u32 ctrl = entry->pci.msix_ctrl; - bool unmasked = !(ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT); - - if (entry->pci.msi_attrib.is_virtual) - goto skip; - - /* - * The specification mandates that the entry is masked - * when the message is modified: - * - * "If software changes the Address or Data value of an - * entry while the entry is unmasked, the result is - * undefined." 
- */ - if (unmasked) - pci_msix_write_vector_ctrl(entry, ctrl | PCI_MSIX_ENTRY_CTRL_MASKBIT); - - writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR); - writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR); - writel(msg->data, base + PCI_MSIX_ENTRY_DATA); - - if (unmasked) - pci_msix_write_vector_ctrl(entry, ctrl); - - /* Ensure that the writes are visible in the device */ - readl(base + PCI_MSIX_ENTRY_DATA); - } else { - int pos = dev->msi_cap; - u16 msgctl; - - pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl); - msgctl &= ~PCI_MSI_FLAGS_QSIZE; - msgctl |= entry->pci.msi_attrib.multiple << 4; - pci_write_config_word(dev, pos + PCI_MSI_FLAGS, msgctl); - - pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, - msg->address_lo); - if (entry->pci.msi_attrib.is_64) { - pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, - msg->address_hi); - pci_write_config_word(dev, pos + PCI_MSI_DATA_64, - msg->data); - } else { - pci_write_config_word(dev, pos + PCI_MSI_DATA_32, - msg->data); - } - /* Ensure that the writes are visible in the device */ - pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl); - } - -skip: - entry->msg = *msg; - - if (entry->write_msi_msg) - entry->write_msi_msg(entry, entry->write_msi_msg_data); - -} - -void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg) -{ - struct msi_desc *entry = irq_get_msi_desc(irq); - - __pci_write_msi_msg(entry, msg); -} -EXPORT_SYMBOL_GPL(pci_write_msi_msg); - -static void free_msi_irqs(struct pci_dev *dev) -{ - struct list_head *msi_list = dev_to_msi_list(&dev->dev); - struct msi_desc *entry, *tmp; - int i; - - for_each_pci_msi_entry(entry, dev) - if (entry->irq) - for (i = 0; i < entry->nvec_used; i++) - BUG_ON(irq_has_action(entry->irq + i)); - - if (dev->msi_irq_groups) { - msi_destroy_sysfs(&dev->dev, dev->msi_irq_groups); - dev->msi_irq_groups = NULL; - } - - pci_msi_teardown_msi_irqs(dev); - - list_for_each_entry_safe(entry, tmp, msi_list, list) { - if (entry->pci.msi_attrib.is_msix) { - if (list_is_last(&entry->list, msi_list)) - iounmap(entry->pci.mask_base); - } - - list_del(&entry->list); - free_msi_entry(entry); - } -} - -static void pci_intx_for_msi(struct pci_dev *dev, int enable) -{ - if (!(dev->dev_flags & PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG)) - pci_intx(dev, enable); -} - -static void pci_msi_set_enable(struct pci_dev *dev, int enable) -{ - u16 control; - - pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); - control &= ~PCI_MSI_FLAGS_ENABLE; - if (enable) - control |= PCI_MSI_FLAGS_ENABLE; - pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); -} - -/* - * Architecture override returns true when the PCI MSI message should be - * written by the generic restore function. 
- */ -bool __weak arch_restore_msi_irqs(struct pci_dev *dev) -{ - return true; -} - -static void __pci_restore_msi_state(struct pci_dev *dev) -{ - struct msi_desc *entry; - u16 control; - - if (!dev->msi_enabled) - return; - - entry = irq_get_msi_desc(dev->irq); - - pci_intx_for_msi(dev, 0); - pci_msi_set_enable(dev, 0); - if (arch_restore_msi_irqs(dev)) - __pci_write_msi_msg(entry, &entry->msg); - - pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); - pci_msi_update_mask(entry, 0, 0); - control &= ~PCI_MSI_FLAGS_QSIZE; - control |= (entry->pci.msi_attrib.multiple << 4) | PCI_MSI_FLAGS_ENABLE; - pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); -} - -static void pci_msix_clear_and_set_ctrl(struct pci_dev *dev, u16 clear, u16 set) -{ - u16 ctrl; - - pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); - ctrl &= ~clear; - ctrl |= set; - pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, ctrl); -} - -static void __pci_restore_msix_state(struct pci_dev *dev) -{ - struct msi_desc *entry; - bool write_msg; - - if (!dev->msix_enabled) - return; - BUG_ON(list_empty(dev_to_msi_list(&dev->dev))); - - /* route the table */ - pci_intx_for_msi(dev, 0); - pci_msix_clear_and_set_ctrl(dev, 0, - PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL); - - write_msg = arch_restore_msi_irqs(dev); - - for_each_pci_msi_entry(entry, dev) { - if (write_msg) - __pci_write_msi_msg(entry, &entry->msg); - pci_msix_write_vector_ctrl(entry, entry->pci.msix_ctrl); - } - - pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); -} - -void pci_restore_msi_state(struct pci_dev *dev) -{ - __pci_restore_msi_state(dev); - __pci_restore_msix_state(dev); -} -EXPORT_SYMBOL_GPL(pci_restore_msi_state); - -static struct msi_desc * -msi_setup_entry(struct pci_dev *dev, int nvec, struct irq_affinity *affd) -{ - struct irq_affinity_desc *masks = NULL; - struct msi_desc *entry; - u16 control; - - if (affd) - masks = irq_create_affinity_masks(nvec, affd); - - /* MSI Entry Initialization */ - entry = alloc_msi_entry(&dev->dev, nvec, masks); - if (!entry) - goto out; - - pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); - /* Lies, damned lies, and MSIs */ - if (dev->dev_flags & PCI_DEV_FLAGS_HAS_MSI_MASKING) - control |= PCI_MSI_FLAGS_MASKBIT; - - entry->pci.msi_attrib.is_msix = 0; - entry->pci.msi_attrib.is_64 = !!(control & PCI_MSI_FLAGS_64BIT); - entry->pci.msi_attrib.is_virtual = 0; - entry->pci.msi_attrib.entry_nr = 0; - entry->pci.msi_attrib.can_mask = !pci_msi_ignore_mask && - !!(control & PCI_MSI_FLAGS_MASKBIT); - entry->pci.msi_attrib.default_irq = dev->irq; /* Save IOAPIC IRQ */ - entry->pci.msi_attrib.multi_cap = (control & PCI_MSI_FLAGS_QMASK) >> 1; - entry->pci.msi_attrib.multiple = ilog2(__roundup_pow_of_two(nvec)); - - if (control & PCI_MSI_FLAGS_64BIT) - entry->pci.mask_pos = dev->msi_cap + PCI_MSI_MASK_64; - else - entry->pci.mask_pos = dev->msi_cap + PCI_MSI_MASK_32; - - /* Save the initial mask status */ - if (entry->pci.msi_attrib.can_mask) - pci_read_config_dword(dev, entry->pci.mask_pos, &entry->pci.msi_mask); - -out: - kfree(masks); - return entry; -} - -static int msi_verify_entries(struct pci_dev *dev) -{ - struct msi_desc *entry; - - if (!dev->no_64bit_msi) - return 0; - - for_each_pci_msi_entry(entry, dev) { - if (entry->msg.address_hi) { - pci_err(dev, "arch assigned 64-bit MSI address %#x%08x but device only supports 32 bits\n", - entry->msg.address_hi, entry->msg.address_lo); - return -EIO; - } - } - return 0; -} - -/** - * 
msi_capability_init - configure device's MSI capability structure - * @dev: pointer to the pci_dev data structure of MSI device function - * @nvec: number of interrupts to allocate - * @affd: description of automatic IRQ affinity assignments (may be %NULL) - * - * Setup the MSI capability structure of the device with the requested - * number of interrupts. A return value of zero indicates the successful - * setup of an entry with the new MSI IRQ. A negative return value indicates - * an error, and a positive return value indicates the number of interrupts - * which could have been allocated. - */ -static int msi_capability_init(struct pci_dev *dev, int nvec, - struct irq_affinity *affd) -{ - const struct attribute_group **groups; - struct msi_desc *entry; - int ret; - - pci_msi_set_enable(dev, 0); /* Disable MSI during set up */ - - entry = msi_setup_entry(dev, nvec, affd); - if (!entry) - return -ENOMEM; - - /* All MSIs are unmasked by default; mask them all */ - pci_msi_mask(entry, msi_multi_mask(entry)); - - list_add_tail(&entry->list, dev_to_msi_list(&dev->dev)); - - /* Configure MSI capability structure */ - ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSI); - if (ret) - goto err; - - ret = msi_verify_entries(dev); - if (ret) - goto err; - - groups = msi_populate_sysfs(&dev->dev); - if (IS_ERR(groups)) { - ret = PTR_ERR(groups); - goto err; - } - - dev->msi_irq_groups = groups; - - /* Set MSI enabled bits */ - pci_intx_for_msi(dev, 0); - pci_msi_set_enable(dev, 1); - dev->msi_enabled = 1; - - pcibios_free_irq(dev); - dev->irq = entry->irq; - return 0; - -err: - pci_msi_unmask(entry, msi_multi_mask(entry)); - free_msi_irqs(dev); - return ret; -} - -static void __iomem *msix_map_region(struct pci_dev *dev, - unsigned int nr_entries) -{ - resource_size_t phys_addr; - u32 table_offset; - unsigned long flags; - u8 bir; - - pci_read_config_dword(dev, dev->msix_cap + PCI_MSIX_TABLE, - &table_offset); - bir = (u8)(table_offset & PCI_MSIX_TABLE_BIR); - flags = pci_resource_flags(dev, bir); - if (!flags || (flags & IORESOURCE_UNSET)) - return NULL; - - table_offset &= PCI_MSIX_TABLE_OFFSET; - phys_addr = pci_resource_start(dev, bir) + table_offset; - - return ioremap(phys_addr, nr_entries * PCI_MSIX_ENTRY_SIZE); -} - -static int msix_setup_entries(struct pci_dev *dev, void __iomem *base, - struct msix_entry *entries, int nvec, - struct irq_affinity *affd) -{ - struct irq_affinity_desc *curmsk, *masks = NULL; - struct msi_desc *entry; - void __iomem *addr; - int ret, i; - int vec_count = pci_msix_vec_count(dev); - - if (affd) - masks = irq_create_affinity_masks(nvec, affd); - - for (i = 0, curmsk = masks; i < nvec; i++) { - entry = alloc_msi_entry(&dev->dev, 1, curmsk); - if (!entry) { - if (!i) - iounmap(base); - else - free_msi_irqs(dev); - /* No enough memory. 
Don't try again */ - ret = -ENOMEM; - goto out; - } - - entry->pci.msi_attrib.is_msix = 1; - entry->pci.msi_attrib.is_64 = 1; - - if (entries) - entry->pci.msi_attrib.entry_nr = entries[i].entry; - else - entry->pci.msi_attrib.entry_nr = i; - - entry->pci.msi_attrib.is_virtual = - entry->pci.msi_attrib.entry_nr >= vec_count; - - entry->pci.msi_attrib.can_mask = !pci_msi_ignore_mask && - !entry->pci.msi_attrib.is_virtual; - - entry->pci.msi_attrib.default_irq = dev->irq; - entry->pci.mask_base = base; - - if (entry->pci.msi_attrib.can_mask) { - addr = pci_msix_desc_addr(entry); - entry->pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL); - } - - list_add_tail(&entry->list, dev_to_msi_list(&dev->dev)); - if (masks) - curmsk++; - } - ret = 0; -out: - kfree(masks); - return ret; -} - -static void msix_update_entries(struct pci_dev *dev, struct msix_entry *entries) -{ - struct msi_desc *entry; - - if (entries) { - for_each_pci_msi_entry(entry, dev) { - entries->vector = entry->irq; - entries++; - } - } -} - -static void msix_mask_all(void __iomem *base, int tsize) -{ - u32 ctrl = PCI_MSIX_ENTRY_CTRL_MASKBIT; - int i; - - if (pci_msi_ignore_mask) - return; - - for (i = 0; i < tsize; i++, base += PCI_MSIX_ENTRY_SIZE) - writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL); -} - -/** - * msix_capability_init - configure device's MSI-X capability - * @dev: pointer to the pci_dev data structure of MSI-X device function - * @entries: pointer to an array of struct msix_entry entries - * @nvec: number of @entries - * @affd: Optional pointer to enable automatic affinity assignment - * - * Setup the MSI-X capability structure of device function with a - * single MSI-X IRQ. A return of zero indicates the successful setup of - * requested MSI-X entries with allocated IRQs or non-zero for otherwise. - **/ -static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries, - int nvec, struct irq_affinity *affd) -{ - const struct attribute_group **groups; - void __iomem *base; - int ret, tsize; - u16 control; - - /* - * Some devices require MSI-X to be enabled before the MSI-X - * registers can be accessed. Mask all the vectors to prevent - * interrupts coming in before they're fully set up. - */ - pci_msix_clear_and_set_ctrl(dev, 0, PCI_MSIX_FLAGS_MASKALL | - PCI_MSIX_FLAGS_ENABLE); - - pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control); - /* Request & Map MSI-X table region */ - tsize = msix_table_size(control); - base = msix_map_region(dev, tsize); - if (!base) { - ret = -ENOMEM; - goto out_disable; - } - - /* Ensure that all table entries are masked. */ - msix_mask_all(base, tsize); - - ret = msix_setup_entries(dev, base, entries, nvec, affd); - if (ret) - goto out_disable; - - ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX); - if (ret) - goto out_avail; - - /* Check if all MSI entries honor device restrictions */ - ret = msi_verify_entries(dev); - if (ret) - goto out_free; - - msix_update_entries(dev, entries); - - groups = msi_populate_sysfs(&dev->dev); - if (IS_ERR(groups)) { - ret = PTR_ERR(groups); - goto out_free; - } - - dev->msi_irq_groups = groups; - - /* Set MSI-X enabled bits and unmask the function */ - pci_intx_for_msi(dev, 0); - dev->msix_enabled = 1; - pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); - - pcibios_free_irq(dev); - return 0; - -out_avail: - if (ret < 0) { - /* - * If we had some success, report the number of IRQs - * we succeeded in setting up. 
- */ - struct msi_desc *entry; - int avail = 0; - - for_each_pci_msi_entry(entry, dev) { - if (entry->irq != 0) - avail++; - } - if (avail != 0) - ret = avail; - } - -out_free: - free_msi_irqs(dev); - -out_disable: - pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); - - return ret; -} - -/** - * pci_msi_supported - check whether MSI may be enabled on a device - * @dev: pointer to the pci_dev data structure of MSI device function - * @nvec: how many MSIs have been requested? - * - * Look at global flags, the device itself, and its parent buses - * to determine if MSI/-X are supported for the device. If MSI/-X is - * supported return 1, else return 0. - **/ -static int pci_msi_supported(struct pci_dev *dev, int nvec) -{ - struct pci_bus *bus; - - /* MSI must be globally enabled and supported by the device */ - if (!pci_msi_enable) - return 0; - - if (!dev || dev->no_msi) - return 0; - - /* - * You can't ask to have 0 or less MSIs configured. - * a) it's stupid .. - * b) the list manipulation code assumes nvec >= 1. - */ - if (nvec < 1) - return 0; - - /* - * Any bridge which does NOT route MSI transactions from its - * secondary bus to its primary bus must set NO_MSI flag on - * the secondary pci_bus. - * - * The NO_MSI flag can either be set directly by: - * - arch-specific PCI host bus controller drivers (deprecated) - * - quirks for specific PCI bridges - * - * or indirectly by platform-specific PCI host bridge drivers by - * advertising the 'msi_domain' property, which results in - * the NO_MSI flag when no MSI domain is found for this bridge - * at probe time. - */ - for (bus = dev->bus; bus; bus = bus->parent) - if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI) - return 0; - - return 1; -} - -/** - * pci_msi_vec_count - Return the number of MSI vectors a device can send - * @dev: device to report about - * - * This function returns the number of MSI vectors a device requested via - * Multiple Message Capable register. It returns a negative errno if the - * device is not capable sending MSI interrupts. Otherwise, the call succeeds - * and returns a power of two, up to a maximum of 2^5 (32), according to the - * MSI specification. - **/ -int pci_msi_vec_count(struct pci_dev *dev) -{ - int ret; - u16 msgctl; - - if (!dev->msi_cap) - return -EINVAL; - - pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl); - ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1); - - return ret; -} -EXPORT_SYMBOL(pci_msi_vec_count); - -static void pci_msi_shutdown(struct pci_dev *dev) -{ - struct msi_desc *desc; - - if (!pci_msi_enable || !dev || !dev->msi_enabled) - return; - - BUG_ON(list_empty(dev_to_msi_list(&dev->dev))); - desc = first_pci_msi_entry(dev); - - pci_msi_set_enable(dev, 0); - pci_intx_for_msi(dev, 1); - dev->msi_enabled = 0; - - /* Return the device with MSI unmasked as initial states */ - pci_msi_unmask(desc, msi_multi_mask(desc)); - - /* Restore dev->irq to its default pin-assertion IRQ */ - dev->irq = desc->pci.msi_attrib.default_irq; - pcibios_alloc_irq(dev); -} - -void pci_disable_msi(struct pci_dev *dev) -{ - if (!pci_msi_enable || !dev || !dev->msi_enabled) - return; - - pci_msi_shutdown(dev); - free_msi_irqs(dev); -} -EXPORT_SYMBOL(pci_disable_msi); - -/** - * pci_msix_vec_count - return the number of device's MSI-X table entries - * @dev: pointer to the pci_dev data structure of MSI-X device function - * This function returns the number of device's MSI-X table entries and - * therefore the number of MSI-X vectors device is capable of sending. 
- * It returns a negative errno if the device is not capable of sending MSI-X - * interrupts. - **/ -int pci_msix_vec_count(struct pci_dev *dev) -{ - u16 control; - - if (!dev->msix_cap) - return -EINVAL; - - pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control); - return msix_table_size(control); -} -EXPORT_SYMBOL(pci_msix_vec_count); - -static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, - int nvec, struct irq_affinity *affd, int flags) -{ - int nr_entries; - int i, j; - - if (!pci_msi_supported(dev, nvec) || dev->current_state != PCI_D0) - return -EINVAL; - - nr_entries = pci_msix_vec_count(dev); - if (nr_entries < 0) - return nr_entries; - if (nvec > nr_entries && !(flags & PCI_IRQ_VIRTUAL)) - return nr_entries; - - if (entries) { - /* Check for any invalid entries */ - for (i = 0; i < nvec; i++) { - if (entries[i].entry >= nr_entries) - return -EINVAL; /* invalid entry */ - for (j = i + 1; j < nvec; j++) { - if (entries[i].entry == entries[j].entry) - return -EINVAL; /* duplicate entry */ - } - } - } - - /* Check whether driver already requested for MSI IRQ */ - if (dev->msi_enabled) { - pci_info(dev, "can't enable MSI-X (MSI IRQ already assigned)\n"); - return -EINVAL; - } - return msix_capability_init(dev, entries, nvec, affd); -} - -static void pci_msix_shutdown(struct pci_dev *dev) -{ - struct msi_desc *entry; - - if (!pci_msi_enable || !dev || !dev->msix_enabled) - return; - - if (pci_dev_is_disconnected(dev)) { - dev->msix_enabled = 0; - return; - } - - /* Return the device with MSI-X masked as initial states */ - for_each_pci_msi_entry(entry, dev) - pci_msix_mask(entry); - - pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); - pci_intx_for_msi(dev, 1); - dev->msix_enabled = 0; - pcibios_alloc_irq(dev); -} - -void pci_disable_msix(struct pci_dev *dev) -{ - if (!pci_msi_enable || !dev || !dev->msix_enabled) - return; - - pci_msix_shutdown(dev); - free_msi_irqs(dev); -} -EXPORT_SYMBOL(pci_disable_msix); - -void pci_no_msi(void) -{ - pci_msi_enable = 0; -} - -/** - * pci_msi_enabled - is MSI enabled? - * - * Returns true if MSI has not been disabled by the command-line option - * pci=nomsi. 
- **/ -int pci_msi_enabled(void) -{ - return pci_msi_enable; -} -EXPORT_SYMBOL(pci_msi_enabled); - -static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec, - struct irq_affinity *affd) -{ - int nvec; - int rc; - - if (!pci_msi_supported(dev, minvec) || dev->current_state != PCI_D0) - return -EINVAL; - - /* Check whether driver already requested MSI-X IRQs */ - if (dev->msix_enabled) { - pci_info(dev, "can't enable MSI (MSI-X already enabled)\n"); - return -EINVAL; - } - - if (maxvec < minvec) - return -ERANGE; - - if (WARN_ON_ONCE(dev->msi_enabled)) - return -EINVAL; - - nvec = pci_msi_vec_count(dev); - if (nvec < 0) - return nvec; - if (nvec < minvec) - return -ENOSPC; - - if (nvec > maxvec) - nvec = maxvec; - - for (;;) { - if (affd) { - nvec = irq_calc_affinity_vectors(minvec, nvec, affd); - if (nvec < minvec) - return -ENOSPC; - } - - rc = msi_capability_init(dev, nvec, affd); - if (rc == 0) - return nvec; - - if (rc < 0) - return rc; - if (rc < minvec) - return -ENOSPC; - - nvec = rc; - } -} - -/* deprecated, don't use */ -int pci_enable_msi(struct pci_dev *dev) -{ - int rc = __pci_enable_msi_range(dev, 1, 1, NULL); - if (rc < 0) - return rc; - return 0; -} -EXPORT_SYMBOL(pci_enable_msi); - -static int __pci_enable_msix_range(struct pci_dev *dev, - struct msix_entry *entries, int minvec, - int maxvec, struct irq_affinity *affd, - int flags) -{ - int rc, nvec = maxvec; - - if (maxvec < minvec) - return -ERANGE; - - if (WARN_ON_ONCE(dev->msix_enabled)) - return -EINVAL; - - for (;;) { - if (affd) { - nvec = irq_calc_affinity_vectors(minvec, nvec, affd); - if (nvec < minvec) - return -ENOSPC; - } - - rc = __pci_enable_msix(dev, entries, nvec, affd, flags); - if (rc == 0) - return nvec; - - if (rc < 0) - return rc; - if (rc < minvec) - return -ENOSPC; - - nvec = rc; - } -} - -/** - * pci_enable_msix_range - configure device's MSI-X capability structure - * @dev: pointer to the pci_dev data structure of MSI-X device function - * @entries: pointer to an array of MSI-X entries - * @minvec: minimum number of MSI-X IRQs requested - * @maxvec: maximum number of MSI-X IRQs requested - * - * Setup the MSI-X capability structure of device function with a maximum - * possible number of interrupts in the range between @minvec and @maxvec - * upon its software driver call to request for MSI-X mode enabled on its - * hardware device function. It returns a negative errno if an error occurs. - * If it succeeds, it returns the actual number of interrupts allocated and - * indicates the successful configuration of MSI-X capability structure - * with new allocated MSI-X interrupts. - **/ -int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries, - int minvec, int maxvec) -{ - return __pci_enable_msix_range(dev, entries, minvec, maxvec, NULL, 0); -} -EXPORT_SYMBOL(pci_enable_msix_range); - -/** - * pci_alloc_irq_vectors_affinity - allocate multiple IRQs for a device - * @dev: PCI device to operate on - * @min_vecs: minimum number of vectors required (must be >= 1) - * @max_vecs: maximum (desired) number of vectors - * @flags: flags or quirks for the allocation - * @affd: optional description of the affinity requirements - * - * Allocate up to @max_vecs interrupt vectors for @dev, using MSI-X or MSI - * vectors if available, and fall back to a single legacy vector - * if neither is available. Return the number of vectors allocated, - * (which might be smaller than @max_vecs) if successful, or a negative - * error code on error. 
If less than @min_vecs interrupt vectors are - * available for @dev the function will fail with -ENOSPC. - * - * To get the Linux IRQ number used for a vector that can be passed to - * request_irq() use the pci_irq_vector() helper. - */ -int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs, - unsigned int max_vecs, unsigned int flags, - struct irq_affinity *affd) -{ - struct irq_affinity msi_default_affd = {0}; - int nvecs = -ENOSPC; - - if (flags & PCI_IRQ_AFFINITY) { - if (!affd) - affd = &msi_default_affd; - } else { - if (WARN_ON(affd)) - affd = NULL; - } - - if (flags & PCI_IRQ_MSIX) { - nvecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs, - affd, flags); - if (nvecs > 0) - return nvecs; - } - - if (flags & PCI_IRQ_MSI) { - nvecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, affd); - if (nvecs > 0) - return nvecs; - } - - /* use legacy IRQ if allowed */ - if (flags & PCI_IRQ_LEGACY) { - if (min_vecs == 1 && dev->irq) { - /* - * Invoke the affinity spreading logic to ensure that - * the device driver can adjust queue configuration - * for the single interrupt case. - */ - if (affd) - irq_create_affinity_masks(1, affd); - pci_intx(dev, 1); - return 1; - } - } - - return nvecs; -} -EXPORT_SYMBOL(pci_alloc_irq_vectors_affinity); - -/** - * pci_free_irq_vectors - free previously allocated IRQs for a device - * @dev: PCI device to operate on - * - * Undoes the allocations and enabling in pci_alloc_irq_vectors(). - */ -void pci_free_irq_vectors(struct pci_dev *dev) -{ - pci_disable_msix(dev); - pci_disable_msi(dev); -} -EXPORT_SYMBOL(pci_free_irq_vectors); - -/** - * pci_irq_vector - return Linux IRQ number of a device vector - * @dev: PCI device to operate on - * @nr: Interrupt vector index (0-based) - * - * @nr has the following meanings depending on the interrupt mode: - * MSI-X: The index in the MSI-X vector table - * MSI: The index of the enabled MSI vectors - * INTx: Must be 0 - * - * Return: The Linux interrupt number or -EINVAl if @nr is out of range. - */ -int pci_irq_vector(struct pci_dev *dev, unsigned int nr) -{ - if (dev->msix_enabled) { - struct msi_desc *entry; - - for_each_pci_msi_entry(entry, dev) { - if (entry->pci.msi_attrib.entry_nr == nr) - return entry->irq; - } - WARN_ON_ONCE(1); - return -EINVAL; - } - - if (dev->msi_enabled) { - struct msi_desc *entry = first_pci_msi_entry(dev); - - if (WARN_ON_ONCE(nr >= entry->nvec_used)) - return -EINVAL; - } else { - if (WARN_ON_ONCE(nr > 0)) - return -EINVAL; - } - - return dev->irq + nr; -} -EXPORT_SYMBOL(pci_irq_vector); - -/** - * pci_irq_get_affinity - return the affinity of a particular MSI vector - * @dev: PCI device to operate on - * @nr: device-relative interrupt vector index (0-based). 
- * - * @nr has the following meanings depending on the interrupt mode: - * MSI-X: The index in the MSI-X vector table - * MSI: The index of the enabled MSI vectors - * INTx: Must be 0 - * - * Return: A cpumask pointer or NULL if @nr is out of range - */ -const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr) -{ - if (dev->msix_enabled) { - struct msi_desc *entry; - - for_each_pci_msi_entry(entry, dev) { - if (entry->pci.msi_attrib.entry_nr == nr) - return &entry->affinity->mask; - } - WARN_ON_ONCE(1); - return NULL; - } else if (dev->msi_enabled) { - struct msi_desc *entry = first_pci_msi_entry(dev); - - if (WARN_ON_ONCE(!entry || !entry->affinity || - nr >= entry->nvec_used)) - return NULL; - - return &entry->affinity[nr].mask; - } else { - return cpu_possible_mask; - } -} -EXPORT_SYMBOL(pci_irq_get_affinity); - -struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc) -{ - return to_pci_dev(desc->dev); -} -EXPORT_SYMBOL(msi_desc_to_pci_dev); - -#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN -/** - * pci_msi_domain_write_msg - Helper to write MSI message to PCI config space - * @irq_data: Pointer to interrupt data of the MSI interrupt - * @msg: Pointer to the message - */ -static void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg) -{ - struct msi_desc *desc = irq_data_get_msi_desc(irq_data); - - /* - * For MSI-X desc->irq is always equal to irq_data->irq. For - * MSI only the first interrupt of MULTI MSI passes the test. - */ - if (desc->irq == irq_data->irq) - __pci_write_msi_msg(desc, msg); -} - -/** - * pci_msi_domain_calc_hwirq - Generate a unique ID for an MSI source - * @desc: Pointer to the MSI descriptor - * - * The ID number is only used within the irqdomain. - */ -static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc) -{ - struct pci_dev *dev = msi_desc_to_pci_dev(desc); - - return (irq_hw_number_t)desc->pci.msi_attrib.entry_nr | - pci_dev_id(dev) << 11 | - (pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27; -} - -static inline bool pci_msi_desc_is_multi_msi(struct msi_desc *desc) -{ - return !desc->pci.msi_attrib.is_msix && desc->nvec_used > 1; -} - -/** - * pci_msi_domain_check_cap - Verify that @domain supports the capabilities - * for @dev - * @domain: The interrupt domain to check - * @info: The domain info for verification - * @dev: The device to check - * - * Returns: - * 0 if the functionality is supported - * 1 if Multi MSI is requested, but the domain does not support it - * -ENOTSUPP otherwise - */ -int pci_msi_domain_check_cap(struct irq_domain *domain, - struct msi_domain_info *info, struct device *dev) -{ - struct msi_desc *desc = first_pci_msi_entry(to_pci_dev(dev)); - - /* Special handling to support __pci_enable_msi_range() */ - if (pci_msi_desc_is_multi_msi(desc) && - !(info->flags & MSI_FLAG_MULTI_PCI_MSI)) - return 1; - else if (desc->pci.msi_attrib.is_msix && !(info->flags & MSI_FLAG_PCI_MSIX)) - return -ENOTSUPP; - - return 0; -} - -static int pci_msi_domain_handle_error(struct irq_domain *domain, - struct msi_desc *desc, int error) -{ - /* Special handling to support __pci_enable_msi_range() */ - if (pci_msi_desc_is_multi_msi(desc) && error == -ENOSPC) - return 1; - - return error; -} - -static void pci_msi_domain_set_desc(msi_alloc_info_t *arg, - struct msi_desc *desc) -{ - arg->desc = desc; - arg->hwirq = pci_msi_domain_calc_hwirq(desc); -} - -static struct msi_domain_ops pci_msi_domain_ops_default = { - .set_desc = pci_msi_domain_set_desc, - .msi_check = pci_msi_domain_check_cap, - .handle_error = 
pci_msi_domain_handle_error, -}; - -static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info) -{ - struct msi_domain_ops *ops = info->ops; - - if (ops == NULL) { - info->ops = &pci_msi_domain_ops_default; - } else { - if (ops->set_desc == NULL) - ops->set_desc = pci_msi_domain_set_desc; - if (ops->msi_check == NULL) - ops->msi_check = pci_msi_domain_check_cap; - if (ops->handle_error == NULL) - ops->handle_error = pci_msi_domain_handle_error; - } -} - -static void pci_msi_domain_update_chip_ops(struct msi_domain_info *info) -{ - struct irq_chip *chip = info->chip; - - BUG_ON(!chip); - if (!chip->irq_write_msi_msg) - chip->irq_write_msi_msg = pci_msi_domain_write_msg; - if (!chip->irq_mask) - chip->irq_mask = pci_msi_mask_irq; - if (!chip->irq_unmask) - chip->irq_unmask = pci_msi_unmask_irq; -} - -/** - * pci_msi_create_irq_domain - Create a MSI interrupt domain - * @fwnode: Optional fwnode of the interrupt controller - * @info: MSI domain info - * @parent: Parent irq domain - * - * Updates the domain and chip ops and creates a MSI interrupt domain. - * - * Returns: - * A domain pointer or NULL in case of failure. - */ -struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode, - struct msi_domain_info *info, - struct irq_domain *parent) -{ - struct irq_domain *domain; - - if (WARN_ON(info->flags & MSI_FLAG_LEVEL_CAPABLE)) - info->flags &= ~MSI_FLAG_LEVEL_CAPABLE; - - if (info->flags & MSI_FLAG_USE_DEF_DOM_OPS) - pci_msi_domain_update_dom_ops(info); - if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS) - pci_msi_domain_update_chip_ops(info); - - info->flags |= MSI_FLAG_ACTIVATE_EARLY; - if (IS_ENABLED(CONFIG_GENERIC_IRQ_RESERVATION_MODE)) - info->flags |= MSI_FLAG_MUST_REACTIVATE; - - /* PCI-MSI is oneshot-safe */ - info->chip->flags |= IRQCHIP_ONESHOT_SAFE; - - domain = msi_create_irq_domain(fwnode, info, parent); - if (!domain) - return NULL; - - irq_domain_update_bus_token(domain, DOMAIN_BUS_PCI_MSI); - return domain; -} -EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain); - -/* - * Users of the generic MSI infrastructure expect a device to have a single ID, - * so with DMA aliases we have to pick the least-worst compromise. Devices with - * DMA phantom functions tend to still emit MSIs from the real function number, - * so we ignore those and only consider topological aliases where either the - * alias device or RID appears on a different bus number. We also make the - * reasonable assumption that bridges are walked in an upstream direction (so - * the last one seen wins), and the much braver assumption that the most likely - * case is that of PCI->PCIe so we should always use the alias RID. This echoes - * the logic from intel_irq_remapping's set_msi_sid(), which presumably works - * well enough in practice; in the face of the horrible PCIe<->PCI-X conditions - * for taking ownership all we can really do is close our eyes and hope... - */ -static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data) -{ - u32 *pa = data; - u8 bus = PCI_BUS_NUM(*pa); - - if (pdev->bus->number != bus || PCI_BUS_NUM(alias) != bus) - *pa = alias; - - return 0; -} - -/** - * pci_msi_domain_get_msi_rid - Get the MSI requester id (RID) - * @domain: The interrupt domain - * @pdev: The PCI device. - * - * The RID for a device is formed from the alias, with a firmware - * supplied mapping applied - * - * Returns: The RID. 
- */ -u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev) -{ - struct device_node *of_node; - u32 rid = pci_dev_id(pdev); - - pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); - - of_node = irq_domain_get_of_node(domain); - rid = of_node ? of_msi_map_id(&pdev->dev, of_node, rid) : - iort_msi_map_id(&pdev->dev, rid); - - return rid; -} - -/** - * pci_msi_get_device_domain - Get the MSI domain for a given PCI device - * @pdev: The PCI device - * - * Use the firmware data to find a device-specific MSI domain - * (i.e. not one that is set as a default). - * - * Returns: The corresponding MSI domain or NULL if none has been found. - */ -struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev) -{ - struct irq_domain *dom; - u32 rid = pci_dev_id(pdev); - - pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); - dom = of_msi_map_get_device_domain(&pdev->dev, rid, DOMAIN_BUS_PCI_MSI); - if (!dom) - dom = iort_get_device_domain(&pdev->dev, rid, - DOMAIN_BUS_PCI_MSI); - return dom; -} - -/** - * pci_dev_has_special_msi_domain - Check whether the device is handled by - * a non-standard PCI-MSI domain - * @pdev: The PCI device to check. - * - * Returns: True if the device irqdomain or the bus irqdomain is - * non-standard PCI/MSI. - */ -bool pci_dev_has_special_msi_domain(struct pci_dev *pdev) -{ - struct irq_domain *dom = dev_get_msi_domain(&pdev->dev); - - if (!dom) - dom = dev_get_msi_domain(&pdev->bus->dev); - - if (!dom) - return true; - - return dom->bus_token != DOMAIN_BUS_PCI_MSI; -} - -#endif /* CONFIG_PCI_MSI_IRQ_DOMAIN */ -#endif /* CONFIG_PCI_MSI */ - -void pci_msi_init(struct pci_dev *dev) -{ - u16 ctrl; - - /* - * Disable the MSI hardware to avoid screaming interrupts - * during boot. This is the power on reset default so - * usually this should be a noop. - */ - dev->msi_cap = pci_find_capability(dev, PCI_CAP_ID_MSI); - if (!dev->msi_cap) - return; - - pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &ctrl); - if (ctrl & PCI_MSI_FLAGS_ENABLE) - pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, - ctrl & ~PCI_MSI_FLAGS_ENABLE); - - if (!(ctrl & PCI_MSI_FLAGS_64BIT)) - dev->no_64bit_msi = 1; -} - -void pci_msix_init(struct pci_dev *dev) -{ - u16 ctrl; - - dev->msix_cap = pci_find_capability(dev, PCI_CAP_ID_MSIX); - if (!dev->msix_cap) - return; - - pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); - if (ctrl & PCI_MSIX_FLAGS_ENABLE) - pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, - ctrl & ~PCI_MSIX_FLAGS_ENABLE); -} --- /dev/null +++ b/drivers/pci/msi/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# Makefile for the PCI/MSI +obj-$(CONFIG_PCI) += msi.o --- /dev/null +++ b/drivers/pci/msi/msi.c @@ -0,0 +1,1532 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCI Message Signaled Interrupt (MSI) + * + * Copyright (C) 2003-2004 Intel + * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com) + * Copyright (C) 2016 Christoph Hellwig. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../pci.h" + +#ifdef CONFIG_PCI_MSI + +static int pci_msi_enable = 1; +int pci_msi_ignore_mask; + +#define msix_table_size(flags) ((flags & PCI_MSIX_FLAGS_QSIZE) + 1) + +#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN +static int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) +{ + struct irq_domain *domain; + + domain = dev_get_msi_domain(&dev->dev); + if (domain && irq_domain_is_hierarchy(domain)) + return msi_domain_alloc_irqs(domain, &dev->dev, nvec); + + return arch_setup_msi_irqs(dev, nvec, type); +} + +static void pci_msi_teardown_msi_irqs(struct pci_dev *dev) +{ + struct irq_domain *domain; + + domain = dev_get_msi_domain(&dev->dev); + if (domain && irq_domain_is_hierarchy(domain)) + msi_domain_free_irqs(domain, &dev->dev); + else + arch_teardown_msi_irqs(dev); +} +#else +#define pci_msi_setup_msi_irqs arch_setup_msi_irqs +#define pci_msi_teardown_msi_irqs arch_teardown_msi_irqs +#endif + +#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS +/* Arch hooks */ +int __weak arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc) +{ + return -EINVAL; +} + +void __weak arch_teardown_msi_irq(unsigned int irq) +{ +} + +int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) +{ + struct msi_desc *entry; + int ret; + + /* + * If an architecture wants to support multiple MSI, it needs to + * override arch_setup_msi_irqs() + */ + if (type == PCI_CAP_ID_MSI && nvec > 1) + return 1; + + for_each_pci_msi_entry(entry, dev) { + ret = arch_setup_msi_irq(dev, entry); + if (ret < 0) + return ret; + if (ret > 0) + return -ENOSPC; + } + + return 0; +} + +void __weak arch_teardown_msi_irqs(struct pci_dev *dev) +{ + int i; + struct msi_desc *entry; + + for_each_pci_msi_entry(entry, dev) + if (entry->irq) + for (i = 0; i < entry->nvec_used; i++) + arch_teardown_msi_irq(entry->irq + i); +} +#endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */ + +/* + * PCI 2.3 does not specify mask bits for each MSI interrupt. Attempting to + * mask all MSI interrupts by clearing the MSI enable bit does not work + * reliably as devices without an INTx disable bit will then generate a + * level IRQ which will never be cleared. + */ +static inline __attribute_const__ u32 msi_multi_mask(struct msi_desc *desc) +{ + /* Don't shift by >= width of type */ + if (desc->pci.msi_attrib.multi_cap >= 5) + return 0xffffffff; + return (1 << (1 << desc->pci.msi_attrib.multi_cap)) - 1; +} + +static noinline void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set) +{ + raw_spinlock_t *lock = &desc->dev->msi_lock; + unsigned long flags; + + if (!desc->pci.msi_attrib.can_mask) + return; + + raw_spin_lock_irqsave(lock, flags); + desc->pci.msi_mask &= ~clear; + desc->pci.msi_mask |= set; + pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->pci.mask_pos, + desc->pci.msi_mask); + raw_spin_unlock_irqrestore(lock, flags); +} + +static inline void pci_msi_mask(struct msi_desc *desc, u32 mask) +{ + pci_msi_update_mask(desc, 0, mask); +} + +static inline void pci_msi_unmask(struct msi_desc *desc, u32 mask) +{ + pci_msi_update_mask(desc, mask, 0); +} + +static inline void __iomem *pci_msix_desc_addr(struct msi_desc *desc) +{ + return desc->pci.mask_base + desc->pci.msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE; +} + +/* + * This internal function does not flush PCI writes to the device. All + * users must ensure that they read from the device before either assuming + * that the device state is up to date, or returning out of this file. 
+ * It does not affect the msi_desc::msix_ctrl cache either. Use with care! + */ +static void pci_msix_write_vector_ctrl(struct msi_desc *desc, u32 ctrl) +{ + void __iomem *desc_addr = pci_msix_desc_addr(desc); + + if (desc->pci.msi_attrib.can_mask) + writel(ctrl, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL); +} + +static inline void pci_msix_mask(struct msi_desc *desc) +{ + desc->pci.msix_ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT; + pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl); + /* Flush write to device */ + readl(desc->pci.mask_base); +} + +static inline void pci_msix_unmask(struct msi_desc *desc) +{ + desc->pci.msix_ctrl &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT; + pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl); +} + +static void __pci_msi_mask_desc(struct msi_desc *desc, u32 mask) +{ + if (desc->pci.msi_attrib.is_msix) + pci_msix_mask(desc); + else + pci_msi_mask(desc, mask); +} + +static void __pci_msi_unmask_desc(struct msi_desc *desc, u32 mask) +{ + if (desc->pci.msi_attrib.is_msix) + pci_msix_unmask(desc); + else + pci_msi_unmask(desc, mask); +} + +/** + * pci_msi_mask_irq - Generic IRQ chip callback to mask PCI/MSI interrupts + * @data: pointer to irqdata associated to that interrupt + */ +void pci_msi_mask_irq(struct irq_data *data) +{ + struct msi_desc *desc = irq_data_get_msi_desc(data); + + __pci_msi_mask_desc(desc, BIT(data->irq - desc->irq)); +} +EXPORT_SYMBOL_GPL(pci_msi_mask_irq); + +/** + * pci_msi_unmask_irq - Generic IRQ chip callback to unmask PCI/MSI interrupts + * @data: pointer to irqdata associated to that interrupt + */ +void pci_msi_unmask_irq(struct irq_data *data) +{ + struct msi_desc *desc = irq_data_get_msi_desc(data); + + __pci_msi_unmask_desc(desc, BIT(data->irq - desc->irq)); +} +EXPORT_SYMBOL_GPL(pci_msi_unmask_irq); + +void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg) +{ + struct pci_dev *dev = msi_desc_to_pci_dev(entry); + + BUG_ON(dev->current_state != PCI_D0); + + if (entry->pci.msi_attrib.is_msix) { + void __iomem *base = pci_msix_desc_addr(entry); + + if (WARN_ON_ONCE(entry->pci.msi_attrib.is_virtual)) + return; + + msg->address_lo = readl(base + PCI_MSIX_ENTRY_LOWER_ADDR); + msg->address_hi = readl(base + PCI_MSIX_ENTRY_UPPER_ADDR); + msg->data = readl(base + PCI_MSIX_ENTRY_DATA); + } else { + int pos = dev->msi_cap; + u16 data; + + pci_read_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, + &msg->address_lo); + if (entry->pci.msi_attrib.is_64) { + pci_read_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, + &msg->address_hi); + pci_read_config_word(dev, pos + PCI_MSI_DATA_64, &data); + } else { + msg->address_hi = 0; + pci_read_config_word(dev, pos + PCI_MSI_DATA_32, &data); + } + msg->data = data; + } +} + +void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg) +{ + struct pci_dev *dev = msi_desc_to_pci_dev(entry); + + if (dev->current_state != PCI_D0 || pci_dev_is_disconnected(dev)) { + /* Don't touch the hardware now */ + } else if (entry->pci.msi_attrib.is_msix) { + void __iomem *base = pci_msix_desc_addr(entry); + u32 ctrl = entry->pci.msix_ctrl; + bool unmasked = !(ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT); + + if (entry->pci.msi_attrib.is_virtual) + goto skip; + + /* + * The specification mandates that the entry is masked + * when the message is modified: + * + * "If software changes the Address or Data value of an + * entry while the entry is unmasked, the result is + * undefined." 
+ */ + if (unmasked) + pci_msix_write_vector_ctrl(entry, ctrl | PCI_MSIX_ENTRY_CTRL_MASKBIT); + + writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR); + writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR); + writel(msg->data, base + PCI_MSIX_ENTRY_DATA); + + if (unmasked) + pci_msix_write_vector_ctrl(entry, ctrl); + + /* Ensure that the writes are visible in the device */ + readl(base + PCI_MSIX_ENTRY_DATA); + } else { + int pos = dev->msi_cap; + u16 msgctl; + + pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl); + msgctl &= ~PCI_MSI_FLAGS_QSIZE; + msgctl |= entry->pci.msi_attrib.multiple << 4; + pci_write_config_word(dev, pos + PCI_MSI_FLAGS, msgctl); + + pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, + msg->address_lo); + if (entry->pci.msi_attrib.is_64) { + pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, + msg->address_hi); + pci_write_config_word(dev, pos + PCI_MSI_DATA_64, + msg->data); + } else { + pci_write_config_word(dev, pos + PCI_MSI_DATA_32, + msg->data); + } + /* Ensure that the writes are visible in the device */ + pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl); + } + +skip: + entry->msg = *msg; + + if (entry->write_msi_msg) + entry->write_msi_msg(entry, entry->write_msi_msg_data); + +} + +void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg) +{ + struct msi_desc *entry = irq_get_msi_desc(irq); + + __pci_write_msi_msg(entry, msg); +} +EXPORT_SYMBOL_GPL(pci_write_msi_msg); + +static void free_msi_irqs(struct pci_dev *dev) +{ + struct list_head *msi_list = dev_to_msi_list(&dev->dev); + struct msi_desc *entry, *tmp; + int i; + + for_each_pci_msi_entry(entry, dev) + if (entry->irq) + for (i = 0; i < entry->nvec_used; i++) + BUG_ON(irq_has_action(entry->irq + i)); + + if (dev->msi_irq_groups) { + msi_destroy_sysfs(&dev->dev, dev->msi_irq_groups); + dev->msi_irq_groups = NULL; + } + + pci_msi_teardown_msi_irqs(dev); + + list_for_each_entry_safe(entry, tmp, msi_list, list) { + if (entry->pci.msi_attrib.is_msix) { + if (list_is_last(&entry->list, msi_list)) + iounmap(entry->pci.mask_base); + } + + list_del(&entry->list); + free_msi_entry(entry); + } +} + +static void pci_intx_for_msi(struct pci_dev *dev, int enable) +{ + if (!(dev->dev_flags & PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG)) + pci_intx(dev, enable); +} + +static void pci_msi_set_enable(struct pci_dev *dev, int enable) +{ + u16 control; + + pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); + control &= ~PCI_MSI_FLAGS_ENABLE; + if (enable) + control |= PCI_MSI_FLAGS_ENABLE; + pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); +} + +/* + * Architecture override returns true when the PCI MSI message should be + * written by the generic restore function. 
+ */ +bool __weak arch_restore_msi_irqs(struct pci_dev *dev) +{ + return true; +} + +static void __pci_restore_msi_state(struct pci_dev *dev) +{ + struct msi_desc *entry; + u16 control; + + if (!dev->msi_enabled) + return; + + entry = irq_get_msi_desc(dev->irq); + + pci_intx_for_msi(dev, 0); + pci_msi_set_enable(dev, 0); + if (arch_restore_msi_irqs(dev)) + __pci_write_msi_msg(entry, &entry->msg); + + pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); + pci_msi_update_mask(entry, 0, 0); + control &= ~PCI_MSI_FLAGS_QSIZE; + control |= (entry->pci.msi_attrib.multiple << 4) | PCI_MSI_FLAGS_ENABLE; + pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control); +} + +static void pci_msix_clear_and_set_ctrl(struct pci_dev *dev, u16 clear, u16 set) +{ + u16 ctrl; + + pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); + ctrl &= ~clear; + ctrl |= set; + pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, ctrl); +} + +static void __pci_restore_msix_state(struct pci_dev *dev) +{ + struct msi_desc *entry; + bool write_msg; + + if (!dev->msix_enabled) + return; + BUG_ON(list_empty(dev_to_msi_list(&dev->dev))); + + /* route the table */ + pci_intx_for_msi(dev, 0); + pci_msix_clear_and_set_ctrl(dev, 0, + PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL); + + write_msg = arch_restore_msi_irqs(dev); + + for_each_pci_msi_entry(entry, dev) { + if (write_msg) + __pci_write_msi_msg(entry, &entry->msg); + pci_msix_write_vector_ctrl(entry, entry->pci.msix_ctrl); + } + + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); +} + +void pci_restore_msi_state(struct pci_dev *dev) +{ + __pci_restore_msi_state(dev); + __pci_restore_msix_state(dev); +} +EXPORT_SYMBOL_GPL(pci_restore_msi_state); + +static struct msi_desc * +msi_setup_entry(struct pci_dev *dev, int nvec, struct irq_affinity *affd) +{ + struct irq_affinity_desc *masks = NULL; + struct msi_desc *entry; + u16 control; + + if (affd) + masks = irq_create_affinity_masks(nvec, affd); + + /* MSI Entry Initialization */ + entry = alloc_msi_entry(&dev->dev, nvec, masks); + if (!entry) + goto out; + + pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); + /* Lies, damned lies, and MSIs */ + if (dev->dev_flags & PCI_DEV_FLAGS_HAS_MSI_MASKING) + control |= PCI_MSI_FLAGS_MASKBIT; + + entry->pci.msi_attrib.is_msix = 0; + entry->pci.msi_attrib.is_64 = !!(control & PCI_MSI_FLAGS_64BIT); + entry->pci.msi_attrib.is_virtual = 0; + entry->pci.msi_attrib.entry_nr = 0; + entry->pci.msi_attrib.can_mask = !pci_msi_ignore_mask && + !!(control & PCI_MSI_FLAGS_MASKBIT); + entry->pci.msi_attrib.default_irq = dev->irq; /* Save IOAPIC IRQ */ + entry->pci.msi_attrib.multi_cap = (control & PCI_MSI_FLAGS_QMASK) >> 1; + entry->pci.msi_attrib.multiple = ilog2(__roundup_pow_of_two(nvec)); + + if (control & PCI_MSI_FLAGS_64BIT) + entry->pci.mask_pos = dev->msi_cap + PCI_MSI_MASK_64; + else + entry->pci.mask_pos = dev->msi_cap + PCI_MSI_MASK_32; + + /* Save the initial mask status */ + if (entry->pci.msi_attrib.can_mask) + pci_read_config_dword(dev, entry->pci.mask_pos, &entry->pci.msi_mask); + +out: + kfree(masks); + return entry; +} + +static int msi_verify_entries(struct pci_dev *dev) +{ + struct msi_desc *entry; + + if (!dev->no_64bit_msi) + return 0; + + for_each_pci_msi_entry(entry, dev) { + if (entry->msg.address_hi) { + pci_err(dev, "arch assigned 64-bit MSI address %#x%08x but device only supports 32 bits\n", + entry->msg.address_hi, entry->msg.address_lo); + return -EIO; + } + } + return 0; +} + +/** + * 
msi_capability_init - configure device's MSI capability structure + * @dev: pointer to the pci_dev data structure of MSI device function + * @nvec: number of interrupts to allocate + * @affd: description of automatic IRQ affinity assignments (may be %NULL) + * + * Setup the MSI capability structure of the device with the requested + * number of interrupts. A return value of zero indicates the successful + * setup of an entry with the new MSI IRQ. A negative return value indicates + * an error, and a positive return value indicates the number of interrupts + * which could have been allocated. + */ +static int msi_capability_init(struct pci_dev *dev, int nvec, + struct irq_affinity *affd) +{ + const struct attribute_group **groups; + struct msi_desc *entry; + int ret; + + pci_msi_set_enable(dev, 0); /* Disable MSI during set up */ + + entry = msi_setup_entry(dev, nvec, affd); + if (!entry) + return -ENOMEM; + + /* All MSIs are unmasked by default; mask them all */ + pci_msi_mask(entry, msi_multi_mask(entry)); + + list_add_tail(&entry->list, dev_to_msi_list(&dev->dev)); + + /* Configure MSI capability structure */ + ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSI); + if (ret) + goto err; + + ret = msi_verify_entries(dev); + if (ret) + goto err; + + groups = msi_populate_sysfs(&dev->dev); + if (IS_ERR(groups)) { + ret = PTR_ERR(groups); + goto err; + } + + dev->msi_irq_groups = groups; + + /* Set MSI enabled bits */ + pci_intx_for_msi(dev, 0); + pci_msi_set_enable(dev, 1); + dev->msi_enabled = 1; + + pcibios_free_irq(dev); + dev->irq = entry->irq; + return 0; + +err: + pci_msi_unmask(entry, msi_multi_mask(entry)); + free_msi_irqs(dev); + return ret; +} + +static void __iomem *msix_map_region(struct pci_dev *dev, + unsigned int nr_entries) +{ + resource_size_t phys_addr; + u32 table_offset; + unsigned long flags; + u8 bir; + + pci_read_config_dword(dev, dev->msix_cap + PCI_MSIX_TABLE, + &table_offset); + bir = (u8)(table_offset & PCI_MSIX_TABLE_BIR); + flags = pci_resource_flags(dev, bir); + if (!flags || (flags & IORESOURCE_UNSET)) + return NULL; + + table_offset &= PCI_MSIX_TABLE_OFFSET; + phys_addr = pci_resource_start(dev, bir) + table_offset; + + return ioremap(phys_addr, nr_entries * PCI_MSIX_ENTRY_SIZE); +} + +static int msix_setup_entries(struct pci_dev *dev, void __iomem *base, + struct msix_entry *entries, int nvec, + struct irq_affinity *affd) +{ + struct irq_affinity_desc *curmsk, *masks = NULL; + struct msi_desc *entry; + void __iomem *addr; + int ret, i; + int vec_count = pci_msix_vec_count(dev); + + if (affd) + masks = irq_create_affinity_masks(nvec, affd); + + for (i = 0, curmsk = masks; i < nvec; i++) { + entry = alloc_msi_entry(&dev->dev, 1, curmsk); + if (!entry) { + if (!i) + iounmap(base); + else + free_msi_irqs(dev); + /* No enough memory. 
Don't try again */ + ret = -ENOMEM; + goto out; + } + + entry->pci.msi_attrib.is_msix = 1; + entry->pci.msi_attrib.is_64 = 1; + + if (entries) + entry->pci.msi_attrib.entry_nr = entries[i].entry; + else + entry->pci.msi_attrib.entry_nr = i; + + entry->pci.msi_attrib.is_virtual = + entry->pci.msi_attrib.entry_nr >= vec_count; + + entry->pci.msi_attrib.can_mask = !pci_msi_ignore_mask && + !entry->pci.msi_attrib.is_virtual; + + entry->pci.msi_attrib.default_irq = dev->irq; + entry->pci.mask_base = base; + + if (entry->pci.msi_attrib.can_mask) { + addr = pci_msix_desc_addr(entry); + entry->pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL); + } + + list_add_tail(&entry->list, dev_to_msi_list(&dev->dev)); + if (masks) + curmsk++; + } + ret = 0; +out: + kfree(masks); + return ret; +} + +static void msix_update_entries(struct pci_dev *dev, struct msix_entry *entries) +{ + struct msi_desc *entry; + + if (entries) { + for_each_pci_msi_entry(entry, dev) { + entries->vector = entry->irq; + entries++; + } + } +} + +static void msix_mask_all(void __iomem *base, int tsize) +{ + u32 ctrl = PCI_MSIX_ENTRY_CTRL_MASKBIT; + int i; + + if (pci_msi_ignore_mask) + return; + + for (i = 0; i < tsize; i++, base += PCI_MSIX_ENTRY_SIZE) + writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL); +} + +/** + * msix_capability_init - configure device's MSI-X capability + * @dev: pointer to the pci_dev data structure of MSI-X device function + * @entries: pointer to an array of struct msix_entry entries + * @nvec: number of @entries + * @affd: Optional pointer to enable automatic affinity assignment + * + * Setup the MSI-X capability structure of device function with a + * single MSI-X IRQ. A return of zero indicates the successful setup of + * requested MSI-X entries with allocated IRQs or non-zero for otherwise. + **/ +static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries, + int nvec, struct irq_affinity *affd) +{ + const struct attribute_group **groups; + void __iomem *base; + int ret, tsize; + u16 control; + + /* + * Some devices require MSI-X to be enabled before the MSI-X + * registers can be accessed. Mask all the vectors to prevent + * interrupts coming in before they're fully set up. + */ + pci_msix_clear_and_set_ctrl(dev, 0, PCI_MSIX_FLAGS_MASKALL | + PCI_MSIX_FLAGS_ENABLE); + + pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control); + /* Request & Map MSI-X table region */ + tsize = msix_table_size(control); + base = msix_map_region(dev, tsize); + if (!base) { + ret = -ENOMEM; + goto out_disable; + } + + /* Ensure that all table entries are masked. */ + msix_mask_all(base, tsize); + + ret = msix_setup_entries(dev, base, entries, nvec, affd); + if (ret) + goto out_disable; + + ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX); + if (ret) + goto out_avail; + + /* Check if all MSI entries honor device restrictions */ + ret = msi_verify_entries(dev); + if (ret) + goto out_free; + + msix_update_entries(dev, entries); + + groups = msi_populate_sysfs(&dev->dev); + if (IS_ERR(groups)) { + ret = PTR_ERR(groups); + goto out_free; + } + + dev->msi_irq_groups = groups; + + /* Set MSI-X enabled bits and unmask the function */ + pci_intx_for_msi(dev, 0); + dev->msix_enabled = 1; + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); + + pcibios_free_irq(dev); + return 0; + +out_avail: + if (ret < 0) { + /* + * If we had some success, report the number of IRQs + * we succeeded in setting up. 
+ */ + struct msi_desc *entry; + int avail = 0; + + for_each_pci_msi_entry(entry, dev) { + if (entry->irq != 0) + avail++; + } + if (avail != 0) + ret = avail; + } + +out_free: + free_msi_irqs(dev); + +out_disable: + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); + + return ret; +} + +/** + * pci_msi_supported - check whether MSI may be enabled on a device + * @dev: pointer to the pci_dev data structure of MSI device function + * @nvec: how many MSIs have been requested? + * + * Look at global flags, the device itself, and its parent buses + * to determine if MSI/-X are supported for the device. If MSI/-X is + * supported return 1, else return 0. + **/ +static int pci_msi_supported(struct pci_dev *dev, int nvec) +{ + struct pci_bus *bus; + + /* MSI must be globally enabled and supported by the device */ + if (!pci_msi_enable) + return 0; + + if (!dev || dev->no_msi) + return 0; + + /* + * You can't ask to have 0 or less MSIs configured. + * a) it's stupid .. + * b) the list manipulation code assumes nvec >= 1. + */ + if (nvec < 1) + return 0; + + /* + * Any bridge which does NOT route MSI transactions from its + * secondary bus to its primary bus must set NO_MSI flag on + * the secondary pci_bus. + * + * The NO_MSI flag can either be set directly by: + * - arch-specific PCI host bus controller drivers (deprecated) + * - quirks for specific PCI bridges + * + * or indirectly by platform-specific PCI host bridge drivers by + * advertising the 'msi_domain' property, which results in + * the NO_MSI flag when no MSI domain is found for this bridge + * at probe time. + */ + for (bus = dev->bus; bus; bus = bus->parent) + if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI) + return 0; + + return 1; +} + +/** + * pci_msi_vec_count - Return the number of MSI vectors a device can send + * @dev: device to report about + * + * This function returns the number of MSI vectors a device requested via + * Multiple Message Capable register. It returns a negative errno if the + * device is not capable sending MSI interrupts. Otherwise, the call succeeds + * and returns a power of two, up to a maximum of 2^5 (32), according to the + * MSI specification. + **/ +int pci_msi_vec_count(struct pci_dev *dev) +{ + int ret; + u16 msgctl; + + if (!dev->msi_cap) + return -EINVAL; + + pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl); + ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1); + + return ret; +} +EXPORT_SYMBOL(pci_msi_vec_count); + +static void pci_msi_shutdown(struct pci_dev *dev) +{ + struct msi_desc *desc; + + if (!pci_msi_enable || !dev || !dev->msi_enabled) + return; + + BUG_ON(list_empty(dev_to_msi_list(&dev->dev))); + desc = first_pci_msi_entry(dev); + + pci_msi_set_enable(dev, 0); + pci_intx_for_msi(dev, 1); + dev->msi_enabled = 0; + + /* Return the device with MSI unmasked as initial states */ + pci_msi_unmask(desc, msi_multi_mask(desc)); + + /* Restore dev->irq to its default pin-assertion IRQ */ + dev->irq = desc->pci.msi_attrib.default_irq; + pcibios_alloc_irq(dev); +} + +void pci_disable_msi(struct pci_dev *dev) +{ + if (!pci_msi_enable || !dev || !dev->msi_enabled) + return; + + pci_msi_shutdown(dev); + free_msi_irqs(dev); +} +EXPORT_SYMBOL(pci_disable_msi); + +/** + * pci_msix_vec_count - return the number of device's MSI-X table entries + * @dev: pointer to the pci_dev data structure of MSI-X device function + * This function returns the number of device's MSI-X table entries and + * therefore the number of MSI-X vectors device is capable of sending. 
+ * It returns a negative errno if the device is not capable of sending MSI-X + * interrupts. + **/ +int pci_msix_vec_count(struct pci_dev *dev) +{ + u16 control; + + if (!dev->msix_cap) + return -EINVAL; + + pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control); + return msix_table_size(control); +} +EXPORT_SYMBOL(pci_msix_vec_count); + +static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, + int nvec, struct irq_affinity *affd, int flags) +{ + int nr_entries; + int i, j; + + if (!pci_msi_supported(dev, nvec) || dev->current_state != PCI_D0) + return -EINVAL; + + nr_entries = pci_msix_vec_count(dev); + if (nr_entries < 0) + return nr_entries; + if (nvec > nr_entries && !(flags & PCI_IRQ_VIRTUAL)) + return nr_entries; + + if (entries) { + /* Check for any invalid entries */ + for (i = 0; i < nvec; i++) { + if (entries[i].entry >= nr_entries) + return -EINVAL; /* invalid entry */ + for (j = i + 1; j < nvec; j++) { + if (entries[i].entry == entries[j].entry) + return -EINVAL; /* duplicate entry */ + } + } + } + + /* Check whether driver already requested for MSI IRQ */ + if (dev->msi_enabled) { + pci_info(dev, "can't enable MSI-X (MSI IRQ already assigned)\n"); + return -EINVAL; + } + return msix_capability_init(dev, entries, nvec, affd); +} + +static void pci_msix_shutdown(struct pci_dev *dev) +{ + struct msi_desc *entry; + + if (!pci_msi_enable || !dev || !dev->msix_enabled) + return; + + if (pci_dev_is_disconnected(dev)) { + dev->msix_enabled = 0; + return; + } + + /* Return the device with MSI-X masked as initial states */ + for_each_pci_msi_entry(entry, dev) + pci_msix_mask(entry); + + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); + pci_intx_for_msi(dev, 1); + dev->msix_enabled = 0; + pcibios_alloc_irq(dev); +} + +void pci_disable_msix(struct pci_dev *dev) +{ + if (!pci_msi_enable || !dev || !dev->msix_enabled) + return; + + pci_msix_shutdown(dev); + free_msi_irqs(dev); +} +EXPORT_SYMBOL(pci_disable_msix); + +void pci_no_msi(void) +{ + pci_msi_enable = 0; +} + +/** + * pci_msi_enabled - is MSI enabled? + * + * Returns true if MSI has not been disabled by the command-line option + * pci=nomsi. 
+ **/ +int pci_msi_enabled(void) +{ + return pci_msi_enable; +} +EXPORT_SYMBOL(pci_msi_enabled); + +static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec, + struct irq_affinity *affd) +{ + int nvec; + int rc; + + if (!pci_msi_supported(dev, minvec) || dev->current_state != PCI_D0) + return -EINVAL; + + /* Check whether driver already requested MSI-X IRQs */ + if (dev->msix_enabled) { + pci_info(dev, "can't enable MSI (MSI-X already enabled)\n"); + return -EINVAL; + } + + if (maxvec < minvec) + return -ERANGE; + + if (WARN_ON_ONCE(dev->msi_enabled)) + return -EINVAL; + + nvec = pci_msi_vec_count(dev); + if (nvec < 0) + return nvec; + if (nvec < minvec) + return -ENOSPC; + + if (nvec > maxvec) + nvec = maxvec; + + for (;;) { + if (affd) { + nvec = irq_calc_affinity_vectors(minvec, nvec, affd); + if (nvec < minvec) + return -ENOSPC; + } + + rc = msi_capability_init(dev, nvec, affd); + if (rc == 0) + return nvec; + + if (rc < 0) + return rc; + if (rc < minvec) + return -ENOSPC; + + nvec = rc; + } +} + +/* deprecated, don't use */ +int pci_enable_msi(struct pci_dev *dev) +{ + int rc = __pci_enable_msi_range(dev, 1, 1, NULL); + if (rc < 0) + return rc; + return 0; +} +EXPORT_SYMBOL(pci_enable_msi); + +static int __pci_enable_msix_range(struct pci_dev *dev, + struct msix_entry *entries, int minvec, + int maxvec, struct irq_affinity *affd, + int flags) +{ + int rc, nvec = maxvec; + + if (maxvec < minvec) + return -ERANGE; + + if (WARN_ON_ONCE(dev->msix_enabled)) + return -EINVAL; + + for (;;) { + if (affd) { + nvec = irq_calc_affinity_vectors(minvec, nvec, affd); + if (nvec < minvec) + return -ENOSPC; + } + + rc = __pci_enable_msix(dev, entries, nvec, affd, flags); + if (rc == 0) + return nvec; + + if (rc < 0) + return rc; + if (rc < minvec) + return -ENOSPC; + + nvec = rc; + } +} + +/** + * pci_enable_msix_range - configure device's MSI-X capability structure + * @dev: pointer to the pci_dev data structure of MSI-X device function + * @entries: pointer to an array of MSI-X entries + * @minvec: minimum number of MSI-X IRQs requested + * @maxvec: maximum number of MSI-X IRQs requested + * + * Setup the MSI-X capability structure of device function with a maximum + * possible number of interrupts in the range between @minvec and @maxvec + * upon its software driver call to request for MSI-X mode enabled on its + * hardware device function. It returns a negative errno if an error occurs. + * If it succeeds, it returns the actual number of interrupts allocated and + * indicates the successful configuration of MSI-X capability structure + * with new allocated MSI-X interrupts. + **/ +int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries, + int minvec, int maxvec) +{ + return __pci_enable_msix_range(dev, entries, minvec, maxvec, NULL, 0); +} +EXPORT_SYMBOL(pci_enable_msix_range); + +/** + * pci_alloc_irq_vectors_affinity - allocate multiple IRQs for a device + * @dev: PCI device to operate on + * @min_vecs: minimum number of vectors required (must be >= 1) + * @max_vecs: maximum (desired) number of vectors + * @flags: flags or quirks for the allocation + * @affd: optional description of the affinity requirements + * + * Allocate up to @max_vecs interrupt vectors for @dev, using MSI-X or MSI + * vectors if available, and fall back to a single legacy vector + * if neither is available. Return the number of vectors allocated, + * (which might be smaller than @max_vecs) if successful, or a negative + * error code on error. 
If less than @min_vecs interrupt vectors are + * available for @dev the function will fail with -ENOSPC. + * + * To get the Linux IRQ number used for a vector that can be passed to + * request_irq() use the pci_irq_vector() helper. + */ +int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs, + unsigned int max_vecs, unsigned int flags, + struct irq_affinity *affd) +{ + struct irq_affinity msi_default_affd = {0}; + int nvecs = -ENOSPC; + + if (flags & PCI_IRQ_AFFINITY) { + if (!affd) + affd = &msi_default_affd; + } else { + if (WARN_ON(affd)) + affd = NULL; + } + + if (flags & PCI_IRQ_MSIX) { + nvecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs, + affd, flags); + if (nvecs > 0) + return nvecs; + } + + if (flags & PCI_IRQ_MSI) { + nvecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, affd); + if (nvecs > 0) + return nvecs; + } + + /* use legacy IRQ if allowed */ + if (flags & PCI_IRQ_LEGACY) { + if (min_vecs == 1 && dev->irq) { + /* + * Invoke the affinity spreading logic to ensure that + * the device driver can adjust queue configuration + * for the single interrupt case. + */ + if (affd) + irq_create_affinity_masks(1, affd); + pci_intx(dev, 1); + return 1; + } + } + + return nvecs; +} +EXPORT_SYMBOL(pci_alloc_irq_vectors_affinity); + +/** + * pci_free_irq_vectors - free previously allocated IRQs for a device + * @dev: PCI device to operate on + * + * Undoes the allocations and enabling in pci_alloc_irq_vectors(). + */ +void pci_free_irq_vectors(struct pci_dev *dev) +{ + pci_disable_msix(dev); + pci_disable_msi(dev); +} +EXPORT_SYMBOL(pci_free_irq_vectors); + +/** + * pci_irq_vector - return Linux IRQ number of a device vector + * @dev: PCI device to operate on + * @nr: Interrupt vector index (0-based) + * + * @nr has the following meanings depending on the interrupt mode: + * MSI-X: The index in the MSI-X vector table + * MSI: The index of the enabled MSI vectors + * INTx: Must be 0 + * + * Return: The Linux interrupt number or -EINVAl if @nr is out of range. + */ +int pci_irq_vector(struct pci_dev *dev, unsigned int nr) +{ + if (dev->msix_enabled) { + struct msi_desc *entry; + + for_each_pci_msi_entry(entry, dev) { + if (entry->pci.msi_attrib.entry_nr == nr) + return entry->irq; + } + WARN_ON_ONCE(1); + return -EINVAL; + } + + if (dev->msi_enabled) { + struct msi_desc *entry = first_pci_msi_entry(dev); + + if (WARN_ON_ONCE(nr >= entry->nvec_used)) + return -EINVAL; + } else { + if (WARN_ON_ONCE(nr > 0)) + return -EINVAL; + } + + return dev->irq + nr; +} +EXPORT_SYMBOL(pci_irq_vector); + +/** + * pci_irq_get_affinity - return the affinity of a particular MSI vector + * @dev: PCI device to operate on + * @nr: device-relative interrupt vector index (0-based). 
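A minimal sketch of the preferred flow documented above (illustrative only; pdev, my_handler and the limits are assumptions, not part of this patch), combining pci_alloc_irq_vectors() with pci_irq_vector() and releasing everything with pci_free_irq_vectors():

	int i, ret, nvec;

	nvec = pci_alloc_irq_vectors(pdev, 1, 8,
				     PCI_IRQ_LEGACY | PCI_IRQ_MSI | PCI_IRQ_MSIX);
	if (nvec < 0)
		return nvec;

	for (i = 0; i < nvec; i++) {
		/* Map the 0-based vector index to the Linux IRQ number */
		ret = request_irq(pci_irq_vector(pdev, i), my_handler, 0,
				  "my_device", pdev);
		if (ret)
			break;
	}

	/* Teardown path: free_irq() each vector, then */
	pci_free_irq_vectors(pdev);
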
+ * + * @nr has the following meanings depending on the interrupt mode: + * MSI-X: The index in the MSI-X vector table + * MSI: The index of the enabled MSI vectors + * INTx: Must be 0 + * + * Return: A cpumask pointer or NULL if @nr is out of range + */ +const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr) +{ + if (dev->msix_enabled) { + struct msi_desc *entry; + + for_each_pci_msi_entry(entry, dev) { + if (entry->pci.msi_attrib.entry_nr == nr) + return &entry->affinity->mask; + } + WARN_ON_ONCE(1); + return NULL; + } else if (dev->msi_enabled) { + struct msi_desc *entry = first_pci_msi_entry(dev); + + if (WARN_ON_ONCE(!entry || !entry->affinity || + nr >= entry->nvec_used)) + return NULL; + + return &entry->affinity[nr].mask; + } else { + return cpu_possible_mask; + } +} +EXPORT_SYMBOL(pci_irq_get_affinity); + +struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc) +{ + return to_pci_dev(desc->dev); +} +EXPORT_SYMBOL(msi_desc_to_pci_dev); + +#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN +/** + * pci_msi_domain_write_msg - Helper to write MSI message to PCI config space + * @irq_data: Pointer to interrupt data of the MSI interrupt + * @msg: Pointer to the message + */ +static void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg) +{ + struct msi_desc *desc = irq_data_get_msi_desc(irq_data); + + /* + * For MSI-X desc->irq is always equal to irq_data->irq. For + * MSI only the first interrupt of MULTI MSI passes the test. + */ + if (desc->irq == irq_data->irq) + __pci_write_msi_msg(desc, msg); +} + +/** + * pci_msi_domain_calc_hwirq - Generate a unique ID for an MSI source + * @desc: Pointer to the MSI descriptor + * + * The ID number is only used within the irqdomain. + */ +static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc) +{ + struct pci_dev *dev = msi_desc_to_pci_dev(desc); + + return (irq_hw_number_t)desc->pci.msi_attrib.entry_nr | + pci_dev_id(dev) << 11 | + (pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27; +} + +static inline bool pci_msi_desc_is_multi_msi(struct msi_desc *desc) +{ + return !desc->pci.msi_attrib.is_msix && desc->nvec_used > 1; +} + +/** + * pci_msi_domain_check_cap - Verify that @domain supports the capabilities + * for @dev + * @domain: The interrupt domain to check + * @info: The domain info for verification + * @dev: The device to check + * + * Returns: + * 0 if the functionality is supported + * 1 if Multi MSI is requested, but the domain does not support it + * -ENOTSUPP otherwise + */ +int pci_msi_domain_check_cap(struct irq_domain *domain, + struct msi_domain_info *info, struct device *dev) +{ + struct msi_desc *desc = first_pci_msi_entry(to_pci_dev(dev)); + + /* Special handling to support __pci_enable_msi_range() */ + if (pci_msi_desc_is_multi_msi(desc) && + !(info->flags & MSI_FLAG_MULTI_PCI_MSI)) + return 1; + else if (desc->pci.msi_attrib.is_msix && !(info->flags & MSI_FLAG_PCI_MSIX)) + return -ENOTSUPP; + + return 0; +} + +static int pci_msi_domain_handle_error(struct irq_domain *domain, + struct msi_desc *desc, int error) +{ + /* Special handling to support __pci_enable_msi_range() */ + if (pci_msi_desc_is_multi_msi(desc) && error == -ENOSPC) + return 1; + + return error; +} + +static void pci_msi_domain_set_desc(msi_alloc_info_t *arg, + struct msi_desc *desc) +{ + arg->desc = desc; + arg->hwirq = pci_msi_domain_calc_hwirq(desc); +} + +static struct msi_domain_ops pci_msi_domain_ops_default = { + .set_desc = pci_msi_domain_set_desc, + .msi_check = pci_msi_domain_check_cap, + .handle_error = 
pci_msi_domain_handle_error, +}; + +static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info) +{ + struct msi_domain_ops *ops = info->ops; + + if (ops == NULL) { + info->ops = &pci_msi_domain_ops_default; + } else { + if (ops->set_desc == NULL) + ops->set_desc = pci_msi_domain_set_desc; + if (ops->msi_check == NULL) + ops->msi_check = pci_msi_domain_check_cap; + if (ops->handle_error == NULL) + ops->handle_error = pci_msi_domain_handle_error; + } +} + +static void pci_msi_domain_update_chip_ops(struct msi_domain_info *info) +{ + struct irq_chip *chip = info->chip; + + BUG_ON(!chip); + if (!chip->irq_write_msi_msg) + chip->irq_write_msi_msg = pci_msi_domain_write_msg; + if (!chip->irq_mask) + chip->irq_mask = pci_msi_mask_irq; + if (!chip->irq_unmask) + chip->irq_unmask = pci_msi_unmask_irq; +} + +/** + * pci_msi_create_irq_domain - Create a MSI interrupt domain + * @fwnode: Optional fwnode of the interrupt controller + * @info: MSI domain info + * @parent: Parent irq domain + * + * Updates the domain and chip ops and creates a MSI interrupt domain. + * + * Returns: + * A domain pointer or NULL in case of failure. + */ +struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode, + struct msi_domain_info *info, + struct irq_domain *parent) +{ + struct irq_domain *domain; + + if (WARN_ON(info->flags & MSI_FLAG_LEVEL_CAPABLE)) + info->flags &= ~MSI_FLAG_LEVEL_CAPABLE; + + if (info->flags & MSI_FLAG_USE_DEF_DOM_OPS) + pci_msi_domain_update_dom_ops(info); + if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS) + pci_msi_domain_update_chip_ops(info); + + info->flags |= MSI_FLAG_ACTIVATE_EARLY; + if (IS_ENABLED(CONFIG_GENERIC_IRQ_RESERVATION_MODE)) + info->flags |= MSI_FLAG_MUST_REACTIVATE; + + /* PCI-MSI is oneshot-safe */ + info->chip->flags |= IRQCHIP_ONESHOT_SAFE; + + domain = msi_create_irq_domain(fwnode, info, parent); + if (!domain) + return NULL; + + irq_domain_update_bus_token(domain, DOMAIN_BUS_PCI_MSI); + return domain; +} +EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain); + +/* + * Users of the generic MSI infrastructure expect a device to have a single ID, + * so with DMA aliases we have to pick the least-worst compromise. Devices with + * DMA phantom functions tend to still emit MSIs from the real function number, + * so we ignore those and only consider topological aliases where either the + * alias device or RID appears on a different bus number. We also make the + * reasonable assumption that bridges are walked in an upstream direction (so + * the last one seen wins), and the much braver assumption that the most likely + * case is that of PCI->PCIe so we should always use the alias RID. This echoes + * the logic from intel_irq_remapping's set_msi_sid(), which presumably works + * well enough in practice; in the face of the horrible PCIe<->PCI-X conditions + * for taking ownership all we can really do is close our eyes and hope... + */ +static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data) +{ + u32 *pa = data; + u8 bus = PCI_BUS_NUM(*pa); + + if (pdev->bus->number != bus || PCI_BUS_NUM(alias) != bus) + *pa = alias; + + return 0; +} + +/** + * pci_msi_domain_get_msi_rid - Get the MSI requester id (RID) + * @domain: The interrupt domain + * @pdev: The PCI device. + * + * The RID for a device is formed from the alias, with a firmware + * supplied mapping applied + * + * Returns: The RID. 
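To illustrate the hwirq encoding used by pci_msi_domain_calc_hwirq() above (example values only, not part of the patch), a hypothetical device 0000:02:01.3 using MSI-X table entry 5 works out to:

	pci_dev_id(dev) = 0x02 << 8 | PCI_DEVFN(1, 3)    = 0x020b
	hwirq           = 5 | (0x020b << 11) | (0 << 27) = 0x105805
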
+ */ +u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev) +{ + struct device_node *of_node; + u32 rid = pci_dev_id(pdev); + + pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); + + of_node = irq_domain_get_of_node(domain); + rid = of_node ? of_msi_map_id(&pdev->dev, of_node, rid) : + iort_msi_map_id(&pdev->dev, rid); + + return rid; +} + +/** + * pci_msi_get_device_domain - Get the MSI domain for a given PCI device + * @pdev: The PCI device + * + * Use the firmware data to find a device-specific MSI domain + * (i.e. not one that is set as a default). + * + * Returns: The corresponding MSI domain or NULL if none has been found. + */ +struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev) +{ + struct irq_domain *dom; + u32 rid = pci_dev_id(pdev); + + pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); + dom = of_msi_map_get_device_domain(&pdev->dev, rid, DOMAIN_BUS_PCI_MSI); + if (!dom) + dom = iort_get_device_domain(&pdev->dev, rid, + DOMAIN_BUS_PCI_MSI); + return dom; +} + +/** + * pci_dev_has_special_msi_domain - Check whether the device is handled by + * a non-standard PCI-MSI domain + * @pdev: The PCI device to check. + * + * Returns: True if the device irqdomain or the bus irqdomain is + * non-standard PCI/MSI. + */ +bool pci_dev_has_special_msi_domain(struct pci_dev *pdev) +{ + struct irq_domain *dom = dev_get_msi_domain(&pdev->dev); + + if (!dom) + dom = dev_get_msi_domain(&pdev->bus->dev); + + if (!dom) + return true; + + return dom->bus_token != DOMAIN_BUS_PCI_MSI; +} + +#endif /* CONFIG_PCI_MSI_IRQ_DOMAIN */ +#endif /* CONFIG_PCI_MSI */ + +void pci_msi_init(struct pci_dev *dev) +{ + u16 ctrl; + + /* + * Disable the MSI hardware to avoid screaming interrupts + * during boot. This is the power on reset default so + * usually this should be a noop. 
+ */ + dev->msi_cap = pci_find_capability(dev, PCI_CAP_ID_MSI); + if (!dev->msi_cap) + return; + + pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &ctrl); + if (ctrl & PCI_MSI_FLAGS_ENABLE) + pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, + ctrl & ~PCI_MSI_FLAGS_ENABLE); + + if (!(ctrl & PCI_MSI_FLAGS_64BIT)) + dev->no_64bit_msi = 1; +} + +void pci_msix_init(struct pci_dev *dev) +{ + u16 ctrl; + + dev->msix_cap = pci_find_capability(dev, PCI_CAP_ID_MSIX); + if (!dev->msix_cap) + return; + + pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); + if (ctrl & PCI_MSIX_FLAGS_ENABLE) + pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, + ctrl & ~PCI_MSIX_FLAGS_ENABLE); +} From patchwork Mon Dec 6 22:27:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564275 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=YKghtY03; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=A1eH8lsr; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J3T1hYWz9s1l for ; Tue, 7 Dec 2021 09:28:05 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357598AbhLFWb2 (ORCPT ); Mon, 6 Dec 2021 17:31:28 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45684 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1357321AbhLFWbU (ORCPT ); Mon, 6 Dec 2021 17:31:20 -0500 Message-ID: <20211206210224.710137730@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829670; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=ib3yuqujnroRqi+hLldSTb0C/KdcxiKhDFhBeAeajh8=; b=YKghtY03bcoLP8OvEUv6g9Jjx1TFo+pzmg+zViZrV9C1Zqdyd+udciNW1BVKPTuIL8Z+sh VkfSoebHJBGRLhECku0Xcu/1wA6jahAvvipQ9NOJi7sfSqnslLIaiWdUsi6UumMv8Ii0JT 2DbzczPuytoquIUE4qI7a+Rs+3SZz2k8vJC78RhGv6O3a0yi2pFj1Gt6yYNlPg+9HE8EhD OMVLu3Q5kecJhaNaLyTGpaijaksYWeRdazPJXewvgHnUCQUGdmFPOc2DkNnsaNSCejEPc7 IQlwWECz6/FR1cQY3Dggy4ASx4++0pFep56S/3oZez+mph5GyIICQFrfOQSrhQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829670; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=ib3yuqujnroRqi+hLldSTb0C/KdcxiKhDFhBeAeajh8=; b=A1eH8lsrzvvUb+UOPZHwegNVeviEvBxF3n2PgKLDm1Kz3CElL95PW6TVkRlKmYp31F7use sr1vtn736Uy7EoCw== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, x86@kernel.org, 
xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 16/23] PCI/MSI: Split out CONFIG_PCI_MSI independent part References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:49 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org These functions are required even when CONFIG_PCI_MSI is not set. Move them to their own file. Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Reviewed-by: Greg Kroah-Hartman Acked-by: Bjorn Helgaas --- drivers/pci/msi/Makefile | 3 ++- drivers/pci/msi/msi.c | 39 --------------------------------------- drivers/pci/msi/pcidev_msi.c | 43 +++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 45 insertions(+), 40 deletions(-) --- a/drivers/pci/msi/Makefile +++ b/drivers/pci/msi/Makefile @@ -1,4 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 # # Makefile for the PCI/MSI -obj-$(CONFIG_PCI) += msi.o +obj-$(CONFIG_PCI) += pcidev_msi.o +obj-$(CONFIG_PCI_MSI) += msi.o --- a/drivers/pci/msi/msi.c +++ b/drivers/pci/msi/msi.c @@ -18,8 +18,6 @@ #include "../pci.h" -#ifdef CONFIG_PCI_MSI - static int pci_msi_enable = 1; int pci_msi_ignore_mask; @@ -1493,40 +1491,3 @@ bool pci_dev_has_special_msi_domain(stru } #endif /* CONFIG_PCI_MSI_IRQ_DOMAIN */ -#endif /* CONFIG_PCI_MSI */ - -void pci_msi_init(struct pci_dev *dev) -{ - u16 ctrl; - - /* - * Disable the MSI hardware to avoid screaming interrupts - * during boot. This is the power on reset default so - * usually this should be a noop. - */ - dev->msi_cap = pci_find_capability(dev, PCI_CAP_ID_MSI); - if (!dev->msi_cap) - return; - - pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &ctrl); - if (ctrl & PCI_MSI_FLAGS_ENABLE) - pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, - ctrl & ~PCI_MSI_FLAGS_ENABLE); - - if (!(ctrl & PCI_MSI_FLAGS_64BIT)) - dev->no_64bit_msi = 1; -} - -void pci_msix_init(struct pci_dev *dev) -{ - u16 ctrl; - - dev->msix_cap = pci_find_capability(dev, PCI_CAP_ID_MSIX); - if (!dev->msix_cap) - return; - - pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); - if (ctrl & PCI_MSIX_FLAGS_ENABLE) - pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, - ctrl & ~PCI_MSIX_FLAGS_ENABLE); -} --- /dev/null +++ b/drivers/pci/msi/pcidev_msi.c @@ -0,0 +1,43 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * MSI[X} related functions which are available unconditionally. + */ +#include "../pci.h" + +/* + * Disable the MSI[X] hardware to avoid screaming interrupts during boot. + * This is the power on reset default so usually this should be a noop. 
+ */ + +void pci_msi_init(struct pci_dev *dev) +{ + u16 ctrl; + + dev->msi_cap = pci_find_capability(dev, PCI_CAP_ID_MSI); + if (!dev->msi_cap) + return; + + pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &ctrl); + if (ctrl & PCI_MSI_FLAGS_ENABLE) { + pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, + ctrl & ~PCI_MSI_FLAGS_ENABLE); + } + + if (!(ctrl & PCI_MSI_FLAGS_64BIT)) + dev->no_64bit_msi = 1; +} + +void pci_msix_init(struct pci_dev *dev) +{ + u16 ctrl; + + dev->msix_cap = pci_find_capability(dev, PCI_CAP_ID_MSIX); + if (!dev->msix_cap) + return; + + pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl); + if (ctrl & PCI_MSIX_FLAGS_ENABLE) { + pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, + ctrl & ~PCI_MSIX_FLAGS_ENABLE); + } +} From patchwork Mon Dec 6 22:27:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564284 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=yogWCLCl; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=xWk0QhW9; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J3r0M1Cz9s1l for ; Tue, 7 Dec 2021 09:28:24 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1358003AbhLFWbr (ORCPT ); Mon, 6 Dec 2021 17:31:47 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:46106 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1357140AbhLFWbX (ORCPT ); Mon, 6 Dec 2021 17:31:23 -0500 Message-ID: <20211206210224.763574089@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829671; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=yrY1ZB0NumM3iY3b+dL6tdvji5YdLo8YZzow0Pc98K0=; b=yogWCLClZ5HawXr86pUvSu27UdiujY09fLCr9qODZ6WHG6tdy1jo4R5oUed+NZBlb4D+aL 7QpxacnGafJnNFuGA3Qr4uuaCUoNpqcC0RQIn/nv2i/ZqqOPRpDk41N3evYyZcNUIORUqQ RmyJmVXQ4SgC9OEaoFrfssQoJ8Xwn+4ZyexyySLnXH1jXW3w7H7gjCvNcp6z/9YZb3iZef SnjRI8A4v1jtxW1cvkAQLxfhlEO0mCCMG5y3BXvNH9MRSgIaMFqX4JKVHvpx5vMHBoS0Bb XHborQKOUtJXUnqPXT77WiZSbh3UWDmLFWdvo+NEVEOm05/Ol4MBGoFkTWQRhw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829671; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=yrY1ZB0NumM3iY3b+dL6tdvji5YdLo8YZzow0Pc98K0=; b=xWk0QhW9rvUKaVhvOUrP23TOY6t2RbQv3eb8/ILfXzb+qQTtu1Dgw8jDLlkSfce65pUPAy AUFj7t7KeK1aKUCQ== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle 
Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 17/23] PCI/MSI: Split out !IRQDOMAIN code References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:51 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Split out the non irqdomain code into its own file. Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Reviewed-by: Greg Kroah-Hartman Acked-by: Bjorn Helgaas --- V2: Add proper includes and fix variable name - Cedric --- drivers/pci/msi/Makefile | 5 ++-- drivers/pci/msi/legacy.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++ drivers/pci/msi/msi.c | 46 ----------------------------------------- 3 files changed, 55 insertions(+), 48 deletions(-) --- a/drivers/pci/msi/Makefile +++ b/drivers/pci/msi/Makefile @@ -1,5 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 # # Makefile for the PCI/MSI -obj-$(CONFIG_PCI) += pcidev_msi.o -obj-$(CONFIG_PCI_MSI) += msi.o +obj-$(CONFIG_PCI) += pcidev_msi.o +obj-$(CONFIG_PCI_MSI) += msi.o +obj-$(CONFIG_PCI_MSI_ARCH_FALLBACKS) += legacy.o --- /dev/null +++ b/drivers/pci/msi/legacy.c @@ -0,0 +1,52 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCI Message Signaled Interrupt (MSI). + * + * Legacy architecture specific setup and teardown mechanism. + */ +#include +#include + +/* Arch hooks */ +int __weak arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc) +{ + return -EINVAL; +} + +void __weak arch_teardown_msi_irq(unsigned int irq) +{ +} + +int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) +{ + struct msi_desc *desc; + int ret; + + /* + * If an architecture wants to support multiple MSI, it needs to + * override arch_setup_msi_irqs() + */ + if (type == PCI_CAP_ID_MSI && nvec > 1) + return 1; + + for_each_pci_msi_entry(desc, dev) { + ret = arch_setup_msi_irq(dev, desc); + if (ret) + return ret < 0 ? ret : -ENOSPC; + } + + return 0; +} + +void __weak arch_teardown_msi_irqs(struct pci_dev *dev) +{ + struct msi_desc *desc; + int i; + + for_each_pci_msi_entry(desc, dev) { + if (desc->irq) { + for (i = 0; i < desc->nvec_used; i++) + arch_teardown_msi_irq(desc->irq + i); + } + } +} --- a/drivers/pci/msi/msi.c +++ b/drivers/pci/msi/msi.c @@ -50,52 +50,6 @@ static void pci_msi_teardown_msi_irqs(st #define pci_msi_teardown_msi_irqs arch_teardown_msi_irqs #endif -#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS -/* Arch hooks */ -int __weak arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc) -{ - return -EINVAL; -} - -void __weak arch_teardown_msi_irq(unsigned int irq) -{ -} - -int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) -{ - struct msi_desc *entry; - int ret; - - /* - * If an architecture wants to support multiple MSI, it needs to - * override arch_setup_msi_irqs() - */ - if (type == PCI_CAP_ID_MSI && nvec > 1) - return 1; - - for_each_pci_msi_entry(entry, dev) { - ret = arch_setup_msi_irq(dev, entry); - if (ret < 0) - return ret; - if (ret > 0) - return -ENOSPC; - } - - return 0; -} - -void __weak arch_teardown_msi_irqs(struct pci_dev *dev) -{ - int i; - struct msi_desc *entry; - - for_each_pci_msi_entry(entry, dev) - if (entry->irq) - for (i = 0; i < entry->nvec_used; i++) - arch_teardown_msi_irq(entry->irq + i); -} -#endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */ - /* * PCI 2.3 does not specify mask bits for each MSI interrupt. 
Attempting to * mask all MSI interrupts by clearing the MSI enable bit does not work From patchwork Mon Dec 6 22:27:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564287 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=vfpqB1EP; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=4082KXHz; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J455LbSz9sCD for ; Tue, 7 Dec 2021 09:28:37 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356990AbhLFWcA (ORCPT ); Mon, 6 Dec 2021 17:32:00 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45918 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1357469AbhLFWbY (ORCPT ); Mon, 6 Dec 2021 17:31:24 -0500 Message-ID: <20211206210224.817754783@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829673; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=gtmzP0QXcojZTfwh0b3CXIKwtMUd+mwqAeyOUXCCvG4=; b=vfpqB1EPtmssuD5id4/VrSZYSiLwnckjq/anGWKx/Xp8WX7t79DcF+2VZPzENWQgNki5uf crep/FhLwGx8/MzRGdCmWqkq+m5K5bkju8FHWJQu7RM7r3xfcJKSpdZESCk2X5Y0OXIfhY QIjVqsZFDgmvEHFXI0T005yihpkJ94XBQkVEpurEE7Gp1E1MaJd2psFV3shZ2GlaqYKWDC oJJ1295cYhxD0luai5sQijbxNshugZRDr/Bp2bTw+8MbpCu4wTdUZLb4lZ2SN0AVueg15j tUjKgxHWL4FIDnSFITIIPOlSsvZ5wNZXuqJ6tj8pwN8gDFgXNwo++B8YHz0RQg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829673; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=gtmzP0QXcojZTfwh0b3CXIKwtMUd+mwqAeyOUXCCvG4=; b=4082KXHzJsSMf1c0EVxwBcQqrnyiZAe4a8hWndgO8+KXQ211i2rXjv4AYuNbJmZfYV6OiE 21F2XIZX6iCAYIDw== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 18/23] PCI/MSI: Split out irqdomain code References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:52 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Move the irqdomain specific code into it's own file. 
Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Reviewed-by: Greg Kroah-Hartman Acked-by: Bjorn Helgaas --- drivers/pci/msi/Makefile | 1 drivers/pci/msi/irqdomain.c | 279 ++++++++++++++++++++++++++++++++++++++ drivers/pci/msi/legacy.c | 13 + drivers/pci/msi/msi.c | 319 +------------------------------------------- drivers/pci/msi/msi.h | 39 +++++ include/linux/msi.h | 11 - 6 files changed, 340 insertions(+), 322 deletions(-) --- a/drivers/pci/msi/Makefile +++ b/drivers/pci/msi/Makefile @@ -3,4 +3,5 @@ # Makefile for the PCI/MSI obj-$(CONFIG_PCI) += pcidev_msi.o obj-$(CONFIG_PCI_MSI) += msi.o +obj-$(CONFIG_PCI_MSI_IRQ_DOMAIN) += irqdomain.o obj-$(CONFIG_PCI_MSI_ARCH_FALLBACKS) += legacy.o --- /dev/null +++ b/drivers/pci/msi/irqdomain.c @@ -0,0 +1,279 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCI Message Signaled Interrupt (MSI) - irqdomain support + */ +#include +#include +#include + +#include "msi.h" + +int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) +{ + struct irq_domain *domain; + + domain = dev_get_msi_domain(&dev->dev); + if (domain && irq_domain_is_hierarchy(domain)) + return msi_domain_alloc_irqs(domain, &dev->dev, nvec); + + return pci_msi_legacy_setup_msi_irqs(dev, nvec, type); +} + +void pci_msi_teardown_msi_irqs(struct pci_dev *dev) +{ + struct irq_domain *domain; + + domain = dev_get_msi_domain(&dev->dev); + if (domain && irq_domain_is_hierarchy(domain)) + msi_domain_free_irqs(domain, &dev->dev); + else + pci_msi_legacy_teardown_msi_irqs(dev); +} + +/** + * pci_msi_domain_write_msg - Helper to write MSI message to PCI config space + * @irq_data: Pointer to interrupt data of the MSI interrupt + * @msg: Pointer to the message + */ +static void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg) +{ + struct msi_desc *desc = irq_data_get_msi_desc(irq_data); + + /* + * For MSI-X desc->irq is always equal to irq_data->irq. For + * MSI only the first interrupt of MULTI MSI passes the test. + */ + if (desc->irq == irq_data->irq) + __pci_write_msi_msg(desc, msg); +} + +/** + * pci_msi_domain_calc_hwirq - Generate a unique ID for an MSI source + * @desc: Pointer to the MSI descriptor + * + * The ID number is only used within the irqdomain. 
+ */ +static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc) +{ + struct pci_dev *dev = msi_desc_to_pci_dev(desc); + + return (irq_hw_number_t)desc->pci.msi_attrib.entry_nr | + pci_dev_id(dev) << 11 | + (pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27; +} + +static inline bool pci_msi_desc_is_multi_msi(struct msi_desc *desc) +{ + return !desc->pci.msi_attrib.is_msix && desc->nvec_used > 1; +} + +/** + * pci_msi_domain_check_cap - Verify that @domain supports the capabilities + * for @dev + * @domain: The interrupt domain to check + * @info: The domain info for verification + * @dev: The device to check + * + * Returns: + * 0 if the functionality is supported + * 1 if Multi MSI is requested, but the domain does not support it + * -ENOTSUPP otherwise + */ +int pci_msi_domain_check_cap(struct irq_domain *domain, + struct msi_domain_info *info, struct device *dev) +{ + struct msi_desc *desc = first_pci_msi_entry(to_pci_dev(dev)); + + /* Special handling to support __pci_enable_msi_range() */ + if (pci_msi_desc_is_multi_msi(desc) && + !(info->flags & MSI_FLAG_MULTI_PCI_MSI)) + return 1; + else if (desc->pci.msi_attrib.is_msix && !(info->flags & MSI_FLAG_PCI_MSIX)) + return -ENOTSUPP; + + return 0; +} + +static int pci_msi_domain_handle_error(struct irq_domain *domain, + struct msi_desc *desc, int error) +{ + /* Special handling to support __pci_enable_msi_range() */ + if (pci_msi_desc_is_multi_msi(desc) && error == -ENOSPC) + return 1; + + return error; +} + +static void pci_msi_domain_set_desc(msi_alloc_info_t *arg, + struct msi_desc *desc) +{ + arg->desc = desc; + arg->hwirq = pci_msi_domain_calc_hwirq(desc); +} + +static struct msi_domain_ops pci_msi_domain_ops_default = { + .set_desc = pci_msi_domain_set_desc, + .msi_check = pci_msi_domain_check_cap, + .handle_error = pci_msi_domain_handle_error, +}; + +static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info) +{ + struct msi_domain_ops *ops = info->ops; + + if (ops == NULL) { + info->ops = &pci_msi_domain_ops_default; + } else { + if (ops->set_desc == NULL) + ops->set_desc = pci_msi_domain_set_desc; + if (ops->msi_check == NULL) + ops->msi_check = pci_msi_domain_check_cap; + if (ops->handle_error == NULL) + ops->handle_error = pci_msi_domain_handle_error; + } +} + +static void pci_msi_domain_update_chip_ops(struct msi_domain_info *info) +{ + struct irq_chip *chip = info->chip; + + BUG_ON(!chip); + if (!chip->irq_write_msi_msg) + chip->irq_write_msi_msg = pci_msi_domain_write_msg; + if (!chip->irq_mask) + chip->irq_mask = pci_msi_mask_irq; + if (!chip->irq_unmask) + chip->irq_unmask = pci_msi_unmask_irq; +} + +/** + * pci_msi_create_irq_domain - Create a MSI interrupt domain + * @fwnode: Optional fwnode of the interrupt controller + * @info: MSI domain info + * @parent: Parent irq domain + * + * Updates the domain and chip ops and creates a MSI interrupt domain. + * + * Returns: + * A domain pointer or NULL in case of failure. 
+ */ +struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode, + struct msi_domain_info *info, + struct irq_domain *parent) +{ + struct irq_domain *domain; + + if (WARN_ON(info->flags & MSI_FLAG_LEVEL_CAPABLE)) + info->flags &= ~MSI_FLAG_LEVEL_CAPABLE; + + if (info->flags & MSI_FLAG_USE_DEF_DOM_OPS) + pci_msi_domain_update_dom_ops(info); + if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS) + pci_msi_domain_update_chip_ops(info); + + info->flags |= MSI_FLAG_ACTIVATE_EARLY; + if (IS_ENABLED(CONFIG_GENERIC_IRQ_RESERVATION_MODE)) + info->flags |= MSI_FLAG_MUST_REACTIVATE; + + /* PCI-MSI is oneshot-safe */ + info->chip->flags |= IRQCHIP_ONESHOT_SAFE; + + domain = msi_create_irq_domain(fwnode, info, parent); + if (!domain) + return NULL; + + irq_domain_update_bus_token(domain, DOMAIN_BUS_PCI_MSI); + return domain; +} +EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain); + +/* + * Users of the generic MSI infrastructure expect a device to have a single ID, + * so with DMA aliases we have to pick the least-worst compromise. Devices with + * DMA phantom functions tend to still emit MSIs from the real function number, + * so we ignore those and only consider topological aliases where either the + * alias device or RID appears on a different bus number. We also make the + * reasonable assumption that bridges are walked in an upstream direction (so + * the last one seen wins), and the much braver assumption that the most likely + * case is that of PCI->PCIe so we should always use the alias RID. This echoes + * the logic from intel_irq_remapping's set_msi_sid(), which presumably works + * well enough in practice; in the face of the horrible PCIe<->PCI-X conditions + * for taking ownership all we can really do is close our eyes and hope... + */ +static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data) +{ + u32 *pa = data; + u8 bus = PCI_BUS_NUM(*pa); + + if (pdev->bus->number != bus || PCI_BUS_NUM(alias) != bus) + *pa = alias; + + return 0; +} + +/** + * pci_msi_domain_get_msi_rid - Get the MSI requester id (RID) + * @domain: The interrupt domain + * @pdev: The PCI device. + * + * The RID for a device is formed from the alias, with a firmware + * supplied mapping applied + * + * Returns: The RID. + */ +u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev) +{ + struct device_node *of_node; + u32 rid = pci_dev_id(pdev); + + pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); + + of_node = irq_domain_get_of_node(domain); + rid = of_node ? of_msi_map_id(&pdev->dev, of_node, rid) : + iort_msi_map_id(&pdev->dev, rid); + + return rid; +} + +/** + * pci_msi_get_device_domain - Get the MSI domain for a given PCI device + * @pdev: The PCI device + * + * Use the firmware data to find a device-specific MSI domain + * (i.e. not one that is set as a default). + * + * Returns: The corresponding MSI domain or NULL if none has been found. + */ +struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev) +{ + struct irq_domain *dom; + u32 rid = pci_dev_id(pdev); + + pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); + dom = of_msi_map_get_device_domain(&pdev->dev, rid, DOMAIN_BUS_PCI_MSI); + if (!dom) + dom = iort_get_device_domain(&pdev->dev, rid, + DOMAIN_BUS_PCI_MSI); + return dom; +} + +/** + * pci_dev_has_special_msi_domain - Check whether the device is handled by + * a non-standard PCI-MSI domain + * @pdev: The PCI device to check. + * + * Returns: True if the device irqdomain or the bus irqdomain is + * non-standard PCI/MSI. 
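As a minimal sketch of how an irqchip driver would typically feed pci_msi_create_irq_domain() (not from this series; the names, and the fwnode/parent/msi_domain variables, are assumed from the surrounding driver), relying on the default dom/chip ops filled in above:

	static struct irq_chip my_pci_msi_chip = {
		.name	= "MY-PCI-MSI",
		/*
		 * irq_mask/irq_unmask/irq_write_msi_msg are left NULL and get
		 * the PCI defaults via MSI_FLAG_USE_DEF_CHIP_OPS below.
		 */
	};

	static struct msi_domain_info my_pci_msi_info = {
		.flags	= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
			  MSI_FLAG_PCI_MSIX | MSI_FLAG_MULTI_PCI_MSI,
		.chip	= &my_pci_msi_chip,
	};

	msi_domain = pci_msi_create_irq_domain(fwnode, &my_pci_msi_info, parent);
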
+ */ +bool pci_dev_has_special_msi_domain(struct pci_dev *pdev) +{ + struct irq_domain *dom = dev_get_msi_domain(&pdev->dev); + + if (!dom) + dom = dev_get_msi_domain(&pdev->bus->dev); + + if (!dom) + return true; + + return dom->bus_token != DOMAIN_BUS_PCI_MSI; +} --- a/drivers/pci/msi/legacy.c +++ b/drivers/pci/msi/legacy.c @@ -4,8 +4,7 @@ * * Legacy architecture specific setup and teardown mechanism. */ -#include -#include +#include "msi.h" /* Arch hooks */ int __weak arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc) @@ -50,3 +49,13 @@ void __weak arch_teardown_msi_irqs(struc } } } + +int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) +{ + return arch_setup_msi_irqs(dev, nvec, type); +} + +void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev) +{ + arch_teardown_msi_irqs(dev); +} --- a/drivers/pci/msi/msi.c +++ b/drivers/pci/msi/msi.c @@ -6,64 +6,16 @@ * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com) * Copyright (C) 2016 Christoph Hellwig. */ - -#include #include #include #include -#include -#include -#include -#include #include "../pci.h" +#include "msi.h" static int pci_msi_enable = 1; int pci_msi_ignore_mask; -#define msix_table_size(flags) ((flags & PCI_MSIX_FLAGS_QSIZE) + 1) - -#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN -static int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) -{ - struct irq_domain *domain; - - domain = dev_get_msi_domain(&dev->dev); - if (domain && irq_domain_is_hierarchy(domain)) - return msi_domain_alloc_irqs(domain, &dev->dev, nvec); - - return arch_setup_msi_irqs(dev, nvec, type); -} - -static void pci_msi_teardown_msi_irqs(struct pci_dev *dev) -{ - struct irq_domain *domain; - - domain = dev_get_msi_domain(&dev->dev); - if (domain && irq_domain_is_hierarchy(domain)) - msi_domain_free_irqs(domain, &dev->dev); - else - arch_teardown_msi_irqs(dev); -} -#else -#define pci_msi_setup_msi_irqs arch_setup_msi_irqs -#define pci_msi_teardown_msi_irqs arch_teardown_msi_irqs -#endif - -/* - * PCI 2.3 does not specify mask bits for each MSI interrupt. Attempting to - * mask all MSI interrupts by clearing the MSI enable bit does not work - * reliably as devices without an INTx disable bit will then generate a - * level IRQ which will never be cleared. - */ -static inline __attribute_const__ u32 msi_multi_mask(struct msi_desc *desc) -{ - /* Don't shift by >= width of type */ - if (desc->pci.msi_attrib.multi_cap >= 5) - return 0xffffffff; - return (1 << (1 << desc->pci.msi_attrib.multi_cap)) - 1; -} - static noinline void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set) { raw_spinlock_t *lock = &desc->dev->msi_lock; @@ -903,23 +855,6 @@ void pci_disable_msix(struct pci_dev *de } EXPORT_SYMBOL(pci_disable_msix); -void pci_no_msi(void) -{ - pci_msi_enable = 0; -} - -/** - * pci_msi_enabled - is MSI enabled? - * - * Returns true if MSI has not been disabled by the command-line option - * pci=nomsi. 
- **/ -int pci_msi_enabled(void) -{ - return pci_msi_enable; -} -EXPORT_SYMBOL(pci_msi_enabled); - static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec, struct irq_affinity *affd) { @@ -1195,253 +1130,19 @@ struct pci_dev *msi_desc_to_pci_dev(stru } EXPORT_SYMBOL(msi_desc_to_pci_dev); -#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN -/** - * pci_msi_domain_write_msg - Helper to write MSI message to PCI config space - * @irq_data: Pointer to interrupt data of the MSI interrupt - * @msg: Pointer to the message - */ -static void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg) -{ - struct msi_desc *desc = irq_data_get_msi_desc(irq_data); - - /* - * For MSI-X desc->irq is always equal to irq_data->irq. For - * MSI only the first interrupt of MULTI MSI passes the test. - */ - if (desc->irq == irq_data->irq) - __pci_write_msi_msg(desc, msg); -} - -/** - * pci_msi_domain_calc_hwirq - Generate a unique ID for an MSI source - * @desc: Pointer to the MSI descriptor - * - * The ID number is only used within the irqdomain. - */ -static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc) -{ - struct pci_dev *dev = msi_desc_to_pci_dev(desc); - - return (irq_hw_number_t)desc->pci.msi_attrib.entry_nr | - pci_dev_id(dev) << 11 | - (pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27; -} - -static inline bool pci_msi_desc_is_multi_msi(struct msi_desc *desc) -{ - return !desc->pci.msi_attrib.is_msix && desc->nvec_used > 1; -} - -/** - * pci_msi_domain_check_cap - Verify that @domain supports the capabilities - * for @dev - * @domain: The interrupt domain to check - * @info: The domain info for verification - * @dev: The device to check - * - * Returns: - * 0 if the functionality is supported - * 1 if Multi MSI is requested, but the domain does not support it - * -ENOTSUPP otherwise - */ -int pci_msi_domain_check_cap(struct irq_domain *domain, - struct msi_domain_info *info, struct device *dev) -{ - struct msi_desc *desc = first_pci_msi_entry(to_pci_dev(dev)); - - /* Special handling to support __pci_enable_msi_range() */ - if (pci_msi_desc_is_multi_msi(desc) && - !(info->flags & MSI_FLAG_MULTI_PCI_MSI)) - return 1; - else if (desc->pci.msi_attrib.is_msix && !(info->flags & MSI_FLAG_PCI_MSIX)) - return -ENOTSUPP; - - return 0; -} - -static int pci_msi_domain_handle_error(struct irq_domain *domain, - struct msi_desc *desc, int error) -{ - /* Special handling to support __pci_enable_msi_range() */ - if (pci_msi_desc_is_multi_msi(desc) && error == -ENOSPC) - return 1; - - return error; -} - -static void pci_msi_domain_set_desc(msi_alloc_info_t *arg, - struct msi_desc *desc) -{ - arg->desc = desc; - arg->hwirq = pci_msi_domain_calc_hwirq(desc); -} - -static struct msi_domain_ops pci_msi_domain_ops_default = { - .set_desc = pci_msi_domain_set_desc, - .msi_check = pci_msi_domain_check_cap, - .handle_error = pci_msi_domain_handle_error, -}; - -static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info) -{ - struct msi_domain_ops *ops = info->ops; - - if (ops == NULL) { - info->ops = &pci_msi_domain_ops_default; - } else { - if (ops->set_desc == NULL) - ops->set_desc = pci_msi_domain_set_desc; - if (ops->msi_check == NULL) - ops->msi_check = pci_msi_domain_check_cap; - if (ops->handle_error == NULL) - ops->handle_error = pci_msi_domain_handle_error; - } -} - -static void pci_msi_domain_update_chip_ops(struct msi_domain_info *info) -{ - struct irq_chip *chip = info->chip; - - BUG_ON(!chip); - if (!chip->irq_write_msi_msg) - chip->irq_write_msi_msg = 
pci_msi_domain_write_msg; - if (!chip->irq_mask) - chip->irq_mask = pci_msi_mask_irq; - if (!chip->irq_unmask) - chip->irq_unmask = pci_msi_unmask_irq; -} - -/** - * pci_msi_create_irq_domain - Create a MSI interrupt domain - * @fwnode: Optional fwnode of the interrupt controller - * @info: MSI domain info - * @parent: Parent irq domain - * - * Updates the domain and chip ops and creates a MSI interrupt domain. - * - * Returns: - * A domain pointer or NULL in case of failure. - */ -struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode, - struct msi_domain_info *info, - struct irq_domain *parent) -{ - struct irq_domain *domain; - - if (WARN_ON(info->flags & MSI_FLAG_LEVEL_CAPABLE)) - info->flags &= ~MSI_FLAG_LEVEL_CAPABLE; - - if (info->flags & MSI_FLAG_USE_DEF_DOM_OPS) - pci_msi_domain_update_dom_ops(info); - if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS) - pci_msi_domain_update_chip_ops(info); - - info->flags |= MSI_FLAG_ACTIVATE_EARLY; - if (IS_ENABLED(CONFIG_GENERIC_IRQ_RESERVATION_MODE)) - info->flags |= MSI_FLAG_MUST_REACTIVATE; - - /* PCI-MSI is oneshot-safe */ - info->chip->flags |= IRQCHIP_ONESHOT_SAFE; - - domain = msi_create_irq_domain(fwnode, info, parent); - if (!domain) - return NULL; - - irq_domain_update_bus_token(domain, DOMAIN_BUS_PCI_MSI); - return domain; -} -EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain); - -/* - * Users of the generic MSI infrastructure expect a device to have a single ID, - * so with DMA aliases we have to pick the least-worst compromise. Devices with - * DMA phantom functions tend to still emit MSIs from the real function number, - * so we ignore those and only consider topological aliases where either the - * alias device or RID appears on a different bus number. We also make the - * reasonable assumption that bridges are walked in an upstream direction (so - * the last one seen wins), and the much braver assumption that the most likely - * case is that of PCI->PCIe so we should always use the alias RID. This echoes - * the logic from intel_irq_remapping's set_msi_sid(), which presumably works - * well enough in practice; in the face of the horrible PCIe<->PCI-X conditions - * for taking ownership all we can really do is close our eyes and hope... - */ -static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data) -{ - u32 *pa = data; - u8 bus = PCI_BUS_NUM(*pa); - - if (pdev->bus->number != bus || PCI_BUS_NUM(alias) != bus) - *pa = alias; - - return 0; -} - -/** - * pci_msi_domain_get_msi_rid - Get the MSI requester id (RID) - * @domain: The interrupt domain - * @pdev: The PCI device. - * - * The RID for a device is formed from the alias, with a firmware - * supplied mapping applied - * - * Returns: The RID. - */ -u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev) -{ - struct device_node *of_node; - u32 rid = pci_dev_id(pdev); - - pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); - - of_node = irq_domain_get_of_node(domain); - rid = of_node ? of_msi_map_id(&pdev->dev, of_node, rid) : - iort_msi_map_id(&pdev->dev, rid); - - return rid; -} - -/** - * pci_msi_get_device_domain - Get the MSI domain for a given PCI device - * @pdev: The PCI device - * - * Use the firmware data to find a device-specific MSI domain - * (i.e. not one that is set as a default). - * - * Returns: The corresponding MSI domain or NULL if none has been found. 
- */ -struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev) +void pci_no_msi(void) { - struct irq_domain *dom; - u32 rid = pci_dev_id(pdev); - - pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); - dom = of_msi_map_get_device_domain(&pdev->dev, rid, DOMAIN_BUS_PCI_MSI); - if (!dom) - dom = iort_get_device_domain(&pdev->dev, rid, - DOMAIN_BUS_PCI_MSI); - return dom; + pci_msi_enable = 0; } /** - * pci_dev_has_special_msi_domain - Check whether the device is handled by - * a non-standard PCI-MSI domain - * @pdev: The PCI device to check. + * pci_msi_enabled - is MSI enabled? * - * Returns: True if the device irqdomain or the bus irqdomain is - * non-standard PCI/MSI. - */ -bool pci_dev_has_special_msi_domain(struct pci_dev *pdev) + * Returns true if MSI has not been disabled by the command-line option + * pci=nomsi. + **/ +int pci_msi_enabled(void) { - struct irq_domain *dom = dev_get_msi_domain(&pdev->dev); - - if (!dom) - dom = dev_get_msi_domain(&pdev->bus->dev); - - if (!dom) - return true; - - return dom->bus_token != DOMAIN_BUS_PCI_MSI; + return pci_msi_enable; } - -#endif /* CONFIG_PCI_MSI_IRQ_DOMAIN */ +EXPORT_SYMBOL(pci_msi_enabled); --- /dev/null +++ b/drivers/pci/msi/msi.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include +#include + +#define msix_table_size(flags) ((flags & PCI_MSIX_FLAGS_QSIZE) + 1) + +extern int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type); +extern void pci_msi_teardown_msi_irqs(struct pci_dev *dev); + +#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS +extern int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type); +extern void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev); +#else +static inline int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) +{ + WARN_ON_ONCE(1); + return -ENODEV; +} + +static inline void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev) +{ + WARN_ON_ONCE(1); +} +#endif + +/* + * PCI 2.3 does not specify mask bits for each MSI interrupt. Attempting to + * mask all MSI interrupts by clearing the MSI enable bit does not work + * reliably as devices without an INTx disable bit will then generate a + * level IRQ which will never be cleared. 
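 *
 * As an illustration (not part of the patch): msi_attrib.multi_cap holds
 * log2 of the number of MSI vectors the function advertises, so
 * msi_multi_mask() below evaluates to e.g.:
 *
 *   multi_cap = 0  ->  1 vector   -> mask 0x00000001
 *   multi_cap = 3  ->  8 vectors  -> mask 0x000000ff
 *   multi_cap >= 5 -> 32 vectors  -> mask 0xffffffff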
+ */ +static inline __attribute_const__ u32 msi_multi_mask(struct msi_desc *desc) +{ + /* Don't shift by >= width of type */ + if (desc->pci.msi_attrib.multi_cap >= 5) + return 0xffffffff; + return (1 << (1 << desc->pci.msi_attrib.multi_cap)) - 1; +} --- a/include/linux/msi.h +++ b/include/linux/msi.h @@ -259,17 +259,6 @@ int arch_setup_msi_irq(struct pci_dev *d void arch_teardown_msi_irq(unsigned int irq); int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type); void arch_teardown_msi_irqs(struct pci_dev *dev); -#else -static inline int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) -{ - WARN_ON_ONCE(1); - return -ENODEV; -} - -static inline void arch_teardown_msi_irqs(struct pci_dev *dev) -{ - WARN_ON_ONCE(1); -} #endif /* From patchwork Mon Dec 6 22:27:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564291 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=Ir8wMJVO; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=YM/Nr8za; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J4g17yTz9s1l for ; Tue, 7 Dec 2021 09:29:07 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1358522AbhLFWce (ORCPT ); Mon, 6 Dec 2021 17:32:34 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:46190 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356785AbhLFWbZ (ORCPT ); Mon, 6 Dec 2021 17:31:25 -0500 Message-ID: <20211206210224.871651518@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829675; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=S+W1Q0LZK9qA36CwaGMZKKV+Y38NSAQ6cljplijOq1c=; b=Ir8wMJVOR0vRWYRtvVvvwdh8vk+hNLZauyW7VLWKrRT++IgOgmGu+iffTCxfxNnPnp/eUL rTyGHuAWLoqtYLFd0zuaiTQla2JtOb0eBIYhLEtEk9jZ3FXwQz4l6Rl7X4EP2BZT/nLmZG gB4XgEyUu0cKAWYBqXdGzv1j+4khf3rt8PXLnTT7HslbLMvlblRWmeFO493rPXuzAQNQvD pdevv23YcY11nJ5sRpy9ritQyb0MV8xDifB3U1aMs4fGqJo2CjNucLCPr7Kd6QZ00kkwLx CpF8pwB2sHyy6TwKDjA1aIW9IsWvP1KTSYgKnNKZ+NMAS6BpK3f7cHa0u/cYAQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829675; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=S+W1Q0LZK9qA36CwaGMZKKV+Y38NSAQ6cljplijOq1c=; b=YM/Nr8za192FA3caYGvQ8jFv+3Zicdz432O3rHkigtDdu6sexgOdl0L4qhS98KEbr6kSi1 mc4fbpLRQSX5C0DA== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Juergen Gross , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , 
sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 19/23] PCI/MSI: Sanitize MSIX table map handling References: <20211206210147.872865823@linutronix.de> MIME-Version: 1.0 Date: Mon, 6 Dec 2021 23:27:54 +0100 (CET) Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Unmapping the MSIX base mapping in the loops which allocate/free MSI desciptors is daft and in the way of allowing runtime expansion of MSI-X descriptors. Store the mapping in struct pci_dev and free it after freeing the MSI-X descriptors. Signed-off-by: Thomas Gleixner Tested-by: Juergen Gross Reviewed-by: Jason Gunthorpe Acked-by: Bjorn Helgaas --- drivers/pci/msi/msi.c | 18 ++++++++---------- include/linux/pci.h | 1 + 2 files changed, 9 insertions(+), 10 deletions(-) --- a/drivers/pci/msi/msi.c +++ b/drivers/pci/msi/msi.c @@ -241,14 +241,14 @@ static void free_msi_irqs(struct pci_dev pci_msi_teardown_msi_irqs(dev); list_for_each_entry_safe(entry, tmp, msi_list, list) { - if (entry->pci.msi_attrib.is_msix) { - if (list_is_last(&entry->list, msi_list)) - iounmap(entry->pci.mask_base); - } - list_del(&entry->list); free_msi_entry(entry); } + + if (dev->msix_base) { + iounmap(dev->msix_base); + dev->msix_base = NULL; + } } static void pci_intx_for_msi(struct pci_dev *dev, int enable) @@ -501,10 +501,6 @@ static int msix_setup_entries(struct pci for (i = 0, curmsk = masks; i < nvec; i++) { entry = alloc_msi_entry(&dev->dev, 1, curmsk); if (!entry) { - if (!i) - iounmap(base); - else - free_msi_irqs(dev); /* No enough memory. Don't try again */ ret = -ENOMEM; goto out; @@ -602,12 +598,14 @@ static int msix_capability_init(struct p goto out_disable; } + dev->msix_base = base; + /* Ensure that all table entries are masked. 
*/ msix_mask_all(base, tsize); ret = msix_setup_entries(dev, base, entries, nvec, affd); if (ret) - goto out_disable; + goto out_free; ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX); if (ret) --- a/include/linux/pci.h +++ b/include/linux/pci.h @@ -473,6 +473,7 @@ struct pci_dev { u8 ptm_granularity; #endif #ifdef CONFIG_PCI_MSI + void __iomem *msix_base; const struct attribute_group **msi_irq_groups; #endif struct pci_vpd vpd; From patchwork Mon Dec 6 22:27:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 1564293 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=linutronix.de header.i=@linutronix.de header.a=rsa-sha256 header.s=2020 header.b=XU5P0R3+; dkim=pass header.d=linutronix.de header.i=@linutronix.de header.a=ed25519-sha256 header.s=2020e header.b=5xLy8ejX; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4J7J4h1Pv2z9sCD for ; Tue, 7 Dec 2021 09:29:08 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357227AbhLFWcf (ORCPT ); Mon, 6 Dec 2021 17:32:35 -0500 Received: from Galois.linutronix.de ([193.142.43.55]:45658 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1357534AbhLFWb1 (ORCPT ); Mon, 6 Dec 2021 17:31:27 -0500 Message-ID: <20211206210224.925241961@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1638829676; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=FXX41Qfq2rmX4Pnl0ZDgGggJiyIMhnsIgHBe+OGHmI0=; b=XU5P0R3+ZvbfRzdT7eITdUZpVffQhu3mwMQxHIuYgaoigDHKPoocQDa8qE3Uaf7sAC1A9a Js9g6cNcn6mG5wi/DOPiYiYl3E8Zz14cSLHuVeGnH6q1kq55XyN9gRmcVHSh5M702eeuzS bamXvbJtwYpQ7EnWUWKX/KSbvsAjwKbfyZz1Ddew/kluJQEYrt+sgElwsfwUoFZGHB3SW5 JWhsk59n5A4ddQt8nalxZK/oiN+HIb+DfnIB5bCE/+7f4xAhtXvdT/dZztFeYnEkP4j9ik qGpniZKbYB7qVDkE7feBPxpsBYMFmfBZX4PgA2J+mwXW3+TPFkwcsLvleDUFdw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1638829676; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=FXX41Qfq2rmX4Pnl0ZDgGggJiyIMhnsIgHBe+OGHmI0=; b=5xLy8ejXtHWFxftIt7uUy/JNMg6lFP0zokUkUY/UbBvVYESVtltZxcFPpywZO0EAeM6axN GP3uFRXwCiRvoiAg== From: Thomas Gleixner To: LKML Cc: Bjorn Helgaas , Marc Zygnier , Alex Williamson , Kevin Tian , Jason Gunthorpe , Megha Dey , Ashok Raj , linux-pci@vger.kernel.org, Cedric Le Goater , Michael Ellerman , Paul Mackerras , Benjamin Herrenschmidt , linuxppc-dev@lists.ozlabs.org, Juergen Gross , Thomas Bogendoerfer , linux-mips@vger.kernel.org, Kalle Valo , Greg Kroah-Hartman , sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu , linux-hyperv@vger.kernel.org, Christian Borntraeger , Heiko Carstens Subject: [patch V2 20/23] PCI/MSI: Move msi_lock to struct pci_dev References: <20211206210147.872865823@linutronix.de> MIME-Version: 
Date: Mon, 6 Dec 2021 23:27:56 +0100 (CET)

The MSI mask lock is only required for PCI/MSI, so there is no point in
having it in every struct device. Move it into struct pci_dev.

Signed-off-by: Thomas Gleixner
Reviewed-by: Greg Kroah-Hartman
Acked-by: Bjorn Helgaas
Reviewed-by: Jason Gunthorpe
---
V2: New patch
---
 drivers/base/core.c    | 1 -
 drivers/pci/msi/msi.c  | 2 +-
 drivers/pci/probe.c    | 4 +++-
 include/linux/device.h | 2 --
 include/linux/pci.h    | 1 +
 5 files changed, 5 insertions(+), 5 deletions(-)

--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -2875,7 +2875,6 @@ void device_initialize(struct device *de
 	device_pm_init(dev);
 	set_dev_node(dev, NUMA_NO_NODE);
 #ifdef CONFIG_GENERIC_MSI_IRQ
-	raw_spin_lock_init(&dev->msi_lock);
 	INIT_LIST_HEAD(&dev->msi_list);
 #endif
 	INIT_LIST_HEAD(&dev->links.consumers);
--- a/drivers/pci/msi/msi.c
+++ b/drivers/pci/msi/msi.c
@@ -18,7 +18,7 @@ int pci_msi_ignore_mask;
 static noinline void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set)
 {
-	raw_spinlock_t *lock = &desc->dev->msi_lock;
+	raw_spinlock_t *lock = &to_pci_dev(desc->dev)->msi_lock;
 	unsigned long flags;

 	if (!desc->pci.msi_attrib.can_mask)
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -2311,7 +2311,9 @@ struct pci_dev *pci_alloc_dev(struct pci
 	INIT_LIST_HEAD(&dev->bus_list);
 	dev->dev.type = &pci_dev_type;
 	dev->bus = pci_bus_get(bus);
-
+#ifdef CONFIG_PCI_MSI
+	raw_spin_lock_init(&dev->msi_lock);
+#endif
 	return dev;
 }
 EXPORT_SYMBOL(pci_alloc_dev);
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -407,7 +407,6 @@ struct dev_links_info {
 * @em_pd:	device's energy model performance domain
 * @pins:	For device pin management.
 *		See Documentation/driver-api/pin-control.rst for details.
- * @msi_lock:	Lock to protect MSI mask cache and mask register
 * @msi_list:	Hosts MSI descriptors
 * @msi_domain:	The generic MSI domain this device is using.
 * @numa_node:	NUMA node this device is close to.
@@ -508,7 +507,6 @@ struct device {
 	struct dev_pin_info	*pins;
 #endif
 #ifdef CONFIG_GENERIC_MSI_IRQ
-	raw_spinlock_t		msi_lock;
 	struct list_head	msi_list;
 #endif
 #ifdef CONFIG_DMA_OPS
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -474,6 +474,7 @@ struct pci_dev {
 #endif
 #ifdef CONFIG_PCI_MSI
 	void __iomem	*msix_base;
+	raw_spinlock_t	msi_lock;
 	const struct attribute_group **msi_irq_groups;
 #endif
 	struct pci_vpd	vpd;
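For illustration only, the read-modify-write sequence which this per-device
lock serializes looks roughly like the sketch below. The msi_mask and
mask_pos fields stand in for the real msi_desc bookkeeping and are not taken
from this series; only to_pci_dev(), pci_dev::msi_lock and the config space
accessor are the real interfaces involved.

#include <linux/msi.h>
#include <linux/pci.h>

/*
 * Sketch: update the cached MSI mask bits and the mask register under the
 * lock that now lives in struct pci_dev (cf. pci_msi_update_mask() above).
 */
static void msi_mask_update_sketch(struct msi_desc *desc, u32 clear, u32 set)
{
	struct pci_dev *pdev = to_pci_dev(desc->dev);
	unsigned long flags;

	raw_spin_lock_irqsave(&pdev->msi_lock, flags);
	desc->msi_mask &= ~clear;	/* cached mask bits (illustrative field) */
	desc->msi_mask |= set;
	pci_write_config_dword(pdev, desc->mask_pos, desc->msi_mask);
	raw_spin_unlock_irqrestore(&pdev->msi_lock, flags);
}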
From patchwork Mon Dec  6 22:27:57 2021
Message-ID: <20211206210224.980989243@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Bjorn Helgaas, Marc Zyngier, Alex Williamson, Kevin Tian, Jason Gunthorpe, Megha Dey, Ashok Raj, linux-pci@vger.kernel.org, Cedric Le Goater, Juergen Gross, Michael Ellerman, Paul Mackerras, Benjamin Herrenschmidt, linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer, linux-mips@vger.kernel.org, Kalle Valo, Greg Kroah-Hartman, sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu, linux-hyperv@vger.kernel.org, Christian Borntraeger, Heiko Carstens
Subject: [patch V2 21/23] PCI/MSI: Make pci_msi_domain_check_cap() static
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:27:57 +0100 (CET)

pci_msi_domain_check_cap() has no users outside of that file.

Signed-off-by: Thomas Gleixner
Tested-by: Juergen Gross
Reviewed-by: Jason Gunthorpe
Reviewed-by: Greg Kroah-Hartman
Acked-by: Bjorn Helgaas
---
 drivers/pci/msi/irqdomain.c | 5 +++--
 include/linux/msi.h         | 2 --
 2 files changed, 3 insertions(+), 4 deletions(-)

--- a/drivers/pci/msi/irqdomain.c
+++ b/drivers/pci/msi/irqdomain.c
@@ -79,8 +79,9 @@ static inline bool pci_msi_desc_is_multi
 * 1 if Multi MSI is requested, but the domain does not support it
 * -ENOTSUPP otherwise
 */
-int pci_msi_domain_check_cap(struct irq_domain *domain,
-			     struct msi_domain_info *info, struct device *dev)
+static int pci_msi_domain_check_cap(struct irq_domain *domain,
+				    struct msi_domain_info *info,
+				    struct device *dev)
 {
 	struct msi_desc *desc = first_pci_msi_entry(to_pci_dev(dev));
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -439,8 +439,6 @@ void *platform_msi_get_host_data(struct
 struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
 					     struct msi_domain_info *info,
 					     struct irq_domain *parent);
-int pci_msi_domain_check_cap(struct irq_domain *domain,
-			     struct msi_domain_info *info, struct device *dev);
 u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev);
 struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev);
 bool pci_dev_has_special_msi_domain(struct pci_dev *pdev);
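For context, a minimal sketch of how an irqchip driver instantiates a PCI/MSI
domain and inherits the default domain ops (including the capability check
made static here) from pci_msi_create_irq_domain(). The my_* names are
placeholders, not part of this series.

#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>

/* Minimal chip; real drivers also provide irq_write_msi_msg() etc. */
static struct irq_chip my_msi_irq_chip = {
	.name		= "my-PCI-MSI",
	.irq_mask	= pci_msi_mask_irq,
	.irq_unmask	= pci_msi_unmask_irq,
};

static struct msi_domain_info my_pci_msi_domain_info = {
	/* Rely on the default PCI/MSI domain ops, including msi_check */
	.flags	= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
		  MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
	.chip	= &my_msi_irq_chip,
};

static struct irq_domain *my_create_pci_msi_domain(struct fwnode_handle *fwnode,
						   struct irq_domain *parent)
{
	return pci_msi_create_irq_domain(fwnode, &my_pci_msi_domain_info, parent);
}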
From patchwork Mon Dec  6 22:27:59 2021
Message-ID: <20211206210225.046615302@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Bjorn Helgaas, Marc Zyngier, Alex Williamson, Kevin Tian, Jason Gunthorpe, Megha Dey, Ashok Raj, linux-pci@vger.kernel.org, Cedric Le Goater, Juergen Gross, Michael Ellerman, Paul Mackerras, Benjamin Herrenschmidt, linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer, linux-mips@vger.kernel.org, Kalle Valo, Greg Kroah-Hartman, sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu, linux-hyperv@vger.kernel.org, Christian Borntraeger, Heiko Carstens
Subject: [patch V2 22/23] genirq/msi: Handle PCI/MSI allocation fail in core code
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:27:59 +0100 (CET)

Get rid of yet another irqdomain callback and let the core code return the
information which is already available: how many descriptors could be
allocated.

Signed-off-by: Thomas Gleixner
Tested-by: Juergen Gross
Reviewed-by: Jason Gunthorpe
Reviewed-by: Greg Kroah-Hartman
Acked-by: Bjorn Helgaas	# PCI
---
 drivers/pci/msi/irqdomain.c | 13 -------------
 include/linux/msi.h         |  5 +----
 kernel/irq/msi.c            | 29 +++++++++++++++++++++++++----
 3 files changed, 26 insertions(+), 21 deletions(-)

--- a/drivers/pci/msi/irqdomain.c
+++ b/drivers/pci/msi/irqdomain.c
@@ -95,16 +95,6 @@ static int pci_msi_domain_check_cap(stru
 	return 0;
 }

-static int pci_msi_domain_handle_error(struct irq_domain *domain,
-				       struct msi_desc *desc, int error)
-{
-	/* Special handling to support __pci_enable_msi_range() */
-	if (pci_msi_desc_is_multi_msi(desc) && error == -ENOSPC)
-		return 1;
-
-	return error;
-}
-
 static void pci_msi_domain_set_desc(msi_alloc_info_t *arg,
 				    struct msi_desc *desc)
 {
@@ -115,7 +105,6 @@ static void pci_msi_domain_set_desc(msi_
 static struct msi_domain_ops pci_msi_domain_ops_default = {
 	.set_desc	= pci_msi_domain_set_desc,
 	.msi_check	= pci_msi_domain_check_cap,
-	.handle_error	= pci_msi_domain_handle_error,
 };

 static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info)
@@ -129,8 +118,6 @@ static void pci_msi_domain_update_dom_op
 			ops->set_desc = pci_msi_domain_set_desc;
 		if (ops->msi_check == NULL)
 			ops->msi_check = pci_msi_domain_check_cap;
-		if (ops->handle_error == NULL)
-			ops->handle_error = pci_msi_domain_handle_error;
 	}
 }
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -286,7 +286,6 @@ struct msi_domain_info;
 * @msi_check:		Callback for verification of the domain/info/dev data
 * @msi_prepare:	Prepare the allocation of the interrupts in the domain
 * @set_desc:		Set the msi descriptor for an interrupt
- * @handle_error:	Optional error handler if the allocation fails
 * @domain_alloc_irqs:	Optional function to override the default allocation
 *			function.
 * @domain_free_irqs:	Optional function to override the default free
@@ -295,7 +294,7 @@ struct msi_domain_info;
 * @get_hwirq, @msi_init and @msi_free are callbacks used by the underlying
 * irqdomain.
 *
- * @msi_check, @msi_prepare, @handle_error and @set_desc are callbacks used by
+ * @msi_check, @msi_prepare and @set_desc are callbacks used by
 * msi_domain_alloc/free_irqs().
 *
 * @domain_alloc_irqs, @domain_free_irqs can be used to override the
@@ -332,8 +331,6 @@ struct msi_domain_ops {
 					    msi_alloc_info_t *arg);
 	void		(*set_desc)(msi_alloc_info_t *arg,
 				    struct msi_desc *desc);
-	int		(*handle_error)(struct irq_domain *domain,
-					struct msi_desc *desc, int error);
 	int		(*domain_alloc_irqs)(struct irq_domain *domain,
 					     struct device *dev, int nvec);
 	void		(*domain_free_irqs)(struct irq_domain *domain,
--- a/kernel/irq/msi.c
+++ b/kernel/irq/msi.c
@@ -538,6 +538,27 @@ static bool msi_check_reservation_mode(s
 	return desc->pci.msi_attrib.is_msix || desc->pci.msi_attrib.can_mask;
 }

+static int msi_handle_pci_fail(struct irq_domain *domain, struct msi_desc *desc,
+			       int allocated)
+{
+	switch(domain->bus_token) {
+	case DOMAIN_BUS_PCI_MSI:
+	case DOMAIN_BUS_VMD_MSI:
+		if (IS_ENABLED(CONFIG_PCI_MSI))
+			break;
+		fallthrough;
+	default:
+		return -ENOSPC;
+	}
+
+	/* Let a failed PCI multi MSI allocation retry */
+	if (desc->nvec_used > 1)
+		return 1;
+
+	/* If there was a successful allocation let the caller know */
+	return allocated ? allocated : -ENOSPC;
+}
+
 int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
			    int nvec)
 {
@@ -546,6 +567,7 @@ int __msi_domain_alloc_irqs(struct irq_d
 	struct irq_data *irq_data;
 	struct msi_desc *desc;
 	msi_alloc_info_t arg = { };
+	int allocated = 0;
 	int i, ret, virq;
 	bool can_reserve;

@@ -560,16 +582,15 @@ int __msi_domain_alloc_irqs(struct irq_d
 					       dev_to_node(dev), &arg, false,
 					       desc->affinity);
 		if (virq < 0) {
-			ret = -ENOSPC;
-			if (ops->handle_error)
-				ret = ops->handle_error(domain, desc, ret);
-			return ret;
+			ret = msi_handle_pci_fail(domain, desc, allocated);
+			goto cleanup;
 		}

 		for (i = 0; i < desc->nvec_used; i++) {
 			irq_set_msi_desc_off(virq, i, desc);
 			irq_debugfs_copy_devname(virq + i, dev);
 		}
+		allocated++;
 	}

 	can_reserve = msi_check_reservation_mode(domain, info, dev);
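For reference, a condensed sketch of the caller-side retry loop which
consumes this return convention (cf. __pci_enable_msi_range()): a positive
return value means "only this many vectors are possible", so the caller
shrinks the request and tries again. This is simplified and omits the quirk
and capability checks of the real function.

#include <linux/interrupt.h>
#include <linux/pci.h>

static int enable_msi_range_sketch(struct pci_dev *dev, int minvec, int maxvec,
				   struct irq_affinity *affd)
{
	int nvec = maxvec, rc;

	while (1) {
		rc = msi_capability_init(dev, nvec, affd);	/* core allocation */
		if (rc == 0)
			return nvec;		/* success with nvec vectors */
		if (rc < 0)
			return rc;		/* hard failure */
		if (rc < minvec)
			return -ENOSPC;		/* cannot satisfy the minimum */
		nvec = rc;			/* retry with the reduced count */
	}
}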
From patchwork Mon Dec  6 22:28:00 2021
Message-ID: <20211206210225.101336873@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Bjorn Helgaas, Marc Zyngier, Alex Williamson, Kevin Tian, Jason Gunthorpe, Megha Dey, Ashok Raj, linux-pci@vger.kernel.org, Cedric Le Goater, Juergen Gross, Michael Ellerman, Paul Mackerras, Benjamin Herrenschmidt, linuxppc-dev@lists.ozlabs.org, Thomas Bogendoerfer, linux-mips@vger.kernel.org, Kalle Valo, Greg Kroah-Hartman, sparclinux@vger.kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org, ath11k@lists.infradead.org, Wei Liu, linux-hyperv@vger.kernel.org, Christian Borntraeger, Heiko Carstens
Subject: [patch V2 23/23] PCI/MSI: Move descriptor counting on allocation fail to the legacy code
References: <20211206210147.872865823@linutronix.de>
Date: Mon, 6 Dec 2021 23:28:00 +0100 (CET)

The irqdomain code already returns the number of vectors which could be
allocated. Move the counting loop to the legacy arch_setup_msi_irqs() path.

Signed-off-by: Thomas Gleixner
Tested-by: Juergen Gross
Reviewed-by: Jason Gunthorpe
Reviewed-by: Greg Kroah-Hartman
Acked-by: Bjorn Helgaas
---
 drivers/pci/msi/legacy.c | 20 +++++++++++++++++++-
 drivers/pci/msi/msi.c    | 19 +------------------
 2 files changed, 20 insertions(+), 19 deletions(-)

--- a/drivers/pci/msi/legacy.c
+++ b/drivers/pci/msi/legacy.c
@@ -50,9 +50,27 @@ void __weak arch_teardown_msi_irqs(struc
 	}
 }

+static int pci_msi_setup_check_result(struct pci_dev *dev, int type, int ret)
+{
+	struct msi_desc *entry;
+	int avail = 0;
+
+	if (type != PCI_CAP_ID_MSIX || ret >= 0)
+		return ret;
+
+	/* Scan the MSI descriptors for successfully allocated ones. */
+	for_each_pci_msi_entry(entry, dev) {
+		if (entry->irq != 0)
+			avail++;
+	}
+	return avail ? avail : ret;
+}
+
 int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 {
-	return arch_setup_msi_irqs(dev, nvec, type);
+	int ret = arch_setup_msi_irqs(dev, nvec, type);
+
+	return pci_msi_setup_check_result(dev, type, ret);
 }

 void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev)
--- a/drivers/pci/msi/msi.c
+++ b/drivers/pci/msi/msi.c
@@ -609,7 +609,7 @@ static int msix_capability_init(struct p
 	ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
 	if (ret)
-		goto out_avail;
+		goto out_free;

 	/* Check if all MSI entries honor device restrictions */
 	ret = msi_verify_entries(dev);
@@ -634,23 +634,6 @@ static int msix_capability_init(struct p
 	pcibios_free_irq(dev);
 	return 0;

-out_avail:
-	if (ret < 0) {
-		/*
-		 * If we had some success, report the number of IRQs
-		 * we succeeded in setting up.
-		 */
-		struct msi_desc *entry;
-		int avail = 0;
-
-		for_each_pci_msi_entry(entry, dev) {
-			if (entry->irq != 0)
-				avail++;
-		}
-		if (avail != 0)
-			ret = avail;
-	}
-
 out_free:
	free_msi_irqs(dev);