[{"id":3680261,"web_url":"http://patchwork.ozlabs.org/comment/3680261/","msgid":"<2c227e17a573c8c674c027c9b3014c2ec9ebfec7.camel@mailbox.org>","list_archive_url":null,"date":"2026-04-22T07:32:48","subject":"Re: [RESEND PATCH v3 1/3] PCI/MSI: Add Devres managed IRQ vectors\n allocation","submitter":{"id":90407,"url":"http://patchwork.ozlabs.org/api/people/90407/","name":"Philipp Stanner","email":"phasta@mailbox.org"},"content":"On Wed, 2026-04-22 at 10:38 +0800, Shawn Lin wrote:\n> The PCI/MSI subsystem suffers from a long-standing design issue where\n> the implicit, automatic management of IRQ vectors by the devres\n\nnit:\nThe *automatic* handling is not actually the issue. All devres\ninterfaces (pcim_) are automatic by definition.\n\nThe issue is the \"sometimes managed, sometimes not\" nature of some pci_\ninterfaces. I came to call those \"hybrid functions\".\n\n> framework conflicts with explicit driver cleanup, leading to ambiguous\n> ownership and potential resource management bugs.\n> \n> Historically, pcim_enable_device() not only manages standard PCI\n> resources (BARs) via devres \n> \n\nSee cover-letter. As I said, that's not the case anymore. It was the\ncase once.\n\nI think you could just briefly state \"Historically, some functions\nexhibited this hybrid behavior, the last of which by now are …\"\n\nIf you want you could maybe also Link: one  of my patch series's for\nthe wider history.\n\n> but also implicitly sets the is_msi_managed\n> flag if calling pci_alloc_irq_vectors*(). This registers pcim_msi_release()\n> as a cleanup action, creating a hybrid ownership model.\n\nnit:\ns/ownership/responsibility\n\n> \n> This ambiguity causes problems when drivers follow a common but\n> hazardous pattern:\n> 1. Using pcim_enable_device() (which implicitly marks IRQs as managed)\n> 2. Explicitly calling pci_alloc_irq_vectors() for IRQ allocation\n> 3. 
Also calling pci_free_irq_vectors() in error paths or remove routines\n> \n> In this scenario, the devres framework may attempt to free the IRQ\n> vectors a second time upon device release, leading to double-free\n> issues. Analysis of the tree shows this hazardous pattern exists\n> in multiple drivers, while 35 other drivers correctly rely solely\n> on the implicit cleanup.\n\nSee cover-letter.\n\nI would leave that to Bjorn's preference, but I think that info is\nuseful for the cover letter so that reviewers get what's going on, but\nit's not useful in the commit message. Your description of the\nproblematic API is justification enough IMO.\n\n> \n> To resolve this ambiguity, introduce explicit managed APIs for IRQ\n> vector allocation:\n> - pcim_alloc_irq_vectors()\n> - pcim_alloc_irq_vectors_affinity()\n> \n> These functions are the devres-managed counterparts to\n> pci_alloc_irq_vectors[_affinity](). Drivers that wish to have\n> devres-managed IRQ vectors should use these new APIs instead.\n> \n> The long-term goal is to convert all drivers which use pcim_enable_device()\n> as well as pci_alloc_irq_vectors[_affinity]() to use these managed\n\nnit:\ns/as well as/in combination with\n\n> functions, and eventually remove the problematic hybrid management\n> pattern from pcim_enable_device() and pcim_setup_msi_release() entirely.\n> This patch lays the foundation by introducing the APIs.\n\nBut in general reads very nice and is mostly spot-on :)\n\n> \n> Suggested-by: Philipp Stanner <phasta@kernel.org>\n> Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>\n> \n> ---\n> \n> Changes in v3:\n> - Rework the commit message and function doc (Philipp)\n> - Remove setting is_msi_managed flag from new APIs (Philipp)\n> \n> Changes in v2:\n> - Rebase\n> - Introduce patches only for PCI subsystem to convert the API\n> \n>  drivers/pci/msi/api.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++\n>  include/linux/pci.h   | 22 ++++++++++++++++++++++\n>  2 files changed, 69 
insertions(+)\n> \n> diff --git a/drivers/pci/msi/api.c b/drivers/pci/msi/api.c\n> index c18559b..90b67f5 100644\n> --- a/drivers/pci/msi/api.c\n> +++ b/drivers/pci/msi/api.c\n> @@ -297,6 +297,53 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n>  EXPORT_SYMBOL(pci_alloc_irq_vectors_affinity);\n>  \n>  /**\n> + * pcim_alloc_irq_vectors - devres managed pci_alloc_irq_vectors()\n> + * @dev: the PCI device to operate on\n> + * @min_vecs: minimum number of vectors required (must be >= 1)\n> + * @max_vecs: maximum (desired) number of vectors\n> + * @flags: flags for this allocation, see pci_alloc_irq_vectors()\n> + *\n> + * This is a device resource managed version of pci_alloc_irq_vectors().\n> + * Interrupt vectors are automatically freed on driver detach by the\n> + * devres. Drivers do not need to explicitly call pci_free_irq_vectors().\n\ns/the devres/devres\n\ns/do not need/must not  <- You stated doing so would be a bug, right?\n\n> + *\n> + * Returns number of vectors allocated (which might be smaller than\n> + * @max_vecs) on success, or a negative error code on failure.\n> + */\n> +int pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n> +\t\t\t   unsigned int max_vecs, unsigned int flags)\n> +{\n> +\treturn pcim_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n> +\t\t\t\t\t       flags, NULL);\n> +}\n> +EXPORT_SYMBOL(pcim_alloc_irq_vectors);\n> +\n> +/**\n> + * pcim_alloc_irq_vectors_affinity - devres managed pci_alloc_irq_vectors_affinity()\n> + * @dev: the PCI device to operate on\n> + * @min_vecs: minimum number of vectors required (must be >= 1)\n> + * @max_vecs: maximum (desired) number of vectors\n> + * @flags: flags for this allocation, see pci_alloc_irq_vectors()\n> + * @affd: optional description of the affinity requirements\n> + *\n> + * This is a device resource managed version of pci_alloc_irq_vectors_affinity().\n> + * Interrupt vectors are automatically freed on driver detach by the 
devres\n> + * machinery. Drivers which use pcim_enable_device() as well as this function,\n> + * do not need to explicitly call pci_free_irq_vectors() currently.\n> + *\n> + * Returns number of vectors allocated (which might be smaller than\n> + * @max_vecs) on success, or a negative error code on failure.\n> + */\n> +int pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n> +\t\t\t\t   unsigned int max_vecs, unsigned int flags,\n> +\t\t\t\t   struct irq_affinity *affd)\n> +{\n> +\treturn pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n> +\t\t\t\t\t      flags, affd);\n\nHmmm, OK – I am not thaaaat deep in the topic anymore, so correct me if\nI am wrong. But to me it seems that we might have a problem.\n\nYour intention seems to be to rely on hybrid devres here, isn't it?\n\nBecause pci_alloc_irq_vectors_affinity() runs into\n__pci_enable_msi(x)_range(), which runs into pci_setup_msi_context(),\nwhich sets up the devres callbacks.\n\nAnd this would now mean that your new function here *also* is hybrid,\nbecause it relies on pcim_enable_device() setting the flag, right?\n\nIOW, when someone called pci_enable_device(), your pcim_ function here\nwould NOT be a managed devres function. Right?\n\nThat would obviously not fly. 
The current (and IMO) desirable design\ngoal for devres PCI is that pcim_ functions are ALWAYS managed (hence\nthe 'm' = managed, and pci_ functions are NEVER managed).\n\nI am sorry if / that I overlooked that while first glimpsing at your\nprevious versions.\n\n\nIIRC for this reason I had to provide a completely unmanaged backend,\nand to do so I actually had to *copy* existing functions from pci.c, so\nfor many months PCI contained two versions of the same functions.\n\nThe reason was that the hybrid behavior was buried so deep in the\nsubsystem.\n\nSee here:\n\nHere I had to reimplement some functions to be used by my new pcim_\nfunctions:\nhttps://lore.kernel.org/linux-pci/20240610093149.20640-4-pstanner@redhat.com/\n\n\nAnd here I could finally remove them later:\nhttps://lore.kernel.org/linux-pci/20250519112959.25487-7-phasta@kernel.org/\n\n\nI hope this helps. 
pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,\n>  \t\t\t\t     const struct irq_affinity_desc *affdesc);\n> @@ -1812,6 +1818,22 @@ pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n>  \t\t\t\t\t      flags, NULL);\n>  }\n>  \n> +static inline int\n> +pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n> +\t\t\t       unsigned int max_vecs, unsigned int flags,\n> +\t\t\t       struct irq_affinity *aff_desc)\n> +{\n> +\treturn pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n> +\t\t\t\t\t      flags, aff_desc);\n> +}\n> +static inline int\n> +pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n> +\t\t      unsigned int max_vecs, unsigned int flags)\n> +{\n> +\treturn pcim_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n> +\t\t\t\t\t      flags, NULL);\n> +}\n> +\n>  static inline bool pci_msix_can_alloc_dyn(struct pci_dev *dev)\n>  { return false; }\n>  static inline struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,","headers":{"Return-Path":"\n <linux-pci+bounces-52927-incoming=patchwork.ozlabs.org@vger.kernel.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linux-pci@vger.kernel.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n secure) header.d=mailbox.org header.i=@mailbox.org header.a=rsa-sha256\n header.s=mail20150812 header.b=IfBK0mrl;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org\n (client-ip=2600:3c0a:e001:db::12fc:5321; helo=sea.lore.kernel.org;\n envelope-from=linux-pci+bounces-52927-incoming=patchwork.ozlabs.org@vger.kernel.org;\n receiver=patchwork.ozlabs.org)","smtp.subspace.kernel.org;\n\tdkim=pass (2048-bit key) header.d=mailbox.org header.i=@mailbox.org\n header.b=\"IfBK0mrl\"","smtp.subspace.kernel.org;\n arc=none smtp.client-ip=80.241.56.151","smtp.subspace.kernel.org;\n dmarc=pass 
(p=reject dis=none) header.from=mailbox.org","smtp.subspace.kernel.org;\n spf=pass smtp.mailfrom=mailbox.org"],"Received":["from sea.lore.kernel.org (sea.lore.kernel.org\n [IPv6:2600:3c0a:e001:db::12fc:5321])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g0rg84gYGz1y2d\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 22 Apr 2026 17:38:12 +1000 (AEST)","from smtp.subspace.kernel.org (conduit.subspace.kernel.org\n [100.90.174.1])\n\tby sea.lore.kernel.org (Postfix) with ESMTP id ADC69304224B\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 22 Apr 2026 07:32:59 +0000 (UTC)","from localhost.localdomain (localhost.localdomain [127.0.0.1])\n\tby smtp.subspace.kernel.org (Postfix) with ESMTP id D51D136C5BF;\n\tWed, 22 Apr 2026 07:32:58 +0000 (UTC)","from mout-p-101.mailbox.org (mout-p-101.mailbox.org [80.241.56.151])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby smtp.subspace.kernel.org (Postfix) with ESMTPS id 94CF4372686\n\tfor <linux-pci@vger.kernel.org>; Wed, 22 Apr 2026 07:32:55 +0000 (UTC)","from smtp1.mailbox.org (smtp1.mailbox.org\n [IPv6:2001:67c:2050:b231:465::1])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby mout-p-101.mailbox.org (Postfix) with ESMTPS id 4g0rY0010Tz9thQ;\n\tWed, 22 Apr 2026 09:32:52 +0200 (CEST)"],"ARC-Seal":"i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;\n\tt=1776843178; cv=none;\n b=cL3aTWM4LbDKCT6sl2b22j4B1cX4P3cAt5HL2XcwikpWWY3okvUIdDwLwQvOLoW0lfUE/cehekGOdVRvAA7bQPL4cEhqkYjgeONvlbN72752rgEC/FAQe+grNudyVBZvfK5xPGQKzdRBCS9L51sbn21j2HGVPTJqVNherOvyZfI=","ARC-Message-Signature":"i=1; a=rsa-sha256; 
d=subspace.kernel.org;\n\ts=arc-20240116; t=1776843178; c=relaxed/simple;\n\tbh=TtlmjlZQkDPNoPXa/3BrCuSHHq6ZI9/s7vLSNlC348k=;\n\th=Message-ID:Subject:From:To:Cc:Date:In-Reply-To:References:\n\t Content-Type:MIME-Version;\n b=vAsbMQq7GiKQRRN4kBLo8w29N1cEzmiu0tDj83IQ1htgf84o6Ka1XT9Bi8Vc278/cR5JHj5TMZCDtFa+50ZyUcd7fE7bWawT0sfBe7l+kPzNYUGmJXl/R4O1AiMcSbOXj7rlY+Z1jncmyAm+FrtmtY/TnS/eboXewMyjAtjyOjA=","ARC-Authentication-Results":"i=1; smtp.subspace.kernel.org;\n dmarc=pass (p=reject dis=none) header.from=mailbox.org;\n spf=pass smtp.mailfrom=mailbox.org;\n dkim=pass (2048-bit key) header.d=mailbox.org header.i=@mailbox.org\n header.b=IfBK0mrl; arc=none smtp.client-ip=80.241.56.151","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=mailbox.org;\n s=mail20150812;\n\tt=1776843172; h=from:from:reply-to:reply-to:subject:subject:date:date:\n\t message-id:message-id:to:to:cc:cc:mime-version:mime-version:\n\t content-type:content-type:\n\t content-transfer-encoding:content-transfer-encoding:\n\t in-reply-to:in-reply-to:references:references;\n\tbh=XSX2IL12VUjIo/4sxCIjIJEvYBv92skLShm6AFvblI8=;\n\tb=IfBK0mrlA4DyH+FQBQDIjBr5AhGimZU+F/kpF41GLOUwqVjbEpDbhMeXeSNwSm8OtmJ7bN\n\tnP2mkAvtS63BUL1CjEVMLiRMUuJWDNwpIHRBmI8i99UJkz/0zrYoaIjgysgzCiGhz8mZAh\n\tXOq0KJaM0f4FwS5lsppbwh5quUqyBGxwkQhObZXmvl+NgwjEpKhLfDRRDBZv4kXk7tFgL8\n\tidwr/FtvoF6Pqjav3s6iLHiShLiC/4EBd8JbYeODaiuhzXiJSHjTRXgquXJGb6/Lhudunf\n\tOTFHoaU7HdIoKOCheyYOOoOZkw4Ae/aVMQi9YR+gr2KbFaPz/6imuPl/lVg7FQ==","Message-ID":"<2c227e17a573c8c674c027c9b3014c2ec9ebfec7.camel@mailbox.org>","Subject":"Re: [RESEND PATCH v3 1/3] PCI/MSI: Add Devres managed IRQ vectors\n allocation","From":"Philipp Stanner <phasta@mailbox.org>","Reply-To":"phasta@kernel.org","To":"Shawn Lin <shawn.lin@rock-chips.com>, Bjorn Helgaas <bhelgaas@google.com>","Cc":"Nirmal Patel <nirmal.patel@linux.intel.com>, Jonathan Derrick\n <jonathan.derrick@linux.dev>, Kurt Schwemmer\n <kurt.schwemmer@microsemi.com>,  Logan Gunthorpe <logang@deltatee.com>,\n Philipp 
Stanner <phasta@kernel.org>, linux-pci@vger.kernel.org","Date":"Wed, 22 Apr 2026 09:32:48 +0200","In-Reply-To":"<1776825522-6390-2-git-send-email-shawn.lin@rock-chips.com>","References":"<1776825522-6390-1-git-send-email-shawn.lin@rock-chips.com>\n\t <1776825522-6390-2-git-send-email-shawn.lin@rock-chips.com>","Content-Type":"text/plain; charset=\"UTF-8\"","Content-Transfer-Encoding":"quoted-printable","Precedence":"bulk","X-Mailing-List":"linux-pci@vger.kernel.org","List-Id":"<linux-pci.vger.kernel.org>","List-Subscribe":"<mailto:linux-pci+subscribe@vger.kernel.org>","List-Unsubscribe":"<mailto:linux-pci+unsubscribe@vger.kernel.org>","MIME-Version":"1.0","X-MBO-RS-ID":"616a2858c946addfe3e","X-MBO-RS-META":"e66sd9umwpdgm9mdqzq7hpkux8qqbgy8"}},{"id":3680316,"web_url":"http://patchwork.ozlabs.org/comment/3680316/","msgid":"<f912358d-925b-1e25-2852-1f265d09032a@rock-chips.com>","list_archive_url":null,"date":"2026-04-22T08:36:01","subject":"Re: [RESEND PATCH v3 1/3] PCI/MSI: Add Devres managed IRQ vectors\n allocation","submitter":{"id":66993,"url":"http://patchwork.ozlabs.org/api/people/66993/","name":"Shawn Lin","email":"shawn.lin@rock-chips.com"},"content":"Hi Philipp\n\n在 2026/04/22 星期三 15:32, Philipp Stanner 写道:\n> On Wed, 2026-04-22 at 10:38 +0800, Shawn Lin wrote:\n>> The PCI/MSI subsystem suffers from a long-standing design issue where\n>> the implicit, automatic management of IRQ vectors by the devres\n> \n> nit:\n> The *automatic* handling is not actually the issue. All devres\n> interfaces (pcim_) are automatic by definition.\n> \n> The issue is the \"sometimes managed, sometimes not\" nature of some pci_\n> interfaces. I came to call those \"hybrid functions\".\n> \n>> framework conflicts with explicit driver cleanup, leading to ambiguous\n>> ownership and potential resource management bugs.\n>>\n>> Historically, pcim_enable_device() not only manages standard PCI\n>> resources (BARs) via devres\n>>\n> \n> See cover-letter. 
As I said, that's not the case anymore. It was the\n> case once.\n> \n> I think you could just briefly state \"Historically, some functions\n> exhibited this hybrid behavior, the last of which by now are …\"\n> \n> If you want you could maybe also Link: one  of my patch series's for\n> the wider history.\n> \n>> but also implicitly sets the is_msi_managed\n>> flag if calling pci_alloc_irq_vectors*(). This registers pcim_msi_release()\n>> as a cleanup action, creating a hybrid ownership model.\n> \n> nit:\n> s/ownership/responsibility\n> \n>>\n>> This ambiguity causes problems when drivers follow a common but\n>> hazardous pattern:\n>> 1. Using pcim_enable_device() (which implicitly marks IRQs as managed)\n>> 2. Explicitly calling pci_alloc_irq_vectors() for IRQ allocation\n>> 3. Also calling pci_free_irq_vectors() in error paths or remove routines\n>>\n>> In this scenario, the devres framework may attempt to free the IRQ\n>> vectors a second time upon device release, leading to double-free\n>> issues. Analysis of the tree shows this hazardous pattern exists\n>> in multiple drivers, while 35 other drivers correctly rely solely\n>> on the implicit cleanup.\n> \n> See cover-letter.\n> \n> I would leave that to Bjorn's preference, but I think that info is\n> useful for the cover letter so that reviewers get what's going on, but\n> it's not useful in the commit message. Your description of the\n> problematic API is justification enough IMO.\n> \n>>\n>> To resolve this ambiguity, introduce explicit managed APIs for IRQ\n>> vector allocation:\n>> - pcim_alloc_irq_vectors()\n>> - pcim_alloc_irq_vectors_affinity()\n>>\n>> These functions are the devres-managed counterparts to\n>> pci_alloc_irq_vectors[_affinity](). 
Drivers that wish to have\n>> devres-managed IRQ vectors should use these new APIs instead.\n>>\n>> The long-term goal is to convert all drivers which use pcim_enable_device()\n>> as well as pci_alloc_irq_vectors[_affinity]() to use these managed\n> \n> nit:\n> s/as well as/in combination with\n> \n>> functions, and eventually remove the problematic hybrid management\n>> pattern from pcim_enable_device() and pcim_setup_msi_release() entirely.\n>> This patch lays the foundation by introducing the APIs.\n> \n> But in general reads very nice and is mostly spot-on :)\n> \n>>\n>> Suggested-by: Philipp Stanner <phasta@kernel.org>\n>> Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>\n>>\n>> ---\n>>\n>> Changes in v3:\n>> - Rework the commit message and function doc (Philipp)\n>> - Remove setting is_msi_managed flag from new APIs (Philipp)\n>>\n>> Changes in v2:\n>> - Rebase\n>> - Introduce patches only for PCI subsystem to convert the API\n>>\n>>   drivers/pci/msi/api.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++\n>>   include/linux/pci.h   | 22 ++++++++++++++++++++++\n>>   2 files changed, 69 insertions(+)\n>>\n>> diff --git a/drivers/pci/msi/api.c b/drivers/pci/msi/api.c\n>> index c18559b..90b67f5 100644\n>> --- a/drivers/pci/msi/api.c\n>> +++ b/drivers/pci/msi/api.c\n>> @@ -297,6 +297,53 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n>>   EXPORT_SYMBOL(pci_alloc_irq_vectors_affinity);\n>>   \n>>   /**\n>> + * pcim_alloc_irq_vectors - devres managed pci_alloc_irq_vectors()\n>> + * @dev: the PCI device to operate on\n>> + * @min_vecs: minimum number of vectors required (must be >= 1)\n>> + * @max_vecs: maximum (desired) number of vectors\n>> + * @flags: flags for this allocation, see pci_alloc_irq_vectors()\n>> + *\n>> + * This is a device resource managed version of pci_alloc_irq_vectors().\n>> + * Interrupt vectors are automatically freed on driver detach by the\n>> + * devres. 
Drivers do not need to explicitly call pci_free_irq_vectors().\n> \n> s/the devres/devres\n> \n> s/do not need/must not  <- You stated doing so would be a bug, right?\n> \n>> + *\n>> + * Returns number of vectors allocated (which might be smaller than\n>> + * @max_vecs) on success, or a negative error code on failure.\n>> + */\n>> +int pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n>> +\t\t\t   unsigned int max_vecs, unsigned int flags)\n>> +{\n>> +\treturn pcim_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n>> +\t\t\t\t\t       flags, NULL);\n>> +}\n>> +EXPORT_SYMBOL(pcim_alloc_irq_vectors);\n>> +\n>> +/**\n>> + * pcim_alloc_irq_vectors_affinity - devres managed pci_alloc_irq_vectors_affinity()\n>> + * @dev: the PCI device to operate on\n>> + * @min_vecs: minimum number of vectors required (must be >= 1)\n>> + * @max_vecs: maximum (desired) number of vectors\n>> + * @flags: flags for this allocation, see pci_alloc_irq_vectors()\n>> + * @affd: optional description of the affinity requirements\n>> + *\n>> + * This is a device resource managed version of pci_alloc_irq_vectors_affinity().\n>> + * Interrupt vectors are automatically freed on driver detach by the devres\n>> + * machinery. Drivers which use pcim_enable_device() as well as this function,\n>> + * do not need to explicitly call pci_free_irq_vectors() currently.\n>> + *\n>> + * Returns number of vectors allocated (which might be smaller than\n>> + * @max_vecs) on success, or a negative error code on failure.\n>> + */\n>> +int pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n>> +\t\t\t\t   unsigned int max_vecs, unsigned int flags,\n>> +\t\t\t\t   struct irq_affinity *affd)\n>> +{\n>> +\treturn pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n>> +\t\t\t\t\t      flags, affd);\n> \n> Hmmm, OK – I am not thaaaat deep in the topic anymore, so correct me if\n> I am wrong. 
But to me it seems that we might have a problem.\n> \n> Your intention seems to be to rely on hybrid devres here, isn't it?\n> \n\nYes, it is.\n\n> Because pci_alloc_irq_vectors_affinity() runs into\n> __pci_enable_msi(x)_range(), which runs into pci_setup_msi_context(),\n> which sets up the devres callbacks.\n> \n> And this would now mean that your new function here *also* is hybrid,\n> because it relies on pcim_enable_device() setting the flag, right?\n> \n> IOW, when someone called pci_enable_device(), your pcim_ function here\n> would NOT be a managed devres function. Right?\n> \n> That would obviously not fly. The current (and IMO) desirable design\n> goal for devres PCI is that pcim_ functions are ALWAYS managed (hence\n> the 'm' = managed, and pci_ functions are NEVER managed).\n> \n> I am sorry if / that I overlooked that while first glimpsing at your\n> previous versions.\n> \n\nDon't worry.\n\n> \n> IIRC for this reason I had to provide a completely unmanaged backend,\n> and to do so I actually had to *copy* existing functions from pci.c, so\n> for many months PCI contained two versions of the same functions.\n> \n\nI was trying to avoid copying the existing functions.\n\nThere are some combinations here:\n1. pcim_enable_device() + pci_alloc   -> correct\n2. pcim_enable_device() + pci_alloc + pci_free  -> bug\n3. pci_enable_device() + pci_alloc + pci_free -> correct\n\nTo solve the 2nd case, I need to remove the hybrid mode from\npcim_enable_device(). So what I had in mind is:\n\n1. keep pcim_alloc_irq_vectors() relying on pcim_enable_device(), still\nthe hybrid mode as you pointed out.\n2. convert pcim_enable_device() + pci_alloc_irq_vectors() users to\npcim_enable_device() + pcim_alloc_irq_vectors(), which is the first\nabove case. Meanwhile add documentation or code check to prevent new\npci_enable_device() + pcim_alloc_irq_vectors() users from using it.\n\n3. 
after conversions, we still have three combinations:\n- pcim_enable_device() + pcim_alloc()  -> correct\n- pcim_enable_device() + pci_alloc + pci_free  -> bug\n- pci_enable_device() + pci_alloc + pci_free   -> correct\n\n4. Then remove pcim_setup_msi_release() from pci_setup_msi_context().\nAnd add pcim_setup_msi_release() to my new APIs. This way,\n- pcim_enable_device() + pcim_alloc()  -> correct\n- pcim_enable_device() + pci_alloc + pci_free  -> correct\n- pci_enable_device() + pci_alloc + pci_free   -> correct\n\n5. remove the limitation of using pci_enable_device() + pcim_alloc, as\nmentioned in the 2nd above step. so we could allow this 4th pattern.\nFinally all things work fine.\n\nPlease let me know if this approach works for you?\n\n\n> The reason was that the hybrid behavior was burried so deep in the\n> subsystem.\n> \n> See here:\n> \n> Here I had to reimplement some functions to be used by my new pcim_\n> functions:\n> https://lore.kernel.org/linux-pci/20240610093149.20640-4-pstanner@redhat.com/\n> \n> \n> And here I could finally remove them later:\n> https://lore.kernel.org/linux-pci/20250519112959.25487-7-phasta@kernel.org/\n> \n> \n> I hope this helps. 
And again sorry for potentially causing you\n> unnecessary work :(\n> \n> \n> Regards\n> Philipp\n> \n> \n>> +}\n>> +EXPORT_SYMBOL(pcim_alloc_irq_vectors_affinity);\n>> +\n>> +/**\n>>    * pci_irq_vector() - Get Linux IRQ number of a device interrupt vector\n>>    * @dev: the PCI device to operate on\n>>    * @nr:  device-relative interrupt vector index (0-based); has different\n>> diff --git a/include/linux/pci.h b/include/linux/pci.h\n>> index 2c44545..3716c67 100644\n>> --- a/include/linux/pci.h\n>> +++ b/include/linux/pci.h\n>> @@ -1770,6 +1770,12 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n>>   \t\t\t\t   unsigned int max_vecs, unsigned int flags,\n>>   \t\t\t\t   struct irq_affinity *affd);\n>>   \n>> +int pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n>> +\t\t\t  unsigned int max_vecs, unsigned int flags);\n>> +int pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n>> +\t\t\t\t   unsigned int max_vecs, unsigned int flags,\n>> +\t\t\t\t   struct irq_affinity *affd);\n>> +\n>>   bool pci_msix_can_alloc_dyn(struct pci_dev *dev);\n>>   struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,\n>>   \t\t\t\t     const struct irq_affinity_desc *affdesc);\n>> @@ -1812,6 +1818,22 @@ pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n>>   \t\t\t\t\t      flags, NULL);\n>>   }\n>>   \n>> +static inline int\n>> +pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n>> +\t\t\t       unsigned int max_vecs, unsigned int flags,\n>> +\t\t\t       struct irq_affinity *aff_desc)\n>> +{\n>> +\treturn pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n>> +\t\t\t\t\t      flags, aff_desc);\n>> +}\n>> +static inline int\n>> +pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n>> +\t\t      unsigned int max_vecs, unsigned int flags)\n>> +{\n>> +\treturn pcim_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n>> 
+\t\t\t\t\t      flags, NULL);\n>> +}\n>> +\n>>   static inline bool pci_msix_can_alloc_dyn(struct pci_dev *dev)\n>>   { return false; }\n>>   static inline struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,\n> \n>","headers":{"Return-Path":"\n <linux-pci+bounces-52928-incoming=patchwork.ozlabs.org@vger.kernel.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linux-pci@vger.kernel.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (1024-bit key;\n unprotected) header.d=rock-chips.com header.i=@rock-chips.com\n header.a=rsa-sha256 header.s=default header.b=L9a7Yz7v;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org\n (client-ip=2600:3c09:e001:a7::12fc:5321; helo=sto.lore.kernel.org;\n envelope-from=linux-pci+bounces-52928-incoming=patchwork.ozlabs.org@vger.kernel.org;\n receiver=patchwork.ozlabs.org)","smtp.subspace.kernel.org;\n\tdkim=pass (1024-bit key) header.d=rock-chips.com header.i=@rock-chips.com\n header.b=\"L9a7Yz7v\"","smtp.subspace.kernel.org;\n arc=none smtp.client-ip=45.254.49.209","smtp.subspace.kernel.org;\n dmarc=pass (p=none dis=none) header.from=rock-chips.com","smtp.subspace.kernel.org;\n spf=pass smtp.mailfrom=rock-chips.com"],"Received":["from sto.lore.kernel.org (sto.lore.kernel.org\n [IPv6:2600:3c09:e001:a7::12fc:5321])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g0syG3gcKz1yD5\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 22 Apr 2026 18:36:22 +1000 (AEST)","from smtp.subspace.kernel.org (conduit.subspace.kernel.org\n [100.90.174.1])\n\tby sto.lore.kernel.org (Postfix) with ESMTP id B5E99300AB3A\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 22 Apr 2026 08:36:19 +0000 (UTC)","from 
localhost.localdomain (localhost.localdomain [127.0.0.1])\n\tby smtp.subspace.kernel.org (Postfix) with ESMTP id 951E440DFC3;\n\tWed, 22 Apr 2026 08:36:18 +0000 (UTC)"],"Message-ID":"<f912358d-925b-1e25-2852-1f265d09032a@rock-chips.com>","Date":"Wed, 22 Apr 2026 16:36:01 +0800","Precedence":"bulk","X-Mailing-List":"linux-pci@vger.kernel.org","List-Id":"<linux-pci.vger.kernel.org>","Cc":"shawn.lin@rock-chips.com, Nirmal Patel <nirmal.patel@linux.intel.com>,\n Jonathan Derrick <jonathan.derrick@linux.dev>,\n Kurt Schwemmer <kurt.schwemmer@microsemi.com>,\n Logan Gunthorpe <logang@deltatee.com>, linux-pci@vger.kernel.org","Subject":"Re: [RESEND PATCH v3 1/3] PCI/MSI: Add Devres managed IRQ vectors allocation","To":"phasta@kernel.org, Bjorn Helgaas <bhelgaas@google.com>","References":"<1776825522-6390-1-git-send-email-shawn.lin@rock-chips.com>\n <1776825522-6390-2-git-send-email-shawn.lin@rock-chips.com>\n <2c227e17a573c8c674c027c9b3014c2ec9ebfec7.camel@mailbox.org>","From":"Shawn Lin <shawn.lin@rock-chips.com>","In-Reply-To":"<2c227e17a573c8c674c027c9b3014c2ec9ebfec7.camel@mailbox.org>","Content-Type":"text/plain; charset=UTF-8; format=flowed"}},{"id":3680622,"web_url":"http://patchwork.ozlabs.org/comment/3680622/","msgid":"<42e31ebc86a34001e98aecc7197e5e5a8fd5bb03.camel@mailbox.org>","list_archive_url":null,"date":"2026-04-22T13:07:32","subject":"Re: [RESEND PATCH v3 1/3] PCI/MSI: Add Devres managed IRQ vectors allocation","submitter":{"id":90407,"url":"http://patchwork.ozlabs.org/api/people/90407/","name":"Philipp Stanner","email":"phasta@mailbox.org"},"content":"On Wed, 2026-04-22 at 16:36 +0800, Shawn Lin wrote:\n> Hi Philipp\n> \n> On Wed, 2026/04/22 at 15:32, Philipp Stanner wrote:\n> > On Wed, 2026-04-22 at 10:38 +0800, Shawn Lin wrote:\n> > > The PCI/MSI subsystem suffers from a long-standing design issue where\n> > > the implicit, automatic management of IRQ vectors by the devres\n> > \n> > nit:\n> > The *automatic* handling is not actually the issue. All devres\n> > interfaces (pcim_) are automatic by definition.\n> > \n> > The issue is the \"sometimes managed, sometimes not\" nature of some pci_\n> > interfaces. I came to call those \"hybrid functions\".\n> > \n> > > framework conflicts with explicit driver cleanup, leading to ambiguous\n> > > ownership and potential resource management bugs.\n> > > \n> > > Historically, pcim_enable_device() not only manages standard PCI\n> > > resources (BARs) via devres\n> > > \n> > \n> > See cover-letter. As I said, that's not the case anymore. It was the\n> > case once.\n> > \n> > I think you could just briefly state \"Historically, some functions\n> > exhibited this hybrid behavior, the last of which by now are …\"\n> > \n> > If you want you could maybe also Link: one  of my patch series's for\n> > the wider history.\n> > \n> > > but also implicitly sets the is_msi_managed\n> > > flag if calling pci_alloc_irq_vectors*(). 
This registers pcim_msi_release()\n> > > as a cleanup action, creating a hybrid ownership model.\n> > \n> > nit:\n> > s/ownership/responsibility\n> > \n> > > \n> > > This ambiguity causes problems when drivers follow a common but\n> > > hazardous pattern:\n> > > 1. Using pcim_enable_device() (which implicitly marks IRQs as managed)\n> > > 2. Explicitly calling pci_alloc_irq_vectors() for IRQ allocation\n> > > 3. Also calling pci_free_irq_vectors() in error paths or remove routines\n> > > \n> > > In this scenario, the devres framework may attempt to free the IRQ\n> > > vectors a second time upon device release, leading to double-free\n> > > issues. Analysis of the tree shows this hazardous pattern exists\n> > > in multiple drivers, while 35 other drivers correctly rely solely\n> > > on the implicit cleanup.\n> > \n> > See cover-letter.\n> > \n> > I would leave that to Bjorn's preference, but I think that info is\n> > useful for the cover letter so that reviewers get what's going on, but\n> > it's not useful in the commit message. Your description of the\n> > problematic API is justification enough IMO.\n> > \n> > > \n> > > To resolve this ambiguity, introduce explicit managed APIs for IRQ\n> > > vector allocation:\n> > > - pcim_alloc_irq_vectors()\n> > > - pcim_alloc_irq_vectors_affinity()\n> > > \n> > > These functions are the devres-managed counterparts to\n> > > pci_alloc_irq_vectors[_affinity](). 
Drivers that wish to have\n> > > devres-managed IRQ vectors should use these new APIs instead.\n> > > \n> > > The long-term goal is to convert all drivers which use pcim_enable_device()\n> > > as well as pci_alloc_irq_vectors[_affinity]() to use these managed\n> > \n> > nit:\n> > s/as well as/in combination with\n> > \n> > > functions, and eventually remove the problematic hybrid management\n> > > pattern from pcim_enable_device() and pcim_setup_msi_release() entirely.\n> > > This patch lays the foundation by introducing the APIs.\n> > \n> > But in general reads very nice and is mostly spot-on :)\n> > \n> > > \n> > > Suggested-by: Philipp Stanner <phasta@kernel.org>\n> > > Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>\n> > > \n> > > ---\n> > > \n> > > Changes in v3:\n> > > - Rework the commit message and function doc (Philipp)\n> > > - Remove setting is_msi_managed flag from new APIs (Philipp)\n> > > \n> > > Changes in v2:\n> > > - Rebase\n> > > - Introduce patches only for PCI subsystem to convert the API\n> > > \n> > >   drivers/pci/msi/api.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++\n> > >   include/linux/pci.h   | 22 ++++++++++++++++++++++\n> > >   2 files changed, 69 insertions(+)\n> > > \n> > > diff --git a/drivers/pci/msi/api.c b/drivers/pci/msi/api.c\n> > > index c18559b..90b67f5 100644\n> > > --- a/drivers/pci/msi/api.c\n> > > +++ b/drivers/pci/msi/api.c\n> > > @@ -297,6 +297,53 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n> > >   EXPORT_SYMBOL(pci_alloc_irq_vectors_affinity);\n> > >   \n> > >   /**\n> > > + * pcim_alloc_irq_vectors - devres managed pci_alloc_irq_vectors()\n> > > + * @dev: the PCI device to operate on\n> > > + * @min_vecs: minimum number of vectors required (must be >= 1)\n> > > + * @max_vecs: maximum (desired) number of vectors\n> > > + * @flags: flags for this allocation, see pci_alloc_irq_vectors()\n> > > + *\n> > > + * This is a device resource managed version of 
pci_alloc_irq_vectors().\n> > > + * Interrupt vectors are automatically freed on driver detach by the\n> > > + * devres. Drivers do not need to explicitly call pci_free_irq_vectors().\n> > \n> > s/the devres/devres\n> > \n> > s/do not need/must not  <- You stated doing so would be a bug, right?\n> > \n> > > + *\n> > > + * Returns number of vectors allocated (which might be smaller than\n> > > + * @max_vecs) on success, or a negative error code on failure.\n> > > + */\n> > > +int pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n> > > +    unsigned int max_vecs, unsigned int flags)\n> > > +{\n> > > + return pcim_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n> > > +        flags, NULL);\n> > > +}\n> > > +EXPORT_SYMBOL(pcim_alloc_irq_vectors);\n> > > +\n> > > +/**\n> > > + * pcim_alloc_irq_vectors_affinity - devres managed pci_alloc_irq_vectors_affinity()\n> > > + * @dev: the PCI device to operate on\n> > > + * @min_vecs: minimum number of vectors required (must be >= 1)\n> > > + * @max_vecs: maximum (desired) number of vectors\n> > > + * @flags: flags for this allocation, see pci_alloc_irq_vectors()\n> > > + * @affd: optional description of the affinity requirements\n> > > + *\n> > > + * This is a device resource managed version of pci_alloc_irq_vectors_affinity().\n> > > + * Interrupt vectors are automatically freed on driver detach by the devres\n> > > + * machinery. 
Drivers which use pcim_enable_device() as well as this function,\n> > > + * do not need to explicitly call pci_free_irq_vectors() currently.\n> > > + *\n> > > + * Returns number of vectors allocated (which might be smaller than\n> > > + * @max_vecs) on success, or a negative error code on failure.\n> > > + */\n> > > +int pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n> > > +    unsigned int max_vecs, unsigned int flags,\n> > > +    struct irq_affinity *affd)\n> > > +{\n> > > + return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n> > > +       flags, affd);\n> > \n> > Hmmm, OK – I am not thaaaat deep in the topic anymore, so correct me if\n> > I am wrong. But to me it seems that we might have a problem.\n> > \n> > Your intention seems to be to rely on hybrid devres here, correct?\n> > \n> \n> Yes, it is.\n> \n> > Because pci_alloc_irq_vectors_affinity() runs into\n> > __pci_enable_msi(x)_range(), which runs into pci_setup_msi_context(),\n> > which sets up the devres callbacks.\n> > \n> > And this would now mean that your new function here *also* is hybrid,\n> > because it relies on pcim_enable_device() setting the flag, right?\n> > \n> > IOW, when someone called pci_enable_device(), your pcim_ function here\n> > would NOT be a managed devres function. Right?\n> > \n> > That would obviously not fly. 
The current (and IMO) desirable design\n> > goal for devres PCI is that pcim_ functions are ALWAYS managed (hence\n> > the 'm' = managed, and pci_ functions are NEVER managed).\n> > \n> > I am sorry if / that I overlooked that while first glimpsing at your\n> > previous versions.\n> > \n> \n> Don't worry.\n> \n> > \n> > IIRC for this reason I had to provide a completely unmanaged backend,\n> > and to do so I actually had to *copy* existing functions from pci.c, so\n> > for many months PCI contained two versions of the same functions.\n> > \n> \n> I was trying to avoid copying the existing functions.\n> \n> There are some combinations here:\n> 1. pcim_enable_device() + pci_alloc   -> correct\n> 2. pcim_enable_device() + pci_alloc + pci_free  -> bug\n> 3. pci_enable_device() + pci_alloc + pci_free -> correct\n> \n> To solve the 2nd case, I need to remove the hybrid mode from\n> pcim_enable_device(). So what I had in mind is:\n> \n> 1. keep pcim_alloc_irq_vectors() relying on pcim_enable_device(), still\n> in the hybrid mode as you pointed out.\n> 2. convert pcim_enable_device() + pci_alloc_irq_vectors() users to\n> pcim_enable_device() + pcim_alloc_irq_vectors(), which is the first\n> case above. Meanwhile add documentation or a code check to prevent new\n> users from combining pci_enable_device() + pcim_alloc_irq_vectors().\n> \n> 3. after the conversions, we still have three combinations:\n> - pcim_enable_device() + pcim_alloc()  -> correct\n> - pcim_enable_device() + pci_alloc + pci_free  -> bug\n> - pci_enable_device() + pci_alloc + pci_free   -> correct\n> \n> 4. Then remove pcim_setup_msi_release() from pci_setup_msi_context()\n> and add pcim_setup_msi_release() to my new APIs. This way:\n> - pcim_enable_device() + pcim_alloc()  -> correct\n> - pcim_enable_device() + pci_alloc + pci_free  -> correct\n> - pci_enable_device() + pci_alloc + pci_free   -> correct\n> \n> 5. 
remove the limitation of using pci_enable_device() + pcim_alloc, as\n> mentioned in step 2 above, so we could allow this 4th pattern.\n> Finally, all things work fine.\n> \n> Please let me know if this approach works for you.\n\nThe biggest issue I see is that you add another hybrid function, even\nif it's temporary. pcim_alloc_irq_vectors() will sometimes be managed\nand sometimes not – until you have removed all users of the\naforementioned functions, and finally performed step №4 from above.\n\nNote that even in the past, with the oldest, most broken version of the\nPCI devres API, pcim_ functions were *always* managed, completely\nindependently from pci(m)_enable_device().\n\nWhat pcim_enable_device() did was implicitly switch some pci_\nfunctions, such as pci_request_regions(), into managed mode.\n\nThe solution presented / intended by you would rely on:\n   1. New users of pcim_ being aware (through documentation, for\n      example) that this one pcim_ function is special and not always\n      managed (for now).\n   2. New users being nice and really using pcim_enable_device().\n   3. You being able to fully see it all through to the end, i.e., you\n      not being switched to a different project, switching jobs and\n      opening a bar in Hawaii etc. etc.\n\nMy experience has, unfortunately, been that people often don't read\ndocumentation, especially if they are already used to interfaces like\nthat working a certain way.\nHow many drivers are there to be ported? Do you have a list and maybe a\ntime horizon? It took me almost a year to remove my helper functions again.\n\nThat's not a hard \"NACK\" from my side, it's just that I wouldn't do it like\nthat. 
I suppose it is for the maintainers to pick one of the two\nsolution pathways:\n\na) have another hybrid function temporarily or\nb) have duplicated code temporarily\n\n(where \"temporarily\" means until the contributor finishes the job :)\n\n\nBTW, please keep in mind that in my opinion, once PCI/MSI is solved, I\nbelieve that pcim_enable_device() should do nothing anymore but disable\nthe device on driver detach, without any other side effects.\n\n\nRegards\nP.\n\n> \n> \n> > The reason was that the hybrid behavior was buried so deep in the\n> > subsystem.\n> > \n> > See here:\n> > \n> > Here I had to reimplement some functions to be used by my new pcim_\n> > functions:\n> > https://lore.kernel.org/linux-pci/20240610093149.20640-4-pstanner@redhat.com/\n> > \n> > \n> > And here I could finally remove them later:\n> > https://lore.kernel.org/linux-pci/20250519112959.25487-7-phasta@kernel.org/\n> > \n> > \n> > I hope this helps. And again sorry for potentially causing you\n> > unnecessary work :(\n> > \n> > \n> > Regards\n> > Philipp\n> > \n> > \n> > > +}\n> > > +EXPORT_SYMBOL(pcim_alloc_irq_vectors_affinity);\n> > > +\n> > > +/**\n> > >    * pci_irq_vector() - Get Linux IRQ number of a device interrupt vector\n> > >    * @dev: the PCI device to operate on\n> > >    * @nr:  device-relative interrupt vector index (0-based); has different\n> > > diff --git a/include/linux/pci.h b/include/linux/pci.h\n> > > index 2c44545..3716c67 100644\n> > > --- a/include/linux/pci.h\n> > > +++ b/include/linux/pci.h\n> > > @@ -1770,6 +1770,12 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n> > >       unsigned int max_vecs, unsigned int flags,\n> > >       struct irq_affinity *affd);\n> > >   \n> > > +int pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n> > > +   unsigned int max_vecs, unsigned int flags);\n> > > +int pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n> > > +    unsigned int max_vecs, 
unsigned int flags,\n> > > +    struct irq_affinity *affd);\n> > > +\n> > >   bool pci_msix_can_alloc_dyn(struct pci_dev *dev);\n> > >   struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,\n> > >         const struct irq_affinity_desc *affdesc);\n> > > @@ -1812,6 +1818,22 @@ pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n> > >          flags, NULL);\n> > >   }\n> > >   \n> > > +static inline int\n> > > +pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,\n> > > +        unsigned int max_vecs, unsigned int flags,\n> > > +        struct irq_affinity *aff_desc)\n> > > +{\n> > > + return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n> > > +       flags, aff_desc);\n> > > +}\n> > > +static inline int\n> > > +pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,\n> > > +       unsigned int max_vecs, unsigned int flags)\n> > > +{\n> > > + return pcim_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,\n> > > +       flags, NULL);\n> > > +}\n> > > +\n> > >   static inline bool pci_msix_can_alloc_dyn(struct pci_dev *dev)\n> > >   { return false; }\n> > >   static inline struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,\n> > \n> >","headers":{"Return-Path":"\n <linux-pci+bounces-52970-incoming=patchwork.ozlabs.org@vger.kernel.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linux-pci@vger.kernel.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org\n (client-ip=172.234.253.10; helo=sea.lore.kernel.org;\n envelope-from=linux-pci+bounces-52970-incoming=patchwork.ozlabs.org@vger.kernel.org;\n receiver=patchwork.ozlabs.org)","smtp.subspace.kernel.org;\n\tdkim=pass (2048-bit key) header.d=mailbox.org header.i=@mailbox.org\n header.b=\"UX9Lv3PE\"","smtp.subspace.kernel.org;\n arc=none 
smtp.client-ip=80.241.56.171"],"Message-ID":"<42e31ebc86a34001e98aecc7197e5e5a8fd5bb03.camel@mailbox.org>","Subject":"Re: [RESEND PATCH v3 1/3] PCI/MSI: Add Devres managed IRQ vectors allocation","From":"Philipp Stanner <phasta@mailbox.org>","Reply-To":"phasta@kernel.org","To":"Shawn Lin <shawn.lin@rock-chips.com>, phasta@kernel.org, Bjorn Helgaas\n\t <bhelgaas@google.com>","Cc":"Nirmal Patel <nirmal.patel@linux.intel.com>, Jonathan Derrick\n <jonathan.derrick@linux.dev>, Kurt Schwemmer\n <kurt.schwemmer@microsemi.com>,  Logan Gunthorpe <logang@deltatee.com>,\n linux-pci@vger.kernel.org","Date":"Wed, 22 Apr 2026 15:07:32 +0200","In-Reply-To":"<f912358d-925b-1e25-2852-1f265d09032a@rock-chips.com>","References":"<1776825522-6390-1-git-send-email-shawn.lin@rock-chips.com>\n\t <1776825522-6390-2-git-send-email-shawn.lin@rock-chips.com>\n\t <2c227e17a573c8c674c027c9b3014c2ec9ebfec7.camel@mailbox.org>\n\t <f912358d-925b-1e25-2852-1f265d09032a@rock-chips.com>","Precedence":"bulk","X-Mailing-List":"linux-pci@vger.kernel.org","List-Id":"<linux-pci.vger.kernel.org>"}}]