get:
Show a patch.

patch:
Update a patch (partial update; only the supplied fields change).

put:
Update a patch (full update).
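
A minimal usage sketch in Python (not part of the API page itself): reading the resource works anonymously, while PATCH/PUT need a Patchwork API token. The token value and the target state below are placeholders; the GET request that follows returns the JSON document shown after it.

    import requests

    BASE = "http://patchwork.ozlabs.org/api"
    PATCH_ID = 2165663

    # Anonymous read: returns the JSON document shown below.
    patch = requests.get(f"{BASE}/patches/{PATCH_ID}/").json()
    print(patch["name"], patch["state"])

    # Partial update: requires a maintainer API token (placeholder here).
    resp = requests.patch(
        f"{BASE}/patches/{PATCH_ID}/",
        headers={"Authorization": "Token <your-api-token>"},
        json={"state": "accepted"},
    )
    resp.raise_for_status()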

GET /api/patches/2165663/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2165663,
    "url": "http://patchwork.ozlabs.org/api/patches/2165663/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20251117134912.18566-9-larysa.zaremba@intel.com/",
    "project": {
        "id": 46,
        "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api",
        "name": "Intel Wired Ethernet development",
        "link_name": "intel-wired-lan",
        "list_id": "intel-wired-lan.osuosl.org",
        "list_email": "intel-wired-lan@osuosl.org",
        "web_url": "",
        "scm_url": "",
        "webscm_url": "",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20251117134912.18566-9-larysa.zaremba@intel.com>",
    "list_archive_url": null,
    "date": "2025-11-17T13:48:48",
    "name": "[iwl-next,v5,08/15] idpf: refactor idpf to use libie_pci APIs",
    "commit_ref": null,
    "pull_url": null,
    "state": "under-review",
    "archived": false,
    "hash": "61a0d6fb222211d4ca2558af4e83476d61c1b831",
    "submitter": {
        "id": 84900,
        "url": "http://patchwork.ozlabs.org/api/people/84900/?format=api",
        "name": "Larysa Zaremba",
        "email": "larysa.zaremba@intel.com"
    },
    "delegate": {
        "id": 109701,
        "url": "http://patchwork.ozlabs.org/api/users/109701/?format=api",
        "username": "anguy11",
        "first_name": "Anthony",
        "last_name": "Nguyen",
        "email": "anthony.l.nguyen@intel.com"
    },
    "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20251117134912.18566-9-larysa.zaremba@intel.com/mbox/",
    "series": [
        {
            "id": 482391,
            "url": "http://patchwork.ozlabs.org/api/series/482391/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=482391",
            "date": "2025-11-17T13:48:40",
            "name": "Introduce iXD driver",
            "version": 5,
            "mbox": "http://patchwork.ozlabs.org/series/482391/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/2165663/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/2165663/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<intel-wired-lan-bounces@osuosl.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@legolas.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Authentication-Results": [
            "legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256\n header.s=default header.b=7kgrOXot;\n\tdkim-atps=neutral",
            "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org\n (client-ip=2605:bc80:3010::138; helo=smtp1.osuosl.org;\n envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=patchwork.ozlabs.org)"
        ],
        "Received": [
            "from smtp1.osuosl.org (smtp1.osuosl.org [IPv6:2605:bc80:3010::138])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4d98Hq45xVz1yDb\n\tfor <incoming@patchwork.ozlabs.org>; Tue, 18 Nov 2025 00:49:43 +1100 (AEDT)",
            "from localhost (localhost [127.0.0.1])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id 23AFE80E95;\n\tMon, 17 Nov 2025 13:49:42 +0000 (UTC)",
            "from smtp1.osuosl.org ([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id Aw4yesxKZQtq; Mon, 17 Nov 2025 13:49:40 +0000 (UTC)",
            "from lists1.osuosl.org (lists1.osuosl.org [140.211.166.142])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id 123BA80DD4;\n\tMon, 17 Nov 2025 13:49:40 +0000 (UTC)",
            "from smtp3.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n by lists1.osuosl.org (Postfix) with ESMTP id 6D350158\n for <intel-wired-lan@lists.osuosl.org>; Mon, 17 Nov 2025 13:49:38 +0000 (UTC)",
            "from localhost (localhost [127.0.0.1])\n by smtp3.osuosl.org (Postfix) with ESMTP id 5463660DBD\n for <intel-wired-lan@lists.osuosl.org>; Mon, 17 Nov 2025 13:49:38 +0000 (UTC)",
            "from smtp3.osuosl.org ([127.0.0.1])\n by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id JOsj9o775wRQ for <intel-wired-lan@lists.osuosl.org>;\n Mon, 17 Nov 2025 13:49:36 +0000 (UTC)",
            "from mgamail.intel.com (mgamail.intel.com [198.175.65.12])\n by smtp3.osuosl.org (Postfix) with ESMTPS id 20F5E60D5E\n for <intel-wired-lan@lists.osuosl.org>; Mon, 17 Nov 2025 13:49:36 +0000 (UTC)",
            "from fmviesa007.fm.intel.com ([10.60.135.147])\n by orvoesa104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 17 Nov 2025 05:49:36 -0800",
            "from irvmail002.ir.intel.com ([10.43.11.120])\n by fmviesa007.fm.intel.com with ESMTP; 17 Nov 2025 05:49:29 -0800",
            "from mglak.igk.intel.com (mglak.igk.intel.com [10.237.112.146])\n by irvmail002.ir.intel.com (Postfix) with ESMTP id 10FD137E3A;\n Mon, 17 Nov 2025 13:49:27 +0000 (GMT)"
        ],
        "X-Virus-Scanned": [
            "amavis at osuosl.org",
            "amavis at osuosl.org"
        ],
        "X-Comment": "SPF check N/A for local connections - client-ip=140.211.166.142;\n helo=lists1.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=<UNKNOWN> ",
        "DKIM-Filter": [
            "OpenDKIM Filter v2.11.0 smtp1.osuosl.org 123BA80DD4",
            "OpenDKIM Filter v2.11.0 smtp3.osuosl.org 20F5E60D5E"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=osuosl.org;\n\ts=default; t=1763387380;\n\tbh=Dm6UOSdxTORCePIs3OxiZ1NTUbfa5hy4VdyglRvU4F0=;\n\th=From:To:Cc:Date:In-Reply-To:References:Subject:List-Id:\n\t List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe:\n\t From;\n\tb=7kgrOXot3qkbRN5pRcpC4dHds+E2hMvTYvi+HbfDHabfC1eO7Dtln52jYr994MHzF\n\t 9rlCnfB/G5/XcN+ucQRerCFwHSO5aELgSo9ey48LKqnVoeA0WYbGy7BwN5BaJ5qKed\n\t 5GgMw6PcUUwnsK9p9q/O56UkRQs+1FdpIYvZQ7JIWX+j7QPxmPLuqf71PoXGsfim9r\n\t 45gkQxlLeTCBMXM3qZaLThYhEXa9ywlHhs+c2MIXv8XvsFzD94tFtadQdOaofwsZua\n\t MBnurUzAvGbQAuSIsL1RYQuAmkHrpEeANJs4axlsGBGIM+oi7M/ofLYPJYIrqXJ/jI\n\t ejEZKLexzGWuQ==",
        "Received-SPF": "Pass (mailfrom) identity=mailfrom; client-ip=198.175.65.12;\n helo=mgamail.intel.com; envelope-from=larysa.zaremba@intel.com;\n receiver=<UNKNOWN>",
        "DMARC-Filter": "OpenDMARC Filter v1.4.2 smtp3.osuosl.org 20F5E60D5E",
        "X-CSE-ConnectionGUID": [
            "8O0QJ0J6RLyLT9+Utd9g7A==",
            "MnuiaBA5Ss+q0BQh8CtK9A=="
        ],
        "X-CSE-MsgGUID": [
            "ifqzi+rDRqapXRtDg+OvzA==",
            "F7ZHKrzWS56SU+NFGrbhtw=="
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6800,10657,11616\"; a=\"76846131\"",
            "E=Sophos;i=\"6.19,311,1754982000\"; d=\"scan'208\";a=\"76846131\"",
            "E=Sophos;i=\"6.19,311,1754982000\"; d=\"scan'208\";a=\"190115725\""
        ],
        "X-ExtLoop1": "1",
        "From": "Larysa Zaremba <larysa.zaremba@intel.com>",
        "To": "intel-wired-lan@lists.osuosl.org, Tony Nguyen <anthony.l.nguyen@intel.com>",
        "Cc": "aleksander.lobakin@intel.com, sridhar.samudrala@intel.com,\n \"Singhai, Anjali\" <anjali.singhai@intel.com>,\n Michal Swiatkowski <michal.swiatkowski@linux.intel.com>,\n Larysa Zaremba <larysa.zaremba@intel.com>,\n \"Fijalkowski, Maciej\" <maciej.fijalkowski@intel.com>,\n Emil Tantilov <emil.s.tantilov@intel.com>,\n Madhu Chittim <madhu.chittim@intel.com>, Josh Hay <joshua.a.hay@intel.com>,\n \"Keller, Jacob E\" <jacob.e.keller@intel.com>,\n jayaprakash.shanmugam@intel.com, natalia.wochtman@intel.com,\n Jiri Pirko <jiri@resnulli.us>, \"David S. Miller\" <davem@davemloft.net>,\n Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>,\n Paolo Abeni <pabeni@redhat.com>, Simon Horman <horms@kernel.org>,\n Jonathan Corbet <corbet@lwn.net>,\n Richard Cochran <richardcochran@gmail.com>,\n Przemek Kitszel <przemyslaw.kitszel@intel.com>,\n Andrew Lunn <andrew+netdev@lunn.ch>, netdev@vger.kernel.org,\n linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org",
        "Date": "Mon, 17 Nov 2025 14:48:48 +0100",
        "Message-ID": "<20251117134912.18566-9-larysa.zaremba@intel.com>",
        "X-Mailer": "git-send-email 2.47.0",
        "In-Reply-To": "<20251117134912.18566-1-larysa.zaremba@intel.com>",
        "References": "<20251117134912.18566-1-larysa.zaremba@intel.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "X-Mailman-Original-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple;\n d=intel.com; i=@intel.com; q=dns/txt; s=Intel;\n t=1763387376; x=1794923376;\n h=from:to:cc:subject:date:message-id:in-reply-to:\n references:mime-version:content-transfer-encoding;\n bh=UsqK9V54xEa8sSuDEMxJg4qPvn7sRRHzG69i8HRc79I=;\n b=XDDavVgnmC7DQ78lCCMDGMC3yIusO5ouqmeMAXHeu/TZ1Q1dM4ObwVZN\n pANUJvSuvf3w5SOOExf7iH2LMjEHDFWpgnR7s19mXeTeZ0SFCX8X3Lvl7\n QWda8FiC6sO7NqJfwFiDo10hiMGrDBtTMYfjCZcWTSjd5bIkUS9XyeVDd\n OJz67H7aHoTOcfhn1rd+8NPN2VP7AVTrh7/6mF9z/ITtjs3ALN6TxZs2S\n GqLr7bbyWT8kKm5sBCG2l0TW8eJauh7dSbqmz5Wju50aT+4ovhyhnPkHG\n AzjXCqPPe0840V0dXDWcK81XG/vbdBTHRaXS7xDQ3Nb6iIQddP0YGQ0d0\n A==;",
        "X-Mailman-Original-Authentication-Results": [
            "smtp3.osuosl.org;\n dmarc=pass (p=none dis=none)\n header.from=intel.com",
            "smtp3.osuosl.org;\n dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com\n header.a=rsa-sha256 header.s=Intel header.b=XDDavVgn"
        ],
        "Subject": "[Intel-wired-lan] [PATCH iwl-next v5 08/15] idpf: refactor idpf to\n use libie_pci APIs",
        "X-BeenThere": "intel-wired-lan@osuosl.org",
        "X-Mailman-Version": "2.1.30",
        "Precedence": "list",
        "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n <intel-wired-lan.osuosl.org>",
        "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>",
        "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>",
        "List-Post": "<mailto:intel-wired-lan@osuosl.org>",
        "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>",
        "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>",
        "Errors-To": "intel-wired-lan-bounces@osuosl.org",
        "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"
    },
    "content": "From: Pavan Kumar Linga <pavan.kumar.linga@intel.com>\n\nUse libie_pci init and MMIO APIs where possible, struct idpf_hw cannot be\ndeleted for now as it also houses control queues that will be refactored\nlater. Use libie_cp header for libie_ctlq_ctx that contains mmio info from\nthe start in order to not increase the diff later.\n\nReviewed-by: Madhu Chittim <madhu.chittim@intel.com>\nReviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>\nSigned-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>\nCo-developed-by: Larysa Zaremba <larysa.zaremba@intel.com>\nSigned-off-by: Larysa Zaremba <larysa.zaremba@intel.com>\n---\n drivers/net/ethernet/intel/idpf/Kconfig       |   1 +\n drivers/net/ethernet/intel/idpf/idpf.h        |  70 +-------\n .../net/ethernet/intel/idpf/idpf_controlq.c   |  26 ++-\n .../net/ethernet/intel/idpf/idpf_controlq.h   |   2 -\n drivers/net/ethernet/intel/idpf/idpf_dev.c    |  61 +++----\n drivers/net/ethernet/intel/idpf/idpf_idc.c    |  38 +++--\n drivers/net/ethernet/intel/idpf/idpf_lib.c    |   7 +-\n drivers/net/ethernet/intel/idpf/idpf_main.c   | 111 ++++++------\n drivers/net/ethernet/intel/idpf/idpf_vf_dev.c |  32 ++--\n .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 158 ++++++++----------\n .../ethernet/intel/idpf/idpf_virtchnl_ptp.c   |  58 ++++---\n 11 files changed, 264 insertions(+), 300 deletions(-)",
    "diff": "diff --git a/drivers/net/ethernet/intel/idpf/Kconfig b/drivers/net/ethernet/intel/idpf/Kconfig\nindex adab2154125b..586df3a4afe9 100644\n--- a/drivers/net/ethernet/intel/idpf/Kconfig\n+++ b/drivers/net/ethernet/intel/idpf/Kconfig\n@@ -6,6 +6,7 @@ config IDPF\n \tdepends on PCI_MSI\n \tdepends on PTP_1588_CLOCK_OPTIONAL\n \tselect DIMLIB\n+\tselect LIBIE_CP\n \tselect LIBETH_XDP\n \thelp\n \t  This driver supports Intel(R) Infrastructure Data Path Function\ndiff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h\nindex 1a1ea3fef092..dfa7618ed261 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf.h\n+++ b/drivers/net/ethernet/intel/idpf/idpf.h\n@@ -23,6 +23,7 @@ struct idpf_rss_data;\n \n #include <linux/intel/iidc_rdma.h>\n #include <linux/intel/iidc_rdma_idpf.h>\n+#include <linux/intel/libie/controlq.h>\n #include <linux/intel/virtchnl2.h>\n \n #include \"idpf_txrx.h\"\n@@ -625,6 +626,7 @@ struct idpf_vc_xn_manager;\n  * @flags: See enum idpf_flags\n  * @reset_reg: See struct idpf_reset_reg\n  * @hw: Device access data\n+ * @ctlq_ctx: controlq context\n  * @num_avail_msix: Available number of MSIX vectors\n  * @num_msix_entries: Number of entries in MSIX table\n  * @msix_entries: MSIX table\n@@ -682,6 +684,7 @@ struct idpf_adapter {\n \tDECLARE_BITMAP(flags, IDPF_FLAGS_NBITS);\n \tstruct idpf_reset_reg reset_reg;\n \tstruct idpf_hw hw;\n+\tstruct libie_ctlq_ctx ctlq_ctx;\n \tu16 num_avail_msix;\n \tu16 num_msix_entries;\n \tstruct msix_entry *msix_entries;\n@@ -870,70 +873,6 @@ static inline u8 idpf_get_min_tx_pkt_len(struct idpf_adapter *adapter)\n \treturn pkt_len ? pkt_len : IDPF_TX_MIN_PKT_LEN;\n }\n \n-/**\n- * idpf_get_mbx_reg_addr - Get BAR0 mailbox register address\n- * @adapter: private data struct\n- * @reg_offset: register offset value\n- *\n- * Return: BAR0 mailbox register address based on register offset.\n- */\n-static inline void __iomem *idpf_get_mbx_reg_addr(struct idpf_adapter *adapter,\n-\t\t\t\t\t\t  resource_size_t reg_offset)\n-{\n-\treturn adapter->hw.mbx.vaddr + reg_offset;\n-}\n-\n-/**\n- * idpf_get_rstat_reg_addr - Get BAR0 rstat register address\n- * @adapter: private data struct\n- * @reg_offset: register offset value\n- *\n- * Return: BAR0 rstat register address based on register offset.\n- */\n-static inline void __iomem *idpf_get_rstat_reg_addr(struct idpf_adapter *adapter,\n-\t\t\t\t\t\t    resource_size_t reg_offset)\n-{\n-\treg_offset -= adapter->dev_ops.static_reg_info[1].start;\n-\n-\treturn adapter->hw.rstat.vaddr + reg_offset;\n-}\n-\n-/**\n- * idpf_get_reg_addr - Get BAR0 register address\n- * @adapter: private data struct\n- * @reg_offset: register offset value\n- *\n- * Based on the register offset, return the actual BAR0 register address\n- */\n-static inline void __iomem *idpf_get_reg_addr(struct idpf_adapter *adapter,\n-\t\t\t\t\t      resource_size_t reg_offset)\n-{\n-\tstruct idpf_hw *hw = &adapter->hw;\n-\n-\tfor (int i = 0; i < hw->num_lan_regs; i++) {\n-\t\tstruct idpf_mmio_reg *region = &hw->lan_regs[i];\n-\n-\t\tif (reg_offset >= region->addr_start &&\n-\t\t    reg_offset < (region->addr_start + region->addr_len)) {\n-\t\t\t/* Convert the offset so that it is relative to the\n-\t\t\t * start of the region.  Then add the base address of\n-\t\t\t * the region to get the final address.\n-\t\t\t */\n-\t\t\treg_offset -= region->addr_start;\n-\n-\t\t\treturn region->vaddr + reg_offset;\n-\t\t}\n-\t}\n-\n-\t/* It's impossible to hit this case with offsets from the CP. 
But if we\n-\t * do for any other reason, the kernel will panic on that register\n-\t * access. Might as well do it here to make it clear what's happening.\n-\t */\n-\tBUG();\n-\n-\treturn NULL;\n-}\n-\n /**\n  * idpf_is_reset_detected - check if we were reset at some point\n  * @adapter: driver specific private structure\n@@ -945,7 +884,8 @@ static inline bool idpf_is_reset_detected(struct idpf_adapter *adapter)\n \tif (!adapter->hw.arq)\n \t\treturn true;\n \n-\treturn !(readl(idpf_get_mbx_reg_addr(adapter, adapter->hw.arq->reg.len)) &\n+\treturn !(readl(libie_pci_get_mmio_addr(&adapter->ctlq_ctx.mmio_info,\n+\t\t\t\t\t       adapter->hw.arq->reg.len)) &\n \t\t adapter->hw.arq->reg.len_mask);\n }\n \ndiff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.c b/drivers/net/ethernet/intel/idpf/idpf_controlq.c\nindex 67894eda2d29..89f6b39934d8 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf_controlq.c\n+++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.c\n@@ -1,7 +1,7 @@\n // SPDX-License-Identifier: GPL-2.0-only\n /* Copyright (C) 2023 Intel Corporation */\n \n-#include \"idpf_controlq.h\"\n+#include \"idpf.h\"\n \n /**\n  * idpf_ctlq_setup_regs - initialize control queue registers\n@@ -34,21 +34,27 @@ static void idpf_ctlq_setup_regs(struct idpf_ctlq_info *cq,\n static void idpf_ctlq_init_regs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,\n \t\t\t\tbool is_rxq)\n {\n+\tstruct libie_mmio_info *mmio = &hw->back->ctlq_ctx.mmio_info;\n+\n \t/* Update tail to post pre-allocated buffers for rx queues */\n \tif (is_rxq)\n-\t\tidpf_mbx_wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1));\n+\t\twritel((u32)(cq->ring_size - 1),\n+\t\t       libie_pci_get_mmio_addr(mmio, cq->reg.tail));\n \n \t/* For non-Mailbox control queues only TAIL need to be set */\n \tif (cq->q_id != -1)\n \t\treturn;\n \n \t/* Clear Head for both send or receive */\n-\tidpf_mbx_wr32(hw, cq->reg.head, 0);\n+\twritel(0, libie_pci_get_mmio_addr(mmio, cq->reg.head));\n \n \t/* set starting point */\n-\tidpf_mbx_wr32(hw, cq->reg.bal, lower_32_bits(cq->desc_ring.pa));\n-\tidpf_mbx_wr32(hw, cq->reg.bah, upper_32_bits(cq->desc_ring.pa));\n-\tidpf_mbx_wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask));\n+\twritel(lower_32_bits(cq->desc_ring.pa),\n+\t       libie_pci_get_mmio_addr(mmio, cq->reg.bal));\n+\twritel(upper_32_bits(cq->desc_ring.pa),\n+\t       libie_pci_get_mmio_addr(mmio, cq->reg.bah));\n+\twritel((cq->ring_size | cq->reg.len_ena_mask),\n+\t       libie_pci_get_mmio_addr(mmio, cq->reg.len));\n }\n \n /**\n@@ -328,7 +334,9 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,\n \t */\n \tdma_wmb();\n \n-\tidpf_mbx_wr32(hw, cq->reg.tail, cq->next_to_use);\n+\twritel(cq->next_to_use,\n+\t       libie_pci_get_mmio_addr(&hw->back->ctlq_ctx.mmio_info,\n+\t\t\t\t       cq->reg.tail));\n \n err_unlock:\n \tspin_unlock(&cq->cq_lock);\n@@ -520,7 +528,9 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,\n \n \t\tdma_wmb();\n \n-\t\tidpf_mbx_wr32(hw, cq->reg.tail, cq->next_to_post);\n+\t\twritel(cq->next_to_post,\n+\t\t       libie_pci_get_mmio_addr(&hw->back->ctlq_ctx.mmio_info,\n+\t\t\t\t\t       cq->reg.tail));\n \t}\n \n \tspin_unlock(&cq->cq_lock);\ndiff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.h b/drivers/net/ethernet/intel/idpf/idpf_controlq.h\nindex de4ece40c2ff..acf595e9265f 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf_controlq.h\n+++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.h\n@@ -109,8 +109,6 @@ struct idpf_mmio_reg {\n  * Align to 
ctlq_hw_info\n  */\n struct idpf_hw {\n-\tstruct idpf_mmio_reg mbx;\n-\tstruct idpf_mmio_reg rstat;\n \t/* Array of remaining LAN BAR regions */\n \tint num_lan_regs;\n \tstruct idpf_mmio_reg *lan_regs;\ndiff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c\nindex a4625638cf3f..3a9355d40c90 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf_dev.c\n+++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c\n@@ -16,7 +16,6 @@\n static void idpf_ctlq_reg_init(struct idpf_adapter *adapter,\n \t\t\t       struct idpf_ctlq_create_info *cq)\n {\n-\tresource_size_t mbx_start = adapter->dev_ops.static_reg_info[0].start;\n \tint i;\n \n \tfor (i = 0; i < IDPF_NUM_DFLT_MBX_Q; i++) {\n@@ -25,22 +24,22 @@ static void idpf_ctlq_reg_init(struct idpf_adapter *adapter,\n \t\tswitch (ccq->type) {\n \t\tcase IDPF_CTLQ_TYPE_MAILBOX_TX:\n \t\t\t/* set head and tail registers in our local struct */\n-\t\t\tccq->reg.head = PF_FW_ATQH - mbx_start;\n-\t\t\tccq->reg.tail = PF_FW_ATQT - mbx_start;\n-\t\t\tccq->reg.len = PF_FW_ATQLEN - mbx_start;\n-\t\t\tccq->reg.bah = PF_FW_ATQBAH - mbx_start;\n-\t\t\tccq->reg.bal = PF_FW_ATQBAL - mbx_start;\n+\t\t\tccq->reg.head = PF_FW_ATQH;\n+\t\t\tccq->reg.tail = PF_FW_ATQT;\n+\t\t\tccq->reg.len = PF_FW_ATQLEN;\n+\t\t\tccq->reg.bah = PF_FW_ATQBAH;\n+\t\t\tccq->reg.bal = PF_FW_ATQBAL;\n \t\t\tccq->reg.len_mask = PF_FW_ATQLEN_ATQLEN_M;\n \t\t\tccq->reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;\n \t\t\tccq->reg.head_mask = PF_FW_ATQH_ATQH_M;\n \t\t\tbreak;\n \t\tcase IDPF_CTLQ_TYPE_MAILBOX_RX:\n \t\t\t/* set head and tail registers in our local struct */\n-\t\t\tccq->reg.head = PF_FW_ARQH - mbx_start;\n-\t\t\tccq->reg.tail = PF_FW_ARQT - mbx_start;\n-\t\t\tccq->reg.len = PF_FW_ARQLEN - mbx_start;\n-\t\t\tccq->reg.bah = PF_FW_ARQBAH - mbx_start;\n-\t\t\tccq->reg.bal = PF_FW_ARQBAL - mbx_start;\n+\t\t\tccq->reg.head = PF_FW_ARQH;\n+\t\t\tccq->reg.tail = PF_FW_ARQT;\n+\t\t\tccq->reg.len = PF_FW_ARQLEN;\n+\t\t\tccq->reg.bah = PF_FW_ARQBAH;\n+\t\t\tccq->reg.bal = PF_FW_ARQBAL;\n \t\t\tccq->reg.len_mask = PF_FW_ARQLEN_ARQLEN_M;\n \t\t\tccq->reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;\n \t\t\tccq->reg.head_mask = PF_FW_ARQH_ARQH_M;\n@@ -57,13 +56,14 @@ static void idpf_ctlq_reg_init(struct idpf_adapter *adapter,\n  */\n static void idpf_mb_intr_reg_init(struct idpf_adapter *adapter)\n {\n+\tstruct libie_mmio_info *mmio = &adapter->ctlq_ctx.mmio_info;\n \tstruct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;\n \tu32 dyn_ctl = le32_to_cpu(adapter->caps.mailbox_dyn_ctl);\n \n-\tintr->dyn_ctl = idpf_get_reg_addr(adapter, dyn_ctl);\n+\tintr->dyn_ctl = libie_pci_get_mmio_addr(mmio, dyn_ctl);\n \tintr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;\n \tintr->dyn_ctl_itridx_m = PF_GLINT_DYN_CTL_ITR_INDX_M;\n-\tintr->icr_ena = idpf_get_reg_addr(adapter, PF_INT_DIR_OICR_ENA);\n+\tintr->icr_ena = libie_pci_get_mmio_addr(mmio, PF_INT_DIR_OICR_ENA);\n \tintr->icr_ena_ctlq_m = PF_INT_DIR_OICR_ENA_M;\n }\n \n@@ -78,6 +78,7 @@ static int idpf_intr_reg_init(struct idpf_vport *vport,\n \tstruct idpf_adapter *adapter = vport->adapter;\n \tu16 num_vecs = rsrc->num_q_vectors;\n \tstruct idpf_vec_regs *reg_vals;\n+\tstruct libie_mmio_info *mmio;\n \tint num_regs, i, err = 0;\n \tu32 rx_itr, tx_itr, val;\n \tu16 total_vecs;\n@@ -94,14 +95,17 @@ static int idpf_intr_reg_init(struct idpf_vport *vport,\n \t\tgoto free_reg_vals;\n \t}\n \n+\tmmio = &adapter->ctlq_ctx.mmio_info;\n+\n \tfor (i = 0; i < num_vecs; i++) {\n \t\tstruct idpf_q_vector *q_vector = 
&rsrc->q_vectors[i];\n \t\tu16 vec_id = rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC;\n \t\tstruct idpf_intr_reg *intr = &q_vector->intr_reg;\n+\t\tstruct idpf_vec_regs *reg = &reg_vals[vec_id];\n \t\tu32 spacing;\n \n-\t\tintr->dyn_ctl = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t  reg_vals[vec_id].dyn_ctl_reg);\n+\t\tintr->dyn_ctl =\tlibie_pci_get_mmio_addr(mmio,\n+\t\t\t\t\t\t\treg->dyn_ctl_reg);\n \t\tintr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;\n \t\tintr->dyn_ctl_intena_msk_m = PF_GLINT_DYN_CTL_INTENA_MSK_M;\n \t\tintr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S;\n@@ -111,22 +115,21 @@ static int idpf_intr_reg_init(struct idpf_vport *vport,\n \t\tintr->dyn_ctl_sw_itridx_ena_m =\n \t\t\tPF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M;\n \n-\t\tspacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,\n+\t\tspacing = IDPF_ITR_IDX_SPACING(reg->itrn_index_spacing,\n \t\t\t\t\t       IDPF_PF_ITR_IDX_SPACING);\n \t\trx_itr = PF_GLINT_ITR_ADDR(VIRTCHNL2_ITR_IDX_0,\n-\t\t\t\t\t   reg_vals[vec_id].itrn_reg,\n-\t\t\t\t\t   spacing);\n+\t\t\t\t\t   reg->itrn_reg, spacing);\n \t\ttx_itr = PF_GLINT_ITR_ADDR(VIRTCHNL2_ITR_IDX_1,\n-\t\t\t\t\t   reg_vals[vec_id].itrn_reg,\n-\t\t\t\t\t   spacing);\n-\t\tintr->rx_itr = idpf_get_reg_addr(adapter, rx_itr);\n-\t\tintr->tx_itr = idpf_get_reg_addr(adapter, tx_itr);\n+\t\t\t\t\t   reg->itrn_reg, spacing);\n+\t\tintr->rx_itr = libie_pci_get_mmio_addr(mmio, rx_itr);\n+\t\tintr->tx_itr = libie_pci_get_mmio_addr(mmio, tx_itr);\n \t}\n \n \t/* Data vector for NOIRQ queues */\n \n \tval = reg_vals[rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg;\n-\trsrc->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val);\n+\trsrc->noirq_dyn_ctl =\n+\t\tlibie_pci_get_mmio_addr(&adapter->ctlq_ctx.mmio_info, val);\n \n \tval = PF_GLINT_DYN_CTL_WB_ON_ITR_M | PF_GLINT_DYN_CTL_INTENA_MSK_M |\n \t      FIELD_PREP(PF_GLINT_DYN_CTL_ITR_INDX_M, IDPF_NO_ITR_UPDATE_IDX);\n@@ -144,7 +147,9 @@ static int idpf_intr_reg_init(struct idpf_vport *vport,\n  */\n static void idpf_reset_reg_init(struct idpf_adapter *adapter)\n {\n-\tadapter->reset_reg.rstat = idpf_get_rstat_reg_addr(adapter, PFGEN_RSTAT);\n+\tadapter->reset_reg.rstat =\n+\t\tlibie_pci_get_mmio_addr(&adapter->ctlq_ctx.mmio_info,\n+\t\t\t\t\tPFGEN_RSTAT);\n \tadapter->reset_reg.rstat_m = PFGEN_RSTAT_PFR_STATE_M;\n }\n \n@@ -156,11 +161,11 @@ static void idpf_reset_reg_init(struct idpf_adapter *adapter)\n static void idpf_trigger_reset(struct idpf_adapter *adapter,\n \t\t\t       enum idpf_flags __always_unused trig_cause)\n {\n-\tu32 reset_reg;\n+\tvoid __iomem *addr;\n \n-\treset_reg = readl(idpf_get_rstat_reg_addr(adapter, PFGEN_CTRL));\n-\twritel(reset_reg | PFGEN_CTRL_PFSWR,\n-\t       idpf_get_rstat_reg_addr(adapter, PFGEN_CTRL));\n+\taddr = libie_pci_get_mmio_addr(&adapter->ctlq_ctx.mmio_info,\n+\t\t\t\t       PFGEN_CTRL);\n+\twritel(readl(addr) | PFGEN_CTRL_PFSWR, addr);\n }\n \n /**\ndiff --git a/drivers/net/ethernet/intel/idpf/idpf_idc.c b/drivers/net/ethernet/intel/idpf/idpf_idc.c\nindex 7e20a07e98e5..c1b963f6bfad 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf_idc.c\n+++ b/drivers/net/ethernet/intel/idpf/idpf_idc.c\n@@ -410,9 +410,12 @@ idpf_idc_init_msix_data(struct idpf_adapter *adapter)\n int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter,\n \t\t\t       enum iidc_function_type ftype)\n {\n+\tstruct libie_mmio_info *mmio = &adapter->ctlq_ctx.mmio_info;\n \tstruct iidc_rdma_core_dev_info *cdev_info;\n \tstruct iidc_rdma_priv_dev_info *privd;\n-\tint err, i;\n+\tstruct libie_pci_mmio_region *mr;\n+\tsize_t 
num_mem_regions;\n+\tint err, i = 0;\n \n \tadapter->cdev_info = kzalloc(sizeof(*cdev_info), GFP_KERNEL);\n \tif (!adapter->cdev_info)\n@@ -430,8 +433,15 @@ int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter,\n \tcdev_info->rdma_protocol = IIDC_RDMA_PROTOCOL_ROCEV2;\n \tprivd->ftype = ftype;\n \n+\tnum_mem_regions = list_count_nodes(&mmio->mmio_list);\n+\tif (num_mem_regions <= IDPF_MMIO_REG_NUM_STATIC) {\n+\t\terr = -EINVAL;\n+\t\tgoto err_plug_aux_dev;\n+\t}\n+\n+\tnum_mem_regions -= IDPF_MMIO_REG_NUM_STATIC;\n \tprivd->mapped_mem_regions =\n-\t\tkcalloc(adapter->hw.num_lan_regs,\n+\t\tkcalloc(num_mem_regions,\n \t\t\tsizeof(struct iidc_rdma_lan_mapped_mem_region),\n \t\t\tGFP_KERNEL);\n \tif (!privd->mapped_mem_regions) {\n@@ -439,14 +449,22 @@ int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter,\n \t\tgoto err_plug_aux_dev;\n \t}\n \n-\tprivd->num_memory_regions = cpu_to_le16(adapter->hw.num_lan_regs);\n-\tfor (i = 0; i < adapter->hw.num_lan_regs; i++) {\n-\t\tprivd->mapped_mem_regions[i].region_addr =\n-\t\t\tadapter->hw.lan_regs[i].vaddr;\n-\t\tprivd->mapped_mem_regions[i].size =\n-\t\t\tcpu_to_le64(adapter->hw.lan_regs[i].addr_len);\n-\t\tprivd->mapped_mem_regions[i].start_offset =\n-\t\t\tcpu_to_le64(adapter->hw.lan_regs[i].addr_start);\n+\tprivd->num_memory_regions = cpu_to_le16(num_mem_regions);\n+\tlist_for_each_entry(mr, &mmio->mmio_list, list) {\n+\t\tstruct resource *static_regs = adapter->dev_ops.static_reg_info;\n+\t\tbool is_static = false;\n+\n+\t\tfor (uint j = 0; j < IDPF_MMIO_REG_NUM_STATIC; j++)\n+\t\t\tif (mr->offset == static_regs[j].start)\n+\t\t\t\tis_static = true;\n+\n+\t\tif (is_static)\n+\t\t\tcontinue;\n+\n+\t\tprivd->mapped_mem_regions[i].region_addr = mr->addr;\n+\t\tprivd->mapped_mem_regions[i].size = cpu_to_le64(mr->size);\n+\t\tprivd->mapped_mem_regions[i++].start_offset =\n+\t\t\t\t\t\tcpu_to_le64(mr->offset);\n \t}\n \n \tidpf_idc_init_msix_data(adapter);\ndiff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c\nindex dca7861a0a2a..e15b1e8effc8 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c\n+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c\n@@ -1845,15 +1845,14 @@ void idpf_deinit_task(struct idpf_adapter *adapter)\n \n /**\n  * idpf_check_reset_complete - check that reset is complete\n- * @hw: pointer to hw struct\n+ * @adapter: adapter to check\n  * @reset_reg: struct with reset registers\n  *\n  * Returns 0 if device is ready to use, or -EBUSY if it's in reset.\n  **/\n-static int idpf_check_reset_complete(struct idpf_hw *hw,\n+static int idpf_check_reset_complete(struct idpf_adapter *adapter,\n \t\t\t\t     struct idpf_reset_reg *reset_reg)\n {\n-\tstruct idpf_adapter *adapter = hw->back;\n \tint i;\n \n \tfor (i = 0; i < 2000; i++) {\n@@ -1916,7 +1915,7 @@ static void idpf_init_hard_reset(struct idpf_adapter *adapter)\n \t}\n \n \t/* Wait for reset to complete */\n-\terr = idpf_check_reset_complete(&adapter->hw, &adapter->reset_reg);\n+\terr = idpf_check_reset_complete(adapter, &adapter->reset_reg);\n \tif (err) {\n \t\tdev_err(dev, \"The driver was unable to contact the device's firmware. Check that the FW is running. 
Driver state= 0x%x\\n\",\n \t\t\tadapter->state);\ndiff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c\nindex de5d722cc21d..9da02ce42605 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf_main.c\n+++ b/drivers/net/ethernet/intel/idpf/idpf_main.c\n@@ -15,6 +15,8 @@\n \n MODULE_DESCRIPTION(DRV_SUMMARY);\n MODULE_IMPORT_NS(\"LIBETH\");\n+MODULE_IMPORT_NS(\"LIBIE_CP\");\n+MODULE_IMPORT_NS(\"LIBIE_PCI\");\n MODULE_IMPORT_NS(\"LIBETH_XDP\");\n MODULE_LICENSE(\"GPL\");\n \n@@ -90,6 +92,15 @@ static int idpf_dev_init(struct idpf_adapter *adapter,\n \treturn 0;\n }\n \n+/**\n+ * idpf_decfg_device - deconfigure device and device specific resources\n+ * @adapter: driver specific private structure\n+ */\n+static void idpf_decfg_device(struct idpf_adapter *adapter)\n+{\n+\tlibie_pci_unmap_all_mmio_regions(&adapter->ctlq_ctx.mmio_info);\n+}\n+\n /**\n  * idpf_remove - Device removal routine\n  * @pdev: PCI device information struct\n@@ -159,6 +170,7 @@ static void idpf_remove(struct pci_dev *pdev)\n \tmutex_destroy(&adapter->queue_lock);\n \tmutex_destroy(&adapter->vc_buf_lock);\n \n+\tidpf_decfg_device(adapter);\n \tpci_set_drvdata(pdev, NULL);\n \tkfree(adapter);\n }\n@@ -181,46 +193,52 @@ static void idpf_shutdown(struct pci_dev *pdev)\n }\n \n /**\n- * idpf_cfg_hw - Initialize HW struct\n- * @adapter: adapter to setup hw struct for\n+ * idpf_cfg_device - configure device and device specific resources\n+ * @adapter: driver specific private structure\n  *\n- * Returns 0 on success, negative on failure\n+ * Return: %0 on success, -%errno on failure.\n  */\n-static int idpf_cfg_hw(struct idpf_adapter *adapter)\n+static int idpf_cfg_device(struct idpf_adapter *adapter)\n {\n-\tresource_size_t res_start, mbx_start, rstat_start;\n+\tstruct libie_mmio_info *mmio_info = &adapter->ctlq_ctx.mmio_info;\n \tstruct pci_dev *pdev = adapter->pdev;\n-\tstruct idpf_hw *hw = &adapter->hw;\n-\tstruct device *dev = &pdev->dev;\n-\tlong len;\n+\tstruct resource *region;\n+\tbool mapped = false;\n+\tint err;\n \n-\tres_start = pci_resource_start(pdev, 0);\n+\terr = libie_pci_init_dev(pdev);\n+\tif (err)\n+\t\treturn err;\n \n-\t/* Map mailbox space for virtchnl communication */\n-\tmbx_start = res_start + adapter->dev_ops.static_reg_info[0].start;\n-\tlen = resource_size(&adapter->dev_ops.static_reg_info[0]);\n-\thw->mbx.vaddr = devm_ioremap(dev, mbx_start, len);\n-\tif (!hw->mbx.vaddr) {\n-\t\tpci_err(pdev, \"failed to allocate BAR0 mbx region\\n\");\n+\tmmio_info->pdev = pdev;\n+\tINIT_LIST_HEAD(&mmio_info->mmio_list);\n \n+\t/* Map mailbox space for virtchnl communication */\n+\tregion = &adapter->dev_ops.static_reg_info[0];\n+\tmapped = libie_pci_map_mmio_region(mmio_info, region->start,\n+\t\t\t\t\t   resource_size(region));\n+\tif (!mapped) {\n+\t\tpci_err(pdev, \"failed to map BAR0 mbx region\\n\");\n \t\treturn -ENOMEM;\n \t}\n-\thw->mbx.addr_start = adapter->dev_ops.static_reg_info[0].start;\n-\thw->mbx.addr_len = len;\n \n \t/* Map rstat space for resets */\n-\trstat_start = res_start + adapter->dev_ops.static_reg_info[1].start;\n-\tlen = resource_size(&adapter->dev_ops.static_reg_info[1]);\n-\thw->rstat.vaddr = devm_ioremap(dev, rstat_start, len);\n-\tif (!hw->rstat.vaddr) {\n-\t\tpci_err(pdev, \"failed to allocate BAR0 rstat region\\n\");\n+\tregion = &adapter->dev_ops.static_reg_info[1];\n \n+\tmapped = libie_pci_map_mmio_region(mmio_info, region->start,\n+\t\t\t\t\t   resource_size(region));\n+\tif (!mapped) {\n+\t\tpci_err(pdev, \"failed to map BAR0 rstat 
region\\n\");\n+\t\tlibie_pci_unmap_all_mmio_regions(mmio_info);\n \t\treturn -ENOMEM;\n \t}\n-\thw->rstat.addr_start = adapter->dev_ops.static_reg_info[1].start;\n-\thw->rstat.addr_len = len;\n \n-\thw->back = adapter;\n+\terr = pci_enable_ptm(pdev, NULL);\n+\tif (err)\n+\t\tpci_dbg(pdev, \"PCIe PTM is not supported by PCIe bus/controller\\n\");\n+\n+\tpci_set_drvdata(pdev, adapter);\n+\tadapter->hw.back = adapter;\n \n \treturn 0;\n }\n@@ -246,32 +264,21 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)\n \tadapter->req_rx_splitq = true;\n \n \tadapter->pdev = pdev;\n-\terr = pcim_enable_device(pdev);\n-\tif (err)\n-\t\tgoto err_free;\n \n-\terr = pcim_request_region(pdev, 0, pci_name(pdev));\n+\terr = idpf_dev_init(adapter, ent);\n \tif (err) {\n-\t\tpci_err(pdev, \"pcim_request_region failed %pe\\n\", ERR_PTR(err));\n-\n+\t\tdev_err(&pdev->dev, \"Unexpected dev ID 0x%x in idpf probe\\n\",\n+\t\t\tent->device);\n \t\tgoto err_free;\n \t}\n \n-\terr = pci_enable_ptm(pdev, NULL);\n-\tif (err)\n-\t\tpci_dbg(pdev, \"PCIe PTM is not supported by PCIe bus/controller\\n\");\n-\n-\t/* set up for high or low dma */\n-\terr = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));\n+\terr = idpf_cfg_device(adapter);\n \tif (err) {\n-\t\tpci_err(pdev, \"DMA configuration failed: %pe\\n\", ERR_PTR(err));\n-\n+\t\tpci_err(pdev, \"Failed to configure device specific resources: %pe\\n\",\n+\t\t\tERR_PTR(err));\n \t\tgoto err_free;\n \t}\n \n-\tpci_set_master(pdev);\n-\tpci_set_drvdata(pdev, adapter);\n-\n \tadapter->init_wq = alloc_workqueue(\"%s-%s-init\",\n \t\t\t\t\t   WQ_UNBOUND | WQ_MEM_RECLAIM, 0,\n \t\t\t\t\t   dev_driver_string(dev),\n@@ -279,7 +286,7 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)\n \tif (!adapter->init_wq) {\n \t\tdev_err(dev, \"Failed to allocate init workqueue\\n\");\n \t\terr = -ENOMEM;\n-\t\tgoto err_free;\n+\t\tgoto err_init_wq;\n \t}\n \n \tadapter->serv_wq = alloc_workqueue(\"%s-%s-service\",\n@@ -324,20 +331,6 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)\n \t/* setup msglvl */\n \tadapter->msg_enable = netif_msg_init(-1, IDPF_AVAIL_NETIF_M);\n \n-\terr = idpf_dev_init(adapter, ent);\n-\tif (err) {\n-\t\tdev_err(&pdev->dev, \"Unexpected dev ID 0x%x in idpf probe\\n\",\n-\t\t\tent->device);\n-\t\tgoto destroy_vc_event_wq;\n-\t}\n-\n-\terr = idpf_cfg_hw(adapter);\n-\tif (err) {\n-\t\tdev_err(dev, \"Failed to configure HW structure for adapter: %d\\n\",\n-\t\t\terr);\n-\t\tgoto destroy_vc_event_wq;\n-\t}\n-\n \tmutex_init(&adapter->vport_ctrl_lock);\n \tmutex_init(&adapter->vector_lock);\n \tmutex_init(&adapter->queue_lock);\n@@ -356,8 +349,6 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)\n \n \treturn 0;\n \n-destroy_vc_event_wq:\n-\tdestroy_workqueue(adapter->vc_event_wq);\n err_vc_event_wq_alloc:\n \tdestroy_workqueue(adapter->stats_wq);\n err_stats_wq_alloc:\n@@ -366,6 +357,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)\n \tdestroy_workqueue(adapter->serv_wq);\n err_serv_wq_alloc:\n \tdestroy_workqueue(adapter->init_wq);\n+err_init_wq:\n+\tidpf_decfg_device(adapter);\n err_free:\n \tkfree(adapter);\n \treturn err;\ndiff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c\nindex 7527b967e2e7..b7aa9538435e 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c\n+++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c\n@@ -56,13 +56,14 @@ static void 
idpf_vf_ctlq_reg_init(struct idpf_adapter *adapter,\n  */\n static void idpf_vf_mb_intr_reg_init(struct idpf_adapter *adapter)\n {\n+\tstruct libie_mmio_info *mmio = &adapter->ctlq_ctx.mmio_info;\n \tstruct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;\n \tu32 dyn_ctl = le32_to_cpu(adapter->caps.mailbox_dyn_ctl);\n \n-\tintr->dyn_ctl = idpf_get_reg_addr(adapter, dyn_ctl);\n+\tintr->dyn_ctl = libie_pci_get_mmio_addr(mmio, dyn_ctl);\n \tintr->dyn_ctl_intena_m = VF_INT_DYN_CTL0_INTENA_M;\n \tintr->dyn_ctl_itridx_m = VF_INT_DYN_CTL0_ITR_INDX_M;\n-\tintr->icr_ena = idpf_get_reg_addr(adapter, VF_INT_ICR0_ENA1);\n+\tintr->icr_ena = libie_pci_get_mmio_addr(mmio, VF_INT_ICR0_ENA1);\n \tintr->icr_ena_ctlq_m = VF_INT_ICR0_ENA1_ADMINQ_M;\n }\n \n@@ -77,6 +78,7 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport,\n \tstruct idpf_adapter *adapter = vport->adapter;\n \tu16 num_vecs = rsrc->num_q_vectors;\n \tstruct idpf_vec_regs *reg_vals;\n+\tstruct libie_mmio_info *mmio;\n \tint num_regs, i, err = 0;\n \tu32 rx_itr, tx_itr, val;\n \tu16 total_vecs;\n@@ -93,14 +95,17 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport,\n \t\tgoto free_reg_vals;\n \t}\n \n+\tmmio = &adapter->ctlq_ctx.mmio_info;\n+\n \tfor (i = 0; i < num_vecs; i++) {\n \t\tstruct idpf_q_vector *q_vector = &rsrc->q_vectors[i];\n \t\tu16 vec_id = rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC;\n \t\tstruct idpf_intr_reg *intr = &q_vector->intr_reg;\n+\t\tstruct idpf_vec_regs *reg = &reg_vals[vec_id];\n \t\tu32 spacing;\n \n-\t\tintr->dyn_ctl = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t  reg_vals[vec_id].dyn_ctl_reg);\n+\t\tintr->dyn_ctl =\tlibie_pci_get_mmio_addr(mmio,\n+\t\t\t\t\t\t\treg->dyn_ctl_reg);\n \t\tintr->dyn_ctl_intena_m = VF_INT_DYN_CTLN_INTENA_M;\n \t\tintr->dyn_ctl_intena_msk_m = VF_INT_DYN_CTLN_INTENA_MSK_M;\n \t\tintr->dyn_ctl_itridx_s = VF_INT_DYN_CTLN_ITR_INDX_S;\n@@ -110,22 +115,21 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport,\n \t\tintr->dyn_ctl_sw_itridx_ena_m =\n \t\t\tVF_INT_DYN_CTLN_SW_ITR_INDX_ENA_M;\n \n-\t\tspacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,\n+\t\tspacing = IDPF_ITR_IDX_SPACING(reg->itrn_index_spacing,\n \t\t\t\t\t       IDPF_VF_ITR_IDX_SPACING);\n \t\trx_itr = VF_INT_ITRN_ADDR(VIRTCHNL2_ITR_IDX_0,\n-\t\t\t\t\t  reg_vals[vec_id].itrn_reg,\n-\t\t\t\t\t  spacing);\n+\t\t\t\t\t  reg->itrn_reg, spacing);\n \t\ttx_itr = VF_INT_ITRN_ADDR(VIRTCHNL2_ITR_IDX_1,\n-\t\t\t\t\t  reg_vals[vec_id].itrn_reg,\n-\t\t\t\t\t  spacing);\n-\t\tintr->rx_itr = idpf_get_reg_addr(adapter, rx_itr);\n-\t\tintr->tx_itr = idpf_get_reg_addr(adapter, tx_itr);\n+\t\t\t\t\t  reg->itrn_reg, spacing);\n+\t\tintr->rx_itr = libie_pci_get_mmio_addr(mmio, rx_itr);\n+\t\tintr->tx_itr = libie_pci_get_mmio_addr(mmio, tx_itr);\n \t}\n \n \t/* Data vector for NOIRQ queues */\n \n \tval = reg_vals[rsrc->q_vector_idxs[i] - IDPF_MBX_Q_VEC].dyn_ctl_reg;\n-\trsrc->noirq_dyn_ctl = idpf_get_reg_addr(adapter, val);\n+\trsrc->noirq_dyn_ctl =\n+\t\tlibie_pci_get_mmio_addr(&adapter->ctlq_ctx.mmio_info, val);\n \n \tval = VF_INT_DYN_CTLN_WB_ON_ITR_M | VF_INT_DYN_CTLN_INTENA_MSK_M |\n \t      FIELD_PREP(VF_INT_DYN_CTLN_ITR_INDX_M, IDPF_NO_ITR_UPDATE_IDX);\n@@ -143,7 +147,9 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport,\n  */\n static void idpf_vf_reset_reg_init(struct idpf_adapter *adapter)\n {\n-\tadapter->reset_reg.rstat = idpf_get_rstat_reg_addr(adapter, VFGEN_RSTAT);\n+\tadapter->reset_reg.rstat =\n+\t\tlibie_pci_get_mmio_addr(&adapter->ctlq_ctx.mmio_info,\n+\t\t\t\t\tVFGEN_RSTAT);\n 
\tadapter->reset_reg.rstat_m = VFGEN_RSTAT_VFR_STATE_M;\n }\n \ndiff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c\nindex eb834f29ff77..278247e456f4 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c\n+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c\n@@ -2,6 +2,7 @@\n /* Copyright (C) 2023 Intel Corporation */\n \n #include <linux/export.h>\n+#include <linux/intel/libie/pci.h>\n #include <net/libeth/rx.h>\n \n #include \"idpf.h\"\n@@ -1017,12 +1018,46 @@ static int idpf_send_get_caps_msg(struct idpf_adapter *adapter)\n }\n \n /**\n- * idpf_send_get_lan_memory_regions - Send virtchnl get LAN memory regions msg\n+ * idpf_mmio_region_non_static - Check if region is not static\n+ * @mmio_info: PCI resources info\n+ * @reg: region to check\n+ *\n+ * Return: %true if region can be received though virtchnl command,\n+ *\t   %false if region is related to mailbox or resetting\n+ */\n+static bool idpf_mmio_region_non_static(struct libie_mmio_info *mmio_info,\n+\t\t\t\t\tstruct libie_pci_mmio_region *reg)\n+{\n+\tstruct idpf_adapter *adapter =\n+\t\tcontainer_of(mmio_info, struct idpf_adapter,\n+\t\t\t     ctlq_ctx.mmio_info);\n+\n+\tfor (uint i = 0; i < IDPF_MMIO_REG_NUM_STATIC; i++) {\n+\t\tif (reg->bar_idx == 0 &&\n+\t\t    reg->offset == adapter->dev_ops.static_reg_info[i].start)\n+\t\t\treturn false;\n+\t}\n+\n+\treturn true;\n+}\n+\n+/**\n+ * idpf_decfg_lan_memory_regions - Unmap non-static memory regions\n+ * @adapter: Driver specific private structure\n+ */\n+static void idpf_decfg_lan_memory_regions(struct idpf_adapter *adapter)\n+{\n+\tlibie_pci_unmap_fltr_regs(&adapter->ctlq_ctx.mmio_info,\n+\t\t\t\t  idpf_mmio_region_non_static);\n+}\n+\n+/**\n+ * idpf_cfg_lan_memory_regions - Send virtchnl get LAN memory regions msg\n  * @adapter: Driver specific private struct\n  *\n  * Return: 0 on success or error code on failure.\n  */\n-static int idpf_send_get_lan_memory_regions(struct idpf_adapter *adapter)\n+static int idpf_cfg_lan_memory_regions(struct idpf_adapter *adapter)\n {\n \tstruct virtchnl2_get_lan_memory_regions *rcvd_regions __free(kfree);\n \tstruct idpf_vc_xn_params xn_params = {\n@@ -1031,7 +1066,6 @@ static int idpf_send_get_lan_memory_regions(struct idpf_adapter *adapter)\n \t\t.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC,\n \t};\n \tint num_regions, size;\n-\tstruct idpf_hw *hw;\n \tssize_t reply_sz;\n \tint err = 0;\n \n@@ -1052,88 +1086,51 @@ static int idpf_send_get_lan_memory_regions(struct idpf_adapter *adapter)\n \tif (size > IDPF_CTLQ_MAX_BUF_LEN)\n \t\treturn -EINVAL;\n \n-\thw = &adapter->hw;\n-\thw->lan_regs = kcalloc(num_regions, sizeof(*hw->lan_regs), GFP_KERNEL);\n-\tif (!hw->lan_regs)\n-\t\treturn -ENOMEM;\n-\n \tfor (int i = 0; i < num_regions; i++) {\n-\t\thw->lan_regs[i].addr_len =\n-\t\t\tle64_to_cpu(rcvd_regions->mem_reg[i].size);\n-\t\thw->lan_regs[i].addr_start =\n-\t\t\tle64_to_cpu(rcvd_regions->mem_reg[i].start_offset);\n+\t\tstruct libie_mmio_info *mmio = &adapter->ctlq_ctx.mmio_info;\n+\t\tresource_size_t offset, len;\n+\n+\t\toffset = le64_to_cpu(rcvd_regions->mem_reg[i].start_offset);\n+\t\tlen = le64_to_cpu(rcvd_regions->mem_reg[i].size);\n+\t\tif (!libie_pci_map_mmio_region(mmio, offset, len)) {\n+\t\t\tidpf_decfg_lan_memory_regions(adapter);\n+\t\t\treturn -EIO;\n+\t\t}\n \t}\n-\thw->num_lan_regs = num_regions;\n \n \treturn err;\n }\n \n /**\n- * idpf_calc_remaining_mmio_regs - calculate MMIO regions outside mbx and rstat\n+ * idpf_map_remaining_mmio_regs - map MMIO 
regions outside mbx and rstat\n  * @adapter: Driver specific private structure\n  *\n- * Called when idpf_send_get_lan_memory_regions is not supported. This will\n+ * Called when idpf_cfg_lan_memory_regions is not supported. This will\n  * calculate the offsets and sizes for the regions before, in between, and\n  * after the mailbox and rstat MMIO mappings.\n  *\n  * Return: 0 on success or error code on failure.\n  */\n-static int idpf_calc_remaining_mmio_regs(struct idpf_adapter *adapter)\n+static int idpf_map_remaining_mmio_regs(struct idpf_adapter *adapter)\n {\n \tstruct resource *rstat_reg = &adapter->dev_ops.static_reg_info[1];\n \tstruct resource *mbx_reg = &adapter->dev_ops.static_reg_info[0];\n-\tstruct idpf_hw *hw = &adapter->hw;\n-\n-\thw->num_lan_regs = IDPF_MMIO_MAP_FALLBACK_MAX_REMAINING;\n-\thw->lan_regs = kcalloc(hw->num_lan_regs, sizeof(*hw->lan_regs),\n-\t\t\t       GFP_KERNEL);\n-\tif (!hw->lan_regs)\n-\t\treturn -ENOMEM;\n+\tstruct libie_mmio_info *mmio = &adapter->ctlq_ctx.mmio_info;\n+\tresource_size_t reg_start;\n \n \t/* Region preceding mailbox */\n-\thw->lan_regs[0].addr_start = 0;\n-\thw->lan_regs[0].addr_len = mbx_reg->start;\n-\t/* Region between mailbox and rstat */\n-\thw->lan_regs[1].addr_start = mbx_reg->end + 1;\n-\thw->lan_regs[1].addr_len = rstat_reg->start -\n-\t\t\t\t\thw->lan_regs[1].addr_start;\n-\t/* Region after rstat */\n-\thw->lan_regs[2].addr_start = rstat_reg->end + 1;\n-\thw->lan_regs[2].addr_len = pci_resource_len(adapter->pdev, 0) -\n-\t\t\t\t\thw->lan_regs[2].addr_start;\n-\n-\treturn 0;\n-}\n+\tlibie_pci_map_mmio_region(mmio, 0, mbx_reg->start);\n \n-/**\n- * idpf_map_lan_mmio_regs - map remaining LAN BAR regions\n- * @adapter: Driver specific private structure\n- *\n- * Return: 0 on success or error code on failure.\n- */\n-static int idpf_map_lan_mmio_regs(struct idpf_adapter *adapter)\n-{\n-\tstruct pci_dev *pdev = adapter->pdev;\n-\tstruct idpf_hw *hw = &adapter->hw;\n-\tresource_size_t res_start;\n-\n-\tres_start = pci_resource_start(pdev, 0);\n-\n-\tfor (int i = 0; i < hw->num_lan_regs; i++) {\n-\t\tresource_size_t start;\n-\t\tlong len;\n-\n-\t\tlen = hw->lan_regs[i].addr_len;\n-\t\tif (!len)\n-\t\t\tcontinue;\n-\t\tstart = hw->lan_regs[i].addr_start + res_start;\n+\t/* Region between mailbox and rstat */\n+\treg_start = mbx_reg->end + 1;\n+\tlibie_pci_map_mmio_region(mmio, reg_start,\n+\t\t\t\t  rstat_reg->start - reg_start);\n \n-\t\thw->lan_regs[i].vaddr = devm_ioremap(&pdev->dev, start, len);\n-\t\tif (!hw->lan_regs[i].vaddr) {\n-\t\t\tpci_err(pdev, \"failed to allocate BAR0 region\\n\");\n-\t\t\treturn -ENOMEM;\n-\t\t}\n-\t}\n+\t/* Region after rstat */\n+\treg_start = rstat_reg->end + 1;\n+\tlibie_pci_map_mmio_region(mmio, reg_start,\n+\t\t\t\t  pci_resource_len(adapter->pdev, 0) -\n+\t\t\t\t  reg_start);\n \n \treturn 0;\n }\n@@ -1404,7 +1401,7 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport,\n \t\t\t\t struct idpf_q_vec_rsrc *rsrc, u32 *reg_vals,\n \t\t\t\t int num_regs, u32 q_type)\n {\n-\tstruct idpf_adapter *adapter = vport->adapter;\n+\tstruct libie_mmio_info *mmio = &vport->adapter->ctlq_ctx.mmio_info;\n \tint i, j, k = 0;\n \n \tswitch (q_type) {\n@@ -1414,7 +1411,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport,\n \n \t\t\tfor (j = 0; j < tx_qgrp->num_txq && k < num_regs; j++, k++)\n \t\t\t\ttx_qgrp->txqs[j]->tail =\n-\t\t\t\t\tidpf_get_reg_addr(adapter, reg_vals[k]);\n+\t\t\t\t\tlibie_pci_get_mmio_addr(mmio,\n+\t\t\t\t\t\t\t\treg_vals[k]);\n \t\t}\n \t\tbreak;\n \tcase 
VIRTCHNL2_QUEUE_TYPE_RX:\n@@ -1426,8 +1424,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport,\n \t\t\t\tstruct idpf_rx_queue *q;\n \n \t\t\t\tq = rx_qgrp->singleq.rxqs[j];\n-\t\t\t\tq->tail = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t    reg_vals[k]);\n+\t\t\t\tq->tail = libie_pci_get_mmio_addr(mmio,\n+\t\t\t\t\t\t\t\t  reg_vals[k]);\n \t\t\t}\n \t\t}\n \t\tbreak;\n@@ -1440,8 +1438,8 @@ static int __idpf_queue_reg_init(struct idpf_vport *vport,\n \t\t\t\tstruct idpf_buf_queue *q;\n \n \t\t\t\tq = &rx_qgrp->splitq.bufq_sets[j].bufq;\n-\t\t\t\tq->tail = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t    reg_vals[k]);\n+\t\t\t\tq->tail = libie_pci_get_mmio_addr(mmio,\n+\t\t\t\t\t\t\t\t  reg_vals[k]);\n \t\t\t}\n \t\t}\n \t\tbreak;\n@@ -3505,29 +3503,22 @@ int idpf_vc_core_init(struct idpf_adapter *adapter)\n \t}\n \n \tif (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_LAN_MEMORY_REGIONS)) {\n-\t\terr = idpf_send_get_lan_memory_regions(adapter);\n+\t\terr = idpf_cfg_lan_memory_regions(adapter);\n \t\tif (err) {\n-\t\t\tdev_err(&adapter->pdev->dev, \"Failed to get LAN memory regions: %d\\n\",\n+\t\t\tdev_err(&adapter->pdev->dev, \"Failed to configure LAN memory regions: %d\\n\",\n \t\t\t\terr);\n \t\t\treturn -EINVAL;\n \t\t}\n \t} else {\n \t\t/* Fallback to mapping the remaining regions of the entire BAR */\n-\t\terr = idpf_calc_remaining_mmio_regs(adapter);\n+\t\terr = idpf_map_remaining_mmio_regs(adapter);\n \t\tif (err) {\n-\t\t\tdev_err(&adapter->pdev->dev, \"Failed to allocate BAR0 region(s): %d\\n\",\n+\t\t\tdev_err(&adapter->pdev->dev, \"Failed to configure BAR0 region(s): %d\\n\",\n \t\t\t\terr);\n \t\t\treturn -ENOMEM;\n \t\t}\n \t}\n \n-\terr = idpf_map_lan_mmio_regs(adapter);\n-\tif (err) {\n-\t\tdev_err(&adapter->pdev->dev, \"Failed to map BAR0 region(s): %d\\n\",\n-\t\t\terr);\n-\t\treturn -ENOMEM;\n-\t}\n-\n \tpci_sriov_set_totalvfs(adapter->pdev, idpf_get_max_vfs(adapter));\n \tnum_max_vports = idpf_get_max_vports(adapter);\n \tadapter->max_vports = num_max_vports;\n@@ -3634,7 +3625,6 @@ int idpf_vc_core_init(struct idpf_adapter *adapter)\n  */\n void idpf_vc_core_deinit(struct idpf_adapter *adapter)\n {\n-\tstruct idpf_hw *hw = &adapter->hw;\n \tbool remove_in_prog;\n \n \tif (!test_bit(IDPF_VC_CORE_INIT, adapter->flags))\n@@ -3659,12 +3649,10 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter)\n \n \tidpf_vport_params_buf_rel(adapter);\n \n-\tkfree(hw->lan_regs);\n-\thw->lan_regs = NULL;\n-\n \tkfree(adapter->vports);\n \tadapter->vports = NULL;\n \n+\tidpf_decfg_lan_memory_regions(adapter);\n \tclear_bit(IDPF_VC_CORE_INIT, adapter->flags);\n }\n \ndiff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl_ptp.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl_ptp.c\nindex 61cedb6f2854..82f26fc7bc08 100644\n--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl_ptp.c\n+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl_ptp.c\n@@ -31,6 +31,7 @@ int idpf_ptp_get_caps(struct idpf_adapter *adapter)\n \t\t.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC,\n \t};\n \tstruct virtchnl2_ptp_cross_time_reg_offsets cross_tstamp_offsets;\n+\tstruct libie_mmio_info *mmio = &adapter->ctlq_ctx.mmio_info;\n \tstruct virtchnl2_ptp_clk_adj_reg_offsets clk_adj_offsets;\n \tstruct virtchnl2_ptp_clk_reg_offsets clock_offsets;\n \tstruct idpf_ptp_secondary_mbx *scnd_mbx;\n@@ -77,19 +78,20 @@ int idpf_ptp_get_caps(struct idpf_adapter *adapter)\n \tclock_offsets = recv_ptp_caps_msg->clk_offsets;\n \n \ttemp_offset = 
le32_to_cpu(clock_offsets.dev_clk_ns_l);\n-\tptp->dev_clk_regs.dev_clk_ns_l = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t   temp_offset);\n+\tptp->dev_clk_regs.dev_clk_ns_l =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clock_offsets.dev_clk_ns_h);\n-\tptp->dev_clk_regs.dev_clk_ns_h = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t   temp_offset);\n+\tptp->dev_clk_regs.dev_clk_ns_h =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clock_offsets.phy_clk_ns_l);\n-\tptp->dev_clk_regs.phy_clk_ns_l = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t   temp_offset);\n+\tptp->dev_clk_regs.phy_clk_ns_l =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clock_offsets.phy_clk_ns_h);\n-\tptp->dev_clk_regs.phy_clk_ns_h = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t   temp_offset);\n+\tptp->dev_clk_regs.phy_clk_ns_h =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clock_offsets.cmd_sync_trigger);\n-\tptp->dev_clk_regs.cmd_sync = idpf_get_reg_addr(adapter, temp_offset);\n+\tptp->dev_clk_regs.cmd_sync =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \n cross_tstamp:\n \taccess_type = ptp->get_cross_tstamp_access;\n@@ -99,13 +101,14 @@ int idpf_ptp_get_caps(struct idpf_adapter *adapter)\n \tcross_tstamp_offsets = recv_ptp_caps_msg->cross_time_offsets;\n \n \ttemp_offset = le32_to_cpu(cross_tstamp_offsets.sys_time_ns_l);\n-\tptp->dev_clk_regs.sys_time_ns_l = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t    temp_offset);\n+\tptp->dev_clk_regs.sys_time_ns_l =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(cross_tstamp_offsets.sys_time_ns_h);\n-\tptp->dev_clk_regs.sys_time_ns_h = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t    temp_offset);\n+\tptp->dev_clk_regs.sys_time_ns_h =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(cross_tstamp_offsets.cmd_sync_trigger);\n-\tptp->dev_clk_regs.cmd_sync = idpf_get_reg_addr(adapter, temp_offset);\n+\tptp->dev_clk_regs.cmd_sync =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \n discipline_clock:\n \taccess_type = ptp->adj_dev_clk_time_access;\n@@ -116,29 +119,32 @@ int idpf_ptp_get_caps(struct idpf_adapter *adapter)\n \n \t/* Device clock offsets */\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.dev_clk_cmd_type);\n-\tptp->dev_clk_regs.cmd = idpf_get_reg_addr(adapter, temp_offset);\n+\tptp->dev_clk_regs.cmd = libie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.dev_clk_incval_l);\n-\tptp->dev_clk_regs.incval_l = idpf_get_reg_addr(adapter, temp_offset);\n+\tptp->dev_clk_regs.incval_l = libie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.dev_clk_incval_h);\n-\tptp->dev_clk_regs.incval_h = idpf_get_reg_addr(adapter, temp_offset);\n+\tptp->dev_clk_regs.incval_h = libie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.dev_clk_shadj_l);\n-\tptp->dev_clk_regs.shadj_l = idpf_get_reg_addr(adapter, temp_offset);\n+\tptp->dev_clk_regs.shadj_l = libie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.dev_clk_shadj_h);\n-\tptp->dev_clk_regs.shadj_h = idpf_get_reg_addr(adapter, temp_offset);\n+\tptp->dev_clk_regs.shadj_h = libie_pci_get_mmio_addr(mmio, temp_offset);\n \n \t/* PHY clock offsets */\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.phy_clk_cmd_type);\n-\tptp->dev_clk_regs.phy_cmd = idpf_get_reg_addr(adapter, 
temp_offset);\n+\tptp->dev_clk_regs.phy_cmd =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.phy_clk_incval_l);\n-\tptp->dev_clk_regs.phy_incval_l = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t   temp_offset);\n+\tptp->dev_clk_regs.phy_incval_l =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.phy_clk_incval_h);\n-\tptp->dev_clk_regs.phy_incval_h = idpf_get_reg_addr(adapter,\n-\t\t\t\t\t\t\t   temp_offset);\n+\tptp->dev_clk_regs.phy_incval_h =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.phy_clk_shadj_l);\n-\tptp->dev_clk_regs.phy_shadj_l = idpf_get_reg_addr(adapter, temp_offset);\n+\tptp->dev_clk_regs.phy_shadj_l =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \ttemp_offset = le32_to_cpu(clk_adj_offsets.phy_clk_shadj_h);\n-\tptp->dev_clk_regs.phy_shadj_h = idpf_get_reg_addr(adapter, temp_offset);\n+\tptp->dev_clk_regs.phy_shadj_h =\n+\t\tlibie_pci_get_mmio_addr(mmio, temp_offset);\n \n \treturn 0;\n }\n",
    "prefixes": [
        "iwl-next",
        "v5",
        "08/15"
    ]
}
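
As a follow-on sketch (assuming anonymous read access and a local git checkout), the "mbox" and "checks" URLs in the response above can be used directly; the local filename is arbitrary.

    import subprocess
    import requests

    patch = requests.get("http://patchwork.ozlabs.org/api/patches/2165663/").json()

    # Download the raw patch email and apply it to the current git tree.
    with open("patch.mbox", "wb") as f:
        f.write(requests.get(patch["mbox"]).content)
    subprocess.run(["git", "am", "patch.mbox"], check=True)

    # Per-patch CI results live behind the "checks" URL.
    for check in requests.get(patch["checks"]).json():
        print(check["context"], check["state"])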