From patchwork Thu Jan 19 01:16:45 2023
X-Patchwork-Submitter: Jacob Keller
X-Patchwork-Id: 1728527
X-Patchwork-Delegate: anthony.l.nguyen@intel.com
From: Jacob Keller
To: Intel Wired LAN
Date: Wed, 18 Jan 2023 17:16:45 -0800
Message-Id: <20230119011653.311675-6-jacob.e.keller@intel.com>
X-Mailer: git-send-email 2.39.1.405.gd4c25cc71f83
In-Reply-To: <20230119011653.311675-1-jacob.e.keller@intel.com>
References: <20230119011653.311675-1-jacob.e.keller@intel.com>
Subject: [Intel-wired-lan] [PATCH net-next v2 05/13] ice: Fix RDMA latency issue by allowing write-combining
Cc: Anthony Nguyen

The current method of mapping the entire BAR region as a single uncacheable
region does not allow RDMA to use write combining (WC). This results in
increased latency for RDMA.

To fix this, we initially planned to reduce the size of the map made by the
PF driver to cover only up to the beginning of the RDMA space. Unfortunately,
this will not work in the future, as some hardware features use registers
beyond the RDMA area. This includes Scalable IOV, a virtualization feature
currently in development.

Instead of simply reducing the size of the map, we need a solution which
allows access to all areas of the address space while leaving the RDMA area
open to be mapped with write combining. To allow for this, and to fix the
RDMA latency issue without blocking the higher areas of the BAR, we need to
create multiple separate memory maps. Doing so creates a sparse mapping
rather than a single contiguous area.

Replace the flat hw_addr pointer with a special ice_hw_addr structure which
represents the multiple mappings as a flexible array.

Based on the available BAR size, split BAR 0 into up to 3 regions:

* The space before the RDMA section
* The RDMA section itself (offsets 0x0800000 through 0x1000000), which wants
  write combining behavior
* The space after the RDMA section

The PF driver maps the regions before and after the RDMA section as
uncacheable and leaves the RDMA section itself unmapped, so that the RDMA
driver can map it with write combining.

Add an ice_get_hw_addr function which converts a register offset into the
appropriate kernel address based on which chunk it falls into. This does
cost slightly more computation overhead for register access, as we now must
check the table on each access.
However, we can pre-compute the addresses where this would be most costly,
such as the queue tail registers.

With this change, the RDMA driver is now free to map the RDMA register
section as write-combined without impacting access to other device registers
used by the main PF driver.

Reported-by: Dave Ertman
Signed-off-by: Jacob Keller
Tested-by: Jakub Andrysiak
---
Changes since v1:
* Export ice_get_hw_addr
* Use ice_get_hw_addr in the iRDMA driver
* Fix the WARN_ON to use %pa instead of %llx for printing a resource_size_t

 drivers/infiniband/hw/irdma/main.c           |   2 +-
 drivers/net/ethernet/intel/ice/ice.h         |   4 +-
 drivers/net/ethernet/intel/ice/ice_base.c    |   5 +-
 drivers/net/ethernet/intel/ice/ice_ethtool.c |   3 +-
 drivers/net/ethernet/intel/ice/ice_main.c    | 177 +++++++++++++++++--
 drivers/net/ethernet/intel/ice/ice_osdep.h   |  48 ++++-
 drivers/net/ethernet/intel/ice/ice_txrx.h    |   2 +-
 drivers/net/ethernet/intel/ice/ice_type.h    |   2 +-
 8 files changed, 219 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/hw/irdma/main.c b/drivers/infiniband/hw/irdma/main.c
index 514453777e07..37a2650abbbb 100644
--- a/drivers/infiniband/hw/irdma/main.c
+++ b/drivers/infiniband/hw/irdma/main.c
@@ -228,7 +228,7 @@ static void irdma_fill_device_info(struct irdma_device *iwdev, struct ice_pf *pf
 	rf->cdev = pf;
 	rf->gen_ops.register_qset = irdma_lan_register_qset;
 	rf->gen_ops.unregister_qset = irdma_lan_unregister_qset;
-	rf->hw.hw_addr = pf->hw.hw_addr;
+	rf->hw.hw_addr = ice_get_hw_addr(&pf->hw, 0);
 	rf->pcidev = pf->pdev;
 	rf->msix_count = pf->num_rdma_msix;
 	rf->pf_id = pf->hw.pf_id;
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 51a1a89f7b5a..cd81974822cc 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -75,7 +75,9 @@
 #include "ice_vsi_vlan_ops.h"
 #include "ice_gnss.h"
 
-#define ICE_BAR0		0
+#define ICE_BAR0		0
+#define ICE_BAR_RDMA_WC_START	0x0800000
+#define ICE_BAR_RDMA_WC_END	0x1000000
 #define ICE_REQ_DESC_MULTIPLE	32
 #define ICE_MIN_NUM_DESC	64
 #define ICE_MAX_NUM_DESC	8160
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 554095b25f44..332d5a1b326c 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -480,7 +480,7 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
 	ring->rx_offset = ice_rx_offset(ring);
 
 	/* init queue specific tail register */
-	ring->tail = hw->hw_addr + QRX_TAIL(pf_q);
+	ring->tail = ice_get_hw_addr(hw, QRX_TAIL(pf_q));
 	writel(0, ring->tail);
 
 	return 0;
@@ -790,8 +790,7 @@ ice_vsi_cfg_txq(struct ice_vsi *vsi, struct ice_tx_ring *ring,
 	/* init queue specific tail reg. It is referred as
 	 * transmit comm scheduler queue doorbell.
 	 */
-	ring->tail = hw->hw_addr + QTX_COMM_DBELL(pf_q);
-
+	ring->tail = ice_get_hw_addr(hw, QTX_COMM_DBELL(pf_q));
 	if (IS_ENABLED(CONFIG_DCB))
 		tc = ring->dcb_tc;
 	else
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 936f0e0c553d..b54f470be8d7 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -3085,7 +3085,8 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
 		/* this is to allow wr32 to have something to write to
 		 * during early allocation of Rx buffers
 		 */
-		rx_rings[i].tail = vsi->back->hw.hw_addr + PRTGEN_STATUS;
+		rx_rings[i].tail = ice_get_hw_addr(&vsi->back->hw,
+						   PRTGEN_STATUS);
 
 		err = ice_setup_rx_ring(&rx_rings[i]);
 		if (err)
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 4165fde0106d..3b98721fd9d8 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -596,6 +596,163 @@ ice_prepare_for_reset(struct ice_pf *pf, enum ice_reset_req reset_type)
 	set_bit(ICE_PREPARED_FOR_RESET, pf->state);
 }
 
+/**
+ * ice_get_hw_addr - Get memory address for a given device register
+ * @hw: pointer to the HW struct
+ * @reg: the register to get address of
+ *
+ * Convert a register offset into the appropriate memory mapped kernel
+ * address.
+ *
+ * Returns the pointer address or an ERR_PTR on failure.
+ */
+void __iomem *ice_get_hw_addr(struct ice_hw *hw, resource_size_t reg)
+{
+	struct ice_hw_addr *hw_addr = (struct ice_hw_addr *)hw->hw_addr;
+	struct ice_hw_addr_map *map;
+	unsigned int i;
+
+	if (WARN_ON(!hw_addr))
+		return (void __iomem *)ERR_PTR(-EIO);
+
+	for (i = 0, map = hw_addr->maps; i < hw_addr->nr; i++, map++)
+		if (reg >= map->start && reg < map->end)
+			return (u8 __iomem *)map->addr + (reg - map->start);
+
+	WARN_ONCE(1, "Unable to map register address %pa to kernel address",
+		  &reg);
+
+	return (void __iomem *)ERR_PTR(-EFAULT);
+}
+EXPORT_SYMBOL_GPL(ice_get_hw_addr);
+
+/**
+ * ice_map_hw_addr - map a region of device registers to memory
+ * @pdev: the PCI device
+ * @map: the address map structure
+ *
+ * Map the specified section of the hardware registers into memory, storing
+ * the memory mapped address in the provided structure.
+ *
+ * Returns 0 on success or an error code on failure.
+ */
+static int ice_map_hw_addr(struct pci_dev *pdev, struct ice_hw_addr_map *map)
+{
+	struct device *dev = &pdev->dev;
+	resource_size_t size, base;
+	void __iomem *addr;
+
+	if (WARN_ON(map->end <= map->start))
+		return -EIO;
+
+	size = map->end - map->start;
+	base = pci_resource_start(pdev, map->bar) + map->start;
+	addr = ioremap(base, size);
+	if (!addr) {
+		dev_err(dev, "%s: remap at offset %llu failed\n",
+			__func__, map->start);
+		return -EIO;
+	}
+
+	map->addr = addr;
+
+	return 0;
+}
+
+/**
+ * ice_map_all_hw_addr - Request and map PCI BAR memory
+ * @pf: pointer to the PF structure
+ *
+ * Request and reserve all PCI BAR regions. Memory map chunks of the PCI BAR
+ * 0 into a sparse memory map to allow the RDMA region to be mapped with write
+ * combining.
+ *
+ * Returns 0 on success or an error code on failure.
+ */
+static int ice_map_all_hw_addr(struct ice_pf *pf)
+{
+	struct pci_dev *pdev = pf->pdev;
+	struct device *dev = &pdev->dev;
+	struct ice_hw_addr *hw_addr;
+	resource_size_t bar_len;
+	unsigned int nr_maps;
+	int err;
+
+	bar_len = pci_resource_len(pdev, 0);
+	if (bar_len > ICE_BAR_RDMA_WC_END)
+		nr_maps = 2;
+	else
+		nr_maps = 1;
+
+	hw_addr = kzalloc(struct_size(hw_addr, maps, nr_maps), GFP_KERNEL);
+	if (!hw_addr)
+		return -ENOMEM;
+
+	hw_addr->nr = nr_maps;
+
+	err = pci_request_mem_regions(pdev, dev_driver_string(dev));
+	if (err) {
+		dev_err(dev, "pci_request_mem_regions failed, err %pe\n",
+			ERR_PTR(err));
+		goto err_free_hw_addr;
+	}
+
+	/* Map the start of the BAR as uncachable */
+	hw_addr->maps[0].bar = 0;
+	hw_addr->maps[0].start = 0;
+	hw_addr->maps[0].end = min_t(resource_size_t, bar_len,
+				     ICE_BAR_RDMA_WC_START);
+	err = ice_map_hw_addr(pdev, &hw_addr->maps[0]);
+	if (err)
+		goto err_release_mem_regions;
+
+	/* Map everything past the RDMA section as uncachable */
+	if (nr_maps > 1) {
+		hw_addr->maps[1].bar = 0;
+		hw_addr->maps[1].start = ICE_BAR_RDMA_WC_END;
+		hw_addr->maps[1].end = bar_len;
+		err = ice_map_hw_addr(pdev, &hw_addr->maps[1]);
+		if (err)
+			goto err_unmap_bar_start;
+	}
+
+	pf->hw.hw_addr = (typeof(pf->hw.hw_addr))hw_addr;
+
+	return 0;
+
+err_unmap_bar_start:
+	iounmap(hw_addr->maps[0].addr);
+err_release_mem_regions:
+	pci_release_mem_regions(pdev);
+err_free_hw_addr:
+	kfree(hw_addr);
+
+	return err;
+}
+
+/**
+ * ice_unmap_all_hw_addr - Release device register memory maps
+ * @pf: pointer to the PF structure
+ *
+ * Release all PCI memory maps and regions.
+ */
+static void ice_unmap_all_hw_addr(struct ice_pf *pf)
+{
+	struct ice_hw_addr *hw_addr = (struct ice_hw_addr *)pf->hw.hw_addr;
+	struct pci_dev *pdev = pf->pdev;
+	unsigned int i;
+
+	if (WARN_ON(!hw_addr))
+		return;
+
+	pf->hw.hw_addr = NULL;
+	for (i = 0; i < hw_addr->nr; i++)
+		iounmap(hw_addr->maps[i].addr);
+	kfree(hw_addr);
+
+	pci_release_mem_regions(pdev);
+}
+
 /**
  * ice_do_reset - Initiate one of many types of resets
  * @pf: board private structure
@@ -5101,19 +5258,10 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 		return -EINVAL;
 	}
 
-	/* this driver uses devres, see
-	 * Documentation/driver-api/driver-model/devres.rst
-	 */
-	err = pcim_enable_device(pdev);
+	err = pci_enable_device(pdev);
 	if (err)
 		return err;
 
-	err = pcim_iomap_regions(pdev, BIT(ICE_BAR0), dev_driver_string(dev));
-	if (err) {
-		dev_err(dev, "BAR0 I/O map error %d\n", err);
-		return err;
-	}
-
 	pf = ice_allocate_pf(dev);
 	if (!pf)
 		return -ENOMEM;
@@ -5138,7 +5286,11 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 	set_bit(ICE_SERVICE_DIS, pf->state);
 
 	hw = &pf->hw;
-	hw->hw_addr = pcim_iomap_table(pdev)[ICE_BAR0];
+
+	err = ice_map_all_hw_addr(pf);
+	if (err)
+		goto err_init_iomap_fail;
+
 	pci_save_state(pdev);
 
 	hw->back = pf;
@@ -5186,6 +5338,8 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 err_init_eth:
 	ice_deinit(pf);
 err_init:
+	ice_unmap_all_hw_addr(pf);
+err_init_iomap_fail:
 	pci_disable_pcie_error_reporting(pdev);
 	pci_disable_device(pdev);
 	return err;
@@ -5295,6 +5449,7 @@ static void ice_remove(struct pci_dev *pdev)
 	 */
 	ice_reset(&pf->hw, ICE_RESET_PFR);
 	pci_wait_for_pending_transaction(pdev);
+	ice_unmap_all_hw_addr(pf);
 	pci_disable_pcie_error_reporting(pdev);
 	pci_disable_device(pdev);
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_osdep.h b/drivers/net/ethernet/intel/ice/ice_osdep.h
index 82bc54fec7f3..4b16ff489c3a 100644
--- a/drivers/net/ethernet/intel/ice/ice_osdep.h
+++ b/drivers/net/ethernet/intel/ice/ice_osdep.h
@@ -18,10 +18,49 @@
 #endif
 #include <net/udp_tunnel.h>
 
-#define wr32(a, reg, value)	writel((value), ((a)->hw_addr + (reg)))
-#define rd32(a, reg)		readl((a)->hw_addr + (reg))
-#define wr64(a, reg, value)	writeq((value), ((a)->hw_addr + (reg)))
-#define rd64(a, reg)		readq((a)->hw_addr + (reg))
+struct ice_hw;
+
+/**
+ * struct ice_hw_addr_map - a single hardware address memory map
+ * @addr: iomem address of the start of this map
+ * @start: register offset at the start of this map, inclusive bound
+ * @end: register offset at the end of this map, exclusive bound
+ * @bar: the BAR this map is for
+ *
+ * Structure representing one map of a device BAR register space. Stored as
+ * part of the ice_hw_addr structure in an array ordered by the start offset.
+ *
+ * The addr value is an iomem address returned by ioremap. The start indicates
+ * the first register offset this map is valid for. The end indicates the end
+ * of the map, and is an exclusive bound.
+ */
+struct ice_hw_addr_map {
+	void __iomem *addr;
+	resource_size_t start;
+	resource_size_t end;
+	int bar;
+};
+
+/**
+ * struct ice_hw_addr - a list of hardware address memory maps
+ * @nr: the number of maps made
+ * @maps: flexible array of maps made during device initialization
+ *
+ * Structure representing a series of sparse maps of the device BAR 0 address
+ * space to kernel addresses. Users must convert a register offset to an iomem
+ * address using ice_get_hw_addr.
+ */
+struct ice_hw_addr {
+	unsigned int nr;
+	struct ice_hw_addr_map maps[];
+};
+
+void __iomem *ice_get_hw_addr(struct ice_hw *hw, resource_size_t reg);
+
+#define wr32(a, reg, value)	writel((value), ice_get_hw_addr((a), (reg)))
+#define rd32(a, reg)		readl(ice_get_hw_addr((a), (reg)))
+#define wr64(a, reg, value)	writeq((value), ice_get_hw_addr((a), (reg)))
+#define rd64(a, reg)		readq(ice_get_hw_addr((a), (reg)))
 
 #define ice_flush(a)		rd32((a), GLGEN_STAT)
 #define ICE_M(m, s)		((m) << (s))
@@ -32,7 +71,6 @@ struct ice_dma_mem {
 	size_t size;
 };
 
-struct ice_hw;
 struct device *ice_hw_to_dev(struct ice_hw *hw);
 
 #ifdef CONFIG_DYNAMIC_DEBUG
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index 4fd0e5d0a313..3d2834673903 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -272,7 +272,7 @@ struct ice_rx_ring {
 	struct net_device *netdev;	/* netdev ring maps to */
 	struct ice_vsi *vsi;		/* Backreference to associated VSI */
 	struct ice_q_vector *q_vector;	/* Backreference to associated vector */
-	u8 __iomem *tail;
+	void __iomem *tail;
 	union {
 		struct ice_rx_buf *rx_buf;
 		struct xdp_buff **xdp_buf;
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index e3f622cad425..f34975efeed7 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -821,7 +821,7 @@ struct ice_mbx_data {
 
 /* Port hardware description */
 struct ice_hw {
-	u8 __iomem *hw_addr;
+	void *hw_addr;
 	void *back;
 	struct ice_aqc_layer_props *layer_info;
 	struct ice_port_info *port_info;
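
Note for readers: the write-combining mapping itself is not created by this
patch; the PF driver only carves the WC window out of its own uncacheable
maps, leaving BAR 0 offsets ICE_BAR_RDMA_WC_START through ICE_BAR_RDMA_WC_END
untouched. A minimal sketch of what the consumer-side mapping could look like
follows. It is not part of the series above: only the two ICE_BAR_RDMA_WC_*
macros come from the diff, while the function name and its placement are
hypothetical.

/* Illustrative sketch only, not from this series: how a consumer such as the
 * RDMA driver might map the write-combining window that the PF driver now
 * leaves unmapped. Assumes the ICE_BAR_RDMA_WC_* macros from ice.h are
 * visible in this translation unit.
 */
#include <linux/io.h>
#include <linux/pci.h>

static void __iomem *map_rdma_wc_window(struct pci_dev *pdev)
{
	resource_size_t base = pci_resource_start(pdev, 0) + ICE_BAR_RDMA_WC_START;
	resource_size_t size = ICE_BAR_RDMA_WC_END - ICE_BAR_RDMA_WC_START;

	/* ioremap_wc() requests a write-combining mapping, in contrast to
	 * the uncacheable ioremap() used for the other BAR 0 chunks above.
	 */
	return ioremap_wc(base, size);
}

In the irdma hunk above, the base register address is still obtained through
ice_get_hw_addr(&pf->hw, 0), so only the RDMA window itself would use a
mapping along these lines.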