get:
Show a patch.

patch:
Partially update a patch (only the fields supplied in the request are changed).

put:
Update a patch (the full resource is replaced).
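
Reads do not require authentication. As a minimal sketch, the patch shown in the example response below can be fetched with the Python requests library (the library choice and error handling are illustrative, not part of the API):

import requests

# Retrieve a single patch as JSON; anonymous access is sufficient for reads.
url = "http://patchwork.ozlabs.org/api/patches/1122917/"
resp = requests.get(url)
resp.raise_for_status()

patch = resp.json()
print(patch["name"], patch["state"])  # e.g. "[v2,1/2] ice: Add support for XDP" "superseded"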

GET /api/patches/1122917/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 1122917,
    "url": "http://patchwork.ozlabs.org/api/patches/1122917/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190626080711.634-1-anthony.l.nguyen@intel.com/",
    "project": {
        "id": 46,
        "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api",
        "name": "Intel Wired Ethernet development",
        "link_name": "intel-wired-lan",
        "list_id": "intel-wired-lan.osuosl.org",
        "list_email": "intel-wired-lan@osuosl.org",
        "web_url": "",
        "scm_url": "",
        "webscm_url": "",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20190626080711.634-1-anthony.l.nguyen@intel.com>",
    "list_archive_url": null,
    "date": "2019-06-26T08:07:10",
    "name": "[v2,1/2] ice: Add support for XDP",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": false,
    "hash": "001f0eb1904ed56e5ed9f916dd238c6527e92144",
    "submitter": {
        "id": 68875,
        "url": "http://patchwork.ozlabs.org/api/people/68875/?format=api",
        "name": "Tony Nguyen",
        "email": "anthony.l.nguyen@intel.com"
    },
    "delegate": {
        "id": 68,
        "url": "http://patchwork.ozlabs.org/api/users/68/?format=api",
        "username": "jtkirshe",
        "first_name": "Jeff",
        "last_name": "Kirsher",
        "email": "jeffrey.t.kirsher@intel.com"
    },
    "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190626080711.634-1-anthony.l.nguyen@intel.com/mbox/",
    "series": [
        {
            "id": 116291,
            "url": "http://patchwork.ozlabs.org/api/series/116291/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=116291",
            "date": "2019-06-26T08:07:11",
            "name": "[v2,1/2] ice: Add support for XDP",
            "version": 2,
            "mbox": "http://patchwork.ozlabs.org/series/116291/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/1122917/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/1122917/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<intel-wired-lan-bounces@osuosl.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@bilbo.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Authentication-Results": [
            "ozlabs.org;\n\tspf=pass (mailfrom) smtp.mailfrom=osuosl.org\n\t(client-ip=140.211.166.138; helo=whitealder.osuosl.org;\n\tenvelope-from=intel-wired-lan-bounces@osuosl.org;\n\treceiver=<UNKNOWN>)",
            "ozlabs.org;\n\tdmarc=fail (p=none dis=none) header.from=intel.com"
        ],
        "Received": [
            "from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 45YpXV2S56z9s3Z\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 27 Jun 2019 02:34:50 +1000 (AEST)",
            "from localhost (localhost [127.0.0.1])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id AEC0F82C36;\n\tWed, 26 Jun 2019 16:34:48 +0000 (UTC)",
            "from whitealder.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id yEhanxY-BEU1; Wed, 26 Jun 2019 16:34:41 +0000 (UTC)",
            "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id 305BB869F8;\n\tWed, 26 Jun 2019 16:34:41 +0000 (UTC)",
            "from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n\tby ash.osuosl.org (Postfix) with ESMTP id E82351BF3B8\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tWed, 26 Jun 2019 16:34:39 +0000 (UTC)",
            "from localhost (localhost [127.0.0.1])\n\tby silver.osuosl.org (Postfix) with ESMTP id C92AD21F76\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tWed, 26 Jun 2019 16:34:39 +0000 (UTC)",
            "from silver.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id i6MnkIMCQINz for <intel-wired-lan@lists.osuosl.org>;\n\tWed, 26 Jun 2019 16:34:36 +0000 (UTC)",
            "from mga06.intel.com (mga06.intel.com [134.134.136.31])\n\tby silver.osuosl.org (Postfix) with ESMTPS id 8DCD421574\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tWed, 26 Jun 2019 16:34:36 +0000 (UTC)",
            "from orsmga004.jf.intel.com ([10.7.209.38])\n\tby orsmga104.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t26 Jun 2019 09:34:35 -0700",
            "from unknown (HELO localhost.jf.intel.com) ([10.166.244.174])\n\tby orsmga004.jf.intel.com with ESMTP; 26 Jun 2019 09:34:35 -0700"
        ],
        "X-Virus-Scanned": [
            "amavisd-new at osuosl.org",
            "amavisd-new at osuosl.org"
        ],
        "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6",
        "X-Amp-Result": "SKIPPED(no attachment in message)",
        "X-Amp-File-Uploaded": "False",
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.63,420,1557212400\"; d=\"scan'208\";a=\"313480903\"",
        "From": "Tony Nguyen <anthony.l.nguyen@intel.com>",
        "To": "intel-wired-lan@lists.osuosl.org",
        "Date": "Wed, 26 Jun 2019 01:07:10 -0700",
        "Message-Id": "<20190626080711.634-1-anthony.l.nguyen@intel.com>",
        "X-Mailer": "git-send-email 2.20.1",
        "MIME-Version": "1.0",
        "Subject": "[Intel-wired-lan] [PATCH v2 1/2] ice: Add support for XDP",
        "X-BeenThere": "intel-wired-lan@osuosl.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.osuosl.org>",
        "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>",
        "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>",
        "List-Post": "<mailto:intel-wired-lan@osuosl.org>",
        "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>",
        "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>",
        "Cc": "Maciej Fijalkowski <maciej.fijalkowski@intel.com>",
        "Content-Type": "text/plain; charset=\"us-ascii\"",
        "Content-Transfer-Encoding": "7bit",
        "Errors-To": "intel-wired-lan-bounces@osuosl.org",
        "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"
    },
    "content": "From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>\n\nAdd support for XDP. Implement ndo_bpf and ndo_xdp_xmit.  Upon load of\nan XDP program, allocate additional Tx rings for dedicated XDP use.\nThe following actions are supported: XDP_TX, XDP_DROP, XDP_REDIRECT,\nXDP_PASS, and XDP_ABORTED.\n\nMove build_ctob() up so that no forward declaration is needed and\nrename it to ice_build_ctob() since it's an ice function.\n\nSigned-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>\nSigned-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>\n---\n drivers/net/ethernet/intel/ice/ice.h          |  21 ++\n drivers/net/ethernet/intel/ice/ice_ethtool.c  |  53 ++-\n drivers/net/ethernet/intel/ice/ice_lib.c      |  84 ++++-\n drivers/net/ethernet/intel/ice/ice_lib.h      |   6 +\n drivers/net/ethernet/intel/ice/ice_main.c     | 319 ++++++++++++++++\n drivers/net/ethernet/intel/ice/ice_txrx.c     | 343 +++++++++++++++---\n drivers/net/ethernet/intel/ice/ice_txrx.h     |  29 +-\n .../net/ethernet/intel/ice/ice_virtchnl_pf.c  |   1 +\n 8 files changed, 799 insertions(+), 57 deletions(-)",
    "diff": "diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h\nindex 9ee6b55553c0..53adb93c6b61 100644\n--- a/drivers/net/ethernet/intel/ice/ice.h\n+++ b/drivers/net/ethernet/intel/ice/ice.h\n@@ -28,7 +28,10 @@\n #include <linux/ip.h>\n #include <linux/sctp.h>\n #include <linux/ipv6.h>\n+#include <linux/pkt_sched.h>\n #include <linux/if_bridge.h>\n+#include <linux/ctype.h>\n+#include <linux/bpf.h>\n #include <linux/avf/virtchnl.h>\n #include <net/ipv6.h>\n #include \"ice_devids.h\"\n@@ -301,6 +304,10 @@ struct ice_vsi {\n \tu16 num_rx_desc;\n \tu16 num_tx_desc;\n \tstruct ice_tc_cfg tc_cfg;\n+\tstruct bpf_prog *xdp_prog;\n+\tstruct ice_ring **xdp_rings;\t /* XDP ring array */\n+\tu16 num_xdp_txq;\t\t /* Used XDP queues */\n+\tu8 xdp_mapping_mode;\t\t /* ICE_MAP_MODE_[CONTIG|SCATTER] */\n } ____cacheline_internodealigned_in_smp;\n \n /* struct that defines an interrupt vector */\n@@ -432,6 +439,16 @@ ice_irq_dynamic_ena(struct ice_hw *hw, struct ice_vsi *vsi,\n \twr32(hw, GLINT_DYN_CTL(vector), val);\n }\n \n+static inline bool ice_is_xdp_ena_vsi(struct ice_vsi *vsi)\n+{\n+\treturn !!vsi->xdp_prog;\n+}\n+\n+static inline void ice_set_ring_xdp(struct ice_ring *ring)\n+{\n+\tring->tx_buf[0].tx_flags |= ICE_TX_FLAGS_RING_XDP;\n+}\n+\n /**\n  * ice_find_vsi_by_type - Find and return VSI of a given type\n  * @pf: PF to search for VSI\n@@ -459,6 +476,10 @@ int ice_up(struct ice_vsi *vsi);\n int ice_down(struct ice_vsi *vsi);\n int ice_vsi_cfg(struct ice_vsi *vsi);\n struct ice_vsi *ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi);\n+int ice_prepare_xdp_rings(struct ice_vsi *vsi);\n+int ice_destroy_xdp_rings(struct ice_vsi *vsi);\n+int ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,\n+\t\t u32 flags);\n int ice_set_rss(struct ice_vsi *vsi, u8 *seed, u8 *lut, u16 lut_size);\n int ice_get_rss(struct ice_vsi *vsi, u8 *seed, u8 *lut, u16 lut_size);\n void ice_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size);\ndiff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c\nindex 52083a63dee6..2d9c184a2333 100644\n--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c\n+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c\n@@ -2558,6 +2558,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)\n {\n \tstruct ice_ring *tx_rings = NULL, *rx_rings = NULL;\n \tstruct ice_netdev_priv *np = netdev_priv(netdev);\n+\tstruct ice_ring *xdp_rings = NULL;\n \tstruct ice_vsi *vsi = np->vsi;\n \tstruct ice_pf *pf = vsi->back;\n \tint i, timeout = 50, err = 0;\n@@ -2605,6 +2606,11 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)\n \t\t\tvsi->tx_rings[i]->count = new_tx_cnt;\n \t\tfor (i = 0; i < vsi->alloc_rxq; i++)\n \t\t\tvsi->rx_rings[i]->count = new_rx_cnt;\n+\t\tif (ice_is_xdp_ena_vsi(vsi))\n+\t\t\tfor (i = 0; i < vsi->num_xdp_txq; i++)\n+\t\t\t\tvsi->xdp_rings[i]->count = new_tx_cnt;\n+\t\tvsi->num_tx_desc = new_tx_cnt;\n+\t\tvsi->num_rx_desc = new_rx_cnt;\n \t\tnetdev_dbg(netdev, \"Link is down, descriptor count change happens when link is brought up\\n\");\n \t\tgoto done;\n \t}\n@@ -2631,15 +2637,46 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)\n \t\ttx_rings[i].tx_buf = NULL;\n \t\terr = ice_setup_tx_ring(&tx_rings[i]);\n \t\tif (err) {\n-\t\t\twhile (i) {\n-\t\t\t\ti--;\n+\t\t\twhile (i--)\n \t\t\t\tice_clean_tx_ring(&tx_rings[i]);\n-\t\t\t}\n+\n \t\t\tdevm_kfree(&pf->pdev->dev, tx_rings);\n 
\t\t\tgoto done;\n \t\t}\n \t}\n \n+\tif (!ice_is_xdp_ena_vsi(vsi))\n+\t\tgoto process_rx;\n+\n+\t/* alloc updated XDP resources */\n+\tnetdev_info(netdev, \"Changing XDP descriptor count from %d to %d\\n\",\n+\t\t    vsi->xdp_rings[0]->count, new_tx_cnt);\n+\n+\txdp_rings = devm_kcalloc(&pf->pdev->dev, vsi->num_xdp_txq,\n+\t\t\t\t sizeof(*xdp_rings), GFP_KERNEL);\n+\tif (!xdp_rings) {\n+\t\terr = -ENOMEM;\n+\t\tgoto free_tx;\n+\t}\n+\n+\tfor (i = 0; i < vsi->num_xdp_txq; i++) {\n+\t\t/* clone ring and setup updated count */\n+\t\txdp_rings[i] = *vsi->xdp_rings[i];\n+\t\txdp_rings[i].count = new_tx_cnt;\n+\t\txdp_rings[i].desc = NULL;\n+\t\txdp_rings[i].tx_buf = NULL;\n+\t\terr = ice_setup_tx_ring(&xdp_rings[i]);\n+\t\tif (err) {\n+\t\t\twhile (i) {\n+\t\t\t\ti--;\n+\t\t\t\tice_clean_tx_ring(&xdp_rings[i]);\n+\t\t\t}\n+\t\t\tdevm_kfree(&pf->pdev->dev, xdp_rings);\n+\t\t\tgoto free_tx;\n+\t\t}\n+\t\tice_set_ring_xdp(&xdp_rings[i]);\n+\t}\n+\n process_rx:\n \tif (new_rx_cnt == vsi->rx_rings[0]->count)\n \t\tgoto process_link;\n@@ -2718,6 +2755,16 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)\n \t\t\tdevm_kfree(&pf->pdev->dev, rx_rings);\n \t\t}\n \n+\t\tif (xdp_rings) {\n+\t\t\tfor (i = 0; i < vsi->num_xdp_txq; i++) {\n+\t\t\t\tice_free_tx_ring(vsi->xdp_rings[i]);\n+\t\t\t\t*vsi->xdp_rings[i] = xdp_rings[i];\n+\t\t\t}\n+\t\t\tdevm_kfree(&pf->pdev->dev, xdp_rings);\n+\t\t}\n+\n+\t\tvsi->num_tx_desc = new_tx_cnt;\n+\t\tvsi->num_rx_desc = new_rx_cnt;\n \t\tice_up(vsi);\n \t}\n \tgoto done;\ndiff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c\nindex a19f5920733b..09c6b9921ccd 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.c\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.c\n@@ -27,6 +27,22 @@ static int ice_setup_rx_ctx(struct ice_ring *ring)\n \t/* clear the context structure first */\n \tmemset(&rlan_ctx, 0, sizeof(rlan_ctx));\n \n+\tring->rx_buf_len = vsi->rx_buf_len;\n+\n+\tif (ring->vsi->type == ICE_VSI_PF) {\n+\t\tif (!xdp_rxq_info_is_reg(&ring->xdp_rxq))\n+\t\t\txdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,\n+\t\t\t\t\t ring->q_index);\n+\n+\t\terr = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,\n+\t\t\t\t\t\t MEM_TYPE_PAGE_SHARED, NULL);\n+\t\tif (err)\n+\t\t\treturn err;\n+\t}\n+\t/* Receive Queue Base Address.\n+\t * Indicates the starting address of the descriptor queue defined in\n+\t * 128 Byte units.\n+\t */\n \trlan_ctx.base = ring->dma >> 7;\n \n \trlan_ctx.qlen = ring->count;\n@@ -34,7 +50,7 @@ static int ice_setup_rx_ctx(struct ice_ring *ring)\n \t/* Receive Packet Data Buffer Size.\n \t * The Packet Data Buffer Size is defined in 128 byte units.\n \t */\n-\trlan_ctx.dbuf = vsi->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;\n+\trlan_ctx.dbuf = ring->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;\n \n \t/* use 32 byte descriptors */\n \trlan_ctx.dsize = 1;\n@@ -61,7 +77,7 @@ static int ice_setup_rx_ctx(struct ice_ring *ring)\n \t * than 5 x DBUF\n \t */\n \trlan_ctx.rxmax = min_t(u16, vsi->max_frame,\n-\t\t\t       ICE_MAX_CHAINED_RX_BUFS * vsi->rx_buf_len);\n+\t\t\t       ICE_MAX_CHAINED_RX_BUFS * ring->rx_buf_len);\n \n \t/* Rx queue threshold in units of 64 */\n \trlan_ctx.lrxqthresh = 1;\n@@ -620,7 +636,7 @@ static int __ice_vsi_get_qs_sc(struct ice_qs_cfg *qs_cfg)\n  *\n  * Return 0 on success and -ENOMEM in case of no left space in PF queue bitmap\n  */\n-static int __ice_vsi_get_qs(struct ice_qs_cfg *qs_cfg)\n+int __ice_vsi_get_qs(struct ice_qs_cfg *qs_cfg)\n {\n \tint ret = 0;\n \n@@ -1706,7 +1722,7 @@ 
ice_vsi_cfg_txqs(struct ice_vsi *vsi, struct ice_ring **rings, int offset)\n \t\t\trings[q_idx]->tail =\n \t\t\t\tpf->hw.hw_addr + QTX_COMM_DBELL(pf_q);\n \t\t\tstatus = ice_ena_vsi_txq(vsi->port_info, vsi->idx, tc,\n-\t\t\t\t\t\t i, num_q_grps, qg_buf,\n+\t\t\t\t\t\t i + offset, num_q_grps, qg_buf,\n \t\t\t\t\t\t buf_len, NULL);\n \t\t\tif (status) {\n \t\t\t\tdev_err(&pf->pdev->dev,\n@@ -1745,6 +1761,18 @@ int ice_vsi_cfg_lan_txqs(struct ice_vsi *vsi)\n \treturn ice_vsi_cfg_txqs(vsi, vsi->tx_rings, 0);\n }\n \n+/**\n+ * ice_vsi_cfg_xdp_txqs - Configure Tx queues dedicated for XDP in given VSI\n+ * @vsi: the VSI being configured\n+ *\n+ * Return 0 on success and a negative value on error\n+ * Configure the Tx queues dedicated for XDP in given VSI for operation.\n+ */\n+int ice_vsi_cfg_xdp_txqs(struct ice_vsi *vsi)\n+{\n+\treturn ice_vsi_cfg_txqs(vsi, vsi->xdp_rings, vsi->num_xdp_txq);\n+}\n+\n /**\n  * ice_intrl_usec_to_reg - convert interrupt rate limit to register value\n  * @intrl: interrupt rate limit in usecs\n@@ -1863,6 +1891,13 @@ ice_cfg_txq_interrupt(struct ice_vsi *vsi, u16 txq, u16 msix_idx, u16 itr_idx)\n \t      ((msix_idx << QINT_TQCTL_MSIX_INDX_S) & QINT_TQCTL_MSIX_INDX_M);\n \n \twr32(hw, QINT_TQCTL(vsi->txq_map[txq]), val);\n+\tif (ice_is_xdp_ena_vsi(vsi)) {\n+\t\tu32 xdp_txq = txq + vsi->num_xdp_txq;\n+\n+\t\twr32(hw, QINT_TQCTL(vsi->txq_map[xdp_txq]),\n+\t\t     val);\n+\t}\n+\tice_flush(hw);\n }\n \n /**\n@@ -2125,12 +2160,12 @@ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,\n \n \t\t\tq_ids[i] = vsi->txq_map[q_idx + offset];\n \t\t\tq_teids[i] = rings[q_idx]->txq_teid;\n-\t\t\tq_handles[i] = i;\n+\t\t\tq_handles[i] = i + offset;\n \n \t\t\t/* clear cause_ena bit for disabled queues */\n-\t\t\tval = rd32(hw, QINT_TQCTL(rings[i]->reg_idx));\n+\t\t\tval = rd32(hw, QINT_TQCTL(rings[q_idx]->reg_idx));\n \t\t\tval &= ~QINT_TQCTL_CAUSE_ENA_M;\n-\t\t\twr32(hw, QINT_TQCTL(rings[i]->reg_idx), val);\n+\t\t\twr32(hw, QINT_TQCTL(rings[q_idx]->reg_idx), val);\n \n \t\t\t/* software is expected to wait for 100 ns */\n \t\t\tndelay(100);\n@@ -2138,7 +2173,7 @@ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,\n \t\t\t/* trigger a software interrupt for the vector\n \t\t\t * associated to the queue to schedule NAPI handler\n \t\t\t */\n-\t\t\tq_vector = rings[i]->q_vector;\n+\t\t\tq_vector = rings[q_idx]->q_vector;\n \t\t\tif (q_vector)\n \t\t\t\tice_trigger_sw_intr(hw, q_vector);\n \n@@ -2190,6 +2225,16 @@ ice_vsi_stop_lan_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,\n \t\t\t\t     0);\n }\n \n+/**\n+ * ice_vsi_stop_xdp_tx_rings - Disable XDP Tx rings\n+ * @vsi: the VSI being configured\n+ */\n+int ice_vsi_stop_xdp_tx_rings(struct ice_vsi *vsi)\n+{\n+\treturn ice_vsi_stop_tx_rings(vsi, ICE_NO_RESET, 0, vsi->xdp_rings,\n+\t\t\t\t     vsi->num_xdp_txq);\n+}\n+\n /**\n  * ice_cfg_vlan_pruning - enable or disable VLAN pruning on the VSI\n  * @vsi: VSI to enable or disable VLAN pruning on\n@@ -2590,6 +2635,11 @@ static void ice_vsi_release_msix(struct ice_vsi *vsi)\n \t\twr32(hw, GLINT_ITR(ICE_IDX_ITR1, reg_idx), 0);\n \t\tfor (q = 0; q < q_vector->num_ring_tx; q++) {\n \t\t\twr32(hw, QINT_TQCTL(vsi->txq_map[txq]), 0);\n+\t\t\tif (ice_is_xdp_ena_vsi(vsi)) {\n+\t\t\t\tu32 xdp_txq = txq + vsi->num_xdp_txq;\n+\n+\t\t\t\twr32(hw, QINT_TQCTL(vsi->txq_map[xdp_txq]), 0);\n+\t\t\t}\n \t\t\ttxq++;\n \t\t}\n \n@@ -2962,6 +3012,11 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)\n \t\tvsi->base_vector = 0;\n \t}\n \n+\tif 
(ice_is_xdp_ena_vsi(vsi))\n+\t\t/* return value check can be skipped here, it always returns\n+\t\t * 0 if reset is in progress\n+\t\t */\n+\t\tice_destroy_xdp_rings(vsi);\n \tice_vsi_clear_rings(vsi);\n \tice_vsi_free_arrays(vsi);\n \tice_dev_onetime_setup(&pf->hw);\n@@ -2995,6 +3050,13 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)\n \t\t\tgoto err_vectors;\n \n \t\tice_vsi_map_rings_to_vectors(vsi);\n+\t\tif (ice_is_xdp_ena_vsi(vsi)) {\n+\t\t\tvsi->num_xdp_txq = vsi->alloc_txq;\n+\t\t\tvsi->xdp_mapping_mode = ICE_VSI_MAP_CONTIG;\n+\t\t\tret = ice_prepare_xdp_rings(vsi);\n+\t\t\tif (ret)\n+\t\t\t\tgoto err_vectors;\n+\t\t}\n \t\t/* Do not exit if configuring RSS had an issue, at least\n \t\t * receive traffic on first queue. Hence no need to capture\n \t\t * return value\n@@ -3027,9 +3089,13 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)\n \t}\n \n \t/* configure VSI nodes based on number of queues and TC's */\n-\tfor (i = 0; i < vsi->tc_cfg.numtc; i++)\n+\tfor (i = 0; i < vsi->tc_cfg.numtc; i++) {\n \t\tmax_txqs[i] = pf->num_lan_tx;\n \n+\t\tif (ice_is_xdp_ena_vsi(vsi))\n+\t\t\tmax_txqs[i] += vsi->num_xdp_txq;\n+\t}\n+\n \tstatus = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,\n \t\t\t\t max_txqs);\n \tif (status) {\ndiff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h\nindex 6e43ef03bfc3..c4c6eca05757 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.h\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.h\n@@ -43,6 +43,10 @@ int\n ice_vsi_stop_lan_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,\n \t\t\t  u16 rel_vmvf_num);\n \n+int ice_vsi_cfg_xdp_txqs(struct ice_vsi *vsi);\n+\n+int ice_vsi_stop_xdp_tx_rings(struct ice_vsi *vsi);\n+\n int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena, bool vlan_promisc);\n \n void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create);\n@@ -76,6 +80,8 @@ bool ice_is_reset_in_progress(unsigned long *state);\n \n void ice_vsi_free_q_vectors(struct ice_vsi *vsi);\n \n+int __ice_vsi_get_qs(struct ice_qs_cfg *qs_cfg);\n+\n void ice_trigger_sw_intr(struct ice_hw *hw, struct ice_q_vector *q_vector);\n \n void ice_vsi_put_qs(struct ice_vsi *vsi);\ndiff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c\nindex 28ec0d57941d..5d14627a6ab6 100644\n--- a/drivers/net/ethernet/intel/ice/ice_main.c\n+++ b/drivers/net/ethernet/intel/ice/ice_main.c\n@@ -1489,6 +1489,304 @@ static int ice_vsi_req_irq_msix(struct ice_vsi *vsi, char *basename)\n \treturn err;\n }\n \n+/**\n+ * ice_xdp_alloc_setup_rings - Allocate and setup Tx rings for XDP\n+ * @vsi: VSI to setup Tx rings used by XDP\n+ *\n+ * Return 0 on success and negative value on error\n+ */\n+static int ice_xdp_alloc_setup_rings(struct ice_vsi *vsi)\n+{\n+\tstruct device *dev = &vsi->back->pdev->dev;\n+\tint i;\n+\n+\tfor (i = 0; i < vsi->num_xdp_txq; i++) {\n+\t\tu16 xdp_q_idx = vsi->alloc_txq + i;\n+\t\tstruct ice_ring *xdp_ring;\n+\n+\t\txdp_ring = kzalloc(sizeof(*xdp_ring), GFP_KERNEL);\n+\n+\t\tif (!xdp_ring)\n+\t\t\tgoto free_xdp_rings;\n+\n+\t\txdp_ring->q_index = xdp_q_idx;\n+\t\txdp_ring->reg_idx = vsi->txq_map[xdp_q_idx];\n+\t\txdp_ring->ring_active = false;\n+\t\txdp_ring->vsi = vsi;\n+\t\txdp_ring->netdev = NULL;\n+\t\txdp_ring->dev = dev;\n+\t\txdp_ring->count = vsi->num_tx_desc;\n+\t\tvsi->xdp_rings[i] = xdp_ring;\n+\t\tif (ice_setup_tx_ring(xdp_ring))\n+\t\t\tgoto free_xdp_rings;\n+\t\tice_set_ring_xdp(xdp_ring);\n+\t}\n+\n+\treturn 0;\n+\n+free_xdp_rings:\n+\tfor (; i >= 0; i--)\n+\t\tif 
(vsi->xdp_rings[i] && vsi->xdp_rings[i]->desc)\n+\t\t\tice_free_tx_ring(vsi->xdp_rings[i]);\n+\treturn -ENOMEM;\n+}\n+\n+/**\n+ * ice_prepare_xdp_rings - Allocate, configure and setup Tx rings for XDP\n+ * @vsi: VSI to bring up Tx rings used by XDP\n+ *\n+ * Return 0 on success and negative value on error\n+ */\n+int ice_prepare_xdp_rings(struct ice_vsi *vsi)\n+{\n+\tu16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };\n+\tint xdp_rings_rem = vsi->num_xdp_txq;\n+\tstruct ice_pf *pf = vsi->back;\n+\tstruct ice_qs_cfg xdp_qs_cfg = {\n+\t\t.qs_mutex = &pf->avail_q_mutex,\n+\t\t.pf_map = pf->avail_txqs,\n+\t\t.pf_map_size = ICE_MAX_TXQS,\n+\t\t.q_count = vsi->num_xdp_txq,\n+\t\t.scatter_count = ICE_MAX_SCATTER_TXQS,\n+\t\t.vsi_map = vsi->txq_map,\n+\t\t.vsi_map_offset = vsi->alloc_txq,\n+\t\t.mapping_mode = vsi->xdp_mapping_mode\n+\t};\n+\tenum ice_status status;\n+\tint i, v_idx;\n+\n+\tvsi->xdp_rings = devm_kcalloc(&pf->pdev->dev, vsi->num_xdp_txq,\n+\t\t\t\t      sizeof(*vsi->xdp_rings), GFP_KERNEL);\n+\tif (!vsi->xdp_rings)\n+\t\treturn -ENOMEM;\n+\n+\tif (__ice_vsi_get_qs(&xdp_qs_cfg))\n+\t\tgoto err_map_xdp;\n+\n+\tpf->q_left_tx -= vsi->num_xdp_txq;\n+\n+\tif (ice_xdp_alloc_setup_rings(vsi))\n+\t\tgoto clear_xdp_rings;\n+\n+\t/* follow the logic from ice_vsi_map_rings_to_vectors */\n+\tice_for_each_q_vector(vsi, v_idx) {\n+\t\tstruct ice_q_vector *q_vector = vsi->q_vectors[v_idx];\n+\t\tint xdp_rings_per_v, q_id, q_base;\n+\n+\t\txdp_rings_per_v = DIV_ROUND_UP(xdp_rings_rem,\n+\t\t\t\t\t       vsi->num_q_vectors - v_idx);\n+\t\tq_base = vsi->num_xdp_txq - xdp_rings_rem;\n+\n+\t\tfor (q_id = q_base; q_id < (q_base + xdp_rings_per_v); q_id++) {\n+\t\t\tstruct ice_ring *xdp_ring = vsi->xdp_rings[q_id];\n+\n+\t\t\txdp_ring->q_vector = q_vector;\n+\t\t\txdp_ring->next = q_vector->tx.ring;\n+\t\t\tq_vector->tx.ring = xdp_ring;\n+\t\t}\n+\t\txdp_rings_rem -= xdp_rings_per_v;\n+\t}\n+\n+\t/* omit the scheduler update if in reset path; XDP queues will be\n+\t * taken into account at the end of ice_vsi_rebuild, where\n+\t * ice_cfg_vsi_lan is being called\n+\t */\n+\tif (ice_is_reset_in_progress(pf->state))\n+\t\treturn 0;\n+\n+\t/* tell the Tx scheduler that right now we have\n+\t * additional queues\n+\t */\n+\tfor (i = 0; i < vsi->tc_cfg.numtc; i++)\n+\t\tmax_txqs[i] = vsi->num_txq + vsi->num_xdp_txq;\n+\n+\tstatus = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,\n+\t\t\t\t max_txqs);\n+\tif (status) {\n+\t\tdev_err(&pf->pdev->dev,\n+\t\t\t\"Failed VSI LAN queue config for XDP, error:%d\\n\",\n+\t\t\tstatus);\n+\t\tgoto clear_xdp_rings;\n+\t}\n+\n+\treturn 0;\n+clear_xdp_rings:\n+\tfor (i = 0; i < vsi->num_xdp_txq; i++)\n+\t\tif (vsi->xdp_rings[i]) {\n+\t\t\tkfree_rcu(vsi->xdp_rings[i], rcu);\n+\t\t\tvsi->xdp_rings[i] = NULL;\n+\t\t}\n+\tpf->q_left_tx += vsi->num_xdp_txq;\n+\n+err_map_xdp:\n+\tmutex_lock(&pf->avail_q_mutex);\n+\tfor (i = 0; i < vsi->num_xdp_txq; i++) {\n+\t\tclear_bit(vsi->txq_map[i + vsi->alloc_txq], pf->avail_txqs);\n+\t\tvsi->txq_map[i + vsi->alloc_txq] = ICE_INVAL_Q_INDEX;\n+\t}\n+\tmutex_unlock(&pf->avail_q_mutex);\n+\n+\tdevm_kfree(&pf->pdev->dev, vsi->xdp_rings);\n+\treturn -ENOMEM;\n+}\n+\n+/**\n+ * ice_destroy_xdp_rings - undo the configuration made by ice_prepare_xdp_rings\n+ * @vsi: VSI to remove XDP rings\n+ *\n+ * Detach XDP rings from irq vectors, clean up the PF bitmap and free\n+ * resources\n+ */\n+int ice_destroy_xdp_rings(struct ice_vsi *vsi)\n+{\n+\tu16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };\n+\tstruct ice_pf *pf = vsi->back;\n+\tint i, 
v_idx;\n+\n+\t/* q_vectors are freed in reset path so there's no point in\n+\t * detaching rings\n+\t */\n+\tif (ice_is_reset_in_progress(pf->state))\n+\t\tgoto free_qmap;\n+\n+\tice_for_each_q_vector(vsi, v_idx) {\n+\t\tstruct ice_q_vector *q_vector = vsi->q_vectors[v_idx];\n+\t\tstruct ice_ring *ring;\n+\n+\t\tice_for_each_ring(ring, q_vector->tx)\n+\t\t\tif (!ring->tx_buf || !ice_ring_is_xdp(ring))\n+\t\t\t\tbreak;\n+\n+\t\t/* restore the value of last node prior to XDP setup */\n+\t\tq_vector->tx.ring = ring;\n+\t}\n+\n+free_qmap:\n+\tmutex_lock(&pf->avail_q_mutex);\n+\tfor (i = 0; i < vsi->num_xdp_txq; i++) {\n+\t\tclear_bit(vsi->txq_map[i + vsi->alloc_txq], pf->avail_txqs);\n+\t\tvsi->txq_map[i + vsi->alloc_txq] = ICE_INVAL_Q_INDEX;\n+\t}\n+\tmutex_unlock(&pf->avail_q_mutex);\n+\n+\tfor (i = 0; i < vsi->num_xdp_txq; i++)\n+\t\tif (vsi->xdp_rings[i]) {\n+\t\t\tif (vsi->xdp_rings[i]->desc)\n+\t\t\t\tice_free_tx_ring(vsi->xdp_rings[i]);\n+\t\t\tkfree_rcu(vsi->xdp_rings[i], rcu);\n+\t\t\tvsi->xdp_rings[i] = NULL;\n+\t\t}\n+\n+\tdevm_kfree(&pf->pdev->dev, vsi->xdp_rings);\n+\tvsi->xdp_rings = NULL;\n+\tpf->q_left_tx += vsi->num_xdp_txq;\n+\n+\tif (ice_is_reset_in_progress(pf->state))\n+\t\treturn 0;\n+\n+\t/* notify Tx scheduler that we destroyed XDP queues and bring\n+\t * back the old number of child nodes\n+\t */\n+\tfor (i = 0; i < vsi->tc_cfg.numtc; i++)\n+\t\tmax_txqs[i] = vsi->num_txq;\n+\n+\treturn ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,\n+\t\t\t       max_txqs);\n+}\n+\n+/**\n+ * ice_xdp_setup_prog - Add or remove XDP eBPF program\n+ * @vsi: VSI to setup XDP for\n+ * @prog: XDP program\n+ * @extack: netlink extended ack\n+ */\n+static int\n+ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,\n+\t\t   struct netlink_ext_ack *extack)\n+{\n+\tint frame_size = vsi->netdev->mtu + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN;\n+\tbool if_running = netif_running(vsi->netdev);\n+\tstruct bpf_prog *old_prog;\n+\tint i, ret = 0;\n+\n+\tif (frame_size > vsi->rx_buf_len) {\n+\t\tNL_SET_ERR_MSG_MOD(extack, \"MTU too large for loading XDP\");\n+\t\treturn -ENOTSUPP;\n+\t}\n+\n+\tif (!ice_is_xdp_ena_vsi(vsi) && !prog)\n+\t\treturn 0;\n+\n+\t/* need to stop netdev while setting up the program for Rx rings */\n+\tif (if_running && !test_and_set_bit(__ICE_DOWN, vsi->state)) {\n+\t\tret = ice_down(vsi);\n+\t\tif (ret) {\n+\t\t\tNL_SET_ERR_MSG_MOD(extack,\n+\t\t\t\t\t   \"Preparing device for XDP attach failed\");\n+\t\t\tgoto skip_setting_prog;\n+\t\t}\n+\t}\n+\n+\tif (!ice_is_xdp_ena_vsi(vsi) && prog) {\n+\t\tvsi->num_xdp_txq = vsi->alloc_txq;\n+\t\tvsi->xdp_mapping_mode = ICE_VSI_MAP_CONTIG;\n+\t\tif (ice_prepare_xdp_rings(vsi)) {\n+\t\t\tNL_SET_ERR_MSG_MOD(extack,\n+\t\t\t\t\t   \"Setting up XDP Tx resources failed\");\n+\t\t\tret = -ENOMEM;\n+\t\t\tgoto skip_setting_prog;\n+\t\t}\n+\t} else if (ice_is_xdp_ena_vsi(vsi) && !prog) {\n+\t\tif (ice_destroy_xdp_rings(vsi)) {\n+\t\t\tNL_SET_ERR_MSG_MOD(extack,\n+\t\t\t\t\t   \"Freeing XDP Tx resources failed\");\n+\t\t\tret = -ENOMEM;\n+\t\t\tgoto skip_setting_prog;\n+\t\t}\n+\t}\n+\n+\told_prog = xchg(&vsi->xdp_prog, prog);\n+\tif (old_prog)\n+\t\tbpf_prog_put(old_prog);\n+\n+\tfor (i = 0; i < vsi->num_rxq; i++)\n+\t\tWRITE_ONCE(vsi->rx_rings[i]->xdp_prog, vsi->xdp_prog);\n+\n+\tif (if_running)\n+\t\tret = ice_up(vsi);\n+\n+skip_setting_prog:\n+\treturn ret;\n+}\n+\n+/**\n+ * ice_xdp - implements XDP handler\n+ * @dev: netdevice\n+ * @xdp: XDP command\n+ */\n+static int ice_xdp(struct net_device *dev, struct netdev_bpf 
*xdp)\n+{\n+\tstruct ice_netdev_priv *np = netdev_priv(dev);\n+\tstruct ice_vsi *vsi = np->vsi;\n+\n+\tif (vsi->type != ICE_VSI_PF) {\n+\t\tNL_SET_ERR_MSG_MOD(xdp->extack,\n+\t\t\t\t   \"XDP can be loaded only on PF VSI\");\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tswitch (xdp->command) {\n+\tcase XDP_SETUP_PROG:\n+\t\treturn ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack);\n+\tcase XDP_QUERY_PROG:\n+\t\txdp->prog_id = vsi->xdp_prog ? vsi->xdp_prog->aux->id : 0;\n+\t\treturn 0;\n+\tdefault:\n+\t\tNL_SET_ERR_MSG_MOD(xdp->extack, \"Unknown XDP command\");\n+\t\treturn -EINVAL;\n+\t}\n+}\n+\n /**\n  * ice_ena_misc_vector - enable the non-queue interrupts\n  * @pf: board private structure\n@@ -2972,6 +3270,8 @@ int ice_vsi_cfg(struct ice_vsi *vsi)\n \tice_vsi_cfg_dcb_rings(vsi);\n \n \terr = ice_vsi_cfg_lan_txqs(vsi);\n+\tif (!err && ice_is_xdp_ena_vsi(vsi))\n+\t\terr = ice_vsi_cfg_xdp_txqs(vsi);\n \tif (!err)\n \t\terr = ice_vsi_cfg_rxqs(vsi);\n \n@@ -3473,6 +3773,13 @@ int ice_down(struct ice_vsi *vsi)\n \t\tnetdev_err(vsi->netdev,\n \t\t\t   \"Failed stop Tx rings, VSI %d error %d\\n\",\n \t\t\t   vsi->vsi_num, tx_err);\n+\tif (!tx_err && ice_is_xdp_ena_vsi(vsi)) {\n+\t\ttx_err = ice_vsi_stop_xdp_tx_rings(vsi);\n+\t\tif (tx_err)\n+\t\t\tnetdev_err(vsi->netdev,\n+\t\t\t\t   \"Failed stop XDP rings, VSI %d error %d\\n\",\n+\t\t\t\t   vsi->vsi_num, tx_err);\n+\t}\n \n \trx_err = ice_vsi_stop_rx_rings(vsi);\n \tif (rx_err)\n@@ -3911,6 +4218,16 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)\n \t\treturn 0;\n \t}\n \n+\tif (ice_is_xdp_ena_vsi(vsi)) {\n+\t\tint eth_overhead = ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN;\n+\n+\t\tif (new_mtu + eth_overhead > ICE_RXBUF_2048) {\n+\t\t\tnetdev_err(netdev, \"max MTU for XDP usage is %d\\n\",\n+\t\t\t\t   ICE_RXBUF_2048 - eth_overhead);\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t}\n+\n \tif (new_mtu < netdev->min_mtu) {\n \t\tnetdev_err(netdev, \"new MTU invalid. 
min_mtu is %d\\n\",\n \t\t\t   netdev->min_mtu);\n@@ -4412,4 +4729,6 @@ static const struct net_device_ops ice_netdev_ops = {\n \t.ndo_fdb_add = ice_fdb_add,\n \t.ndo_fdb_del = ice_fdb_del,\n \t.ndo_tx_timeout = ice_tx_timeout,\n+\t.ndo_bpf = ice_xdp,\n+\t.ndo_xdp_xmit = ice_xdp_xmit,\n };\ndiff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c\nindex 3c83230434b6..0ed35cac8d60 100644\n--- a/drivers/net/ethernet/intel/ice/ice_txrx.c\n+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c\n@@ -5,11 +5,24 @@\n \n #include <linux/prefetch.h>\n #include <linux/mm.h>\n+#include <linux/bpf_trace.h>\n+#include <net/xdp.h>\n #include \"ice.h\"\n #include \"ice_dcb_lib.h\"\n \n #define ICE_RX_HDR_SIZE\t\t256\n \n+/* helper function for building cmd/type/offset */\n+static __le64\n+ice_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, u64 td_tag)\n+{\n+\treturn cpu_to_le64(ICE_TX_DESC_DTYPE_DATA |\n+\t\t\t   (td_cmd    << ICE_TXD_QW1_CMD_S) |\n+\t\t\t   (td_offset << ICE_TXD_QW1_OFFSET_S) |\n+\t\t\t   ((u64)size << ICE_TXD_QW1_TX_BUF_SZ_S) |\n+\t\t\t   (td_tag    << ICE_TXD_QW1_L2TAG1_S));\n+}\n+\n /**\n  * ice_unmap_and_free_tx_buf - Release a Tx buffer\n  * @ring: the ring that owns the buffer\n@@ -19,7 +32,10 @@ static void\n ice_unmap_and_free_tx_buf(struct ice_ring *ring, struct ice_tx_buf *tx_buf)\n {\n \tif (tx_buf->skb) {\n-\t\tdev_kfree_skb_any(tx_buf->skb);\n+\t\tif (ice_ring_is_xdp(ring))\n+\t\t\tpage_frag_free(tx_buf->raw_buf);\n+\t\telse\n+\t\t\tdev_kfree_skb_any(tx_buf->skb);\n \t\tif (dma_unmap_len(tx_buf, len))\n \t\t\tdma_unmap_single(ring->dev,\n \t\t\t\t\t dma_unmap_addr(tx_buf, dma),\n@@ -135,8 +151,11 @@ ice_clean_tx_irq(struct ice_vsi *vsi, struct ice_ring *tx_ring, int napi_budget)\n \t\ttotal_bytes += tx_buf->bytecount;\n \t\ttotal_pkts += tx_buf->gso_segs;\n \n-\t\t/* free the skb */\n-\t\tnapi_consume_skb(tx_buf->skb, napi_budget);\n+\t\tif (ice_ring_is_xdp(tx_ring))\n+\t\t\tpage_frag_free(tx_buf->raw_buf);\n+\t\telse\n+\t\t\t/* free the skb */\n+\t\t\tnapi_consume_skb(tx_buf->skb, napi_budget);\n \n \t\t/* unmap skb header data */\n \t\tdma_unmap_single(tx_ring->dev,\n@@ -194,6 +213,9 @@ ice_clean_tx_irq(struct ice_vsi *vsi, struct ice_ring *tx_ring, int napi_budget)\n \ttx_ring->q_vector->tx.total_bytes += total_bytes;\n \ttx_ring->q_vector->tx.total_pkts += total_pkts;\n \n+\tif (ice_ring_is_xdp(tx_ring))\n+\t\treturn !!budget;\n+\n \tnetdev_tx_completed_queue(txring_txq(tx_ring), total_pkts,\n \t\t\t\t  total_bytes);\n \n@@ -318,6 +340,10 @@ void ice_clean_rx_ring(struct ice_ring *rx_ring)\n void ice_free_rx_ring(struct ice_ring *rx_ring)\n {\n \tice_clean_rx_ring(rx_ring);\n+\tif (rx_ring->vsi->type == ICE_VSI_PF)\n+\t\tif (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))\n+\t\t\txdp_rxq_info_unreg(&rx_ring->xdp_rxq);\n+\trx_ring->xdp_prog = NULL;\n \tdevm_kfree(rx_ring->dev, rx_ring->rx_buf);\n \trx_ring->rx_buf = NULL;\n \n@@ -362,6 +388,12 @@ int ice_setup_rx_ring(struct ice_ring *rx_ring)\n \n \trx_ring->next_to_use = 0;\n \trx_ring->next_to_clean = 0;\n+\n+\tif (rx_ring->vsi->type == ICE_VSI_PF &&\n+\t    !xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))\n+\t\tif (xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,\n+\t\t\t\t     rx_ring->q_index))\n+\t\t\tgoto err;\n \treturn 0;\n \n err:\n@@ -391,6 +423,200 @@ static void ice_release_rx_desc(struct ice_ring *rx_ring, u32 val)\n \twritel(val, rx_ring->tail);\n }\n \n+/**\n+ * ice_rx_offset - Return expected offset into page to access data\n+ * @rx_ring: Ring we are requesting offset 
of\n+ *\n+ * Returns the offset value for ring into the data buffer.\n+ */\n+static unsigned int ice_rx_offset(struct ice_ring *rx_ring)\n+{\n+\treturn ice_is_xdp_ena_vsi(rx_ring->vsi) ? XDP_PACKET_HEADROOM : 0;\n+}\n+\n+/**\n+ * ice_xdp_ring_update_tail - Updates the XDP Tx ring tail register\n+ * @xdp_ring: XDP Tx ring\n+ *\n+ * This function updates the XDP Tx ring tail register.\n+ */\n+static void ice_xdp_ring_update_tail(struct ice_ring *xdp_ring)\n+{\n+\t/* Force memory writes to complete before letting h/w\n+\t * know there are new descriptors to fetch.\n+\t */\n+\twmb();\n+\twritel_relaxed(xdp_ring->next_to_use, xdp_ring->tail);\n+}\n+\n+/**\n+ * ice_xmit_xdp_ring - submit single packet to XDP ring for transmission\n+ * @data: packet data pointer\n+ * @size: packet data size\n+ * @xdp_ring: XDP ring for transmission\n+ */\n+static int\n+ice_xmit_xdp_ring(void *data, u16 size, struct ice_ring *xdp_ring)\n+{\n+\tu16 i = xdp_ring->next_to_use;\n+\tstruct ice_tx_desc *tx_desc;\n+\tstruct ice_tx_buf *tx_buf;\n+\tdma_addr_t dma;\n+\n+\tif (!unlikely(ICE_DESC_UNUSED(xdp_ring))) {\n+\t\txdp_ring->tx_stats.tx_busy++;\n+\t\treturn ICE_XDP_CONSUMED;\n+\t}\n+\n+\tdma = dma_map_single(xdp_ring->dev, data, size, DMA_TO_DEVICE);\n+\tif (dma_mapping_error(xdp_ring->dev, dma))\n+\t\treturn ICE_XDP_CONSUMED;\n+\n+\ttx_buf = &xdp_ring->tx_buf[i];\n+\ttx_buf->bytecount = size;\n+\ttx_buf->gso_segs = 1;\n+\ttx_buf->raw_buf = data;\n+\n+\t/* record length, and DMA address */\n+\tdma_unmap_len_set(tx_buf, len, size);\n+\tdma_unmap_addr_set(tx_buf, dma, dma);\n+\n+\ttx_desc = ICE_TX_DESC(xdp_ring, i);\n+\ttx_desc->buf_addr = cpu_to_le64(dma);\n+\ttx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TXD_CMD, 0, size, 0);\n+\n+\t/* Make certain all of the status bits have been updated\n+\t * before next_to_watch is written.\n+\t */\n+\tsmp_wmb();\n+\n+\ti++;\n+\tif (i == xdp_ring->count)\n+\t\ti = 0;\n+\n+\ttx_buf->next_to_watch = tx_desc;\n+\txdp_ring->next_to_use = i;\n+\n+\treturn ICE_XDP_TX;\n+}\n+\n+/**\n+ * ice_run_xdp - Executes an XDP program on initialized xdp_buff\n+ * @rx_ring: Rx ring\n+ * @xdp: xdp_buff used as input to the XDP program\n+ * @xdp_prog: XDP program to run\n+ *\n+ * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}\n+ */\n+static int\n+ice_run_xdp(struct ice_ring *rx_ring, struct xdp_buff *xdp,\n+\t    struct bpf_prog *xdp_prog)\n+{\n+\tint err, result = ICE_XDP_PASS;\n+\tstruct ice_ring *xdp_ring;\n+\tu32 act;\n+\n+\tact = bpf_prog_run_xdp(xdp_prog, xdp);\n+\tswitch (act) {\n+\tcase XDP_PASS:\n+\t\tbreak;\n+\tcase XDP_TX:\n+\t\txdp_ring = rx_ring->vsi->xdp_rings[rx_ring->q_index];\n+\t\tresult =\n+\t\t\tice_xmit_xdp_ring(xdp->data,\n+\t\t\t\t\t  (u8 *)xdp->data_end - (u8 *)xdp->data,\n+\t\t\t\t\t  xdp_ring);\n+\t\tbreak;\n+\tcase XDP_REDIRECT:\n+\t\terr = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);\n+\t\tresult = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;\n+\t\tbreak;\n+\tdefault:\n+\t\tbpf_warn_invalid_xdp_action(act);\n+\t\t/* fallthrough -- not supported action */\n+\tcase XDP_ABORTED:\n+\t\ttrace_xdp_exception(rx_ring->netdev, xdp_prog, act);\n+\t\t/* fallthrough -- handle aborts by dropping frame */\n+\tcase XDP_DROP:\n+\t\tresult = ICE_XDP_CONSUMED;\n+\t\tbreak;\n+\t}\n+\n+\treturn result;\n+}\n+\n+/**\n+ * ice_xdp_xmit - submit packets to XDP ring for transmission\n+ * @dev: netdev\n+ * @n: number of XDP frames to be transmitted\n+ * @frames: XDP frames to be transmitted\n+ * @flags: transmit flags\n+ *\n+ * Returns number of frames successfully sent. 
Frames that fail are\n+ * free'ed via XDP return API.\n+ * For error cases, a negative errno code is returned and no-frames\n+ * are transmitted (caller must handle freeing frames).\n+ */\n+int\n+ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,\n+\t     u32 flags)\n+{\n+\tstruct ice_netdev_priv *np = netdev_priv(dev);\n+\tunsigned int queue_index = smp_processor_id();\n+\tstruct ice_vsi *vsi = np->vsi;\n+\tstruct ice_ring *xdp_ring;\n+\tint drops = 0, i;\n+\n+\tif (test_bit(__ICE_DOWN, vsi->state))\n+\t\treturn -ENETDOWN;\n+\n+\tif (!ice_is_xdp_ena_vsi(vsi) || queue_index >= vsi->num_xdp_txq)\n+\t\treturn -ENXIO;\n+\n+\tif (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))\n+\t\treturn -EINVAL;\n+\n+\txdp_ring = vsi->xdp_rings[queue_index];\n+\tfor (i = 0; i < n; i++) {\n+\t\tstruct xdp_frame *xdpf = frames[i];\n+\t\tint err;\n+\n+\t\terr = ice_xmit_xdp_ring(xdpf->data, xdpf->len, xdp_ring);\n+\t\tif (err != ICE_XDP_TX) {\n+\t\t\txdp_return_frame_rx_napi(xdpf);\n+\t\t\tdrops++;\n+\t\t}\n+\t}\n+\n+\tif (unlikely(flags & XDP_XMIT_FLUSH))\n+\t\tice_xdp_ring_update_tail(xdp_ring);\n+\n+\treturn n - drops;\n+}\n+\n+/**\n+ * ice_finalize_xdp_rx - Bump XDP Tx tail and/or flush redirect map\n+ * @rx_ring: Rx ring\n+ * @xdp_res: Result of the receive batch\n+ *\n+ * This function bumps XDP Tx tail and/or flush redirect map, and\n+ * should be called when a batch of packets has been processed in the\n+ * napi loop.\n+ */\n+static void\n+ice_finalize_xdp_rx(struct ice_ring *rx_ring, unsigned int xdp_res)\n+{\n+\tif (xdp_res & ICE_XDP_REDIR)\n+\t\txdp_do_flush_map();\n+\n+\tif (xdp_res & ICE_XDP_TX) {\n+\t\tstruct ice_ring *xdp_ring =\n+\t\t\trx_ring->vsi->xdp_rings[rx_ring->q_index];\n+\n+\t\tice_xdp_ring_update_tail(xdp_ring);\n+\t}\n+}\n+\n /**\n  * ice_alloc_mapped_page - recycle or make a new page\n  * @rx_ring: ring to use\n@@ -433,7 +659,7 @@ ice_alloc_mapped_page(struct ice_ring *rx_ring, struct ice_rx_buf *bi)\n \n \tbi->dma = dma;\n \tbi->page = page;\n-\tbi->page_offset = 0;\n+\tbi->page_offset = ice_rx_offset(rx_ring);\n \tpage_ref_add(page, USHRT_MAX - 1);\n \tbi->pagecnt_bias = USHRT_MAX;\n \n@@ -669,7 +895,7 @@ ice_get_rx_buf(struct ice_ring *rx_ring, struct sk_buff **skb,\n  * ice_construct_skb - Allocate skb and populate it\n  * @rx_ring: Rx descriptor ring to transact packets on\n  * @rx_buf: Rx buffer to pull data from\n- * @size: the length of the packet\n+ * @xdp: xdp_buff pointing to the data\n  *\n  * This function allocates an skb. 
It then populates it with the page\n  * data from the current receive descriptor, taking care to set up the\n@@ -677,16 +903,16 @@ ice_get_rx_buf(struct ice_ring *rx_ring, struct sk_buff **skb,\n  */\n static struct sk_buff *\n ice_construct_skb(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,\n-\t\t  unsigned int size)\n+\t\t  struct xdp_buff *xdp)\n {\n-\tvoid *va = page_address(rx_buf->page) + rx_buf->page_offset;\n+\tunsigned int size = (u8 *)xdp->data_end - (u8 *)xdp->data;\n \tunsigned int headlen;\n \tstruct sk_buff *skb;\n \n \t/* prefetch first cache line of first page */\n-\tprefetch(va);\n+\tprefetch(xdp->data);\n #if L1_CACHE_BYTES < 128\n-\tprefetch((u8 *)va + L1_CACHE_BYTES);\n+\tprefetch((void *)((u8 *)xdp->data + L1_CACHE_BYTES));\n #endif /* L1_CACHE_BYTES */\n \n \t/* allocate a skb to store the frags */\n@@ -699,10 +925,11 @@ ice_construct_skb(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,\n \t/* Determine available headroom for copy */\n \theadlen = size;\n \tif (headlen > ICE_RX_HDR_SIZE)\n-\t\theadlen = eth_get_headlen(skb->dev, va, ICE_RX_HDR_SIZE);\n+\t\theadlen = eth_get_headlen(skb->dev, xdp->data, ICE_RX_HDR_SIZE);\n \n \t/* align pull length to size of long to optimize memcpy performance */\n-\tmemcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long)));\n+\tmemcpy(__skb_put(skb, headlen), xdp->data, ALIGN(headlen,\n+\t\t\t\t\t\t\t sizeof(long)));\n \n \t/* if we exhaust the linear part then add what is left as a frag */\n \tsize -= headlen;\n@@ -732,13 +959,20 @@ ice_construct_skb(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,\n  * @rx_ring: Rx descriptor ring to transact packets on\n  * @rx_buf: Rx buffer to pull data from\n  *\n- * This function will  clean up the contents of the rx_buf. It will\n- * either recycle the buffer or unmap it and free the associated resources.\n+ * This function will update next_to_clean and then clean up the contents\n+ * of the rx_buf. It will either recycle the buffer or unmap it and free\n+ * the associated resources.\n  */\n static void ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)\n {\n-\t\t/* hand second half of page back to the ring */\n+\tu32 ntc = rx_ring->next_to_clean + 1;\n+\n+\t/* fetch, update, and store next to clean */\n+\tntc = (ntc < rx_ring->count) ? ntc : 0;\n+\trx_ring->next_to_clean = ntc;\n+\n \tif (ice_can_reuse_rx_page(rx_buf)) {\n+\t\t/* hand second half of page back to the ring */\n \t\tice_reuse_rx_page(rx_ring, rx_buf);\n \t\trx_ring->rx_stats.page_reuse_count++;\n \t} else {\n@@ -797,30 +1031,20 @@ ice_test_staterr(union ice_32b_rx_flex_desc *rx_desc, const u16 stat_err_bits)\n  * @rx_desc: Rx descriptor for current buffer\n  * @skb: Current socket buffer containing buffer in progress\n  *\n- * This function updates next to clean. If the buffer is an EOP buffer\n- * this function exits returning false, otherwise it will place the\n- * sk_buff in the next buffer to be chained and return true indicating\n- * that this is in fact a non-EOP buffer.\n+ * If the buffer is an EOP buffer, this function exits returning false,\n+ * otherwise return true indicating that this is in fact a non-EOP buffer.\n  */\n static bool\n ice_is_non_eop(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,\n \t       struct sk_buff *skb)\n {\n-\tu32 ntc = rx_ring->next_to_clean + 1;\n-\n-\t/* fetch, update, and store next to clean */\n-\tntc = (ntc < rx_ring->count) ? 
ntc : 0;\n-\trx_ring->next_to_clean = ntc;\n-\n-\tprefetch(ICE_RX_DESC(rx_ring, ntc));\n-\n \t/* if we are the last buffer then there is nothing else to do */\n #define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S)\n \tif (likely(ice_test_staterr(rx_desc, ICE_RXD_EOF)))\n \t\treturn false;\n \n \t/* place skb in next buffer to be received */\n-\trx_ring->rx_buf[ntc].skb = skb;\n+\trx_ring->rx_buf[rx_ring->next_to_clean].skb = skb;\n \trx_ring->rx_stats.non_eop_descs++;\n \n \treturn true;\n@@ -990,7 +1214,12 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)\n {\n \tunsigned int total_rx_bytes = 0, total_rx_pkts = 0;\n \tu16 cleaned_count = ICE_DESC_UNUSED(rx_ring);\n+\tunsigned int xdp_res, xdp_xmit = 0;\n+\tstruct bpf_prog *xdp_prog;\n \tbool failure = false;\n+\tstruct xdp_buff xdp;\n+\n+\txdp.rxq = &rx_ring->xdp_rxq;\n \n \t/* start the loop to process Rx packets bounded by 'budget' */\n \twhile (likely(total_rx_pkts < (unsigned int)budget)) {\n@@ -1030,12 +1259,46 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)\n \t\tsize = le16_to_cpu(rx_desc->wb.pkt_len) &\n \t\t\tICE_RX_FLX_DESC_PKT_LEN_M;\n \n+\t\tif (!size)\n+\t\t\tbreak;\n+\n+\t\t/* retrieve a buffer from the ring */\n \t\trx_buf = ice_get_rx_buf(rx_ring, &skb, size);\n-\t\t/* allocate (if needed) and populate skb */\n+\n+\t\txdp.data = page_address(rx_buf->page) + rx_buf->page_offset;\n+\t\txdp.data_hard_start = (u8 *)xdp.data - ice_rx_offset(rx_ring);\n+\t\txdp_set_data_meta_invalid(&xdp);\n+\t\txdp.data_end = (u8 *)xdp.data + size;\n+\n+\t\trcu_read_lock();\n+\t\txdp_prog = READ_ONCE(rx_ring->xdp_prog);\n+\t\tif (!xdp_prog) {\n+\t\t\trcu_read_unlock();\n+\t\t\tgoto construct_skb;\n+\t\t}\n+\n+\t\txdp_res = ice_run_xdp(rx_ring, &xdp, xdp_prog);\n+\t\trcu_read_unlock();\n+\t\tif (xdp_res) {\n+\t\t\tif (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR)) {\n+\t\t\t\txdp_xmit |= xdp_res;\n+\t\t\t\tice_rx_buf_adjust_pg_offset(rx_buf,\n+\t\t\t\t\t\t\t    ICE_RXBUF_2048);\n+\t\t\t} else {\n+\t\t\t\trx_buf->pagecnt_bias++;\n+\t\t\t}\n+\t\t\ttotal_rx_bytes += size;\n+\t\t\ttotal_rx_pkts++;\n+\n+\t\t\tcleaned_count++;\n+\t\t\tice_put_rx_buf(rx_ring, rx_buf);\n+\t\t\tcontinue;\n+\t\t}\n+construct_skb:\n \t\tif (skb)\n \t\t\tice_add_rx_frag(rx_buf, skb, size);\n \t\telse\n-\t\t\tskb = ice_construct_skb(rx_ring, rx_buf, size);\n+\t\t\tskb = ice_construct_skb(rx_ring, rx_buf, &xdp);\n \n \t\t/* exit if we failed to retrieve a buffer */\n \t\tif (!skb) {\n@@ -1085,6 +1348,8 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)\n \t\ttotal_rx_pkts++;\n \t}\n \n+\tice_finalize_xdp_rx(rx_ring, xdp_xmit);\n+\n \t/* update queue and vector specific stats */\n \tu64_stats_update_begin(&rx_ring->syncp);\n \trx_ring->stats.pkts += total_rx_pkts;\n@@ -1456,17 +1721,6 @@ int ice_napi_poll(struct napi_struct *napi, int budget)\n \treturn min_t(int, work_done, budget - 1);\n }\n \n-/* helper function for building cmd/type/offset */\n-static __le64\n-build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, u64 td_tag)\n-{\n-\treturn cpu_to_le64(ICE_TX_DESC_DTYPE_DATA |\n-\t\t\t   (td_cmd    << ICE_TXD_QW1_CMD_S) |\n-\t\t\t   (td_offset << ICE_TXD_QW1_OFFSET_S) |\n-\t\t\t   ((u64)size << ICE_TXD_QW1_TX_BUF_SZ_S) |\n-\t\t\t   (td_tag    << ICE_TXD_QW1_L2TAG1_S));\n-}\n-\n /**\n  * __ice_maybe_stop_tx - 2nd level check for Tx stop conditions\n  * @tx_ring: the ring to be checked\n@@ -1567,7 +1821,8 @@ ice_tx_map(struct ice_ring *tx_ring, struct ice_tx_buf *first,\n \t\t */\n \t\twhile (unlikely(size > 
ICE_MAX_DATA_PER_TXD)) {\n \t\t\ttx_desc->cmd_type_offset_bsz =\n-\t\t\t\tbuild_ctob(td_cmd, td_offset, max_data, td_tag);\n+\t\t\t\tice_build_ctob(td_cmd, td_offset, max_data,\n+\t\t\t\t\t       td_tag);\n \n \t\t\ttx_desc++;\n \t\t\ti++;\n@@ -1587,8 +1842,8 @@ ice_tx_map(struct ice_ring *tx_ring, struct ice_tx_buf *first,\n \t\tif (likely(!data_len))\n \t\t\tbreak;\n \n-\t\ttx_desc->cmd_type_offset_bsz = build_ctob(td_cmd, td_offset,\n-\t\t\t\t\t\t\t  size, td_tag);\n+\t\ttx_desc->cmd_type_offset_bsz = ice_build_ctob(td_cmd, td_offset,\n+\t\t\t\t\t\t\t      size, td_tag);\n \n \t\ttx_desc++;\n \t\ti++;\n@@ -1620,7 +1875,7 @@ ice_tx_map(struct ice_ring *tx_ring, struct ice_tx_buf *first,\n \t/* write last descriptor with RS and EOP bits */\n \ttd_cmd |= (u64)(ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS);\n \ttx_desc->cmd_type_offset_bsz =\n-\t\t\tbuild_ctob(td_cmd, td_offset, size, td_tag);\n+\t\t\tice_build_ctob(td_cmd, td_offset, size, td_tag);\n \n \t/* Force memory writes to complete before letting h/w know there\n \t * are new descriptors to fetch.\ndiff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h\nindex ec76aba347b9..355834b4abda 100644\n--- a/drivers/net/ethernet/intel/ice/ice_txrx.h\n+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h\n@@ -44,17 +44,33 @@\n #define ICE_TX_FLAGS_TSO\tBIT(0)\n #define ICE_TX_FLAGS_HW_VLAN\tBIT(1)\n #define ICE_TX_FLAGS_SW_VLAN\tBIT(2)\n+/* ICE_TX_FLAGS_RING_XDP is used to indicate that whole ring is dedicated for\n+ * XDP purposes; at this point struct ice_ring doesn't have an appropriate\n+ * field that could be used for setting this flag, so let's use the tx_flags\n+ * field of the first ice_tx_buf from ice_ring\n+ */\n+#define ICE_TX_FLAGS_RING_XDP\tBIT(8)\n #define ICE_TX_FLAGS_VLAN_M\t0xffff0000\n #define ICE_TX_FLAGS_VLAN_PR_M\t0xe0000000\n #define ICE_TX_FLAGS_VLAN_PR_S\t29\n #define ICE_TX_FLAGS_VLAN_S\t16\n \n+#define ICE_XDP_PASS\t\t0\n+#define ICE_XDP_CONSUMED\tBIT(0)\n+#define ICE_XDP_TX\t\tBIT(1)\n+#define ICE_XDP_REDIR\t\tBIT(2)\n+\n #define ICE_RX_DMA_ATTR \\\n \t(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)\n \n+#define ICE_TXD_CMD (ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS)\n+\n struct ice_tx_buf {\n \tstruct ice_tx_desc *next_to_watch;\n-\tstruct sk_buff *skb;\n+\tunion {\n+\t\tstruct sk_buff *skb;\n+\t\tvoid *raw_buf; /* used for XDP */\n+\t};\n \tunsigned int bytecount;\n \tunsigned short gso_segs;\n \tu32 tx_flags;\n@@ -185,6 +201,9 @@ struct ice_ring {\n \t};\n \n \tstruct rcu_head rcu;\t\t/* to avoid race on free */\n+\tstruct bpf_prog *xdp_prog;\n+\t/* CL3 - 3rd cacheline starts here */\n+\tstruct xdp_rxq_info xdp_rxq;\n \t/* CLX - the below items are only accessed infrequently and should be\n \t * in their own cache line if possible\n \t */\n@@ -197,6 +216,14 @@ struct ice_ring {\n #endif /* CONFIG_DCB */\n } ____cacheline_internodealigned_in_smp;\n \n+static inline bool ice_ring_is_xdp(struct ice_ring *ring)\n+{\n+\tif (!ring->tx_buf)\n+\t\treturn false;\n+\n+\treturn !!(ring->tx_buf[0].tx_flags & ICE_TX_FLAGS_RING_XDP);\n+}\n+\n struct ice_ring_container {\n \t/* head of linked-list of rings */\n \tstruct ice_ring *ring;\ndiff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\nindex 5d24b539648f..f5eaf3059063 100644\n--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n@@ -2102,6 +2102,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)\n \t\t\tgoto 
error_param;\n \t\t}\n \t\tvsi->rx_buf_len = qpi->rxq.databuffer_size;\n+\t\tvsi->rx_rings[i]->rx_buf_len = vsi->rx_buf_len;\n \t\tif (qpi->rxq.max_pkt_size >= (16 * 1024) ||\n \t\t    qpi->rxq.max_pkt_size < 64) {\n \t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n",
    "prefixes": [
        "v2",
        "1/2"
    ]
}
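
Writes are restricted to project maintainers and typically cover fields such as state, delegate, archived and commit_ref; authentication uses a per-user API token sent in the Authorization header. As a minimal sketch (assuming token authentication and that the target state exists on this instance), a partial update with the Python requests library might look like this:

import requests

url = "http://patchwork.ozlabs.org/api/patches/1122917/"
# Placeholder value; a real maintainer token is obtained from the Patchwork user profile.
headers = {"Authorization": "Token <api-token>"}

# PATCH performs a partial update: only the supplied fields are changed.
resp = requests.patch(url, json={"state": "accepted"}, headers=headers)
resp.raise_for_status()
print(resp.json()["state"])  # "accepted" if the update was permitted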