From patchwork Mon Jun 4 12:06:00 2018
X-Patchwork-Id: 925072
From: Björn Töpel <bjorn.topel@gmail.com>
To: bjorn.topel@gmail.com, magnus.karlsson@intel.com,
	magnus.karlsson@gmail.com, alexander.h.duyck@intel.com,
	alexander.duyck@gmail.com, ast@fb.com, brouer@redhat.com,
	daniel@iogearbox.net, netdev@vger.kernel.org,
	mykyta.iziumtsev@linaro.org
Date: Mon, 4 Jun 2018 14:06:00 +0200
Message-Id: <20180604120601.18123-11-bjorn.topel@gmail.com>
In-Reply-To: <20180604120601.18123-1-bjorn.topel@gmail.com>
References: <20180604120601.18123-1-bjorn.topel@gmail.com>
Subject: [Intel-wired-lan] [PATCH bpf-next 10/11] i40e: implement AF_XDP zero-copy support for Tx
Cc: francois.ozog@linaro.org, willemdebruijn.kernel@gmail.com,
	mst@redhat.com, ilias.apalodimas@linaro.org,
	michael.lundkvist@ericsson.com, brian.brooks@linaro.org,
	intel-wired-lan@lists.osuosl.org, qi.z.zhang@intel.com,
	michael.chan@broadcom.com, andy@greyhouse.net

From: Magnus Karlsson <magnus.karlsson@intel.com>

Here, ndo_xsk_async_xmit is implemented. As a shortcut, the existing
XDP Tx rings are used for zero-copy. As a result, packets that other
devices XDP_REDIRECT to an AF_XDP enabled queue will be dropped. A
sketch of the userspace kick that drives this path follows the diff.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_main.c |   7 +-
 drivers/net/ethernet/intel/i40e/i40e_txrx.c |  93 +++++++++++-------
 drivers/net/ethernet/intel/i40e/i40e_txrx.h |  23 +++++
 drivers/net/ethernet/intel/i40e/i40e_xsk.c  | 140 ++++++++++++++++++++++++++++
 drivers/net/ethernet/intel/i40e/i40e_xsk.h  |   2 +
 include/net/xdp_sock.h                      |  14 +++
 6 files changed, 242 insertions(+), 37 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 8c602424d339..98c18c41809d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -3073,8 +3073,12 @@ static int i40e_configure_tx_ring(struct i40e_ring *ring)
 	i40e_status err = 0;
 	u32 qtx_ctl = 0;
 
-	if (ring_is_xdp(ring))
+	ring->clean_tx_irq = i40e_clean_tx_irq;
+	if (ring_is_xdp(ring)) {
 		ring->xsk_umem = i40e_xsk_umem(ring);
+		if (ring->xsk_umem)
+			ring->clean_tx_irq = i40e_clean_tx_irq_zc;
+	}
 
 	/* some ATR related tx ring init */
 	if (vsi->back->flags & I40E_FLAG_FD_ATR_ENABLED) {
@@ -12162,6 +12166,7 @@ static const struct net_device_ops i40e_netdev_ops = {
 	.ndo_bpf		= i40e_xdp,
 	.ndo_xdp_xmit		= i40e_xdp_xmit,
 	.ndo_xdp_flush		= i40e_xdp_flush,
+	.ndo_xsk_async_xmit	= i40e_xsk_async_xmit,
 };
 
 /**
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 6b1142fbc697..923bb84a93ab 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -10,16 +10,6 @@
 #include "i40e_trace.h"
 #include "i40e_prototype.h"
 
-static inline __le64 build_ctob(u32 td_cmd, u32 td_offset, unsigned int size,
-				u32 td_tag)
-{
-	return cpu_to_le64(I40E_TX_DESC_DTYPE_DATA |
-			   ((u64)td_cmd << I40E_TXD_QW1_CMD_SHIFT) |
-			   ((u64)td_offset << I40E_TXD_QW1_OFFSET_SHIFT) |
-			   ((u64)size << I40E_TXD_QW1_TX_BUF_SZ_SHIFT) |
-			   ((u64)td_tag << I40E_TXD_QW1_L2TAG1_SHIFT));
-}
-
 #define I40E_TXD_CMD (I40E_TX_DESC_CMD_EOP | I40E_TX_DESC_CMD_RS)
 /**
  * i40e_fdir - Generate a Flow Director descriptor based on fdata
@@ -649,9 +639,13 @@ void i40e_clean_tx_ring(struct i40e_ring *tx_ring)
 	if (!tx_ring->tx_bi)
 		return;
 
-	/* Free all the Tx ring sk_buffs */
-	for (i = 0; i < tx_ring->count; i++)
-		i40e_unmap_and_free_tx_resource(tx_ring, &tx_ring->tx_bi[i]);
+	/* Cleanup only needed for non XSK TX ZC rings */
+	if (!tx_ring->xsk_umem) {
+		/* Free all the Tx ring sk_buffs */
+		for (i = 0; i < tx_ring->count; i++)
+			i40e_unmap_and_free_tx_resource(tx_ring,
+							&tx_ring->tx_bi[i]);
+	}
 
 	bi_size = sizeof(struct i40e_tx_buffer) * tx_ring->count;
 	memset(tx_ring->tx_bi, 0, bi_size);
@@ -768,8 +762,40 @@ void i40e_detect_recover_hung(struct i40e_vsi *vsi)
 	}
 }
 
+void i40e_update_tx_stats(struct i40e_ring *tx_ring,
+			  unsigned int total_packets,
+			  unsigned int total_bytes)
+{
+	u64_stats_update_begin(&tx_ring->syncp);
+	tx_ring->stats.bytes += total_bytes;
+	tx_ring->stats.packets += total_packets;
+	u64_stats_update_end(&tx_ring->syncp);
+	tx_ring->q_vector->tx.total_bytes += total_bytes;
+	tx_ring->q_vector->tx.total_packets += total_packets;
+}
+
 #define WB_STRIDE 4
+void i40e_arm_wb(struct i40e_ring *tx_ring,
+		 struct i40e_vsi *vsi,
+		 int budget)
+{
+	if (tx_ring->flags & I40E_TXR_FLAGS_WB_ON_ITR) {
+		/* check to see if there are < 4 descriptors
+		 * waiting to be written back, then kick the hardware to force
+		 * them to be written back in case we stay in NAPI.
+		 * In this mode on X722 we do not enable Interrupt.
+		 */
+		unsigned int j = i40e_get_tx_pending(tx_ring, false);
+
+		if (budget &&
+		    ((j / WB_STRIDE) == 0) && (j > 0) &&
+		    !test_bit(__I40E_VSI_DOWN, vsi->state) &&
+		    (I40E_DESC_UNUSED(tx_ring) != tx_ring->count))
+			tx_ring->arm_wb = true;
+	}
+}
+
 /**
  * i40e_clean_tx_irq - Reclaim resources after transmit completes
  * @vsi: the VSI we care about
@@ -778,8 +804,8 @@ void i40e_detect_recover_hung(struct i40e_vsi *vsi)
  *
  * Returns true if there's any budget left (e.g. the clean is finished)
  **/
-static bool i40e_clean_tx_irq(struct i40e_vsi *vsi,
-			      struct i40e_ring *tx_ring, int napi_budget)
+bool i40e_clean_tx_irq(struct i40e_vsi *vsi,
+		       struct i40e_ring *tx_ring, int napi_budget)
 {
 	u16 i = tx_ring->next_to_clean;
 	struct i40e_tx_buffer *tx_buf;
@@ -874,27 +900,9 @@ static bool i40e_clean_tx_irq(struct i40e_vsi *vsi,
 	i += tx_ring->count;
 	tx_ring->next_to_clean = i;
 
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->stats.bytes += total_bytes;
-	tx_ring->stats.packets += total_packets;
-	u64_stats_update_end(&tx_ring->syncp);
-	tx_ring->q_vector->tx.total_bytes += total_bytes;
-	tx_ring->q_vector->tx.total_packets += total_packets;
-
-	if (tx_ring->flags & I40E_TXR_FLAGS_WB_ON_ITR) {
-		/* check to see if there are < 4 descriptors
-		 * waiting to be written back, then kick the hardware to force
-		 * them to be written back in case we stay in NAPI.
-		 * In this mode on X722 we do not enable Interrupt.
-		 */
-		unsigned int j = i40e_get_tx_pending(tx_ring, false);
-
-		if (budget &&
-		    ((j / WB_STRIDE) == 0) && (j > 0) &&
-		    !test_bit(__I40E_VSI_DOWN, vsi->state) &&
-		    (I40E_DESC_UNUSED(tx_ring) != tx_ring->count))
-			tx_ring->arm_wb = true;
-	}
+	i40e_update_tx_stats(tx_ring, total_packets, total_bytes);
+	i40e_arm_wb(tx_ring, vsi, budget);
 
 	if (ring_is_xdp(tx_ring))
 		return !!budget;
@@ -2467,10 +2475,11 @@ int i40e_napi_poll(struct napi_struct *napi, int budget)
 	 * budget and be more aggressive about cleaning up the Tx descriptors.
 	 */
 	i40e_for_each_ring(ring, q_vector->tx) {
-		if (!i40e_clean_tx_irq(vsi, ring, budget)) {
+		if (!ring->clean_tx_irq(vsi, ring, budget)) {
 			clean_complete = false;
 			continue;
 		}
+
 		arm_wb |= ring->arm_wb;
 		ring->arm_wb = false;
 	}
@@ -3595,6 +3604,12 @@ int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs)
 		return -ENXIO;
 
+	/* NB! For now, AF_XDP zero-copy hijacks the XDP ring, and
+	 * will drop incoming packets redirected by other devices!
+	 */
+	if (vsi->xdp_rings[queue_index]->xsk_umem)
+		return -ENXIO;
+
 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
 		return -EINVAL;
 
@@ -3633,5 +3648,11 @@ void i40e_xdp_flush(struct net_device *dev)
 	if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs)
 		return;
 
+	/* NB! For now, AF_XDP zero-copy hijacks the XDP ring, and
+	 * will drop incoming packets redirected by other devices!
+	 */
+	if (vsi->xdp_rings[queue_index]->xsk_umem)
+		return;
+
 	i40e_xdp_ring_update_tail(vsi->xdp_rings[queue_index]);
 }
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index cddb185cd2f8..b9c42c352a8d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -426,6 +426,8 @@ struct i40e_ring {
 
 	int (*clean_rx_irq)(struct i40e_ring *ring, int budget);
 	bool (*alloc_rx_buffers)(struct i40e_ring *ring, u16 n);
+	bool (*clean_tx_irq)(struct i40e_vsi *vsi, struct i40e_ring *ring,
+			     int budget);
 	struct xdp_umem *xsk_umem;
 
 	struct zero_copy_allocator zca; /* ZC allocator anchor */
@@ -506,6 +508,9 @@ int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 		  u32 flags);
 void i40e_xdp_flush(struct net_device *dev);
 int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget);
+bool i40e_clean_tx_irq(struct i40e_vsi *vsi,
+		       struct i40e_ring *tx_ring, int napi_budget);
+int i40e_xsk_async_xmit(struct net_device *dev, u32 queue_id);
 
 /**
  * i40e_get_head - Retrieve head from head writeback
@@ -687,6 +692,16 @@ static inline void i40e_xdp_ring_update_tail(struct i40e_ring *xdp_ring)
 	writel_relaxed(xdp_ring->next_to_use, xdp_ring->tail);
 }
 
+static inline __le64 build_ctob(u32 td_cmd, u32 td_offset, unsigned int size,
+				u32 td_tag)
+{
+	return cpu_to_le64(I40E_TX_DESC_DTYPE_DATA |
+			   ((u64)td_cmd << I40E_TXD_QW1_CMD_SHIFT) |
+			   ((u64)td_offset << I40E_TXD_QW1_OFFSET_SHIFT) |
+			   ((u64)size << I40E_TXD_QW1_TX_BUF_SZ_SHIFT) |
+			   ((u64)td_tag << I40E_TXD_QW1_L2TAG1_SHIFT));
+}
+
 void i40e_fd_handle_status(struct i40e_ring *rx_ring,
 			   union i40e_rx_desc *rx_desc, u8 prog_id);
 int i40e_xmit_xdp_tx_ring(struct xdp_buff *xdp,
@@ -696,4 +711,12 @@ void i40e_process_skb_fields(struct i40e_ring *rx_ring,
 			     u8 rx_ptype);
 void i40e_receive_skb(struct i40e_ring *rx_ring,
 		      struct sk_buff *skb, u16 vlan_tag);
+
+void i40e_update_tx_stats(struct i40e_ring *tx_ring,
+			  unsigned int total_packets,
+			  unsigned int total_bytes);
+void i40e_arm_wb(struct i40e_ring *tx_ring,
+		 struct i40e_vsi *vsi,
+		 int budget);
+
 #endif /* _I40E_TXRX_H_ */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 9d16924415b9..021fec5b5799 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -535,3 +535,143 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 
 	return failure ? budget : (int)total_rx_packets;
 }
+
+/* Returns true if the work is finished */
+static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
+{
+	unsigned int total_packets = 0, total_bytes = 0;
+	struct i40e_tx_buffer *tx_bi;
+	struct i40e_tx_desc *tx_desc;
+	bool work_done = true;
+	dma_addr_t dma;
+	u32 len;
+
+	while (budget-- > 0) {
+		if (!unlikely(I40E_DESC_UNUSED(xdp_ring))) {
+			xdp_ring->tx_stats.tx_busy++;
+			work_done = false;
+			break;
+		}
+
+		if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &dma, &len))
+			break;
+
+		dma_sync_single_for_device(xdp_ring->dev, dma, len,
+					   DMA_BIDIRECTIONAL);
+
+		tx_bi = &xdp_ring->tx_bi[xdp_ring->next_to_use];
+		tx_bi->bytecount = len;
+		tx_bi->gso_segs = 1;
+
+		tx_desc = I40E_TX_DESC(xdp_ring, xdp_ring->next_to_use);
+		tx_desc->buffer_addr = cpu_to_le64(dma);
+		tx_desc->cmd_type_offset_bsz = build_ctob(I40E_TX_DESC_CMD_ICRC
+							  | I40E_TX_DESC_CMD_EOP,
+							  0, len, 0);
+
+		total_packets++;
+		total_bytes += len;
+
+		xdp_ring->next_to_use++;
+		if (xdp_ring->next_to_use == xdp_ring->count)
+			xdp_ring->next_to_use = 0;
+	}
+
+	if (total_packets > 0) {
+		/* Request an interrupt for the last frame and bump tail ptr. */
+		tx_desc->cmd_type_offset_bsz |= (I40E_TX_DESC_CMD_RS <<
+						 I40E_TXD_QW1_CMD_SHIFT);
+		i40e_xdp_ring_update_tail(xdp_ring);
+
+		xsk_umem_consume_tx_done(xdp_ring->xsk_umem);
+		i40e_update_tx_stats(xdp_ring, total_packets, total_bytes);
+	}
+
+	return !!budget && work_done;
+}
+
+bool i40e_clean_tx_irq_zc(struct i40e_vsi *vsi,
+			  struct i40e_ring *tx_ring, int napi_budget)
+{
+	struct xdp_umem *umem = tx_ring->xsk_umem;
+	u32 head_idx = i40e_get_head(tx_ring);
+	unsigned int budget = vsi->work_limit;
+	bool work_done = true, xmit_done;
+	u32 completed_frames;
+	u32 frames_ready;
+
+	if (head_idx < tx_ring->next_to_clean)
+		head_idx += tx_ring->count;
+	frames_ready = head_idx - tx_ring->next_to_clean;
+
+	if (frames_ready == 0) {
+		goto out_xmit;
+	} else if (frames_ready > budget) {
+		completed_frames = budget;
+		work_done = false;
+	} else {
+		completed_frames = frames_ready;
+	}
+
+	tx_ring->next_to_clean += completed_frames;
+	if (unlikely(tx_ring->next_to_clean >= tx_ring->count))
+		tx_ring->next_to_clean -= tx_ring->count;
+
+	xsk_umem_complete_tx(umem, completed_frames);
+
+	i40e_arm_wb(tx_ring, vsi, budget);
+
+out_xmit:
+	xmit_done = i40e_xmit_zc(tx_ring, budget);
+
+	return work_done && xmit_done;
+}
+
+/**
+ * i40e_napi_is_scheduled - If napi is running, set the NAPIF_STATE_MISSED
+ * @n: napi context
+ *
+ * Returns true if NAPI is scheduled.
+ **/
+static bool i40e_napi_is_scheduled(struct napi_struct *n)
+{
+	unsigned long val, new;
+
+	do {
+		val = READ_ONCE(n->state);
+		if (val & NAPIF_STATE_DISABLE)
+			return true;
+
+		if (!(val & NAPIF_STATE_SCHED))
+			return false;
+
+		new = val | NAPIF_STATE_MISSED;
+	} while (cmpxchg(&n->state, val, new) != val);
+
+	return true;
+}
+
+int i40e_xsk_async_xmit(struct net_device *dev, u32 queue_id)
+{
+	struct i40e_netdev_priv *np = netdev_priv(dev);
+	struct i40e_vsi *vsi = np->vsi;
+	struct i40e_ring *ring;
+
+	if (test_bit(__I40E_VSI_DOWN, vsi->state))
+		return -ENETDOWN;
+
+	if (!i40e_enabled_xdp_vsi(vsi))
+		return -ENXIO;
+
+	if (queue_id >= vsi->num_queue_pairs)
+		return -ENXIO;
+
+	if (!vsi->xdp_rings[queue_id]->xsk_umem)
+		return -ENXIO;
+
+	ring = vsi->xdp_rings[queue_id];
+
+	if (!i40e_napi_is_scheduled(&ring->q_vector->napi))
+		i40e_force_wb(vsi, ring->q_vector);
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.h b/drivers/net/ethernet/intel/i40e/i40e_xsk.h
index 757ac5ca8511..bd006f1a4397 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.h
@@ -13,5 +13,7 @@ int i40e_xsk_umem_setup(struct i40e_vsi *vsi, struct xdp_umem *umem,
 void i40e_zca_free(struct zero_copy_allocator *alloc, unsigned long handle);
 bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 cleaned_count);
 int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget);
+bool i40e_clean_tx_irq_zc(struct i40e_vsi *vsi,
+			  struct i40e_ring *tx_ring, int napi_budget);
 
 #endif /* _I40E_XSK_H_ */
diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index ec8fd3314097..63aa05abf11d 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -103,6 +103,20 @@ static inline u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr)
 static inline void xsk_umem_discard_addr(struct xdp_umem *umem)
 {
 }
+
+static inline void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries)
+{
+}
+
+static inline bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma,
+				       u32 *len)
+{
+	return false;
+}
+
+static inline void xsk_umem_consume_tx_done(struct xdp_umem *umem)
+{
+}
 #endif /* CONFIG_XDP_SOCKETS */
 
 static inline char *xdp_umem_get_data(struct xdp_umem *umem, u64 addr)
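--

A minimal userspace sketch of how the new path is driven, for readers
unfamiliar with the AF_XDP send side: a zero-length sendto() on an
AF_XDP socket copies no data, it only tells the kernel that new
descriptors sit on the socket's Tx ring, which for zero-copy mode ends
up in ndo_xsk_async_xmit, i.e. i40e_xsk_async_xmit() above. The xsk_fd
descriptor, the setup around it, and the error policy here are
illustrative assumptions and not part of this patch; see
samples/bpf/xdpsock_user.c for the complete socket/UMEM setup flow.

	#include <errno.h>
	#include <stdio.h>
	#include <sys/socket.h>

	/* Kick Tx on an AF_XDP socket that is already bound to an i40e
	 * queue in zero-copy mode, after descriptors have been placed
	 * on its Tx ring. The kernel side schedules NAPI (or forces a
	 * write-back) so i40e_xmit_zc() can post the frames to the
	 * hardware ring.
	 */
	static void kick_tx(int xsk_fd)
	{
		if (sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0) >= 0)
			return;

		/* EAGAIN/EBUSY/ENOBUFS are transient; retry on the
		 * next completion pass instead of treating as fatal.
		 */
		if (errno != EAGAIN && errno != EBUSY && errno != ENOBUFS)
			perror("AF_XDP Tx kick");
	}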