From patchwork Thu Nov 3 13:58:54 2016
From: Jakub Kicinski
To: netdev@vger.kernel.org
Cc: Jakub Kicinski
Subject: [PATCH net-next 09/13] nfp: reorganize nfp_net_rx() to get packet offsets early
Date: Thu, 3 Nov 2016 13:58:54 +0000
Message-Id: <1478181538-20778-10-git-send-email-jakub.kicinski@netronome.com>
In-Reply-To: <1478181538-20778-1-git-send-email-jakub.kicinski@netronome.com>
References: <1478181538-20778-1-git-send-email-jakub.kicinski@netronome.com>

Calculate packet offsets early in nfp_net_rx() so that we will be able to
use them in the upcoming XDP handler.  While at it, move the relevant
variables into loop scope.
Signed-off-by: Jakub Kicinski
---
 .../net/ethernet/netronome/nfp/nfp_net_common.c    | 56 ++++++++++++----------
 1 file changed, 30 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 506362729607..2ab63661a6fd 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -1383,16 +1383,17 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 {
 	struct nfp_net_r_vector *r_vec = rx_ring->r_vec;
 	struct nfp_net *nn = r_vec->nfp_net;
-	unsigned int data_len, meta_len;
-	struct nfp_net_rx_buf *rxbuf;
-	struct nfp_net_rx_desc *rxd;
-	dma_addr_t new_dma_addr;
 	struct sk_buff *skb;
 	int pkts_polled = 0;
-	void *new_frag;
 	int idx;
 
 	while (pkts_polled < budget) {
+		unsigned int meta_len, data_len, data_off, pkt_len, pkt_off;
+		struct nfp_net_rx_buf *rxbuf;
+		struct nfp_net_rx_desc *rxd;
+		dma_addr_t new_dma_addr;
+		void *new_frag;
+
 		idx = rx_ring->rd_p & (rx_ring->cnt - 1);
 		rxd = &rx_ring->rxds[idx];
 
@@ -1408,22 +1409,6 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 		pkts_polled++;
 
 		rxbuf = &rx_ring->rxbufs[idx];
-		skb = build_skb(rxbuf->frag, nn->fl_bufsz);
-		if (unlikely(!skb)) {
-			nfp_net_rx_drop(r_vec, rx_ring, rxbuf, NULL);
-			continue;
-		}
-		new_frag = nfp_net_napi_alloc_one(nn, &new_dma_addr);
-		if (unlikely(!new_frag)) {
-			nfp_net_rx_drop(r_vec, rx_ring, rxbuf, skb);
-			continue;
-		}
-
-		nfp_net_dma_unmap_rx(nn, rx_ring->rxbufs[idx].dma_addr,
-				     nn->fl_bufsz, DMA_FROM_DEVICE);
-
-		nfp_net_rx_give_one(rx_ring, new_frag, new_dma_addr);
-
 		/*         < meta_len >
 		 *  <-- [rx_offset] -->
 		 * ---------------------------------------------------------
@@ -1438,20 +1423,39 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 		 */
 		meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK;
 		data_len = le16_to_cpu(rxd->rxd.data_len);
+		pkt_len = data_len - meta_len;
 
 		if (nn->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC)
-			skb_reserve(skb, NFP_NET_RX_BUF_HEADROOM + meta_len);
+			pkt_off = meta_len;
 		else
-			skb_reserve(skb,
-				    NFP_NET_RX_BUF_HEADROOM + nn->rx_offset);
-		skb_put(skb, data_len - meta_len);
+			pkt_off = nn->rx_offset;
+		data_off = NFP_NET_RX_BUF_HEADROOM + pkt_off;
 
 		/* Stats update */
 		u64_stats_update_begin(&r_vec->rx_sync);
 		r_vec->rx_pkts++;
-		r_vec->rx_bytes += skb->len;
+		r_vec->rx_bytes += pkt_len;
 		u64_stats_update_end(&r_vec->rx_sync);
 
+		skb = build_skb(rxbuf->frag, nn->fl_bufsz);
+		if (unlikely(!skb)) {
+			nfp_net_rx_drop(r_vec, rx_ring, rxbuf, NULL);
+			continue;
+		}
+		new_frag = nfp_net_napi_alloc_one(nn, &new_dma_addr);
+		if (unlikely(!new_frag)) {
+			nfp_net_rx_drop(r_vec, rx_ring, rxbuf, skb);
+			continue;
+		}
+
+		nfp_net_dma_unmap_rx(nn, rx_ring->rxbufs[idx].dma_addr,
+				     nn->fl_bufsz, DMA_FROM_DEVICE);
+
+		nfp_net_rx_give_one(rx_ring, new_frag, new_dma_addr);
+
+		skb_reserve(skb, data_off);
+		skb_put(skb, pkt_len);
+
 		if (nn->fw_ver.major <= 3) {
 			nfp_net_set_hash_desc(nn->netdev, skb, rxd);
 		} else if (meta_len) {