From patchwork Fri Apr 21 14:20:48 2017
From: Jakub Kicinski
To: netdev@vger.kernel.org
Cc: oss-drivers@netronome.com, kubakici@wp.pl, Jakub Kicinski
Subject: [PATCH net-next 1/5] nfp: make use of the DMA_ATTR_SKIP_CPU_SYNC attr
Date: Fri, 21 Apr 2017 07:20:48 -0700
Message-Id: <20170421142052.107388-2-jakub.kicinski@netronome.com>
In-Reply-To: <20170421142052.107388-1-jakub.kicinski@netronome.com>

DMA unmap may destroy changes the CPU made to the buffer.  To make XDP
run correctly on non-x86 platforms we should use the
DMA_ATTR_SKIP_CPU_SYNC attribute.
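
For readers unfamiliar with the attribute, the pattern it enables looks
roughly like the sketch below.  This is an illustration only, not the
nfp code: the device pointer, buffer and BUF_SIZE are made up, and the
real driver maps at an offset into the page frag and uses
dp->rx_dma_dir rather than a fixed direction.

#include <linux/dma-mapping.h>

#define BUF_SIZE 2048			/* illustrative buffer size */

/* Map the RX buffer without the implicit CPU cache sync. */
static dma_addr_t rx_buf_map(struct device *dev, void *buf)
{
	return dma_map_single_attrs(dev, buf, BUF_SIZE, DMA_FROM_DEVICE,
				    DMA_ATTR_SKIP_CPU_SYNC);
}

/* Sync only the bytes the device actually wrote, right before the CPU
 * (e.g. an XDP program) reads or modifies them.
 */
static void rx_buf_sync_for_cpu(struct device *dev, dma_addr_t addr,
				unsigned int len)
{
	dma_sync_single_for_cpu(dev, addr, len, DMA_FROM_DEVICE);
}

/* Unmap without syncing again, so the changes the CPU made after the
 * sync above are not destroyed on non-coherent (non-x86) platforms.
 */
static void rx_buf_unmap(struct device *dev, dma_addr_t addr)
{
	dma_unmap_single_attrs(dev, addr, BUF_SIZE, DMA_FROM_DEVICE,
			       DMA_ATTR_SKIP_CPU_SYNC);
}

In the patch below the map/unmap side is handled by
nfp_net_dma_map_rx()/nfp_net_dma_unmap_rx() and the sync side by the
new nfp_net_dma_sync_cpu_rx() helper.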

Thanks to using the attribute we can now push the sync operation to the
common code path from the XDP handler.  A little bit of variable
renaming is required to bring the code back to a readable state.

Signed-off-by: Jakub Kicinski
---
 .../net/ethernet/netronome/nfp/nfp_net_common.c | 43 +++++++++++++---------
 1 file changed, 25 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index e2197160e4dc..1274a70c9a38 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -87,16 +87,23 @@ void nfp_net_get_fw_version(struct nfp_net_fw_version *fw_ver,
 
 static dma_addr_t nfp_net_dma_map_rx(struct nfp_net_dp *dp, void *frag)
 {
-	return dma_map_single(dp->dev, frag + NFP_NET_RX_BUF_HEADROOM,
-			      dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA,
-			      dp->rx_dma_dir);
+	return dma_map_single_attrs(dp->dev, frag + NFP_NET_RX_BUF_HEADROOM,
+				    dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA,
+				    dp->rx_dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
 }
 
 static void nfp_net_dma_unmap_rx(struct nfp_net_dp *dp, dma_addr_t dma_addr)
 {
-	dma_unmap_single(dp->dev, dma_addr,
-			 dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA,
-			 dp->rx_dma_dir);
+	dma_unmap_single_attrs(dp->dev, dma_addr,
+			       dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA,
+			       dp->rx_dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+}
+
+static void nfp_net_dma_sync_cpu_rx(struct nfp_net_dp *dp, dma_addr_t dma_addr,
+				    unsigned int len)
+{
+	dma_sync_single_for_cpu(dp->dev, dma_addr - NFP_NET_RX_BUF_HEADROOM,
+				len, dp->rx_dma_dir);
 }
 
 /* Firmware reconfig
@@ -1569,7 +1576,7 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 	tx_ring = r_vec->xdp_ring;
 
 	while (pkts_polled < budget) {
-		unsigned int meta_len, data_len, data_off, pkt_len;
+		unsigned int meta_len, data_len, meta_off, pkt_len, pkt_off;
 		u8 meta_prepend[NFP_NET_MAX_PREPEND];
 		struct nfp_net_rx_buf *rxbuf;
 		struct nfp_net_rx_desc *rxd;
@@ -1608,11 +1615,12 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 		data_len = le16_to_cpu(rxd->rxd.data_len);
 		pkt_len = data_len - meta_len;
 
+		pkt_off = NFP_NET_RX_BUF_HEADROOM + dp->rx_dma_off;
 		if (dp->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC)
-			data_off = NFP_NET_RX_BUF_HEADROOM + meta_len;
+			pkt_off += meta_len;
 		else
-			data_off = NFP_NET_RX_BUF_HEADROOM + dp->rx_offset;
-		data_off += dp->rx_dma_off;
+			pkt_off += dp->rx_offset;
+		meta_off = pkt_off - meta_len;
 
 		/* Stats update */
 		u64_stats_update_begin(&r_vec->rx_sync);
@@ -1621,7 +1629,7 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 		u64_stats_update_end(&r_vec->rx_sync);
 
 		/* Pointer to start of metadata */
-		meta = rxbuf->frag + data_off - meta_len;
+		meta = rxbuf->frag + meta_off;
 
 		if (unlikely(meta_len > NFP_NET_MAX_PREPEND ||
 			     (dp->rx_offset && meta_len > dp->rx_offset))) {
@@ -1631,6 +1639,9 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 			continue;
 		}
 
+		nfp_net_dma_sync_cpu_rx(dp, rxbuf->dma_addr + meta_off,
+					data_len);
+
 		if (xdp_prog && !(rxd->rxd.flags & PCIE_DESC_RX_BPF &&
 				  dp->bpf_offload_xdp)) {
 			unsigned int dma_off;
@@ -1638,10 +1649,6 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 			int act;
 
 			hard_start = rxbuf->frag + NFP_NET_RX_BUF_HEADROOM;
-			dma_off = data_off - NFP_NET_RX_BUF_HEADROOM;
-			dma_sync_single_for_cpu(dp->dev, rxbuf->dma_addr,
-						dma_off + pkt_len,
-						DMA_BIDIRECTIONAL);
 
 			/* Move prepend out of the way */
 			if (xdp_prog->xdp_adjust_head) {
@@ -1650,12 +1657,12 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 			}
 
 			act = nfp_net_run_xdp(xdp_prog, rxbuf->frag, hard_start,
-					      &data_off, &pkt_len);
+					      &pkt_off, &pkt_len);
 			switch (act) {
 			case XDP_PASS:
 				break;
 			case XDP_TX:
-				dma_off = data_off - NFP_NET_RX_BUF_HEADROOM;
+				dma_off = pkt_off - NFP_NET_RX_BUF_HEADROOM;
 				if (unlikely(!nfp_net_tx_xdp_buf(dp, rx_ring,
 								 tx_ring, rxbuf,
 								 dma_off,
@@ -1689,7 +1696,7 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
 
 		nfp_net_rx_give_one(dp, rx_ring, new_frag, new_dma_addr);
 
-		skb_reserve(skb, data_off);
+		skb_reserve(skb, pkt_off);
 		skb_put(skb, pkt_len);
 
 		if (!dp->chained_metadata_format) {
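
To make the new offset bookkeeping concrete, here is a worked example
with illustrative values (the headroom and lengths are assumptions
chosen for the arithmetic, not values read from hardware).  Take
NFP_NET_RX_BUF_HEADROOM = 256, dp->rx_dma_off = 0, the
NFP_NET_CFG_RX_OFFSET_DYNAMIC case, meta_len = 8 and pkt_len = 1500,
so data_len = 1508:

	pkt_off  = 256 + 0 + 8 = 264	/* start of packet data in the frag */
	meta_off = 264 - 8     = 256	/* start of the metadata prepend */

nfp_net_dma_sync_cpu_rx(dp, rxbuf->dma_addr + meta_off, data_len) then
syncs the 1508 bytes covering both the prepend and the packet in one
call on the common path (the helper subtracts NFP_NET_RX_BUF_HEADROOM
internally because nfp_net_dma_map_rx() maps starting at
frag + NFP_NET_RX_BUF_HEADROOM), and the XDP_TX case derives
dma_off = pkt_off - NFP_NET_RX_BUF_HEADROOM = 8 from the same pkt_off.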