From patchwork Tue Mar 28 16:47:03 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 744426
X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com
From: John Fastabend
To: john.fastabend@gmail.com, alexander.duyck@gmail.com
Cc: intel-wired-lan@lists.osuosl.org, u9012063@gmail.com
Date: Tue, 28 Mar 2017 09:47:03 -0700
Message-ID: <20170328164703.12688.90760.stgit@john-Precision-Tower-5810>
User-Agent: StGit/0.17.1-dirty
Subject: [Intel-wired-lan] [PATCH] ixgbe: delay tail write to every 'n' packets

The current XDP implementation hits the tail register on every XDP_TX
return code. This patch changes the driver to hit the tail only once
per napi poll, after packet processing is complete.

With this patch I can run XDP drop programs at 14+ Mpps and XDP_TX
programs at ~13.5 Mpps.

Signed-off-by: John Fastabend
Tested-by: Andrew Bowers
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 28 +++++++++++++++----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index cd7eefd..750b204 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2284,6 +2284,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 	unsigned int mss = 0;
 #endif /* IXGBE_FCOE */
 	u16 cleaned_count = ixgbe_desc_unused(rx_ring);
+	bool xdp_xmit = false;
 
 	while (likely(total_rx_packets < budget)) {
 		union ixgbe_adv_rx_desc *rx_desc;
@@ -2323,10 +2324,12 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		}
 
 		if (IS_ERR(skb)) {
-			if (PTR_ERR(skb) == -IXGBE_XDP_TX)
+			if (PTR_ERR(skb) == -IXGBE_XDP_TX) {
+				xdp_xmit = true;
 				ixgbe_rx_buffer_flip(rx_ring, rx_buffer, size);
-			else
+			} else {
 				rx_buffer->pagecnt_bias++;
+			}
 			total_rx_packets++;
 			total_rx_bytes += size;
 		} else if (skb) {
@@ -2394,6 +2397,16 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		total_rx_packets++;
 	}
 
+	if (xdp_xmit) {
+		struct ixgbe_ring *ring = adapter->xdp_ring[smp_processor_id()];
+
+		/* Force memory writes to complete before letting h/w
+		 * know there are new descriptors to fetch.
+		 */
+		wmb();
+		writel(ring->next_to_use, ring->tail);
+	}
+
 	u64_stats_update_begin(&rx_ring->syncp);
 	rx_ring->stats.packets += total_rx_packets;
 	rx_ring->stats.bytes += total_rx_bytes;
@@ -8239,14 +8252,8 @@ static int ixgbe_xmit_xdp_ring(struct ixgbe_adapter *adapter,
 	tx_desc->read.olinfo_status =
 		cpu_to_le32(len << IXGBE_ADVTXD_PAYLEN_SHIFT);
 
-	/* Force memory writes to complete before letting h/w know there
-	 * are new descriptors to fetch. (Only applicable for weak-ordered
-	 * memory model archs, such as IA-64).
-	 *
-	 * We also need this memory barrier to make certain all of the
-	 * status bits have been updated before next_to_watch is written.
-	 */
-	wmb();
+	/* Avoid any potential race with xdp_xmit and cleanup */
+	smp_wmb();
 
 	/* set next_to_watch value indicating a packet is present */
 	i++;
@@ -8256,7 +8263,6 @@ static int ixgbe_xmit_xdp_ring(struct ixgbe_adapter *adapter,
 	tx_buffer->next_to_watch = tx_desc;
 
 	ring->next_to_use = i;
-	writel(i, ring->tail);
 
 	return IXGBE_XDP_TX;
 }
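
For readers less familiar with the idiom: what this patch implements is
standard doorbell batching for a descriptor ring. The tail register is
uncached MMIO, so every writel() is expensive; deferring it to the end
of the poll loop amortizes that cost across the whole RX budget. Below
is a minimal, self-contained sketch of the before/after shape of the
code. It is illustrative only, not ixgbe code; struct xdp_tx_ring,
xdp_tx_one() and rx_poll() are invented names for the example.

#include <linux/io.h>		/* writel(), __iomem */
#include <linux/types.h>	/* u16, bool */
#include <asm/barrier.h>	/* wmb() */

/* Invented ring type for the sketch, not the ixgbe structure. */
struct xdp_tx_ring {
	void __iomem *tail;	/* MMIO doorbell register */
	u16 next_to_use;	/* producer index kept in host memory */
};

/* Before this patch (conceptually): every XDP_TX frame paid for a
 * write barrier plus an uncached MMIO doorbell write.
 */
static void xdp_tx_one(struct xdp_tx_ring *ring)
{
	/* ... fill in the TX descriptor at ring->next_to_use ... */
	ring->next_to_use++;
	wmb();				/* descriptor before doorbell */
	writel(ring->next_to_use, ring->tail);
}

/* After this patch (conceptually): remember that work was posted and
 * ring the doorbell once, after the whole RX budget is processed.
 */
static void rx_poll(struct xdp_tx_ring *ring, int budget)
{
	bool xdp_xmit = false;
	int i;

	for (i = 0; i < budget; i++) {
		/* ... on XDP_TX: fill descriptor, bump the index ... */
		ring->next_to_use++;
		xdp_xmit = true;
	}

	if (xdp_xmit) {
		wmb();		/* all descriptors before the doorbell */
		writel(ring->next_to_use, ring->tail);
	}
}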
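On the wmb() -> smp_wmb() change: with the doorbell moved out of
ixgbe_xmit_xdp_ring(), the barrier left in the transmit path only has
to order the descriptor stores against the next_to_watch store that
the TX cleanup path reads from another CPU, which is the CPU-to-CPU
ordering smp_wmb() provides; the device-facing wmb() now precedes the
single writel() in the RX poll loop. A hedged sketch of that
producer/consumer pairing follows, again with invented names
(struct tx_slot, publish(), try_clean()), not the driver's real
helpers.

#include <asm/barrier.h>	/* smp_wmb(), smp_rmb() */
#include <linux/types.h>	/* bool */

struct tx_desc;			/* opaque for the sketch */

struct tx_slot {
	struct tx_desc *next_to_watch;	/* NULL until entry complete */
};

/* Producer (XDP_TX path): make all descriptor stores visible before
 * publishing the entry via next_to_watch.
 */
static void publish(struct tx_slot *slot, struct tx_desc *desc)
{
	/* ... write cmd_type_len, olinfo_status, buffer address ... */
	smp_wmb();			/* descriptor fields first ... */
	slot->next_to_watch = desc;	/* ... then the publishing store */
}

/* Consumer (TX cleanup, possibly another CPU): the matching read side. */
static bool try_clean(struct tx_slot *slot)
{
	struct tx_desc *desc = slot->next_to_watch;

	if (!desc)
		return false;		/* nothing published yet */
	smp_rmb();			/* pairs with smp_wmb() above */
	/* ... safe to read the descriptor / DD bit and unmap ... */
	slot->next_to_watch = NULL;
	return true;
}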