From patchwork Mon Dec 11 18:36:49 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tantilov, Emil S" X-Patchwork-Id: 847156 X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=osuosl.org (client-ip=140.211.166.133; helo=hemlock.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Received: from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3ywWlv43pHz9s03 for ; Tue, 12 Dec 2017 05:32:43 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by hemlock.osuosl.org (Postfix) with ESMTP id B0293881D7; Mon, 11 Dec 2017 18:32:41 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from hemlock.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 6EZy8i89MOQD; Mon, 11 Dec 2017 18:32:40 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by hemlock.osuosl.org (Postfix) with ESMTP id 542E6881C8; Mon, 11 Dec 2017 18:32:40 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) by ash.osuosl.org (Postfix) with ESMTP id 875971BFA4B for ; Mon, 11 Dec 2017 18:32:39 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id 8400487268 for ; Mon, 11 Dec 2017 18:32:39 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id DyCiqD8w5bwM for ; Mon, 11 Dec 2017 18:32:39 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by fraxinus.osuosl.org (Postfix) with ESMTPS id E2CF087267 for ; Mon, 11 Dec 2017 18:32:38 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 11 Dec 2017 10:32:38 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.45,392,1508828400"; d="scan'208";a="17361589" Received: from estantil-desk3.jf.intel.com (HELO localhost6.localdomain6) ([134.134.177.96]) by orsmga002.jf.intel.com with ESMTP; 11 Dec 2017 10:32:37 -0800 From: Emil Tantilov To: intel-wired-lan@lists.osuosl.org Date: Mon, 11 Dec 2017 10:36:49 -0800 Message-ID: <20171211183649.21524.3888.stgit@localhost6.localdomain6> In-Reply-To: <20171211183537.21524.10133.stgit@localhost6.localdomain6> References: <20171211183537.21524.10133.stgit@localhost6.localdomain6> User-Agent: StGit/0.17.1-17-ge4e0 MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH 1/9] ixgbevf: add function for checking if we can reuse page X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Introduce ixgbevf_can_reuse_page() similar to the change in ixgbe from commit af43da0dba0b ("ixgbe: Add function for checking to 
see if we can reuse page") Signed-off-by: Emil Tantilov Tested-by: Krishneil Singh --- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 59 ++++++++++++--------- 1 file changed, 33 insertions(+), 26 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index ed5c3ae..7ffd429 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -754,6 +754,38 @@ static inline bool ixgbevf_page_is_reserved(struct page *page) return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page); } +static bool ixgbevf_can_reuse_rx_page(struct ixgbevf_rx_buffer *rx_buffer, + struct page *page, + const unsigned int truesize) +{ + /* avoid re-using remote pages */ + if (unlikely(ixgbevf_page_is_reserved(page))) + return false; + +#if (PAGE_SIZE < 8192) + /* if we are only owner of page we can reuse it */ + if (unlikely(page_count(page) != 1)) + return false; + + /* flip page offset to other buffer */ + rx_buffer->page_offset ^= IXGBEVF_RX_BUFSZ; + +#else + /* move offset up to the next cache line */ + rx_buffer->page_offset += truesize; + + if (rx_buffer->page_offset > (PAGE_SIZE - IXGBEVF_RX_BUFSZ)) + return false; + +#endif + /* Even if we own the page, we are not allowed to use atomic_set() + * This would break get_page_unless_zero() users. + */ + page_ref_inc(page); + + return true; +} + /** * ixgbevf_add_rx_frag - Add contents of Rx buffer to sk_buff * @rx_ring: rx descriptor ring to transact packets on @@ -815,32 +847,7 @@ static bool ixgbevf_add_rx_frag(struct ixgbevf_ring *rx_ring, skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, (unsigned long)va & ~PAGE_MASK, size, truesize); - /* avoid re-using remote pages */ - if (unlikely(ixgbevf_page_is_reserved(page))) - return false; - -#if (PAGE_SIZE < 8192) - /* if we are only owner of page we can reuse it */ - if (unlikely(page_count(page) != 1)) - return false; - - /* flip page offset to other buffer */ - rx_buffer->page_offset ^= IXGBEVF_RX_BUFSZ; - -#else - /* move offset up to the next cache line */ - rx_buffer->page_offset += truesize; - - if (rx_buffer->page_offset > (PAGE_SIZE - IXGBEVF_RX_BUFSZ)) - return false; - -#endif - /* Even if we own the page, we are not allowed to use atomic_set() - * This would break get_page_unless_zero() users. 
- */ - page_ref_inc(page); - - return true; + return ixgbevf_can_reuse_rx_page(rx_buffer, page, truesize); } static struct sk_buff *ixgbevf_fetch_rx_buffer(struct ixgbevf_ring *rx_ring, From patchwork Mon Dec 11 18:36:54 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tantilov, Emil S" X-Patchwork-Id: 847157 X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=osuosl.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3ywWm01t0Fz9s03 for ; Tue, 12 Dec 2017 05:32:48 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id 3236F87277; Mon, 11 Dec 2017 18:32:46 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id CkVMUqeUsRvW; Mon, 11 Dec 2017 18:32:45 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by fraxinus.osuosl.org (Postfix) with ESMTP id 9A14C87268; Mon, 11 Dec 2017 18:32:45 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133]) by ash.osuosl.org (Postfix) with ESMTP id A21781BFA4B for ; Mon, 11 Dec 2017 18:32:44 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by hemlock.osuosl.org (Postfix) with ESMTP id 9E34A881C8 for ; Mon, 11 Dec 2017 18:32:44 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from hemlock.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 3pQ4I9Nmf5uB for ; Mon, 11 Dec 2017 18:32:44 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by hemlock.osuosl.org (Postfix) with ESMTPS id 1AEE5881C7 for ; Mon, 11 Dec 2017 18:32:44 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga106.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 11 Dec 2017 10:32:43 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.45,392,1508828400"; d="scan'208";a="11446513" Received: from estantil-desk3.jf.intel.com (HELO localhost6.localdomain6) ([134.134.177.96]) by orsmga003.jf.intel.com with ESMTP; 11 Dec 2017 10:32:43 -0800 From: Emil Tantilov To: intel-wired-lan@lists.osuosl.org Date: Mon, 11 Dec 2017 10:36:54 -0800 Message-ID: <20171211183654.21524.84943.stgit@localhost6.localdomain6> In-Reply-To: <20171211183537.21524.10133.stgit@localhost6.localdomain6> References: <20171211183537.21524.10133.stgit@localhost6.localdomain6> User-Agent: StGit/0.17.1-17-ge4e0 MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH 2/9] ixgbevf: only DMA sync frame length X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Based on commit 64f2525ca4e7 ("igb: Only DMA sync frame length") On some architectures synching a buffer for DMA may be expensive. Instead of the entire 2K receive buffer only synchronize the length of the frame, which will typically be the MTU or smaller. Signed-off-by: Emil Tantilov Tested-by: Krishneil Singh Tested-by: Krishneil Singh --- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 7ffd429..0cc2688 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -803,12 +803,12 @@ static bool ixgbevf_can_reuse_rx_page(struct ixgbevf_rx_buffer *rx_buffer, **/ static bool ixgbevf_add_rx_frag(struct ixgbevf_ring *rx_ring, struct ixgbevf_rx_buffer *rx_buffer, + u16 size, union ixgbe_adv_rx_desc *rx_desc, struct sk_buff *skb) { struct page *page = rx_buffer->page; unsigned char *va = page_address(page) + rx_buffer->page_offset; - unsigned int size = le16_to_cpu(rx_desc->wb.upper.length); #if (PAGE_SIZE < 8192) unsigned int truesize = IXGBEVF_RX_BUFSZ; #else @@ -856,6 +856,7 @@ static struct sk_buff *ixgbevf_fetch_rx_buffer(struct ixgbevf_ring *rx_ring, { struct ixgbevf_rx_buffer *rx_buffer; struct page *page; + u16 size = le16_to_cpu(rx_desc->wb.upper.length); rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; page = rx_buffer->page; @@ -890,11 +891,11 @@ static struct sk_buff *ixgbevf_fetch_rx_buffer(struct ixgbevf_ring *rx_ring, dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma, rx_buffer->page_offset, - IXGBEVF_RX_BUFSZ, + size, DMA_FROM_DEVICE); /* pull page into skb */ - if (ixgbevf_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) { + if (ixgbevf_add_rx_frag(rx_ring, rx_buffer, size, rx_desc, skb)) { /* hand second half of page back to the ring */ ixgbevf_reuse_rx_page(rx_ring, rx_buffer); } else { From patchwork Mon Dec 11 18:36:59 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tantilov, Emil S" X-Patchwork-Id: 847158 X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=osuosl.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3ywWm84FnBz9s03 for ; Tue, 12 Dec 2017 05:32:56 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id 9AB8287277; Mon, 11 Dec 2017 18:32:54 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id w79BYQXYt8IJ; Mon, 11 Dec 2017 18:32:51 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by fraxinus.osuosl.org (Postfix) with ESMTP id DED0487268; Mon, 11 Dec 2017 18:32:51 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from silver.osuosl.org (smtp3.osuosl.org 
[140.211.166.136]) by ash.osuosl.org (Postfix) with ESMTP id 1B5B51BFA4B for ; Mon, 11 Dec 2017 18:32:50 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by silver.osuosl.org (Postfix) with ESMTP id 17DF825D87 for ; Mon, 11 Dec 2017 18:32:50 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from silver.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id i-XRWsxPvr3N for ; Mon, 11 Dec 2017 18:32:49 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by silver.osuosl.org (Postfix) with ESMTPS id 1150225D72 for ; Mon, 11 Dec 2017 18:32:49 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 11 Dec 2017 10:32:48 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.45,392,1508828400"; d="scan'208";a="10991727" Received: from estantil-desk3.jf.intel.com (HELO localhost6.localdomain6) ([134.134.177.96]) by FMSMGA003.fm.intel.com with ESMTP; 11 Dec 2017 10:32:48 -0800 From: Emil Tantilov To: intel-wired-lan@lists.osuosl.org Date: Mon, 11 Dec 2017 10:36:59 -0800 Message-ID: <20171211183659.21524.88045.stgit@localhost6.localdomain6> In-Reply-To: <20171211183537.21524.10133.stgit@localhost6.localdomain6> References: <20171211183537.21524.10133.stgit@localhost6.localdomain6> User-Agent: StGit/0.17.1-17-ge4e0 MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH 3/9] ixgbevf: use length to determine if descriptor is done X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Based on: commit 7ec0116c9131 ("igb: Use length to determine if descriptor is done") This change makes it so that we use the length of the packet instead of the DD status bit to determine if a new descriptor is ready to be processed. The obvious advantage is that it cuts down on reads as we don't really even need the DD bit if going from a 0 to a non-zero value on size is enough to inform us that the packet has been completed. In addition we only reset the Rx descriptor length for descriptor zero when resetting a ring instead of having to do a memset with 0 over the entire ring. By doing this we can save some time on initialization. 
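A minimal sketch of the check described above, assuming only the adv Rx descriptor layout (wb.upper.length) already used by the driver; the helper name and everything else here is illustrative rather than actual ixgbevf code. A non-zero write-back length means the descriptor has been completed, and a read barrier is still required before the other descriptor fields are read:

    /* Illustrative only: "length as done bit" polling check */
    static bool rx_desc_done(const union ixgbe_adv_rx_desc *rx_desc)
    {
            /* zero length: the device has not written this entry back yet */
            if (!rx_desc->wb.upper.length)
                    return false;

            /* keep reads of the other descriptor fields from being
             * reordered before the length check (same role as the
             * barrier in ixgbevf_clean_rx_irq())
             */
            dma_rmb();

            return true;
    }

Because ixgbevf_alloc_rx_buffers() clears wb.upper.length for every next_to_use descriptor it refills, only descriptor 0 has to be zeroed when the ring is configured, which is what allows the full-ring memset to be dropped.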
Signed-off-by: Emil Tantilov Tested-by: Krishneil Singh Tested-by: Krishneil Singh --- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 0cc2688..725fe2d 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -653,8 +653,8 @@ static void ixgbevf_alloc_rx_buffers(struct ixgbevf_ring *rx_ring, i -= rx_ring->count; } - /* clear the hdr_addr for the next_to_use descriptor */ - rx_desc->read.hdr_addr = 0; + /* clear the length for the next_to_use descriptor */ + rx_desc->wb.upper.length = 0; cleaned_count--; } while (cleaned_count); @@ -938,7 +938,7 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector, rx_desc = IXGBEVF_RX_DESC(rx_ring, rx_ring->next_to_clean); - if (!ixgbevf_test_staterr(rx_desc, IXGBE_RXD_STAT_DD)) + if (!rx_desc->wb.upper.length) break; /* This memory barrier is needed to keep us from reading @@ -1729,6 +1729,7 @@ static void ixgbevf_configure_rx_ring(struct ixgbevf_adapter *adapter, struct ixgbevf_ring *ring) { struct ixgbe_hw *hw = &adapter->hw; + union ixgbe_adv_rx_desc *rx_desc; u64 rdba = ring->dma; u32 rxdctl; u8 reg_idx = ring->reg_idx; @@ -1757,6 +1758,10 @@ static void ixgbevf_configure_rx_ring(struct ixgbevf_adapter *adapter, IXGBE_WRITE_REG(hw, IXGBE_VFRDT(reg_idx), 0); ring->tail = adapter->io_addr + IXGBE_VFRDT(reg_idx); + /* initialize Rx descriptor 0 */ + rx_desc = IXGBEVF_RX_DESC(ring, 0); + rx_desc->wb.upper.length = 0; + /* reset ntu and ntc to place SW in sync with hardwdare */ ring->next_to_clean = 0; ring->next_to_use = 0; @@ -2141,9 +2146,6 @@ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring) size = sizeof(struct ixgbevf_rx_buffer) * rx_ring->count; memset(rx_ring->rx_buffer_info, 0, size); - - /* Zero out the descriptor ring */ - memset(rx_ring->desc, 0, rx_ring->size); } /** From patchwork Mon Dec 11 18:37:04 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tantilov, Emil S" X-Patchwork-Id: 847159 X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=osuosl.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3ywWmD3Nz2z9s1h for ; Tue, 12 Dec 2017 05:33:00 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id D25FF87267; Mon, 11 Dec 2017 18:32:58 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ohATVAsAzWZn; Mon, 11 Dec 2017 18:32:58 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by fraxinus.osuosl.org (Postfix) with ESMTP id 2298C8726A; Mon, 11 Dec 2017 18:32:58 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138]) by 
ash.osuosl.org (Postfix) with ESMTP id F25261BFA4B for ; Mon, 11 Dec 2017 18:32:56 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by whitealder.osuosl.org (Postfix) with ESMTP id EDC9886E32 for ; Mon, 11 Dec 2017 18:32:56 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from whitealder.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id PXG5WKSN6E2T for ; Mon, 11 Dec 2017 18:32:56 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by whitealder.osuosl.org (Postfix) with ESMTPS id F15CE86CF9 for ; Mon, 11 Dec 2017 18:32:55 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga105.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 11 Dec 2017 10:32:53 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.45,392,1508828400"; d="scan'208";a="11633404" Received: from estantil-desk3.jf.intel.com (HELO localhost6.localdomain6) ([134.134.177.96]) by fmsmga004.fm.intel.com with ESMTP; 11 Dec 2017 10:32:53 -0800 From: Emil Tantilov To: intel-wired-lan@lists.osuosl.org Date: Mon, 11 Dec 2017 10:37:04 -0800 Message-ID: <20171211183704.21524.88419.stgit@localhost6.localdomain6> In-Reply-To: <20171211183537.21524.10133.stgit@localhost6.localdomain6> References: <20171211183537.21524.10133.stgit@localhost6.localdomain6> User-Agent: StGit/0.17.1-17-ge4e0 MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH 4/9] ixgbevf: add support for DMA_ATTR_SKIP_CPU_SYNC/WEAK_ORDERING X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Based on commit 5be5955425c2 ("igb: update driver to make use of DMA_ATTR_SKIP_CPU_SYNC") and commit 7bd175928280 ("igb: Add support for DMA_ATTR_WEAK_ORDERING") Convert the calls to dma_map/unmap_page() to the attributes version and add DMA_ATTR_SKIP_CPU_SYNC/WEAK_ORDERING which should help improve performance on some platforms. Move sync_for_cpu call before we perform a prefetch to avoid invalidating the first 128 bytes of the packet on architectures where that call may invalidate the cache. 
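The resulting map/sync split looks roughly like this (kernel-style sketch reusing the driver's rx_ring->dev, IXGBEVF_RX_BUFSZ and IXGBEVF_RX_DMA_ATTR; error handling omitted). With DMA_ATTR_SKIP_CPU_SYNC the map/unmap calls no longer sync the buffer for the CPU, so the driver issues the syncs itself, and only for the region and length it actually needs:

    /* map the whole page once, without an implicit CPU sync */
    dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
                             DMA_FROM_DEVICE, IXGBEVF_RX_DMA_ATTR);

    /* before handing a half-page buffer to the device */
    dma_sync_single_range_for_device(rx_ring->dev, dma, page_offset,
                                     IXGBEVF_RX_BUFSZ, DMA_FROM_DEVICE);

    /* before the CPU reads a received frame: only 'size' bytes */
    dma_sync_single_range_for_cpu(rx_ring->dev, dma, page_offset,
                                  size, DMA_FROM_DEVICE);

    /* when the page is finally released, skip the CPU sync on unmap too */
    dma_unmap_page_attrs(rx_ring->dev, dma, PAGE_SIZE,
                         DMA_FROM_DEVICE, IXGBEVF_RX_DMA_ATTR);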
Signed-off-by: Emil Tantilov Tested-by: Krishneil Singh Tested-by: Krishneil Singh --- drivers/net/ethernet/intel/ixgbevf/ixgbevf.h | 3 + drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 57 +++++++++++++-------- 2 files changed, 38 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h index 581f44b..b1da9f4 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h @@ -260,6 +260,9 @@ static inline void ixgbevf_write_tail(struct ixgbevf_ring *ring, u32 value) #define MIN_MSIX_Q_VECTORS 1 #define MIN_MSIX_COUNT (MIN_MSIX_Q_VECTORS + NON_Q_VECTORS) +#define IXGBEVF_RX_DMA_ATTR \ + (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) + /* board specific private data structure */ struct ixgbevf_adapter { /* this field must be first, see ixgbevf_process_skb_fields */ diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 725fe2d..fbd493e 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -595,8 +595,8 @@ static bool ixgbevf_alloc_mapped_page(struct ixgbevf_ring *rx_ring, } /* map page for use */ - dma = dma_map_page(rx_ring->dev, page, 0, - PAGE_SIZE, DMA_FROM_DEVICE); + dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE, + DMA_FROM_DEVICE, IXGBEVF_RX_DMA_ATTR); /* if mapping failed free memory back to system since * there isn't much point in holding memory we can't use @@ -639,6 +639,12 @@ static void ixgbevf_alloc_rx_buffers(struct ixgbevf_ring *rx_ring, if (!ixgbevf_alloc_mapped_page(rx_ring, bi)) break; + /* sync the buffer for use by the device */ + dma_sync_single_range_for_device(rx_ring->dev, bi->dma, + bi->page_offset, + IXGBEVF_RX_BUFSZ, + DMA_FROM_DEVICE); + /* Refresh the desc even if pkt_addr didn't change * because each write-back erases this info. 
*/ @@ -741,12 +747,6 @@ static void ixgbevf_reuse_rx_page(struct ixgbevf_ring *rx_ring, new_buff->page = old_buff->page; new_buff->dma = old_buff->dma; new_buff->page_offset = old_buff->page_offset; - - /* sync the buffer for use by the device */ - dma_sync_single_range_for_device(rx_ring->dev, new_buff->dma, - new_buff->page_offset, - IXGBEVF_RX_BUFSZ, - DMA_FROM_DEVICE); } static inline bool ixgbevf_page_is_reserved(struct page *page) @@ -862,6 +862,13 @@ static struct sk_buff *ixgbevf_fetch_rx_buffer(struct ixgbevf_ring *rx_ring, page = rx_buffer->page; prefetchw(page); + /* we are reusing so sync this buffer for CPU use */ + dma_sync_single_range_for_cpu(rx_ring->dev, + rx_buffer->dma, + rx_buffer->page_offset, + size, + DMA_FROM_DEVICE); + if (likely(!skb)) { void *page_addr = page_address(page) + rx_buffer->page_offset; @@ -887,21 +894,15 @@ static struct sk_buff *ixgbevf_fetch_rx_buffer(struct ixgbevf_ring *rx_ring, prefetchw(skb->data); } - /* we are reusing so sync this buffer for CPU use */ - dma_sync_single_range_for_cpu(rx_ring->dev, - rx_buffer->dma, - rx_buffer->page_offset, - size, - DMA_FROM_DEVICE); - /* pull page into skb */ if (ixgbevf_add_rx_frag(rx_ring, rx_buffer, size, rx_desc, skb)) { /* hand second half of page back to the ring */ ixgbevf_reuse_rx_page(rx_ring, rx_buffer); } else { /* we are not reusing the buffer so unmap it */ - dma_unmap_page(rx_ring->dev, rx_buffer->dma, - PAGE_SIZE, DMA_FROM_DEVICE); + dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma, + PAGE_SIZE, DMA_FROM_DEVICE, + IXGBEVF_RX_DMA_ATTR); } /* clear contents of buffer_info */ @@ -2116,7 +2117,6 @@ void ixgbevf_up(struct ixgbevf_adapter *adapter) **/ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring) { - struct device *dev = rx_ring->dev; unsigned long size; unsigned int i; @@ -2135,10 +2135,23 @@ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring) struct ixgbevf_rx_buffer *rx_buffer; rx_buffer = &rx_ring->rx_buffer_info[i]; - if (rx_buffer->dma) - dma_unmap_page(dev, rx_buffer->dma, - PAGE_SIZE, DMA_FROM_DEVICE); - rx_buffer->dma = 0; + + /* Invalidate cache lines that may have been written to by + * device so that we avoid corrupting memory. 
+ */ + dma_sync_single_range_for_cpu(rx_ring->dev, + rx_buffer->dma, + rx_buffer->page_offset, + IXGBEVF_RX_BUFSZ, + DMA_FROM_DEVICE); + + /* free resources associated with mapping */ + dma_unmap_page_attrs(rx_ring->dev, + rx_buffer->dma, + PAGE_SIZE, + DMA_FROM_DEVICE, + IXGBEVF_RX_DMA_ATTR); + if (rx_buffer->page) __free_page(rx_buffer->page); rx_buffer->page = NULL; From patchwork Mon Dec 11 18:37:10 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tantilov, Emil S" X-Patchwork-Id: 847160 X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=osuosl.org (client-ip=140.211.166.133; helo=hemlock.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Received: from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3ywWmH5Tm7z9s0g for ; Tue, 12 Dec 2017 05:33:03 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by hemlock.osuosl.org (Postfix) with ESMTP id 0CB7E881F0; Mon, 11 Dec 2017 18:33:02 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from hemlock.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id f8oSdabOqofE; Mon, 11 Dec 2017 18:33:01 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by hemlock.osuosl.org (Postfix) with ESMTP id 5C572881C8; Mon, 11 Dec 2017 18:33:01 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) by ash.osuosl.org (Postfix) with ESMTP id E81021BFA4B for ; Mon, 11 Dec 2017 18:32:59 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id E4CBB87290 for ; Mon, 11 Dec 2017 18:32:59 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ZzPlB1R9ABmn for ; Mon, 11 Dec 2017 18:32:59 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by fraxinus.osuosl.org (Postfix) with ESMTPS id 609D687277 for ; Mon, 11 Dec 2017 18:32:59 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 11 Dec 2017 10:32:59 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.45,392,1508828400"; d="scan'208";a="1971089" Received: from estantil-desk3.jf.intel.com (HELO localhost6.localdomain6) ([134.134.177.96]) by orsmga006.jf.intel.com with ESMTP; 11 Dec 2017 10:32:59 -0800 From: Emil Tantilov To: intel-wired-lan@lists.osuosl.org Date: Mon, 11 Dec 2017 10:37:10 -0800 Message-ID: <20171211183709.21524.19432.stgit@localhost6.localdomain6> In-Reply-To: <20171211183537.21524.10133.stgit@localhost6.localdomain6> References: <20171211183537.21524.10133.stgit@localhost6.localdomain6> User-Agent: StGit/0.17.1-17-ge4e0 MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH 5/9] ixgbevf: update code to better handle incrementing page count X-BeenThere: intel-wired-lan@osuosl.org 
X-Mailman-Version: 2.1.24 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Based on commit bd4171a5d4c2 ("igb: update code to better handle incrementing page count") Update the driver code so that we do bulk updates of the page reference count instead of just incrementing it by one reference at a time. The advantage to doing this is that we cut down on atomic operations and this in turn should give us a slight improvement in cycles per packet. In addition if we eventually move this over to using build_skb the gains will be more noticeable. Signed-off-by: Emil Tantilov Tested-by: Krishneil Singh Tested-by: Krishneil Singh --- drivers/net/ethernet/intel/ixgbevf/ixgbevf.h | 7 ++++- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 30 +++++++++++++++------ 2 files changed, 28 insertions(+), 9 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h index b1da9f4..c70a789 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h @@ -62,7 +62,12 @@ struct ixgbevf_tx_buffer { struct ixgbevf_rx_buffer { dma_addr_t dma; struct page *page; - unsigned int page_offset; +#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536) + __u32 page_offset; +#else + __u16 page_offset; +#endif + __u16 pagecnt_bias; }; struct ixgbevf_stats { diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index fbd493e..ae2402d 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -611,6 +611,7 @@ static bool ixgbevf_alloc_mapped_page(struct ixgbevf_ring *rx_ring, bi->dma = dma; bi->page = page; bi->page_offset = 0; + bi->pagecnt_bias = 1; return true; } @@ -747,6 +748,7 @@ static void ixgbevf_reuse_rx_page(struct ixgbevf_ring *rx_ring, new_buff->page = old_buff->page; new_buff->dma = old_buff->dma; new_buff->page_offset = old_buff->page_offset; + new_buff->pagecnt_bias = old_buff->pagecnt_bias; } static inline bool ixgbevf_page_is_reserved(struct page *page) @@ -758,13 +760,15 @@ static bool ixgbevf_can_reuse_rx_page(struct ixgbevf_rx_buffer *rx_buffer, struct page *page, const unsigned int truesize) { + unsigned int pagecnt_bias = rx_buffer->pagecnt_bias--; + /* avoid re-using remote pages */ if (unlikely(ixgbevf_page_is_reserved(page))) return false; #if (PAGE_SIZE < 8192) /* if we are only owner of page we can reuse it */ - if (unlikely(page_count(page) != 1)) + if (unlikely(page_ref_count(page) != pagecnt_bias)) return false; /* flip page offset to other buffer */ @@ -778,10 +782,15 @@ static bool ixgbevf_can_reuse_rx_page(struct ixgbevf_rx_buffer *rx_buffer, return false; #endif - /* Even if we own the page, we are not allowed to use atomic_set() - * This would break get_page_unless_zero() users. + + /* If we have drained the page fragment pool we need to update + * the pagecnt_bias and page count so that we fully restock the + * number of references the driver holds. 
*/ - page_ref_inc(page); + if (unlikely(pagecnt_bias == 1)) { + page_ref_add(page, USHRT_MAX); + rx_buffer->pagecnt_bias = USHRT_MAX; + } return true; } @@ -827,7 +836,6 @@ static bool ixgbevf_add_rx_frag(struct ixgbevf_ring *rx_ring, return true; /* this page cannot be reused so discard it */ - put_page(page); return false; } @@ -899,10 +907,13 @@ static struct sk_buff *ixgbevf_fetch_rx_buffer(struct ixgbevf_ring *rx_ring, /* hand second half of page back to the ring */ ixgbevf_reuse_rx_page(rx_ring, rx_buffer); } else { - /* we are not reusing the buffer so unmap it */ + /* We are not reusing the buffer so unmap it and free + * any references we are holding to it + */ dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma, PAGE_SIZE, DMA_FROM_DEVICE, IXGBEVF_RX_DMA_ATTR); + __page_frag_cache_drain(page, rx_buffer->pagecnt_bias); } /* clear contents of buffer_info */ @@ -2135,6 +2146,8 @@ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring) struct ixgbevf_rx_buffer *rx_buffer; rx_buffer = &rx_ring->rx_buffer_info[i]; + if (!rx_buffer->page) + continue; /* Invalidate cache lines that may have been written to by * device so that we avoid corrupting memory. @@ -2152,8 +2165,9 @@ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring) DMA_FROM_DEVICE, IXGBEVF_RX_DMA_ATTR); - if (rx_buffer->page) - __free_page(rx_buffer->page); + __page_frag_cache_drain(rx_buffer->page, + rx_buffer->pagecnt_bias); + rx_buffer->page = NULL; } From patchwork Mon Dec 11 18:37:15 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tantilov, Emil S" X-Patchwork-Id: 847161 X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=osuosl.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3ywWmR2DzNz9s03 for ; Tue, 12 Dec 2017 05:33:11 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id 94D058727F; Mon, 11 Dec 2017 18:33:09 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qo36ojFdLTwX; Mon, 11 Dec 2017 18:33:07 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by fraxinus.osuosl.org (Postfix) with ESMTP id B80F687290; Mon, 11 Dec 2017 18:33:07 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133]) by ash.osuosl.org (Postfix) with ESMTP id 18E091BFA4B for ; Mon, 11 Dec 2017 18:33:06 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by hemlock.osuosl.org (Postfix) with ESMTP id 1541D8820E for ; Mon, 11 Dec 2017 18:33:06 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from hemlock.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id kW84YiI7245G for ; Mon, 11 Dec 2017 18:33:05 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga04.intel.com (mga04.intel.com 
[192.55.52.120]) by hemlock.osuosl.org (Postfix) with ESMTPS id 46C29881EB for ; Mon, 11 Dec 2017 18:33:05 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 11 Dec 2017 10:33:04 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.45,392,1508828400"; d="scan'208";a="1890925" Received: from estantil-desk3.jf.intel.com (HELO localhost6.localdomain6) ([134.134.177.96]) by orsmga007.jf.intel.com with ESMTP; 11 Dec 2017 10:33:04 -0800 From: Emil Tantilov To: intel-wired-lan@lists.osuosl.org Date: Mon, 11 Dec 2017 10:37:15 -0800 Message-ID: <20171211183715.21524.56950.stgit@localhost6.localdomain6> In-Reply-To: <20171211183537.21524.10133.stgit@localhost6.localdomain6> References: <20171211183537.21524.10133.stgit@localhost6.localdomain6> User-Agent: StGit/0.17.1-17-ge4e0 MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH 6/9] ixgbevf: add counters for Rx page allocations X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" We already had placehloders for failed page and buffer allocations. Added alloc_rx_page and made sure the stats are properly updated and exposed in ethtool. Signed-off-by: Emil Tantilov Tested-by: Krishneil Singh Tested-by: Krishneil Singh --- drivers/net/ethernet/intel/ixgbevf/ethtool.c | 3 +++ drivers/net/ethernet/intel/ixgbevf/ixgbevf.h | 6 ++++- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 23 ++++++++++++++++----- 3 files changed, 25 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ethtool.c b/drivers/net/ethernet/intel/ixgbevf/ethtool.c index ff9d05f..4400e49 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ethtool.c +++ b/drivers/net/ethernet/intel/ixgbevf/ethtool.c @@ -75,6 +75,9 @@ struct ixgbe_stats { IXGBEVF_STAT("tx_timeout_count", tx_timeout_count), IXGBEVF_NETDEV_STAT(multicast), IXGBEVF_STAT("rx_csum_offload_errors", hw_csum_rx_error), + IXGBEVF_STAT("alloc_rx_page", alloc_rx_page), + IXGBEVF_STAT("alloc_rx_page_failed", alloc_rx_page_failed), + IXGBEVF_STAT("alloc_rx_buff_failed", alloc_rx_buff_failed), }; #define IXGBEVF_QUEUE_STATS_LEN ( \ diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h index c70a789..f695242 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h @@ -84,6 +84,7 @@ struct ixgbevf_tx_queue_stats { struct ixgbevf_rx_queue_stats { u64 alloc_rx_page_failed; u64 alloc_rx_buff_failed; + u64 alloc_rx_page; u64 csum_err; }; @@ -295,8 +296,9 @@ struct ixgbevf_adapter { u64 hw_csum_rx_error; u64 hw_rx_no_dma_resources; int num_msix_vectors; - u32 alloc_rx_page_failed; - u32 alloc_rx_buff_failed; + u64 alloc_rx_page_failed; + u64 alloc_rx_buff_failed; + u64 alloc_rx_page; struct msix_entry *msix_entries; diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index ae2402d..350afec 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -604,7 +604,7 @@ static bool ixgbevf_alloc_mapped_page(struct ixgbevf_ring *rx_ring, if (dma_mapping_error(rx_ring->dev, dma)) { __free_page(page); - 
rx_ring->rx_stats.alloc_rx_buff_failed++; + rx_ring->rx_stats.alloc_rx_page_failed++; return false; } @@ -612,6 +612,7 @@ static bool ixgbevf_alloc_mapped_page(struct ixgbevf_ring *rx_ring, bi->page = page; bi->page_offset = 0; bi->pagecnt_bias = 1; + rx_ring->rx_stats.alloc_rx_page++; return true; } @@ -963,8 +964,10 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector, skb = ixgbevf_fetch_rx_buffer(rx_ring, rx_desc, skb); /* exit if we failed to retrieve a buffer */ - if (!skb) + if (!skb) { + rx_ring->rx_stats.alloc_rx_buff_failed++; break; + } cleaned_count++; @@ -2749,6 +2752,8 @@ static int ixgbevf_sw_init(struct ixgbevf_adapter *adapter) void ixgbevf_update_stats(struct ixgbevf_adapter *adapter) { struct ixgbe_hw *hw = &adapter->hw; + u64 alloc_rx_page_failed = 0, alloc_rx_buff_failed = 0; + u64 alloc_rx_page = 0, hw_csum_rx_error = 0; int i; if (test_bit(__IXGBEVF_DOWN, &adapter->state) || @@ -2769,10 +2774,18 @@ void ixgbevf_update_stats(struct ixgbevf_adapter *adapter) adapter->stats.vfmprc); for (i = 0; i < adapter->num_rx_queues; i++) { - adapter->hw_csum_rx_error += - adapter->rx_ring[i]->hw_csum_rx_error; - adapter->rx_ring[i]->hw_csum_rx_error = 0; + struct ixgbevf_ring *rx_ring = adapter->rx_ring[i]; + + hw_csum_rx_error += rx_ring->rx_stats.csum_err; + alloc_rx_page_failed += rx_ring->rx_stats.alloc_rx_page_failed; + alloc_rx_buff_failed += rx_ring->rx_stats.alloc_rx_buff_failed; + alloc_rx_page += rx_ring->rx_stats.alloc_rx_page; } + + adapter->hw_csum_rx_error = hw_csum_rx_error; + adapter->alloc_rx_page_failed = alloc_rx_page_failed; + adapter->alloc_rx_buff_failed = alloc_rx_buff_failed; + adapter->alloc_rx_page = alloc_rx_page; } /** From patchwork Mon Dec 11 18:37:20 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tantilov, Emil S" X-Patchwork-Id: 847162 X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=osuosl.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3ywWmX02vkz9s03 for ; Tue, 12 Dec 2017 05:33:15 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id 7B25C87277; Mon, 11 Dec 2017 18:33:14 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id uChB24oK-ZEZ; Mon, 11 Dec 2017 18:33:14 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by fraxinus.osuosl.org (Postfix) with ESMTP id 042D187268; Mon, 11 Dec 2017 18:33:14 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136]) by ash.osuosl.org (Postfix) with ESMTP id EC5571BFA4B for ; Mon, 11 Dec 2017 18:33:11 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by silver.osuosl.org (Postfix) with ESMTP id E627E26072 for ; Mon, 11 Dec 2017 18:33:11 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from silver.osuosl.org ([127.0.0.1]) by 
localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UhE9imcDf0jF for ; Mon, 11 Dec 2017 18:33:11 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by silver.osuosl.org (Postfix) with ESMTPS id 9A60D25D72 for ; Mon, 11 Dec 2017 18:33:10 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga101.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 11 Dec 2017 10:33:10 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.45,392,1508828400"; d="scan'208";a="1937343" Received: from estantil-desk3.jf.intel.com (HELO localhost6.localdomain6) ([134.134.177.96]) by orsmga008.jf.intel.com with ESMTP; 11 Dec 2017 10:33:09 -0800 From: Emil Tantilov To: intel-wired-lan@lists.osuosl.org Date: Mon, 11 Dec 2017 10:37:20 -0800 Message-ID: <20171211183720.21524.93082.stgit@localhost6.localdomain6> In-Reply-To: <20171211183537.21524.10133.stgit@localhost6.localdomain6> References: <20171211183537.21524.10133.stgit@localhost6.localdomain6> User-Agent: StGit/0.17.1-17-ge4e0 MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH 7/9] ixgbevf: clear rx_buffer_info in configure instead of clean X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Based on commit d2bead576e67 ("igb: Clear Rx buffer_info in configure instead of clean") This change makes it so that instead of going through the entire ring on Rx cleanup we only go through the region that was designated to be cleaned up and stop when we reach the region where new allocations should start. In addition we can avoid having to perform a memset on the Rx buffer_info structures until we are about to start using the ring again. 
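The bounded walk amounts to the loop shape below (illustrative only; the per-buffer sync and unmap from the actual hunk are collapsed into one comment). Only slots between next_to_clean and next_to_alloc can hold pages, so cleanup stops there, and the memset of rx_buffer_info is deferred to ixgbevf_configure_rx_ring():

    u16 i = rx_ring->next_to_clean;

    while (i != rx_ring->next_to_alloc) {
            struct ixgbevf_rx_buffer *rx_buffer = &rx_ring->rx_buffer_info[i];

            /* sync, unmap and drop the page held by this slot (see diff) */
            __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias);

            /* wrap at the end of the ring */
            if (++i == rx_ring->count)
                    i = 0;
    }

    /* ring is empty; buffer_info stays dirty until the ring is configured */
    rx_ring->next_to_alloc = 0;
    rx_ring->next_to_clean = 0;
    rx_ring->next_to_use = 0;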
Signed-off-by: Emil Tantilov Tested-by: Krishneil Singh Tested-by: Krishneil Singh --- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 26 +++++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 350afec..a793f9e 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -1773,6 +1773,10 @@ static void ixgbevf_configure_rx_ring(struct ixgbevf_adapter *adapter, IXGBE_WRITE_REG(hw, IXGBE_VFRDT(reg_idx), 0); ring->tail = adapter->io_addr + IXGBE_VFRDT(reg_idx); + /* initialize rx_buffer_info */ + memset(ring->rx_buffer_info, 0, + sizeof(struct ixgbevf_rx_buffer) * ring->count); + /* initialize Rx descriptor 0 */ rx_desc = IXGBEVF_RX_DESC(ring, 0); rx_desc->wb.upper.length = 0; @@ -2131,8 +2135,7 @@ void ixgbevf_up(struct ixgbevf_adapter *adapter) **/ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring) { - unsigned long size; - unsigned int i; + u16 i = rx_ring->next_to_clean; /* Free Rx ring sk_buff */ if (rx_ring->skb) { @@ -2140,17 +2143,11 @@ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring) rx_ring->skb = NULL; } - /* ring already cleared, nothing to do */ - if (!rx_ring->rx_buffer_info) - return; - /* Free all the Rx ring pages */ - for (i = 0; i < rx_ring->count; i++) { + while (i != rx_ring->next_to_alloc) { struct ixgbevf_rx_buffer *rx_buffer; rx_buffer = &rx_ring->rx_buffer_info[i]; - if (!rx_buffer->page) - continue; /* Invalidate cache lines that may have been written to by * device so that we avoid corrupting memory. @@ -2171,11 +2168,14 @@ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring) __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias); - rx_buffer->page = NULL; + i++; + if (i == rx_ring->count) + i = 0; } - size = sizeof(struct ixgbevf_rx_buffer) * rx_ring->count; - memset(rx_ring->rx_buffer_info, 0, size); + rx_ring->next_to_alloc = 0; + rx_ring->next_to_clean = 0; + rx_ring->next_to_use = 0; } /** @@ -3090,7 +3090,7 @@ int ixgbevf_setup_rx_resources(struct ixgbevf_ring *rx_ring) int size; size = sizeof(struct ixgbevf_rx_buffer) * rx_ring->count; - rx_ring->rx_buffer_info = vzalloc(size); + rx_ring->rx_buffer_info = vmalloc(size); if (!rx_ring->rx_buffer_info) goto err; From patchwork Mon Dec 11 18:37:25 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tantilov, Emil S" X-Patchwork-Id: 847163 X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=osuosl.org (client-ip=140.211.166.136; helo=silver.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Received: from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3ywWmd5tYNz9s03 for ; Tue, 12 Dec 2017 05:33:21 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by silver.osuosl.org (Postfix) with ESMTP id 1B70F2654F; Mon, 11 Dec 2017 18:33:20 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from silver.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 61wC+J0m41cY; Mon, 11 Dec 2017 18:33:18 +0000 
(UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by silver.osuosl.org (Postfix) with ESMTP id 3C57525D87; Mon, 11 Dec 2017 18:33:18 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138]) by ash.osuosl.org (Postfix) with ESMTP id F23351BFA4B for ; Mon, 11 Dec 2017 18:33:16 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by whitealder.osuosl.org (Postfix) with ESMTP id ED78B86E6B for ; Mon, 11 Dec 2017 18:33:16 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from whitealder.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ikjQz3nNOrF0 for ; Mon, 11 Dec 2017 18:33:16 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by whitealder.osuosl.org (Postfix) with ESMTPS id 372B586E3E for ; Mon, 11 Dec 2017 18:33:16 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 11 Dec 2017 10:33:15 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.45,392,1508828400"; d="scan'208";a="1599866" Received: from estantil-desk3.jf.intel.com (HELO localhost6.localdomain6) ([134.134.177.96]) by fmsmga007.fm.intel.com with ESMTP; 11 Dec 2017 10:33:14 -0800 From: Emil Tantilov To: intel-wired-lan@lists.osuosl.org Date: Mon, 11 Dec 2017 10:37:25 -0800 Message-ID: <20171211183725.21524.40905.stgit@localhost6.localdomain6> In-Reply-To: <20171211183537.21524.10133.stgit@localhost6.localdomain6> References: <20171211183537.21524.10133.stgit@localhost6.localdomain6> User-Agent: StGit/0.17.1-17-ge4e0 MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH 8/9] ixgbevf: improve performance and reduce size of ixgbevf_tx_map() X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Based on commit ec718254cbfe ("ixgbe: Improve performance and reduce size of ixgbe_tx_map") This change is meant to both improve the performance and reduce the size of ixgbevf_tx_map(). Expand the work done in the main loop by pushing first into tx_buffer. This allows us to pull in the dma_mapping_error check, the tx_buffer value assignment, and the initial DMA value assignment to the Tx descriptor. 
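Schematically, the restructured loop folds the skb head into the same body that handles the frags (sketch only; the descriptor writes, index advancement and IXGBE_MAX_DATA_PER_TXD splitting shown in the diff that follows are elided):

    size = skb_headlen(skb);
    data_len = skb->data_len;
    dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE);

    tx_buffer = first;

    for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
            if (dma_mapping_error(tx_ring->dev, dma))
                    goto dma_error;

            /* one place now records length/DMA for head and frags alike */
            dma_unmap_len_set(tx_buffer, len, size);
            dma_unmap_addr_set(tx_buffer, dma, dma);
            tx_desc->read.buffer_addr = cpu_to_le64(dma);

            /* ... write descriptor(s), advance i and tx_desc ... */

            if (likely(!data_len))
                    break;

            size = skb_frag_size(frag);
            data_len -= size;
            dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size, DMA_TO_DEVICE);
            tx_buffer = &tx_ring->tx_buffer_info[i];
    }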
Signed-off-by: Emil Tantilov Tested-by: Krishneil Singh Tested-by: Krishneil Singh --- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 45 +++++++++------------ 1 file changed, 20 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index a793f9e..d3415ee 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -3532,34 +3532,37 @@ static void ixgbevf_tx_map(struct ixgbevf_ring *tx_ring, struct ixgbevf_tx_buffer *first, const u8 hdr_len) { - dma_addr_t dma; struct sk_buff *skb = first->skb; struct ixgbevf_tx_buffer *tx_buffer; union ixgbe_adv_tx_desc *tx_desc; - struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0]; - unsigned int data_len = skb->data_len; - unsigned int size = skb_headlen(skb); - unsigned int paylen = skb->len - hdr_len; + struct skb_frag_struct *frag; + dma_addr_t dma; + unsigned int data_len, size; u32 tx_flags = first->tx_flags; - __le32 cmd_type; + __le32 cmd_type = ixgbevf_tx_cmd_type(tx_flags); u16 i = tx_ring->next_to_use; tx_desc = IXGBEVF_TX_DESC(tx_ring, i); - ixgbevf_tx_olinfo_status(tx_desc, tx_flags, paylen); - cmd_type = ixgbevf_tx_cmd_type(tx_flags); + ixgbevf_tx_olinfo_status(tx_desc, tx_flags, skb->len - hdr_len); + + size = skb_headlen(skb); + data_len = skb->data_len; dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE); - if (dma_mapping_error(tx_ring->dev, dma)) - goto dma_error; - /* record length, and DMA address */ - dma_unmap_len_set(first, len, size); - dma_unmap_addr_set(first, dma, dma); + tx_buffer = first; - tx_desc->read.buffer_addr = cpu_to_le64(dma); + for (frag = &skb_shinfo(skb)->frags[0];; frag++) { + if (dma_mapping_error(tx_ring->dev, dma)) + goto dma_error; + + /* record length, and DMA address */ + dma_unmap_len_set(tx_buffer, len, size); + dma_unmap_addr_set(tx_buffer, dma, dma); + + tx_desc->read.buffer_addr = cpu_to_le64(dma); - for (;;) { while (unlikely(size > IXGBE_MAX_DATA_PER_TXD)) { tx_desc->read.cmd_type_len = cmd_type | cpu_to_le32(IXGBE_MAX_DATA_PER_TXD); @@ -3570,12 +3573,12 @@ static void ixgbevf_tx_map(struct ixgbevf_ring *tx_ring, tx_desc = IXGBEVF_TX_DESC(tx_ring, 0); i = 0; } + tx_desc->read.olinfo_status = 0; dma += IXGBE_MAX_DATA_PER_TXD; size -= IXGBE_MAX_DATA_PER_TXD; tx_desc->read.buffer_addr = cpu_to_le64(dma); - tx_desc->read.olinfo_status = 0; } if (likely(!data_len)) @@ -3589,23 +3592,15 @@ static void ixgbevf_tx_map(struct ixgbevf_ring *tx_ring, tx_desc = IXGBEVF_TX_DESC(tx_ring, 0); i = 0; } + tx_desc->read.olinfo_status = 0; size = skb_frag_size(frag); data_len -= size; dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size, DMA_TO_DEVICE); - if (dma_mapping_error(tx_ring->dev, dma)) - goto dma_error; tx_buffer = &tx_ring->tx_buffer_info[i]; - dma_unmap_len_set(tx_buffer, len, size); - dma_unmap_addr_set(tx_buffer, dma, dma); - - tx_desc->read.buffer_addr = cpu_to_le64(dma); - tx_desc->read.olinfo_status = 0; - - frag++; } /* write last descriptor with RS and EOP bits */ From patchwork Mon Dec 11 18:37:31 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tantilov, Emil S" X-Patchwork-Id: 847164 X-Patchwork-Delegate: jeffrey.t.kirsher@intel.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=osuosl.org (client-ip=140.211.166.138; 
helo=whitealder.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org; receiver=) Received: from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 3ywWml2w0kz9s03 for ; Tue, 12 Dec 2017 05:33:27 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by whitealder.osuosl.org (Postfix) with ESMTP id E507086E3E; Mon, 11 Dec 2017 18:33:24 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from whitealder.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id s9JmcyR8R62z; Mon, 11 Dec 2017 18:33:22 +0000 (UTC) Received: from ash.osuosl.org (ash.osuosl.org [140.211.166.34]) by whitealder.osuosl.org (Postfix) with ESMTP id 807F186E32; Mon, 11 Dec 2017 18:33:22 +0000 (UTC) X-Original-To: intel-wired-lan@lists.osuosl.org Delivered-To: intel-wired-lan@lists.osuosl.org Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) by ash.osuosl.org (Postfix) with ESMTP id D9EFC1BFA4B for ; Mon, 11 Dec 2017 18:33:21 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id D6AE487268 for ; Mon, 11 Dec 2017 18:33:21 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 5YnqCnbHc6Ki for ; Mon, 11 Dec 2017 18:33:21 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by fraxinus.osuosl.org (Postfix) with ESMTPS id 1E5FF87267 for ; Mon, 11 Dec 2017 18:33:21 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 11 Dec 2017 10:33:20 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.45,392,1508828400"; d="scan'208";a="1705093" Received: from estantil-desk3.jf.intel.com (HELO localhost6.localdomain6) ([134.134.177.96]) by fmsmga008.fm.intel.com with ESMTP; 11 Dec 2017 10:33:20 -0800 From: Emil Tantilov To: intel-wired-lan@lists.osuosl.org Date: Mon, 11 Dec 2017 10:37:31 -0800 Message-ID: <20171211183730.21524.65151.stgit@localhost6.localdomain6> In-Reply-To: <20171211183537.21524.10133.stgit@localhost6.localdomain6> References: <20171211183537.21524.10133.stgit@localhost6.localdomain6> User-Agent: StGit/0.17.1-17-ge4e0 MIME-Version: 1.0 Subject: [Intel-wired-lan] [PATCH 9/9] ixgbevf: don't bother clearing tx_buffer_info in ixgbevf_clean_tx_ring() X-BeenThere: intel-wired-lan@osuosl.org X-Mailman-Version: 2.1.24 Precedence: list List-Id: Intel Wired Ethernet Linux Kernel Driver Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-wired-lan-bounces@osuosl.org Sender: "Intel-wired-lan" Signed-off-by: Emil Tantilov Tested-by: Krishneil Singh Tested-by: Krishneil Singh --- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 115 +++++++++++++-------- 1 file changed, 72 insertions(+), 43 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index d3415ee..9b3d43d 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -206,28 +206,6 @@ static void ixgbevf_set_ivar(struct ixgbevf_adapter *adapter, s8 
direction, } } -static void ixgbevf_unmap_and_free_tx_resource(struct ixgbevf_ring *tx_ring, - struct ixgbevf_tx_buffer *tx_buffer) -{ - if (tx_buffer->skb) { - dev_kfree_skb_any(tx_buffer->skb); - if (dma_unmap_len(tx_buffer, len)) - dma_unmap_single(tx_ring->dev, - dma_unmap_addr(tx_buffer, dma), - dma_unmap_len(tx_buffer, len), - DMA_TO_DEVICE); - } else if (dma_unmap_len(tx_buffer, len)) { - dma_unmap_page(tx_ring->dev, - dma_unmap_addr(tx_buffer, dma), - dma_unmap_len(tx_buffer, len), - DMA_TO_DEVICE); - } - tx_buffer->next_to_watch = NULL; - tx_buffer->skb = NULL; - dma_unmap_len_set(tx_buffer, len, 0); - /* tx_buffer must be completely set up in the transmit path */ -} - static u64 ixgbevf_get_tx_completed(struct ixgbevf_ring *ring) { return ring->stats.packets; @@ -349,7 +327,6 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector, DMA_TO_DEVICE); /* clear tx_buffer data */ - tx_buffer->skb = NULL; dma_unmap_len_set(tx_buffer, len, 0); /* unmap remaining buffers */ @@ -1576,6 +1553,10 @@ static void ixgbevf_configure_tx_ring(struct ixgbevf_adapter *adapter, txdctl |= (1u << 8) | /* HTHRESH = 1 */ 32; /* PTHRESH = 32 */ + /* reinitialize tx_buffer_info */ + memset(ring->tx_buffer_info, 0, + sizeof(struct ixgbevf_tx_buffer) * ring->count); + clear_bit(__IXGBEVF_HANG_CHECK_ARMED, &ring->state); IXGBE_WRITE_REG(hw, IXGBE_VFTXDCTL(reg_idx), txdctl); @@ -2184,23 +2165,57 @@ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring) **/ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring) { - struct ixgbevf_tx_buffer *tx_buffer_info; - unsigned long size; - unsigned int i; + u16 i = tx_ring->next_to_clean; + struct ixgbevf_tx_buffer *tx_buffer = &tx_ring->tx_buffer_info[i]; - if (!tx_ring->tx_buffer_info) - return; + while (i != tx_ring->next_to_use) { + union ixgbe_adv_tx_desc *eop_desc, *tx_desc; + + /* Free all the Tx ring sk_buffs */ + dev_kfree_skb_any(tx_buffer->skb); + + /* unmap skb header data */ + dma_unmap_single(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + + /* check for eop_desc to determine the end of the packet */ + eop_desc = tx_buffer->next_to_watch; + tx_desc = IXGBEVF_TX_DESC(tx_ring, i); + + /* unmap remaining buffers */ + while (tx_desc != eop_desc) { + tx_buffer++; + tx_desc++; + i++; + if (unlikely(i == tx_ring->count)) { + i = 0; + tx_buffer = tx_ring->tx_buffer_info; + tx_desc = IXGBEVF_TX_DESC(tx_ring, 0); + } + + /* unmap any remaining paged data */ + if (dma_unmap_len(tx_buffer, len)) + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } - /* Free all the Tx ring sk_buffs */ - for (i = 0; i < tx_ring->count; i++) { - tx_buffer_info = &tx_ring->tx_buffer_info[i]; - ixgbevf_unmap_and_free_tx_resource(tx_ring, tx_buffer_info); + /* move us one more past the eop_desc for start of next pkt */ + tx_buffer++; + i++; + if (unlikely(i == tx_ring->count)) { + i = 0; + tx_buffer = tx_ring->tx_buffer_info; + } } - size = sizeof(struct ixgbevf_tx_buffer) * tx_ring->count; - memset(tx_ring->tx_buffer_info, 0, size); + /* reset next_to_use and next_to_clean */ + tx_ring->next_to_use = 0; + tx_ring->next_to_clean = 0; - memset(tx_ring->desc, 0, tx_ring->size); } /** @@ -3030,7 +3045,7 @@ int ixgbevf_setup_tx_resources(struct ixgbevf_ring *tx_ring) int size; size = sizeof(struct ixgbevf_tx_buffer) * tx_ring->count; - tx_ring->tx_buffer_info = vzalloc(size); + tx_ring->tx_buffer_info = vmalloc(size); if 
(!tx_ring->tx_buffer_info) goto err; @@ -3634,18 +3649,32 @@ static void ixgbevf_tx_map(struct ixgbevf_ring *tx_ring, return; dma_error: dev_err(tx_ring->dev, "TX DMA map failed\n"); + tx_buffer = &tx_ring->tx_buffer_info[i]; /* clear dma mappings for failed tx_buffer_info map */ - for (;;) { + while (tx_buffer != first) { + if (dma_unmap_len(tx_buffer, len)) + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buffer, len, 0); + + if (i-- == 0) + i += tx_ring->count; tx_buffer = &tx_ring->tx_buffer_info[i]; - ixgbevf_unmap_and_free_tx_resource(tx_ring, tx_buffer); - if (tx_buffer == first) - break; - if (i == 0) - i = tx_ring->count; - i--; } + if (dma_unmap_len(tx_buffer, len)) + dma_unmap_single(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buffer, len, 0); + + dev_kfree_skb_any(tx_buffer->skb); + tx_buffer->skb = NULL; + tx_ring->next_to_use = i; }