From patchwork Sat Nov 11 02:10:00 2017
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 836985
X-Patchwork-Delegate: davem@davemloft.net
From: Zhu Yanjun
To: netdev@vger.kernel.org, stephen@networkplumber.org,
	keescook@chromium.org, edumazet@google.com, tremyfr@gmail.com
Subject: [PATCH net-next 1/1] forcedeth: remove redundant assignments in xmit
Date: Fri, 10 Nov 2017 21:10:00 -0500
Message-Id: <1510366200-26318-1-git-send-email-yanjun.zhu@oracle.com>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: netdev@vger.kernel.org

In the xmit path, prev_tx and prev_tx_ctx are reassigned on every
iteration of the descriptor fill loops, although only the values from
the last iteration are used. It is enough to derive them once, after
the loops complete, from the final put_tx and np->put_tx_ctx. In
long-running tests, throughput is better than before.

CC: Srinivas Eeda
CC: Joe Jin
CC: Junxiao Bi
Signed-off-by: Zhu Yanjun
---
 drivers/net/ethernet/nvidia/forcedeth.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index 63a9e1e..22912e7 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -2218,8 +2218,6 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	/* setup the header buffer */
 	do {
-		prev_tx = put_tx;
-		prev_tx_ctx = np->put_tx_ctx;
 		bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
 		np->put_tx_ctx->dma = dma_map_single(&np->pci_dev->dev,
 						     skb->data + offset, bcnt,
@@ -2254,8 +2252,6 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		offset = 0;
 
 		do {
-			prev_tx = put_tx;
-			prev_tx_ctx = np->put_tx_ctx;
 			if (!start_tx_ctx)
 				start_tx_ctx = tmp_tx_ctx = np->put_tx_ctx;
 
@@ -2296,6 +2292,16 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		} while (frag_size);
 	}
 
+	if (unlikely(put_tx == np->first_tx.orig))
+		prev_tx = np->last_tx.orig;
+	else
+		prev_tx = put_tx - 1;
+
+	if (unlikely(np->put_tx_ctx == np->first_tx_ctx))
+		prev_tx_ctx = np->last_tx_ctx;
+	else
+		prev_tx_ctx = np->put_tx_ctx - 1;
+
 	/* set last fragment flag */
 	prev_tx->flaglen |= cpu_to_le32(tx_flags_extra);
 
@@ -2368,8 +2374,6 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 
 	/* setup the header buffer */
 	do {
-		prev_tx = put_tx;
-		prev_tx_ctx = np->put_tx_ctx;
 		bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
 		np->put_tx_ctx->dma = dma_map_single(&np->pci_dev->dev,
 						     skb->data + offset, bcnt,
@@ -2405,8 +2409,6 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 		offset = 0;
 
 		do {
-			prev_tx = put_tx;
-			prev_tx_ctx = np->put_tx_ctx;
 			bcnt = (frag_size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : frag_size;
 			if (!start_tx_ctx)
 				start_tx_ctx = tmp_tx_ctx = np->put_tx_ctx;
 
@@ -2447,6 +2449,16 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
 		} while (frag_size);
 	}
 
+	if (unlikely(put_tx == np->first_tx.ex))
+		prev_tx = np->last_tx.ex;
+	else
+		prev_tx = put_tx - 1;
+
+	if (unlikely(np->put_tx_ctx == np->first_tx_ctx))
+		prev_tx_ctx = np->last_tx_ctx;
+	else
+		prev_tx_ctx = np->put_tx_ctx - 1;
+
 	/* set last fragment flag */
 	prev_tx->flaglen |= cpu_to_le32(NV_TX2_LASTPACKET);
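
For readers outside the driver, here is a minimal standalone sketch of
the pattern the patch introduces: instead of storing prev_tx/prev_tx_ctx
on every loop iteration, derive the previous ring entry once from the
final put pointer, handling wraparound. This is not driver code; RING_SIZE,
struct desc, prev_desc(), and the flag values are hypothetical stand-ins
for the driver's descriptor ring, first_tx/last_tx, and the computation
added above.

#include <stdio.h>

#define RING_SIZE 8	/* hypothetical ring size, not a driver constant */

struct desc {
	unsigned int flaglen;
};

static struct desc ring[RING_SIZE];
static struct desc *const first_tx = &ring[0];
static struct desc *const last_tx = &ring[RING_SIZE - 1];

/* Return the entry preceding put_tx, wrapping from first to last. */
static struct desc *prev_desc(struct desc *put_tx)
{
	if (put_tx == first_tx)		/* put_tx wrapped past the end */
		return last_tx;
	return put_tx - 1;		/* common case: plain decrement */
}

int main(void)
{
	struct desc *put_tx = first_tx;
	int i;

	/* Fill a few entries; note: no prev tracking inside the loop. */
	for (i = 0; i < 3; i++) {
		put_tx->flaglen = 0x100u | (unsigned int)i;
		put_tx = (put_tx == last_tx) ? first_tx : put_tx + 1;
	}

	/* One computation after the loop replaces per-iteration stores. */
	prev_desc(put_tx)->flaglen |= 0x8000u;	/* e.g. a LASTPACKET flag */

	printf("last filled slot %td, flaglen 0x%x\n",
	       prev_desc(put_tx) - ring, prev_desc(put_tx)->flaglen);
	return 0;
}

The trade-off in the patch is the same as in this sketch: two stores per
fragment are replaced by one compare and one pointer decrement after the
loops, and the wrap branch is annotated unlikely() since put_tx sits at
first_tx only when the last fill wrapped the ring. The same derivation is
repeated for the context ring (np->put_tx_ctx against np->first_tx_ctx and
np->last_tx_ctx).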