From patchwork Fri Oct 10 20:17:40 2014
X-Patchwork-Submitter: Alexander H Duyck
X-Patchwork-Id: 398759
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH] fm10k: Add skb->xmit_more support
To: e1000-devel@lists.sourceforge.net, netdev@vger.kernel.org,
 jeffrey.t.kirsher@intel.com
From: alexander.duyck@gmail.com
Cc: matthew.vick@intel.com
Date: Fri, 10 Oct 2014 13:17:40 -0700
Message-ID: <20141010201714.20593.67813.stgit@ahduyck-workstation.home>
User-Agent: StGit/0.16
X-Mailing-List: netdev@vger.kernel.org

From: Alexander Duyck <alexander.duyck@gmail.com>

This change adds support for skb->xmit_more based on the changes that were
made to igb to support the feature.  The main changes are moving up the
check for maybe_stop_tx so that we can check netif_xmit_stopped to
determine if we must write the tail because we can add no further buffers.

Signed-off-by: Alexander Duyck <alexander.duyck@gmail.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
---
 drivers/net/ethernet/intel/fm10k/fm10k_main.c |   65 +++++++++++++------------
 1 file changed, 34 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
index 6c800a3..8ad7ff4 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
@@ -930,6 +930,30 @@ static bool fm10k_tx_desc_push(struct fm10k_ring *tx_ring,
 	return i == tx_ring->count;
 }
 
+static int __fm10k_maybe_stop_tx(struct fm10k_ring *tx_ring, u16 size)
+{
+	netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+	smp_mb();
+
+	/* We need to check again in a case another CPU has just
+	 * made room available.
+	 */
+	if (likely(fm10k_desc_unused(tx_ring) < size))
+		return -EBUSY;
+
+	/* A reprieve! - use start_queue because it doesn't call schedule */
+	netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+	++tx_ring->tx_stats.restart_queue;
+	return 0;
+}
+
+static inline int fm10k_maybe_stop_tx(struct fm10k_ring *tx_ring, u16 size)
+{
+	if (likely(fm10k_desc_unused(tx_ring) >= size))
+		return 0;
+	return __fm10k_maybe_stop_tx(tx_ring, size);
+}
+
 static void fm10k_tx_map(struct fm10k_ring *tx_ring,
 			 struct fm10k_tx_buffer *first)
 {
@@ -1023,13 +1047,18 @@ static void fm10k_tx_map(struct fm10k_ring *tx_ring,
 
 	tx_ring->next_to_use = i;
 
+	/* Make sure there is space in the ring for the next send. */
+	fm10k_maybe_stop_tx(tx_ring, DESC_NEEDED);
+
 	/* notify HW of packet */
-	writel(i, tx_ring->tail);
+	if (netif_xmit_stopped(txring_txq(tx_ring)) || !skb->xmit_more) {
+		writel(i, tx_ring->tail);
 
-	/* we need this if more than one processor can write to our tail
-	 * at a time, it synchronizes IO on IA64/Altix systems
-	 */
-	mmiowb();
+		/* we need this if more than one processor can write to our tail
+		 * at a time, it synchronizes IO on IA64/Altix systems
+		 */
+		mmiowb();
+	}
 
 	return;
 dma_error:
@@ -1049,30 +1078,6 @@ dma_error:
 	tx_ring->next_to_use = i;
 }
 
-static int __fm10k_maybe_stop_tx(struct fm10k_ring *tx_ring, u16 size)
-{
-	netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
-
-	smp_mb();
-
-	/* We need to check again in a case another CPU has just
-	 * made room available. */
-	if (likely(fm10k_desc_unused(tx_ring) < size))
-		return -EBUSY;
-
-	/* A reprieve! - use start_queue because it doesn't call schedule */
-	netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
-	++tx_ring->tx_stats.restart_queue;
-	return 0;
-}
-
-static inline int fm10k_maybe_stop_tx(struct fm10k_ring *tx_ring, u16 size)
-{
-	if (likely(fm10k_desc_unused(tx_ring) >= size))
-		return 0;
-	return __fm10k_maybe_stop_tx(tx_ring, size);
-}
-
 netdev_tx_t fm10k_xmit_frame_ring(struct sk_buff *skb,
 				  struct fm10k_ring *tx_ring)
 {
@@ -1117,8 +1122,6 @@ netdev_tx_t fm10k_xmit_frame_ring(struct sk_buff *skb,
 
 	fm10k_tx_map(tx_ring, first);
 
-	fm10k_maybe_stop_tx(tx_ring, DESC_NEEDED);
-
 	return NETDEV_TX_OK;
 
 out_drop: