From patchwork Sun Oct 18 14:02:44 2015
X-Patchwork-Submitter: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
X-Patchwork-Id: 531942
X-Patchwork-Delegate: davem@davemloft.net
From: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
To: Sebastian Hesselbarth, netdev@vger.kernel.org
Cc: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
Subject: [PATCH net 2/2] net: mv643xx_eth: Defer writing the first TX descriptor when using TSO
Date: Sun, 18 Oct 2015 16:02:44 +0200
Message-Id: <1445176964-22760-3-git-send-email-philipp@familie-kirchhofer.de>
X-Mailer: git-send-email 2.6.2
In-Reply-To: <1445176964-22760-1-git-send-email-philipp@familie-kirchhofer.de>
References: <1445176964-22760-1-git-send-email-philipp@familie-kirchhofer.de>
List-ID: <netdev.vger.kernel.org>

To prevent a race between the TX DMA engine and the CPU, the write of
the first transmit descriptor must be deferred until all following
descriptors have been updated. Otherwise the network controller may
start transmitting before all packet descriptors are set up correctly,
which leads to data corruption or an aborted transmit operation.

This deferral is already done in the non-TSO TX path; implement it in
the TSO TX path as well.
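The fix follows the same publish-last discipline as the non-TSO path:
set up every descriptor of the frame first, and only then hand the
first descriptor to the hardware, behind a write barrier. A minimal
kernel-style sketch of the pattern (hypothetical demo_* names, no ring
wrap-around handling, u32/wmb() as provided by the kernel headers), not
the driver's actual code:

struct demo_desc {
	u32 cmd_sts;	/* ownership bit here hands the descriptor to HW */
	u32 byte_cnt;
	u32 buf_ptr;
};

/* Publish 'count' prebuilt cmd_sts words starting at ring[first].
 * Every descriptor except the first is written immediately; the first
 * one, which carries the ownership bit the DMA engine polls, is
 * written last, after a write barrier.
 */
static void demo_publish_frame(struct demo_desc *ring, int first,
			       const u32 *cmd_sts, int count)
{
	int i;

	for (i = 1; i < count; i++)
		ring[first + i].cmd_sts = cmd_sts[i];

	/* Ensure all later descriptors are visible before the first
	 * descriptor is marked owned-by-DMA.
	 */
	wmb();
	ring[first].cmd_sts = cmd_sts[0];
}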
Signed-off-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
---
 drivers/net/ethernet/marvell/mv643xx_eth.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
index 68073bc..5124535 100644
--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
+++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
@@ -791,7 +791,8 @@ txq_put_data_tso(struct net_device *dev, struct tx_queue *txq,
 }
 
 static inline void
-txq_put_hdr_tso(struct sk_buff *skb, struct tx_queue *txq, int length)
+txq_put_hdr_tso(struct sk_buff *skb, struct tx_queue *txq, int length,
+		u32 *first_cmd_sts, bool first_desc)
 {
 	struct mv643xx_eth_private *mp = txq_to_mp(txq);
 	int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
@@ -800,6 +801,7 @@ txq_put_hdr_tso(struct sk_buff *skb, struct tx_queue *txq, int length)
 	int ret;
 	u32 cmd_csum = 0;
 	u16 l4i_chk = 0;
+	u32 cmd_sts;
 
 	tx_index = txq->tx_curr_desc;
 	desc = &txq->tx_desc_area[tx_index];
@@ -815,9 +817,17 @@ txq_put_hdr_tso(struct sk_buff *skb, struct tx_queue *txq, int length)
 	desc->byte_cnt = hdr_len;
 	desc->buf_ptr = txq->tso_hdrs_dma +
 			txq->tx_curr_desc * TSO_HEADER_SIZE;
-	desc->cmd_sts = cmd_csum | BUFFER_OWNED_BY_DMA | TX_FIRST_DESC |
+	cmd_sts = cmd_csum | BUFFER_OWNED_BY_DMA | TX_FIRST_DESC |
 				   GEN_CRC;
 
+	/* Defer updating the first command descriptor until all
+	 * following descriptors have been written.
+	 */
+	if (first_desc)
+		*first_cmd_sts = cmd_sts;
+	else
+		desc->cmd_sts = cmd_sts;
+
 	txq->tx_curr_desc++;
 	if (txq->tx_curr_desc == txq->tx_ring_size)
 		txq->tx_curr_desc = 0;
@@ -831,6 +841,8 @@ static int txq_submit_tso(struct tx_queue *txq, struct sk_buff *skb,
 	int desc_count = 0;
 	struct tso_t tso;
 	int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+	struct tx_desc *first_tx_desc;
+	u32 first_cmd_sts = 0;
 
 	/* Count needed descriptors */
 	if ((txq->tx_desc_count + tso_count_descs(skb)) >= txq->tx_ring_size) {
@@ -838,11 +850,14 @@ static int txq_submit_tso(struct tx_queue *txq, struct sk_buff *skb,
 		return -EBUSY;
 	}
 
+	first_tx_desc = &txq->tx_desc_area[txq->tx_curr_desc];
+
 	/* Initialize the TSO handler, and prepare the first payload */
 	tso_start(skb, &tso);
 
 	total_len = skb->len - hdr_len;
 	while (total_len > 0) {
+		bool first_desc = (desc_count == 0);
 		char *hdr;
 
 		data_left = min_t(int, skb_shinfo(skb)->gso_size, total_len);
@@ -852,7 +867,8 @@ static int txq_submit_tso(struct tx_queue *txq, struct sk_buff *skb,
 		/* prepare packet headers: MAC + IP + TCP */
 		hdr = txq->tso_hdrs + txq->tx_curr_desc * TSO_HEADER_SIZE;
 		tso_build_hdr(skb, hdr, &tso, data_left, total_len == 0);
-		txq_put_hdr_tso(skb, txq, data_left);
+		txq_put_hdr_tso(skb, txq, data_left, &first_cmd_sts,
+				first_desc);
 
 		while (data_left > 0) {
 			int size;
@@ -872,6 +888,10 @@ static int txq_submit_tso(struct tx_queue *txq, struct sk_buff *skb,
 	__skb_queue_tail(&txq->tx_skb, skb);
 	skb_tx_timestamp(skb);
 
+	/* ensure all other descriptors are written before first cmd_sts */
+	wmb();
+	first_tx_desc->cmd_sts = first_cmd_sts;
+
 	/* clear TX_END status */
 	mp->work_tx_end &= ~(1 << txq->index);
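For comparison, the non-TSO path in this driver already ends
txq_submit_skb() with the same sequence, including the identical
"ensure all other descriptors are written before first cmd_sts"
comment: the first descriptor's cmd_sts is kept aside while the
remaining descriptors are filled in, and is only written back after a
wmb(). With this patch the TSO path mirrors that ordering:
txq_put_hdr_tso() captures the value into first_cmd_sts on the first
iteration, and txq_submit_tso() writes it through first_tx_desc once
all header and payload descriptors of the skb are in place.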