From patchwork Tue Oct 23 19:39:20 2012
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 193562
X-Patchwork-Delegate: davem@davemloft.net
Subject: Re: [Pv-drivers] 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted
From: Eric Dumazet
To: Shreyas Bhatewara
Cc: "VMware, Inc.", netdev@vger.kernel.org, edumazet@google.com,
 linux-kernel@vger.kernel.org, jongman heo
In-Reply-To: <1351000246.8609.1926.camel@edumazet-glaptop>
References: <1103939870.6550404.1350986530909.JavaMail.root@vmware.com>
 <1351000246.8609.1926.camel@edumazet-glaptop>
Date: Tue, 23 Oct 2012 21:39:20 +0200
Message-ID: <1351021160.8609.2503.camel@edumazet-glaptop>
X-Mailing-List: netdev@vger.kernel.org

On Tue, 2012-10-23 at 15:50 +0200, Eric Dumazet wrote:
> Only the skb head is handled in the code you copy/pasted.
>
> You need to generalize that to the code at lines ~754.
>
> Then, the estimated number of descriptors is wrong:
>
> 	/* conservatively estimate # of descriptors to use */
> 	count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) +
> 		skb_shinfo(skb)->nr_frags + 1;
>
> Yes, you need a more precise estimation, and vmxnet3_map_pkt() should
> eventually split too big frags.
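To make the undercount concrete, here is a minimal standalone sketch. The sizes are hypothetical, and the two macros are re-declared here under the assumption that they match their definitions in vmxnet3_int.h (a 1 << 14 byte per-descriptor limit, with VMXNET3_TXD_NEEDED() rounding a byte count up to a descriptor count):

#include <stdio.h>

/* assumed to match the definitions in vmxnet3_int.h */
#define VMXNET3_MAX_TX_BUF_SIZE	(1 << 14)
#define VMXNET3_TXD_NEEDED(size) \
	(((size) + VMXNET3_MAX_TX_BUF_SIZE - 1) / VMXNET3_MAX_TX_BUF_SIZE)

int main(void)
{
	unsigned int headlen = 254;		/* hypothetical linear part */
	unsigned int frag_size = 60 * 1024;	/* hypothetical 60KB TSO frag */
	unsigned int nr_frags = 1;

	/* old estimate: exactly one descriptor per frag, whatever its size */
	unsigned int old = VMXNET3_TXD_NEEDED(headlen) + nr_frags + 1;

	/* real need: a 60KB frag spans four 16KB tx buffers */
	unsigned int real = VMXNET3_TXD_NEEDED(headlen) +
			    VMXNET3_TXD_NEEDED(frag_size) + 1;

	printf("estimated %u descriptors, really need %u\n", old, real);
	return 0;	/* prints: estimated 3 descriptors, really need 6 */
}

Since a frag larger than VMXNET3_MAX_TX_BUF_SIZE cannot be described by a single descriptor, vmxnet3_map_pkt() has to split it, which is what the patch below does.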
raw patch would be :

diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index ce9d4f2..0ae1bcc 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -744,28 +744,43 @@ vmxnet3_map_pkt(struct sk_buff *skb, struct vmxnet3_tx_ctx *ctx,
 
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 		const struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+		u32 buf_size;
 
-		tbi = tq->buf_info + tq->tx_ring.next2fill;
-		tbi->map_type = VMXNET3_MAP_PAGE;
-		tbi->dma_addr = skb_frag_dma_map(&adapter->pdev->dev, frag,
-						 0, skb_frag_size(frag),
-						 DMA_TO_DEVICE);
+		buf_offset = 0;
+		len = skb_frag_size(frag);
+		while (len) {
+			tbi = tq->buf_info + tq->tx_ring.next2fill;
+			if (len < VMXNET3_MAX_TX_BUF_SIZE) {
+				buf_size = len;
+				dw2 |= len;
+			} else {
+				buf_size = VMXNET3_MAX_TX_BUF_SIZE;
+				/* spec says that for TxDesc.len, 0 == 2^14 */
+			}
+			tbi->map_type = VMXNET3_MAP_PAGE;
+			tbi->dma_addr = skb_frag_dma_map(&adapter->pdev->dev, frag,
+							 buf_offset, buf_size,
+							 DMA_TO_DEVICE);
 
-		tbi->len = skb_frag_size(frag);
+			tbi->len = buf_size;
 
-		gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;
-		BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);
+			gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;
+			BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);
 
-		gdesc->txd.addr = cpu_to_le64(tbi->dma_addr);
-		gdesc->dword[2] = cpu_to_le32(dw2 | skb_frag_size(frag));
-		gdesc->dword[3] = 0;
+			gdesc->txd.addr = cpu_to_le64(tbi->dma_addr);
+			gdesc->dword[2] = cpu_to_le32(dw2);
+			gdesc->dword[3] = 0;
 
-		dev_dbg(&adapter->netdev->dev,
-			"txd[%u]: 0x%llu %u %u\n",
-			tq->tx_ring.next2fill, le64_to_cpu(gdesc->txd.addr),
-			le32_to_cpu(gdesc->dword[2]), gdesc->dword[3]);
-		vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);
-		dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;
+			dev_dbg(&adapter->netdev->dev,
+				"txd[%u]: 0x%llu %u %u\n",
+				tq->tx_ring.next2fill, le64_to_cpu(gdesc->txd.addr),
+				le32_to_cpu(gdesc->dword[2]), gdesc->dword[3]);
+			vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);
+			dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;
+
+			len -= buf_size;
+			buf_offset += buf_size;
+		}
 	}
 
 	ctx->eop_txd = gdesc;
@@ -886,6 +901,18 @@ vmxnet3_prepare_tso(struct sk_buff *skb,
 	}
 }
 
+static int txd_estimate(const struct sk_buff *skb)
+{
+	int count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) + 1;
+	int i;
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		const struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+
+		count += VMXNET3_TXD_NEEDED(skb_frag_size(frag));
+	}
+	return count;
+}
 
 /*
  * Transmits a pkt thru a given tq
@@ -914,9 +941,7 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
 	union Vmxnet3_GenericDesc tempTxDesc;
 #endif
 
-	/* conservatively estimate # of descriptors to use */
-	count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) +
-		skb_shinfo(skb)->nr_frags + 1;
+	count = txd_estimate(skb);
 
 	ctx.ipv4 = (vlan_get_protocol(skb) == cpu_to_be16(ETH_P_IP));
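The new while (len) loop carves each frag into descriptor-sized buffers. As a standalone sketch of just the chunking arithmetic (hypothetical frag size, same assumed 16KB limit as above):

#include <stdio.h>

#define VMXNET3_MAX_TX_BUF_SIZE	(1 << 14)	/* assumed, see above */

int main(void)
{
	unsigned int len = 60 * 1024;	/* hypothetical frag size */
	unsigned int buf_offset = 0;
	unsigned int buf_size, txd = 0;

	while (len) {
		/* clamp each buffer to the per-descriptor limit, the same
		 * shape as the while (len) loop added to vmxnet3_map_pkt()
		 */
		buf_size = len < VMXNET3_MAX_TX_BUF_SIZE ?
			   len : VMXNET3_MAX_TX_BUF_SIZE;
		printf("txd[%u]: offset %u, len %u\n",
		       txd++, buf_offset, buf_size);
		len -= buf_size;
		buf_offset += buf_size;
	}
	/* 60KB -> 16K + 16K + 16K + 12K, i.e. the 4 descriptors that
	 * txd_estimate() now reserves for this frag up front
	 */
	return 0;
}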