Patchwork: twice past the taps, thence out to net?

Submitter Eric Dumazet
Date Dec. 15, 2011, 7 p.m.
Message ID <1323975606.2769.24.camel@edumazet-HP-Compaq-6005-Pro-SFF-PC>
Permalink /patch/131720/
State RFC
Delegated to: David Miller

Comments

Eric Dumazet - Dec. 15, 2011, 7 p.m.
On Thursday, December 15, 2011 at 10:44 -0800, Stephen Hemminger wrote:
> On Thu, 15 Dec 2011 10:32:56 -0800
> Rick Jones <rick.jones2@hp.com> wrote:
> 
> > 
> > > More exactly, we call dev_queue_xmit_nit() from dev_hard_start_xmit()
> > > _before_ giving skb to device driver.
> > >
> > > If device driver returns NETDEV_TX_BUSY, and a qdisc was setup on the
> > > device, packet is requeued.
> > >
> > > Later, when queue is allowed to send again packets, packet is
> > > retransmitted (and traced a second time in dev_queue_xmit_nit())
> > 
> > Is this then an unintended consequence bug, or a known feature?
> > 
> > rick
> > 
> > > You can see the 'requeues' counter from "tc -s -d qdisc" output :
> > >
> > > qdisc mq 0: dev eth2 root
> > >   Sent 29421597369 bytes 20301716 pkt (dropped 0, overlimits 0 requeues 371)
> > >   backlog 0b 0p requeues 371
> > 
> > Sure enough:
> > 
> > $ tc -s -d qdisc
> > qdisc mq 0: dev eth0 root
> >   Sent 2212158799862 bytes 1938268098 pkt (dropped 0, overlimits 0 
> > requeues 4975139)
> >   backlog 0b 0p requeues 4975139
> > 
> > rick jones
> 
> Devices work better if the driver proactively manages stop_queue/wake_queue.
> Old devices used TX_BUSY, but newer devices tend to manage the queue
> themselves.
> 

Some 'new' drivers like igb can be fooled when the skb is GSO segmented?

Because igb_xmit_frame_ring() needs skb_shinfo(skb)->nr_frags + 4
descriptors, igb should stop its queue not at MAX_SKB_FRAGS + 4, but at
MAX_SKB_FRAGS * 4.
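As an illustration of the pattern under discussion, here is a minimal,
self-contained sketch (hypothetical struct and function names, not the actual
igb code) of the stop-early reservation that igb_maybe_stop_tx() implements:
the driver stops its own queue while enough free descriptors remain for a
worst-case next frame, so it never has to return NETDEV_TX_BUSY and force the
qdisc to requeue (and the taps to trace the packet a second time):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_SKB_FRAGS 18  /* 16 frags, plus 2 on 4096-byte-page systems */

/* Simplified descriptor ring: indices wrap modulo 'count'. */
struct tx_ring {
	int count;
	int next_to_use;	/* producer index (driver fills) */
	int next_to_clean;	/* consumer index (hardware done) */
	bool stopped;
};

/* Free descriptors, keeping one slot empty to distinguish full from empty. */
static int ring_unused(const struct tx_ring *r)
{
	int used = r->next_to_use - r->next_to_clean;

	if (used < 0)
		used += r->count;
	return r->count - used - 1;
}

/* Mirrors the logic of igb_maybe_stop_tx(): after queuing a frame, stop
 * the queue if fewer than 'size' descriptors remain, so the next frame
 * can always be accepted without returning NETDEV_TX_BUSY. */
static void maybe_stop_tx(struct tx_ring *r, int size)
{
	if (ring_unused(r) < size)
		r->stopped = true;
}
```

The point of the thread is choosing 'size': it must cover the worst-case
descriptor consumption of a single frame, or the reservation fails exactly
when a large (e.g. GSO) skb arrives.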


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Rick Jones - Dec. 15, 2011, 10:22 p.m.
On 12/15/2011 11:00 AM, Eric Dumazet wrote:
>> Devices work better if the driver proactively manages stop_queue/wake_queue.
>> Old devices used TX_BUSY, but newer devices tend to manage the queue
>> themselves.
>>
>
> Some 'new' drivers like igb can be fooled when the skb is GSO segmented?
>
> Because igb_xmit_frame_ring() needs skb_shinfo(skb)->nr_frags + 4
> descriptors, igb should stop its queue not at MAX_SKB_FRAGS + 4, but at
> MAX_SKB_FRAGS * 4.
>
> diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
> index 89d576c..989da36 100644
> --- a/drivers/net/ethernet/intel/igb/igb_main.c
> +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> @@ -4370,7 +4370,7 @@ netdev_tx_t igb_xmit_frame_ring(struct sk_buff *skb,
>   	igb_tx_map(tx_ring, first, hdr_len);
>
>   	/* Make sure there is space in the ring for the next send. */
> -	igb_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS + 4);
> +	igb_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS * 4);
>
>   	return NETDEV_TX_OK;


Is there a minimum transmit queue length here?  I get the impression 
that MAX_SKB_FRAGS is at least 16, and is 18 on a system with 4096-byte 
pages.  The previous addition would then be OK so long as the TX queue 
was always at least 22 entries in size, but now it would always have to 
be at least 72?

I guess things are "OK" at the moment:

raj@tardy:~/net-next/drivers/net/ethernet/intel/igb$ grep IGB_MIN_TXD *.[ch]
igb_ethtool.c:	new_tx_count = max_t(u16, new_tx_count, IGB_MIN_TXD);
igb.h:#define IGB_MIN_TXD                       80

but is that getting a little close?

rick jones
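Rick's margin arithmetic can be spelled out explicitly. A quick sketch
(the `headroom` helper is hypothetical; the constant values are the ones
quoted in the thread, MAX_SKB_FRAGS assumed 18 on 4096-byte-page systems
and IGB_MIN_TXD 80 as grepped from igb.h):

```c
#define MAX_SKB_FRAGS 18	/* 16 frags, plus 2 on 4096-byte-page systems */
#define IGB_MIN_TXD   80	/* smallest TX ring igb_ethtool.c will allow */

/* Descriptors left in a ring of 'ring_size' entries once the driver's
 * stop threshold 'stop_threshold' is reserved for the worst-case frame. */
static int headroom(int ring_size, int stop_threshold)
{
	return ring_size - stop_threshold;
}
```

With the old threshold of MAX_SKB_FRAGS + 4 = 22, the minimum ring keeps
58 descriptors of headroom; with the proposed MAX_SKB_FRAGS * 4 = 72, only
8 remain, which is the "getting a little close" concern above.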

Patch

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 89d576c..989da36 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -4370,7 +4370,7 @@  netdev_tx_t igb_xmit_frame_ring(struct sk_buff *skb,
 	igb_tx_map(tx_ring, first, hdr_len);
 
 	/* Make sure there is space in the ring for the next send. */
-	igb_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS + 4);
+	igb_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS * 4);
 
 	return NETDEV_TX_OK;