bonding: set correct tx_queue_len for bond_dev

Message ID 51F7694A.1060703@huawei.com
State Rejected, archived
Delegated to: David Miller

Commit Message

Yang Yingliang July 30, 2013, 7:20 a.m. UTC
When we shape bandwidth on bond_dev with htb, the result is not precise.
Because the sk_buff queue grows longer than sch->limit, whose value is 0,
some packets are dropped. As a result, the bond device cannot make full
use of the bandwidth, which hurts accuracy.

With htb, sch->limit is set from bond_dev->tx_queue_len, which is
initialized to 0 in bond_setup().
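
(For reference, a rough sketch of where that zero comes from and how the
default fifo child created under htb picks it up; this is paraphrased from
the bond_main.c and sch_fifo.c of this era, not a verbatim quote:)

    /* bond_setup(): the bond device starts out with no TX queue at all. */
    bond_dev->tx_queue_len = 0;

    /* fifo_init(): when the leaf class gets its default pfifo with no
     * options, the limit falls back to the device's tx_queue_len, or to
     * a single packet when that is zero.
     */
    u32 limit = qdisc_dev(sch)->tx_queue_len ? : 1;
    sch->limit = limit;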

To fix this issue, set bond_dev's tx_queue_len from the slave_dev's.

Example:
tc qdisc add dev bond0 root handle 1: htb default 1
tc class add dev bond0 parent 1:0 classid 1:1 htb rate 1Gbit ceil 1Gbit
iperf -c host -t 30 -i 10

With old bonding:
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   579 MBytes   486 Mbits/sec
[  3] 10.0-20.0 sec   604 MBytes   507 Mbits/sec
[  3] 20.0-30.0 sec   551 MBytes   462 Mbits/sec
[  3]  0.0-30.0 sec  1.69 GBytes   485 Mbits/sec

With new bonding:
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.11 GBytes   955 Mbits/sec
[  3] 10.0-20.0 sec  1.11 GBytes   955 Mbits/sec
[  3] 20.0-30.0 sec  1.11 GBytes   955 Mbits/sec
[  3]  0.0-30.0 sec  3.33 GBytes   955 Mbits/sec

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/bonding/bond_main.c | 6 ++++++
 1 file changed, 6 insertions(+)

Comments

David Miller July 30, 2013, 7:34 a.m. UTC | #1
From: Yang Yingliang <yangyingliang@huawei.com>
Date: Tue, 30 Jul 2013 15:20:42 +0800

> With htb, sch->limit is set from bond_dev->tx_queue_len which
> is initialized to 0 in bond_setup.

This is intentional; software devices should not queue TX
packets.

You can only use the packet scheduler with hardware devices.
--
Yang Yingliang July 30, 2013, 9:24 a.m. UTC | #2
On 2013/7/30 15:34, David Miller wrote:
> From: Yang Yingliang <yangyingliang@huawei.com>
> Date: Tue, 30 Jul 2013 15:20:42 +0800
> 
>> With htb, sch->limit is set from bond_dev->tx_queue_len which
>> is initialized to 0 in bond_setup.
> 
> This is intentional, software devices should not queue TX
> packets.

This means a software device with an htb qdisc can hold only
one packet; the others are dropped.

Maybe sch->limit's default value of 1 is too small.
Instead of this patch, I'll send another patch that adjusts
that value to fix this issue.
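
(For context, the value 1 mentioned here is the fifo fallback sketched in
the commit message above. A follow-up along the lines described would
presumably raise that fallback; the sketch below is purely hypothetical,
with 1000 as an assumed conventional queue length, and is not the patch
that was actually sent:)

    /* Hypothetical fifo_init() fallback: use a conventional Ethernet-style
     * queue length instead of a single packet when tx_queue_len is 0.
     * The value 1000 is an illustrative assumption.
     */
    if (opt == NULL) {
        u32 limit = qdisc_dev(sch)->tx_queue_len ? : 1000;
        sch->limit = limit;
    }
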
> 
> You can only use the packet scheduler with hardware devices.
> 
> 

--

Patch

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index dbbea0e..2536426 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1860,6 +1860,12 @@  int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
 		goto err_dest_symlinks;
 	}
 
+	if (!bond_dev->tx_queue_len)
+		bond_dev->tx_queue_len = slave_dev->tx_queue_len;
+	else
+		bond_dev->tx_queue_len = min(slave_dev->tx_queue_len,
+				bond_dev->tx_queue_len);
+
 	pr_info("%s: enslaving %s as a%s interface with a%s link.\n",
 		bond_dev->name, slave_dev->name,
 		bond_is_active_slave(new_slave) ? "n active" : " backup",