
Setting large MTU size on slave interfaces may stall the whole system

Message ID ea15b7f9-4c02-3d4e-1c01-4e2a90d2a23c@oracle.com
State RFC, archived
Delegated to: David Miller
Series Setting large MTU size on slave interfaces may stall the whole system

Commit Message

Qing Huang Dec. 12, 2017, 3:21 a.m. UTC
(resending this email in text format)


Hi,

We found an issue with the bonding driver when testing Mellanox devices. The following test commands will sometimes stall the whole system, with the serial console flooded with log messages from the bond_miimon_inspect() function. Setting the MTU size to 1500 seems okay, but very rarely it may hit the same problem too.

ip address flush dev ens3f0
ip link set dev ens3f0 down
ip address flush dev ens3f1
ip link set dev ens3f1 down
[root@ca-hcl629 etc]# modprobe bonding mode=0 miimon=250 use_carrier=1 updelay=500 downdelay=500
[root@ca-hcl629 etc]# ifconfig bond0 up
[root@ca-hcl629 etc]# ifenslave bond0 ens3f0 ens3f1
[root@ca-hcl629 etc]# ip link set bond0 mtu 4500 up


Serial console output:

** 4 printk messages dropped ** [ 3717.743761] bond0: link status down for interface ens3f0, disabling it in 500 ms
** 5 printk messages dropped ** [ 3717.755737] bond0: link status down for interface ens3f0, disabling it in 500 ms
** 5 printk messages dropped ** [ 3717.767758] bond0: link status down for interface ens3f0, disabling it in 500 ms
** 4 printk messages dropped ** [ 3717.777737] bond0: link status down for interface ens3f0, disabling it in 500 ms

or

** 4 printk messages dropped ** [274743.297863] bond0: link status down again after 500 ms for interface enp48s0f1
** 4 printk messages dropped ** [274743.307866] bond0: link status down again after 500 ms for interface enp48s0f1
** 4 printk messages dropped ** [274743.317857] bond0: link status down again after 500 ms for interface enp48s0f1
** 4 printk messages dropped ** [274743.327823] bond0: link status down again after 500 ms for interface enp48s0f1
** 4 printk messages dropped ** [274743.337817] bond0: link status down again after 500 ms for interface enp48s0f1


The root cause is the combined effect of commit 1f2cd845d3827412e82bf26dde0abca332ede402 ("Revert "Merge branch 'bonding_monitor_locking'"") and commit de77ecd4ef02ca783f7762e04e92b3d0964be66b ("bonding: improve link-status update in mii-monitoring"); e.g. with the second commit reverted, we don't see the problem.

It seems that when a large MTU size is set on a RoCE interface, the RTNL mutex may be held too long by the slave interface, causing bond_mii_monitor() to be called repeatedly at an interval of one tick (with a 1000 HZ kernel configuration) and the kernel to become unresponsive.
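
For context, the relevant path looks roughly like this (a condensed sketch paraphrased from the bond_main.c of that era, trimmed and not line-for-line accurate): when bond_miimon_inspect() proposes a link-state change but rtnl_trylock() fails, the work item is re-armed with a one-jiffy delay, so the inspect-and-log cycle repeats on every tick until RTNL is finally taken and the proposal is committed:

static void bond_mii_monitor(struct work_struct *work)
{
	struct bonding *bond = container_of(work, struct bonding,
					    mii_work.work);
	bool should_notify_peers = false;
	unsigned long delay;

	delay = msecs_to_jiffies(bond->params.miimon);

	if (!bond_has_slaves(bond))
		goto re_arm;

	rcu_read_lock();
	should_notify_peers = bond_should_notify_peers(bond);

	if (bond_miimon_inspect(bond)) {	/* a change was proposed */
		rcu_read_unlock();

		/* Race avoidance with bond_close cancel of workqueue */
		if (!rtnl_trylock()) {
			delay = 1;	/* re-run on the very next jiffy */
			should_notify_peers = false;
			goto re_arm;
		}

		bond_miimon_commit(bond);	/* commit needs RTNL */
		rtnl_unlock();
	} else {
		rcu_read_unlock();
	}

re_arm:
	if (bond->params.miimon)
		queue_delayed_work(bond->wq, &bond->mii_work, delay);
	/* ... NETDEV_NOTIFY_PEERS under RTNL if should_notify_peers ... */
}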


We found two possible solutions.

#1, don't re-arm the mii monitor work so quickly when we cannot take the RTNL lock; see the first hunk in the Patch section below.

#2, use printk_ratelimit() to avoid flooding the log with messages generated by bond_miimon_inspect(); see the second hunk in the Patch section below. The BOND_LINK_BACK case needs the same treatment:

index b2db581..8fd587a 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -2105,7 +2105,8 @@ static int bond_miimon_inspect(struct bonding *bond)
 		case BOND_LINK_BACK:
 			if (!link_state) {
 				bond_propose_link_state(slave, BOND_LINK_DOWN);
-				netdev_info(bond->dev, "link status down again after %d ms for interface %s\n",
+				if (printk_ratelimit())
+					netdev_info(bond->dev, "link status down again after %d ms for interface %s\n",
 					    (bond->params.updelay - slave->delay) *
 					    bond->params.miimon,
 					    slave->dev->name);

Regarding the flooding messages: the netdev_info output is misleading anyway when bond_mii_monitor() runs at a 1-tick interval due to lock contention, since the "%d ms" values it prints are computed on the assumption that inspection happens once per miimon interval.


Solution #1 looks simpler and cleaner to me. Any side effect of doing that?


Thanks,
Qing

Comments

Or Gerlitz Dec. 13, 2017, 2:28 p.m. UTC | #1
On Tue, Dec 12, 2017 at 5:21 AM, Qing Huang <qing.huang@oracle.com> wrote:
> Hi,
>
> We found an issue with the bonding driver when testing Mellanox devices.
> The following test commands will sometimes stall the whole system, with the
> serial console flooded with log messages from the bond_miimon_inspect()
> function. Setting the MTU size to 1500 seems okay, but very rarely it may
> hit the same problem too.
>
> ip address flush dev ens3f0
> ip link set dev ens3f0 down
> ip address flush dev ens3f1
> ip link set dev ens3f1 down
> [root@ca-hcl629 etc]# modprobe bonding mode=0 miimon=250 use_carrier=1 updelay=500 downdelay=500
> [root@ca-hcl629 etc]# ifconfig bond0 up
> [root@ca-hcl629 etc]# ifenslave bond0 ens3f0 ens3f1
> [root@ca-hcl629 etc]# ip link set bond0 mtu 4500 up

> Serial console output:
>
> ** 4 printk messages dropped ** [ 3717.743761] bond0: link status down for interface ens3f0, disabling it in 500 ms
[..]


> It seems that when a large MTU size is set on a RoCE interface, the RTNL
> mutex may be held too long by the slave interface, causing bond_mii_monitor()
> to be called repeatedly at an interval of one tick (with a 1000 HZ kernel
> configuration) and the kernel to become unresponsive.

Did you try/manage to reproduce this with other NIC drivers as well?

Or.
Qing Huang Dec. 15, 2017, midnight UTC | #2
Hi Or,


On 12/13/2017 06:28 AM, Or Gerlitz wrote:
> On Tue, Dec 12, 2017 at 5:21 AM, Qing Huang <qing.huang@oracle.com> wrote:
>> Hi,
>>
>> We found an issue with the bonding driver when testing Mellanox devices.
>> The following test commands will sometimes stall the whole system, with the
>> serial console flooded with log messages from the bond_miimon_inspect()
>> function. Setting the MTU size to 1500 seems okay, but very rarely it may
>> hit the same problem too.
>>
>> ip address flush dev ens3f0
>> ip link set dev ens3f0 down
>> ip address flush dev ens3f1
>> ip link set dev ens3f1 down
>> [root@ca-hcl629 etc]# modprobe bonding mode=0 miimon=250 use_carrier=1 updelay=500 downdelay=500
>> [root@ca-hcl629 etc]# ifconfig bond0 up
>> [root@ca-hcl629 etc]# ifenslave bond0 ens3f0 ens3f1
>> [root@ca-hcl629 etc]# ip link set bond0 mtu 4500 up
>> Serial console output:
>>
>> ** 4 printk messages dropped ** [ 3717.743761] bond0: link status down for interface ens3f0, disabling it in 500 ms
> [..]
>
>
>> It seems that when a large MTU size is set on a RoCE interface, the RTNL
>> mutex may be held too long by the slave interface, causing bond_mii_monitor()
>> to be called repeatedly at an interval of one tick (with a 1000 HZ kernel
>> configuration) and the kernel to become unresponsive.
> Did you try/manage to reproduce this with other NIC drivers as well?
This seems to be an issue with the bonding driver. Running older kernels on the same hardware, without commit de77ecd4ef02, also seems to work fine. We can try to reproduce the issue with other types of NIC hardware.

>
> Or.

Patch

--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -2266,7 +2266,6 @@ static void bond_mii_monitor(struct work_struct *work)
 
 		/* Race avoidance with bond_close cancel of workqueue */
 		if (!rtnl_trylock()) {
-			delay = 1;
 			should_notify_peers = false;
 			goto re_arm;
 		}
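
On the possible side effect of #1 (my reading, not something settled in this thread): with delay = 1 gone, delay keeps the miimon-based value computed at the top of bond_mii_monitor(), so a failed rtnl_trylock() retries a full miimon period later (250 ms with the module parameters above); the cost is that a link-state change proposed by bond_miimon_inspect() can sit uncommitted for up to that extra period. A minimal sketch of the resulting failure path:

	/* Sketch: trylock-failure path with hunk #1 applied; delay was
	 * initialized earlier as msecs_to_jiffies(bond->params.miimon). */
	if (!rtnl_trylock()) {
		/* No "delay = 1" here: re-arm a whole miimon period out,
		 * leaving the proposed link state uncommitted until then. */
		should_notify_peers = false;
		goto re_arm;
	}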

#2, we use printk_ratelimit() to avoid flooding the log with messages generated by bond_miimon_inspect().

index b2db581..0183b7f 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -2054,7 +2054,7 @@ static int bond_miimon_inspect(struct bonding *bond)
 			bond_propose_link_state(slave, BOND_LINK_FAIL);
 			commit++;
 			slave->delay = bond->params.downdelay;
-			if (slave->delay) {
+			if (slave->delay && printk_ratelimit()) {
 				netdev_info(bond->dev, "link status down for %sinterface %s, disabling it in %d ms\n",
 					    (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP) ?
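
A possible variant of #2, purely as an aside and not something proposed in this thread: networking code often prefers net_ratelimit(), the printk_ratelimit() counterpart tuned through the net.core.message_cost and net.core.message_burst sysctls. The check would then read (a sketch, with the message arguments abbreviated to the interface name and delay only):

	/* Hypothetical variant of hunk #2 using the networking rate limiter;
	 * net_ratelimit() is declared in <linux/net.h>. */
	if (slave->delay && net_ratelimit()) {
		netdev_info(bond->dev,
			    "link status down for interface %s, disabling it in %d ms\n",
			    slave->dev->name,
			    bond->params.downdelay * bond->params.miimon);
	}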