
localed stuck in recent 3.18 git in copy_net_ns?

Message ID 20141024214927.GA4977@linux.vnet.ibm.com
State RFC, archived
Delegated to: David Miller

Commit Message

Paul E. McKenney Oct. 24, 2014, 9:49 p.m. UTC
On Sat, Oct 25, 2014 at 12:25:57AM +0300, Yanko Kaneti wrote:
> On Fri-10/24/14-2014 11:32, Paul E. McKenney wrote:
> > On Fri, Oct 24, 2014 at 08:35:26PM +0300, Yanko Kaneti wrote:
> > > On Fri-10/24/14-2014 10:20, Paul E. McKenney wrote:

[ . . . ]

> > > > Well, if you are feeling aggressive, give the following patch a spin.
> > > > I am doing sanity tests on it in the meantime.
> > > 
> > > Doesn't seem to make a difference here
> > 
> > OK, inspection isn't cutting it, so time for tracing.  Does the system
> > respond to user input?  If so, please enable rcu:rcu_barrier ftrace before
> > the problem occurs, then dump the trace buffer after the problem occurs.
> 
> Sorry for being unresponsive here, but I know next to nothing about tracing
> or most things about the kernel, so I have some catching up to do.
> 
> In the meantime some layman observations while I tried to find what exactly
> triggers the problem.
> - Even in runlevel 1 I can reliably trigger the problem by starting libvirtd
> - libvirtd seems to be very active in using all sorts of kernel facilities
>   that are modules on fedora so it seems to cause many simultaneous kworker 
>   calls to modprobe
> - there are 8 kworker/u16 from 0 to 7
> - one of these kworkers always deadlocks, while there appear to be two
>   kworker/u16:6 - the seventh

Adding Tejun on CC in case this duplication of kworker/u16:6 is important.

>   6 vs 8 as in 6 rcuos where before they were always 8
> 
> Just observations from someone who still doesn't know what the u16
> kworkers are..

Could you please run the following diagnostic patch?  This will help
me see if I have managed to miswire the rcuo kthreads.  It should
print some information at task-hang time.

							Thanx, Paul

------------------------------------------------------------------------

rcu: Dump no-CBs CPU state at task-hung time

Strictly diagnostic commit for rcu_barrier() hang.  Not for inclusion.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>



Comments

Jay Vosburgh Oct. 24, 2014, 10:02 p.m. UTC | #1
Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

>On Sat, Oct 25, 2014 at 12:25:57AM +0300, Yanko Kaneti wrote:
>> On Fri-10/24/14-2014 11:32, Paul E. McKenney wrote:
>> > On Fri, Oct 24, 2014 at 08:35:26PM +0300, Yanko Kaneti wrote:
>> > > On Fri-10/24/14-2014 10:20, Paul E. McKenney wrote:
>
>[ . . . ]
>
>> > > > Well, if you are feeling aggressive, give the following patch a spin.
>> > > > I am doing sanity tests on it in the meantime.
>> > > 
>> > > Doesn't seem to make a difference here
>> > 
>> > OK, inspection isn't cutting it, so time for tracing.  Does the system
>> > respond to user input?  If so, please enable rcu:rcu_barrier ftrace before
>> > the problem occurs, then dump the trace buffer after the problem occurs.
>> 
>> Sorry for being unresponsive here, but I know next to nothing about tracing
>> or most things about the kernel, so I have some catching up to do.
>> 
>> In the meantime some layman observations while I tried to find what exactly
>> triggers the problem.
>> - Even in runlevel 1 I can reliably trigger the problem by starting libvirtd
>> - libvirtd seems to be very active in using all sorts of kernel facilities
>>   that are modules on fedora so it seems to cause many simultaneous kworker 
>>   calls to modprobe
>> - there are 8 kworker/u16 from 0 to 7
>> - one of these kworkers always deadlocks, while there appear to be two
>>   kworker/u16:6 - the seventh
>
>Adding Tejun on CC in case this duplication of kworker/u16:6 is important.
>
>>   6 vs 8 as in 6 rcuos where before they were always 8
>> 
>> Just observations from someone who still doesn't know what the u16
>> kworkers are..
>
>Could you please run the following diagnostic patch?  This will help
>me see if I have managed to miswire the rcuo kthreads.  It should
>print some information at task-hang time.

	I can give this a spin after the ftrace (now that I've got
CONFIG_RCU_TRACE turned on).

	I've got an ftrace capture from unmodified -net, it looks like
this:

    ovs-vswitchd-902   [000] ....   471.778441: rcu_barrier: rcu_sched Begin cpu -1 remaining 0 # 0
    ovs-vswitchd-902   [000] ....   471.778452: rcu_barrier: rcu_sched Check cpu -1 remaining 0 # 0
    ovs-vswitchd-902   [000] ....   471.778452: rcu_barrier: rcu_sched Inc1 cpu -1 remaining 0 # 1
    ovs-vswitchd-902   [000] ....   471.778453: rcu_barrier: rcu_sched OnlineNoCB cpu 0 remaining 1 # 1
    ovs-vswitchd-902   [000] ....   471.778453: rcu_barrier: rcu_sched OnlineNoCB cpu 1 remaining 2 # 1
    ovs-vswitchd-902   [000] ....   471.778453: rcu_barrier: rcu_sched OnlineNoCB cpu 2 remaining 3 # 1
    ovs-vswitchd-902   [000] ....   471.778454: rcu_barrier: rcu_sched OnlineNoCB cpu 3 remaining 4 # 1
    ovs-vswitchd-902   [000] ....   471.778454: rcu_barrier: rcu_sched Inc2 cpu -1 remaining 4 # 2
         rcuos/0-9     [000] ..s.   471.793150: rcu_barrier: rcu_sched CB cpu -1 remaining 3 # 2
         rcuos/1-18    [001] ..s.   471.793308: rcu_barrier: rcu_sched CB cpu -1 remaining 2 # 2

	I let it sit through several "hung task" cycles but that was all
there was for rcu:rcu_barrier.

	I should have ftrace with the patch as soon as the kernel is
done building, then I can try the below patch (I'll start it building
now).

	-J




>							Thanx, Paul
>
>------------------------------------------------------------------------
>
>rcu: Dump no-CBs CPU state at task-hung time
>
>Strictly diagnostic commit for rcu_barrier() hang.  Not for inclusion.
>
>Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>
>diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
>index 0e5366200154..34048140577b 100644
>--- a/include/linux/rcutiny.h
>+++ b/include/linux/rcutiny.h
>@@ -157,4 +157,8 @@ static inline bool rcu_is_watching(void)
> 
> #endif /* #else defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) */
> 
>+static inline void rcu_show_nocb_setup(void)
>+{
>+}
>+
> #endif /* __LINUX_RCUTINY_H */
>diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
>index 52953790dcca..0b813bdb971b 100644
>--- a/include/linux/rcutree.h
>+++ b/include/linux/rcutree.h
>@@ -97,4 +97,6 @@ extern int rcu_scheduler_active __read_mostly;
> 
> bool rcu_is_watching(void);
> 
>+void rcu_show_nocb_setup(void);
>+
> #endif /* __LINUX_RCUTREE_H */
>diff --git a/kernel/hung_task.c b/kernel/hung_task.c
>index 06db12434d72..e6e4d0f6b063 100644
>--- a/kernel/hung_task.c
>+++ b/kernel/hung_task.c
>@@ -118,6 +118,7 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
> 		" disables this message.\n");
> 	sched_show_task(t);
> 	debug_show_held_locks(t);
>+	rcu_show_nocb_setup();
> 
> 	touch_nmi_watchdog();
> 
>diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
>index 240fa9094f83..6b373e79ce0e 100644
>--- a/kernel/rcu/rcutorture.c
>+++ b/kernel/rcu/rcutorture.c
>@@ -1513,6 +1513,7 @@ rcu_torture_cleanup(void)
> {
> 	int i;
> 
>+	rcu_show_nocb_setup();
> 	rcutorture_record_test_transition();
> 	if (torture_cleanup_begin()) {
> 		if (cur_ops->cb_barrier != NULL)
>diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
>index 927c17b081c7..285b3f6fb229 100644
>--- a/kernel/rcu/tree_plugin.h
>+++ b/kernel/rcu/tree_plugin.h
>@@ -2699,6 +2699,31 @@ static bool init_nocb_callback_list(struct rcu_data *rdp)
> 
> #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
> 
>+void rcu_show_nocb_setup(void)
>+{
>+#ifdef CONFIG_RCU_NOCB_CPU
>+	int cpu;
>+	struct rcu_data *rdp;
>+	struct rcu_state *rsp;
>+
>+	for_each_rcu_flavor(rsp) {
>+		pr_alert("rcu_show_nocb_setup(): %s nocb state:\n", rsp->name);
>+		for_each_possible_cpu(cpu) {
>+			if (!rcu_is_nocb_cpu(cpu))
>+				continue;
>+			rdp = per_cpu_ptr(rsp->rda, cpu);
>+			pr_alert("%3d: %p l:%p n:%p %c%c%c\n",
>+				 cpu,
>+				 rdp, rdp->nocb_leader, rdp->nocb_next_follower,
>+				 ".N"[!!rdp->nocb_head],
>+				 ".G"[!!rdp->nocb_gp_head],
>+				 ".F"[!!rdp->nocb_follower_head]);
>+		}
>+	}
>+#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
>+}
>+EXPORT_SYMBOL_GPL(rcu_show_nocb_setup);
>+
> /*
>  * An adaptive-ticks CPU can potentially execute in kernel mode for an
>  * arbitrarily long period of time with the scheduling-clock tick turned
>

---
	-Jay Vosburgh, jay.vosburgh@canonical.com
Paul E. McKenney Oct. 24, 2014, 10:16 p.m. UTC | #2
On Fri, Oct 24, 2014 at 03:02:04PM -0700, Jay Vosburgh wrote:
> Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> 
> >On Sat, Oct 25, 2014 at 12:25:57AM +0300, Yanko Kaneti wrote:
> >> On Fri-10/24/14-2014 11:32, Paul E. McKenney wrote:
> >> > On Fri, Oct 24, 2014 at 08:35:26PM +0300, Yanko Kaneti wrote:
> >> > > On Fri-10/24/14-2014 10:20, Paul E. McKenney wrote:
> >
> >[ . . . ]
> >
> >> > > > Well, if you are feeling aggressive, give the following patch a spin.
> >> > > > I am doing sanity tests on it in the meantime.
> >> > > 
> >> > > Doesn't seem to make a difference here
> >> > 
> >> > OK, inspection isn't cutting it, so time for tracing.  Does the system
> >> > respond to user input?  If so, please enable rcu:rcu_barrier ftrace before
> >> > the problem occurs, then dump the trace buffer after the problem occurs.
> >> 
> >> Sorry for being unresponsive here, but I know next to nothing about tracing
> >> or most things about the kernel, so I have some catching up to do.
> >> 
> >> In the meantime some layman observations while I tried to find what exactly
> >> triggers the problem.
> >> - Even in runlevel 1 I can reliably trigger the problem by starting libvirtd
> >> - libvirtd seems to be very active in using all sorts of kernel facilities
> >>   that are modules on fedora so it seems to cause many simultaneous kworker 
> >>   calls to modprobe
> >> - there are 8 kworker/u16 from 0 to 7
> >> - one of these kworkers always deadlocks, while there appear to be two
> >>   kworker/u16:6 - the seventh
> >
> >Adding Tejun on CC in case this duplication of kworker/u16:6 is important.
> >
> >>   6 vs 8 as in 6 rcuos where before they were always 8
> >> 
> >> Just observations from someone who still doesn't know what the u16
> >> kworkers are..
> >
> >Could you please run the following diagnostic patch?  This will help
> >me see if I have managed to miswire the rcuo kthreads.  It should
> >print some information at task-hang time.
> 
> 	I can give this a spin after the ftrace (now that I've got
> CONFIG_RCU_TRACE turned on).
> 
> 	I've got an ftrace capture from unmodified -net, it looks like
> this:
> 
>     ovs-vswitchd-902   [000] ....   471.778441: rcu_barrier: rcu_sched Begin cpu -1 remaining 0 # 0
>     ovs-vswitchd-902   [000] ....   471.778452: rcu_barrier: rcu_sched Check cpu -1 remaining 0 # 0
>     ovs-vswitchd-902   [000] ....   471.778452: rcu_barrier: rcu_sched Inc1 cpu -1 remaining 0 # 1
>     ovs-vswitchd-902   [000] ....   471.778453: rcu_barrier: rcu_sched OnlineNoCB cpu 0 remaining 1 # 1
>     ovs-vswitchd-902   [000] ....   471.778453: rcu_barrier: rcu_sched OnlineNoCB cpu 1 remaining 2 # 1
>     ovs-vswitchd-902   [000] ....   471.778453: rcu_barrier: rcu_sched OnlineNoCB cpu 2 remaining 3 # 1
>     ovs-vswitchd-902   [000] ....   471.778454: rcu_barrier: rcu_sched OnlineNoCB cpu 3 remaining 4 # 1

OK, so it looks like your system has four CPUs, and rcu_barrier() placed
callbacks on them all.

>     ovs-vswitchd-902   [000] ....   471.778454: rcu_barrier: rcu_sched Inc2 cpu -1 remaining 4 # 2

The above removes the extra count used to avoid races between posting new
callbacks and completion of previously posted callbacks.

>          rcuos/0-9     [000] ..s.   471.793150: rcu_barrier: rcu_sched CB cpu -1 remaining 3 # 2
>          rcuos/1-18    [001] ..s.   471.793308: rcu_barrier: rcu_sched CB cpu -1 remaining 2 # 2

Two of the four callbacks fired, but the other two appear to be AWOL.
And rcu_barrier() won't return until they all fire.
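
To make the counting above easier to follow, here is a much-simplified
sketch of the scheme the trace describes (kernel-style C, not the actual
_rcu_barrier() code; cpu_has_callbacks() and queue_barrier_callback_on()
are hypothetical helpers standing in for the real per-flavor plumbing):

#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/cpumask.h>

static bool cpu_has_callbacks(int cpu);			/* hypothetical */
static void queue_barrier_callback_on(int cpu);		/* hypothetical */

static atomic_t barrier_cpu_count;
static struct completion barrier_completion;

/* Invoked after a CPU's previously queued callbacks: the "CB" trace lines. */
static void barrier_callback_done(void)
{
	if (atomic_dec_and_test(&barrier_cpu_count))
		complete(&barrier_completion);
}

static void barrier_sketch(void)
{
	int cpu;

	init_completion(&barrier_completion);
	/* Extra count so early-firing callbacks cannot complete us prematurely. */
	atomic_set(&barrier_cpu_count, 1);

	/* The "OnlineNoCB" lines: one barrier callback per CPU with callbacks. */
	for_each_possible_cpu(cpu) {
		if (!cpu_has_callbacks(cpu))
			continue;
		atomic_inc(&barrier_cpu_count);
		queue_barrier_callback_on(cpu);
	}

	/* The "Inc2" line: drop the extra count taken above. */
	if (atomic_dec_and_test(&barrier_cpu_count))
		complete(&barrier_completion);

	/* Blocks until "remaining" hits zero; never returns if a callback is AWOL. */
	wait_for_completion(&barrier_completion);
}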

> 	I let it sit through several "hung task" cycles but that was all
> there was for rcu:rcu_barrier.
> 
> 	I should have ftrace with the patch as soon as the kernel is
> done building, then I can try the below patch (I'll start it building
> now).

Sounds very good, looking forward to hearing of the results.

							Thanx, Paul

Jay Vosburgh Oct. 24, 2014, 10:34 p.m. UTC | #3
Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

>On Sat, Oct 25, 2014 at 12:25:57AM +0300, Yanko Kaneti wrote:
>> On Fri-10/24/14-2014 11:32, Paul E. McKenney wrote:
>> > On Fri, Oct 24, 2014 at 08:35:26PM +0300, Yanko Kaneti wrote:
>> > > On Fri-10/24/14-2014 10:20, Paul E. McKenney wrote:
>
>[ . . . ]
>
>> > > > Well, if you are feeling aggressive, give the following patch a spin.
>> > > > I am doing sanity tests on it in the meantime.
>> > > 
>> > > Doesn't seem to make a difference here
>> > 
>> > OK, inspection isn't cutting it, so time for tracing.  Does the system
>> > respond to user input?  If so, please enable rcu:rcu_barrier ftrace before
>> > the problem occurs, then dump the trace buffer after the problem occurs.
>> 
>> Sorry for being unresponsive here, but I know next to nothing about tracing
>> or most things about the kernel, so I have some catching up to do.
>> 
>> In the meantime some layman observations while I tried to find what exactly
>> triggers the problem.
>> - Even in runlevel 1 I can reliably trigger the problem by starting libvirtd
>> - libvirtd seems to be very active in using all sorts of kernel facilities
>>   that are modules on fedora so it seems to cause many simultaneous kworker 
>>   calls to modprobe
>> - there are 8 kworker/u16 from 0 to 7
>> - one of these kworkers always deadlocks, while there appear to be two
>>   kworker/u16:6 - the seventh
>
>Adding Tejun on CC in case this duplication of kworker/u16:6 is important.
>
>>   6 vs 8 as in 6 rcuos where before they were always 8
>> 
>> Just observations from someone who still doesn't know what the u16
>> kworkers are..
>
>Could you please run the following diagnostic patch?  This will help
>me see if I have managed to miswire the rcuo kthreads.  It should
>print some information at task-hang time.

	Here's the output of the patch; I let it sit through two hang
cycles.

	-J


[  240.348020] INFO: task ovs-vswitchd:902 blocked for more than 120 seconds.
[  240.354878]       Not tainted 3.17.0-testola+ #4
[  240.359481] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  240.367285] ovs-vswitchd    D ffff88013fc94600     0   902    901 0x00000004
[  240.367290]  ffff8800ab20f7b8 0000000000000002 ffff8800b3304b00 ffff8800ab20ffd8
[  240.367293]  0000000000014600 0000000000014600 ffff8800b0810000 ffff8800b3304b00
[  240.367296]  ffff8800b3304b00 ffffffff81c59850 ffffffff81c59858 7fffffffffffffff
[  240.367300] Call Trace:
[  240.367307]  [<ffffffff81722b99>] schedule+0x29/0x70
[  240.367310]  [<ffffffff81725b6c>] schedule_timeout+0x1dc/0x260
[  240.367313]  [<ffffffff81722f69>] ? _cond_resched+0x29/0x40
[  240.367316]  [<ffffffff81723818>] ? wait_for_completion+0x28/0x160
[  240.367321]  [<ffffffff811081a7>] ? queue_stop_cpus_work+0xc7/0xe0
[  240.367324]  [<ffffffff81723896>] wait_for_completion+0xa6/0x160
[  240.367328]  [<ffffffff81099980>] ? wake_up_state+0x20/0x20
[  240.367331]  [<ffffffff810d0ecc>] _rcu_barrier+0x20c/0x480
[  240.367334]  [<ffffffff810d1195>] rcu_barrier+0x15/0x20
[  240.367338]  [<ffffffff81625010>] netdev_run_todo+0x60/0x300
[  240.367341]  [<ffffffff8162f9ee>] rtnl_unlock+0xe/0x10
[  240.367349]  [<ffffffffa01ffcc5>] internal_dev_destroy+0x55/0x80 [openvswitch]
[  240.367354]  [<ffffffffa01ff622>] ovs_vport_del+0x32/0x40 [openvswitch]
[  240.367358]  [<ffffffffa01f8dd0>] ovs_dp_detach_port+0x30/0x40 [openvswitch]
[  240.367363]  [<ffffffffa01f8ea5>] ovs_vport_cmd_del+0xc5/0x110 [openvswitch]
[  240.367367]  [<ffffffff81651d75>] genl_family_rcv_msg+0x1a5/0x3c0
[  240.367370]  [<ffffffff81651f90>] ? genl_family_rcv_msg+0x3c0/0x3c0
[  240.367372]  [<ffffffff81652021>] genl_rcv_msg+0x91/0xd0
[  240.367376]  [<ffffffff81650091>] netlink_rcv_skb+0xc1/0xe0
[  240.367378]  [<ffffffff816505bc>] genl_rcv+0x2c/0x40
[  240.367381]  [<ffffffff8164f626>] netlink_unicast+0xf6/0x200
[  240.367383]  [<ffffffff8164fa4d>] netlink_sendmsg+0x31d/0x780
[  240.367387]  [<ffffffff8164ca74>] ? netlink_rcv_wake+0x44/0x60
[  240.367391]  [<ffffffff81606a53>] sock_sendmsg+0x93/0xd0
[  240.367395]  [<ffffffff81337700>] ? apparmor_capable+0x60/0x60
[  240.367399]  [<ffffffff81614f27>] ? verify_iovec+0x47/0xd0
[  240.367402]  [<ffffffff81606e79>] ___sys_sendmsg+0x399/0x3b0
[  240.367406]  [<ffffffff812598a2>] ? kernfs_seq_stop_active+0x32/0x40
[  240.367410]  [<ffffffff8101c385>] ? native_sched_clock+0x35/0x90
[  240.367413]  [<ffffffff8101c385>] ? native_sched_clock+0x35/0x90
[  240.367416]  [<ffffffff8101c3e9>] ? sched_clock+0x9/0x10
[  240.367420]  [<ffffffff811277fc>] ? acct_account_cputime+0x1c/0x20
[  240.367424]  [<ffffffff8109ce6b>] ? account_user_time+0x8b/0xa0
[  240.367428]  [<ffffffff81200bd5>] ? __fget_light+0x25/0x70
[  240.367431]  [<ffffffff81607c02>] __sys_sendmsg+0x42/0x80
[  240.367433]  [<ffffffff81607c52>] SyS_sendmsg+0x12/0x20
[  240.367436]  [<ffffffff81727464>] tracesys_phase2+0xd8/0xdd
[  240.367439] rcu_show_nocb_setup(): rcu_sched nocb state:
[  240.372734]   0: ffff88013fc0e600 l:ffff88013fc0e600 n:ffff88013fc8e600 .G.
[  240.379673]   1: ffff88013fc8e600 l:ffff88013fc0e600 n:          (null) .G.
[  240.386611]   2: ffff88013fd0e600 l:ffff88013fd0e600 n:ffff88013fd8e600 N..
[  240.393550]   3: ffff88013fd8e600 l:ffff88013fd0e600 n:          (null) N..
[  240.400489] rcu_show_nocb_setup(): rcu_bh nocb state:
[  240.405525]   0: ffff88013fc0e3c0 l:ffff88013fc0e3c0 n:ffff88013fc8e3c0 ...
[  240.412463]   1: ffff88013fc8e3c0 l:ffff88013fc0e3c0 n:          (null) ...
[  240.419401]   2: ffff88013fd0e3c0 l:ffff88013fd0e3c0 n:ffff88013fd8e3c0 ...
[  240.426339]   3: ffff88013fd8e3c0 l:ffff88013fd0e3c0 n:          (null) ...
[  360.432020] INFO: task ovs-vswitchd:902 blocked for more than 120 seconds.
[  360.438881]       Not tainted 3.17.0-testola+ #4
[  360.443484] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  360.451289] ovs-vswitchd    D ffff88013fc94600     0   902    901 0x00000004
[  360.451293]  ffff8800ab20f7b8 0000000000000002 ffff8800b3304b00 ffff8800ab20ffd8
[  360.451297]  0000000000014600 0000000000014600 ffff8800b0810000 ffff8800b3304b00
[  360.451300]  ffff8800b3304b00 ffffffff81c59850 ffffffff81c59858 7fffffffffffffff
[  360.451303] Call Trace:
[  360.451311]  [<ffffffff81722b99>] schedule+0x29/0x70
[  360.451314]  [<ffffffff81725b6c>] schedule_timeout+0x1dc/0x260
[  360.451317]  [<ffffffff81722f69>] ? _cond_resched+0x29/0x40
[  360.451320]  [<ffffffff81723818>] ? wait_for_completion+0x28/0x160
[  360.451325]  [<ffffffff811081a7>] ? queue_stop_cpus_work+0xc7/0xe0
[  360.451327]  [<ffffffff81723896>] wait_for_completion+0xa6/0x160
[  360.451331]  [<ffffffff81099980>] ? wake_up_state+0x20/0x20
[  360.451335]  [<ffffffff810d0ecc>] _rcu_barrier+0x20c/0x480
[  360.451338]  [<ffffffff810d1195>] rcu_barrier+0x15/0x20
[  360.451342]  [<ffffffff81625010>] netdev_run_todo+0x60/0x300
[  360.451345]  [<ffffffff8162f9ee>] rtnl_unlock+0xe/0x10
[  360.451353]  [<ffffffffa01ffcc5>] internal_dev_destroy+0x55/0x80 [openvswitch]
[  360.451358]  [<ffffffffa01ff622>] ovs_vport_del+0x32/0x40 [openvswitch]
[  360.451362]  [<ffffffffa01f8dd0>] ovs_dp_detach_port+0x30/0x40 [openvswitch]
[  360.451366]  [<ffffffffa01f8ea5>] ovs_vport_cmd_del+0xc5/0x110 [openvswitch]
[  360.451370]  [<ffffffff81651d75>] genl_family_rcv_msg+0x1a5/0x3c0
[  360.451373]  [<ffffffff81651f90>] ? genl_family_rcv_msg+0x3c0/0x3c0
[  360.451376]  [<ffffffff81652021>] genl_rcv_msg+0x91/0xd0
[  360.451379]  [<ffffffff81650091>] netlink_rcv_skb+0xc1/0xe0
[  360.451381]  [<ffffffff816505bc>] genl_rcv+0x2c/0x40
[  360.451384]  [<ffffffff8164f626>] netlink_unicast+0xf6/0x200
[  360.451387]  [<ffffffff8164fa4d>] netlink_sendmsg+0x31d/0x780
[  360.451390]  [<ffffffff8164ca74>] ? netlink_rcv_wake+0x44/0x60
[  360.451394]  [<ffffffff81606a53>] sock_sendmsg+0x93/0xd0
[  360.451399]  [<ffffffff81337700>] ? apparmor_capable+0x60/0x60
[  360.451402]  [<ffffffff81614f27>] ? verify_iovec+0x47/0xd0
[  360.451406]  [<ffffffff81606e79>] ___sys_sendmsg+0x399/0x3b0
[  360.451410]  [<ffffffff812598a2>] ? kernfs_seq_stop_active+0x32/0x40
[  360.451414]  [<ffffffff8101c385>] ? native_sched_clock+0x35/0x90
[  360.451417]  [<ffffffff8101c385>] ? native_sched_clock+0x35/0x90
[  360.451419]  [<ffffffff8101c3e9>] ? sched_clock+0x9/0x10
[  360.451424]  [<ffffffff811277fc>] ? acct_account_cputime+0x1c/0x20
[  360.451427]  [<ffffffff8109ce6b>] ? account_user_time+0x8b/0xa0
[  360.451431]  [<ffffffff81200bd5>] ? __fget_light+0x25/0x70
[  360.451434]  [<ffffffff81607c02>] __sys_sendmsg+0x42/0x80
[  360.451437]  [<ffffffff81607c52>] SyS_sendmsg+0x12/0x20
[  360.451440]  [<ffffffff81727464>] tracesys_phase2+0xd8/0xdd
[  360.451442] rcu_show_nocb_setup(): rcu_sched nocb state:
[  360.456737]   0: ffff88013fc0e600 l:ffff88013fc0e600 n:ffff88013fc8e600 ...
[  360.463676]   1: ffff88013fc8e600 l:ffff88013fc0e600 n:          (null) ...
[  360.470614]   2: ffff88013fd0e600 l:ffff88013fd0e600 n:ffff88013fd8e600 N..
[  360.477554]   3: ffff88013fd8e600 l:ffff88013fd0e600 n:          (null) N..
[  360.484494] rcu_show_nocb_setup(): rcu_bh nocb state:
[  360.489529]   0: ffff88013fc0e3c0 l:ffff88013fc0e3c0 n:ffff88013fc8e3c0 ...
[  360.496469]   1: ffff88013fc8e3c0 l:ffff88013fc0e3c0 n:          (null) .G.
[  360.503407]   2: ffff88013fd0e3c0 l:ffff88013fd0e3c0 n:ffff88013fd8e3c0 ...
[  360.510346]   3: ffff88013fd8e3c0 l:ffff88013fd0e3c0 n:          (null) ...

---
	-Jay Vosburgh, jay.vosburgh@canonical.com
Jay Vosburgh Oct. 24, 2014, 10:41 p.m. UTC | #4
Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

>On Fri, Oct 24, 2014 at 03:02:04PM -0700, Jay Vosburgh wrote:
>> Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
>> 
[...]
>> 	I've got an ftrace capture from unmodified -net, it looks like
>> this:
>> 
>>     ovs-vswitchd-902   [000] ....   471.778441: rcu_barrier: rcu_sched Begin cpu -1 remaining 0 # 0
>>     ovs-vswitchd-902   [000] ....   471.778452: rcu_barrier: rcu_sched Check cpu -1 remaining 0 # 0
>>     ovs-vswitchd-902   [000] ....   471.778452: rcu_barrier: rcu_sched Inc1 cpu -1 remaining 0 # 1
>>     ovs-vswitchd-902   [000] ....   471.778453: rcu_barrier: rcu_sched OnlineNoCB cpu 0 remaining 1 # 1
>>     ovs-vswitchd-902   [000] ....   471.778453: rcu_barrier: rcu_sched OnlineNoCB cpu 1 remaining 2 # 1
>>     ovs-vswitchd-902   [000] ....   471.778453: rcu_barrier: rcu_sched OnlineNoCB cpu 2 remaining 3 # 1
>>     ovs-vswitchd-902   [000] ....   471.778454: rcu_barrier: rcu_sched OnlineNoCB cpu 3 remaining 4 # 1
>
>OK, so it looks like your system has four CPUs, and rcu_barrier() placed
>callbacks on them all.

	No, the system has only two CPUs.  It's an Intel Core 2 Duo
E8400, and /proc/cpuinfo agrees that there are only 2.  There is a
potentially relevant-sounding message early in dmesg that says:

[    0.000000] smpboot: Allowing 4 CPUs, 2 hotplug CPUs
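
(That smpboot line means the possible-CPU mask was sized for four CPUs even
though only two are present, so anything that iterates with
for_each_possible_cpu() visits two CPUs that never come online.  A minimal
illustrative snippet, using only the standard cpumask helpers and not taken
from any patch in this thread:)

#include <linux/cpumask.h>
#include <linux/printk.h>

static void show_cpu_masks(void)
{
	int cpu;

	/* On this box: possible=4, online=2, matching the smpboot line above. */
	pr_info("possible=%u online=%u\n",
		num_possible_cpus(), num_online_cpus());

	for_each_possible_cpu(cpu)
		pr_info("cpu%d: %s\n", cpu,
			cpu_online(cpu) ? "online" : "possible but offline");
}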

>>     ovs-vswitchd-902   [000] ....   471.778454: rcu_barrier: rcu_sched Inc2 cpu -1 remaining 4 # 2
>
>The above removes the extra count used to avoid races between posting new
>callbacks and completion of previously posted callbacks.
>
>>          rcuos/0-9     [000] ..s.   471.793150: rcu_barrier: rcu_sched CB cpu -1 remaining 3 # 2
>>          rcuos/1-18    [001] ..s.   471.793308: rcu_barrier: rcu_sched CB cpu -1 remaining 2 # 2
>
>Two of the four callbacks fired, but the other two appear to be AWOL.
>And rcu_barrier() won't return until they all fire.
>
>> 	I let it sit through several "hung task" cycles but that was all
>> there was for rcu:rcu_barrier.
>> 
>> 	I should have ftrace with the patch as soon as the kernel is
>> done building, then I can try the below patch (I'll start it building
>> now).
>
>Sounds very good, looking forward to hearing of the results.

	Going to bounce it for ftrace now, but the cpu count mismatch
seemed important enough to mention separately.

	-J

---
	-Jay Vosburgh, jay.vosburgh@canonical.com
Paul E. McKenney Oct. 24, 2014, 10:59 p.m. UTC | #5
On Fri, Oct 24, 2014 at 03:34:07PM -0700, Jay Vosburgh wrote:
> Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
> 
> >On Sat, Oct 25, 2014 at 12:25:57AM +0300, Yanko Kaneti wrote:
> >> On Fri-10/24/14-2014 11:32, Paul E. McKenney wrote:
> >> > On Fri, Oct 24, 2014 at 08:35:26PM +0300, Yanko Kaneti wrote:
> >> > > On Fri-10/24/14-2014 10:20, Paul E. McKenney wrote:
> >
> >[ . . . ]
> >
> >> > > > Well, if you are feeling aggressive, give the following patch a spin.
> >> > > > I am doing sanity tests on it in the meantime.
> >> > > 
> >> > > Doesn't seem to make a difference here
> >> > 
> >> > OK, inspection isn't cutting it, so time for tracing.  Does the system
> >> > respond to user input?  If so, please enable rcu:rcu_barrier ftrace before
> >> > the problem occurs, then dump the trace buffer after the problem occurs.
> >> 
> >> Sorry for being unresponsive here, but I know next to nothing about tracing
> >> or most things about the kernel, so I have some catching up to do.
> >> 
> >> In the meantime some layman observations while I tried to find what exactly
> >> triggers the problem.
> >> - Even in runlevel 1 I can reliably trigger the problem by starting libvirtd
> >> - libvirtd seems to be very active in using all sorts of kernel facilities
> >>   that are modules on fedora so it seems to cause many simultaneous kworker 
> >>   calls to modprobe
> >> - there are 8 kworker/u16 from 0 to 7
> >> - one of these kworkers always deadlocks, while there appear to be two
> >>   kworker/u16:6 - the seventh
> >
> >Adding Tejun on CC in case this duplication of kworker/u16:6 is important.
> >
> >>   6 vs 8 as in 6 rcuos where before they were always 8
> >> 
> >> Just observations from someone who still doesn't know what the u16
> >> kworkers are..
> >
> >Could you please run the following diagnostic patch?  This will help
> >me see if I have managed to miswire the rcuo kthreads.  It should
> >print some information at task-hang time.
> 
> 	Here's the output of the patch; I let it sit through two hang
> cycles.
> 
> 	-J
> 
> 
> [  240.348020] INFO: task ovs-vswitchd:902 blocked for more than 120 seconds.
> [  240.354878]       Not tainted 3.17.0-testola+ #4
> [  240.359481] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  240.367285] ovs-vswitchd    D ffff88013fc94600     0   902    901 0x00000004
> [  240.367290]  ffff8800ab20f7b8 0000000000000002 ffff8800b3304b00 ffff8800ab20ffd8
> [  240.367293]  0000000000014600 0000000000014600 ffff8800b0810000 ffff8800b3304b00
> [  240.367296]  ffff8800b3304b00 ffffffff81c59850 ffffffff81c59858 7fffffffffffffff
> [  240.367300] Call Trace:
> [  240.367307]  [<ffffffff81722b99>] schedule+0x29/0x70
> [  240.367310]  [<ffffffff81725b6c>] schedule_timeout+0x1dc/0x260
> [  240.367313]  [<ffffffff81722f69>] ? _cond_resched+0x29/0x40
> [  240.367316]  [<ffffffff81723818>] ? wait_for_completion+0x28/0x160
> [  240.367321]  [<ffffffff811081a7>] ? queue_stop_cpus_work+0xc7/0xe0
> [  240.367324]  [<ffffffff81723896>] wait_for_completion+0xa6/0x160
> [  240.367328]  [<ffffffff81099980>] ? wake_up_state+0x20/0x20
> [  240.367331]  [<ffffffff810d0ecc>] _rcu_barrier+0x20c/0x480
> [  240.367334]  [<ffffffff810d1195>] rcu_barrier+0x15/0x20
> [  240.367338]  [<ffffffff81625010>] netdev_run_todo+0x60/0x300
> [  240.367341]  [<ffffffff8162f9ee>] rtnl_unlock+0xe/0x10
> [  240.367349]  [<ffffffffa01ffcc5>] internal_dev_destroy+0x55/0x80 [openvswitch]
> [  240.367354]  [<ffffffffa01ff622>] ovs_vport_del+0x32/0x40 [openvswitch]
> [  240.367358]  [<ffffffffa01f8dd0>] ovs_dp_detach_port+0x30/0x40 [openvswitch]
> [  240.367363]  [<ffffffffa01f8ea5>] ovs_vport_cmd_del+0xc5/0x110 [openvswitch]
> [  240.367367]  [<ffffffff81651d75>] genl_family_rcv_msg+0x1a5/0x3c0
> [  240.367370]  [<ffffffff81651f90>] ? genl_family_rcv_msg+0x3c0/0x3c0
> [  240.367372]  [<ffffffff81652021>] genl_rcv_msg+0x91/0xd0
> [  240.367376]  [<ffffffff81650091>] netlink_rcv_skb+0xc1/0xe0
> [  240.367378]  [<ffffffff816505bc>] genl_rcv+0x2c/0x40
> [  240.367381]  [<ffffffff8164f626>] netlink_unicast+0xf6/0x200
> [  240.367383]  [<ffffffff8164fa4d>] netlink_sendmsg+0x31d/0x780
> [  240.367387]  [<ffffffff8164ca74>] ? netlink_rcv_wake+0x44/0x60
> [  240.367391]  [<ffffffff81606a53>] sock_sendmsg+0x93/0xd0
> [  240.367395]  [<ffffffff81337700>] ? apparmor_capable+0x60/0x60
> [  240.367399]  [<ffffffff81614f27>] ? verify_iovec+0x47/0xd0
> [  240.367402]  [<ffffffff81606e79>] ___sys_sendmsg+0x399/0x3b0
> [  240.367406]  [<ffffffff812598a2>] ? kernfs_seq_stop_active+0x32/0x40
> [  240.367410]  [<ffffffff8101c385>] ? native_sched_clock+0x35/0x90
> [  240.367413]  [<ffffffff8101c385>] ? native_sched_clock+0x35/0x90
> [  240.367416]  [<ffffffff8101c3e9>] ? sched_clock+0x9/0x10
> [  240.367420]  [<ffffffff811277fc>] ? acct_account_cputime+0x1c/0x20
> [  240.367424]  [<ffffffff8109ce6b>] ? account_user_time+0x8b/0xa0
> [  240.367428]  [<ffffffff81200bd5>] ? __fget_light+0x25/0x70
> [  240.367431]  [<ffffffff81607c02>] __sys_sendmsg+0x42/0x80
> [  240.367433]  [<ffffffff81607c52>] SyS_sendmsg+0x12/0x20
> [  240.367436]  [<ffffffff81727464>] tracesys_phase2+0xd8/0xdd
> [  240.367439] rcu_show_nocb_setup(): rcu_sched nocb state:
> [  240.372734]   0: ffff88013fc0e600 l:ffff88013fc0e600 n:ffff88013fc8e600 .G.
> [  240.379673]   1: ffff88013fc8e600 l:ffff88013fc0e600 n:          (null) .G.
> [  240.386611]   2: ffff88013fd0e600 l:ffff88013fd0e600 n:ffff88013fd8e600 N..
> [  240.393550]   3: ffff88013fd8e600 l:ffff88013fd0e600 n:          (null) N..
> [  240.400489] rcu_show_nocb_setup(): rcu_bh nocb state:
> [  240.405525]   0: ffff88013fc0e3c0 l:ffff88013fc0e3c0 n:ffff88013fc8e3c0 ...
> [  240.412463]   1: ffff88013fc8e3c0 l:ffff88013fc0e3c0 n:          (null) ...
> [  240.419401]   2: ffff88013fd0e3c0 l:ffff88013fd0e3c0 n:ffff88013fd8e3c0 ...
> [  240.426339]   3: ffff88013fd8e3c0 l:ffff88013fd0e3c0 n:          (null) ...
> [  360.432020] INFO: task ovs-vswitchd:902 blocked for more than 120 seconds.
> [  360.438881]       Not tainted 3.17.0-testola+ #4
> [  360.443484] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  360.451289] ovs-vswitchd    D ffff88013fc94600     0   902    901 0x00000004
> [  360.451293]  ffff8800ab20f7b8 0000000000000002 ffff8800b3304b00 ffff8800ab20ffd8
> [  360.451297]  0000000000014600 0000000000014600 ffff8800b0810000 ffff8800b3304b00
> [  360.451300]  ffff8800b3304b00 ffffffff81c59850 ffffffff81c59858 7fffffffffffffff
> [  360.451303] Call Trace:
> [  360.451311]  [<ffffffff81722b99>] schedule+0x29/0x70
> [  360.451314]  [<ffffffff81725b6c>] schedule_timeout+0x1dc/0x260
> [  360.451317]  [<ffffffff81722f69>] ? _cond_resched+0x29/0x40
> [  360.451320]  [<ffffffff81723818>] ? wait_for_completion+0x28/0x160
> [  360.451325]  [<ffffffff811081a7>] ? queue_stop_cpus_work+0xc7/0xe0
> [  360.451327]  [<ffffffff81723896>] wait_for_completion+0xa6/0x160
> [  360.451331]  [<ffffffff81099980>] ? wake_up_state+0x20/0x20
> [  360.451335]  [<ffffffff810d0ecc>] _rcu_barrier+0x20c/0x480
> [  360.451338]  [<ffffffff810d1195>] rcu_barrier+0x15/0x20
> [  360.451342]  [<ffffffff81625010>] netdev_run_todo+0x60/0x300
> [  360.451345]  [<ffffffff8162f9ee>] rtnl_unlock+0xe/0x10
> [  360.451353]  [<ffffffffa01ffcc5>] internal_dev_destroy+0x55/0x80 [openvswitch]
> [  360.451358]  [<ffffffffa01ff622>] ovs_vport_del+0x32/0x40 [openvswitch]
> [  360.451362]  [<ffffffffa01f8dd0>] ovs_dp_detach_port+0x30/0x40 [openvswitch]
> [  360.451366]  [<ffffffffa01f8ea5>] ovs_vport_cmd_del+0xc5/0x110 [openvswitch]
> [  360.451370]  [<ffffffff81651d75>] genl_family_rcv_msg+0x1a5/0x3c0
> [  360.451373]  [<ffffffff81651f90>] ? genl_family_rcv_msg+0x3c0/0x3c0
> [  360.451376]  [<ffffffff81652021>] genl_rcv_msg+0x91/0xd0
> [  360.451379]  [<ffffffff81650091>] netlink_rcv_skb+0xc1/0xe0
> [  360.451381]  [<ffffffff816505bc>] genl_rcv+0x2c/0x40
> [  360.451384]  [<ffffffff8164f626>] netlink_unicast+0xf6/0x200
> [  360.451387]  [<ffffffff8164fa4d>] netlink_sendmsg+0x31d/0x780
> [  360.451390]  [<ffffffff8164ca74>] ? netlink_rcv_wake+0x44/0x60
> [  360.451394]  [<ffffffff81606a53>] sock_sendmsg+0x93/0xd0
> [  360.451399]  [<ffffffff81337700>] ? apparmor_capable+0x60/0x60
> [  360.451402]  [<ffffffff81614f27>] ? verify_iovec+0x47/0xd0
> [  360.451406]  [<ffffffff81606e79>] ___sys_sendmsg+0x399/0x3b0
> [  360.451410]  [<ffffffff812598a2>] ? kernfs_seq_stop_active+0x32/0x40
> [  360.451414]  [<ffffffff8101c385>] ? native_sched_clock+0x35/0x90
> [  360.451417]  [<ffffffff8101c385>] ? native_sched_clock+0x35/0x90
> [  360.451419]  [<ffffffff8101c3e9>] ? sched_clock+0x9/0x10
> [  360.451424]  [<ffffffff811277fc>] ? acct_account_cputime+0x1c/0x20
> [  360.451427]  [<ffffffff8109ce6b>] ? account_user_time+0x8b/0xa0
> [  360.451431]  [<ffffffff81200bd5>] ? __fget_light+0x25/0x70
> [  360.451434]  [<ffffffff81607c02>] __sys_sendmsg+0x42/0x80
> [  360.451437]  [<ffffffff81607c52>] SyS_sendmsg+0x12/0x20
> [  360.451440]  [<ffffffff81727464>] tracesys_phase2+0xd8/0xdd
> [  360.451442] rcu_show_nocb_setup(): rcu_sched nocb state:
> [  360.456737]   0: ffff88013fc0e600 l:ffff88013fc0e600 n:ffff88013fc8e600 ...
> [  360.463676]   1: ffff88013fc8e600 l:ffff88013fc0e600 n:          (null) ...
> [  360.470614]   2: ffff88013fd0e600 l:ffff88013fd0e600 n:ffff88013fd8e600 N..
> [  360.477554]   3: ffff88013fd8e600 l:ffff88013fd0e600 n:          (null) N..

Hmmm...  It sure looks like we have some callbacks stuck here.  I clearly
need to take a hard look at the sleep/wakeup code.

Thank you for running this!!!

							Thanx, Paul

> [  360.484494] rcu_show_nocb_setup(): rcu_bh nocb state:
> [  360.489529]   0: ffff88013fc0e3c0 l:ffff88013fc0e3c0 n:ffff88013fc8e3c0 ...
> [  360.496469]   1: ffff88013fc8e3c0 l:ffff88013fc0e3c0 n:          (null) .G.
> [  360.503407]   2: ffff88013fd0e3c0 l:ffff88013fd0e3c0 n:ffff88013fd8e3c0 ...
> [  360.510346]   3: ffff88013fd8e3c0 l:ffff88013fd0e3c0 n:          (null) ...
> 
> ---
> 	-Jay Vosburgh, jay.vosburgh@canonical.com

Yanko Kaneti Oct. 25, 2014, 12:09 p.m. UTC | #6
On Fri-10/24/14-2014 14:49, Paul E. McKenney wrote:
> On Sat, Oct 25, 2014 at 12:25:57AM +0300, Yanko Kaneti wrote:
> > On Fri-10/24/14-2014 11:32, Paul E. McKenney wrote:
> > > On Fri, Oct 24, 2014 at 08:35:26PM +0300, Yanko Kaneti wrote:
> > > > On Fri-10/24/14-2014 10:20, Paul E. McKenney wrote:
> 
> [ . . . ]
> 
> > > > > Well, if you are feeling aggressive, give the following patch a spin.
> > > > > I am doing sanity tests on it in the meantime.
> > > > 
> > > > Doesn't seem to make a difference here
> > > 
> > > OK, inspection isn't cutting it, so time for tracing.  Does the system
> > > respond to user input?  If so, please enable rcu:rcu_barrier ftrace before
> > > the problem occurs, then dump the trace buffer after the problem occurs.
> > 
> > Sorry for being unresponsive here, but I know next to nothing about tracing
> > or most things about the kernel, so I have some catching up to do.
> > 
> > In the meantime some layman observations while I tried to find what exactly
> > triggers the problem.
> > - Even in runlevel 1 I can reliably trigger the problem by starting libvirtd
> > - libvirtd seems to be very active in using all sorts of kernel facilities
> >   that are modules on fedora so it seems to cause many simultaneous kworker 
> >   calls to modprobe
> > - there are 8 kworker/u16 from 0 to 7
> > - one of these kworkers always deadlocks, while there appear to be two
> >   kworker/u16:6 - the seventh
> 
> Adding Tejun on CC in case this duplication of kworker/u16:6 is important.
> 
> >   6 vs 8 as in 6 rcuos where before they were always 8
> > 
> > Just observations from someone who still doesn't know what the u16
> > kworkers are..
> 
> Could you please run the following diagnostic patch?  This will help
> me see if I have managed to miswire the rcuo kthreads.  It should
> print some information at task-hang time.

So here is the output with today's Linux tip and the diagnostic patch.
This is the case with just starting libvirtd in runlevel 1.
Also a snapshot of the kworker/u16 threads:

    6 ?        S      0:00  \_ [kworker/u16:0]
  553 ?        S      0:00  |   \_ [kworker/u16:0]
  554 ?        D      0:00  |       \_ /sbin/modprobe -q -- bridge
   78 ?        S      0:00  \_ [kworker/u16:1]
   92 ?        S      0:00  \_ [kworker/u16:2]
   93 ?        S      0:00  \_ [kworker/u16:3]
   94 ?        S      0:00  \_ [kworker/u16:4]
   95 ?        S      0:00  \_ [kworker/u16:5]
   96 ?        D      0:00  \_ [kworker/u16:6]
  105 ?        S      0:00  \_ [kworker/u16:7]
  108 ?        S      0:00  \_ [kworker/u16:8]


INFO: task kworker/u16:6:96 blocked for more than 120 seconds.
      Not tainted 3.18.0-rc1+ #16
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u16:6   D ffff8800ca9ecec0 11552    96      2 0x00000000
Workqueue: netns cleanup_net
 ffff880221fff9c8 0000000000000096 ffff8800ca9ecec0 00000000001d5f00
 ffff880221ffffd8 00000000001d5f00 ffff880223260000 ffff8800ca9ecec0
 ffffffff82c44010 7fffffffffffffff ffffffff81ee3798 ffffffff81ee3790
Call Trace:
 [<ffffffff81866219>] schedule+0x29/0x70
 [<ffffffff8186b43c>] schedule_timeout+0x26c/0x410
 [<ffffffff81028bea>] ? native_sched_clock+0x2a/0xa0
 [<ffffffff8110748c>] ? mark_held_locks+0x7c/0xb0
 [<ffffffff8186c4c0>] ? _raw_spin_unlock_irq+0x30/0x50
 [<ffffffff8110761d>] ? trace_hardirqs_on_caller+0x15d/0x200
 [<ffffffff81867c4c>] wait_for_completion+0x10c/0x150
 [<ffffffff810e4dc0>] ? wake_up_state+0x20/0x20
 [<ffffffff81133627>] _rcu_barrier+0x677/0xcd0
 [<ffffffff81133cd5>] rcu_barrier+0x15/0x20
 [<ffffffff81720edf>] netdev_run_todo+0x6f/0x310
 [<ffffffff81715aa5>] ? rollback_registered_many+0x265/0x2e0
 [<ffffffff8172df4e>] rtnl_unlock+0xe/0x10
 [<ffffffff81717906>] default_device_exit_batch+0x156/0x180
 [<ffffffff810fd280>] ? abort_exclusive_wait+0xb0/0xb0
 [<ffffffff8170f9b3>] ops_exit_list.isra.1+0x53/0x60
 [<ffffffff81710560>] cleanup_net+0x100/0x1f0
 [<ffffffff810cc988>] process_one_work+0x218/0x850
 [<ffffffff810cc8ef>] ? process_one_work+0x17f/0x850
 [<ffffffff810cd0a7>] ? worker_thread+0xe7/0x4a0
 [<ffffffff810cd02b>] worker_thread+0x6b/0x4a0
 [<ffffffff810ccfc0>] ? process_one_work+0x850/0x850
 [<ffffffff810d337b>] kthread+0x10b/0x130
 [<ffffffff81028c69>] ? sched_clock+0x9/0x10
 [<ffffffff810d3270>] ? kthread_create_on_node+0x250/0x250
 [<ffffffff8186d1fc>] ret_from_fork+0x7c/0xb0
 [<ffffffff810d3270>] ? kthread_create_on_node+0x250/0x250
4 locks held by kworker/u16:6/96:
 #0:  ("%s""netns"){.+.+.+}, at: [<ffffffff810cc8ef>]
 #process_one_work+0x17f/0x850
 #1:  (net_cleanup_work){+.+.+.}, at: [<ffffffff810cc8ef>]
 #process_one_work+0x17f/0x850
 #2:  (net_mutex){+.+.+.}, at: [<ffffffff817104ec>] cleanup_net+0x8c/0x1f0
 #3:  (rcu_sched_state.barrier_mutex){+.+...}, at: [<ffffffff81133025>]
 #_rcu_barrier+0x75/0xcd0
rcu_show_nocb_setup(): rcu_sched nocb state:
  0: ffff8802267ced40 l:ffff8802267ced40 n:ffff8802269ced40 .G.
  1: ffff8802269ced40 l:ffff8802267ced40 n:          (null) ...
  2: ffff880226bced40 l:ffff880226bced40 n:ffff880226dced40 .G.
  3: ffff880226dced40 l:ffff880226bced40 n:          (null) N..
  4: ffff880226fced40 l:ffff880226fced40 n:ffff8802271ced40 .G.
  5: ffff8802271ced40 l:ffff880226fced40 n:          (null) ...
  6: ffff8802273ced40 l:ffff8802273ced40 n:ffff8802275ced40 N..
  7: ffff8802275ced40 l:ffff8802273ced40 n:          (null) N..
rcu_show_nocb_setup(): rcu_bh nocb state:
  0: ffff8802267ceac0 l:ffff8802267ceac0 n:ffff8802269ceac0 ...
  1: ffff8802269ceac0 l:ffff8802267ceac0 n:          (null) ...
  2: ffff880226bceac0 l:ffff880226bceac0 n:ffff880226dceac0 ...
  3: ffff880226dceac0 l:ffff880226bceac0 n:          (null) ...
  4: ffff880226fceac0 l:ffff880226fceac0 n:ffff8802271ceac0 ...
  5: ffff8802271ceac0 l:ffff880226fceac0 n:          (null) ...
  6: ffff8802273ceac0 l:ffff8802273ceac0 n:ffff8802275ceac0 ...
  7: ffff8802275ceac0 l:ffff8802273ceac0 n:          (null) ...
INFO: task modprobe:554 blocked for more than 120 seconds.
      Not tainted 3.18.0-rc1+ #16
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
modprobe        D ffff8800c85dcec0 12456   554    553 0x00000000
 ffff8802178afbf8 0000000000000096 ffff8800c85dcec0 00000000001d5f00
 ffff8802178affd8 00000000001d5f00 ffffffff81e1b580 ffff8800c85dcec0
 ffff8800c85dcec0 ffffffff81f90c08 0000000000000246 ffff8800c85dcec0
Call Trace:
 [<ffffffff818667c1>] schedule_preempt_disabled+0x31/0x80
 [<ffffffff81868013>] mutex_lock_nested+0x183/0x440
 [<ffffffff8171037f>] ? register_pernet_subsys+0x1f/0x50
 [<ffffffff8171037f>] ? register_pernet_subsys+0x1f/0x50
 [<ffffffffa0619000>] ? 0xffffffffa0619000
 [<ffffffff8171037f>] register_pernet_subsys+0x1f/0x50
 [<ffffffffa0619048>] br_init+0x48/0xd3 [bridge]
 [<ffffffff81002148>] do_one_initcall+0xd8/0x210
 [<ffffffff8115bc22>] load_module+0x20c2/0x2870
 [<ffffffff81156c00>] ? store_uevent+0x70/0x70
 [<ffffffff81281327>] ? kernel_read+0x57/0x90
 [<ffffffff8115c5b6>] SyS_finit_module+0xa6/0xe0
 [<ffffffff8186d2d5>] ? sysret_check+0x22/0x5d
 [<ffffffff8186d2a9>] system_call_fastpath+0x12/0x17
1 lock held by modprobe/554:
 #0:  (net_mutex){+.+.+.}, at: [<ffffffff8171037f>] register_pernet_subsys+0x1f/0x50
rcu_show_nocb_setup(): rcu_sched nocb state:
  0: ffff8802267ced40 l:ffff8802267ced40 n:ffff8802269ced40 .G.
  1: ffff8802269ced40 l:ffff8802267ced40 n:          (null) ...
  2: ffff880226bced40 l:ffff880226bced40 n:ffff880226dced40 .G.
  3: ffff880226dced40 l:ffff880226bced40 n:          (null) N..
  4: ffff880226fced40 l:ffff880226fced40 n:ffff8802271ced40 .G.
  5: ffff8802271ced40 l:ffff880226fced40 n:          (null) ...
  6: ffff8802273ced40 l:ffff8802273ced40 n:ffff8802275ced40 N..
  7: ffff8802275ced40 l:ffff8802273ced40 n:          (null) N..
rcu_show_nocb_setup(): rcu_bh nocb state:
  0: ffff8802267ceac0 l:ffff8802267ceac0 n:ffff8802269ceac0 ...
  1: ffff8802269ceac0 l:ffff8802267ceac0 n:          (null) ...
  2: ffff880226bceac0 l:ffff880226bceac0 n:ffff880226dceac0 ...
  3: ffff880226dceac0 l:ffff880226bceac0 n:          (null) ...
  4: ffff880226fceac0 l:ffff880226fceac0 n:ffff8802271ceac0 ...
  5: ffff8802271ceac0 l:ffff880226fceac0 n:          (null) ...
  6: ffff8802273ceac0 l:ffff8802273ceac0 n:ffff8802275ceac0 ...
  7: ffff8802275ceac0 l:ffff8802273ceac0 n:          (null) ...


 
> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
> rcu: Dump no-CBs CPU state at task-hung time
> 
> Strictly diagnostic commit for rcu_barrier() hang.  Not for inclusion.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
> index 0e5366200154..34048140577b 100644
> --- a/include/linux/rcutiny.h
> +++ b/include/linux/rcutiny.h
> @@ -157,4 +157,8 @@ static inline bool rcu_is_watching(void)
>  
>  #endif /* #else defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) */
>  
> +static inline void rcu_show_nocb_setup(void)
> +{
> +}
> +
>  #endif /* __LINUX_RCUTINY_H */
> diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
> index 52953790dcca..0b813bdb971b 100644
> --- a/include/linux/rcutree.h
> +++ b/include/linux/rcutree.h
> @@ -97,4 +97,6 @@ extern int rcu_scheduler_active __read_mostly;
>  
>  bool rcu_is_watching(void);
>  
> +void rcu_show_nocb_setup(void);
> +
>  #endif /* __LINUX_RCUTREE_H */
> diff --git a/kernel/hung_task.c b/kernel/hung_task.c
> index 06db12434d72..e6e4d0f6b063 100644
> --- a/kernel/hung_task.c
> +++ b/kernel/hung_task.c
> @@ -118,6 +118,7 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
>  		" disables this message.\n");
>  	sched_show_task(t);
>  	debug_show_held_locks(t);
> +	rcu_show_nocb_setup();
>  
>  	touch_nmi_watchdog();
>  
> diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> index 240fa9094f83..6b373e79ce0e 100644
> --- a/kernel/rcu/rcutorture.c
> +++ b/kernel/rcu/rcutorture.c
> @@ -1513,6 +1513,7 @@ rcu_torture_cleanup(void)
>  {
>  	int i;
>  
> +	rcu_show_nocb_setup();
>  	rcutorture_record_test_transition();
>  	if (torture_cleanup_begin()) {
>  		if (cur_ops->cb_barrier != NULL)
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 927c17b081c7..285b3f6fb229 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -2699,6 +2699,31 @@ static bool init_nocb_callback_list(struct rcu_data *rdp)
>  
>  #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
>  
> +void rcu_show_nocb_setup(void)
> +{
> +#ifdef CONFIG_RCU_NOCB_CPU
> +	int cpu;
> +	struct rcu_data *rdp;
> +	struct rcu_state *rsp;
> +
> +	for_each_rcu_flavor(rsp) {
> +		pr_alert("rcu_show_nocb_setup(): %s nocb state:\n", rsp->name);
> +		for_each_possible_cpu(cpu) {
> +			if (!rcu_is_nocb_cpu(cpu))
> +				continue;
> +			rdp = per_cpu_ptr(rsp->rda, cpu);
> +			pr_alert("%3d: %p l:%p n:%p %c%c%c\n",
> +				 cpu,
> +				 rdp, rdp->nocb_leader, rdp->nocb_next_follower,
> +				 ".N"[!!rdp->nocb_head],
> +				 ".G"[!!rdp->nocb_gp_head],
> +				 ".F"[!!rdp->nocb_follower_head]);
> +		}
> +	}
> +#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
> +}
> +EXPORT_SYMBOL_GPL(rcu_show_nocb_setup);
> +
>  /*
>   * An adaptive-ticks CPU can potentially execute in kernel mode for an
>   * arbitrarily long period of time with the scheduling-clock tick turned
> 
Paul E. McKenney Oct. 25, 2014, 1:38 p.m. UTC | #7
On Sat, Oct 25, 2014 at 03:09:36PM +0300, Yanko Kaneti wrote:
> On Fri-10/24/14-2014 14:49, Paul E. McKenney wrote:
> > On Sat, Oct 25, 2014 at 12:25:57AM +0300, Yanko Kaneti wrote:
> > > On Fri-10/24/14-2014 11:32, Paul E. McKenney wrote:
> > > > On Fri, Oct 24, 2014 at 08:35:26PM +0300, Yanko Kaneti wrote:
> > > > > On Fri-10/24/14-2014 10:20, Paul E. McKenney wrote:
> > 
> > [ . . . ]
> > 
> > > > > > Well, if you are feeling aggressive, give the following patch a spin.
> > > > > > I am doing sanity tests on it in the meantime.
> > > > > 
> > > > > Doesn't seem to make a difference here
> > > > 
> > > > OK, inspection isn't cutting it, so time for tracing.  Does the system
> > > > respond to user input?  If so, please enable rcu:rcu_barrier ftrace before
> > > > the problem occurs, then dump the trace buffer after the problem occurs.
> > > 
> > > Sorry for being unresponsive here, but I know next to nothing about tracing
> > > or most things about the kernel, so I have some catching up to do.
> > > 
> > > In the meantime some layman observations while I tried to find what exactly
> > > triggers the problem.
> > > - Even in runlevel 1 I can reliably trigger the problem by starting libvirtd
> > > - libvirtd seems to be very active in using all sorts of kernel facilities
> > >   that are modules on fedora so it seems to cause many simultaneous kworker 
> > >   calls to modprobe
> > > - there are 8 kworker/u16 from 0 to 7
> > > - one of these kworkers always deadlocks, while there appear to be two
> > >   kworker/u16:6 - the seventh
> > 
> > Adding Tejun on CC in case this duplication of kworker/u16:6 is important.
> > 
> > >   6 vs 8 as in 6 rcuos where before they were always 8
> > > 
> > > Just observations from someone who still doesn't know what the u16
> > > kworkers are..
> > 
> > Could you please run the following diagnostic patch?  This will help
> > me see if I have managed to miswire the rcuo kthreads.  It should
> > print some information at task-hang time.
> 
> So here is the output with today's Linux tip and the diagnostic patch.
> This is the case with just starting libvirtd in runlevel 1.

Thank you for testing this!

> Also a snapshot of the kworker/u16 threads:
> 
>     6 ?        S      0:00  \_ [kworker/u16:0]
>   553 ?        S      0:00  |   \_ [kworker/u16:0]
>   554 ?        D      0:00  |       \_ /sbin/modprobe -q -- bridge
>    78 ?        S      0:00  \_ [kworker/u16:1]
>    92 ?        S      0:00  \_ [kworker/u16:2]
>    93 ?        S      0:00  \_ [kworker/u16:3]
>    94 ?        S      0:00  \_ [kworker/u16:4]
>    95 ?        S      0:00  \_ [kworker/u16:5]
>    96 ?        D      0:00  \_ [kworker/u16:6]
>   105 ?        S      0:00  \_ [kworker/u16:7]
>   108 ?        S      0:00  \_ [kworker/u16:8]

You had six CPUs, IIRC, so the last two kworker/u16 kthreads are surplus
to requirements.  Not sure if they are causing any trouble, though.

> INFO: task kworker/u16:6:96 blocked for more than 120 seconds.
>       Not tainted 3.18.0-rc1+ #16
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> kworker/u16:6   D ffff8800ca9ecec0 11552    96      2 0x00000000
> Workqueue: netns cleanup_net
>  ffff880221fff9c8 0000000000000096 ffff8800ca9ecec0 00000000001d5f00
>  ffff880221ffffd8 00000000001d5f00 ffff880223260000 ffff8800ca9ecec0
>  ffffffff82c44010 7fffffffffffffff ffffffff81ee3798 ffffffff81ee3790
> Call Trace:
>  [<ffffffff81866219>] schedule+0x29/0x70
>  [<ffffffff8186b43c>] schedule_timeout+0x26c/0x410
>  [<ffffffff81028bea>] ? native_sched_clock+0x2a/0xa0
>  [<ffffffff8110748c>] ? mark_held_locks+0x7c/0xb0
>  [<ffffffff8186c4c0>] ? _raw_spin_unlock_irq+0x30/0x50
>  [<ffffffff8110761d>] ? trace_hardirqs_on_caller+0x15d/0x200
>  [<ffffffff81867c4c>] wait_for_completion+0x10c/0x150
>  [<ffffffff810e4dc0>] ? wake_up_state+0x20/0x20
>  [<ffffffff81133627>] _rcu_barrier+0x677/0xcd0
>  [<ffffffff81133cd5>] rcu_barrier+0x15/0x20
>  [<ffffffff81720edf>] netdev_run_todo+0x6f/0x310
>  [<ffffffff81715aa5>] ? rollback_registered_many+0x265/0x2e0
>  [<ffffffff8172df4e>] rtnl_unlock+0xe/0x10
>  [<ffffffff81717906>] default_device_exit_batch+0x156/0x180
>  [<ffffffff810fd280>] ? abort_exclusive_wait+0xb0/0xb0
>  [<ffffffff8170f9b3>] ops_exit_list.isra.1+0x53/0x60
>  [<ffffffff81710560>] cleanup_net+0x100/0x1f0
>  [<ffffffff810cc988>] process_one_work+0x218/0x850
>  [<ffffffff810cc8ef>] ? process_one_work+0x17f/0x850
>  [<ffffffff810cd0a7>] ? worker_thread+0xe7/0x4a0
>  [<ffffffff810cd02b>] worker_thread+0x6b/0x4a0
>  [<ffffffff810ccfc0>] ? process_one_work+0x850/0x850
>  [<ffffffff810d337b>] kthread+0x10b/0x130
>  [<ffffffff81028c69>] ? sched_clock+0x9/0x10
>  [<ffffffff810d3270>] ? kthread_create_on_node+0x250/0x250
>  [<ffffffff8186d1fc>] ret_from_fork+0x7c/0xb0
>  [<ffffffff810d3270>] ? kthread_create_on_node+0x250/0x250
> 4 locks held by kworker/u16:6/96:
>  #0:  ("%s""netns"){.+.+.+}, at: [<ffffffff810cc8ef>] process_one_work+0x17f/0x850
>  #1:  (net_cleanup_work){+.+.+.}, at: [<ffffffff810cc8ef>] process_one_work+0x17f/0x850
>  #2:  (net_mutex){+.+.+.}, at: [<ffffffff817104ec>] cleanup_net+0x8c/0x1f0
>  #3:  (rcu_sched_state.barrier_mutex){+.+...}, at: [<ffffffff81133025>] _rcu_barrier+0x75/0xcd0
> rcu_show_nocb_setup(): rcu_sched nocb state:
>   0: ffff8802267ced40 l:ffff8802267ced40 n:ffff8802269ced40 .G.
>   1: ffff8802269ced40 l:ffff8802267ced40 n:          (null) ...
>   2: ffff880226bced40 l:ffff880226bced40 n:ffff880226dced40 .G.
>   3: ffff880226dced40 l:ffff880226bced40 n:          (null) N..
>   4: ffff880226fced40 l:ffff880226fced40 n:ffff8802271ced40 .G.
>   5: ffff8802271ced40 l:ffff880226fced40 n:          (null) ...
>   6: ffff8802273ced40 l:ffff8802273ced40 n:ffff8802275ced40 N..
>   7: ffff8802275ced40 l:ffff8802273ced40 n:          (null) N..

And this looks like rcu_barrier() has posted callbacks for the
non-existent CPUs 6 and 7, similar to what Jay was seeing.

I am working on a fix -- chasing down corner cases.
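
Purely as an illustration of the shape such a fix could take (a sketch for
discussion only, not the eventual fix; rcu_nocb_cpu_has_callbacks() is a
hypothetical helper), the barrier enqueue path would need a check along
these lines:

/* Sketch only: should _rcu_barrier() wait on this CPU at all? */
static bool barrier_needs_this_cpu(struct rcu_state *rsp, int cpu)
{
	struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);

	/*
	 * A possible-but-offline no-CBs CPU with nothing queued has no rcuo
	 * work pending, so a barrier callback posted there would never run.
	 */
	if (!cpu_online(cpu) && !rcu_nocb_cpu_has_callbacks(rdp))
		return false;
	return true;
}

with the for_each_possible_cpu() loop in _rcu_barrier() skipping any CPU
for which this returns false.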

							Thanx, Paul

> rcu_show_nocb_setup(): rcu_bh nocb state:
>   0: ffff8802267ceac0 l:ffff8802267ceac0 n:ffff8802269ceac0 ...
>   1: ffff8802269ceac0 l:ffff8802267ceac0 n:          (null) ...
>   2: ffff880226bceac0 l:ffff880226bceac0 n:ffff880226dceac0 ...
>   3: ffff880226dceac0 l:ffff880226bceac0 n:          (null) ...
>   4: ffff880226fceac0 l:ffff880226fceac0 n:ffff8802271ceac0 ...
>   5: ffff8802271ceac0 l:ffff880226fceac0 n:          (null) ...
>   6: ffff8802273ceac0 l:ffff8802273ceac0 n:ffff8802275ceac0 ...
>   7: ffff8802275ceac0 l:ffff8802273ceac0 n:          (null) ...
> INFO: task modprobe:554 blocked for more than 120 seconds.
>       Not tainted 3.18.0-rc1+ #16
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> modprobe        D ffff8800c85dcec0 12456   554    553 0x00000000
>  ffff8802178afbf8 0000000000000096 ffff8800c85dcec0 00000000001d5f00
>  ffff8802178affd8 00000000001d5f00 ffffffff81e1b580 ffff8800c85dcec0
>  ffff8800c85dcec0 ffffffff81f90c08 0000000000000246 ffff8800c85dcec0
> Call Trace:
>  [<ffffffff818667c1>] schedule_preempt_disabled+0x31/0x80
>  [<ffffffff81868013>] mutex_lock_nested+0x183/0x440
>  [<ffffffff8171037f>] ? register_pernet_subsys+0x1f/0x50
>  [<ffffffff8171037f>] ? register_pernet_subsys+0x1f/0x50
>  [<ffffffffa0619000>] ? 0xffffffffa0619000
>  [<ffffffff8171037f>] register_pernet_subsys+0x1f/0x50
>  [<ffffffffa0619048>] br_init+0x48/0xd3 [bridge]
>  [<ffffffff81002148>] do_one_initcall+0xd8/0x210
>  [<ffffffff8115bc22>] load_module+0x20c2/0x2870
>  [<ffffffff81156c00>] ? store_uevent+0x70/0x70
>  [<ffffffff81281327>] ? kernel_read+0x57/0x90
>  [<ffffffff8115c5b6>] SyS_finit_module+0xa6/0xe0
>  [<ffffffff8186d2d5>] ? sysret_check+0x22/0x5d
>  [<ffffffff8186d2a9>] system_call_fastpath+0x12/0x17
> 1 lock held by modprobe/554:
>  #0:  (net_mutex){+.+.+.}, at: [<ffffffff8171037f>] register_pernet_subsys+0x1f/0x50
> rcu_show_nocb_setup(): rcu_sched nocb state:
>   0: ffff8802267ced40 l:ffff8802267ced40 n:ffff8802269ced40 .G.
>   1: ffff8802269ced40 l:ffff8802267ced40 n:          (null) ...
>   2: ffff880226bced40 l:ffff880226bced40 n:ffff880226dced40 .G.
>   3: ffff880226dced40 l:ffff880226bced40 n:          (null) N..
>   4: ffff880226fced40 l:ffff880226fced40 n:ffff8802271ced40 .G.
>   5: ffff8802271ced40 l:ffff880226fced40 n:          (null) ...
>   6: ffff8802273ced40 l:ffff8802273ced40 n:ffff8802275ced40 N..
>   7: ffff8802275ced40 l:ffff8802273ced40 n:          (null) N..
> rcu_show_nocb_setup(): rcu_bh nocb state:
>   0: ffff8802267ceac0 l:ffff8802267ceac0 n:ffff8802269ceac0 ...
>   1: ffff8802269ceac0 l:ffff8802267ceac0 n:          (null) ...
>   2: ffff880226bceac0 l:ffff880226bceac0 n:ffff880226dceac0 ...
>   3: ffff880226dceac0 l:ffff880226bceac0 n:          (null) ...
>   4: ffff880226fceac0 l:ffff880226fceac0 n:ffff8802271ceac0 ...
>   5: ffff8802271ceac0 l:ffff880226fceac0 n:          (null) ...
>   6: ffff8802273ceac0 l:ffff8802273ceac0 n:ffff8802275ceac0 ...
>   7: ffff8802275ceac0 l:ffff8802273ceac0 n:          (null) ...
> 
> 
> 
> > 							Thanx, Paul
> > 
> > ------------------------------------------------------------------------
> > 
> > rcu: Dump no-CBs CPU state at task-hung time
> > 
> > Strictly diagnostic commit for rcu_barrier() hang.  Not for inclusion.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > 
> > diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
> > index 0e5366200154..34048140577b 100644
> > --- a/include/linux/rcutiny.h
> > +++ b/include/linux/rcutiny.h
> > @@ -157,4 +157,8 @@ static inline bool rcu_is_watching(void)
> >  
> >  #endif /* #else defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) */
> >  
> > +static inline void rcu_show_nocb_setup(void)
> > +{
> > +}
> > +
> >  #endif /* __LINUX_RCUTINY_H */
> > diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
> > index 52953790dcca..0b813bdb971b 100644
> > --- a/include/linux/rcutree.h
> > +++ b/include/linux/rcutree.h
> > @@ -97,4 +97,6 @@ extern int rcu_scheduler_active __read_mostly;
> >  
> >  bool rcu_is_watching(void);
> >  
> > +void rcu_show_nocb_setup(void);
> > +
> >  #endif /* __LINUX_RCUTREE_H */
> > diff --git a/kernel/hung_task.c b/kernel/hung_task.c
> > index 06db12434d72..e6e4d0f6b063 100644
> > --- a/kernel/hung_task.c
> > +++ b/kernel/hung_task.c
> > @@ -118,6 +118,7 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
> >  		" disables this message.\n");
> >  	sched_show_task(t);
> >  	debug_show_held_locks(t);
> > +	rcu_show_nocb_setup();
> >  
> >  	touch_nmi_watchdog();
> >  
> > diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> > index 240fa9094f83..6b373e79ce0e 100644
> > --- a/kernel/rcu/rcutorture.c
> > +++ b/kernel/rcu/rcutorture.c
> > @@ -1513,6 +1513,7 @@ rcu_torture_cleanup(void)
> >  {
> >  	int i;
> >  
> > +	rcu_show_nocb_setup();
> >  	rcutorture_record_test_transition();
> >  	if (torture_cleanup_begin()) {
> >  		if (cur_ops->cb_barrier != NULL)
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index 927c17b081c7..285b3f6fb229 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -2699,6 +2699,31 @@ static bool init_nocb_callback_list(struct rcu_data *rdp)
> >  
> >  #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
> >  
> > +void rcu_show_nocb_setup(void)
> > +{
> > +#ifdef CONFIG_RCU_NOCB_CPU
> > +	int cpu;
> > +	struct rcu_data *rdp;
> > +	struct rcu_state *rsp;
> > +
> > +	for_each_rcu_flavor(rsp) {
> > +		pr_alert("rcu_show_nocb_setup(): %s nocb state:\n", rsp->name);
> > +		for_each_possible_cpu(cpu) {
> > +			if (!rcu_is_nocb_cpu(cpu))
> > +				continue;
> > +			rdp = per_cpu_ptr(rsp->rda, cpu);
> > +			pr_alert("%3d: %p l:%p n:%p %c%c%c\n",
> > +				 cpu,
> > +				 rdp, rdp->nocb_leader, rdp->nocb_next_follower,
> > +				 ".N"[!!rdp->nocb_head],
> > +				 ".G"[!!rdp->nocb_gp_head],
> > +				 ".F"[!!rdp->nocb_follower_head]);
> > +		}
> > +	}
> > +#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
> > +}
> > +EXPORT_SYMBOL_GPL(rcu_show_nocb_setup);
> > +
> >  /*
> >   * An adaptive-ticks CPU can potentially execute in kernel mode for an
> >   * arbitrarily long period of time with the scheduling-clock tick turned
> > 
> 


Patch

diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 0e5366200154..34048140577b 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -157,4 +157,8 @@  static inline bool rcu_is_watching(void)
 
 #endif /* #else defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) */
 
+static inline void rcu_show_nocb_setup(void)
+{
+}
+
 #endif /* __LINUX_RCUTINY_H */
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 52953790dcca..0b813bdb971b 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -97,4 +97,6 @@  extern int rcu_scheduler_active __read_mostly;
 
 bool rcu_is_watching(void);
 
+void rcu_show_nocb_setup(void);
+
 #endif /* __LINUX_RCUTREE_H */
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index 06db12434d72..e6e4d0f6b063 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -118,6 +118,7 @@  static void check_hung_task(struct task_struct *t, unsigned long timeout)
 		" disables this message.\n");
 	sched_show_task(t);
 	debug_show_held_locks(t);
+	rcu_show_nocb_setup();
 
 	touch_nmi_watchdog();
 
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 240fa9094f83..6b373e79ce0e 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -1513,6 +1513,7 @@  rcu_torture_cleanup(void)
 {
 	int i;
 
+	rcu_show_nocb_setup();
 	rcutorture_record_test_transition();
 	if (torture_cleanup_begin()) {
 		if (cur_ops->cb_barrier != NULL)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 927c17b081c7..285b3f6fb229 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -2699,6 +2699,31 @@  static bool init_nocb_callback_list(struct rcu_data *rdp)
 
 #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
 
+void rcu_show_nocb_setup(void)
+{
+#ifdef CONFIG_RCU_NOCB_CPU
+	int cpu;
+	struct rcu_data *rdp;
+	struct rcu_state *rsp;
+
+	for_each_rcu_flavor(rsp) {
+		pr_alert("rcu_show_nocb_setup(): %s nocb state:\n", rsp->name);
+		for_each_possible_cpu(cpu) {
+			if (!rcu_is_nocb_cpu(cpu))
+				continue;
+			rdp = per_cpu_ptr(rsp->rda, cpu);
+			pr_alert("%3d: %p l:%p n:%p %c%c%c\n",
+				 cpu,
+				 rdp, rdp->nocb_leader, rdp->nocb_next_follower,
+				 ".N"[!!rdp->nocb_head],
+				 ".G"[!!rdp->nocb_gp_head],
+				 ".F"[!!rdp->nocb_follower_head]);
+		}
+	}
+#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
+}
+EXPORT_SYMBOL_GPL(rcu_show_nocb_setup);
+
 /*
  * An adaptive-ticks CPU can potentially execute in kernel mode for an
  * arbitrarily long period of time with the scheduling-clock tick turned