
[v2,18/19] ARC: [plat-eznps] replace sync with proper cpu barrier

Message ID 564B1182.7050107@synopsys.com
State Superseded

Commit Message

Vineet Gupta Nov. 17, 2015, 11:37 a.m. UTC
On Tuesday 17 November 2015 04:53 PM, Peter Zijlstra wrote:
> On Tue, Nov 17, 2015 at 04:42:49PM +0530, Vineet Gupta wrote:
>> On Saturday 07 November 2015 04:22 PM, Noam Camus wrote:
>>> From: Tal Zilcer <talz@ezchip.com>
>>>
>>> In an SMT system like ours, the generic "sync" does not work across
>>> HW threads. The replacement is the "schd.rw" instruction, which serves
>>> as a CPU barrier for HW threads.
>>
>> As discussed in v2 of this patch, SYNC or some such in __switch_to is completely
>> superfluous. We don't need this patch; you may instead want to submit a patch which
>> removes the sync. Also please do that in the assembler version of this file as well!
> 
> Do test it though; 

Certainly!

> as is ARC-SMP seems to have a _lot_ of superfluous
> barriers many of which have no explanation yet (I'm thinking of those
> extra smp_mb()s in the lock primitives).

Other than the lock primitives, can you think of any more?

I verified that with llock/scond based spinlocks, those smp_mb() can be safely
removed. I didn't send that patch over yet, as part of the puzzle is why removing
them in EX based locks causes hackbench to jitter on quad core builds. This
requires some perf investigation, but that seems to be causing some sort of
livelock with callgraph profiling, which is what I'm debugging currently :-)

BTW since we are on the topic, we have this loop in the stack unwinder which can
potentially cause RCU stalls, actual lockups etc. I was planning to add the
following - does that seem fine to you?

------------------>
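
For context, the change proposed by the patch at the top of this thread swaps
the generic ARC "sync" barrier for the eznps-specific "schd.rw" instruction.
A minimal sketch of that style of override, assuming a raw instruction-word
constant along the lines of CTOP_INST_SCHD_RW (the config symbol, macro name
and encoding below are illustrative assumptions, not the exact patch):

/*
 * Sketch only, not the actual patch: on an SMT (HW-threaded) core the
 * generic "sync" does not order across HW threads, so the platform
 * overrides mb() with its own barrier instruction. CTOP_INST_SCHD_RW
 * and its encoding are assumptions for illustration.
 */
#ifdef CONFIG_EZNPS_MTM_EXT
#define CTOP_INST_SCHD_RW	0x3e6f7004	/* hypothetical encoding */
/* "schd.rw": stall this HW thread until its pending reads/writes drain */
#define mb()	asm volatile (".word %0" : : "i"(CTOP_INST_SCHD_RW) : "memory")
#else
#define mb()	asm volatile ("sync \n" : : : "memory")
#endif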

Comments

Peter Zijlstra Nov. 17, 2015, 12:22 p.m. UTC | #1
On Tue, Nov 17, 2015 at 05:07:38PM +0530, Vineet Gupta wrote:
> > as is ARC-SMP seems to have a _lot_ of superfluous
> > barriers many of which have no explanation yet (I'm thinking of those
> > extra smp_mb()s in the lock primitives).
> 
> Other than the lock primitives, can you think of any more?

Not off the top of my head.

> I verified that with llock/scond based spinlocks, those smp_mb() can be safely
> removed.

Good!

> I didn't send that patch over yet, as part of the puzzle is why removing them
> in EX based locks causes hackbench to jitter on quad core builds. This
> requires some perf investigation, but that seems to be causing some sort of
> livelock with callgraph profiling, which is what I'm debugging currently :-)

So there are two superfluous barriers, right: the one before LOCK and the
one after UNLOCK. Does removing either cause the jitter? I'm thinking
that maybe it's the smp_mb() after UNLOCK, which force-flushes the store
buffer, that helps (MIPS has something similar).

> BTW since we are on the topic, we have this loop in the stack unwinder which can
> potentially cause RCU stalls, actual lockups etc. I was planning to add the
> following - does that seem fine to you?

Worries me more than anything. How could you get stuck in there?
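
For reference, the EX-based spinlock under discussion looked roughly like the
sketch below - a reconstruction for illustration, not a verbatim copy of
arch/arc/include/asm/spinlock.h - with the pair of barriers Peter is referring
to marked:

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
	unsigned int val = __ARCH_SPIN_LOCK_LOCKED__;

	smp_mb();	/* barrier before LOCK: superfluous for ACQUIRE semantics */

	__asm__ __volatile__(
	"1:	ex  %0, [%1]		\n"	/* atomically swap val <-> lock->slock */
	"	breq  %0, %2, 1b	\n"	/* saw LOCKED: spin and retry */
	: "+&r" (val)
	: "r" (&(lock->slock)), "ir" (__ARCH_SPIN_LOCK_LOCKED__)
	: "memory");

	smp_mb();	/* barrier after the lock is taken: required for ACQUIRE */
}

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	smp_mb();	/* barrier before the releasing store: required for RELEASE */

	lock->slock = __ARCH_SPIN_LOCK_UNLOCKED__;

	smp_mb();	/* barrier after UNLOCK: superfluous, though it may flush
			 * the store buffer sooner and thereby mask the jitter */
}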
Vineet Gupta Nov. 17, 2015, 12:37 p.m. UTC | #2
On Tuesday 17 November 2015 05:52 PM, Peter Zijlstra wrote:
>> > BTW since we are on the topic, we have this loop in the stack unwinder which can
>> > potentially cause RCU stalls, actual lockups etc. I was planning to add the
>> > following - does that seem fine to you?
> Worries me more than anything. How could you get stuck in there?

No, we're not getting stuck in there - but it has the potential to, if say the
unwind info were corrupt (never seen that though).

The old code won't even respond to, say, a Ctrl+C if it were stuck!
Plus the reschedule there will keep sched happy when, say, unraveling deep stack
frames with perf?
Peter Zijlstra Nov. 17, 2015, 12:44 p.m. UTC | #3
On Tue, Nov 17, 2015 at 06:07:08PM +0530, Vineet Gupta wrote:
> On Tuesday 17 November 2015 05:52 PM, Peter Zijlstra wrote:
> >> > BTW since we are on the topic, we have this loop in the stack unwinder which can
> >> > potentially cause RCU stalls, actual lockups etc. I was planning to add the
> >> > following - does that seem fine to you?
> > Worries me more than anything. How could you get stuck in there?
> 
> No, we're not getting stuck in there - but it has the potential to, if say the
> unwind info were corrupt (never seen that though).

Better put in a failsafe for that anyway, just out of sheer paranoia :-)

You should never report more than PERF_MAX_STACK_DEPTH thingies anyway,
so once you've done that many loops, you're good to bail, right?

> The old code won't even respond to, say, a Ctrl+C if it were stuck!
> Plus the reschedule there will keep sched happy when, say, unraveling deep stack
> frames with perf?

You're likely to call this code from interrupt/NMI context; no ^C or
scheduling is going to help you there.
Vineet Gupta Nov. 17, 2015, 1:32 p.m. UTC | #4
On Tuesday 17 November 2015 06:14 PM, Peter Zijlstra wrote:
> On Tue, Nov 17, 2015 at 06:07:08PM +0530, Vineet Gupta wrote:
>> On Tuesday 17 November 2015 05:52 PM, Peter Zijlstra wrote:
>>>>> BTW since we are on the topic, we have this loop in the stack unwinder which can
>>>>> potentially cause RCU stalls, actual lockups etc. I was planning to add the
>>>>> following - does that seem fine to you?
>>> Worries me more than anything. How could you get stuck in there?
>>
>> No, we're not getting stuck in there - but it has the potential to, if say the
>> unwind info were corrupt (never seen that though).
> 
> Better put in a failsafe for that anyway, just out of sheer paranoia :-)

Indeed, loop at most, say, 32 times or some such.


> You should never report more than PERF_MAX_STACK_DEPTH thingies anyway,
> so once you've done that many loops, you're good to bail, right?

Yeah, although I need to verify whether the arch code needs to check that. Plus the
unwinder is also used for things like ps wchan, /proc/<pid>/stack, crash dumps etc.

>> The old code won't even respond to, say, a Ctrl+C if it were stuck!
>> Plus the reschedule there will keep sched happy when, say, unraveling deep stack
>> frames with perf?
> 
> You're likely to call this code from interrupt/NMI context; no ^C or
> scheduling is going to help you there.

For the perf case, yes - but there are other uses of the unwinder, as described above.

So in light of the above, do you think that the cond_resched() + signal_pending
check is not needed and that bailing after N iters will be fine?
Peter Zijlstra Nov. 17, 2015, 1:59 p.m. UTC | #5
On Tue, Nov 17, 2015 at 07:02:00PM +0530, Vineet Gupta wrote:
> > You should never report more than PERF_MAX_STACK_DEPTH thingies anyway,
> > so once you've done that many loops, you're good to bail, right?
> 
> Yeah, although I need to verify whether the arch code needs to check that. Plus the
> unwinder is also used for things like ps wchan, /proc/<pid>/stack, crash dumps etc.

Ah, ok. On x86 we have a separate unwinder for perf. Trying to touch
userspace memory from NMI context is a tad tricky.

(imagine the NMI interrupting a page-fault and then itself triggering a
page-fault again)

> > You're likely to call this code from interrupt/NMI context; no ^C or
> > scheduling is going to help you there.
> 
> For the perf case, yes - but there are other uses of the unwinder, as described above.
> 
> So in light of the above, do you think that the cond_resched() + signal_pending
> check is not needed and that bailing after N iters will be fine?

It can't hurt, but you want to engineer it to be robust for the most
constrained environment, so I would say the bail over N iters had
better be robust, otherwise you're screwed anyhow ;-)

Patch

diff --git a/arch/arc/kernel/stacktrace.c b/arch/arc/kernel/stacktrace.c
index 573ac469f68b..e887f6df7ac9 100644
--- a/arch/arc/kernel/stacktrace.c
+++ b/arch/arc/kernel/stacktrace.c
@@ -138,6 +138,10 @@  arc_unwind_core(struct task_struct *tsk, struct pt_regs *regs,
 		if (consumer_fn(address, arg) == -1)
 			break;

+		cond_resched();
+		if (fatal_signal_pending(current))
+			return -ERESTARTNOHAND;
+
 		ret = arc_unwind(&frame_info);
 		if (ret)
 			break;
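
As the thread concludes, a depth cap is the more robust companion to the hunk
above, since neither cond_resched() nor a signal check helps in interrupt/NMI
context. A sketch of what that failsafe could look like in the same loop (the
cap value and the counter name are assumptions, not a merged patch):

	/* Sketch only: bound the unwind loop against corrupt unwind info.
	 * UNWIND_MAX_FRAMES and its value are illustrative assumptions. */
	#define UNWIND_MAX_FRAMES	512	/* comfortably > PERF_MAX_STACK_DEPTH */

	unsigned int frames = 0;

	while (1) {
		/* ... fetch the PC for the current frame into 'address' ... */

		if (consumer_fn(address, arg) == -1)
			break;

		if (++frames >= UNWIND_MAX_FRAMES)	/* failsafe: bail */
			break;

		ret = arc_unwind(&frame_info);
		if (ret)
			break;
	}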