
[UPDATED,RFC,08/10] SPARC: smp: remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()

Message ID 20120529082732.GA4250@zhy
State Not Applicable
Delegated to: David Miller
Headers show

Commit Message

Yong Zhang May 29, 2012, 8:27 a.m. UTC
On Tue, May 29, 2012 at 01:31:54PM +0530, Srivatsa S. Bhat wrote:
> This looks odd. IRQs must not have been enabled at this point.
> Just remove the call to local_irq_enable() that is found a few lines above
> this line and then you won't have to add this call to local_irq_disable().

Yeah, I thought about that. But since I was not sure whether there is a
special need to enable irqs that early (I don't know much about sparc),
I decided to make the minor change :)

Since we have gotten confirmation from David, I'm sending out the
new version. Please check it.

Thanks,
Yong

---
From: Yong Zhang <yong.zhang@windriver.com>
Date: Tue, 29 May 2012 12:56:08 +0800
Subject: [UPDATED] [RFC PATCH 8/10] SPARC: smp: remove call to
 ipi_call_lock_irq()/ipi_call_unlock_irq()

1) call_function.lock used in smp_call_function_many() only protects
   call_function.queue and &data->refs; cpu_online_mask is read outside
   of the lock. It is not necessary to protect cpu_online_mask, because
   data->cpumask is pre-calculated, and even if a cpu is brought up while
   arch_send_call_function_ipi_mask() is in progress, it is harmless
   because the validation test in generic_smp_call_function_interrupt()
   will take care of it.

2) For the cpu-down case, stop_machine() guarantees that no concurrent
   smp_call_function() is in progress.

Also delay enabling irqs until after set_cpu_online().

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
---
 arch/sparc/kernel/smp_64.c |    7 +------
 1 files changed, 1 insertions(+), 6 deletions(-)

Comments

Srivatsa S. Bhat May 29, 2012, 8:30 a.m. UTC | #1
On 05/29/2012 01:57 PM, Yong Zhang wrote:

> On Tue, May 29, 2012 at 01:31:54PM +0530, Srivatsa S. Bhat wrote:
>> This looks odd. IRQs must not have been enabled at this point.
>> Just remove the call to local_irq_enable() that is found a few lines above
>> this line and then you won't have to add this call to local_irq_disable().
> 
> Yeah, I thought about that. But since I was not sure whether there is a
> special need to enable irqs that early (I don't know much about sparc),
> I decided to make the minor change :)
> 
> Since we have gotten confirmation from David, I'm sending out the
> new version. Please check it.
> 
> Thanks,
> Yong
> 
> ---
> From: Yong Zhang <yong.zhang@windriver.com>
> Date: Tue, 29 May 2012 12:56:08 +0800
> Subject: [UPDATED] [RFC PATCH 8/10] SPARC: smp: remove call to
>  ipi_call_lock_irq()/ipi_call_unlock_irq()
> 
> 1) call_function.lock used in smp_call_function_many() only protects
>    call_function.queue and &data->refs; cpu_online_mask is read outside
>    of the lock. It is not necessary to protect cpu_online_mask, because
>    data->cpumask is pre-calculated, and even if a cpu is brought up while
>    arch_send_call_function_ipi_mask() is in progress, it is harmless
>    because the validation test in generic_smp_call_function_interrupt()
>    will take care of it.
> 
> 2) For the cpu-down case, stop_machine() guarantees that no concurrent
>    smp_call_function() is in progress.
> 
> Also delay enabling irqs until after set_cpu_online().
> 
> Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: sparclinux@vger.kernel.org

> ---


Looks good.
Acked-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>

>  arch/sparc/kernel/smp_64.c |    7 +------
>  1 files changed, 1 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
> index f591598..781bcb1 100644
> --- a/arch/sparc/kernel/smp_64.c
> +++ b/arch/sparc/kernel/smp_64.c
> @@ -103,8 +103,6 @@ void __cpuinit smp_callin(void)
>  	if (cheetah_pcache_forced_on)
>  		cheetah_enable_pcache();
> 
> -	local_irq_enable();
> -
>  	callin_flag = 1;
>  	__asm__ __volatile__("membar #Sync\n\t"
>  			     "flush  %%g6" : : : "memory");
> @@ -124,9 +122,8 @@ void __cpuinit smp_callin(void)
>  	while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
>  		rmb();
> 
> -	ipi_call_lock_irq();
>  	set_cpu_online(cpuid, true);
> -	ipi_call_unlock_irq();
> +	local_irq_enable();
> 
>  	/* idle thread is expected to have preempt disabled */
>  	preempt_disable();
> @@ -1308,9 +1305,7 @@ int __cpu_disable(void)
>  	mdelay(1);
>  	local_irq_disable();
> 
> -	ipi_call_lock();
>  	set_cpu_online(cpu, false);
> -	ipi_call_unlock();
> 
>  	cpu_map_rebuild();
> 

--
To unsubscribe from this list: send the line "unsubscribe sparclinux" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
David Miller May 29, 2012, 8:36 a.m. UTC | #2
From: Yong Zhang <yong.zhang0@gmail.com>
Date: Tue, 29 May 2012 16:27:33 +0800

> From: Yong Zhang <yong.zhang@windriver.com>
> Date: Tue, 29 May 2012 12:56:08 +0800
> Subject: [UPDATED] [RFC PATCH 8/10] SPARC: smp: remove call to
>  ipi_call_lock_irq()/ipi_call_unlock_irq()
> 
> 1) call_function.lock used in smp_call_function_many() only protects
>    call_function.queue and &data->refs; cpu_online_mask is read outside
>    of the lock. It is not necessary to protect cpu_online_mask, because
>    data->cpumask is pre-calculated, and even if a cpu is brought up while
>    arch_send_call_function_ipi_mask() is in progress, it is harmless
>    because the validation test in generic_smp_call_function_interrupt()
>    will take care of it.
> 
> 2) For the cpu-down case, stop_machine() guarantees that no concurrent
>    smp_call_function() is in progress.
> 
> Also delay enabling irqs until after set_cpu_online().
> 
> Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>

Acked-by: David S. Miller <davem@davemloft.net>

Patch

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index f591598..781bcb1 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -103,8 +103,6 @@  void __cpuinit smp_callin(void)
 	if (cheetah_pcache_forced_on)
 		cheetah_enable_pcache();
 
-	local_irq_enable();
-
 	callin_flag = 1;
 	__asm__ __volatile__("membar #Sync\n\t"
 			     "flush  %%g6" : : : "memory");
@@ -124,9 +122,8 @@  void __cpuinit smp_callin(void)
 	while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
 		rmb();
 
-	ipi_call_lock_irq();
 	set_cpu_online(cpuid, true);
-	ipi_call_unlock_irq();
+	local_irq_enable();
 
 	/* idle thread is expected to have preempt disabled */
 	preempt_disable();
@@ -1308,9 +1305,7 @@  int __cpu_disable(void)
 	mdelay(1);
 	local_irq_disable();
 
-	ipi_call_lock();
 	set_cpu_online(cpu, false);
-	ipi_call_unlock();
 
 	cpu_map_rebuild();