
[04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled

Message ID 1473944523-624-5-git-send-email-maddy@linux.vnet.ibm.com (mailing list archive)
State Superseded
Headers show

Commit Message

maddy Sept. 15, 2016, 1:01 p.m. UTC
Force use of the soft_enabled_set() wrapper to update paca->soft_enabled
wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
to force paca->soft_enabled updates through a helper.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h  | 14 ++++++++++++++
 arch/powerpc/include/asm/kvm_ppc.h |  2 +-
 arch/powerpc/kernel/irq.c          |  2 +-
 arch/powerpc/kernel/setup_64.c     |  4 ++--
 arch/powerpc/kernel/time.c         |  6 +++---
 5 files changed, 21 insertions(+), 7 deletions(-)

Comments

Nicholas Piggin Sept. 16, 2016, 9:53 a.m. UTC | #1
On Thu, 15 Sep 2016 18:31:54 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Force use of soft_enabled_set() wrapper to update paca-soft_enabled
> wherever possisble. Also add a new wrapper function, soft_enabled_set_return(),
> added to force the paca->soft_enabled updates.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/hw_irq.h  | 14 ++++++++++++++
>  arch/powerpc/include/asm/kvm_ppc.h |  2 +-
>  arch/powerpc/kernel/irq.c          |  2 +-
>  arch/powerpc/kernel/setup_64.c     |  4 ++--
>  arch/powerpc/kernel/time.c         |  6 +++---
>  5 files changed, 21 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> index 8fad8c24760b..f828b8f8df02 100644
> --- a/arch/powerpc/include/asm/hw_irq.h
> +++ b/arch/powerpc/include/asm/hw_irq.h
> @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>  }
>  
> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> +{
> +	unsigned long flags;
> +
> +	asm volatile(
> +		"lbz %0,%1(13); stb %2,%1(13)"
> +		: "=r" (flags)
> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> +		  "r" (enable)
> +		: "memory");
> +
> +	return flags;
> +}

Why do you have the "memory" clobber here while soft_enabled_set() does not?

Thanks,
Nick
David Laight Sept. 16, 2016, 11:43 a.m. UTC | #2
From: Nicholas Piggin
> Sent: 16 September 2016 10:53
> On Thu, 15 Sep 2016 18:31:54 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
> 
> > Force use of soft_enabled_set() wrapper to update paca-soft_enabled
> > wherever possisble. Also add a new wrapper function, soft_enabled_set_return(),
> > added to force the paca->soft_enabled updates.
...
> > diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> > index 8fad8c24760b..f828b8f8df02 100644
> > --- a/arch/powerpc/include/asm/hw_irq.h
> > +++ b/arch/powerpc/include/asm/hw_irq.h
> > @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
> >  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
> >  }
> >
> > +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> > +{
> > +	unsigned long flags;
> > +
> > +	asm volatile(
> > +		"lbz %0,%1(13); stb %2,%1(13)"
> > +		: "=r" (flags)
> > +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> > +		  "r" (enable)
> > +		: "memory");
> > +
> > +	return flags;
> > +}
> 
> Why do you have the "memory" clobber here while soft_enabled_set() does not?

I wondered about the missing memory clobber earlier.

Any 'clobber' ought to be restricted to the referenced memory area.
If the structure is only referenced by r13 through 'asm volatile' it isn't needed.
OTOH why not allocate a global register variable to r13 and access through that?

	David
Nicholas Piggin Sept. 16, 2016, 11:59 a.m. UTC | #3
On Fri, 16 Sep 2016 11:43:13 +0000
David Laight <David.Laight@ACULAB.COM> wrote:

> From: Nicholas Piggin
> > Sent: 16 September 2016 10:53
> > On Thu, 15 Sep 2016 18:31:54 +0530
> > Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
> >   
> > > Force use of soft_enabled_set() wrapper to update paca-soft_enabled
> > > wherever possisble. Also add a new wrapper function, soft_enabled_set_return(),
> > > added to force the paca->soft_enabled updates.  
> ...
> > > diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> > > index 8fad8c24760b..f828b8f8df02 100644
> > > --- a/arch/powerpc/include/asm/hw_irq.h
> > > +++ b/arch/powerpc/include/asm/hw_irq.h
> > > @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
> > >  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
> > >  }
> > >
> > > +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> > > +{
> > > +	unsigned long flags;
> > > +
> > > +	asm volatile(
> > > +		"lbz %0,%1(13); stb %2,%1(13)"
> > > +		: "=r" (flags)
> > > +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> > > +		  "r" (enable)
> > > +		: "memory");
> > > +
> > > +	return flags;
> > > +}  
> > 
> > Why do you have the "memory" clobber here while soft_enabled_set() does not?  
> 
> I wondered about the missing memory clobber earlier.
> 
> Any 'clobber' ought to be restricted to the referenced memory area.
> If the structure is only referenced by r13 through 'asm volatile' it isn't needed.

Well a clobber (compiler barrier) at some point is needed in irq_disable and
irq_enable paths, so we correctly open and close the critical section vs interrupts.
I just wonder about these helpers. It might be better to take the clobbers out of
there and add barrier(); in callers, which would make it more obvious.
David Laight Sept. 16, 2016, 1:22 p.m. UTC | #4
From: Nicholas Piggin
> Sent: 16 September 2016 12:59
> On Fri, 16 Sep 2016 11:43:13 +0000
> David Laight <David.Laight@ACULAB.COM> wrote:
> 
> > From: Nicholas Piggin
> > > Sent: 16 September 2016 10:53
> > > On Thu, 15 Sep 2016 18:31:54 +0530
> > > Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
> > >
> > > > Force use of soft_enabled_set() wrapper to update paca-soft_enabled
> > > > wherever possisble. Also add a new wrapper function, soft_enabled_set_return(),
> > > > added to force the paca->soft_enabled updates.
> > ...
> > > > diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> > > > index 8fad8c24760b..f828b8f8df02 100644
> > > > --- a/arch/powerpc/include/asm/hw_irq.h
> > > > +++ b/arch/powerpc/include/asm/hw_irq.h
> > > > @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
> > > >  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
> > > >  }
> > > >
> > > > +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> > > > +{
> > > > +	unsigned long flags;
> > > > +
> > > > +	asm volatile(
> > > > +		"lbz %0,%1(13); stb %2,%1(13)"
> > > > +		: "=r" (flags)
> > > > +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> > > > +		  "r" (enable)
> > > > +		: "memory");
> > > > +
> > > > +	return flags;
> > > > +}
> > >
> > > Why do you have the "memory" clobber here while soft_enabled_set() does not?
> >
> > I wondered about the missing memory clobber earlier.
> >
> > Any 'clobber' ought to be restricted to the referenced memory area.
> > If the structure is only referenced by r13 through 'asm volatile' it isn't needed.
> 
> Well a clobber (compiler barrier) at some point is needed in irq_disable and
> irq_enable paths, so we correctly open and close the critical section vs interrupts.
> I just wonder about these helpers. It might be better to take the clobbers out of
> there and add barrier(); in callers, which would make it more obvious.

If the memory clobber is needed to synchronise with the rest of the code
rather than just ensuring the compiler doesn't reorder accesses via r13
then I'd add an explicit barrier() somewhere - even if in these helpers.

Potentially the helper wants a memory clobber for the (r13) area
and a separate barrier() to ensure the interrupts are masked for the
right code.
Even if both are together in the same helper.

	David
Nicholas Piggin Sept. 19, 2016, 2:52 a.m. UTC | #5
On Fri, 16 Sep 2016 13:22:24 +0000
David Laight <David.Laight@ACULAB.COM> wrote:

> From: Nicholas Piggin
> > Sent: 16 September 2016 12:59
> > On Fri, 16 Sep 2016 11:43:13 +0000
> > David Laight <David.Laight@ACULAB.COM> wrote:
> >   
> > > From: Nicholas Piggin  
> > > > Sent: 16 September 2016 10:53
> > > > On Thu, 15 Sep 2016 18:31:54 +0530
> > > > Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
> > > >  
> > > > > Force use of soft_enabled_set() wrapper to update paca-soft_enabled
> > > > > wherever possisble. Also add a new wrapper function, soft_enabled_set_return(),
> > > > > added to force the paca->soft_enabled updates.  
> > > ...  
> > > > > diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> > > > > index 8fad8c24760b..f828b8f8df02 100644
> > > > > --- a/arch/powerpc/include/asm/hw_irq.h
> > > > > +++ b/arch/powerpc/include/asm/hw_irq.h
> > > > > @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
> > > > >  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
> > > > >  }
> > > > >
> > > > > +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> > > > > +{
> > > > > +	unsigned long flags;
> > > > > +
> > > > > +	asm volatile(
> > > > > +		"lbz %0,%1(13); stb %2,%1(13)"
> > > > > +		: "=r" (flags)
> > > > > +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> > > > > +		  "r" (enable)
> > > > > +		: "memory");
> > > > > +
> > > > > +	return flags;
> > > > > +}  
> > > >
> > > > Why do you have the "memory" clobber here while soft_enabled_set() does not?  
> > >
> > > I wondered about the missing memory clobber earlier.
> > >
> > > Any 'clobber' ought to be restricted to the referenced memory area.
> > > If the structure is only referenced by r13 through 'asm volatile' it isn't needed.  
> > 
> > Well a clobber (compiler barrier) at some point is needed in irq_disable and
> > irq_enable paths, so we correctly open and close the critical section vs interrupts.
> > I just wonder about these helpers. It might be better to take the clobbers out of
> > there and add barrier(); in callers, which would make it more obvious.  
> 
> If the memory clobber is needed to synchronise with the rest of the code
> rather than just ensuring the compiler doesn't reorder accesses via r13
> then I'd add an explicit barrier() somewhere - even if in these helpers.
> 
> Potentially the helper wants a memory clobber for the (r13) area
> and a separate barrier() to ensure the interrupts are masked for the
> right code.
> Even if both are together in the same helper.

Good point. Some of the existing modification helpers don't seem to have
clobbers for modifying the r13->soft_enabled memory itself, but they do
have the memory clobber where a critical section barrier is required.

The former may not be a problem if the helpers are used very carefully,
but probably should be commented at best, if not fixed. So after Maddy's
patches, we should make all accesses go via the helper functions, so a
clobber for the soft_enabled modification may not be required (this should
be commented). I think it may be cleaner to specify the location in the
constraints, but maybe that doesn't generate the best code -- something to
investigate.

Then, I'd like to see barrier()s for interrupt critical sections placed in
the callers of these helpers, which will make the code more obvious.

Thanks,
Nick
maddy Sept. 19, 2016, 4:11 a.m. UTC | #6
On Friday 16 September 2016 03:23 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:31:54 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> Force use of soft_enabled_set() wrapper to update paca-soft_enabled
>> wherever possisble. Also add a new wrapper function, soft_enabled_set_return(),
>> added to force the paca->soft_enabled updates.
>>
>> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/include/asm/hw_irq.h  | 14 ++++++++++++++
>>   arch/powerpc/include/asm/kvm_ppc.h |  2 +-
>>   arch/powerpc/kernel/irq.c          |  2 +-
>>   arch/powerpc/kernel/setup_64.c     |  4 ++--
>>   arch/powerpc/kernel/time.c         |  6 +++---
>>   5 files changed, 21 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>> index 8fad8c24760b..f828b8f8df02 100644
>> --- a/arch/powerpc/include/asm/hw_irq.h
>> +++ b/arch/powerpc/include/asm/hw_irq.h
>> @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>>   	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>>   }
>>   
>> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
>> +{
>> +	unsigned long flags;
>> +
>> +	asm volatile(
>> +		"lbz %0,%1(13); stb %2,%1(13)"
>> +		: "=r" (flags)
>> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
>> +		  "r" (enable)
>> +		: "memory");
>> +
>> +	return flags;
>> +}
> Why do you have the "memory" clobber here while soft_enabled_set() does not?

I did change the function to include a local variable
and update soft_enabled. But, my bad, it was in the next patch;
I should make the change here. Yes, we don't need a "memory" clobber
here, which is right. But this change is not complete and I will correct it.

Maddy

>
> Thanks,
> Nick
>
maddy Sept. 19, 2016, 5:05 a.m. UTC | #7
On Friday 16 September 2016 05:13 PM, David Laight wrote:
> From: Nicholas Piggin
>> Sent: 16 September 2016 10:53
>> On Thu, 15 Sep 2016 18:31:54 +0530
>> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>>
>>> Force use of soft_enabled_set() wrapper to update paca-soft_enabled
>>> wherever possisble. Also add a new wrapper function, soft_enabled_set_return(),
>>> added to force the paca->soft_enabled updates.
> ...
>>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>>> index 8fad8c24760b..f828b8f8df02 100644
>>> --- a/arch/powerpc/include/asm/hw_irq.h
>>> +++ b/arch/powerpc/include/asm/hw_irq.h
>>> @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>>>   	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>>>   }
>>>
>>> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
>>> +{
>>> +	unsigned long flags;
>>> +
>>> +	asm volatile(
>>> +		"lbz %0,%1(13); stb %2,%1(13)"
>>> +		: "=r" (flags)
>>> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
>>> +		  "r" (enable)
>>> +		: "memory");
>>> +
>>> +	return flags;
>>> +}
>> Why do you have the "memory" clobber here while soft_enabled_set() does not?
> I wondered about the missing memory clobber earlier.
>
> Any 'clobber' ought to be restricted to the referenced memory area.
> If the structure is only referenced by r13 through 'asm volatile' it isn't needed.
> OTOH why not allocate a global register variable to r13 and access through that?

I do see this in asm/paca.h: "register struct paca_struct *local_paca
asm("r13");"
and __check_irq_replay() in kernel/irq.c does update "irq_happened"
through it, as mentioned. But the existing helpers in hw_irq update
soft_enabled via asm volatile, so I did the same.

Maddy

> 	David
>
maddy Sept. 19, 2016, 5:32 a.m. UTC | #8
On Monday 19 September 2016 08:22 AM, Nicholas Piggin wrote:
> On Fri, 16 Sep 2016 13:22:24 +0000
> David Laight <David.Laight@ACULAB.COM> wrote:
>
>> From: Nicholas Piggin
>>> Sent: 16 September 2016 12:59
>>> On Fri, 16 Sep 2016 11:43:13 +0000
>>> David Laight <David.Laight@ACULAB.COM> wrote:
>>>    
>>>> From: Nicholas Piggin
>>>>> Sent: 16 September 2016 10:53
>>>>> On Thu, 15 Sep 2016 18:31:54 +0530
>>>>> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>>>>>   
>>>>>> Force use of soft_enabled_set() wrapper to update paca-soft_enabled
>>>>>> wherever possisble. Also add a new wrapper function, soft_enabled_set_return(),
>>>>>> added to force the paca->soft_enabled updates.
>>>> ...
>>>>>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>>>>>> index 8fad8c24760b..f828b8f8df02 100644
>>>>>> --- a/arch/powerpc/include/asm/hw_irq.h
>>>>>> +++ b/arch/powerpc/include/asm/hw_irq.h
>>>>>> @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>>>>>>   	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>>>>>>   }
>>>>>>
>>>>>> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
>>>>>> +{
>>>>>> +	unsigned long flags;
>>>>>> +
>>>>>> +	asm volatile(
>>>>>> +		"lbz %0,%1(13); stb %2,%1(13)"
>>>>>> +		: "=r" (flags)
>>>>>> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
>>>>>> +		  "r" (enable)
>>>>>> +		: "memory");
>>>>>> +
>>>>>> +	return flags;
>>>>>> +}
>>>>> Why do you have the "memory" clobber here while soft_enabled_set() does not?
>>>> I wondered about the missing memory clobber earlier.
>>>>
>>>> Any 'clobber' ought to be restricted to the referenced memory area.
>>>> If the structure is only referenced by r13 through 'asm volatile' it isn't needed.
>>> Well a clobber (compiler barrier) at some point is needed in irq_disable and
>>> irq_enable paths, so we correctly open and close the critical section vs interrupts.
>>> I just wonder about these helpers. It might be better to take the clobbers out of
>>> there and add barrier(); in callers, which would make it more obvious.
>> If the memory clobber is needed to synchronise with the rest of the code
>> rather than just ensuring the compiler doesn't reorder accesses via r13
>> then I'd add an explicit barrier() somewhere - even if in these helpers.
>>
>> Potentially the helper wants a memory clobber for the (r13) area
>> and a separate barrier() to ensure the interrupts are masked for the
>> right code.
>> Even if both are together in the same helper.
> Good point. Some of the existing modification helpers don't seem to have
> clobbers for modifying the r13->soft_enabled memory itself, but they do
> have the memory clobber where a critical section barrier is required.
>
> The former may not be a problem if the helpers are used very carefully,
> but probably should be commented at best, if not fixed.

Yes. Agreed. Will add comments

>   So after Madhi's
> patches, we should make all accesses go via the helper functions, so a
> clobber for the soft_enabled modification may not be required (this should
> be commented). I think it may be cleaner to specify the location in the
> constraints, but maybe that doesn't generate the best code -- something to
> investigate.
>
> Then, I'd like to see barrier()s for interrupt critical sections placed in
> the callers of these helpers, which will make the code more obvious.

Ok will look into this.

>
> Thanks,
> Nick
>

Patch

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 8fad8c24760b..f828b8f8df02 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -53,6 +53,20 @@  static inline notrace void soft_enabled_set(unsigned long enable)
 	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
 }
 
+static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
+{
+	unsigned long flags;
+
+	asm volatile(
+		"lbz %0,%1(13); stb %2,%1(13)"
+		: "=r" (flags)
+		: "i" (offsetof(struct paca_struct, soft_enabled)),\
+		  "r" (enable)
+		: "memory");
+
+	return flags;
+}
+
 static inline unsigned long arch_local_save_flags(void)
 {
 	unsigned long flags;
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 740ee309cea8..07f6a51ae99f 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -707,7 +707,7 @@  static inline void kvmppc_fix_ee_before_entry(void)
 
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
-	local_paca->soft_enabled = IRQ_DISABLE_MASK_NONE;
+	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
 #endif
 }
 
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 5a926ea5bd0b..58462ce186fa 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -332,7 +332,7 @@  bool prep_irq_for_idle(void)
 	 * of entering the low power state.
 	 */
 	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-	local_paca->soft_enabled = IRQ_DISABLE_MASK_NONE;
+	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
 
 	/* Tell the caller to enter the low power state */
 	return true;
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index f31930b9bfc1..f0f882166dcc 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -197,7 +197,7 @@  static void __init fixup_boot_paca(void)
 	/* Allow percpu accesses to work until we setup percpu data */
 	get_paca()->data_offset = 0;
 	/* Mark interrupts disabled in PACA */
-	get_paca()->soft_enabled = IRQ_DISABLE_MASK_LINUX;
+	soft_enabled_set(IRQ_DISABLE_MASK_LINUX);
 }
 
 static void __init configure_exceptions(void)
@@ -334,7 +334,7 @@  void __init early_setup(unsigned long dt_ptr)
 void early_setup_secondary(void)
 {
 	/* Mark interrupts disabled in PACA */
-	get_paca()->soft_enabled = 0;
+	soft_enabled_set(IRQ_DISABLE_MASK_LINUX);
 
 	/* Initialize the hash table or TLB handling */
 	early_init_mmu_secondary();
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 7105757cdb90..483313aa311f 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -259,7 +259,7 @@  static u64 scan_dispatch_log(u64 stop_tb)
 void accumulate_stolen_time(void)
 {
 	u64 sst, ust;
-	u8 save_soft_enabled = local_paca->soft_enabled;
+	unsigned long save_soft_enabled;
 	struct cpu_accounting_data *acct = &local_paca->accounting;
 
 	/* We are called early in the exception entry, before
@@ -268,7 +268,7 @@  void accumulate_stolen_time(void)
 	 * needs to reflect that so various debug stuff doesn't
 	 * complain
 	 */
-	local_paca->soft_enabled = IRQ_DISABLE_MASK_LINUX;
+	save_soft_enabled = soft_enabled_set_return(IRQ_DISABLE_MASK_LINUX);
 
 	sst = scan_dispatch_log(acct->starttime_user);
 	ust = scan_dispatch_log(acct->starttime);
@@ -276,7 +276,7 @@  void accumulate_stolen_time(void)
 	acct->user_time -= ust;
 	local_paca->stolen_time += ust + sst;
 
-	local_paca->soft_enabled = save_soft_enabled;
+	soft_enabled_set(save_soft_enabled);
 }
 
 static inline u64 calculate_stolen_time(u64 stop_tb)