
powerpc/mm: Use jump label to speed up radix_enabled check

Message ID: 87k2jje6w4.fsf@skywalker.in.ibm.com
State: Changes Requested

Commit Message

Aneesh Kumar K.V April 27, 2016, 9:30 a.m. UTC
"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> writes:

> Benjamin Herrenschmidt <benh@kernel.crashing.org> writes:
>
>> On Wed, 2016-04-27 at 11:00 +1000, Balbir Singh wrote:
>>> Just basic testing across CPUs with various mm features 
>>> enabled/disabled. Just for sanity
>>
>> I still don't think it's worth scattering the change. Either the jump
>> label works or it doesn't ... The only problem is making sure we
>> identify all the pre-boot ones, but that's about it.
>>
>
> There are two ways to do this. One is to follow the approach from Kevin
> listed below, which is to do jump_label_init() early during boot and
> switch both the cpu and mmu feature checks to plain jump labels.
>
> http://mid.gmane.org/1440415228-8006-1-git-send-email-haokexin@gmail.com
>
> I already found one cpu_has_feature() call that happens before that
> jump_label_init(). With this approach we need to carefully audit all
> the cpu/mmu_has_feature() calls to make sure they don't get called
> before jump_label_init(); a missed conversion means we miss a cpu/mmu
> feature check.
>
>
> The other option is to follow the patch I posted above, with the simple
> change of renaming mmu_feature_enabled to mmu_has_feature. That way we
> can use it in early boot without worrying about when we init the jump
> labels.
>
> Which one do you suggest we follow?
>
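
For reference, a rough sketch of what the first option (plain static
keys initialised after jump_label_init()) could look like. This is only
an illustration, not Kevin's actual series; MMU_FTR_KEYS_MAX and
mmu_feature_keys_init() are made-up names:

#include <linux/init.h>
#include <linux/jump_label.h>
#include <asm/cputable.h>

/* Hypothetical: one static key per MMU feature bit. */
#define MMU_FTR_KEYS_MAX	32

static struct static_key_false mmu_ftr_keys[MMU_FTR_KEYS_MAX] = {
	[0 ... MMU_FTR_KEYS_MAX - 1] = STATIC_KEY_FALSE_INIT
};

static __always_inline bool mmu_has_feature(unsigned long feature)
{
	int i = __builtin_ctzl(feature);	/* feature is a single bit */

	return static_branch_unlikely(&mmu_ftr_keys[i]);
}

/* Must run after jump_label_init(), early in boot. */
void __init mmu_feature_keys_init(void)
{
	int i;

	for (i = 0; i < MMU_FTR_KEYS_MAX; i++)
		if (cur_cpu_spec->mmu_features & (1ul << i))
			static_branch_enable(&mmu_ftr_keys[i]);
}

Any mmu_has_feature() call made before mmu_feature_keys_init() would
silently see the key in its default state, which is exactly the audit
problem described above.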

Something as simple as below?

Patch

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 696b7c5cc31f..0835a8f9904b 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -23,7 +23,7 @@  struct mmu_psize_def {
 };
 extern struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];
 
-#define radix_enabled() mmu_feature_enabled(MMU_FTR_RADIX)
+#define radix_enabled() mmu_has_feature(MMU_FTR_RADIX)
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index fdb70dc218e5..d4b726f5bd4a 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -115,7 +115,7 @@ 
 DECLARE_PER_CPU(int, next_tlbcam_idx);
 #endif
 
-static inline int mmu_has_feature(unsigned long feature)
+static inline int __mmu_has_feature(unsigned long feature)
 {
 	return (cur_cpu_spec->mmu_features & feature);
 }
@@ -145,7 +145,7 @@  static inline void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 #endif /* !CONFIG_DEBUG_VM */
 
 #ifdef HAVE_JUMP_LABEL
-static __always_inline bool mmu_feature_enabled(unsigned long feature)
+static __always_inline bool mmu_has_feature(unsigned long feature)
 {
 	asm_volatile_goto("1:\n\t"
 			  ".pushsection __mmu_ftr_fixup_c,  \"a\"\n\t"
@@ -155,7 +155,7 @@  static __always_inline bool mmu_feature_enabled(unsigned long feature)
 			  JUMP_ENTRY_TYPE "%l[l_false]\n\t"
 			  ".popsection\n\t"
 			  : : "i"(feature) : : l_true, l_false);
-	if (mmu_has_feature(feature))
+	if (__mmu_has_feature(feature))
 l_true:
 		return true;
 l_false:
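
For completeness, this is roughly how a call site ends up looking once
radix_enabled() maps onto the asm goto variant. The radix__/hash__
helpers below are placeholder names, not real functions. The point is
that the "1:" location recorded in __mmu_ftr_fixup_c can be patched into
a direct branch to l_true or l_false by the feature fixup code, while
the fall-through path still goes via __mmu_has_feature(), which should
keep the helper usable even before the fixup runs, matching the "use it
in early boot" point above.

/* Illustrative caller; the radix__/hash__ helpers are placeholders. */
static inline void sample_flush_tlb_page(struct mm_struct *mm,
					 unsigned long vmaddr)
{
	if (radix_enabled())	/* patched to a direct branch after fixup */
		radix__sample_flush_tlb_page(mm, vmaddr);
	else
		hash__sample_flush_tlb_page(mm, vmaddr);
}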