diff mbox series

[RFC,3/5] kasan: allow architectures to provide an outline readiness check

Message ID 20190215000441.14323-4-dja@axtens.net (mailing list archive)
State RFC
Headers show
Series powerpc: KASAN for 64-bit Book3E | expand

Checks

Context Check Description
snowpatch_ozlabs/apply_patch success next/apply_patch Successfully applied
snowpatch_ozlabs/checkpatch warning total: 0 errors, 2 warnings, 0 checks, 18 lines checked

Commit Message

Daniel Axtens Feb. 15, 2019, 12:04 a.m. UTC
In powerpc (as I understand it), we spend a lot of time in boot
running in real mode before MMU paging is initialised. During
this time we call a lot of generic code, including printk(). If
we try to access the shadow region during this time, things fail.

My attempts to move early init before the first printk have not
been successful. (Both previous RFCs for ppc64 - by 2 different
people - have needed this trick too!)

So, allow architectures to define a check_return_arch_not_ready()
hook that bails out of check_memory_region_inline() unless the
arch has done all of the init.

Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
Originally-by: Balbir Singh <bsingharora@gmail.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 include/linux/kasan.h | 4 ++++
 mm/kasan/generic.c    | 2 ++
 2 files changed, 6 insertions(+)

Comments

Dmitry Vyukov Feb. 15, 2019, 8:25 a.m. UTC | #1
On Fri, Feb 15, 2019 at 1:05 AM Daniel Axtens <dja@axtens.net> wrote:
>
> In powerpc (as I understand it), we spend a lot of time in boot
> running in real mode before MMU paging is initalised. During
> this time we call a lot of generic code, including printk(). If
> we try to access the shadow region during this time, things fail.
>
> My attempts to move early init before the first printk have not
> been successful. (Both previous RFCs for ppc64 - by 2 different
> people - have needed this trick too!)
>
> So, allow architectures to define a check_return_arch_not_ready()
> hook that bails out of check_memory_region_inline() unless the
> arch has done all of the init.
>
> Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
> Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
> Originally-by: Balbir Singh <bsingharora@gmail.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> ---
>  include/linux/kasan.h | 4 ++++
>  mm/kasan/generic.c    | 2 ++
>  2 files changed, 6 insertions(+)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index f6261840f94c..83edc5e2b6a0 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -14,6 +14,10 @@ struct task_struct;
>  #include <asm/kasan.h>
>  #include <asm/pgtable.h>
>
> +#ifndef check_return_arch_not_ready
> +#define check_return_arch_not_ready()  do { } while (0)
> +#endif

Please make this a bool-returning function. There is no need for
macro super-powers here; normal C should be the default choice in
such cases.
It will be inlined, and an empty implementation will dissolve just
as the macro does.

>  extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>  extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>  extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index bafa2f986660..4c18bbd09a20 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -170,6 +170,8 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>                                                 size_t size, bool write,
>                                                 unsigned long ret_ip)
>  {
> +       check_return_arch_not_ready();
> +
>         if (unlikely(size == 0))
>                 return;
>
> --
> 2.19.1
>
Christophe Leroy Feb. 17, 2019, 12:05 p.m. UTC | #2
On 15/02/2019 at 01:04, Daniel Axtens wrote:
> In powerpc (as I understand it), we spend a lot of time in boot
> running in real mode before MMU paging is initalised. During
> this time we call a lot of generic code, including printk(). If
> we try to access the shadow region during this time, things fail.
> 
> My attempts to move early init before the first printk have not
> been successful. (Both previous RFCs for ppc64 - by 2 different
> people - have needed this trick too!)
> 
> So, allow architectures to define a check_return_arch_not_ready()
> hook that bails out of check_memory_region_inline() unless the
> arch has done all of the init.
> 
> Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
> Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
> Originally-by: Balbir Singh <bsingharora@gmail.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> ---
>   include/linux/kasan.h | 4 ++++
>   mm/kasan/generic.c    | 2 ++
>   2 files changed, 6 insertions(+)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index f6261840f94c..83edc5e2b6a0 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -14,6 +14,10 @@ struct task_struct;
>   #include <asm/kasan.h>
>   #include <asm/pgtable.h>
>   
> +#ifndef check_return_arch_not_ready
> +#define check_return_arch_not_ready()	do { } while (0)
> +#endif

A static inline would be better, I believe.

Something like:

#ifndef kasan_arch_is_ready
static inline bool kasan_arch_is_ready(void) { return true; }
#endif

> +
>   extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>   extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>   extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index bafa2f986660..4c18bbd09a20 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -170,6 +170,8 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>   						size_t size, bool write,
>   						unsigned long ret_ip)
>   {
> +	check_return_arch_not_ready();
> +

It is not good for readability that the above macro embeds a return;
something like below would be better, I think:

	if (!kasan_arch_is_ready())
		return;

Unless somebody minds, I'll do the change and take this patch into my
series in order to handle the case of book3s/32 hash.

Christophe

>   	if (unlikely(size == 0))
>   		return;
>   
> 

Daniel Axtens Feb. 18, 2019, 6:13 a.m. UTC | #3
christophe leroy <christophe.leroy@c-s.fr> writes:

> On 15/02/2019 at 01:04, Daniel Axtens wrote:
>> In powerpc (as I understand it), we spend a lot of time in boot
>> running in real mode before MMU paging is initalised. During
>> this time we call a lot of generic code, including printk(). If
>> we try to access the shadow region during this time, things fail.
>> 
>> My attempts to move early init before the first printk have not
>> been successful. (Both previous RFCs for ppc64 - by 2 different
>> people - have needed this trick too!)
>> 
>> So, allow architectures to define a check_return_arch_not_ready()
>> hook that bails out of check_memory_region_inline() unless the
>> arch has done all of the init.
>> 
>> Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
>> Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
>> Originally-by: Balbir Singh <bsingharora@gmail.com>
>> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> Signed-off-by: Daniel Axtens <dja@axtens.net>
>> ---
>>   include/linux/kasan.h | 4 ++++
>>   mm/kasan/generic.c    | 2 ++
>>   2 files changed, 6 insertions(+)
>> 
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index f6261840f94c..83edc5e2b6a0 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -14,6 +14,10 @@ struct task_struct;
>>   #include <asm/kasan.h>
>>   #include <asm/pgtable.h>
>>   
>> +#ifndef check_return_arch_not_ready
>> +#define check_return_arch_not_ready()	do { } while (0)
>> +#endif
>
> A static inline would be better I believe.
>
> Something like
>
> #ifndef kasan_arch_is_ready
> static inline bool kasan_arch_is_ready {return true;}
> #endif
>
>> +
>>   extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>>   extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>>   extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
>> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
>> index bafa2f986660..4c18bbd09a20 100644
>> --- a/mm/kasan/generic.c
>> +++ b/mm/kasan/generic.c
>> @@ -170,6 +170,8 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>>   						size_t size, bool write,
>>   						unsigned long ret_ip)
>>   {
>> +	check_return_arch_not_ready();
>> +
>
> Not good for readibility that the above macro embeds a return, something 
> like below would be better I think:
>
> 	if (!kasan_arch_is_ready())
> 		return;
>
> Unless somebody minds, I'll do the change and take this patch in my 
> series in order to handle the case of book3s/32 hash.

Please do; feel free to take as many of the patches as you would like
and I'll rebase whatever is left on the next version of your series.

The idea with the macro magic was to take advantage of the speed of
static keys (I think, I borrowed it from Balbir's patch). Perhaps an
inline function will achieve this anyway, but given that KASAN with
outline instrumentation is inevitably slow, I guess it doesn't matter
much either way.

Regards,
Daniel
>
> Christophe
>
>>   	if (unlikely(size == 0))
>>   		return;
>>   
>> 
>
Christophe Leroy Feb. 25, 2019, 2:01 p.m. UTC | #4
Hi Daniel,

On 18/02/2019 at 07:13, Daniel Axtens wrote:
> christophe leroy <christophe.leroy@c-s.fr> writes:
> 
>> On 15/02/2019 at 01:04, Daniel Axtens wrote:
>>> In powerpc (as I understand it), we spend a lot of time in boot
>>> running in real mode before MMU paging is initalised. During
>>> this time we call a lot of generic code, including printk(). If
>>> we try to access the shadow region during this time, things fail.
>>>
>>> My attempts to move early init before the first printk have not
>>> been successful. (Both previous RFCs for ppc64 - by 2 different
>>> people - have needed this trick too!)
>>>
>>> So, allow architectures to define a check_return_arch_not_ready()
>>> hook that bails out of check_memory_region_inline() unless the
>>> arch has done all of the init.
>>>
>>> Link: https://lore.kernel.org/patchwork/patch/592820/ # ppc64 hash series
>>> Link: https://patchwork.ozlabs.org/patch/795211/      # ppc radix series
>>> Originally-by: Balbir Singh <bsingharora@gmail.com>
>>> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>>> Signed-off-by: Daniel Axtens <dja@axtens.net>
>>> ---
>>>    include/linux/kasan.h | 4 ++++
>>>    mm/kasan/generic.c    | 2 ++
>>>    2 files changed, 6 insertions(+)
>>>
>>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>>> index f6261840f94c..83edc5e2b6a0 100644
>>> --- a/include/linux/kasan.h
>>> +++ b/include/linux/kasan.h
>>> @@ -14,6 +14,10 @@ struct task_struct;
>>>    #include <asm/kasan.h>
>>>    #include <asm/pgtable.h>
>>>    
>>> +#ifndef check_return_arch_not_ready
>>> +#define check_return_arch_not_ready()	do { } while (0)
>>> +#endif
>>
>> A static inline would be better I believe.
>>
>> Something like
>>
>> #ifndef kasan_arch_is_ready
>> static inline bool kasan_arch_is_ready {return true;}
>> #endif
>>
>>> +
>>>    extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>>>    extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>>>    extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
>>> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
>>> index bafa2f986660..4c18bbd09a20 100644
>>> --- a/mm/kasan/generic.c
>>> +++ b/mm/kasan/generic.c
>>> @@ -170,6 +170,8 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
>>>    						size_t size, bool write,
>>>    						unsigned long ret_ip)
>>>    {
>>> +	check_return_arch_not_ready();
>>> +
>>
>> Not good for readibility that the above macro embeds a return, something
>> like below would be better I think:
>>
>> 	if (!kasan_arch_is_ready())
>> 		return;
>>
>> Unless somebody minds, I'll do the change and take this patch in my
>> series in order to handle the case of book3s/32 hash.
> 
> Please do; feel free to take as many of the patches as you would like
> and I'll rebase whatever is left on the next version of your series.

I have now made a big step with v7: it works on both nohash and hash
ppc32 without any special feature in the core of KASAN. I still have
to do more tests on the hash version, but it seems promising.

I have kept your patches in sync on top of it (although totally
untested); you can find them at
https://github.com/chleroy/linux/commits/kasan

> 
> The idea with the macro magic was to take advantage of the speed of
> static keys (I think, I borrowed it from Balbir's patch). Perhaps an
> inline function will achieve this anyway, but given that KASAN with
> outline instrumentation is inevitably slow, I guess it doesn't matter
> much either way.

You'll see in the modifications I've made to your patches that we can
still use static keys while using static inline functions.

Christophe
Daniel Axtens Feb. 26, 2019, 12:14 a.m. UTC | #5
>>> Unless somebody minds, I'll do the change and take this patch in my
>>> series in order to handle the case of book3s/32 hash.
>> 
>> Please do; feel free to take as many of the patches as you would like
>> and I'll rebase whatever is left on the next version of your series.
>
> I have now done a big step with v7: works on both nohash and hash ppc32 
> without any special feature in the core of kasan. Have to do more tests 
> on the hash version, but it seems promissing.
>
> I have kept your patches on sync on top of it (allthough totally 
> untested), you can find them in 
> https://github.com/chleroy/linux/commits/kasan

Thanks - I've got sidetracked with other internal stuff but I hope to
get back to this later in the week.

Regards,
Daniel
>
>> 
>> The idea with the macro magic was to take advantage of the speed of
>> static keys (I think, I borrowed it from Balbir's patch). Perhaps an
>> inline function will achieve this anyway, but given that KASAN with
>> outline instrumentation is inevitably slow, I guess it doesn't matter
>> much either way.
>
> You'll see in the modifications I've done to your patches, we can still 
> use static keys while using static inline functions.
>
> Christophe
diff mbox series

Patch

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index f6261840f94c..83edc5e2b6a0 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -14,6 +14,10 @@  struct task_struct;
 #include <asm/kasan.h>
 #include <asm/pgtable.h>
 
+#ifndef check_return_arch_not_ready
+#define check_return_arch_not_ready()	do { } while (0)
+#endif
+
 extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
 extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
 extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index bafa2f986660..4c18bbd09a20 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -170,6 +170,8 @@  static __always_inline void check_memory_region_inline(unsigned long addr,
 						size_t size, bool write,
 						unsigned long ret_ip)
 {
+	check_return_arch_not_ready();
+
 	if (unlikely(size == 0))
 		return;