[v6,11/15] arm64/kasan: explicitly zero kasan shadow memory

Message ID 1502138329-123460-12-git-send-email-pasha.tatashin@oracle.com
State Not Applicable
Delegated to: David Miller

Commit Message

Pavel Tatashin Aug. 7, 2017, 8:38 p.m. UTC
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.

We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through the struct page
initialization path.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
---
 arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

Comments

Will Deacon Aug. 8, 2017, 9:07 a.m. UTC | #1
On Mon, Aug 07, 2017 at 04:38:45PM -0400, Pavel Tatashin wrote:
> To optimize the performance of struct page initialization,
> vmemmap_populate() will no longer zero memory.
> 
> We must explicitly zero the memory that is allocated by vmemmap_populate()
> for kasan, as this memory does not go through the struct page
> initialization path.
> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
> Reviewed-by: Bob Picco <bob.picco@oracle.com>
> ---
>  arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 42 insertions(+)
> 
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 81f03959a4ab..e78a9ecbb687 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
>  		set_pgd(pgd_offset_k(start), __pgd(0));
>  }
>  
> +/*
> + * Memory that was allocated by vmemmap_populate is not zeroed, so we must
> + * zero it here explicitly.
> + */
> +static void
> +zero_vmemmap_populated_memory(void)
> +{
> +	struct memblock_region *reg;
> +	u64 start, end;
> +
> +	for_each_memblock(memory, reg) {
> +		start = __phys_to_virt(reg->base);
> +		end = __phys_to_virt(reg->base + reg->size);
> +
> +		if (start >= end)
> +			break;
> +
> +		start = (u64)kasan_mem_to_shadow((void *)start);
> +		end = (u64)kasan_mem_to_shadow((void *)end);
> +
> +	/* Round to the start and end of the mapped pages */
> +		start = round_down(start, SWAPPER_BLOCK_SIZE);
> +		end = round_up(end, SWAPPER_BLOCK_SIZE);
> +		memset((void *)start, 0, end - start);
> +	}
> +
> +	start = (u64)kasan_mem_to_shadow(_text);
> +	end = (u64)kasan_mem_to_shadow(_end);
> +
> +	/* Round to the start and end of the mapped pages */
> +	start = round_down(start, SWAPPER_BLOCK_SIZE);
> +	end = round_up(end, SWAPPER_BLOCK_SIZE);
> +	memset((void *)start, 0, end - start);
> +}

I can't help but think this would be an awful lot nicer if you made
vmemmap_alloc_block take extra GFP flags as a parameter. That way, we could
implement a version of vmemmap_populate that does the zeroing when we need
it, without having to duplicate a bunch of the code like this. I think it
would also be less error-prone, because you wouldn't have to do the
allocation and the zeroing in two separate steps.

Will
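
A rough sketch of the flag-based approach described above might look like
the following (the vmemmap_alloc_block_flags() name is hypothetical; the
existing vmemmap_alloc_block() takes only a size and a node):

void * __meminit vmemmap_alloc_block_flags(unsigned long size, int node,
					   gfp_t gfp_mask)
{
	/* If the main allocator is up, use it with the caller's flags. */
	if (slab_is_available()) {
		struct page *page = alloc_pages_node(node, gfp_mask,
						     get_order(size));

		return page ? page_address(page) : NULL;
	}

	/* Early boot: memblock allocations are zeroed regardless. */
	return memblock_virt_alloc_try_nid(size, size,
					   __pa(MAX_DMA_ADDRESS),
					   BOOTMEM_ALLOC_ACCESSIBLE, node);
}

KASAN could then pass GFP_KERNEL | __GFP_ZERO, while the struct page path
would pass plain GFP_KERNEL and skip the now-redundant zeroing.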
Pavel Tatashin Aug. 8, 2017, 11:49 a.m. UTC | #2
Hi Will,

Thank you for looking at this change. What you described was in my 
previous iterations of this project.

See for example here: https://lkml.org/lkml/2017/5/5/369

I was asked to remove that flag, and only zero memory in place when 
needed. Overall the current approach is better everywhere else in the 
kernel, but it adds a little extra code to kasan initialization.

Pasha

On 08/08/2017 05:07 AM, Will Deacon wrote:
> On Mon, Aug 07, 2017 at 04:38:45PM -0400, Pavel Tatashin wrote:
>> To optimize the performance of struct page initialization,
>> vmemmap_populate() will no longer zero memory.
>>
>> We must explicitly zero the memory that is allocated by vmemmap_populate()
>> for kasan, as this memory does not go through the struct page
>> initialization path.
>>
>> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
>> Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
>> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
>> Reviewed-by: Bob Picco <bob.picco@oracle.com>
>> ---
>>   arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 42 insertions(+)
>>
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index 81f03959a4ab..e78a9ecbb687 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
>>   		set_pgd(pgd_offset_k(start), __pgd(0));
>>   }
>>   
>> +/*
>> + * Memory that was allocated by vmemmap_populate is not zeroed, so we must
>> + * zero it here explicitly.
>> + */
>> +static void
>> +zero_vmemmap_populated_memory(void)
>> +{
>> +	struct memblock_region *reg;
>> +	u64 start, end;
>> +
>> +	for_each_memblock(memory, reg) {
>> +		start = __phys_to_virt(reg->base);
>> +		end = __phys_to_virt(reg->base + reg->size);
>> +
>> +		if (start >= end)
>> +			break;
>> +
>> +		start = (u64)kasan_mem_to_shadow((void *)start);
>> +		end = (u64)kasan_mem_to_shadow((void *)end);
>> +
>> +	/* Round to the start and end of the mapped pages */
>> +		start = round_down(start, SWAPPER_BLOCK_SIZE);
>> +		end = round_up(end, SWAPPER_BLOCK_SIZE);
>> +		memset((void *)start, 0, end - start);
>> +	}
>> +
>> +	start = (u64)kasan_mem_to_shadow(_text);
>> +	end = (u64)kasan_mem_to_shadow(_end);
>> +
>> +	/* Round to the start and end of the mapped pages */
>> +	start = round_down(start, SWAPPER_BLOCK_SIZE);
>> +	end = round_up(end, SWAPPER_BLOCK_SIZE);
>> +	memset((void *)start, 0, end - start);
>> +}
> 
> I can't help but think this would be an awful lot nicer if you made
> vmemmap_alloc_block take extra GFP flags as a parameter. That way, we could
> implement a version of vmemmap_populate that does the zeroing when we need
> it, without having to duplicate a bunch of the code like this. I think it
> would also be less error-prone, because you wouldn't have to do the
> allocation and the zeroing in two separate steps.
> 
> Will
> 
Will Deacon Aug. 8, 2017, 12:30 p.m. UTC | #3
On Tue, Aug 08, 2017 at 07:49:22AM -0400, Pasha Tatashin wrote:
> Hi Will,
> 
> Thank you for looking at this change. What you described was in my previous
> iterations of this project.
> 
> See for example here: https://lkml.org/lkml/2017/5/5/369
> 
> I was asked to remove that flag, and only zero memory in place when needed.
> Overall the current approach is better everywhere else in the kernel, but it
> adds a little extra code to kasan initialization.

Damn, I actually prefer the flag :)

But actually, if you look at our implementation of vmemmap_populate, then we
have our own version of vmemmap_populate_basepages that terminates at the
pmd level anyway if ARM64_SWAPPER_USES_SECTION_MAPS. If there's resistance
to do this in the core code, then I'd be inclined to replace our
vmemmap_populate implementation in the arm64 code with a single version that
can terminate at either the PMD or the PTE level, and do zeroing if
required. We're already special-casing it, so we don't really lose anything
imo.

Will
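
An illustrative skeleton of that single implementation (names are
hypothetical and the page-table installation is elided) could be:

static int __meminit arm64_vmemmap_populate(unsigned long start,
					    unsigned long end,
					    int node, bool zero)
{
	/* PMD-sized steps for section maps, PTE-sized steps otherwise. */
	unsigned long step = ARM64_SWAPPER_USES_SECTION_MAPS ? PMD_SIZE
							     : PAGE_SIZE;
	unsigned long addr;

	for (addr = start; addr < end; addr += step) {
		void *p = vmemmap_alloc_block(step, node);

		if (!p)
			return -ENOMEM;
		if (zero)
			memset(p, 0, step);
		/* ... install the pmd (section) or pte entry for 'p' ... */
	}

	return 0;
}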
Pavel Tatashin Aug. 8, 2017, 12:49 p.m. UTC | #4
Hi Will,

> Damn, I actually prefer the flag :)
>
> But actually, if you look at our implementation of vmemmap_populate, then we
> have our own version of vmemmap_populate_basepages that terminates at the
> pmd level anyway if ARM64_SWAPPER_USES_SECTION_MAPS. If there's resistance
> to do this in the core code, then I'd be inclined to replace our
> vmemmap_populate implementation in the arm64 code with a single version that
> can terminate at either the PMD or the PTE level, and do zeroing if
> required. We're already special-casing it, so we don't really lose anything
> imo.

Another approach is to create a new mapping interface for kasan only, as
Ard Biesheuvel wrote:

> KASAN uses vmemmap_populate as a convenience: kasan has nothing to do
> with vmemmap, but the function already existed and happened to do what
> KASAN requires.
>
> Given that that will no longer be the case, it would be far better to
> stop using vmemmap_populate altogether, and clone it into a KASAN
> specific version (with an appropriate name) with the zeroing folded
> into it.

I agree with this statement, but I think it should not be part of this 
project.

Pasha
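
For illustration, such a KASAN-specific clone could allocate its shadow
pages from memblock, which zeroes them at allocation time (the helper
name here is hypothetical):

static phys_addr_t __init kasan_alloc_zeroed_page(int node)
{
	/* memblock_virt_alloc_try_nid() zeroes the memory it returns. */
	void *p = memblock_virt_alloc_try_nid(PAGE_SIZE, PAGE_SIZE,
					      __pa(MAX_DMA_ADDRESS),
					      MEMBLOCK_ALLOC_ACCESSIBLE,
					      node);
	return __pa(p);
}

A kasan-only populate path built on such an allocator would never hand
out unzeroed shadow, so the explicit memset pass in this patch would not
be needed.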
David Laight Aug. 8, 2017, 1:15 p.m. UTC | #5
From: Pasha Tatashin
> Sent: 08 August 2017 12:49
> Thank you for looking at this change. What you described was in my
> previous iterations of this project.
>
> See for example here: https://lkml.org/lkml/2017/5/5/369
>
> I was asked to remove that flag, and only zero memory in place when
> needed. Overall the current approach is better everywhere else in the
> kernel, but it adds a little extra code to kasan initialization.

Perhaps you could #define the function prototype(s?) so that the flags
are not passed unless it is a kasan build?

	David
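
One possible reading of that suggestion (the _flags helper and macro are
hypothetical, matching the sketch earlier in the thread):

#ifdef CONFIG_KASAN
void * __meminit vmemmap_alloc_block_flags(unsigned long size, int node,
					   gfp_t gfp_mask);
/* Existing callers keep their two-argument form. */
#define vmemmap_alloc_block(size, node) \
	vmemmap_alloc_block_flags(size, node, GFP_KERNEL)
#else
void * __meminit vmemmap_alloc_block(unsigned long size, int node);
#endif

Only the KASAN code would call vmemmap_alloc_block_flags() directly with
GFP_KERNEL | __GFP_ZERO; every other configuration would see no change.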
Pavel Tatashin Aug. 8, 2017, 1:30 p.m. UTC | #6
On 2017-08-08 09:15, David Laight wrote:
> From: Pasha Tatashin
>> Sent: 08 August 2017 12:49
>> Thank you for looking at this change. What you described was in my
>> previous iterations of this project.
>>
>> See for example here: https://lkml.org/lkml/2017/5/5/369
>>
>> I was asked to remove that flag, and only zero memory in place when
>> needed. Overall the current approach is better everywhere else in the
>> kernel, but it adds a little extra code to kasan initialization.
> 
> Perhaps you could #define the function prototype(s?) so that the flags
> are not passed unless it is a kasan build?
> 

Hi David,

Thank you for the suggestion. I think a kasan-specific vmemmap (what I
described in the previous e-mail) would be a better solution than having
different prototypes for different builds. It would be cleaner to have
all kasan-specific code in one place.

Pasha

Patch

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 81f03959a4ab..e78a9ecbb687 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -135,6 +135,41 @@  static void __init clear_pgds(unsigned long start,
 		set_pgd(pgd_offset_k(start), __pgd(0));
 }
 
+/*
+ * Memory that was allocated by vmemmap_populate is not zeroed, so we must
+ * zero it here explicitly.
+ */
+static void
+zero_vmemmap_populated_memory(void)
+{
+	struct memblock_region *reg;
+	u64 start, end;
+
+	for_each_memblock(memory, reg) {
+		start = __phys_to_virt(reg->base);
+		end = __phys_to_virt(reg->base + reg->size);
+
+		if (start >= end)
+			break;
+
+		start = (u64)kasan_mem_to_shadow((void *)start);
+		end = (u64)kasan_mem_to_shadow((void *)end);
+
+	/* Round to the start and end of the mapped pages */
+		start = round_down(start, SWAPPER_BLOCK_SIZE);
+		end = round_up(end, SWAPPER_BLOCK_SIZE);
+		memset((void *)start, 0, end - start);
+	}
+
+	start = (u64)kasan_mem_to_shadow(_text);
+	end = (u64)kasan_mem_to_shadow(_end);
+
+	/* Round to the start and end of the mapped pages */
+	start = round_down(start, SWAPPER_BLOCK_SIZE);
+	end = round_up(end, SWAPPER_BLOCK_SIZE);
+	memset((void *)start, 0, end - start);
+}
+
 void __init kasan_init(void)
 {
 	u64 kimg_shadow_start, kimg_shadow_end;
@@ -205,8 +240,15 @@  void __init kasan_init(void)
 			pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
+
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+	/*
+	 * vmemmap_populate does not zero the memory, so we need to zero it
+	 * explicitly
+	 */
+	zero_vmemmap_populated_memory();
+
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");