From patchwork Tue Aug 29 02:02:21 2017
From: Pavel Tatashin <pasha.tatashin@oracle.com>
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    x86@kernel.org, kasan-dev@googlegroups.com, borntraeger@de.ibm.com,
    heiko.carstens@de.ibm.com, davem@davemloft.net, willy@infradead.org,
    mhocko@kernel.org, ard.biesheuvel@linaro.org, will.deacon@arm.com,
    catalin.marinas@arm.com, sam@ravnborg.org, mgorman@techsingularity.net,
    Steven.Sistare@oracle.com, daniel.m.jordan@oracle.com,
    bob.picco@oracle.com
Subject: [PATCH v7 10/11] arm64/kasan: explicitly zero kasan shadow memory
Date: Mon, 28 Aug 2017 22:02:21 -0400
Message-Id: <1503972142-289376-11-git-send-email-pasha.tatashin@oracle.com>
In-Reply-To: <1503972142-289376-1-git-send-email-pasha.tatashin@oracle.com>
References: <1503972142-289376-1-git-send-email-pasha.tatashin@oracle.com>

To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory. We must explicitly zero
the memory that is allocated by vmemmap_populate() for kasan, as this
memory does not go through the struct page initialization path.
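
For reference, a minimal sketch of the address-to-shadow translation
that kasan_mem_to_shadow() performs: one shadow byte tracks eight bytes
of memory, shifted down and displaced by a compile-time offset. The
_sketch names and the offset value below are illustrative stand-ins,
not the real arm64 configuration:

	/* Sketch of KASAN's mem-to-shadow mapping; offset is made up. */
	#define KASAN_SHADOW_SCALE_SHIFT_SKETCH	3 /* 8 bytes per shadow byte */
	#define KASAN_SHADOW_OFFSET_SKETCH	0xdfff200000000000UL

	static inline void *mem_to_shadow_sketch(const void *addr)
	{
		return (void *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT_SKETCH)
				+ KASAN_SHADOW_OFFSET_SKETCH);
	}

Because of this mapping, each memblock region of memory owns one
contiguous range of shadow bytes, which is exactly the range the patch
below zeroes.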
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <Steven.Sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
---
 arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 81f03959a4ab..e78a9ecbb687 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
 		set_pgd(pgd_offset_k(start), __pgd(0));
 }
 
+/*
+ * Memory that was allocated by vmemmap_populate is not zeroed, so we must
+ * zero it here explicitly.
+ */
+static void
+zero_vmemmap_populated_memory(void)
+{
+	struct memblock_region *reg;
+	u64 start, end;
+
+	for_each_memblock(memory, reg) {
+		start = __phys_to_virt(reg->base);
+		end = __phys_to_virt(reg->base + reg->size);
+
+		if (start >= end)
+			break;
+
+		start = (u64)kasan_mem_to_shadow((void *)start);
+		end = (u64)kasan_mem_to_shadow((void *)end);
+
+		/* Round start down and end up to the mapped page boundaries */
+		start = round_down(start, SWAPPER_BLOCK_SIZE);
+		end = round_up(end, SWAPPER_BLOCK_SIZE);
+		memset((void *)start, 0, end - start);
+	}
+
+	start = (u64)kasan_mem_to_shadow(_text);
+	end = (u64)kasan_mem_to_shadow(_end);
+
+	/* Round start down and end up to the mapped page boundaries */
+	start = round_down(start, SWAPPER_BLOCK_SIZE);
+	end = round_up(end, SWAPPER_BLOCK_SIZE);
+	memset((void *)start, 0, end - start);
+}
+
 void __init kasan_init(void)
 {
 	u64 kimg_shadow_start, kimg_shadow_end;
@@ -205,8 +240,15 @@ void __init kasan_init(void)
 			pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
+
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+	/*
+	 * vmemmap_populate does not zero the memory, so we need to zero it
+	 * explicitly
+	 */
+	zero_vmemmap_populated_memory();
+
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");
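
A note on the rounding above: the shadow region is mapped in
SWAPPER_BLOCK_SIZE chunks (its value depends on the configured page
size), so the freshly mapped, unzeroed memory extends out to those
block boundaries; rounding start down and end up makes the memset()
cover the whole mapped blocks rather than just the exact shadow range.
For reference, a sketch of what the kernel's round_down()/round_up()
helpers compute for a power-of-two alignment such as SWAPPER_BLOCK_SIZE
(the _sketch names are mine, not the kernel's):

	#define round_down_sketch(x, a)	((x) & ~((u64)(a) - 1))
	#define round_up_sketch(x, a)	((((x) - 1) | ((u64)(a) - 1)) + 1)

	/*
	 * e.g. assuming a 2 MiB SWAPPER_BLOCK_SIZE:
	 * round_down_sketch(0x200123, 0x200000) == 0x200000
	 * round_up_sketch(0x200123, 0x200000)   == 0x400000
	 */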