From patchwork Thu Jun  8 02:10:51 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1791972
From: Cengiz Can <cengiz.can@canonical.com>
To: kernel-team@lists.ubuntu.com
Subject: [SRU Jammy, OEM-5.17, Kinetic, OEM-6.0 PATCH 1/5] x86/kasan: Map shadow for percpu pages on demand
Date: Thu, 8 Jun 2023 05:10:51 +0300
Message-Id: <20230608021055.203634-2-cengiz.can@canonical.com>
In-Reply-To: <20230608021055.203634-1-cengiz.can@canonical.com>
References: <20230608021055.203634-1-cengiz.can@canonical.com>
List-Id: Kernel team discussions

From: Andrey Ryabinin

KASAN maps shadow for the entire CPU-entry-area:
  [CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE]

This will explode once the per-cpu entry areas are randomized, since
that will increase CPU_ENTRY_AREA_MAP_SIZE to 512 GB and KASAN fails
to allocate shadow for such a big area.

Fix this by allocating KASAN shadow only for really used cpu entry
area addresses mapped by cea_map_percpu_pages().

Thanks to the 0day folks for finding and reporting this to be an
issue.
[ dhansen: tweak changelog since this will get committed before
	   peterz's actual cpu-entry-area randomization ]

Signed-off-by: Andrey Ryabinin
Signed-off-by: Dave Hansen
Tested-by: Yujie Liu
Cc: kernel test robot
Link: https://lore.kernel.org/r/202210241508.2e203c3d-yujie.liu@intel.com
CVE-2023-0597
(cherry picked from commit 3f148f3318140035e87decc1214795ff0755757b)
[cengizcan: prerequisite commit]
Signed-off-by: Cengiz Can <cengiz.can@canonical.com>
---
 arch/x86/include/asm/kasan.h |  3 +++
 arch/x86/mm/cpu_entry_area.c |  8 +++++++-
 arch/x86/mm/kasan_init_64.c  | 15 ++++++++++++---
 3 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 13e70da38bed..de75306b932e 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -28,9 +28,12 @@
 #ifdef CONFIG_KASAN
 void __init kasan_early_init(void);
 void __init kasan_init(void);
+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
 #else
 static inline void kasan_early_init(void) { }
 static inline void kasan_init(void) { }
+static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
+						   int nid) { }
 #endif
 
 #endif
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 6c2f1b76a0b6..d7081b1accca 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -9,6 +9,7 @@
 #include <asm/cpu_entry_area.h>
 #include <asm/fixmap.h>
 #include <asm/desc.h>
+#include <asm/kasan.h>
 
 static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page,
				    entry_stack_storage);
@@ -53,8 +54,13 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
 static void __init
 cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 {
+	phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
+
+	kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
+					early_pfn_to_nid(PFN_DOWN(pa)));
+
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
+		cea_set_pte(cea_vaddr, pa, prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index e7b9b464a82f..d1416926ad52 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -316,6 +316,18 @@ void __init kasan_early_init(void)
 	kasan_map_early_shadow(init_top_pgt);
 }
 
+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
+{
+	unsigned long shadow_start, shadow_end;
+
+	shadow_start = (unsigned long)kasan_mem_to_shadow(va);
+	shadow_start = round_down(shadow_start, PAGE_SIZE);
+	shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
+	shadow_end = round_up(shadow_end, PAGE_SIZE);
+
+	kasan_populate_shadow(shadow_start, shadow_end, nid);
+}
+
 void __init kasan_init(void)
 {
 	int i;
@@ -393,9 +405,6 @@ void __init kasan_init(void)
 		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
 		shadow_cpu_entry_begin);
 
-	kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
-			      (unsigned long)shadow_cpu_entry_end, 0);
-
 	kasan_populate_early_shadow(shadow_cpu_entry_end,
 		kasan_mem_to_shadow((void *)__START_KERNEL_map));