From patchwork Thu Jun 8 02:10:55 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1791977
From: Cengiz Can
To: kernel-team@lists.ubuntu.com
Subject: [SRU Jammy, OEM-5.17, Kinetic, OEM-6.0 PATCH 4/5] x86/mm: Populate KASAN shadow for entire per-CPU range of CPU entry area
Date: Thu, 8 Jun 2023 05:10:55 +0300
Message-Id: <20230608021055.203634-6-cengiz.can@canonical.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230608021055.203634-1-cengiz.can@canonical.com>
References: <20230608021055.203634-1-cengiz.can@canonical.com>
MIME-Version: 1.0
X-BeenThere: kernel-team@lists.ubuntu.com
X-Mailman-Version: 2.1.20
Precedence: list
List-Id: Kernel team discussions
Errors-To: kernel-team-bounces@lists.ubuntu.com
Sender: "kernel-team"

From: Sean Christopherson

Populate a KASAN shadow for the entire possible per-CPU range of the CPU
entry area instead of requiring that each individual chunk map a shadow.
Mapping shadows individually is error prone, e.g. the per-CPU GDT mapping
was left behind, which can lead to not-present page faults during KASAN
validation if the kernel performs a software lookup into the GDT.  The DS
buffer is also likely affected.

The motivation for mapping the per-CPU areas on-demand was to avoid
mapping the entire 512GiB range that's reserved for the CPU entry area;
shaving a few bytes by not creating shadows for potentially unused memory
was not a goal.

The bug is most easily reproduced by doing a sigreturn with a garbage
CS in the sigcontext, e.g.

  int main(void)
  {
    struct sigcontext regs;

    syscall(__NR_mmap, 0x1ffff000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);
    syscall(__NR_mmap, 0x20000000ul, 0x1000000ul, 7ul, 0x32ul, -1, 0ul);
    syscall(__NR_mmap, 0x21000000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);

    memset(&regs, 0, sizeof(regs));
    regs.cs = 0x1d0;
    syscall(__NR_rt_sigreturn);
    return 0;
  }

to coerce the kernel into doing a GDT lookup to compute CS.base when
reading the instruction bytes on the subsequent #GP to determine whether
or not the #GP is something the kernel should handle, e.g. to fix up UMIP
violations or to emulate CLI/STI for IOPL=3 applications.

  BUG: unable to handle page fault for address: fffffbc8379ace00
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 16c03a067 P4D 16c03a067 PUD 15b990067 PMD 15b98f067 PTE 0
  Oops: 0000 [#1] PREEMPT SMP KASAN
  CPU: 3 PID: 851 Comm: r2 Not tainted 6.1.0-rc3-next-20221103+ #432
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:kasan_check_range+0xdf/0x190
  Call Trace:
   get_desc+0xb0/0x1d0
   insn_get_seg_base+0x104/0x270
   insn_fetch_from_user+0x66/0x80
   fixup_umip_exception+0xb1/0x530
   exc_general_protection+0x181/0x210
   asm_exc_general_protection+0x22/0x30
  RIP: 0003:0x0
  Code: Unable to access opcode bytes at 0xffffffffffffffd6.
  RSP: 0003:0000000000000000 EFLAGS: 00000202
  RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000000001d0
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
  R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Reported-by: syzbot+ffb4f000dc2872c93f62@syzkaller.appspotmail.com
Suggested-by: Andrey Ryabinin
Signed-off-by: Sean Christopherson
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Andrey Ryabinin
Link: https://lkml.kernel.org/r/20221110203504.1985010-3-seanjc@google.com
CVE-2023-0597
(cherry picked from commit 97650148a15e0b30099d6175ffe278b9f55ec66a)
Signed-off-by: Cengiz Can
---
 arch/x86/mm/cpu_entry_area.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index d831aae94b41..7c855dffcdc2 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -91,11 +91,6 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
 static void __init
 cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 {
-	phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
-
-	kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
-					early_pfn_to_nid(PFN_DOWN(pa)));
-
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
 		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
@@ -195,6 +190,9 @@ static void __init setup_cpu_entry_area(unsigned int cpu)
 	pgprot_t tss_prot = PAGE_KERNEL;
 #endif
 
+	kasan_populate_shadow_for_vaddr(cea, CPU_ENTRY_AREA_SIZE,
+					early_cpu_to_node(cpu));
+
 	cea_set_pte(&cea->gdt, get_cpu_gdt_paddr(cpu), gdt_prot);
 
 	cea_map_percpu_pages(&cea->entry_stack_page,
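
As an aside, the not-present shadow address in the oops above can be translated
back to the kernel virtual address it covers. The snippet below is a minimal
user-space sketch, not part of the patch, assuming generic KASAN on x86-64 with
the default KASAN_SHADOW_OFFSET of 0xdffffc0000000000 and a scale shift of 3
(one shadow byte per 8 bytes of memory); shadow_to_mem() is a local helper that
simply inverts the kernel's shadow = (addr >> 3) + offset mapping.

  #include <stdio.h>
  #include <stdint.h>

  #define KASAN_SHADOW_OFFSET      0xdffffc0000000000ULL
  #define KASAN_SHADOW_SCALE_SHIFT 3

  /* Invert shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET. */
  static uint64_t shadow_to_mem(uint64_t shadow)
  {
          return (shadow - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
  }

  int main(void)
  {
          uint64_t shadow = 0xfffffbc8379ace00ULL;  /* faulting address from the oops */

          printf("shadow %#llx covers kernel vaddr %#llx\n",
                 (unsigned long long)shadow,
                 (unsigned long long)shadow_to_mem(shadow));
          return 0;
  }

Under those assumptions the faulting address fffffbc8379ace00 maps back to
0xfffffe41bcd67000, inside the fffffe0000000000-based cpu_entry_area region,
which is consistent with the GDT lookup in get_desc() touching a page whose
shadow was never populated.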