From patchwork Sat Mar 2 00:17:12 2024
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 1906994
From: Rick Edgecombe
To: rick.p.edgecombe@intel.com
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
    broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com,
    hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com,
    luto@kernel.org, mingo@redhat.com, peterz@infradead.org,
    sparclinux@vger.kernel.org, tglx@linutronix.de, x86@kernel.org,
    "David S. Miller"
Subject: [RFC v2.1 10/12] sparc: Use initializer for struct vm_unmapped_area_info
Date: Fri, 1 Mar 2024 16:17:12 -0800
Message-Id: <20240302001714.674091-10-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240302001714.674091-1-rick.p.edgecombe@intel.com>
References: <20240226190951.3240433-6-rick.p.edgecombe@intel.com>
    <20240302001714.674091-1-rick.p.edgecombe@intel.com>
X-Mailing-List: sparclinux@vger.kernel.org

Future changes will need to add a new member to struct
vm_unmapped_area_info. This would cause trouble for any call site that
doesn't initialize the struct. Currently every caller sets each field
manually, so if new fields are added they will be uninitialized and the
core code parsing the struct will see garbage in the new field.
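To make the failure mode concrete, here is a minimal standalone C sketch
(the struct below is only a trimmed stand-in for the kernel's struct
vm_unmapped_area_info, and start_gap is a hypothetical future member used
purely for illustration):

#include <stdio.h>

/* Trimmed stand-in for struct vm_unmapped_area_info; start_gap plays
 * the role of a member added after the call sites were written. */
struct vm_unmapped_area_info {
        unsigned long flags;
        unsigned long length;
        unsigned long low_limit;
        unsigned long high_limit;
        unsigned long align_mask;
        unsigned long align_offset;
        unsigned long start_gap;        /* hypothetical new member */
};

int main(void)
{
        /* Current pattern: each member is assigned by hand, so the new
         * start_gap member is never touched and holds stack garbage. */
        struct vm_unmapped_area_info manual;
        manual.flags = 0;
        manual.length = 4096;
        (void)manual;   /* reading manual.start_gap would be indeterminate */

        /* Designated initializer: every member not named here, including
         * start_gap, is implicitly zeroed by C's rules for partially
         * initialized objects. */
        struct vm_unmapped_area_info designated = {
                .length = 4096,
        };

        printf("designated.start_gap = %lu\n", designated.start_gap);
        return 0;
}

Built with a plain cc, the designated case always prints 0, while the
manually assigned struct leaves the new member with an indeterminate value.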
It could be possible to initialize the new field manually to 0 at each
call site. This and a couple of other options were discussed, and the
consensus (see links) was that in general the best way to accomplish
this would be via static initialization with designated field
initializers. Having some struct vm_unmapped_area_info instances not
zero initialized will put those sites at risk of feeding garbage into
vm_unmapped_area() if the convention is to zero initialize the struct
and any new field addition misses a call site that initializes each
field manually.

It could be possible to leave the code mostly untouched, and just change
the line:

struct vm_unmapped_area_info info

to:

struct vm_unmapped_area_info info = {};

However, that would leave cleanup needed for the fields that are
manually set to zero, as those assignments would no longer be required.

So to reduce the chance of bugs via uninitialized fields, instead simply
continue the process of initializing the struct this way tree-wide. This
will zero any unspecified members. Move the field initializers to the
struct declaration when they are known at that time. Leave out the
fields that were manually initialized to zero, as this would be
redundant for designated initializers.

Signed-off-by: Rick Edgecombe
Cc: "David S. Miller"
Cc: sparclinux@vger.kernel.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
---
Hi,

This patch was split and refactored out of a tree-wide change [0] to
just zero-init each struct vm_unmapped_area_info. The overall goal of
the series is to help shadow stack guard gaps. Currently, there is only
one arch with shadow stacks, but two more are in progress. It is 0day
tested only.

Thanks,

Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/sparc/kernel/sys_sparc_32.c | 16 ++++++++--------
 arch/sparc/kernel/sys_sparc_64.c | 23 ++++++++++++-----------
 arch/sparc/mm/hugetlbpage.c      | 25 ++++++++++++-------------
 3 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/arch/sparc/kernel/sys_sparc_32.c b/arch/sparc/kernel/sys_sparc_32.c
index 082a551897ed..13226d921a23 100644
--- a/arch/sparc/kernel/sys_sparc_32.c
+++ b/arch/sparc/kernel/sys_sparc_32.c
@@ -41,7 +41,14 @@ SYSCALL_DEFINE0(getpagesize)
 
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags)
 {
-        struct vm_unmapped_area_info info;
+        struct vm_unmapped_area_info info = {
+                .length = len,
+                .low_limit = addr,
+                .high_limit = TASK_SIZE,
+                .align_mask = (flags & MAP_SHARED) ?
+                        (PAGE_MASK & (SHMLBA - 1)) : 0,
+                .align_offset = pgoff << PAGE_SHIFT
+        };
 
         if (flags & MAP_FIXED) {
                 /* We do not accept a shared mapping if it would violate
@@ -59,13 +66,6 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
         if (!addr)
                 addr = TASK_UNMAPPED_BASE;
 
-        info.flags = 0;
-        info.length = len;
-        info.low_limit = addr;
-        info.high_limit = TASK_SIZE;
-        info.align_mask = (flags & MAP_SHARED) ?
-                (PAGE_MASK & (SHMLBA - 1)) : 0;
-        info.align_offset = pgoff << PAGE_SHIFT;
         return vm_unmapped_area(&info);
 }
 
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 1dbf7211666e..83c273043962 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -93,7 +93,11 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
         struct vm_area_struct * vma;
         unsigned long task_size = TASK_SIZE;
         int do_color_align;
-        struct vm_unmapped_area_info info;
+        struct vm_unmapped_area_info info = {
+                .length = len,
+                .low_limit = TASK_UNMAPPED_BASE,
+                .align_offset = pgoff << PAGE_SHIFT
+        };
 
         if (flags & MAP_FIXED) {
                 /* We do not accept a shared mapping if it would violate
@@ -126,12 +130,8 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
                 return addr;
         }
 
-        info.flags = 0;
-        info.length = len;
-        info.low_limit = TASK_UNMAPPED_BASE;
         info.high_limit = min(task_size, VA_EXCLUDE_START);
         info.align_mask = do_color_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
-        info.align_offset = pgoff << PAGE_SHIFT;
         addr = vm_unmapped_area(&info);
 
         if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
@@ -154,7 +154,13 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
         unsigned long task_size = STACK_TOP32;
         unsigned long addr = addr0;
         int do_color_align;
-        struct vm_unmapped_area_info info;
+        struct vm_unmapped_area_info info = {
+                .flags = VM_UNMAPPED_AREA_TOPDOWN,
+                .length = len,
+                .low_limit = PAGE_SIZE,
+                .high_limit = mm->mmap_base,
+                .align_offset = pgoff << PAGE_SHIFT
+        };
 
         /* This should only ever run for 32-bit processes. */
         BUG_ON(!test_thread_flag(TIF_32BIT));
@@ -189,12 +195,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
                 return addr;
         }
 
-        info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-        info.length = len;
-        info.low_limit = PAGE_SIZE;
-        info.high_limit = mm->mmap_base;
         info.align_mask = do_color_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
-        info.align_offset = pgoff << PAGE_SHIFT;
         addr = vm_unmapped_area(&info);
 
         /*
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 38a1bef47efb..5307781020bf 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -31,17 +31,16 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *filp,
 {
         struct hstate *h = hstate_file(filp);
         unsigned long task_size = TASK_SIZE;
-        struct vm_unmapped_area_info info;
+        struct vm_unmapped_area_info info = {
+                .length = len,
+                .low_limit = TASK_UNMAPPED_BASE,
+                .align_mask = PAGE_MASK & ~huge_page_mask(h)
+        };
 
         if (test_thread_flag(TIF_32BIT))
                 task_size = STACK_TOP32;
 
-        info.flags = 0;
-        info.length = len;
-        info.low_limit = TASK_UNMAPPED_BASE;
         info.high_limit = min(task_size, VA_EXCLUDE_START);
-        info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-        info.align_offset = 0;
         addr = vm_unmapped_area(&info);
 
         if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
@@ -63,17 +62,17 @@ hugetlb_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
         struct hstate *h = hstate_file(filp);
         struct mm_struct *mm = current->mm;
         unsigned long addr = addr0;
-        struct vm_unmapped_area_info info;
+        struct vm_unmapped_area_info info = {
+                .flags = VM_UNMAPPED_AREA_TOPDOWN,
+                .length = len,
+                .low_limit = PAGE_SIZE,
+                .high_limit = mm->mmap_base,
+                .align_mask = PAGE_MASK & ~huge_page_mask(h)
+        };
 
         /* This should only ever run for 32-bit processes. */
         BUG_ON(!test_thread_flag(TIF_32BIT));
 
-        info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-        info.length = len;
-        info.low_limit = PAGE_SIZE;
-        info.high_limit = mm->mmap_base;
-        info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-        info.align_offset = 0;
         addr = vm_unmapped_area(&info);
 
         /*