From patchwork Sat Mar 2 00:17:04 2024
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 1906988
From: Rick Edgecombe
To: rick.p.edgecombe@intel.com
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
    broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com,
    hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com,
    luto@kernel.org, mingo@redhat.com, peterz@infradead.org,
    sparclinux@vger.kernel.org, tglx@linutronix.de, x86@kernel.org,
    Russell King, linux-arm-kernel@lists.infradead.org
Subject: [RFC v2.1 02/12] ARM: Use initializer for struct vm_unmapped_area_info
Date: Fri, 1 Mar 2024 16:17:04 -0800
Message-Id: <20240302001714.674091-2-rick.p.edgecombe@intel.com>
In-Reply-To: <20240302001714.674091-1-rick.p.edgecombe@intel.com>
References: <20240226190951.3240433-6-rick.p.edgecombe@intel.com>
 <20240302001714.674091-1-rick.p.edgecombe@intel.com>

Future changes will need to add a new member to struct
vm_unmapped_area_info. This would cause trouble for any call site that
doesn't initialize the struct. Currently every caller sets each field
manually, so if new fields are added they will be uninitialized and the
core code parsing the struct will see garbage in the new field.
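
(As an aside, and not part of the patch itself: a stand-alone C sketch of
the behavior being relied on. The struct below is a trimmed-down stand-in
and "start_gap" is an imagined future member, not the real kernel
definition. A designated initializer zero-fills every member that is not
named, while field-by-field assignment to an uninitialized stack variable
leaves an unnamed new member as garbage.)

	#include <stdio.h>

	/* Trimmed-down stand-in for the real struct; "start_gap" is hypothetical. */
	struct demo_unmapped_area_info {
		unsigned long flags;
		unsigned long length;
		unsigned long low_limit;
		unsigned long high_limit;
		unsigned long align_mask;
		unsigned long align_offset;
		unsigned long start_gap;	/* imagined later addition */
	};

	int main(void)
	{
		/* Designated initializer: unnamed members are zero-filled. */
		struct demo_unmapped_area_info info = {
			.length = 4096,
			.low_limit = 0x10000,
			.high_limit = 0x7ffffffff000,
		};

		/* Never assigned, but guaranteed to read as 0. */
		printf("start_gap = %lu\n", info.start_gap);
		return 0;
	}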
It could be possible to initialize the new field manually to 0 at each
call site. This and a couple other options were discussed, and the
consensus (see links) was that in general the best way to accomplish this
would be via static initialization with designated field initializers.
Having some struct vm_unmapped_area_info instances not zero initialized
will put those sites at risk of feeding garbage into vm_unmapped_area()
if the convention is to zero initialize the struct and any new field
addition misses a call site that initializes each field manually.

It could be possible to leave the code mostly untouched, and just change
the line:

struct vm_unmapped_area_info info

to:

struct vm_unmapped_area_info info = {};

However, that would leave the fields that are manually set to zero still
in place, even though doing so would no longer be required. So to reduce
the chance of bugs via uninitialized fields, instead simply continue the
process of initializing the struct this way tree wide. This will zero any
unspecified members. Move the field initializers to the struct
declaration when they are known at that time. Leave out the fields that
were manually initialized to zero, as this would be redundant for
designated initializers.

Signed-off-by: Rick Edgecombe
Cc: Russell King
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
---
Hi,

This patch was split and refactored out of a tree-wide change [0] to just
zero-init each struct vm_unmapped_area_info. The overall goal of the
series is to help shadow stack guard gaps. Currently, there is only one
arch with shadow stacks, but two more are in progress.

It is 0day tested only.

Thanks,
Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/arm/mm/mmap.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index a0f8a0ca0788..f50c8ed98be0 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -34,7 +34,12 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct vm_area_struct *vma;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.length = len,
+		.low_limit = mm->mmap_base,
+		.high_limit = TASK_SIZE,
+		.align_offset = pgoff << PAGE_SHIFT
+	};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
@@ -68,12 +73,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
-	info.length = len;
-	info.low_limit = mm->mmap_base;
-	info.high_limit = TASK_SIZE;
 	info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
-	info.align_offset = pgoff << PAGE_SHIFT;
 
 	return vm_unmapped_area(&info);
 }
@@ -87,7 +87,13 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	unsigned long addr = addr0;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.flags = VM_UNMAPPED_AREA_TOPDOWN,
+		.length = len,
+		.low_limit = FIRST_USER_ADDRESS,
+		.high_limit = mm->mmap_base,
+		.align_offset = pgoff << PAGE_SHIFT
+	};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
@@ -119,12 +125,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		return addr;
 	}
 
-	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-	info.length = len;
-	info.low_limit = FIRST_USER_ADDRESS;
-	info.high_limit = mm->mmap_base;
 	info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
-	info.align_offset = pgoff << PAGE_SHIFT;
 	addr = vm_unmapped_area(&info);
 
 	/*

From patchwork Sat Mar 2 00:17:06 2024
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 1906987
From: Rick Edgecombe
To: rick.p.edgecombe@intel.com
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
    broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com,
    hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com,
    luto@kernel.org, mingo@redhat.com, peterz@infradead.org,
    sparclinux@vger.kernel.org, tglx@linutronix.de, x86@kernel.org,
    Huacai Chen, WANG Xuerui, loongarch@lists.linux.dev
Subject: [RFC v2.1 04/12] LoongArch: Use initializer for struct vm_unmapped_area_info
Date: Fri, 1 Mar 2024 16:17:06 -0800
Message-Id: <20240302001714.674091-4-rick.p.edgecombe@intel.com>
In-Reply-To: <20240302001714.674091-1-rick.p.edgecombe@intel.com>
References: <20240226190951.3240433-6-rick.p.edgecombe@intel.com>
 <20240302001714.674091-1-rick.p.edgecombe@intel.com>

Future changes will need to add a new member to struct
vm_unmapped_area_info. This would cause trouble for any call site that
doesn't initialize the struct. Currently every caller sets each field
manually, so if new fields are added they will be uninitialized and the
core code parsing the struct will see garbage in the new field.

It could be possible to initialize the new field manually to 0 at each
call site. This and a couple other options were discussed, and the
consensus (see links) was that in general the best way to accomplish this
would be via static initialization with designated field initializers.
Having some struct vm_unmapped_area_info instances not zero initialized
will put those sites at risk of feeding garbage into vm_unmapped_area()
if the convention is to zero initialize the struct and any new field
addition misses a call site that initializes each field manually.

It could be possible to leave the code mostly untouched, and just change
the line:

struct vm_unmapped_area_info info

to:

struct vm_unmapped_area_info info = {};

However, that would leave the fields that are manually set to zero still
in place, even though doing so would no longer be required. So to reduce
the chance of bugs via uninitialized fields, instead simply continue the
process of initializing the struct this way tree wide. This will zero any
unspecified members. Move the field initializers to the struct
declaration when they are known at that time. Leave out the fields that
were manually initialized to zero, as this would be redundant for
designated initializers.

Signed-off-by: Rick Edgecombe
Cc: Huacai Chen
Cc: WANG Xuerui
Cc: loongarch@lists.linux.dev
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
---
Hi,

This patch was split and refactored out of a tree-wide change [0] to just
zero-init each struct vm_unmapped_area_info. The overall goal of the
series is to help shadow stack guard gaps. Currently, there is only one
arch with shadow stacks, but two more are in progress.

It is 0day tested only.
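
(Illustration only, not part of the patch: the rough shape the converted
call sites take. Fields known up front go into the designated
initializer; a field such as .flags that is only assigned on one branch
can rely on having been zeroed by the initializer, which is why the
explicit "info.flags = 0" lines can be dropped. The struct, flag value,
and limits below are simplified stand-ins, not the kernel definitions.)

	struct demo_unmapped_area_info {
		unsigned long flags;
		unsigned long length;
		unsigned long low_limit;
		unsigned long high_limit;
		unsigned long align_mask;
		unsigned long align_offset;
	};

	#define DEMO_AREA_TOPDOWN	0x1	/* stand-in for VM_UNMAPPED_AREA_TOPDOWN */

	static unsigned long demo_get_area(int topdown, unsigned long len)
	{
		/* Fields known at declaration time; everything else starts as 0. */
		struct demo_unmapped_area_info info = {
			.length = len,
			.low_limit = 0x10000,		/* placeholder limits */
			.high_limit = 0x7ffffffff000,
		};

		if (topdown)
			info.flags = DEMO_AREA_TOPDOWN;
		/* else: info.flags is already 0 from the initializer */

		/* Stand-in result in place of vm_unmapped_area(&info). */
		return info.flags ? info.high_limit - len : info.low_limit;
	}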
Thanks,
Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/loongarch/mm/mmap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/loongarch/mm/mmap.c b/arch/loongarch/mm/mmap.c
index a9630a81b38a..aa028b25c461 100644
--- a/arch/loongarch/mm/mmap.c
+++ b/arch/loongarch/mm/mmap.c
@@ -24,7 +24,12 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.low_limit = mm->mmap_base,
+		.high_limit = TASK_SIZE,
+		.length = len,
+		.align_offset = pgoff << PAGE_SHIFT
+	};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
@@ -61,9 +66,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 		return addr;
 	}
 
-	info.length = len;
 	info.align_mask = do_color_align ? (PAGE_MASK & SHM_ALIGN_MASK) : 0;
-	info.align_offset = pgoff << PAGE_SHIFT;
 
 	if (dir == DOWN) {
 		info.flags = VM_UNMAPPED_AREA_TOPDOWN;
@@ -82,9 +85,6 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 		 */
 	}
 
-	info.flags = 0;
-	info.low_limit = mm->mmap_base;
-	info.high_limit = TASK_SIZE;
 	return vm_unmapped_area(&info);
 }

From patchwork Mon Feb 26 19:09:47 2024
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 1904689
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, debug@rivosinc.com,
    broonie@kernel.org, kirill.shutemov@linux.intel.com, keescook@chromium.org,
    tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, x86@kernel.org, luto@kernel.org,
    peterz@infradead.org, hpa@zytor.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: rick.p.edgecombe@intel.com, linux-alpha@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-csky@vger.kernel.org, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org
Subject: [PATCH v2 5/9] mm: Initialize struct vm_unmapped_area_info
Date: Mon, 26 Feb 2024 11:09:47 -0800
Message-Id: <20240226190951.3240433-6-rick.p.edgecombe@intel.com>
In-Reply-To: <20240226190951.3240433-1-rick.p.edgecombe@intel.com>
References: <20240226190951.3240433-1-rick.p.edgecombe@intel.com>

Future changes will need to add a field to struct vm_unmapped_area_info.
This would cause trouble for any archs that don't initialize the struct.
Currently every user sets each field, so if new fields are added, the
core code parsing the struct will see garbage in the new field.

It could be possible to initialize the new field for each arch to 0, but
instead simply initialize the struct with C99 struct initializer syntax.

Cc: linux-mm@kvack.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-csky@vger.kernel.org
Cc: loongarch@lists.linux.dev
Cc: linux-mips@vger.kernel.org
Cc: linux-parisc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: x86@kernel.org
Suggested-by: Kirill A. Shutemov
Signed-off-by: Rick Edgecombe
Link: https://lore.kernel.org/lkml/3ynogxcgokc6i6xojbxzzwqectg472laes24u7jmtktlxcch5e@dfytra3ia3zc/#t
---
Hi archs,

For some context, this is part of a larger series to improve shadow stack
guard gaps. It involves plumbing a new field via struct
vm_unmapped_area_info. The first user is x86, but arm and riscv will
likely use it as well.

The change is compile tested only for non-x86 but seems like a relatively
safe one.
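
(Stand-alone illustration, not part of the patch: what the "= {}" form
guarantees. "{}" as an empty initializer is a GNU extension that C23
later standardized; the strict C99 spelling is "= { 0 }", and both
zero-fill every member, so a field added to the struct later reads as 0
at these call sites. The struct and field names below are stand-ins.)

	#include <assert.h>

	struct demo_unmapped_area_info {	/* trimmed-down stand-in */
		unsigned long flags;
		unsigned long length;
		unsigned long new_field;	/* imagined future addition */
	};

	int main(void)
	{
		struct demo_unmapped_area_info info = {};

		info.flags = 0x1;	/* callers keep setting the fields they use */
		info.length = 4096;

		assert(info.new_field == 0);	/* never assigned, but guaranteed zero */
		return 0;
	}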
Thanks,
Rick

v2:
 - New patch
---
 arch/alpha/kernel/osf_sys.c      | 2 +-
 arch/arc/mm/mmap.c               | 2 +-
 arch/arm/mm/mmap.c               | 4 ++--
 arch/csky/abiv1/mmap.c           | 2 +-
 arch/loongarch/mm/mmap.c         | 2 +-
 arch/mips/mm/mmap.c              | 2 +-
 arch/parisc/kernel/sys_parisc.c  | 2 +-
 arch/powerpc/mm/book3s64/slice.c | 4 ++--
 arch/s390/mm/hugetlbpage.c       | 4 ++--
 arch/s390/mm/mmap.c              | 4 ++--
 arch/sh/mm/mmap.c                | 4 ++--
 arch/sparc/kernel/sys_sparc_32.c | 2 +-
 arch/sparc/kernel/sys_sparc_64.c | 4 ++--
 arch/sparc/mm/hugetlbpage.c      | 4 ++--
 arch/x86/kernel/sys_x86_64.c     | 4 ++--
 arch/x86/mm/hugetlbpage.c        | 4 ++--
 fs/hugetlbfs/inode.c             | 4 ++--
 mm/mmap.c                        | 4 ++--
 18 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
index 5db88b627439..dd6801bb9240 100644
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -1218,7 +1218,7 @@ static unsigned long
 arch_get_unmapped_area_1(unsigned long addr, unsigned long len,
 			 unsigned long limit)
 {
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = 0;
 	info.length = len;
diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 3c1c7ae73292..6549b3375f54 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -27,7 +27,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We enforce the MAP_FIXED case.
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index a0f8a0ca0788..525795578c29 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -34,7 +34,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct vm_area_struct *vma;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
@@ -87,7 +87,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	unsigned long addr = addr0;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
diff --git a/arch/csky/abiv1/mmap.c b/arch/csky/abiv1/mmap.c
index 6792aca49999..726659d41fa9 100644
--- a/arch/csky/abiv1/mmap.c
+++ b/arch/csky/abiv1/mmap.c
@@ -28,7 +28,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	int do_align = 0;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
diff --git a/arch/loongarch/mm/mmap.c b/arch/loongarch/mm/mmap.c
index a9630a81b38a..664bf4abfdcf 100644
--- a/arch/loongarch/mm/mmap.c
+++ b/arch/loongarch/mm/mmap.c
@@ -24,7 +24,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 00fe90c6db3e..6321b53dc995 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -34,7 +34,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index 98af719d5f85..e87c0e325abf 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -104,7 +104,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma, *prev;
 	unsigned long filp_pgoff;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index c0b58afb9a47..5884f384866f 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -282,7 +282,7 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
 {
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
 	unsigned long found, next_end;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = 0;
 	info.length = len;
@@ -326,7 +326,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
 {
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
 	unsigned long found, prev;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index c2d2850ec8d5..7f68485feea0 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -258,7 +258,7 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = 0;
 	info.length = len;
@@ -274,7 +274,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	unsigned long addr;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index cd52d72b59cf..df88496e2903 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -77,7 +77,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (len > TASK_SIZE - mmap_min_addr)
 		return -ENOMEM;
@@ -116,7 +116,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
 {
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE - mmap_min_addr)
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index b82199878b45..6aee5f761e08 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -57,7 +57,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	int do_colour_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
@@ -106,7 +106,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	int do_colour_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
diff --git a/arch/sparc/kernel/sys_sparc_32.c b/arch/sparc/kernel/sys_sparc_32.c
index 082a551897ed..7e781dbfd052 100644
--- a/arch/sparc/kernel/sys_sparc_32.c
+++ b/arch/sparc/kernel/sys_sparc_32.c
@@ -41,7 +41,7 @@ SYSCALL_DEFINE0(getpagesize)
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len,
 	unsigned long pgoff, unsigned long flags)
 {
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 1dbf7211666e..fc48ab3f83af 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -93,7 +93,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	struct vm_area_struct * vma;
 	unsigned long task_size = TASK_SIZE;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
@@ -154,7 +154,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	unsigned long task_size = STACK_TOP32;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* This should only ever run for 32-bit processes. */
 	BUG_ON(!test_thread_flag(TIF_32BIT));
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 38a1bef47efb..614e2c46d781 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -31,7 +31,7 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *filp,
 {
 	struct hstate *h = hstate_file(filp);
 	unsigned long task_size = TASK_SIZE;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (test_thread_flag(TIF_32BIT))
 		task_size = STACK_TOP32;
@@ -63,7 +63,7 @@ hugetlb_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct hstate *h = hstate_file(filp);
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* This should only ever run for 32-bit processes. */
 	BUG_ON(!test_thread_flag(TIF_32BIT));
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index c783aeb37dce..6e5d4fa5fc42 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -125,7 +125,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	unsigned long begin, end;
 
 	if (flags & MAP_FIXED)
@@ -165,7 +165,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE)
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 6d77c0039617..88726bd1f72d 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -51,7 +51,7 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = 0;
 	info.length = len;
@@ -74,7 +74,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a63d2eee086f..848b2239a215 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -165,7 +165,7 @@ hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = 0;
 	info.length = len;
@@ -181,7 +181,7 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
diff --git a/mm/mmap.c b/mm/mmap.c
index e02bb17fef5b..33af683a643f 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1699,7 +1699,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma, *prev;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
 
 	if (len > mmap_end - mmap_min_addr)
@@ -1747,7 +1747,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 {
 	struct vm_area_struct *vma, *prev;
 	struct mm_struct *mm = current->mm;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
 
 	/* requested length too big for entire address space */