From patchwork Tue Jan 2 20:05:48 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 854735
X-Patchwork-Delegate: davem@davemloft.net
From: Ard Biesheuvel
To: linux-kernel@vger.kernel.org
Cc: Ard Biesheuvel, "H. Peter Anvin", Ralf Baechle, Arnd Bergmann,
    Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
    Thomas Garnier, Thomas Gleixner, "Serge E. Hallyn", Bjorn Helgaas,
    Benjamin Herrenschmidt, Russell King, Paul Mackerras, Catalin Marinas,
    "David S. Miller",
    Petr Mladek, Ingo Molnar, James Morris, Andrew Morton, Nicolas Pitre,
    Josh Poimboeuf, Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
    Linus Torvalds, Jessica Yu, linux-arm-kernel@lists.infradead.org,
    linux-mips@linux-mips.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v7 09/10] x86: jump_label: switch to jump_entry accessors
Date: Tue, 2 Jan 2018 20:05:48 +0000
Message-Id: <20180102200549.22984-10-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180102200549.22984-1-ard.biesheuvel@linaro.org>
References: <20180102200549.22984-1-ard.biesheuvel@linaro.org>
Sender: sparclinux-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: sparclinux@vger.kernel.org

In preparation for switching x86 to place-relative references for the
code, target and key members of struct jump_entry, replace direct
references to the struct members with invocations of the new accessors.
This will allow us to make the switch by modifying only the accessors.

Signed-off-by: Ard Biesheuvel
---
 arch/x86/kernel/jump_label.c | 43 ++++++++++++--------
 1 file changed, 26 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index e56c95be2808..d64296092ef5 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -52,22 +52,24 @@ static void __jump_label_transform(struct jump_entry *entry,
                         * Jump label is enabled for the first time.
                         * So we expect a default_nop...
                         */
-                        if (unlikely(memcmp((void *)entry->code, default_nop, 5)
-                                     != 0))
-                                bug_at((void *)entry->code, __LINE__);
+                        if (unlikely(memcmp((void *)jump_entry_code(entry),
+                                            default_nop, 5) != 0))
+                                bug_at((void *)jump_entry_code(entry),
+                                       __LINE__);
                 } else {
                         /*
                          * ...otherwise expect an ideal_nop. Otherwise
                          * something went horribly wrong.
                          */
-                        if (unlikely(memcmp((void *)entry->code, ideal_nop, 5)
-                                     != 0))
-                                bug_at((void *)entry->code, __LINE__);
+                        if (unlikely(memcmp((void *)jump_entry_code(entry),
+                                            ideal_nop, 5) != 0))
+                                bug_at((void *)jump_entry_code(entry),
+                                       __LINE__);
                 }
 
                 code.jump = 0xe9;
-                code.offset = entry->target -
-                        (entry->code + JUMP_LABEL_NOP_SIZE);
+                code.offset = jump_entry_target(entry) -
+                              (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
         } else {
                 /*
                  * We are disabling this jump label. If it is not what
@@ -76,14 +78,18 @@ static void __jump_label_transform(struct jump_entry *entry,
                  * are converting the default nop to the ideal nop.
                  */
                 if (init) {
-                        if (unlikely(memcmp((void *)entry->code, default_nop, 5) != 0))
-                                bug_at((void *)entry->code, __LINE__);
+                        if (unlikely(memcmp((void *)jump_entry_code(entry),
+                                            default_nop, 5) != 0))
+                                bug_at((void *)jump_entry_code(entry),
+                                       __LINE__);
                 } else {
                         code.jump = 0xe9;
-                        code.offset = entry->target -
-                                (entry->code + JUMP_LABEL_NOP_SIZE);
-                        if (unlikely(memcmp((void *)entry->code, &code, 5) != 0))
-                                bug_at((void *)entry->code, __LINE__);
+                        code.offset = jump_entry_target(entry) -
+                                      (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
+                        if (unlikely(memcmp((void *)jump_entry_code(entry),
+                                            &code, 5) != 0))
+                                bug_at((void *)jump_entry_code(entry),
+                                       __LINE__);
                 }
                 memcpy(&code, ideal_nops[NOP_ATOMIC5], JUMP_LABEL_NOP_SIZE);
         }
@@ -97,10 +103,13 @@ static void __jump_label_transform(struct jump_entry *entry,
          *
          */
         if (poker)
-                (*poker)((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE);
+                (*poker)((void *)jump_entry_code(entry), &code,
+                         JUMP_LABEL_NOP_SIZE);
         else
-                text_poke_bp((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE,
-                             (void *)entry->code + JUMP_LABEL_NOP_SIZE);
+                text_poke_bp((void *)jump_entry_code(entry), &code,
+                             JUMP_LABEL_NOP_SIZE,
+                             (void *)jump_entry_code(entry) +
+                             JUMP_LABEL_NOP_SIZE);
 }
 
 void arch_jump_label_transform(struct jump_entry *entry,
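
For readers following the hunks above: jump_entry_code() and jump_entry_target()
are the generic accessors introduced earlier in this series. Below is a minimal
sketch of what such accessors look like for the existing absolute-reference
layout of struct jump_entry; the struct layout and accessor bodies shown here
are illustrative assumptions for this note, not a copy of the code added by the
other patches in the series.

/*
 * Illustrative sketch only; not part of this patch.  It assumes the
 * pre-existing layout of struct jump_entry, in which code, target and
 * key hold absolute addresses.  Once every call site goes through
 * accessors like these, switching x86 to place-relative references
 * only requires changing the struct layout and the accessor bodies,
 * not callers such as __jump_label_transform() above.
 */
struct jump_entry {
        unsigned long code;     /* address of the 5-byte nop/jmp site */
        unsigned long target;   /* jump target used when the key is enabled */
        unsigned long key;      /* address of the associated static key */
};

static inline unsigned long jump_entry_code(const struct jump_entry *entry)
{
        return entry->code;
}

static inline unsigned long jump_entry_target(const struct jump_entry *entry)
{
        return entry->target;
}

A place-relative variant could then, for instance, store a signed offset in
code and have jump_entry_code() return (unsigned long)&entry->code + entry->code,
leaving the arch code in this patch untouched.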