From patchwork Thu Aug 5 00:52:04 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513674
Subject: [PATCH v4 01/15] x86/mm: Move force_dma_unencrypted() to common code
Date: Wed, 4 Aug 2021 17:52:04 -0700
Message-Id: <20210805005218.2912076-2-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
From: "Kirill A. Shutemov"

Intel TDX doesn't allow the VMM to access guest private memory. Any
memory that is required for communication with the VMM must be shared
explicitly by setting the Shared bit in the page table entry. After
setting the Shared bit, the conversion must be completed with the
MapGPA hypercall. Details about the MapGPA hypercall can be found in
[1], sec 3.2. The call informs the VMM about the conversion between
private/shared mappings.

Shared memory is similar to unencrypted memory in AMD SME/SEV
terminology, but the underlying process of sharing/un-sharing the
memory is different for the Intel TDX guest platform. SEV assumes that
I/O devices can only do DMA to "decrypted" physical addresses without
the C-bit set.
In order for the CPU to interact with this memory, the CPU needs a
decrypted mapping. To support this, the AMD SEV code forces
force_dma_unencrypted() to return true on platforms with the AMD SEV
feature, which makes the DMA memory allocation API trigger
set_memory_decrypted().

TDX is similar: to communicate with I/O devices, the related pages need
to be marked as shared. As mentioned above, shared memory in the TDX
architecture is similar to decrypted memory in AMD SME/SEV, so, as with
AMD SEV, force_dma_unencrypted() has to be forced to return true. That
support is added in other patches in this series.

So move force_dma_unencrypted() out of the AMD-specific code and call
the AMD-specific function (amd_force_dma_unencrypted()) from it.
force_dma_unencrypted() will be modified by later patches to include
Intel TDX guest platform-specific handling.

Also, introduce a new config option, X86_MEM_ENCRYPT_COMMON, that has
to be selected by all x86 memory encryption features. It will be
selected by both the AMD SEV and Intel TDX guest config options.

This is preparation for the TDX changes in the DMA code and has no
functional change.

[1] https://software.intel.com/content/dam/develop/external/us/en/documents/intel-tdx-guest-hypervisor-communication-interface.pdf

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Reviewed-by: Tony Luck
Signed-off-by: Kuppuswamy Sathyanarayanan
---
Change since v3:
 * None

Changes since v1:
 * Removed sev_active(), sme_active() checks in force_dma_unencrypted().

 arch/x86/Kconfig                          |  8 ++++++--
 arch/x86/include/asm/mem_encrypt_common.h | 18 ++++++++++++++++++
 arch/x86/mm/Makefile                      |  2 ++
 arch/x86/mm/mem_encrypt.c                 |  3 ++-
 arch/x86/mm/mem_encrypt_common.c          | 17 +++++++++++++++++
 5 files changed, 45 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/include/asm/mem_encrypt_common.h
 create mode 100644 arch/x86/mm/mem_encrypt_common.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b500f2afacce..d66a8a2f3c97 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1524,16 +1524,20 @@ config X86_CPA_STATISTICS
 	  helps to determine the effectiveness of preserving large and huge
 	  page mappings when mapping protections are changed.

+config X86_MEM_ENCRYPT_COMMON
+	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+	select DYNAMIC_PHYSICAL_MASK
+	def_bool n
+
 config AMD_MEM_ENCRYPT
 	bool "AMD Secure Memory Encryption (SME) support"
 	depends on X86_64 && CPU_SUP_AMD
 	select DMA_COHERENT_POOL
-	select DYNAMIC_PHYSICAL_MASK
 	select ARCH_USE_MEMREMAP_PROT
-	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
 	select INSTRUCTION_DECODER
 	select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
 	select ARCH_HAS_PROTECTED_GUEST
+	select X86_MEM_ENCRYPT_COMMON
 	help
 	  Say yes to enable support for the encryption of system memory.
	  This requires an AMD processor that supports Secure Memory

diff --git a/arch/x86/include/asm/mem_encrypt_common.h b/arch/x86/include/asm/mem_encrypt_common.h
new file mode 100644
index 000000000000..697bc40a4e3d
--- /dev/null
+++ b/arch/x86/include/asm/mem_encrypt_common.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2020 Intel Corporation */
+#ifndef _ASM_X86_MEM_ENCRYPT_COMMON_H
+#define _ASM_X86_MEM_ENCRYPT_COMMON_H
+
+#include
+#include
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+bool amd_force_dma_unencrypted(struct device *dev);
+#else /* CONFIG_AMD_MEM_ENCRYPT */
+static inline bool amd_force_dma_unencrypted(struct device *dev)
+{
+	return false;
+}
+#endif /* CONFIG_AMD_MEM_ENCRYPT */
+
+#endif

diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5864219221ca..b31cb52bf1bd 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -52,6 +52,8 @@ obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)	+= pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)			+= kaslr.o
 obj-$(CONFIG_PAGE_TABLE_ISOLATION)		+= pti.o

+obj-$(CONFIG_X86_MEM_ENCRYPT_COMMON)	+= mem_encrypt_common.o
+
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_identity.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 7d3b2c6f5f88..1f7a72ce9d66 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include

 #include "mm_internal.h"

@@ -415,7 +416,7 @@ bool amd_prot_guest_has(unsigned int attr)
 EXPORT_SYMBOL(amd_prot_guest_has);

 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
-bool force_dma_unencrypted(struct device *dev)
+bool amd_force_dma_unencrypted(struct device *dev)
 {
	/*
	 * For SEV, all DMA must be to unencrypted addresses.
diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
new file mode 100644
index 000000000000..f063c885b0a5
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Memory Encryption Support Common Code
+ *
+ * Copyright (C) 2021 Intel Corporation
+ *
+ * Author: Kuppuswamy Sathyanarayanan
+ */
+
+#include
+#include
+
+/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
+bool force_dma_unencrypted(struct device *dev)
+{
+	return amd_force_dma_unencrypted(dev);
+}

From patchwork Thu Aug 5 00:52:05 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513675
Subject: [PATCH v4 02/15] x86/tdx: Exclude Shared bit from physical_mask
Date: Wed, 4 Aug 2021 17:52:05 -0700
Message-Id: <20210805005218.2912076-3-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
From: "Kirill A. Shutemov"

Just like MKTME, TDX reassigns bits of the physical address for
metadata. MKTME used several bits for an encryption KeyID.
TDX uses a single bit in guests to communicate whether a physical page
should be protected by TDX as private memory (bit set to 0) or
unprotected and shared with the VMM (bit set to 1).

Add a helper, tdg_shared_mask(), to generate the mask. The processor
enumerates its physical address width to include the shared bit, which
means it gets included in __PHYSICAL_MASK by default.

Remove the shared mask from 'physical_mask' since any bits in
tdg_shared_mask() are not used for physical addresses in page table
entries.

Also, note that the shared mapping configuration cannot be combined for
AMD SME and Intel TDX guest platforms in a common function. SME has to
do it very early in __startup_64() as it sets the bit on all memory,
except what is used for communication. TDX can postpone it, as it
doesn't need any shared mappings in very early boot.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Reviewed-by: Tony Luck
Signed-off-by: Kuppuswamy Sathyanarayanan
---
Changes since v3:
 * None

Changes since v1:
 * Fixed format issues in commit log.

 arch/x86/Kconfig           | 1 +
 arch/x86/include/asm/tdx.h | 4 ++++
 arch/x86/kernel/tdx.c      | 9 +++++++++
 3 files changed, 14 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d66a8a2f3c97..8eada36694b6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -872,6 +872,7 @@ config INTEL_TDX_GUEST
 	select X86_X2APIC
 	select SECURITY_LOCKDOWN_LSM
 	select ARCH_HAS_PROTECTED_GUEST
+	select X86_MEM_ENCRYPT_COMMON
 	help
 	  Provide support for running in a trusted domain on Intel processors
 	  equipped with Trusted Domain eXtensions. TDX is a new Intel

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 72154d3f63c2..1e2a1c6a1898 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -77,6 +77,8 @@ int tdg_handle_virtualization_exception(struct pt_regs *regs,

 bool tdg_early_handle_ve(struct pt_regs *regs);

+extern phys_addr_t tdg_shared_mask(void);
+
 /*
  * To support I/O port access in decompressor or early kernel init
  * code, since #VE exception handler cannot be used, use paravirt
@@ -145,6 +147,8 @@ static inline bool tdx_prot_guest_has(unsigned long flag) { return false; }

 static inline bool tdg_early_handle_ve(struct pt_regs *regs) { return false; }

+static inline phys_addr_t tdg_shared_mask(void) { return 0; }
+
 #endif /* CONFIG_INTEL_TDX_GUEST */

 #ifdef CONFIG_INTEL_TDX_GUEST_KVM

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 0c24439774b4..d316fe33f52f 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -75,6 +75,12 @@ bool tdx_prot_guest_has(unsigned long flag)
 }
 EXPORT_SYMBOL_GPL(tdx_prot_guest_has);

+/* The highest bit of a guest physical address is the "sharing" bit */
+phys_addr_t tdg_shared_mask(void)
+{
+	return 1ULL << (td_info.gpa_width - 1);
+}
+
 static void tdg_get_info(void)
 {
 	u64 ret;
@@ -86,6 +92,9 @@ static void tdg_get_info(void)

 	td_info.gpa_width = out.rcx & GENMASK(5, 0);
 	td_info.attributes = out.rdx;
+
+	/* Exclude Shared bit from the __PHYSICAL_MASK */
+	physical_mask &= ~tdg_shared_mask();
 }

 static __cpuidle void tdg_halt(void)

From patchwork Thu Aug 5 00:52:06 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513676
Subject: [PATCH v4 03/15] x86/tdx: Make pages shared in ioremap()
Date: Wed, 4 Aug 2021 17:52:06 -0700
Message-Id: <20210805005218.2912076-4-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
From: "Kirill A. Shutemov"

All ioremap()ed pages that are not backed by normal memory (NONE or
RESERVED) have to be mapped as shared. Reuse the infrastructure from
the AMD SEV code.

Note that the DMA code doesn't use ioremap() to convert memory to
shared, as DMA buffers are backed by normal memory. The DMA code makes
buffers shared with set_memory_decrypted().

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Reviewed-by: Tony Luck
Signed-off-by: Kuppuswamy Sathyanarayanan
---
Changes since v3:
 * Rebased on top of Tom Lendacky's protected guest changes
   (https://lore.kernel.org/patchwork/cover/1468760/)

Changes since v1:
 * Fixed format issues in commit log.
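For reference, the net effect of the ioremap() hunk below for a TDX
guest can be sketched as follows (a sketch only, using the helpers
introduced earlier in this series, not the literal diff):

	/*
	 * Sketch: for non-RAM ranges in a TDX guest, the Shared bit is
	 * ORed into the protection bits, so the VMM can emulate
	 * accesses to the mapping.
	 */
	pgprot_t prot = PAGE_KERNEL_IO;

	if (prot_guest_has(PATTR_GUEST_SHARED_MAPPING_INIT))
		prot = __pgprot(pgprot_val(prot) | tdg_shared_mask());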
 arch/x86/include/asm/pgtable.h  | 4 ++++
 arch/x86/mm/ioremap.c           | 8 ++++++--
 include/linux/protected_guest.h | 1 +
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 448cd01eb3ec..2d4d518651d2 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -21,6 +21,10 @@
 #define pgprot_encrypted(prot)	__pgprot(__sme_set(pgprot_val(prot)))
 #define pgprot_decrypted(prot)	__pgprot(__sme_clr(pgprot_val(prot)))

+/* Make the page accessible to the VMM for protected guests */
+#define pgprot_protected_guest(prot) __pgprot(pgprot_val(prot) |	\
+					      tdg_shared_mask())
+
 #ifndef __ASSEMBLY__
 #include
 #include

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 60ade7dd71bd..69a60f240124 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -26,6 +27,7 @@
 #include
 #include
 #include
+#include

 #include "physaddr.h"

@@ -87,8 +89,8 @@ static unsigned int __ioremap_check_ram(struct resource *res)
 }

 /*
- * In a SEV guest, NONE and RESERVED should not be mapped encrypted because
- * there the whole memory is already encrypted.
+ * In a SEV or TDX guest, NONE and RESERVED should not be mapped encrypted (or
+ * private in TDX case) because there the whole memory is already encrypted.
 */
 static unsigned int __ioremap_check_encrypted(struct resource *res)
 {
@@ -246,6 +248,8 @@ __ioremap_caller(resource_size_t phys_addr, unsigned long size,
 	prot = PAGE_KERNEL_IO;
 	if ((io_desc.flags & IORES_MAP_ENCRYPTED) || encrypted)
 		prot = pgprot_encrypted(prot);
+	else if (prot_guest_has(PATTR_GUEST_SHARED_MAPPING_INIT))
+		prot = pgprot_protected_guest(prot);

 	switch (pcm) {
 	case _PAGE_CACHE_MODE_UC:

diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
index 1a350c3fcfe2..14255f801100 100644
--- a/include/linux/protected_guest.h
+++ b/include/linux/protected_guest.h
@@ -17,6 +17,7 @@
 #define PATTR_GUEST_MEM_ENCRYPT		2	/* Guest encrypted memory */
 #define PATTR_GUEST_PROT_STATE		3	/* Guest encrypted state */
 #define PATTR_GUEST_UNROLL_STRING_IO	4	/* Unrolled string IO */
+#define PATTR_GUEST_SHARED_MAPPING_INIT	5	/* Late shared mapping init */

 /* 0x800 - 0x8ff reserved for AMD */
 #define PATTR_SME			0x800

From patchwork Thu Aug 5 00:52:07 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513687
Subject: [PATCH v4 04/15] x86/tdx: Add helper to do MapGPA hypercall
Date: Wed, 4 Aug 2021 17:52:07 -0700
Message-Id: <20210805005218.2912076-5-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
From: "Kirill A. Shutemov"

The MapGPA hypercall is used by TDX guests to request that the VMM
convert the existing mapping of a given GPA address range between
private and shared. tdx_hcall_gpa_intent() is the wrapper used for
making the MapGPA hypercall.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Reviewed-by: Tony Luck
Signed-off-by: Kuppuswamy Sathyanarayanan
---
Changes since v3:
 * None

Changes since v1:
 * Modified tdx_hcall_gpa_intent() to use _tdx_hypercall() instead of
   tdx_hypercall().

 arch/x86/include/asm/tdx.h | 18 ++++++++++++++++++
 arch/x86/kernel/tdx.c      | 25 +++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 1e2a1c6a1898..665c8cf57d5b 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -56,6 +56,15 @@ struct ve_info {
 	u32 instr_info;
 };

+/*
+ * Page mapping type enum. This is a software construct, not
+ * part of any hardware or VMM ABI.
+ */
+enum tdx_map_type {
+	TDX_MAP_PRIVATE,
+	TDX_MAP_SHARED,
+};
+
 #ifdef CONFIG_INTEL_TDX_GUEST

 void __init tdx_early_init(void);
@@ -79,6 +88,9 @@ bool tdg_early_handle_ve(struct pt_regs *regs);

 extern phys_addr_t tdg_shared_mask(void);

+extern int tdx_hcall_gpa_intent(phys_addr_t gpa, int numpages,
+				enum tdx_map_type map_type);
+
 /*
  * To support I/O port access in decompressor or early kernel init
  * code, since #VE exception handler cannot be used, use paravirt
@@ -149,6 +161,12 @@ static inline bool tdg_early_handle_ve(struct pt_regs *regs) { return false; }

 static inline phys_addr_t tdg_shared_mask(void) { return 0; }

+static inline int tdx_hcall_gpa_intent(phys_addr_t gpa, int numpages,
+				       enum tdx_map_type map_type)
+{
+	return -ENODEV;
+}
+
 #endif /* CONFIG_INTEL_TDX_GUEST */

 #ifdef CONFIG_INTEL_TDX_GUEST_KVM

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index d316fe33f52f..ccbad600a422 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -18,6 +18,9 @@
 #define TDINFO			1
 #define TDGETVEINFO		3

+/* TDX hypercall Leaf IDs */
+#define TDVMCALL_MAP_GPA	0x10001
+
 #define VE_IS_IO_OUT(exit_qual)	(((exit_qual) & 8) ? 0 : 1)
 #define VE_GET_IO_SIZE(exit_qual)	(((exit_qual) & 7) + 1)
 #define VE_GET_PORT_NUM(exit_qual)	((exit_qual) >> 16)
@@ -97,6 +100,28 @@ static void tdg_get_info(void)
 	physical_mask &= ~tdg_shared_mask();
 }

+/*
+ * Inform the VMM of the guest's intent for this physical page:
+ * shared with the VMM or private to the guest. The VMM is
+ * expected to change its mapping of the page in response.
+ *
+ * Note: shared->private conversions require further guest
+ * action to accept the page.
+ */
+int tdx_hcall_gpa_intent(phys_addr_t gpa, int numpages,
+			 enum tdx_map_type map_type)
+{
+	u64 ret;
+
+	if (map_type == TDX_MAP_SHARED)
+		gpa |= tdg_shared_mask();
+
+	ret = _tdx_hypercall(TDVMCALL_MAP_GPA, gpa, PAGE_SIZE * numpages, 0, 0,
+			     NULL);
+
+	return ret ? -EIO : 0;
+}
+
 static __cpuidle void tdg_halt(void)
 {
 	u64 ret;

From patchwork Thu Aug 5 00:52:08 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513681
Subject: [PATCH v4 05/15] x86/tdx: Make DMA pages shared
Date: Wed, 4 Aug 2021 17:52:08 -0700
Message-Id: <20210805005218.2912076-6-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
From: "Kirill A. Shutemov"

Just like MKTME, TDX reassigns bits of the physical address for
metadata. MKTME used several bits for an encryption KeyID. TDX uses a
single bit in guests to communicate whether a physical page should be
protected by TDX as private memory (bit set to 0) or unprotected and
shared with the VMM (bit set to 1).

__set_memory_enc_dec() is now aware of TDX and sets the Shared bit
accordingly, following it with the relevant TDX hypercall.

Also, do TDACCEPTPAGE on every 4k page after mapping the GPA range
when converting memory to private; the 4k page size limit is a
restriction of the current TDX spec. A sketch of the resulting
conversion sequence follows below.
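As a reading aid, the private-conversion sequence can be sketched as
follows (tdx_make_private() is a hypothetical name for illustration;
error handling trimmed; the real logic lives in tdx_hcall_gpa_intent()
in the diff below):

	/* Sketch: convert [gpa, gpa + numpages * 4k) to guest-private. */
	static int tdx_make_private(phys_addr_t gpa, int numpages)
	{
		int i;

		/* Ask the VMM to remap the GPA range as private... */
		if (_tdx_hypercall(TDVMCALL_MAP_GPA, gpa,
				   PAGE_SIZE * numpages, 0, 0, NULL))
			return -EIO;

		/* ...then accept each 4k page with the TDX module. */
		for (i = 0; i < numpages; i++)
			tdg_accept_page(gpa + i * PAGE_SIZE);

		return 0;
	}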
Also, if the GPA (range) was already mapped as an active, private page,
the host VMM may remove the private page from the TD by following the
"Removing TD Private Pages" sequence in the Intel TDX-module
specification [1] to safely block the mapping(s), flush the TLB and
cache, and remove the mapping(s).

BUG() if TDACCEPTPAGE fails (except the "previously accepted page"
case), as the guest is completely hosed if it can't access memory.

[1] https://software.intel.com/content/dam/develop/external/us/en/documents/tdx-module-1eas-v0.85.039.pdf

Tested-by: Kai Huang
Signed-off-by: Kirill A. Shutemov
Signed-off-by: Sean Christopherson
Reviewed-by: Andi Kleen
Reviewed-by: Tony Luck
Signed-off-by: Kuppuswamy Sathyanarayanan
---
Changes since v3:
 * Rebased on top of Tom Lendacky's protected guest changes
   (https://lore.kernel.org/patchwork/cover/1468760/)
 * Fixed TDX_PAGE_ALREADY_ACCEPTED error code as per latest spec update.

Changes since v1:
 * Removed "we" or "I" usages in comment section.
 * Replaced is_tdx_guest() checks with prot_guest_has() checks.

 arch/x86/include/asm/pgtable.h   |  1 +
 arch/x86/kernel/tdx.c            | 35 +++++++++++++++++++++----
 arch/x86/mm/mem_encrypt_common.c |  9 ++++++-
 arch/x86/mm/pat/set_memory.c     | 45 +++++++++++++++++++++++++++-----
 4 files changed, 77 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 2d4d518651d2..f341bf6b8b93 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -24,6 +24,7 @@
 /* Make the page accessible to the VMM for protected guests */
 #define pgprot_protected_guest(prot) __pgprot(pgprot_val(prot) |	\
 					      tdg_shared_mask())
+#define pgprot_pg_shared_mask() __pgprot(tdg_shared_mask())

 #ifndef __ASSEMBLY__
 #include

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index ccbad600a422..b91740a485d6 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -17,10 +17,16 @@
 /* TDX Module call Leaf IDs */
 #define TDINFO			1
 #define TDGETVEINFO		3
+#define TDACCEPTPAGE		6

 /* TDX hypercall Leaf IDs */
 #define TDVMCALL_MAP_GPA	0x10001

+/* TDX Module call error codes */
+#define TDX_PAGE_ALREADY_ACCEPTED	0x00000b0a00000000
+#define TDCALL_RETURN_CODE_MASK		0xFFFFFFFF00000000
+#define TDCALL_RETURN_CODE(a)		((a) & TDCALL_RETURN_CODE_MASK)
+
 #define VE_IS_IO_OUT(exit_qual)	(((exit_qual) & 8) ? 0 : 1)
 #define VE_GET_IO_SIZE(exit_qual)	(((exit_qual) & 7) + 1)
 #define VE_GET_PORT_NUM(exit_qual)	((exit_qual) >> 16)
@@ -100,26 +106,45 @@ static void tdg_get_info(void)
 	physical_mask &= ~tdg_shared_mask();
 }

+static void tdg_accept_page(phys_addr_t gpa)
+{
+	u64 ret;
+
+	ret = __tdx_module_call(TDACCEPTPAGE, gpa, 0, 0, 0, NULL);
+
+	BUG_ON(ret && TDCALL_RETURN_CODE(ret) != TDX_PAGE_ALREADY_ACCEPTED);
+}
+
 /*
  * Inform the VMM of the guest's intent for this physical page:
  * shared with the VMM or private to the guest. The VMM is
  * expected to change its mapping of the page in response.
- *
- * Note: shared->private conversions require further guest
- * action to accept the page.
 */
 int tdx_hcall_gpa_intent(phys_addr_t gpa, int numpages,
 			 enum tdx_map_type map_type)
 {
-	u64 ret;
+	u64 ret = 0;
+	int i;

 	if (map_type == TDX_MAP_SHARED)
 		gpa |= tdg_shared_mask();

 	ret = _tdx_hypercall(TDVMCALL_MAP_GPA, gpa, PAGE_SIZE * numpages, 0, 0,
 			     NULL);
+	if (ret)
+		ret = -EIO;

-	return ret ? -EIO : 0;
+	if (ret || map_type == TDX_MAP_SHARED)
+		return ret;
+
+	/*
+	 * For shared->private conversion, accept the page using TDACCEPTPAGE
+	 * TDX module call.
+	 */
+	for (i = 0; i < numpages; i++)
+		tdg_accept_page(gpa + i * PAGE_SIZE);
+
+	return 0;
 }

 static __cpuidle void tdg_halt(void)

diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
index f063c885b0a5..fdaf09b4a658 100644
--- a/arch/x86/mm/mem_encrypt_common.c
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -9,9 +9,16 @@

 #include
 #include
+#include

 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
-	return amd_force_dma_unencrypted(dev);
+	if (prot_guest_has(PATTR_SEV) || prot_guest_has(PATTR_SME))
+		return amd_force_dma_unencrypted(dev);
+
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
+		return true;
+
+	return false;
 }

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index ad8a5c586a35..52c93923b947 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include

 #include "../mm_internal.h"

@@ -1980,9 +1981,11 @@ int set_memory_global(unsigned long addr, int numpages)
 				    __pgprot(_PAGE_GLOBAL), 0);
 }

-static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+static int __set_memory_protect(unsigned long addr, int numpages, bool protect)
 {
+	pgprot_t mem_protected_bits, mem_plain_bits;
 	struct cpa_data cpa;
+	enum tdx_map_type map_type;
 	int ret;

 	/* Nothing to do if memory encryption is not active */
@@ -1996,8 +1999,25 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 	memset(&cpa, 0, sizeof(cpa));
 	cpa.vaddr = &addr;
 	cpa.numpages = numpages;
-	cpa.mask_set = enc ? __pgprot(_PAGE_ENC) : __pgprot(0);
-	cpa.mask_clr = enc ? __pgprot(0) : __pgprot(_PAGE_ENC);
+
+	if (prot_guest_has(PATTR_GUEST_SHARED_MAPPING_INIT)) {
+		mem_protected_bits = __pgprot(0);
+		mem_plain_bits = pgprot_pg_shared_mask();
+	} else {
+		mem_protected_bits = __pgprot(_PAGE_ENC);
+		mem_plain_bits = __pgprot(0);
+	}
+
+	if (protect) {
+		cpa.mask_set = mem_protected_bits;
+		cpa.mask_clr = mem_plain_bits;
+		map_type = TDX_MAP_PRIVATE;
+	} else {
+		cpa.mask_set = mem_plain_bits;
+		cpa.mask_clr = mem_protected_bits;
+		map_type = TDX_MAP_SHARED;
+	}
+
 	cpa.pgd = init_mm.pgd;

 	/* Must avoid aliasing mappings in the highmem code */
@@ -2005,9 +2025,17 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 	vm_unmap_aliases();

 	/*
-	 * Before changing the encryption attribute, we need to flush caches.
+	 * Before changing the encryption attribute, flush caches.
+	 *
+	 * For TDX, guest is responsible for flushing caches on private->shared
+	 * transition. VMM is responsible for flushing on shared->private.
	 */
-	cpa_flush(&cpa, !this_cpu_has(X86_FEATURE_SME_COHERENT));
+	if (prot_guest_has(PATTR_GUEST_TDX)) {
+		if (map_type == TDX_MAP_SHARED)
+			cpa_flush(&cpa, 1);
+	} else {
+		cpa_flush(&cpa, !this_cpu_has(X86_FEATURE_SME_COHERENT));
+	}

 	ret = __change_page_attr_set_clr(&cpa, 1);

@@ -2020,18 +2048,21 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
	 */
 	cpa_flush(&cpa, 0);

+	if (!ret && prot_guest_has(PATTR_GUEST_SHARED_MAPPING_INIT))
+		ret = tdx_hcall_gpa_intent(__pa(addr), numpages, map_type);
+
 	return ret;
 }

 int set_memory_encrypted(unsigned long addr, int numpages)
 {
-	return __set_memory_enc_dec(addr, numpages, true);
+	return __set_memory_protect(addr, numpages, true);
 }
 EXPORT_SYMBOL_GPL(set_memory_encrypted);

 int set_memory_decrypted(unsigned long addr, int numpages)
 {
-	return __set_memory_enc_dec(addr, numpages, false);
+	return __set_memory_protect(addr, numpages, false);
 }
 EXPORT_SYMBOL_GPL(set_memory_decrypted);

From patchwork Thu Aug 5 00:52:09 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513684
Tsirkin" Cc: Peter H Anvin , Dave Hansen , Tony Luck , Dan Williams , Andi Kleen , Kirill Shutemov , Sean Christopherson , Kuppuswamy Sathyanarayanan , x86@kernel.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org Subject: [PATCH v4 06/15] x86/kvm: Use bounce buffers for TD guest Date: Wed, 4 Aug 2021 17:52:09 -0700 Message-Id: <20210805005218.2912076-7-sathyanarayanan.kuppuswamy@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> References: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: sparclinux@vger.kernel.org From: "Kirill A. Shutemov" Intel TDX doesn't allow VMM to directly access guest private memory. Any memory that is required for communication with VMM must be shared explicitly. The same rule applies for any any DMA to and fromTDX guest. All DMA pages had to marked as shared pages. A generic way to achieve this without any changes to device drivers is to use the SWIOTLB framework. This method of handling is similar to AMD SEV. So extend this support for TDX guest as well. Also since there are some common code between AMD SEV and TDX guest in mem_encrypt_init(), move it to mem_encrypt_common.c and call AMD specific init function from it Signed-off-by: Kirill A. Shutemov Reviewed-by: Andi Kleen Reviewed-by: Tony Luck Signed-off-by: Kuppuswamy Sathyanarayanan --- Changes since v3: * Rebased on top of Tom Lendacky's protected guest changes (https://lore.kernel.org/patchwork/cover/1468760/) Changes since v1: * Removed sme_me_mask check for amd_mem_encrypt_init() in mem_encrypt_init(). 
 arch/x86/include/asm/mem_encrypt_common.h |  2 ++
 arch/x86/kernel/tdx.c                     |  3 +++
 arch/x86/mm/mem_encrypt.c                 |  5 +----
 arch/x86/mm/mem_encrypt_common.c          | 14 ++++++++++++++
 4 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt_common.h b/arch/x86/include/asm/mem_encrypt_common.h
index 697bc40a4e3d..48d98a3d64fd 100644
--- a/arch/x86/include/asm/mem_encrypt_common.h
+++ b/arch/x86/include/asm/mem_encrypt_common.h
@@ -8,11 +8,13 @@

 #ifdef CONFIG_AMD_MEM_ENCRYPT
 bool amd_force_dma_unencrypted(struct device *dev);
+void __init amd_mem_encrypt_init(void);
 #else /* CONFIG_AMD_MEM_ENCRYPT */
 static inline bool amd_force_dma_unencrypted(struct device *dev)
 {
 	return false;
 }
+static inline void amd_mem_encrypt_init(void) {}
 #endif /* CONFIG_AMD_MEM_ENCRYPT */

 #endif

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index b91740a485d6..01b758496e84 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include	/* force_sig_fault() */
+#include

 /* TDX Module call Leaf IDs */
 #define TDINFO			1
@@ -517,6 +518,8 @@ void __init tdx_early_init(void)

 	legacy_pic = &null_legacy_pic;

+	swiotlb_force = SWIOTLB_FORCE;
+
 	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "tdg:cpu_hotplug",
 			  NULL, tdg_cpu_offline_prepare);

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 1f7a72ce9d66..cab68d8cc5b0 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -491,14 +491,11 @@ static void print_mem_encrypt_feature_info(void)
 }

 /* Architecture __weak replacement functions */
-void __init mem_encrypt_init(void)
+void __init amd_mem_encrypt_init(void)
 {
 	if (!sme_me_mask)
 		return;

-	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
-	swiotlb_update_mem_attributes();
-
 	/*
	 * With SEV, we need to unroll the rep string I/O instructions,
	 * but SEV-ES supports them through the #VC handler.
diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
index fdaf09b4a658..2ba19476dc26 100644
--- a/arch/x86/mm/mem_encrypt_common.c
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include

 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
@@ -22,3 +23,16 @@ bool force_dma_unencrypted(struct device *dev)

 	return false;
 }
+
+/* Architecture __weak replacement functions */
+void __init mem_encrypt_init(void)
+{
+	/*
+	 * For TDX guest or SEV/SME, call into SWIOTLB to update
+	 * the SWIOTLB DMA buffers
+	 */
+	if (sme_me_mask || prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
+		swiotlb_update_mem_attributes();
+
+	amd_mem_encrypt_init();
+}

From patchwork Thu Aug 5 00:52:10 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513685
Tsirkin" Cc: Peter H Anvin , Dave Hansen , Tony Luck , Dan Williams , Andi Kleen , Kirill Shutemov , Sean Christopherson , Kuppuswamy Sathyanarayanan , x86@kernel.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org Subject: [PATCH v4 07/15] x86/tdx: ioapic: Add shared bit for IOAPIC base address Date: Wed, 4 Aug 2021 17:52:10 -0700 Message-Id: <20210805005218.2912076-8-sathyanarayanan.kuppuswamy@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> References: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: sparclinux@vger.kernel.org From: Isaku Yamahata The kernel interacts with each bare-metal IOAPIC with a special MMIO page. When running under KVM, the guest's IOAPICs are emulated by KVM. When running as a TDX guest, the guest needs to mark each IOAPIC mapping as "shared" with the host. This ensures that TDX private protections are not applied to the page, which allows the TDX host emulation to work. Earlier patches in this series modified ioremap() so that ioremap()-created mappings such as virtio will be marked as shared. However, the IOAPIC code does not use ioremap() and instead uses the fixmap mechanism. Introduce a special fixmap helper just for the IOAPIC code. Ensure that it marks IOAPIC pages as "shared". This replaces set_fixmap_nocache() with __set_fixmap() since __set_fixmap() allows custom 'prot' values. Signed-off-by: Isaku Yamahata Reviewed-by: Andi Kleen Reviewed-by: Tony Luck Signed-off-by: Kuppuswamy Sathyanarayanan --- Changes since v3: * Rebased on top of Tom Lendacky's protected guest changes (https://lore.kernel.org/patchwork/cover/1468760/) arch/x86/kernel/apic/io_apic.c | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c index d5c691a3208b..5154efe8c4f7 100644 --- a/arch/x86/kernel/apic/io_apic.c +++ b/arch/x86/kernel/apic/io_apic.c @@ -49,6 +49,7 @@ #include #include #include +#include #include #include @@ -65,6 +66,7 @@ #include #include #include +#include #define for_each_ioapic(idx) \ for ((idx) = 0; (idx) < nr_ioapics; (idx)++) @@ -2675,6 +2677,18 @@ static struct resource * __init ioapic_setup_resources(void) return res; } +static void io_apic_set_fixmap_nocache(enum fixed_addresses idx, + phys_addr_t phys) +{ + pgprot_t flags = FIXMAP_PAGE_NOCACHE; + + /* Set TDX guest shared bit in pgprot flags */ + if (prot_guest_has(PATTR_GUEST_SHARED_MAPPING_INIT)) + flags = pgprot_protected_guest(flags); + + __set_fixmap(idx, phys, flags); +} + void __init io_apic_init_mappings(void) { unsigned long ioapic_phys, idx = FIX_IO_APIC_BASE_0; @@ -2707,7 +2721,7 @@ void __init io_apic_init_mappings(void) __func__, PAGE_SIZE, PAGE_SIZE); ioapic_phys = __pa(ioapic_phys); } - set_fixmap_nocache(idx, ioapic_phys); + io_apic_set_fixmap_nocache(idx, ioapic_phys); apic_printk(APIC_VERBOSE, "mapped IOAPIC to %08lx (%08lx)\n", __fix_to_virt(idx) + (ioapic_phys & ~PAGE_MASK), ioapic_phys); @@ -2836,7 +2850,7 @@ int mp_register_ioapic(int id, u32 address, u32 gsi_base, ioapics[idx].mp_config.flags = MPC_APIC_USABLE; ioapics[idx].mp_config.apicaddr = address; - set_fixmap_nocache(FIX_IO_APIC_BASE_0 + idx, address); 
+	io_apic_set_fixmap_nocache(FIX_IO_APIC_BASE_0 + idx, address);
 	if (bad_ioapic_register(idx)) {
 		clear_fixmap(FIX_IO_APIC_BASE_0 + idx);
 		return -ENODEV;

From patchwork Thu Aug 5 00:52:11 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513686
Subject: [PATCH v4 08/15] x86/tdx: Enable shared memory protected guest flags for TDX guest
Date: Wed, 4 Aug 2021 17:52:11 -0700
Message-Id: <20210805005218.2912076-9-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>

In a TDX guest, since the memory is private to the guest, it needs
some extra configuration before sharing any data with the VMM. AMD SEV
also implements similar features, so the code can be shared.

Currently, the memory sharing related code in the kernel is protected
by the PATTR_GUEST_MEM_ENCRYPT and PATTR_GUEST_SHARED_MAPPING_INIT
flags, so enable them for TDX guests as well.
Signed-off-by: Kuppuswamy Sathyanarayanan
---
 arch/x86/kernel/tdx.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 01b758496e84..1cf2443edb90 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -78,6 +78,8 @@ bool tdx_prot_guest_has(unsigned long flag)
 	switch (flag) {
 	case PATTR_GUEST_TDX:
 	case PATTR_GUEST_UNROLL_STRING_IO:
+	case PATTR_GUEST_MEM_ENCRYPT:
+	case PATTR_GUEST_SHARED_MAPPING_INIT:
 		return cpu_feature_enabled(X86_FEATURE_TDX_GUEST);
 	}

From patchwork Thu Aug 5 00:52:12 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513689
Subject: [PATCH v4 09/15] pci: Consolidate pci_iomap* and pci_iomap*wc
Date: Wed, 4 Aug 2021 17:52:12 -0700
Message-Id: <20210805005218.2912076-10-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
From: Andi Kleen

pci_iomap* and pci_iomap*wc are currently duplicated code, except that
the _wc variant does not support IO ports. Replace them with a common
helper and a callback for the mapping.
I used wrappers for the maps because some architectures implement
ioremap and friends with macros. This will allow adding more variants
without excessive code duplication.

This patch should have no behavior change.

Signed-off-by: Andi Kleen
Signed-off-by: Kuppuswamy Sathyanarayanan
---
 lib/pci_iomap.c | 81 +++++++++++++++++++++++++++----------------------
 1 file changed, 44 insertions(+), 37 deletions(-)

diff --git a/lib/pci_iomap.c b/lib/pci_iomap.c
index 2d3eb1cb73b8..6251c3f651c6 100644
--- a/lib/pci_iomap.c
+++ b/lib/pci_iomap.c
@@ -10,6 +10,46 @@
 #include

 #ifdef CONFIG_PCI
+
+/*
+ * Callback wrappers because some architectures define ioremap et al.
+ * as macros.
+ */
+static void __iomem *map_ioremap(phys_addr_t addr, size_t size)
+{
+	return ioremap(addr, size);
+}
+
+static void __iomem *map_ioremap_wc(phys_addr_t addr, size_t size)
+{
+	return ioremap_wc(addr, size);
+}
+
+static void __iomem *pci_iomap_range_map(struct pci_dev *dev,
+					 int bar,
+					 unsigned long offset,
+					 unsigned long maxlen,
+					 void __iomem *(*mapm)(phys_addr_t,
+							       size_t))
+{
+	resource_size_t start = pci_resource_start(dev, bar);
+	resource_size_t len = pci_resource_len(dev, bar);
+	unsigned long flags = pci_resource_flags(dev, bar);
+
+	if (len <= offset || !start)
+		return NULL;
+	len -= offset;
+	start += offset;
+	if (maxlen && len > maxlen)
+		len = maxlen;
+	if (flags & IORESOURCE_IO)
+		return __pci_ioport_map(dev, start, len);
+	if (flags & IORESOURCE_MEM)
+		return mapm(start, len);
+	/* What? */
+	return NULL;
+}
+
 /**
  * pci_iomap_range - create a virtual mapping cookie for a PCI BAR
  * @dev: PCI device that owns the BAR
@@ -30,22 +70,8 @@ void __iomem *pci_iomap_range(struct pci_dev *dev,
 			      unsigned long offset,
 			      unsigned long maxlen)
 {
-	resource_size_t start = pci_resource_start(dev, bar);
-	resource_size_t len = pci_resource_len(dev, bar);
-	unsigned long flags = pci_resource_flags(dev, bar);
-
-	if (len <= offset || !start)
-		return NULL;
-	len -= offset;
-	start += offset;
-	if (maxlen && len > maxlen)
-		len = maxlen;
-	if (flags & IORESOURCE_IO)
-		return __pci_ioport_map(dev, start, len);
-	if (flags & IORESOURCE_MEM)
-		return ioremap(start, len);
-	/* What? */
-	return NULL;
+	return pci_iomap_range_map(dev, bar, offset, maxlen,
+				   map_ioremap);
 }
 EXPORT_SYMBOL(pci_iomap_range);

@@ -70,27 +96,8 @@ void __iomem *pci_iomap_wc_range(struct pci_dev *dev,
 				 unsigned long offset,
 				 unsigned long maxlen)
 {
-	resource_size_t start = pci_resource_start(dev, bar);
-	resource_size_t len = pci_resource_len(dev, bar);
-	unsigned long flags = pci_resource_flags(dev, bar);
-
-
-	if (flags & IORESOURCE_IO)
-		return NULL;
-
-	if (len <= offset || !start)
-		return NULL;
-
-	len -= offset;
-	start += offset;
-	if (maxlen && len > maxlen)
-		len = maxlen;
-
-	if (flags & IORESOURCE_MEM)
-		return ioremap_wc(start, len);
-
-	/* What? */
*/ - return NULL; + return pci_iomap_range_map(dev, bar, offset, maxlen, + map_ioremap_wc); } EXPORT_SYMBOL_GPL(pci_iomap_wc_range); From patchwork Thu Aug 5 00:52:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan X-Patchwork-Id: 1513691 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=sparclinux-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4Gg99D3hwqz9sWl for ; Thu, 5 Aug 2021 10:54:08 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237101AbhHEAyT (ORCPT ); Wed, 4 Aug 2021 20:54:19 -0400 Received: from mga09.intel.com ([134.134.136.24]:27472 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237104AbhHEAyF (ORCPT ); Wed, 4 Aug 2021 20:54:05 -0400 X-IronPort-AV: E=McAfee;i="6200,9189,10066"; a="214027478" X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="214027478" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:43 -0700 X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="437617269" Received: from mjkendri-mobl.amr.corp.intel.com (HELO skuppusw-desk1.amr.corp.intel.com) ([10.254.17.117]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:40 -0700 From: Kuppuswamy Sathyanarayanan To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Peter Zijlstra , Andy Lutomirski , Bjorn Helgaas , Richard Henderson , Thomas Bogendoerfer , James E J Bottomley , Helge Deller , "David S . Miller" , Arnd Bergmann , Jonathan Corbet , "Michael S . Tsirkin" Cc: Peter H Anvin , Dave Hansen , Tony Luck , Dan Williams , Andi Kleen , Kirill Shutemov , Sean Christopherson , Kuppuswamy Sathyanarayanan , x86@kernel.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org Subject: [PATCH v4 10/15] asm/io.h: Add ioremap_shared fallback Date: Wed, 4 Aug 2021 17:52:13 -0700 Message-Id: <20210805005218.2912076-11-sathyanarayanan.kuppuswamy@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> References: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: sparclinux@vger.kernel.org From: Andi Kleen This function is for declaring memory that should be shared with a hypervisor in a confidential guest. If the architecture doesn't implement it it's just ioremap. 
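A minimal sketch of how the fallback composes for portable code (map_shared_window() is a hypothetical caller, not part of the patch):

#include <linux/io.h>

/*
 * Architectures without a native implementation provide this alias
 * (as the hunks below do), so a "shared" mapping degrades to a plain
 * ioremap() where there is no host to share with.
 */
#ifndef ioremap_shared
#define ioremap_shared ioremap
#endif

/* Hypothetical caller: correct on every architecture; the pages are
 * actually marked shared only on confidential guest platforms. */
static void __iomem *map_shared_window(phys_addr_t phys, size_t len)
{
	return ioremap_shared(phys, len);
}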
Signed-off-by: Andi Kleen Signed-off-by: Kuppuswamy Sathyanarayanan --- arch/alpha/include/asm/io.h | 1 + arch/mips/include/asm/io.h | 1 + arch/parisc/include/asm/io.h | 1 + arch/sparc/include/asm/io_64.h | 1 + include/asm-generic/io.h | 4 ++++ 5 files changed, 8 insertions(+) diff --git a/arch/alpha/include/asm/io.h b/arch/alpha/include/asm/io.h index 0fab5ac90775..701b44909b94 100644 --- a/arch/alpha/include/asm/io.h +++ b/arch/alpha/include/asm/io.h @@ -283,6 +283,7 @@ static inline void __iomem *ioremap(unsigned long port, unsigned long size) } #define ioremap_wc ioremap +#define ioremap_shared ioremap #define ioremap_uc ioremap static inline void iounmap(volatile void __iomem *addr) diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h index 6f5c86d2bab4..3713ff624632 100644 --- a/arch/mips/include/asm/io.h +++ b/arch/mips/include/asm/io.h @@ -179,6 +179,7 @@ void iounmap(const volatile void __iomem *addr); #define ioremap(offset, size) \ ioremap_prot((offset), (size), _CACHE_UNCACHED) #define ioremap_uc ioremap +#define ioremap_shared ioremap /* * ioremap_cache - map bus memory into CPU space diff --git a/arch/parisc/include/asm/io.h b/arch/parisc/include/asm/io.h index 0b5259102319..73064e152df7 100644 --- a/arch/parisc/include/asm/io.h +++ b/arch/parisc/include/asm/io.h @@ -129,6 +129,7 @@ static inline void gsc_writeq(unsigned long long val, unsigned long addr) */ void __iomem *ioremap(unsigned long offset, unsigned long size); #define ioremap_wc ioremap +#define ioremap_shared ioremap #define ioremap_uc ioremap extern void iounmap(const volatile void __iomem *addr); diff --git a/arch/sparc/include/asm/io_64.h b/arch/sparc/include/asm/io_64.h index 5ffa820dcd4d..18cc656eb712 100644 --- a/arch/sparc/include/asm/io_64.h +++ b/arch/sparc/include/asm/io_64.h @@ -409,6 +409,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size) #define ioremap_uc(X,Y) ioremap((X),(Y)) #define ioremap_wc(X,Y) ioremap((X),(Y)) #define ioremap_wt(X,Y) ioremap((X),(Y)) +#define ioremap_shared(X, Y) ioremap((X), (Y)) static inline void __iomem *ioremap_np(unsigned long offset, unsigned long size) { return NULL; diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h index e93375c710b9..bfcaee1691c8 100644 --- a/include/asm-generic/io.h +++ b/include/asm-generic/io.h @@ -982,6 +982,10 @@ static inline void __iomem *ioremap(phys_addr_t addr, size_t size) #define ioremap_wt ioremap #endif +#ifndef ioremap_shared +#define ioremap_shared ioremap +#endif + /* * ioremap_uc is special in that we do require an explicit architecture * implementation. 
In general you do not want to use this function in a

From patchwork Thu Aug 5 00:52:14 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513693
Return-Path:
X-Original-To: patchwork-incoming@ozlabs.org
Delivered-To: patchwork-incoming@ozlabs.org
Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=sparclinux-owner@vger.kernel.org; receiver=)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4Gg99L4qsVz9sWl for ; Thu, 5 Aug 2021 10:54:14 +1000 (AEST)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236577AbhHEAyX (ORCPT ); Wed, 4 Aug 2021 20:54:23 -0400
Received: from mga09.intel.com ([134.134.136.24]:27477 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236905AbhHEAyH (ORCPT ); Wed, 4 Aug 2021 20:54:07 -0400
X-IronPort-AV: E=McAfee;i="6200,9189,10066"; a="214027483"
X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="214027483"
Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:45 -0700
X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="437617274"
Received: from mjkendri-mobl.amr.corp.intel.com (HELO skuppusw-desk1.amr.corp.intel.com) ([10.254.17.117]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:42 -0700
From: Kuppuswamy Sathyanarayanan
To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Peter Zijlstra , Andy Lutomirski , Bjorn Helgaas , Richard Henderson , Thomas Bogendoerfer , James E J Bottomley , Helge Deller , "David S . Miller" , Arnd Bergmann , Jonathan Corbet , "Michael S . Tsirkin"
Cc: Peter H Anvin , Dave Hansen , Tony Luck , Dan Williams , Andi Kleen , Kirill Shutemov , Sean Christopherson , Kuppuswamy Sathyanarayanan , x86@kernel.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH v4 11/15] pci: Add pci_iomap_shared{,_range}
Date: Wed, 4 Aug 2021 17:52:14 -0700
Message-Id: <20210805005218.2912076-12-sathyanarayanan.kuppuswamy@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
References: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: sparclinux@vger.kernel.org

From: Andi Kleen

Add a new variant of pci_iomap for mapping all PCI resources of a device as shared memory with a hypervisor in a confidential guest.
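A usage sketch for the new interface (toy_probe() and its driver are hypothetical; teardown and error paths are trimmed):

#include <linux/pci.h>

static int toy_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	void __iomem *regs;
	int err;

	err = pcim_enable_device(pdev);
	if (err)
		return err;

	/* Map all of BAR 0 (maxlen == 0). In a confidential guest the
	 * mapping is shared with the host; elsewhere this behaves like
	 * plain pci_iomap(). */
	regs = pci_iomap_shared(pdev, 0, 0);
	if (!regs)
		return -ENOMEM;

	(void)readl(regs);	/* example MMIO access through the cookie */
	return 0;
}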
Signed-off-by: Andi Kleen
Signed-off-by: Kuppuswamy Sathyanarayanan
---
 include/asm-generic/pci_iomap.h |  6 +++++
 lib/pci_iomap.c                 | 46 +++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/include/asm-generic/pci_iomap.h b/include/asm-generic/pci_iomap.h
index d4f16dcc2ed7..0178ddd7ad88 100644
--- a/include/asm-generic/pci_iomap.h
+++ b/include/asm-generic/pci_iomap.h
@@ -18,6 +18,12 @@ extern void __iomem *pci_iomap_range(struct pci_dev *dev, int bar,
 extern void __iomem *pci_iomap_wc_range(struct pci_dev *dev, int bar,
					 unsigned long offset,
					 unsigned long maxlen);
+extern void __iomem *pci_iomap_shared(struct pci_dev *dev, int bar,
+				      unsigned long max);
+extern void __iomem *pci_iomap_shared_range(struct pci_dev *dev, int bar,
+					    unsigned long offset,
+					    unsigned long maxlen);
+
 /* Create a virtual mapping cookie for a port on a given PCI device.
  * Do not call this directly, it exists to make it easier for architectures
  * to override */
diff --git a/lib/pci_iomap.c b/lib/pci_iomap.c
index 6251c3f651c6..b04e8689eab3 100644
--- a/lib/pci_iomap.c
+++ b/lib/pci_iomap.c
@@ -25,6 +25,11 @@ static void __iomem *map_ioremap_wc(phys_addr_t addr, size_t size)
 	return ioremap_wc(addr, size);
 }
 
+static void __iomem *map_ioremap_shared(phys_addr_t addr, size_t size)
+{
+	return ioremap_shared(addr, size);
+}
+
 static void __iomem *pci_iomap_range_map(struct pci_dev *dev,
 					 int bar,
 					 unsigned long offset,
@@ -101,6 +106,47 @@ void __iomem *pci_iomap_wc_range(struct pci_dev *dev,
 }
 EXPORT_SYMBOL_GPL(pci_iomap_wc_range);
 
+/**
+ * pci_iomap_shared_range - create a virtual shared mapping cookie for a
+ *			    PCI BAR
+ * @dev: PCI device that owns the BAR
+ * @bar: BAR number
+ * @offset: map memory at the given offset in BAR
+ * @maxlen: max length of the memory to map
+ *
+ * Remap a PCI device's resources shared in a confidential guest.
+ * For more details see pci_iomap_range's documentation.
+ *
+ * @maxlen specifies the maximum length to map. To get access to
+ * the complete BAR from offset to the end, pass %0 here.
+ */
+void __iomem *pci_iomap_shared_range(struct pci_dev *dev, int bar,
+				     unsigned long offset, unsigned long maxlen)
+{
+	return pci_iomap_range_map(dev, bar, offset, maxlen,
+				   map_ioremap_shared);
+}
+EXPORT_SYMBOL_GPL(pci_iomap_shared_range);
+
+/**
+ * pci_iomap_shared - create a virtual shared mapping cookie for a PCI BAR
+ * @dev: PCI device that owns the BAR
+ * @bar: BAR number
+ * @maxlen: length of the memory to map
+ *
+ * See pci_iomap for details. This function creates a shared mapping
+ * with the host for confidential guests.
+ *
+ * @maxlen specifies the maximum length to map. To get access to the
+ * complete BAR without checking for its length first, pass %0 here.
+ */
+void __iomem *pci_iomap_shared(struct pci_dev *dev, int bar,
+			       unsigned long maxlen)
+{
+	return pci_iomap_shared_range(dev, bar, 0, maxlen);
+}
+EXPORT_SYMBOL_GPL(pci_iomap_shared);
+
 /**
  * pci_iomap - create a virtual mapping cookie for a PCI BAR
  * @dev: PCI device that owns the BAR

From patchwork Thu Aug 5 00:52:15 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 1513696
Return-Path:
X-Original-To: patchwork-incoming@ozlabs.org
Delivered-To: patchwork-incoming@ozlabs.org
Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=sparclinux-owner@vger.kernel.org; receiver=)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4Gg9B23lGPz9sX3 for ; Thu, 5 Aug 2021 10:54:50 +1000 (AEST)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237644AbhHEAzD (ORCPT ); Wed, 4 Aug 2021 20:55:03 -0400
Received: from mga09.intel.com ([134.134.136.24]:27469 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237071AbhHEAyS (ORCPT ); Wed, 4 Aug 2021 20:54:18 -0400
X-IronPort-AV: E=McAfee;i="6200,9189,10066"; a="214027486"
X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="214027486"
Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:47 -0700
X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="437617285"
Received: from mjkendri-mobl.amr.corp.intel.com (HELO skuppusw-desk1.amr.corp.intel.com) ([10.254.17.117]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:45 -0700
From: Kuppuswamy Sathyanarayanan
To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Peter Zijlstra , Andy Lutomirski , Bjorn Helgaas , Richard Henderson , Thomas Bogendoerfer , James E J Bottomley , Helge Deller , "David S . Miller" , Arnd Bergmann , Jonathan Corbet , "Michael S . Tsirkin"
Cc: Peter H Anvin , Dave Hansen , Tony Luck , Dan Williams , Andi Kleen , Kirill Shutemov , Sean Christopherson , Kuppuswamy Sathyanarayanan , x86@kernel.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH v4 12/15] pci: Mark MSI data shared
Date: Wed, 4 Aug 2021 17:52:15 -0700
Message-Id: <20210805005218.2912076-13-sathyanarayanan.kuppuswamy@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
References: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: sparclinux@vger.kernel.org

From: Andi Kleen

In a TDX guest, the MSI area must be shared with the host, so use a shared mapping.
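Why the one-line change below in msix_map_region() matters: MSI-X vector programming writes the message address and data straight through this mapping, roughly as sketched here (mirroring what __pci_write_msi_msg() does). With a private mapping, the VMM emulating the device would never observe these writes and no interrupts would be delivered:

/* base points into the MSI-X table, now an ioremap_shared() mapping */
writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR);
writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR);
writel(msg->data, base + PCI_MSIX_ENTRY_DATA);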
Signed-off-by: Andi Kleen Signed-off-by: Kuppuswamy Sathyanarayanan --- drivers/pci/msi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c index 9232255c8515..c6cf1b756d74 100644 --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -682,7 +682,7 @@ static void __iomem *msix_map_region(struct pci_dev *dev, unsigned nr_entries) table_offset &= PCI_MSIX_TABLE_OFFSET; phys_addr = pci_resource_start(dev, bir) + table_offset; - return ioremap(phys_addr, nr_entries * PCI_MSIX_ENTRY_SIZE); + return ioremap_shared(phys_addr, nr_entries * PCI_MSIX_ENTRY_SIZE); } static int msix_setup_entries(struct pci_dev *dev, void __iomem *base, From patchwork Thu Aug 5 00:52:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan X-Patchwork-Id: 1513701 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=sparclinux-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4Gg9B421KHz9t0Z for ; Thu, 5 Aug 2021 10:54:52 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237090AbhHEAzE (ORCPT ); Wed, 4 Aug 2021 20:55:04 -0400 Received: from mga09.intel.com ([134.134.136.24]:27472 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237196AbhHEAyT (ORCPT ); Wed, 4 Aug 2021 20:54:19 -0400 X-IronPort-AV: E=McAfee;i="6200,9189,10066"; a="214027490" X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="214027490" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:49 -0700 X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="437617289" Received: from mjkendri-mobl.amr.corp.intel.com (HELO skuppusw-desk1.amr.corp.intel.com) ([10.254.17.117]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:47 -0700 From: Kuppuswamy Sathyanarayanan To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Peter Zijlstra , Andy Lutomirski , Bjorn Helgaas , Richard Henderson , Thomas Bogendoerfer , James E J Bottomley , Helge Deller , "David S . Miller" , Arnd Bergmann , Jonathan Corbet , "Michael S . 
Tsirkin" Cc: Peter H Anvin , Dave Hansen , Tony Luck , Dan Williams , Andi Kleen , Kirill Shutemov , Sean Christopherson , Kuppuswamy Sathyanarayanan , x86@kernel.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org Subject: [PATCH v4 13/15] virtio: Use shared mappings for virtio PCI devices Date: Wed, 4 Aug 2021 17:52:16 -0700 Message-Id: <20210805005218.2912076-14-sathyanarayanan.kuppuswamy@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> References: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: sparclinux@vger.kernel.org From: Andi Kleen In a TDX guest the pci device mappings of virtio must be shared with the host, so use explicit shared mappings. Signed-off-by: Andi Kleen Signed-off-by: Kuppuswamy Sathyanarayanan --- drivers/virtio/virtio_pci_modern_dev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c index e11ed748e661..6ec4c52258ce 100644 --- a/drivers/virtio/virtio_pci_modern_dev.c +++ b/drivers/virtio/virtio_pci_modern_dev.c @@ -83,7 +83,7 @@ vp_modern_map_capability(struct virtio_pci_modern_device *mdev, int off, return NULL; } - p = pci_iomap_range(dev, bar, offset, length); + p = pci_iomap_shared_range(dev, bar, offset, length); if (!p) dev_err(&dev->dev, "virtio_pci: unable to map virtio %u@%u on bar %i\n", From patchwork Thu Aug 5 00:52:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan X-Patchwork-Id: 1513699 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=sparclinux-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4Gg9B35MF1z9t0G for ; Thu, 5 Aug 2021 10:54:51 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237234AbhHEAzD (ORCPT ); Wed, 4 Aug 2021 20:55:03 -0400 Received: from mga09.intel.com ([134.134.136.24]:27477 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237201AbhHEAyT (ORCPT ); Wed, 4 Aug 2021 20:54:19 -0400 X-IronPort-AV: E=McAfee;i="6200,9189,10066"; a="214027497" X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="214027497" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:51 -0700 X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="437617298" Received: from mjkendri-mobl.amr.corp.intel.com (HELO skuppusw-desk1.amr.corp.intel.com) ([10.254.17.117]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:49 -0700 From: Kuppuswamy Sathyanarayanan To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Peter Zijlstra , Andy Lutomirski , Bjorn Helgaas , Richard Henderson , Thomas Bogendoerfer , James E J Bottomley , Helge Deller , "David S . 
Miller" , Arnd Bergmann , Jonathan Corbet , "Michael S . Tsirkin" Cc: Peter H Anvin , Dave Hansen , Tony Luck , Dan Williams , Andi Kleen , Kirill Shutemov , Sean Christopherson , Kuppuswamy Sathyanarayanan , x86@kernel.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org Subject: [PATCH v4 14/15] x86/tdx: Implement ioremap_shared for x86 Date: Wed, 4 Aug 2021 17:52:17 -0700 Message-Id: <20210805005218.2912076-15-sathyanarayanan.kuppuswamy@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> References: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: sparclinux@vger.kernel.org From: Andi Kleen Implement ioremap_shared for x86. In TDX most memory is encrypted, but some memory that is used to communicate with the host must be declared shared with special page table attributes and a special hypercall. Previously all ioremaped memory was declared shared, but this leads to various BIOS tables and other private state being shared, which is a security risk. This patch replaces the unconditional ioremap sharing with an explicit ioremap_shared that enables sharing. Signed-off-by: Andi Kleen Signed-off-by: Kuppuswamy Sathyanarayanan --- arch/x86/include/asm/io.h | 3 +++ arch/x86/mm/ioremap.c | 41 ++++++++++++++++++++++++++++++--------- 2 files changed, 35 insertions(+), 9 deletions(-) diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h index 4c9e06a81ebe..51c2c45456bf 100644 --- a/arch/x86/include/asm/io.h +++ b/arch/x86/include/asm/io.h @@ -379,6 +379,9 @@ extern void __iomem *ioremap_wc(resource_size_t offset, unsigned long size); extern void __iomem *ioremap_wt(resource_size_t offset, unsigned long size); #define ioremap_wt ioremap_wt +extern void __iomem *ioremap_shared(resource_size_t offset, unsigned long size); +#define ioremap_shared ioremap_shared + extern bool is_early_ioremap_ptep(pte_t *ptep); #define IO_SPACE_LIMIT 0xffff diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 69a60f240124..74260aaa494b 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -178,7 +178,8 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size, */ static void __iomem * __ioremap_caller(resource_size_t phys_addr, unsigned long size, - enum page_cache_mode pcm, void *caller, bool encrypted) + enum page_cache_mode pcm, void *caller, bool encrypted, + bool shared) { unsigned long offset, vaddr; resource_size_t last_addr; @@ -248,7 +249,7 @@ __ioremap_caller(resource_size_t phys_addr, unsigned long size, prot = PAGE_KERNEL_IO; if ((io_desc.flags & IORES_MAP_ENCRYPTED) || encrypted) prot = pgprot_encrypted(prot); - else if (prot_guest_has(PATTR_GUEST_SHARED_MAPPING_INIT)) + else if (shared) prot = pgprot_protected_guest(prot); switch (pcm) { @@ -340,7 +341,8 @@ void __iomem *ioremap(resource_size_t phys_addr, unsigned long size) enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC_MINUS; return __ioremap_caller(phys_addr, size, pcm, - __builtin_return_address(0), false); + __builtin_return_address(0), false, + false); } EXPORT_SYMBOL(ioremap); @@ -373,7 +375,8 @@ void __iomem *ioremap_uc(resource_size_t phys_addr, unsigned long size) enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC; return 
__ioremap_caller(phys_addr, size, pcm, - __builtin_return_address(0), false); + __builtin_return_address(0), false, + false); } EXPORT_SYMBOL_GPL(ioremap_uc); @@ -390,10 +393,29 @@ EXPORT_SYMBOL_GPL(ioremap_uc); void __iomem *ioremap_wc(resource_size_t phys_addr, unsigned long size) { return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WC, - __builtin_return_address(0), false); + __builtin_return_address(0), false, + false); } EXPORT_SYMBOL(ioremap_wc); +/** + * ioremap_shared - map memory into CPU space shared with host + * @phys_addr: bus address of the memory + * @size: size of the resource to map + * + * This version of ioremap ensures that the memory is marked shared + * with the host. This is useful for confidential guests. + * + * Must be freed with iounmap. + */ +void __iomem *ioremap_shared(resource_size_t phys_addr, unsigned long size) +{ + return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WC, + __builtin_return_address(0), false, + prot_guest_has(PATTR_GUEST_SHARED_MAPPING_INIT)); +} +EXPORT_SYMBOL(ioremap_shared); + /** * ioremap_wt - map memory into CPU space write through * @phys_addr: bus address of the memory @@ -407,21 +429,22 @@ EXPORT_SYMBOL(ioremap_wc); void __iomem *ioremap_wt(resource_size_t phys_addr, unsigned long size) { return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WT, - __builtin_return_address(0), false); + __builtin_return_address(0), false, + false); } EXPORT_SYMBOL(ioremap_wt); void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size) { return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB, - __builtin_return_address(0), true); + __builtin_return_address(0), true, false); } EXPORT_SYMBOL(ioremap_encrypted); void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size) { return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB, - __builtin_return_address(0), false); + __builtin_return_address(0), false, false); } EXPORT_SYMBOL(ioremap_cache); @@ -430,7 +453,7 @@ void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size, { return __ioremap_caller(phys_addr, size, pgprot2cachemode(__pgprot(prot_val)), - __builtin_return_address(0), false); + __builtin_return_address(0), false, false); } EXPORT_SYMBOL(ioremap_prot); From patchwork Thu Aug 5 00:52:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan X-Patchwork-Id: 1513702 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=sparclinux-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4Gg9CG4GXfz9sRR for ; Thu, 5 Aug 2021 10:55:54 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235659AbhHEA4G (ORCPT ); Wed, 4 Aug 2021 20:56:06 -0400 Received: from mga09.intel.com ([134.134.136.24]:27469 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237465AbhHEAy6 (ORCPT ); Wed, 4 Aug 2021 20:54:58 -0400 X-IronPort-AV: E=McAfee;i="6200,9189,10066"; a="214027500" X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="214027500" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:53 -0700 
X-IronPort-AV: E=Sophos;i="5.84,296,1620716400"; d="scan'208";a="437617307"
Received: from mjkendri-mobl.amr.corp.intel.com (HELO skuppusw-desk1.amr.corp.intel.com) ([10.254.17.117]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Aug 2021 17:53:51 -0700
From: Kuppuswamy Sathyanarayanan
To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Peter Zijlstra , Andy Lutomirski , Bjorn Helgaas , Richard Henderson , Thomas Bogendoerfer , James E J Bottomley , Helge Deller , "David S . Miller" , Arnd Bergmann , Jonathan Corbet , "Michael S . Tsirkin"
Cc: Peter H Anvin , Dave Hansen , Tony Luck , Dan Williams , Andi Kleen , Kirill Shutemov , Sean Christopherson , Kuppuswamy Sathyanarayanan , x86@kernel.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH v4 15/15] x86/tdx: Add cmdline option to force use of ioremap_shared
Date: Wed, 4 Aug 2021 17:52:18 -0700
Message-Id: <20210805005218.2912076-16-sathyanarayanan.kuppuswamy@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
References: <20210805005218.2912076-1-sathyanarayanan.kuppuswamy@linux.intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: sparclinux@vger.kernel.org

Add a command line option to force all the enabled drivers to use shared memory mappings. This is useful when enabling new drivers in a protected guest before all the required changes to use shared mappings have been made in them. Note that this might also allow other, not explicitly enabled, drivers to interact with the host, which could cause additional security risks.

Signed-off-by: Kuppuswamy Sathyanarayanan
---
 .../admin-guide/kernel-parameters.rst |  1 +
 .../admin-guide/kernel-parameters.txt | 12 ++++++++++++
 arch/x86/include/asm/io.h             |  2 ++
 arch/x86/mm/ioremap.c                 | 19 ++++++++++++++++++-
 4 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.rst b/Documentation/admin-guide/kernel-parameters.rst
index 01ba293a2d70..bdf3896a100c 100644
--- a/Documentation/admin-guide/kernel-parameters.rst
+++ b/Documentation/admin-guide/kernel-parameters.rst
@@ -147,6 +147,7 @@ parameter is applicable::
 	PCI	PCI bus support is enabled.
 	PCIE	PCI Express support is enabled.
 	PCMCIA	The PCMCIA subsystem is enabled.
+	PG	Protected guest is enabled.
 	PNP	Plug & Play support is enabled.
 	PPC	PowerPC architecture is enabled.
 	PPT	Parallel port support is enabled.
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index bdb22006f713..ba390be62f89 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2062,6 +2062,18 @@
 			1 - Bypass the IOMMU for DMA.
 			unset - Use value of CONFIG_IOMMU_DEFAULT_PASSTHROUGH.
 
+	ioremap_force_shared=	[X86_64, PG]
+			Force the kernel to use shared memory mappings even for
+			drivers that do not use ioremap_shared/pci_iomap_shared
+			to opt in to shared mappings with the host. This is
+			mainly useful in a protected guest when enabling new
+			drivers that lack proper shared-memory conversions.
+			Note that this option might also allow other, not
+			explicitly enabled, drivers to interact with the host
+			in a protected guest, which could cause security risks.
+			It will also cause BIOS data structures to be shared
+			with the host, which might open security holes.
+
 	io7=		[HW] IO7 for Marvel-based Alpha systems
 			See comment before marvel_specify_io7
 			in arch/alpha/kernel/core_marvel.c.
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 51c2c45456bf..744f72835a30 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -413,6 +413,8 @@ extern bool arch_memremap_can_ram_remap(resource_size_t offset,
 extern bool phys_mem_access_encrypted(unsigned long phys_addr,
 				      unsigned long size);
 
+extern bool ioremap_force_shared;
+
 /**
  * iosubmit_cmds512 - copy data to single MMIO location, in 512-bit units
  * @dst: destination, in MMIO space (must be 512-bit aligned)
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 74260aaa494b..7576e886fad8 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <asm/cmdline.h>
 
 #include "physaddr.h"
@@ -162,6 +163,17 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
 	__ioremap_check_other(addr, desc);
 }
 
+/*
+ * Normally only drivers that are hardened for use in confidential guests
+ * force shared mappings. But if device filtering is disabled other
+ * devices can be loaded, and these need shared mappings too. This
+ * variable is set to true if these filters are disabled.
+ *
+ * Note this has some side effects, e.g. various BIOS tables
+ * get shared too, which is risky.
+ */
+bool ioremap_force_shared;
+
 /*
  * Remap an arbitrary physical address space into the kernel virtual
  * address space. It transparently creates kernel huge I/O mapping when
@@ -249,7 +261,7 @@ __ioremap_caller(resource_size_t phys_addr, unsigned long size,
 	prot = PAGE_KERNEL_IO;
 	if ((io_desc.flags & IORES_MAP_ENCRYPTED) || encrypted)
 		prot = pgprot_encrypted(prot);
-	else if (shared)
+	else if (shared || ioremap_force_shared)
 		prot = pgprot_protected_guest(prot);
 
 	switch (pcm) {
@@ -847,6 +859,11 @@ void __init early_ioremap_init(void)
 	WARN_ON((fix_to_virt(0) + PAGE_SIZE) & ((1 << PMD_SHIFT) - 1));
 #endif
 
+	/* Parse cmdline params for ioremap_force_shared */
+	if (cmdline_find_option_bool(boot_command_line,
+				     "ioremap_force_shared"))
+		ioremap_force_shared = 1;
+
 	early_ioremap_setup();
 
 	pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
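As a usage example (the surrounding parameters are illustrative), a protected guest bringing up not-yet-converted drivers would simply append the option to the kernel command line:

	vmlinuz ... root=/dev/vda1 console=ttyS0 ioremap_force_shared

Because this makes every ioremap() in the guest shared with the host, it is best treated as a bring-up and debugging aid; the safer long-term path is converting drivers to the explicit ioremap_shared()/pci_iomap_shared() interfaces.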