From patchwork Thu May 5 12:39:46 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1626944
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 1/4] UBUNTU: SAUCE: treewide: Replace the use of mem_encrypt_active() with cc_platform_has()
Date: Thu, 5 May 2022 06:39:46 -0600
Message-Id: <20220505123949.12030-2-tim.gardner@canonical.com>
X-Mailer: git-send-email 2.36.0
In-Reply-To: <20220505123949.12030-1-tim.gardner@canonical.com>
References: <20220505123949.12030-1-tim.gardner@canonical.com>
MIME-Version: 1.0
X-BeenThere: kernel-team@lists.ubuntu.com
X-Mailman-Version: 2.1.20
Precedence: list
List-Id: Kernel team discussions
Errors-To: kernel-team-bounces@lists.ubuntu.com
Sender: "kernel-team"

From: Tom Lendacky

BugLink: https://bugs.launchpad.net/bugs/1971701

Replace uses of mem_encrypt_active() with calls to cc_platform_has() with
the CC_ATTR_MEM_ENCRYPT attribute.

Remove the implementation of mem_encrypt_active() across all arches.

For s390, since the default implementation of the cc_platform_has()
matches the s390 implementation of mem_encrypt_active(), cc_platform_has()
does not need to be implemented in s390 (the config option
ARCH_HAS_CC_PLATFORM is not set).
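For illustration only (not part of the submitted patch), the conversion
applied throughout the diff below follows roughly this pattern; the
do_encrypted_setup() caller is hypothetical, and the header name is the
one used by the upstream cc_platform work:

    #include <linux/cc_platform.h>

    /* Old style: arch-specific helper, removed by this patch. */
    if (mem_encrypt_active())
            do_encrypted_setup();

    /* New style: query the confidential-computing attribute instead. */
    if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
            do_encrypted_setup();
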
Signed-off-by: Tom Lendacky
Signed-off-by: Borislav Petkov
Link: https://lkml.kernel.org/r/20210928191009.32551-9-bp@alien8.de
(backported from commit 8d1acdc4e6229d2861101ec682bb33da89a9578d
 https://github.com/intel/tdx.git)
[rtg - context adjustments]
Signed-off-by: Tim Gardner
---
 arch/powerpc/include/asm/mem_encrypt.h  | 5 -----
 arch/powerpc/platforms/pseries/svm.c    | 5 +++--
 arch/s390/include/asm/mem_encrypt.h     | 2 --
 arch/x86/include/asm/mem_encrypt.h      | 5 -----
 arch/x86/kernel/head64.c                | 9 +++++++--
 arch/x86/mm/ioremap.c                   | 4 ++--
 arch/x86/mm/mem_encrypt.c               | 2 +-
 arch/x86/mm/pat/set_memory.c            | 5 +++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++-
 drivers/gpu/drm/drm_cache.c             | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c     | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c     | 6 +++---
 drivers/iommu/amd/iommu.c               | 3 ++-
 drivers/iommu/amd/iommu_v2.c            | 3 ++-
 drivers/iommu/iommu.c                   | 3 ++-
 fs/proc/vmcore.c                        | 6 +++---
 include/linux/mem_encrypt.h             | 4 ----
 kernel/dma/swiotlb.c                    | 4 ++--
 18 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h
index ba9dab07c1be..2f26b8fc8d29 100644
--- a/arch/powerpc/include/asm/mem_encrypt.h
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -10,11 +10,6 @@
 
 #include
 
-static inline bool mem_encrypt_active(void)
-{
-	return is_secure_guest();
-}
-
 static inline bool force_dma_unencrypted(struct device *dev)
 {
 	return is_secure_guest();
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 87f001b4c4e4..c083ecbbae4d 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -8,6 +8,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -63,7 +64,7 @@ void __init svm_swiotlb_init(void)
 
 int set_memory_encrypted(unsigned long addr, int numpages)
 {
-	if (!mem_encrypt_active())
+	if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
 		return 0;
 
 	if (!PAGE_ALIGNED(addr))
@@ -76,7 +77,7 @@ int set_memory_encrypted(unsigned long addr, int numpages)
 
 int set_memory_decrypted(unsigned long addr, int numpages)
 {
-	if (!mem_encrypt_active())
+	if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
 		return 0;
 
 	if (!PAGE_ALIGNED(addr))
diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
index 2542cbf7e2d1..08a8b96606d7 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -4,8 +4,6 @@
 
 #ifndef __ASSEMBLY__
 
-static inline bool mem_encrypt_active(void) { return false; }
-
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
 
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index c6e61dee54ce..a855fdcdd242 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -100,11 +100,6 @@ static inline void mem_encrypt_free_decrypted_mem(void) { }
 extern char __start_bss_decrypted[], __end_bss_decrypted[],
 	__start_bss_decrypted_unused[];
 
-static inline bool mem_encrypt_active(void)
-{
-	return sme_me_mask;
-}
-
 static inline u64 sme_get_me_mask(void)
 {
 	return sme_me_mask;
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index de01903c3735..fc5371a7e9d1 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -19,7 +19,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 
@@ -284,8 +284,13 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	 * The bss section will be memset to zero later in the initialization so
 	 * there is no need to zero it after changing the memory encryption
 	 * attribute.
+	 *
+	 * This is early code, use an open coded check for SME instead of
+	 * using cc_platform_has(). This eliminates worries about removing
+	 * instrumentation or checking boot_cpu_data in the cc_platform_has()
+	 * function.
 	 */
-	if (mem_encrypt_active()) {
+	if (sme_get_me_mask()) {
 		vaddr = (unsigned long)__start_bss_decrypted;
 		vaddr_end = (unsigned long)__end_bss_decrypted;
 		for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 9174977de482..de6f010bcf64 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -738,7 +738,7 @@ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr,
 bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
 				 unsigned long flags)
 {
-	if (!mem_encrypt_active())
+	if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
 		return true;
 
 	if (flags & MEMREMAP_ENC)
@@ -768,7 +768,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
 {
 	bool encrypted_prot;
 
-	if (!mem_encrypt_active())
+	if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
 		return prot;
 
 	encrypted_prot = true;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 49b5a6f3a142..bd8a43c62858 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -423,7 +423,7 @@ void __init mem_encrypt_free_decrypted_mem(void)
 	 * The unused memory range was mapped decrypted, change the encryption
 	 * attribute from decrypted to encrypted before freeing it.
 	 */
-	if (mem_encrypt_active()) {
+	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
 		r = set_memory_encrypted(vaddr, npages);
 		if (r) {
 			pr_warn("failed to free unused decrypted pages\n");
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 494b17dfe872..0ad166b7336c 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -1991,6 +1992,10 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	struct cpa_data cpa;
 	int ret;
 
+	/* Nothing to do if memory encryption is not active */
+	if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
+		return 0;
+
 	/* Should not be working on unaligned addresses */
 	if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
 		addr &= PAGE_MASK;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 5a7fef324c82..164fd3c8ef69 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -39,6 +39,7 @@
 #include
 #include
 #include
+#include
 
 #include "amdgpu.h"
 #include "amdgpu_irq.h"
@@ -2012,7 +2013,8 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
 	 * however, SME requires an indirect IOMMU mapping because the encryption
 	 * bit is beyond the DMA mask of the chip.
 	 */
-	if (mem_encrypt_active() && ((flags & AMD_ASIC_MASK) == CHIP_RAVEN)) {
+	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT) &&
+	    ((flags & AMD_ASIC_MASK) == CHIP_RAVEN)) {
 		dev_info(&pdev->dev,
 			 "SME is not compatible with RAVEN\n");
 		return -ENOTSUPP;
diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index 30cc59fe6ef7..f19d9acbe959 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -31,7 +31,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 
@@ -204,7 +204,7 @@ bool drm_need_swiotlb(int dma_bits)
 	 * Enforce dma_alloc_coherent when memory encryption is active as well
 	 * for the same reasons as for Xen paravirtual hosts.
 	 */
-	if (mem_encrypt_active())
+	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
 		return true;
 
 	for (tmp = iomem_resource.child; tmp; tmp = tmp->sibling)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index 8449d09c06f7..ab246cff209a 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -29,7 +29,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 
@@ -666,7 +666,7 @@ static int vmw_dma_select_mode(struct vmw_private *dev_priv)
 		[vmw_dma_map_bind] = "Giving up DMA mappings early."};
 
 	/* TTM currently doesn't fully support SEV encryption. */
-	if (mem_encrypt_active())
+	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
 		return -EINVAL;
 
 	if (vmw_force_coherent)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
index e50fb82a3030..2aceac7856e2 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
@@ -28,7 +28,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 
@@ -160,7 +160,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel,
 	unsigned long msg_len = strlen(msg);
 
 	/* HB port can't access encrypted memory. */
-	if (hb && !mem_encrypt_active()) {
+	if (hb && !cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
 		unsigned long bp = channel->cookie_high;
 		u32 channel_id = (channel->channel_id << 16);
 
@@ -216,7 +216,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply,
 	unsigned long si, di, eax, ebx, ecx, edx;
 
 	/* HB port can't access encrypted memory */
-	if (hb && !mem_encrypt_active()) {
+	if (hb && !cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
 		unsigned long bp = channel->cookie_low;
 		u32 channel_id = (channel->channel_id << 16);
 
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index d425586b6754..2fe05e51b2a2 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2243,7 +2244,7 @@ static int amd_iommu_def_domain_type(struct device *dev)
 	 * active, because some of those devices (AMD GPUs) don't have the
 	 * encryption bit in their DMA-mask and require remapping.
 	 */
-	if (!mem_encrypt_active() && dev_data->iommu_v2)
+	if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT) && dev_data->iommu_v2)
 		return IOMMU_DOMAIN_IDENTITY;
 
 	return 0;
diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
index a45c5536d250..58da08cc3d01 100644
--- a/drivers/iommu/amd/iommu_v2.c
+++ b/drivers/iommu/amd/iommu_v2.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 
 #include "amd_iommu.h"
 
@@ -742,7 +743,7 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)
 	 * When memory encryption is active the device is likely not in a
 	 * direct-mapped domain. Forbid using IOMMUv2 functionality for now.
 	 */
-	if (mem_encrypt_active())
+	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
 		return -ENODEV;
 
 	if (!amd_iommu_v2_supported())
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 7f409e9eea4b..c10d35ecdeed 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 #include
 
 static struct kset *iommu_group_kset;
@@ -130,7 +131,7 @@ static int __init iommu_subsys_init(void)
 	else
 		iommu_set_default_translated(false);
 
-	if (iommu_default_passthrough() && mem_encrypt_active()) {
+	if (iommu_default_passthrough() && cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
 		pr_info("Memory encryption detected - Disabling default IOMMU Passthrough\n");
 		iommu_set_default_translated(false);
 	}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index e5730986758f..d2178114af30 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -26,7 +26,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 
 #include "internal.h"
@@ -181,7 +181,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
  */
 ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0, mem_encrypt_active());
+	return read_from_oldmem(buf, count, ppos, 0, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
 }
 
 /*
@@ -382,7 +382,7 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
 			buflen);
 		start = m->paddr + *fpos - m->offset;
 		tmp = read_from_oldmem(buffer, tsz, &start,
-				       userbuf, mem_encrypt_active());
+				       userbuf, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
 		if (tmp < 0)
 			return tmp;
 		buflen -= tsz;
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 5c4a18a91f89..ae4526389261 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -16,10 +16,6 @@
 
 #include
 
-#else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
-
-static inline bool mem_encrypt_active(void) { return false; }
-
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ce241d72ad03..1bf993d1f97d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -34,7 +34,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #ifdef CONFIG_DEBUG_FS
 #include
@@ -612,7 +612,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	if (!mem)
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
-	if (mem_encrypt_active())
+	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
 		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
 
 	if (mapping_size > alloc_size) {