From patchwork Fri Sep 8 06:49:01 2023
X-Patchwork-Submitter: Khalid Elmously
X-Patchwork-Id: 1831343
From: Khalid Elmously
To: kernel-team@lists.ubuntu.com
Subject: [SRU][J/gcp][PATCH 1/1] x86/sev: Make enc_dec_hypercall() accept a size instead of npages
Date: Fri, 8 Sep 2023 02:49:01 -0400
Message-Id: <20230908064903.287209-3-khalid.elmously@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230908064903.287209-1-khalid.elmously@canonical.com>
References: <20230908064903.287209-1-khalid.elmously@canonical.com>
List-Id: Kernel team discussions
Sender: "kernel-team"

From: Steve Rutherford

BugLink: https://bugs.launchpad.net/bugs/2034894

enc_dec_hypercall() accepted a page count instead of a size, which
forced its callers to round up. As a result, non-page-aligned vaddrs
caused pages to be spuriously marked as decrypted via the encryption
status hypercall, which in turn caused consistent corruption of pages
during live migration. Live migration requires accurate encryption
status information to avoid migrating pages from the wrong perspective.
Fixes: 064ce6c550a0 ("mm: x86: Invoke hypercall when page encryption status is changed")
Signed-off-by: Steve Rutherford
Signed-off-by: Ingo Molnar
Reviewed-by: Tom Lendacky
Reviewed-by: Pankaj Gupta
Tested-by: Ben Hillier
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230824223731.2055016-1-srutherford@google.com
Signed-off-by: Khalid Elmously
(backported from commit ac3f9c9f1b37edaa7d1a9b908bc79d843955a1a2)
[ kmously: adjusted for different context in arch/x86/mm/mem_encrypt_amd.c ]
---
 arch/x86/include/asm/mem_encrypt.h |  6 +++---
 arch/x86/include/asm/set_memory.h  |  2 +-
 arch/x86/kernel/kvm.c              |  4 +---
 arch/x86/mm/mem_encrypt_amd.c      | 13 ++++++-------
 4 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index efaaf66a2fead..e2f7fc54f8705 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -44,8 +44,8 @@ void __init sme_enable(struct boot_params *bp);
 
 int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size);
 int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
-void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
-                                            bool enc);
+void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr,
+                                            unsigned long size, bool enc);
 
 void __init mem_encrypt_free_decrypted_mem(void);
 
@@ -86,7 +86,7 @@ early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
 static inline int __init
 early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
 static inline void __init
-early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
+early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc) {}
 
 static inline void mem_encrypt_free_decrypted_mem(void) { }
 
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 872617542bbc0..5d411255b2c05 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -83,7 +83,7 @@ int set_pages_rw(struct page *page, int numpages);
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
 bool kernel_page_present(struct page *page);
-void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc);
+void notify_range_enc_status_changed(unsigned long vaddr, unsigned long size, bool enc);
 
 extern int kernel_set_to_readonly;
 
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index dbab42d8af7ab..4fa324335d57f 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -931,10 +931,8 @@ static void __init kvm_init_platform(void)
         * Ensure that _bss_decrypted section is marked as decrypted in the
         * shared pages list.
         */
-       nr_pages = DIV_ROUND_UP(__end_bss_decrypted - __start_bss_decrypted,
-                               PAGE_SIZE);
        early_set_mem_enc_dec_hypercall((unsigned long)__start_bss_decrypted,
-                                       nr_pages, 0);
+                                       __end_bss_decrypted - __start_bss_decrypted, 0);
 
        /*
        * If not booted using EFI, enable Live migration support.
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index c4332d56d683b..ff51890ce49f9 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -292,11 +292,10 @@ static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
        return pfn;
 }
 
-void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc)
+void notify_range_enc_status_changed(unsigned long vaddr, unsigned long size, bool enc)
 {
 #ifdef CONFIG_PARAVIRT
-       unsigned long sz = npages << PAGE_SHIFT;
-       unsigned long vaddr_end = vaddr + sz;
+       unsigned long vaddr_end = vaddr + size;
 
        while (vaddr < vaddr_end) {
                int psize, pmask, level;
@@ -316,7 +315,7 @@ void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc)
                psize = page_level_size(level);
                pmask = page_level_mask(level);
 
-               notify_page_enc_status_changed(pfn, psize >> PAGE_SHIFT, enc);
+               notify_page_enc_status_changed(pfn, size, enc);
 
                vaddr = (vaddr & pmask) + psize;
        }
@@ -442,7 +441,7 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
 
        ret = 0;
 
-       notify_range_enc_status_changed(start, PAGE_ALIGN(size) >> PAGE_SHIFT, enc);
+       notify_range_enc_status_changed(start, size, enc);
 out:
        __flush_tlb_all();
        return ret;
@@ -458,9 +457,9 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
        return early_set_memory_enc_dec(vaddr, size, true);
 }
 
-void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
+void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc)
 {
-       notify_range_enc_status_changed(vaddr, npages, enc);
+       notify_range_enc_status_changed(vaddr, size, enc);
 }
 
 /*