From patchwork Thu Apr 8 19:36:49 2021
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1464013
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH] KVM: SVM: load control fields from VMCB12 before checking them
Date: Thu, 8 Apr 2021 13:36:49 -0600
Message-Id: <20210408193649.25649-2-tim.gardner@canonical.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210408193649.25649-1-tim.gardner@canonical.com>
References: <20210408193649.25649-1-tim.gardner@canonical.com>
List-Id: Kernel team discussions
From: Paolo Bonzini

CVE-2021-29657

Avoid races between check and use of the nested VMCB controls.  This
for example ensures that the VMRUN intercept is always reflected to the
nested hypervisor, instead of being processed by the host.  Without
this patch, it is possible to end up with svm->nested.hsave pointing
to the MSR permission bitmap for nested guests.

This bug is CVE-2021-29657.

Reported-by: Felix Wilhelm
Cc: stable@vger.kernel.org
Fixes: 2fcf4876ada ("KVM: nSVM: implement on demand allocation of the nested state")
Signed-off-by: Paolo Bonzini
(cherry picked from commit a58d9166a756a0f4a6618e4f593232593d6df134)
Signed-off-by: Tim Gardner
Acked-by: Stefan Bader
Acked-by: Kleber Sacilotto de Souza
---
 arch/x86/kvm/svm/nested.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 1008cc6cb66c..dd318ca6c722 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -246,7 +246,7 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
 	return true;
 }
 
-static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
+static bool nested_vmcb_check_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
 	bool vmcb12_lma;
@@ -271,7 +271,7 @@ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
 	if (kvm_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
 		return false;
 
-	return nested_vmcb_check_controls(&vmcb12->control);
+	return true;
 }
 
 static void load_nested_vmcb_control(struct vcpu_svm *svm,
@@ -454,7 +454,6 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb12_gpa,
 	int ret;
 
 	svm->nested.vmcb12_gpa = vmcb12_gpa;
-	load_nested_vmcb_control(svm, &vmcb12->control);
 	nested_prepare_vmcb_save(svm, vmcb12);
 	nested_prepare_vmcb_control(svm);
 
@@ -501,7 +500,10 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 	if (WARN_ON_ONCE(!svm->nested.initialized))
 		return -EINVAL;
 
-	if (!nested_vmcb_checks(svm, vmcb12)) {
+	load_nested_vmcb_control(svm, &vmcb12->control);
+
+	if (!nested_vmcb_check_save(svm, vmcb12) ||
+	    !nested_vmcb_check_controls(&svm->nested.ctl)) {
 		vmcb12->control.exit_code = SVM_EXIT_ERR;
 		vmcb12->control.exit_code_hi = 0;
 		vmcb12->control.exit_info_1 = 0;
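
For context, the underlying bug class is a time-of-check/time-of-use
(double fetch) on guest-shared memory: before this patch,
nested_vmcb_checks() validated the controls in place in vmcb12, a page
the guest can rewrite from another vCPU between the check and the later
copy into svm->nested.ctl.  Below is a minimal user-space sketch of the
pattern; every name in it (ctl_area, ctl_is_valid, vmrun_buggy,
vmrun_fixed) is a hypothetical stand-in for illustration, not KVM code.

#include <stdbool.h>

struct ctl_area {
	unsigned long intercepts;	/* stand-in for the VMCB12 control fields */
};

static bool ctl_is_valid(unsigned long intercepts)
{
	/* e.g. require that the (hypothetical) VMRUN intercept bit is set */
	return (intercepts & 0x1) != 0;
}

/*
 * Buggy shape (before the patch): check the guest-shared copy, then
 * fetch it again for use.  The guest can flip the field from another
 * vCPU between the two fetches, so the value actually used was never
 * the value that was checked.
 */
static bool vmrun_buggy(volatile struct ctl_area *guest, struct ctl_area *host)
{
	if (!ctl_is_valid(guest->intercepts))	/* first fetch: checked */
		return false;
	host->intercepts = guest->intercepts;	/* second fetch: used unchecked */
	return true;
}

/*
 * Fixed shape (what the patch does): fetch once into host-private
 * storage, then run every check against that snapshot only.
 */
static bool vmrun_fixed(volatile struct ctl_area *guest, struct ctl_area *host)
{
	host->intercepts = guest->intercepts;	/* single fetch */
	return ctl_is_valid(host->intercepts);	/* check the private copy */
}

The patch gives nested_svm_vmrun() the second shape:
load_nested_vmcb_control() snapshots vmcb12->control into the
host-private svm->nested.ctl first, and nested_vmcb_check_controls()
then runs on &svm->nested.ctl, so nothing the guest writes to vmcb12
after the snapshot can influence the checked values.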