From patchwork Wed Apr 25 11:54:41 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 904179
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH 08/11] KVM: PPC: add giveup_ext() hook for PPC KVM ops
Date: Wed, 25 Apr 2018 19:54:41 +0800
Message-Id: <1524657284-16706-9-git-send-email-wei.guo.simon@gmail.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1524657284-16706-1-git-send-email-wei.guo.simon@gmail.com>
References: <1524657284-16706-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo <wei.guo.simon@gmail.com>

Currently HV KVM saves the math registers (FP/VEC/VSX) when the guest traps
into the host, but PR KVM only saves them when the QEMU task is switched off
the CPU. To emulate an FP/VEC/VSX load, PR KVM therefore needs to flush the
math registers first, so that it can then update the saved VCPU FPR/VEC/VSX
area correctly.

This patch adds a giveup_ext() hook to the KVM ops (an empty one for HV KVM)
so that kvmppc_complete_mmio_load() can invoke it to flush the math registers
accordingly.

Flushing the math registers is also necessary for STORE emulation, which will
be covered by a later patch in this series.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_ppc.h | 1 +
 arch/powerpc/kvm/book3s_hv.c       | 5 +++++
 arch/powerpc/kvm/book3s_pr.c       | 1 +
 arch/powerpc/kvm/powerpc.c         | 9 +++++++++
 4 files changed, 16 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index abe7032..b265538 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -324,6 +324,7 @@ struct kvmppc_ops {
 	int (*get_rmmu_info)(struct kvm *kvm, struct kvm_ppc_rmmu_info *info);
 	int (*set_smt_mode)(struct kvm *kvm, unsigned long mode,
 			    unsigned long flags);
+	void (*giveup_ext)(struct kvm_vcpu *vcpu, ulong msr);
 };
 
 extern struct kvmppc_ops *kvmppc_hv_ops;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 5b875ba..7eb5507 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2084,6 +2084,10 @@ static int kvmhv_set_smt_mode(struct kvm *kvm, unsigned long smt_mode,
 	return err;
 }
 
+static void kvmhv_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
+{
+}
+
 static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
 {
 	if (vpa->pinned_addr)
@@ -4398,6 +4402,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
 	.configure_mmu = kvmhv_configure_mmu,
 	.get_rmmu_info = kvmhv_get_rmmu_info,
 	.set_smt_mode = kvmhv_set_smt_mode,
+	.giveup_ext = kvmhv_giveup_ext,
 };
 
 static int kvm_init_subcore_bitmap(void)
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 67061d3..be26636 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1782,6 +1782,7 @@ static long kvm_arch_vm_ioctl_pr(struct file *filp,
 #ifdef CONFIG_PPC_BOOK3S_64
 	.hcall_implemented = kvmppc_hcall_impl_pr,
 #endif
+	.giveup_ext = kvmppc_giveup_ext,
 };
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 17f0315..e724601 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1061,6 +1061,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 		kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
 		break;
 	case KVM_MMIO_REG_FPR:
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
+
 		VCPU_FPR(vcpu, vcpu->arch.io_gpr & KVM_MMIO_REG_MASK) = gpr;
 		break;
 #ifdef CONFIG_PPC_BOOK3S
@@ -1074,6 +1077,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 #endif
 #ifdef CONFIG_VSX
 	case KVM_MMIO_REG_VSX:
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VSX);
+
 		if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_DWORD)
 			kvmppc_set_vsr_dword(vcpu, gpr);
 		else if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_WORD)
@@ -1088,6 +1094,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 #endif
 #ifdef CONFIG_ALTIVEC
 	case KVM_MMIO_REG_VMX:
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VEC);
+
 		kvmppc_set_vmx_dword(vcpu, gpr);
 		break;
 #endif
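
For context: the diff only wires kvmppc_giveup_ext into the PR ops table, which
implies the function itself already exists in book3s_pr.c. The sketch below
illustrates the kind of flush such a hook is expected to perform before the
generic MMIO-completion code writes the saved FPR/VEC/VSX area. It is a
simplified, hypothetical sketch, not the actual book3s_pr.c implementation;
the save_guest_*_state() helper names are assumptions made only for this
illustration.

/*
 * Illustrative sketch only -- not the book3s_pr.c code.
 * Flush any guest math state still live in the host CPU registers
 * back into the vcpu save area, so the saved FPR/VEC/VSX image is
 * current before it is modified.  The msr argument selects the
 * facility (MSR_FP, MSR_VEC, MSR_VSX) to give up.
 */
static void example_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
{
	if (msr & MSR_FP)
		save_guest_fp_state(vcpu);	/* hypothetical helper */
	if (msr & MSR_VEC)
		save_guest_vec_state(vcpu);	/* hypothetical helper */
	if (msr & MSR_VSX)
		save_guest_vsx_state(vcpu);	/* hypothetical helper */

	/* Bookkeeping: guest no longer owns these facilities (field assumed). */
	vcpu->arch.guest_owned_ext &= ~msr;
}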