
[08/11] KVM: PPC: add giveup_ext() hook for PPC KVM ops

Message ID 1524657284-16706-9-git-send-email-wei.guo.simon@gmail.com
State Superseded
Headers show
Series KVM: PPC: reconstruct mmio emulation with analyse_instr()

Commit Message

Simon Guo April 25, 2018, 11:54 a.m. UTC
From: Simon Guo <wei.guo.simon@gmail.com>

Currently HV KVM saves the math registers (FP/VEC/VSX) when it traps
into the host, but PR KVM only saves them when the QEMU task is
switched off the CPU.

To emulate an FP/VEC/VSX load, PR KVM needs to flush the math registers
first so that it can then update the saved vcpu FPR/VEC/VSX area
correctly.

This patch adds a giveup_ext() hook to the KVM ops (an empty one for HV
KVM) so that kvmppc_complete_mmio_load() can invoke it to flush the
math registers accordingly.

A math register flush is also needed for STORE emulation, which will be
covered by a later patch in this series.
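
For illustration, with this patch the FP case in
kvmppc_complete_mmio_load() (see the powerpc.c hunk below) flushes the
guest math state before touching the saved FPR:

	case KVM_MMIO_REG_FPR:
		/* PR KVM: flush live guest FP regs to the vcpu save area first */
		if (!is_kvmppc_hv_enabled(vcpu->kvm))
			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);

		VCPU_FPR(vcpu, vcpu->arch.io_gpr & KVM_MMIO_REG_MASK) = gpr;
		break;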

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_ppc.h | 1 +
 arch/powerpc/kvm/book3s_hv.c       | 5 +++++
 arch/powerpc/kvm/book3s_pr.c       | 1 +
 arch/powerpc/kvm/powerpc.c         | 9 +++++++++
 4 files changed, 16 insertions(+)

Comments

Paul Mackerras May 3, 2018, 6:08 a.m. UTC | #1
On Wed, Apr 25, 2018 at 07:54:41PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Currently HV KVM saves the math registers (FP/VEC/VSX) when it traps
> into the host, but PR KVM only saves them when the QEMU task is
> switched off the CPU.
> 
> To emulate an FP/VEC/VSX load, PR KVM needs to flush the math registers
> first so that it can then update the saved vcpu FPR/VEC/VSX area
> correctly.
> 
> This patch adds a giveup_ext() hook to the KVM ops (an empty one for HV
> KVM) so that kvmppc_complete_mmio_load() can invoke it to flush the
> math registers accordingly.
> 
> A math register flush is also needed for STORE emulation, which will be
> covered by a later patch in this series.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

I don't see where you have provided a function for Book E.

I would suggest you only set the function pointer to non-NULL when the
function is actually needed, i.e. for PR KVM.

It seems to me that this means that emulation of FP/VMX/VSX loads is
currently broken for PR KVM for the case where kvm_io_bus_read() is
able to supply the data, and the emulation of FP/VMX/VSX stores is
broken for PR KVM for all cases.  Do you agree?

> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 5b875ba..7eb5507 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -2084,6 +2084,10 @@ static int kvmhv_set_smt_mode(struct kvm *kvm, unsigned long smt_mode,
>  	return err;
>  }
>  
> +static void kvmhv_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
> +{
> +}
> +
>  static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
>  {
>  	if (vpa->pinned_addr)
> @@ -4398,6 +4402,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
>  	.configure_mmu = kvmhv_configure_mmu,
>  	.get_rmmu_info = kvmhv_get_rmmu_info,
>  	.set_smt_mode = kvmhv_set_smt_mode,
> +	.giveup_ext = kvmhv_giveup_ext,
>  };
>  
>  static int kvm_init_subcore_bitmap(void)

I think HV KVM could leave this pointer as NULL, and then...

> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 17f0315..e724601 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -1061,6 +1061,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
>  		kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
>  		break;
>  	case KVM_MMIO_REG_FPR:
> +		if (!is_kvmppc_hv_enabled(vcpu->kvm))
> +			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
> +

This could become
		if (vcpu->kvm->arch.kvm_ops->giveup_ext)
			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);

and you wouldn't need to fix Book E explicitly.
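
That is, only PR KVM would set the hook in its kvm_ops, e.g.

		.giveup_ext = kvmppc_giveup_ext,

and everyone else leaves the member NULL.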

Paul.
Simon Guo May 3, 2018, 9:21 a.m. UTC | #2
On Thu, May 03, 2018 at 04:08:17PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:41PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > Currently HV KVM saves the math registers (FP/VEC/VSX) when it traps
> > into the host, but PR KVM only saves them when the QEMU task is
> > switched off the CPU.
> > 
> > To emulate an FP/VEC/VSX load, PR KVM needs to flush the math registers
> > first so that it can then update the saved vcpu FPR/VEC/VSX area
> > correctly.
> > 
> > This patch adds a giveup_ext() hook to the KVM ops (an empty one for HV
> > KVM) so that kvmppc_complete_mmio_load() can invoke it to flush the
> > math registers accordingly.
> > 
> > A math register flush is also needed for STORE emulation, which will be
> > covered by a later patch in this series.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> I don't see where you have provided a function for Book E.
> 
> I would suggest you only set the function pointer to non-NULL when the
> function is actually needed, i.e. for PR KVM.
Got it.

> 
> It seems to me that this means that emulation of FP/VMX/VSX loads is
> currently broken for PR KVM for the case where kvm_io_bus_read() is
> able to supply the data, and the emulation of FP/VMX/VSX stores is
> broken for PR KVM for all cases.  Do you agree?
> 
Yes. I think so.

> > diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> > index 5b875ba..7eb5507 100644
> > --- a/arch/powerpc/kvm/book3s_hv.c
> > +++ b/arch/powerpc/kvm/book3s_hv.c
> > @@ -2084,6 +2084,10 @@ static int kvmhv_set_smt_mode(struct kvm *kvm, unsigned long smt_mode,
> >  	return err;
> >  }
> >  
> > +static void kvmhv_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
> > +{
> > +}
> > +
> >  static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
> >  {
> >  	if (vpa->pinned_addr)
> > @@ -4398,6 +4402,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
> >  	.configure_mmu = kvmhv_configure_mmu,
> >  	.get_rmmu_info = kvmhv_get_rmmu_info,
> >  	.set_smt_mode = kvmhv_set_smt_mode,
> > +	.giveup_ext = kvmhv_giveup_ext,
> >  };
> >  
> >  static int kvm_init_subcore_bitmap(void)
> 
> I think HV KVM could leave this pointer as NULL, and then...
ok.

> 
> > diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> > index 17f0315..e724601 100644
> > --- a/arch/powerpc/kvm/powerpc.c
> > +++ b/arch/powerpc/kvm/powerpc.c
> > @@ -1061,6 +1061,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
> >  		kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
> >  		break;
> >  	case KVM_MMIO_REG_FPR:
> > +		if (!is_kvmppc_hv_enabled(vcpu->kvm))
> > +			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
> > +
> 
> This could become
> 		if (vcpu->kvm->arch.kvm_ops->giveup_ext)
> 			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
> 
> and you wouldn't need to fix Book E explicitly.
Yes

> 
> Paul.

Thanks,
- Simon

Patch

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index abe7032..b265538 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -324,6 +324,7 @@ struct kvmppc_ops {
 	int (*get_rmmu_info)(struct kvm *kvm, struct kvm_ppc_rmmu_info *info);
 	int (*set_smt_mode)(struct kvm *kvm, unsigned long mode,
 			    unsigned long flags);
+	void (*giveup_ext)(struct kvm_vcpu *vcpu, ulong msr);
 };
 
 extern struct kvmppc_ops *kvmppc_hv_ops;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 5b875ba..7eb5507 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2084,6 +2084,10 @@ static int kvmhv_set_smt_mode(struct kvm *kvm, unsigned long smt_mode,
 	return err;
 }
 
+static void kvmhv_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
+{
+}
+
 static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
 {
 	if (vpa->pinned_addr)
@@ -4398,6 +4402,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
 	.configure_mmu = kvmhv_configure_mmu,
 	.get_rmmu_info = kvmhv_get_rmmu_info,
 	.set_smt_mode = kvmhv_set_smt_mode,
+	.giveup_ext = kvmhv_giveup_ext,
 };
 
 static int kvm_init_subcore_bitmap(void)
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 67061d3..be26636 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1782,6 +1782,7 @@ static long kvm_arch_vm_ioctl_pr(struct file *filp,
 #ifdef CONFIG_PPC_BOOK3S_64
 	.hcall_implemented = kvmppc_hcall_impl_pr,
 #endif
+	.giveup_ext = kvmppc_giveup_ext,
 };
 
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 17f0315..e724601 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1061,6 +1061,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 		kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
 		break;
 	case KVM_MMIO_REG_FPR:
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
+
 		VCPU_FPR(vcpu, vcpu->arch.io_gpr & KVM_MMIO_REG_MASK) = gpr;
 		break;
 #ifdef CONFIG_PPC_BOOK3S
@@ -1074,6 +1077,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 #endif
 #ifdef CONFIG_VSX
 	case KVM_MMIO_REG_VSX:
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VSX);
+
 		if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_DWORD)
 			kvmppc_set_vsr_dword(vcpu, gpr);
 		else if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_WORD)
@@ -1088,6 +1094,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 #endif
 #ifdef CONFIG_ALTIVEC
 	case KVM_MMIO_REG_VMX:
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VEC);
+
 		kvmppc_set_vmx_dword(vcpu, gpr);
 		break;
 #endif