From patchwork Mon May 7 06:20:07 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909527
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 01/10] KVM: PPC: add pt_regs into kvm_vcpu_arch and move vcpu->arch.gpr[] into it
Date: Mon, 7 May 2018 14:20:07 +0800
Message-Id: <1525674016-6703-2-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
References: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
From: Simon Guo

The registers are currently scattered around the kvm_vcpu_arch structure;
it is neater to gather them into a pt_regs structure. This also enables
the later reimplementation of the MMIO emulation code with analyse_instr().

Signed-off-by: Simon Guo
---
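For orientation, since the whole series migrates toward struct pt_regs: the
relevant part of arch/powerpc's pt_regs layout is sketched below. This is an
abridged sketch, not the full definition; see arch/powerpc/include/asm/ptrace.h
for the real thing:

    /* Abridged sketch of arch/powerpc's struct pt_regs; only the fields
     * this series touches are shown. */
    struct pt_regs {
            unsigned long gpr[32];  /* GPRs: moved here by this patch */
            unsigned long nip;      /* guest PC: moved in patch 02 */
            unsigned long msr;
            unsigned long ctr;      /* moved in patch 02 */
            unsigned long link;     /* "lr" under its kernel name, patch 02 */
            unsigned long xer;      /* moved in patch 02 */
            unsigned long ccr;      /* cr stays a u32 in vcpu->arch for now */
            /* ... */
    };

Since analyse_instr() takes a struct pt_regs *, keeping the vcpu registers in
exactly this layout is what lets patch 05 pass &vcpu->arch.regs straight in.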
 arch/powerpc/include/asm/kvm_book3s.h    |  4 +--
 arch/powerpc/include/asm/kvm_book3s_64.h |  8 ++---
 arch/powerpc/include/asm/kvm_booke.h     |  4 +--
 arch/powerpc/include/asm/kvm_host.h      |  2 +-
 arch/powerpc/kernel/asm-offsets.c        |  2 +-
 arch/powerpc/kvm/book3s_64_vio_hv.c      |  2 +-
 arch/powerpc/kvm/book3s_hv_builtin.c     |  6 ++--
 arch/powerpc/kvm/book3s_hv_rm_mmu.c      | 15 +++++----
 arch/powerpc/kvm/book3s_hv_rm_xics.c     |  2 +-
 arch/powerpc/kvm/book3s_pr.c             | 56 ++++++++++++++++----------------
 arch/powerpc/kvm/book3s_xive_template.c  |  4 +--
 arch/powerpc/kvm/e500_emulate.c          |  4 +--
 12 files changed, 55 insertions(+), 54 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 4c02a73..9de4127 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -273,12 +273,12 @@ static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
 {
-    vcpu->arch.gpr[num] = val;
+    vcpu->arch.regs.gpr[num] = val;
 }
 
 static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
 {
-    return vcpu->arch.gpr[num];
+    return vcpu->arch.regs.gpr[num];
 }
 
 static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index c424e44..38dbcad 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -490,8 +490,8 @@ static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
     vcpu->arch.ppr = vcpu->arch.ppr_tm;
     vcpu->arch.dscr = vcpu->arch.dscr_tm;
     vcpu->arch.tar = vcpu->arch.tar_tm;
-    memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
-           sizeof(vcpu->arch.gpr));
+    memcpy(vcpu->arch.regs.gpr, vcpu->arch.gpr_tm,
+           sizeof(vcpu->arch.regs.gpr));
     vcpu->arch.fp = vcpu->arch.fp_tm;
     vcpu->arch.vr = vcpu->arch.vr_tm;
     vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
@@ -507,8 +507,8 @@ static inline void copy_to_checkpoint(struct kvm_vcpu *vcpu)
     vcpu->arch.ppr_tm = vcpu->arch.ppr;
     vcpu->arch.dscr_tm = vcpu->arch.dscr;
     vcpu->arch.tar_tm = vcpu->arch.tar;
-    memcpy(vcpu->arch.gpr_tm, vcpu->arch.gpr,
-           sizeof(vcpu->arch.gpr));
+    memcpy(vcpu->arch.gpr_tm, vcpu->arch.regs.gpr,
+           sizeof(vcpu->arch.regs.gpr));
     vcpu->arch.fp_tm = vcpu->arch.fp;
     vcpu->arch.vr_tm = vcpu->arch.vr;
     vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
index bc6e29e..f5fc956 100644
--- a/arch/powerpc/include/asm/kvm_booke.h
+++ b/arch/powerpc/include/asm/kvm_booke.h
@@ -36,12 +36,12 @@
 
 static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
 {
-    vcpu->arch.gpr[num] = val;
+    vcpu->arch.regs.gpr[num] = val;
 }
 
 static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
 {
-    return vcpu->arch.gpr[num];
+    return vcpu->arch.regs.gpr[num];
 }
 
 static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 17498e9..1c93d82 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -486,7 +486,7 @@ struct kvm_vcpu_arch {
     struct kvmppc_book3s_shadow_vcpu *shadow_vcpu;
 #endif
 
-    ulong gpr[32];
+    struct pt_regs regs;
 
     struct thread_fp_state fp;
 
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 6bee65f..3dd6984 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -425,7 +425,7 @@ int main(void)
     OFFSET(VCPU_HOST_STACK, kvm_vcpu, arch.host_stack);
     OFFSET(VCPU_HOST_PID, kvm_vcpu, arch.host_pid);
     OFFSET(VCPU_GUEST_PID, kvm_vcpu, arch.pid);
-    OFFSET(VCPU_GPRS, kvm_vcpu, arch.gpr);
+    OFFSET(VCPU_GPRS, kvm_vcpu, arch.regs.gpr);
     OFFSET(VCPU_VRSAVE, kvm_vcpu, arch.vrsave);
     OFFSET(VCPU_FPRS, kvm_vcpu, arch.fp.fpr);
 #ifdef CONFIG_ALTIVEC
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
index 6651f73..bdd872a 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -571,7 +571,7 @@ long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
     page = stt->pages[idx / TCES_PER_PAGE];
     tbl = (u64 *)page_address(page);
 
-    vcpu->arch.gpr[4] = tbl[idx % TCES_PER_PAGE];
+    vcpu->arch.regs.gpr[4] = tbl[idx % TCES_PER_PAGE];
 
     return H_SUCCESS;
 }
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index de18299..2b12758 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -211,9 +211,9 @@ long kvmppc_h_random(struct kvm_vcpu *vcpu)
 
     /* Only need to do the expensive mfmsr() on radix */
     if (kvm_is_radix(vcpu->kvm) && (mfmsr() & MSR_IR))
-        r = powernv_get_random_long(&vcpu->arch.gpr[4]);
+        r = powernv_get_random_long(&vcpu->arch.regs.gpr[4]);
     else
-        r = powernv_get_random_real_mode(&vcpu->arch.gpr[4]);
+        r = powernv_get_random_real_mode(&vcpu->arch.regs.gpr[4]);
     if (r)
         return H_SUCCESS;
 
@@ -562,7 +562,7 @@ unsigned long kvmppc_rm_h_xirr_x(struct kvm_vcpu *vcpu)
 {
     if (!kvmppc_xics_enabled(vcpu))
         return H_TOO_HARD;
-    vcpu->arch.gpr[5] = get_tb();
+    vcpu->arch.regs.gpr[5] = get_tb();
     if (xive_enabled()) {
         if (is_rm())
             return xive_rm_h_xirr(vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index e1c083f..3d3ce7a 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -418,7 +418,8 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
             long pte_index, unsigned long pteh, unsigned long ptel)
 {
     return kvmppc_do_h_enter(vcpu->kvm, flags, pte_index, pteh, ptel,
-                 vcpu->arch.pgdir, true, &vcpu->arch.gpr[4]);
+                 vcpu->arch.pgdir, true,
+                 &vcpu->arch.regs.gpr[4]);
 }
 
 #ifdef __BIG_ENDIAN__
@@ -565,13 +566,13 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags,
              unsigned long pte_index, unsigned long avpn)
 {
     return kvmppc_do_h_remove(vcpu->kvm, flags, pte_index, avpn,
-                  &vcpu->arch.gpr[4]);
+                  &vcpu->arch.regs.gpr[4]);
 }
 
 long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 {
     struct kvm *kvm = vcpu->kvm;
-    unsigned long *args = &vcpu->arch.gpr[4];
+    unsigned long *args = &vcpu->arch.regs.gpr[4];
     __be64 *hp, *hptes[4];
     unsigned long tlbrb[4];
     long int i, j, k, n, found, indexes[4];
@@ -791,8 +792,8 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
             r = rev[i].guest_rpte | (r & (HPTE_R_R | HPTE_R_C));
             r &= ~HPTE_GR_RESERVED;
         }
-        vcpu->arch.gpr[4 + i * 2] = v;
-        vcpu->arch.gpr[5 + i * 2] = r;
+        vcpu->arch.regs.gpr[4 + i * 2] = v;
+        vcpu->arch.regs.gpr[5 + i * 2] = r;
     }
     return H_SUCCESS;
 }
@@ -838,7 +839,7 @@ long kvmppc_h_clear_ref(struct kvm_vcpu *vcpu, unsigned long flags,
             }
         }
     }
-    vcpu->arch.gpr[4] = gr;
+    vcpu->arch.regs.gpr[4] = gr;
     ret = H_SUCCESS;
  out:
     unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
@@ -885,7 +886,7 @@ long kvmppc_h_clear_mod(struct kvm_vcpu *vcpu, unsigned long flags,
             kvmppc_set_dirty_from_hpte(kvm, v, gr);
         }
     }
-    vcpu->arch.gpr[4] = gr;
+    vcpu->arch.regs.gpr[4] = gr;
     ret = H_SUCCESS;
  out:
     unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c
index 2a86261..758d1d2 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_xics.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c
@@ -517,7 +517,7 @@ unsigned long xics_rm_h_xirr(struct kvm_vcpu *vcpu)
     } while (!icp_rm_try_update(icp, old_state, new_state));
 
     /* Return the result in GPR4 */
-    vcpu->arch.gpr[4] = xirr;
+    vcpu->arch.regs.gpr[4] = xirr;
 
     return check_too_hard(xics, icp);
 }
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index d3f304d..899bc9a 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -147,20 +147,20 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu)
 {
     struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
 
-    svcpu->gpr[0] = vcpu->arch.gpr[0];
-    svcpu->gpr[1] = vcpu->arch.gpr[1];
-    svcpu->gpr[2] = vcpu->arch.gpr[2];
-    svcpu->gpr[3] = vcpu->arch.gpr[3];
-    svcpu->gpr[4] = vcpu->arch.gpr[4];
-    svcpu->gpr[5] = vcpu->arch.gpr[5];
-    svcpu->gpr[6] = vcpu->arch.gpr[6];
-    svcpu->gpr[7] = vcpu->arch.gpr[7];
-    svcpu->gpr[8] = vcpu->arch.gpr[8];
-    svcpu->gpr[9] = vcpu->arch.gpr[9];
-    svcpu->gpr[10] = vcpu->arch.gpr[10];
-    svcpu->gpr[11] = vcpu->arch.gpr[11];
-    svcpu->gpr[12] = vcpu->arch.gpr[12];
-    svcpu->gpr[13] = vcpu->arch.gpr[13];
+    svcpu->gpr[0] = vcpu->arch.regs.gpr[0];
+    svcpu->gpr[1] = vcpu->arch.regs.gpr[1];
+    svcpu->gpr[2] = vcpu->arch.regs.gpr[2];
+    svcpu->gpr[3] = vcpu->arch.regs.gpr[3];
+    svcpu->gpr[4] = vcpu->arch.regs.gpr[4];
+    svcpu->gpr[5] = vcpu->arch.regs.gpr[5];
+    svcpu->gpr[6] = vcpu->arch.regs.gpr[6];
+    svcpu->gpr[7] = vcpu->arch.regs.gpr[7];
+    svcpu->gpr[8] = vcpu->arch.regs.gpr[8];
+    svcpu->gpr[9] = vcpu->arch.regs.gpr[9];
+    svcpu->gpr[10] = vcpu->arch.regs.gpr[10];
+    svcpu->gpr[11] = vcpu->arch.regs.gpr[11];
+    svcpu->gpr[12] = vcpu->arch.regs.gpr[12];
+    svcpu->gpr[13] = vcpu->arch.regs.gpr[13];
     svcpu->cr = vcpu->arch.cr;
     svcpu->xer = vcpu->arch.xer;
     svcpu->ctr = vcpu->arch.ctr;
@@ -194,20 +194,20 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu)
     if (!svcpu->in_use)
         goto out;
 
-    vcpu->arch.gpr[0] = svcpu->gpr[0];
-    vcpu->arch.gpr[1] = svcpu->gpr[1];
-    vcpu->arch.gpr[2] = svcpu->gpr[2];
-    vcpu->arch.gpr[3] = svcpu->gpr[3];
-    vcpu->arch.gpr[4] = svcpu->gpr[4];
-    vcpu->arch.gpr[5] = svcpu->gpr[5];
-    vcpu->arch.gpr[6] = svcpu->gpr[6];
-    vcpu->arch.gpr[7] = svcpu->gpr[7];
-    vcpu->arch.gpr[8] = svcpu->gpr[8];
-    vcpu->arch.gpr[9] = svcpu->gpr[9];
-    vcpu->arch.gpr[10] = svcpu->gpr[10];
-    vcpu->arch.gpr[11] = svcpu->gpr[11];
-    vcpu->arch.gpr[12] = svcpu->gpr[12];
-    vcpu->arch.gpr[13] = svcpu->gpr[13];
+    vcpu->arch.regs.gpr[0] = svcpu->gpr[0];
+    vcpu->arch.regs.gpr[1] = svcpu->gpr[1];
+    vcpu->arch.regs.gpr[2] = svcpu->gpr[2];
+    vcpu->arch.regs.gpr[3] = svcpu->gpr[3];
+    vcpu->arch.regs.gpr[4] = svcpu->gpr[4];
+    vcpu->arch.regs.gpr[5] = svcpu->gpr[5];
+    vcpu->arch.regs.gpr[6] = svcpu->gpr[6];
+    vcpu->arch.regs.gpr[7] = svcpu->gpr[7];
+    vcpu->arch.regs.gpr[8] = svcpu->gpr[8];
+    vcpu->arch.regs.gpr[9] = svcpu->gpr[9];
+    vcpu->arch.regs.gpr[10] = svcpu->gpr[10];
+    vcpu->arch.regs.gpr[11] = svcpu->gpr[11];
+    vcpu->arch.regs.gpr[12] = svcpu->gpr[12];
+    vcpu->arch.regs.gpr[13] = svcpu->gpr[13];
     vcpu->arch.cr = svcpu->cr;
     vcpu->arch.xer = svcpu->xer;
     vcpu->arch.ctr = svcpu->ctr;
diff --git a/arch/powerpc/kvm/book3s_xive_template.c b/arch/powerpc/kvm/book3s_xive_template.c
index c7a5dea..b5940fc 100644
--- a/arch/powerpc/kvm/book3s_xive_template.c
+++ b/arch/powerpc/kvm/book3s_xive_template.c
@@ -327,7 +327,7 @@ X_STATIC unsigned long GLUE(X_PFX,h_xirr)(struct kvm_vcpu *vcpu)
      */
 
     /* Return interrupt and old CPPR in GPR4 */
-    vcpu->arch.gpr[4] = hirq | (old_cppr << 24);
+    vcpu->arch.regs.gpr[4] = hirq | (old_cppr << 24);
 
     return H_SUCCESS;
 }
@@ -362,7 +362,7 @@ X_STATIC unsigned long GLUE(X_PFX,h_ipoll)(struct kvm_vcpu *vcpu, unsigned long
     hirq = GLUE(X_PFX,scan_interrupts)(xc, pending, scan_poll);
 
     /* Return interrupt and old CPPR in GPR4 */
-    vcpu->arch.gpr[4] = hirq | (xc->cppr << 24);
+    vcpu->arch.regs.gpr[4] = hirq | (xc->cppr << 24);
 
     return H_SUCCESS;
 }
diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
index 990db69..8f871fb 100644
--- a/arch/powerpc/kvm/e500_emulate.c
+++ b/arch/powerpc/kvm/e500_emulate.c
@@ -53,7 +53,7 @@ static int dbell2prio(ulong param)
 
 static int kvmppc_e500_emul_msgclr(struct kvm_vcpu *vcpu, int rb)
 {
-    ulong param = vcpu->arch.gpr[rb];
+    ulong param = vcpu->arch.regs.gpr[rb];
     int prio = dbell2prio(param);
 
     if (prio < 0)
@@ -65,7 +65,7 @@ static int kvmppc_e500_emul_msgclr(struct kvm_vcpu *vcpu, int rb)
 
 static int kvmppc_e500_emul_msgsnd(struct kvm_vcpu *vcpu, int rb)
 {
-    ulong param = vcpu->arch.gpr[rb];
+    ulong param = vcpu->arch.regs.gpr[rb];
     int prio = dbell2prio(rb);
     int pir = param & PPC_DBELL_PIR_MASK;
     int i;

From patchwork Mon May 7 06:20:08 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909528
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 02/10] KVM: PPC: move nip/ctr/lr/xer registers to pt_regs in kvm_vcpu_arch
Date: Mon, 7 May 2018 14:20:08 +0800
Message-Id: <1525674016-6703-3-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
References: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
From: Simon Guo

This patch moves the nip/ctr/lr/xer registers from their scattered places
in kvm_vcpu_arch into the pt_regs structure.

The cr register is "unsigned long" in pt_regs but u32 in vcpu->arch; it
needs more consideration and may be moved in a later patch.

Signed-off-by: Simon Guo
---
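A note on why hand-written assembly needs no changes beyond asm-offsets.c:
assembly accesses these fields through generated VCPU_* constants, so the C
fields can be renamed freely as long as the OFFSET() entries follow. A
simplified sketch of the mechanism (the real DEFINE()/OFFSET() macros live in
include/linux/kbuild.h; this is an illustration, not the kernel code):

    #include <linux/stddef.h>

    /* asm-offsets.c is compiled only to the assembly stage; the build
     * scrapes the "->SYM value" strings out of the .s file and turns
     * them into #defines in the generated asm-offsets.h. */
    #define DEFINE(sym, val) \
            asm volatile("\n.ascii \"->" #sym " %0 \"" : : "i" (val))
    #define OFFSET(sym, str, mem) DEFINE(sym, offsetof(struct str, mem))

    int main(void)
    {
            /* After this patch the same VCPU_PC symbol simply points
             * into regs, so the assembly users are untouched: */
            OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);  /* was arch.pc */
            OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link); /* was arch.lr */
            return 0;
    }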
 arch/powerpc/include/asm/kvm_book3s.h    | 16 ++++++-------
 arch/powerpc/include/asm/kvm_book3s_64.h | 12 +++++-----
 arch/powerpc/include/asm/kvm_booke.h     | 16 ++++++-------
 arch/powerpc/include/asm/kvm_host.h      |  4 ----
 arch/powerpc/kernel/asm-offsets.c        | 16 ++++++-------
 arch/powerpc/kvm/book3s_32_mmu.c         |  2 +-
 arch/powerpc/kvm/book3s_hv.c             |  6 ++---
 arch/powerpc/kvm/book3s_hv_tm.c          | 10 ++++----
 arch/powerpc/kvm/book3s_hv_tm_builtin.c  | 10 ++++----
 arch/powerpc/kvm/book3s_pr.c             | 16 ++++++-------
 arch/powerpc/kvm/booke.c                 | 41 +++++++++++++++++---------------
 arch/powerpc/kvm/booke_emulate.c         |  6 ++---
 arch/powerpc/kvm/e500_emulate.c          |  2 +-
 arch/powerpc/kvm/e500_mmu.c              |  2 +-
 14 files changed, 79 insertions(+), 80 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9de4127..d39d608 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -293,42 +293,42 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
 {
-    vcpu->arch.xer = val;
+    vcpu->arch.regs.xer = val;
 }
 
 static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
-    return vcpu->arch.xer;
+    return vcpu->arch.regs.xer;
 }
 
 static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
 {
-    vcpu->arch.ctr = val;
+    vcpu->arch.regs.ctr = val;
 }
 
 static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
 {
-    return vcpu->arch.ctr;
+    return vcpu->arch.regs.ctr;
 }
 
 static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
 {
-    vcpu->arch.lr = val;
+    vcpu->arch.regs.link = val;
 }
 
 static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
 {
-    return vcpu->arch.lr;
+    return vcpu->arch.regs.link;
 }
 
 static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
 {
-    vcpu->arch.pc = val;
+    vcpu->arch.regs.nip = val;
 }
 
 static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 {
-    return vcpu->arch.pc;
+    return vcpu->arch.regs.nip;
 }
 
 static inline u64 kvmppc_get_msr(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 38dbcad..dc435a5 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -483,9 +483,9 @@ static inline u64 sanitize_msr(u64 msr)
 static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
 {
     vcpu->arch.cr = vcpu->arch.cr_tm;
-    vcpu->arch.xer = vcpu->arch.xer_tm;
-    vcpu->arch.lr = vcpu->arch.lr_tm;
-    vcpu->arch.ctr = vcpu->arch.ctr_tm;
+    vcpu->arch.regs.xer = vcpu->arch.xer_tm;
+    vcpu->arch.regs.link = vcpu->arch.lr_tm;
+    vcpu->arch.regs.ctr = vcpu->arch.ctr_tm;
     vcpu->arch.amr = vcpu->arch.amr_tm;
     vcpu->arch.ppr = vcpu->arch.ppr_tm;
     vcpu->arch.dscr = vcpu->arch.dscr_tm;
@@ -500,9 +500,9 @@ static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
 static inline void copy_to_checkpoint(struct kvm_vcpu *vcpu)
 {
     vcpu->arch.cr_tm = vcpu->arch.cr;
-    vcpu->arch.xer_tm = vcpu->arch.xer;
-    vcpu->arch.lr_tm = vcpu->arch.lr;
-    vcpu->arch.ctr_tm = vcpu->arch.ctr;
+    vcpu->arch.xer_tm = vcpu->arch.regs.xer;
+    vcpu->arch.lr_tm = vcpu->arch.regs.link;
+    vcpu->arch.ctr_tm = vcpu->arch.regs.ctr;
     vcpu->arch.amr_tm = vcpu->arch.amr;
     vcpu->arch.ppr_tm = vcpu->arch.ppr;
     vcpu->arch.dscr_tm = vcpu->arch.dscr;
diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
index f5fc956..d513e3e 100644
--- a/arch/powerpc/include/asm/kvm_booke.h
+++ b/arch/powerpc/include/asm/kvm_booke.h
@@ -56,12 +56,12 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
 {
-    vcpu->arch.xer = val;
+    vcpu->arch.regs.xer = val;
 }
 
 static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
-    return vcpu->arch.xer;
+    return vcpu->arch.regs.xer;
 }
 
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
@@ -72,32 +72,32 @@ static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
 {
-    vcpu->arch.ctr = val;
+    vcpu->arch.regs.ctr = val;
 }
 
 static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
 {
-    return vcpu->arch.ctr;
+    return vcpu->arch.regs.ctr;
 }
 
 static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
 {
-    vcpu->arch.lr = val;
+    vcpu->arch.regs.link = val;
 }
 
 static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
 {
-    return vcpu->arch.lr;
+    return vcpu->arch.regs.link;
 }
 
 static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
 {
-    vcpu->arch.pc = val;
+    vcpu->arch.regs.nip = val;
 }
 
 static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 {
-    return vcpu->arch.pc;
+    return vcpu->arch.regs.nip;
 }
 
 static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 1c93d82..2d87768 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -521,14 +521,10 @@ struct kvm_vcpu_arch {
     u32 qpr[32];
 #endif
 
-    ulong pc;
-    ulong ctr;
-    ulong lr;
 #ifdef CONFIG_PPC_BOOK3S
     ulong tar;
 #endif
 
-    ulong xer;
     u32 cr;
 
 #ifdef CONFIG_PPC_BOOK3S
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 3dd6984..695676c 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -431,14 +431,14 @@ int main(void)
 #ifdef CONFIG_ALTIVEC
     OFFSET(VCPU_VRS, kvm_vcpu, arch.vr.vr);
 #endif
-    OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
-    OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
-    OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
+    OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
+    OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
+    OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
 #ifdef CONFIG_PPC_BOOK3S
     OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
 #endif
     OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-    OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
+    OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
     OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr);
     OFFSET(VCPU_SRR0, kvm_vcpu, arch.shregs.srr0);
@@ -694,10 +694,10 @@ int main(void)
 
 #else /* CONFIG_PPC_BOOK3S */
     OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-    OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
-    OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
-    OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
-    OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
+    OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
+    OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
+    OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
+    OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
     OFFSET(VCPU_SPRG9, kvm_vcpu, arch.sprg9);
     OFFSET(VCPU_LAST_INST, kvm_vcpu, arch.last_inst);
     OFFSET(VCPU_FAULT_DEAR, kvm_vcpu, arch.fault_dear);
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 1992676..45c8ea4 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -52,7 +52,7 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 {
 #ifdef DEBUG_MMU_PTE_IP
-    return vcpu->arch.pc == DEBUG_MMU_PTE_IP;
+    return vcpu->arch.regs.nip == DEBUG_MMU_PTE_IP;
 #else
     return true;
 #endif
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4d07fca..5b875ba 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -371,13 +371,13 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
 
     pr_err("vcpu %p (%d):\n", vcpu, vcpu->vcpu_id);
     pr_err("pc = %.16lx msr = %.16llx trap = %x\n",
-           vcpu->arch.pc, vcpu->arch.shregs.msr, vcpu->arch.trap);
+           vcpu->arch.regs.nip, vcpu->arch.shregs.msr, vcpu->arch.trap);
     for (r = 0; r < 16; ++r)
         pr_err("r%2d = %.16lx  r%d = %.16lx\n",
                r, kvmppc_get_gpr(vcpu, r),
                r+16, kvmppc_get_gpr(vcpu, r+16));
     pr_err("ctr = %.16lx  lr  = %.16lx\n",
-           vcpu->arch.ctr, vcpu->arch.lr);
+           vcpu->arch.regs.ctr, vcpu->arch.regs.link);
     pr_err("srr0 = %.16llx srr1 = %.16llx\n",
            vcpu->arch.shregs.srr0, vcpu->arch.shregs.srr1);
     pr_err("sprg0 = %.16llx sprg1 = %.16llx\n",
@@ -385,7 +385,7 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
     pr_err("sprg2 = %.16llx sprg3 = %.16llx\n",
            vcpu->arch.shregs.sprg2, vcpu->arch.shregs.sprg3);
     pr_err("cr = %.8x  xer = %.16lx  dsisr = %.8x\n",
-           vcpu->arch.cr, vcpu->arch.xer, vcpu->arch.shregs.dsisr);
+           vcpu->arch.cr, vcpu->arch.regs.xer, vcpu->arch.shregs.dsisr);
     pr_err("dar = %.16llx\n", vcpu->arch.shregs.dar);
     pr_err("fault dar = %.16lx dsisr = %.8x\n",
            vcpu->arch.fault_dar, vcpu->arch.fault_dsisr);
diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index bf710ad..0082850 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/powerpc/kvm/book3s_hv_tm.c
@@ -19,7 +19,7 @@ static void emulate_tx_failure(struct kvm_vcpu *vcpu, u64 failure_cause)
     u64 texasr, tfiar;
     u64 msr = vcpu->arch.shregs.msr;
 
-    tfiar = vcpu->arch.pc & ~0x3ull;
+    tfiar = vcpu->arch.regs.nip & ~0x3ull;
     texasr = (failure_cause << 56) | TEXASR_ABORT | TEXASR_FS | TEXASR_EXACT;
     if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr))
         texasr |= TEXASR_SUSP;
@@ -57,8 +57,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
                (newmsr & MSR_TM)));
         newmsr = sanitize_msr(newmsr);
         vcpu->arch.shregs.msr = newmsr;
-        vcpu->arch.cfar = vcpu->arch.pc - 4;
-        vcpu->arch.pc = vcpu->arch.shregs.srr0;
+        vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+        vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
         return RESUME_GUEST;
 
     case PPC_INST_RFEBB:
@@ -90,8 +90,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
         vcpu->arch.bescr = bescr;
         msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
         vcpu->arch.shregs.msr = msr;
-        vcpu->arch.cfar = vcpu->arch.pc - 4;
-        vcpu->arch.pc = vcpu->arch.ebbrr;
+        vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+        vcpu->arch.regs.nip = vcpu->arch.ebbrr;
         return RESUME_GUEST;
 
     case PPC_INST_MTMSRD:
diff --git a/arch/powerpc/kvm/book3s_hv_tm_builtin.c b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
index d98ccfd..b2c7c6f 100644
--- a/arch/powerpc/kvm/book3s_hv_tm_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
@@ -35,8 +35,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
             return 0;
         newmsr = sanitize_msr(newmsr);
         vcpu->arch.shregs.msr = newmsr;
-        vcpu->arch.cfar = vcpu->arch.pc - 4;
-        vcpu->arch.pc = vcpu->arch.shregs.srr0;
+        vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+        vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
         return 1;
 
     case PPC_INST_RFEBB:
@@ -58,8 +58,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
         mtspr(SPRN_BESCR, bescr);
         msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
         vcpu->arch.shregs.msr = msr;
-        vcpu->arch.cfar = vcpu->arch.pc - 4;
-        vcpu->arch.pc = mfspr(SPRN_EBBRR);
+        vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+        vcpu->arch.regs.nip = mfspr(SPRN_EBBRR);
         return 1;
 
     case PPC_INST_MTMSRD:
@@ -103,7 +103,7 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 void kvmhv_emulate_tm_rollback(struct kvm_vcpu *vcpu)
 {
     vcpu->arch.shregs.msr &= ~MSR_TS_MASK;  /* go to N state */
-    vcpu->arch.pc = vcpu->arch.tfhar;
+    vcpu->arch.regs.nip = vcpu->arch.tfhar;
     copy_from_checkpoint(vcpu);
     vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) | 0xa0000000;
 }
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 899bc9a..67061d3 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -162,10 +162,10 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu)
     svcpu->gpr[12] = vcpu->arch.regs.gpr[12];
     svcpu->gpr[13] = vcpu->arch.regs.gpr[13];
     svcpu->cr = vcpu->arch.cr;
-    svcpu->xer = vcpu->arch.xer;
-    svcpu->ctr = vcpu->arch.ctr;
-    svcpu->lr = vcpu->arch.lr;
-    svcpu->pc = vcpu->arch.pc;
+    svcpu->xer = vcpu->arch.regs.xer;
+    svcpu->ctr = vcpu->arch.regs.ctr;
+    svcpu->lr = vcpu->arch.regs.link;
+    svcpu->pc = vcpu->arch.regs.nip;
 #ifdef CONFIG_PPC_BOOK3S_64
     svcpu->shadow_fscr = vcpu->arch.shadow_fscr;
 #endif
@@ -209,10 +209,10 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu)
     vcpu->arch.regs.gpr[12] = svcpu->gpr[12];
     vcpu->arch.regs.gpr[13] = svcpu->gpr[13];
     vcpu->arch.cr = svcpu->cr;
-    vcpu->arch.xer = svcpu->xer;
-    vcpu->arch.ctr = svcpu->ctr;
-    vcpu->arch.lr = svcpu->lr;
-    vcpu->arch.pc = svcpu->pc;
+    vcpu->arch.regs.xer = svcpu->xer;
+    vcpu->arch.regs.ctr = svcpu->ctr;
+    vcpu->arch.regs.link = svcpu->lr;
+    vcpu->arch.regs.nip = svcpu->pc;
     vcpu->arch.shadow_srr1 = svcpu->shadow_srr1;
     vcpu->arch.fault_dar = svcpu->fault_dar;
     vcpu->arch.fault_dsisr = svcpu->fault_dsisr;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 6038e2e..05999c2 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -77,8 +77,10 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu)
 {
     int i;
 
-    printk("pc: %08lx msr: %08llx\n", vcpu->arch.pc, vcpu->arch.shared->msr);
-    printk("lr: %08lx ctr: %08lx\n", vcpu->arch.lr, vcpu->arch.ctr);
+    printk("pc: %08lx msr: %08llx\n", vcpu->arch.regs.nip,
+           vcpu->arch.shared->msr);
+    printk("lr: %08lx ctr: %08lx\n", vcpu->arch.regs.link,
+           vcpu->arch.regs.ctr);
     printk("srr0: %08llx srr1: %08llx\n", vcpu->arch.shared->srr0,
            vcpu->arch.shared->srr1);
 
@@ -484,24 +486,25 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
     if (allowed) {
         switch (int_class) {
         case INT_CLASS_NONCRIT:
-            set_guest_srr(vcpu, vcpu->arch.pc,
+            set_guest_srr(vcpu, vcpu->arch.regs.nip,
                           vcpu->arch.shared->msr);
             break;
         case INT_CLASS_CRIT:
-            set_guest_csrr(vcpu, vcpu->arch.pc,
+            set_guest_csrr(vcpu, vcpu->arch.regs.nip,
                            vcpu->arch.shared->msr);
             break;
         case INT_CLASS_DBG:
-            set_guest_dsrr(vcpu, vcpu->arch.pc,
+            set_guest_dsrr(vcpu, vcpu->arch.regs.nip,
                            vcpu->arch.shared->msr);
             break;
         case INT_CLASS_MC:
-            set_guest_mcsrr(vcpu, vcpu->arch.pc,
+            set_guest_mcsrr(vcpu, vcpu->arch.regs.nip,
                             vcpu->arch.shared->msr);
             break;
         }
 
-        vcpu->arch.pc = vcpu->arch.ivpr | vcpu->arch.ivor[priority];
+        vcpu->arch.regs.nip = vcpu->arch.ivpr |
+                              vcpu->arch.ivor[priority];
         if (update_esr == true)
             kvmppc_set_esr(vcpu, vcpu->arch.queued_esr);
         if (update_dear == true)
@@ -819,7 +822,7 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 
     case EMULATE_FAIL:
         printk(KERN_CRIT "%s: emulation at %lx failed (%08x)\n",
-               __func__, vcpu->arch.pc, vcpu->arch.last_inst);
+               __func__, vcpu->arch.regs.nip, vcpu->arch.last_inst);
         /* For debugging, encode the failing instruction and
          * report it to userspace. */
         run->hw.hardware_exit_reason = ~0ULL << 32;
@@ -868,7 +871,7 @@ static int kvmppc_handle_debug(struct kvm_run *run, struct kvm_vcpu *vcpu)
      */
     vcpu->arch.dbsr = 0;
     run->debug.arch.status = 0;
-    run->debug.arch.address = vcpu->arch.pc;
+    run->debug.arch.address = vcpu->arch.regs.nip;
 
     if (dbsr & (DBSR_IAC1 | DBSR_IAC2 | DBSR_IAC3 | DBSR_IAC4)) {
         run->debug.arch.status |= KVMPPC_DEBUG_BREAKPOINT;
@@ -964,7 +967,7 @@ static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
     case EMULATE_FAIL:
         pr_debug("%s: load instruction from guest address %lx failed\n",
-                 __func__, vcpu->arch.pc);
+                 __func__, vcpu->arch.regs.nip);
         /* For debugging, encode the failing instruction and
          * report it to userspace. */
         run->hw.hardware_exit_reason = ~0ULL << 32;
@@ -1162,7 +1165,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
     case BOOKE_INTERRUPT_SPE_FP_DATA:
     case BOOKE_INTERRUPT_SPE_FP_ROUND:
         printk(KERN_CRIT "%s: unexpected SPE interrupt %u at %08lx\n",
-               __func__, exit_nr, vcpu->arch.pc);
+               __func__, exit_nr, vcpu->arch.regs.nip);
         run->hw.hardware_exit_reason = exit_nr;
         r = RESUME_HOST;
         break;
@@ -1292,7 +1295,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
     }
 
     case BOOKE_INTERRUPT_ITLB_MISS: {
-        unsigned long eaddr = vcpu->arch.pc;
+        unsigned long eaddr = vcpu->arch.regs.nip;
         gpa_t gpaddr;
         gfn_t gfn;
         int gtlb_index;
@@ -1384,7 +1387,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
     int i;
     int r;
 
-    vcpu->arch.pc = 0;
+    vcpu->arch.regs.nip = 0;
     vcpu->arch.shared->pir = vcpu->vcpu_id;
     kvmppc_set_gpr(vcpu, 1, (16<<20) - 8); /* -8 for the callee-save LR slot */
     kvmppc_set_msr(vcpu, 0);
@@ -1433,10 +1436,10 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
     vcpu_load(vcpu);
 
-    regs->pc = vcpu->arch.pc;
+    regs->pc = vcpu->arch.regs.nip;
     regs->cr = kvmppc_get_cr(vcpu);
-    regs->ctr = vcpu->arch.ctr;
-    regs->lr = vcpu->arch.lr;
+    regs->ctr = vcpu->arch.regs.ctr;
+    regs->lr = vcpu->arch.regs.link;
     regs->xer = kvmppc_get_xer(vcpu);
     regs->msr = vcpu->arch.shared->msr;
     regs->srr0 = kvmppc_get_srr0(vcpu);
@@ -1464,10 +1467,10 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
     vcpu_load(vcpu);
 
-    vcpu->arch.pc = regs->pc;
+    vcpu->arch.regs.nip = regs->pc;
     kvmppc_set_cr(vcpu, regs->cr);
-    vcpu->arch.ctr = regs->ctr;
-    vcpu->arch.lr = regs->lr;
+    vcpu->arch.regs.ctr = regs->ctr;
+    vcpu->arch.regs.link = regs->lr;
     kvmppc_set_xer(vcpu, regs->xer);
     kvmppc_set_msr(vcpu, regs->msr);
     kvmppc_set_srr0(vcpu, regs->srr0);
diff --git a/arch/powerpc/kvm/booke_emulate.c b/arch/powerpc/kvm/booke_emulate.c
index a82f645..d23e582 100644
--- a/arch/powerpc/kvm/booke_emulate.c
+++ b/arch/powerpc/kvm/booke_emulate.c
@@ -34,19 +34,19 @@
 
 static void kvmppc_emul_rfi(struct kvm_vcpu *vcpu)
 {
-    vcpu->arch.pc = vcpu->arch.shared->srr0;
+    vcpu->arch.regs.nip = vcpu->arch.shared->srr0;
     kvmppc_set_msr(vcpu, vcpu->arch.shared->srr1);
 }
 
 static void kvmppc_emul_rfdi(struct kvm_vcpu *vcpu)
 {
-    vcpu->arch.pc = vcpu->arch.dsrr0;
+    vcpu->arch.regs.nip = vcpu->arch.dsrr0;
     kvmppc_set_msr(vcpu, vcpu->arch.dsrr1);
 }
 
 static void kvmppc_emul_rfci(struct kvm_vcpu *vcpu)
 {
-    vcpu->arch.pc = vcpu->arch.csrr0;
+    vcpu->arch.regs.nip = vcpu->arch.csrr0;
     kvmppc_set_msr(vcpu, vcpu->arch.csrr1);
 }
 
diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
index 8f871fb..3f8189e 100644
--- a/arch/powerpc/kvm/e500_emulate.c
+++ b/arch/powerpc/kvm/e500_emulate.c
@@ -94,7 +94,7 @@ static int kvmppc_e500_emul_ehpriv(struct kvm_run *run, struct kvm_vcpu *vcpu,
     switch (get_oc(inst)) {
     case EHPRIV_OC_DEBUG:
         run->exit_reason = KVM_EXIT_DEBUG;
-        run->debug.arch.address = vcpu->arch.pc;
+        run->debug.arch.address = vcpu->arch.regs.nip;
         run->debug.arch.status = 0;
         kvmppc_account_exit(vcpu, DEBUG_EXITS);
         emulated = EMULATE_EXIT_USER;
diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c
index ddbf8f0..24296f4 100644
--- a/arch/powerpc/kvm/e500_mmu.c
+++ b/arch/powerpc/kvm/e500_mmu.c
@@ -513,7 +513,7 @@ void kvmppc_mmu_itlb_miss(struct kvm_vcpu *vcpu)
 {
     unsigned int as = !!(vcpu->arch.shared->msr & MSR_IS);
 
-    kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.pc, as);
+    kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.regs.nip, as);
 }
 
 void kvmppc_mmu_dtlb_miss(struct kvm_vcpu *vcpu)

From patchwork Mon May 7 06:20:09 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909536
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 03/10] KVM: PPC: Fix a mmio_host_swabbed uninitialized usage issue when VMX store
Date: Mon, 7 May 2018 14:20:09 +0800
Message-Id: <1525674016-6703-4-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
References: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
From: Simon Guo

When KVM emulates a VMX store, it invokes kvmppc_get_vmx_data() to
retrieve the VMX register value, and kvmppc_get_vmx_data() checks
mmio_host_swabbed to decide which doubleword of vr[] to use. But
mmio_host_swabbed can be uninitialized during the VMX store procedure:

    kvmppc_emulate_loadstore
        \- kvmppc_handle_store128_by2x64
            \- kvmppc_get_vmx_data

So vcpu->arch.mmio_host_swabbed is not meant to be used at all for
emulation of store instructions, and this patch makes that true for
VMX stores. It also initializes mmio_host_swabbed to avoid invisible
trouble.

Signed-off-by: Simon Guo
---
 arch/powerpc/kvm/emulate_loadstore.c | 1 +
 arch/powerpc/kvm/powerpc.c           | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index a382e15..b8a3aef 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -111,6 +111,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
     vcpu->arch.mmio_sp64_extend = 0;
     vcpu->arch.mmio_sign_extend = 0;
     vcpu->arch.mmio_vmx_copy_nums = 0;
+    vcpu->arch.mmio_host_swabbed = 0;
 
     switch (get_op(inst)) {
     case 31:
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 4e38764..bef27b1 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1374,7 +1374,7 @@ static inline int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
     if (di > 1)
         return -1;
 
-    if (vcpu->arch.mmio_host_swabbed)
+    if (kvmppc_need_byteswap(vcpu))
         di = 1 - di;
 
     w0 = vrs.u[di * 2];
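For illustration, a minimal sketch (simplified, not the kernel code) of the
failure mode the patch above fixes; vmx_dword_index() is a hypothetical helper
name used only for this example:

    /* mmio_host_swabbed is only computed on the *load* completion path,
     * so during a VMX store it still holds whatever the previous
     * emulation left behind. Deriving the doubleword index from the
     * guest's current endianness is always well defined. */
    static int vmx_dword_index(struct kvm_vcpu *vcpu, int di)
    {
            /* before: if (vcpu->arch.mmio_host_swabbed) di = 1 - di; */
            if (kvmppc_need_byteswap(vcpu))   /* after */
                    di = 1 - di;
            return di;
    }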
From patchwork Mon May 7 06:20:10 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909529
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 04/10] KVM: PPC: add KVMPPC_VSX_COPY_WORD_LOAD_DUMP type support for mmio emulation
Date: Mon, 7 May 2018 14:20:10 +0800
Message-Id: <1525674016-6703-5-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
References: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

Some VSX instructions, such as lxvwsx, splat a word into the VSR. This
patch adds the VSX copy type KVMPPC_VSX_COPY_WORD_LOAD_DUMP to support
this.

Signed-off-by: Simon Guo
Reviewed-by: Paul Mackerras
---
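What "word load & dump" means in plain C (a hedged illustration, not the
kernel code): lxvwsx reads one 32-bit word from memory and replicates it
across all four word slots of the 128-bit VSR, which is what the new
kvmppc_set_vsr_word_dump() below mirrors with val.vsx32val[0..3] = gpr:

    #include <stdint.h>

    /* Replicate one word across a 128-bit vector register image. */
    static void vsx_word_splat(uint32_t vsr[4], uint32_t word)
    {
            for (int i = 0; i < 4; i++)
                    vsr[i] = word;
    }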
 arch/powerpc/include/asm/kvm_host.h |  1 +
 arch/powerpc/kvm/powerpc.c          | 23 +++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 2d87768..3fb5e8d 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -454,6 +454,7 @@ struct mmio_hpte_cache {
 #define KVMPPC_VSX_COPY_WORD            1
 #define KVMPPC_VSX_COPY_DWORD           2
 #define KVMPPC_VSX_COPY_DWORD_LOAD_DUMP 3
+#define KVMPPC_VSX_COPY_WORD_LOAD_DUMP  4
 
 struct openpic;
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index bef27b1..45daf3b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -907,6 +907,26 @@ static inline void kvmppc_set_vsr_dword_dump(struct kvm_vcpu *vcpu,
     }
 }
 
+static inline void kvmppc_set_vsr_word_dump(struct kvm_vcpu *vcpu,
+    u32 gpr)
+{
+    union kvmppc_one_reg val;
+    int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK;
+
+    if (vcpu->arch.mmio_vsx_tx_sx_enabled) {
+        val.vsx32val[0] = gpr;
+        val.vsx32val[1] = gpr;
+        val.vsx32val[2] = gpr;
+        val.vsx32val[3] = gpr;
+        VCPU_VSX_VR(vcpu, index) = val.vval;
+    } else {
+        val.vsx32val[0] = gpr;
+        val.vsx32val[1] = gpr;
+        VCPU_VSX_FPR(vcpu, index, 0) = val.vsxval[0];
+        VCPU_VSX_FPR(vcpu, index, 1) = val.vsxval[0];
+    }
+}
+
 static inline void kvmppc_set_vsr_word(struct kvm_vcpu *vcpu,
     u32 gpr32)
 {
@@ -1061,6 +1081,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
         else if (vcpu->arch.mmio_vsx_copy_type ==
                 KVMPPC_VSX_COPY_DWORD_LOAD_DUMP)
             kvmppc_set_vsr_dword_dump(vcpu, gpr);
+        else if (vcpu->arch.mmio_vsx_copy_type ==
+                KVMPPC_VSX_COPY_WORD_LOAD_DUMP)
+            kvmppc_set_vsr_word_dump(vcpu, gpr);
         break;
 #endif
 #ifdef CONFIG_ALTIVEC

From patchwork Mon May 7 06:20:11 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909530
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 05/10] KVM: PPC: reimplement non-SIMD LOAD/STORE instruction mmio emulation with analyse_instr() input
Date: Mon, 7 May 2018 14:20:11 +0800
Message-Id: <1525674016-6703-6-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
References: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
From: Simon Guo

This patch reimplements non-SIMD LOAD/STORE instruction MMIO emulation
with analyse_instr() input. It utilizes the BYTEREV/UPDATE/SIGNEXT
properties exported by analyse_instr() and invokes
kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.

It also moves CACHEOP type handling into the skeleton.

instruction_type within kvm_ppc.h is renamed to instruction_fetch_type
to avoid a conflict with sstep.h.

Suggested-by: Paul Mackerras
Signed-off-by: Simon Guo
---
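Condensed sketch of the new decode skeleton introduced below (the real code is
in kvmppc_emulate_loadstore(); the flags and accessors are those of
asm/sstep.h):

    struct instruction_op op;

    vcpu->arch.regs.msr = vcpu->arch.shared->msr;
    vcpu->arch.regs.ccr = vcpu->arch.cr;
    if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
            int size = GETSIZE(op.type);

            switch (op.type & INSTR_TYPE_MASK) {
            case LOAD:
                    /* SIGNEXT selects the sign-extending handler; BYTEREV
                     * marks a byte-reversed access (lwbrx and friends). */
                    emulated = kvmppc_handle_load(run, vcpu, op.reg, size,
                                                  !(op.type & BYTEREV));
                    break;
            case STORE:
                    /* op.val already holds the possibly byte-reversed data. */
                    emulated = kvmppc_handle_store(run, vcpu, op.val, size, 1);
                    break;
            }
            if ((op.type & UPDATE) && emulated != EMULATE_FAIL)
                    kvmppc_set_gpr(vcpu, op.update_reg, op.ea);  /* rA update */
    }

One opcode table (sstep.c) now serves interpretation, single-stepping and MMIO
emulation alike, instead of KVM keeping its own OP_31_XOP_* switch in sync.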
Suggested-by: Paul Mackerras Signed-off-by: Simon Guo --- arch/powerpc/include/asm/kvm_ppc.h | 6 +- arch/powerpc/kvm/book3s.c | 4 +- arch/powerpc/kvm/e500_mmu_host.c | 8 +- arch/powerpc/kvm/emulate_loadstore.c | 271 ++++++----------------------------- 4 files changed, 51 insertions(+), 238 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index abe7032..139cdf0 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -52,7 +52,7 @@ enum emulation_result { EMULATE_EXIT_USER, /* emulation requires exit to user-space */ }; -enum instruction_type { +enum instruction_fetch_type { INST_GENERIC, INST_SC, /* system call */ }; @@ -93,7 +93,7 @@ extern int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu, int is_default_endian); extern int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, - enum instruction_type type, u32 *inst); + enum instruction_fetch_type type, u32 *inst); extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data); @@ -330,7 +330,7 @@ struct kvmppc_ops { extern struct kvmppc_ops *kvmppc_pr_ops; static inline int kvmppc_get_last_inst(struct kvm_vcpu *vcpu, - enum instruction_type type, u32 *inst) + enum instruction_fetch_type type, u32 *inst) { int ret = EMULATE_DONE; u32 fetched_inst; diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c index 97d4a11..320cdcf 100644 --- a/arch/powerpc/kvm/book3s.c +++ b/arch/powerpc/kvm/book3s.c @@ -450,8 +450,8 @@ int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, enum xlate_instdata xlid, return r; } -int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, - u32 *inst) +int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, + enum instruction_fetch_type type, u32 *inst) { ulong pc = kvmppc_get_pc(vcpu); int r; diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c index c878b4f..8f2985e 100644 --- a/arch/powerpc/kvm/e500_mmu_host.c +++ b/arch/powerpc/kvm/e500_mmu_host.c @@ -625,8 +625,8 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr, } #ifdef CONFIG_KVM_BOOKE_HV -int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, - u32 *instr) +int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, + enum instruction_fetch_type type, u32 *instr) { gva_t geaddr; hpa_t addr; @@ -715,8 +715,8 @@ int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, return EMULATE_DONE; } #else -int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, enum instruction_type type, - u32 *instr) +int kvmppc_load_last_inst(struct kvm_vcpu *vcpu, + enum instruction_fetch_type type, u32 *instr) { return EMULATE_AGAIN; } diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c index b8a3aef..2a91845 100644 --- a/arch/powerpc/kvm/emulate_loadstore.c +++ b/arch/powerpc/kvm/emulate_loadstore.c @@ -31,6 +31,7 @@ #include #include #include +#include #include "timing.h" #include "trace.h" @@ -84,8 +85,9 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) struct kvm_run *run = vcpu->run; u32 inst; int ra, rs, rt; - enum emulation_result emulated; + enum emulation_result emulated = EMULATE_FAIL; int advance = 1; + struct instruction_op op; /* this default type might be overwritten by subcategories */ kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS); @@ -113,144 +115,61 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) vcpu->arch.mmio_vmx_copy_nums = 0; vcpu->arch.mmio_host_swabbed = 0; - switch (get_op(inst)) { - case 
31: - switch (get_xop(inst)) { - case OP_31_XOP_LWZX: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); - break; - - case OP_31_XOP_LWZUX: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_LBZX: - emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); - break; - - case OP_31_XOP_LBZUX: - emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_STDX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 1); - break; - - case OP_31_XOP_STDUX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; + emulated = EMULATE_FAIL; + vcpu->arch.regs.msr = vcpu->arch.shared->msr; + vcpu->arch.regs.ccr = vcpu->arch.cr; + if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) { + int type = op.type & INSTR_TYPE_MASK; + int size = GETSIZE(op.type); - case OP_31_XOP_STWX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 4, 1); - break; + switch (type) { + case LOAD: { + int instr_byte_swap = op.type & BYTEREV; - case OP_31_XOP_STWUX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; + if (op.type & SIGNEXT) + emulated = kvmppc_handle_loads(run, vcpu, + op.reg, size, !instr_byte_swap); + else + emulated = kvmppc_handle_load(run, vcpu, + op.reg, size, !instr_byte_swap); - case OP_31_XOP_STBX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 1, 1); - break; + if ((op.type & UPDATE) && (emulated != EMULATE_FAIL)) + kvmppc_set_gpr(vcpu, op.update_reg, op.ea); - case OP_31_XOP_STBUX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 1, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_LHAX: - emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); - break; - - case OP_31_XOP_LHAUX: - emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_LHZX: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); break; + } + case STORE: + /* if need byte reverse, op.val has been reversed by + * analyse_instr(). + */ + emulated = kvmppc_handle_store(run, vcpu, op.val, + size, 1); - case OP_31_XOP_LHZUX: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; + if ((op.type & UPDATE) && (emulated != EMULATE_FAIL)) + kvmppc_set_gpr(vcpu, op.update_reg, op.ea); - case OP_31_XOP_STHX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 1); - break; - - case OP_31_XOP_STHUX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); break; - - case OP_31_XOP_DCBST: - case OP_31_XOP_DCBF: - case OP_31_XOP_DCBI: + case CACHEOP: /* Do nothing. The guest is performing dcbi because * hardware DMA is not snooped by the dcache, but * emulated DMA either goes through the dcache as * normal writes, or the host kernel has handled dcache - * coherence. 
*/ - break; - - case OP_31_XOP_LWBRX: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 0); - break; - - case OP_31_XOP_STWBRX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 4, 0); - break; - - case OP_31_XOP_LHBRX: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 0); + * coherence. + */ + emulated = EMULATE_DONE; break; - - case OP_31_XOP_STHBRX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 0); - break; - - case OP_31_XOP_LDBRX: - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 0); - break; - - case OP_31_XOP_STDBRX: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 0); - break; - - case OP_31_XOP_LDX: - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1); - break; - - case OP_31_XOP_LDUX: - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); + default: break; + } + } - case OP_31_XOP_LWAX: - emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1); - break; - case OP_31_XOP_LWAUX: - emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; + if (emulated == EMULATE_DONE) + goto out; + switch (get_op(inst)) { + case 31: + switch (get_xop(inst)) { #ifdef CONFIG_PPC_FPU case OP_31_XOP_LFSX: if (kvmppc_check_fp_disabled(vcpu)) @@ -502,10 +421,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) } break; - case OP_LWZ: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); - break; - #ifdef CONFIG_PPC_FPU case OP_STFS: if (kvmppc_check_fp_disabled(vcpu)) @@ -542,110 +457,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) 8, 1); kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); break; -#endif - - case OP_LD: - rt = get_rt(inst); - switch (inst & 3) { - case 0: /* ld */ - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1); - break; - case 1: /* ldu */ - emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - case 2: /* lwa */ - emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1); - break; - default: - emulated = EMULATE_FAIL; - } - break; - - case OP_LWZU: - emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_LBZ: - emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); - break; - case OP_LBZU: - emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_STW: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), - 4, 1); - break; - - case OP_STD: - rs = get_rs(inst); - switch (inst & 3) { - case 0: /* std */ - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 1); - break; - case 1: /* stdu */ - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - default: - emulated = EMULATE_FAIL; - } - break; - - case OP_STWU: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_STB: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 1, 1); - break; - - case OP_STBU: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 1, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_LHZ: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); - break; - - case OP_LHZU: - emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1); - 
kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_LHA: - emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); - break; - - case OP_LHAU: - emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_STH: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 1); - break; - - case OP_STHU: - emulated = kvmppc_handle_store(run, vcpu, - kvmppc_get_gpr(vcpu, rs), 2, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - -#ifdef CONFIG_PPC_FPU case OP_LFS: if (kvmppc_check_fp_disabled(vcpu)) return EMULATE_DONE; @@ -684,6 +496,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) break; } +out: if (emulated == EMULATE_FAIL) { advance = 0; kvmppc_core_queue_program(vcpu, 0);
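The rework in the patch above replaces dozens of per-opcode cases with one decoded instruction_op. As a rough stand-alone illustration (a simplified sketch with hypothetical stub handlers, not the kernel code itself), the new dispatch amounts to:

	#include <stdio.h>

	enum op_type { LOAD, STORE };
	#define SIGNEXT 0x1
	#define UPDATE  0x2

	struct instruction_op {
		enum op_type type;
		int flags;            /* stands in for the SIGNEXT/UPDATE bits of op.type */
		int reg;              /* RT for loads, RS for stores */
		int update_reg;       /* RA for update forms */
		unsigned long ea;     /* effective address computed by the decoder */
		int size;             /* GETSIZE(op.type) in the real code */
	};

	static void handle_load(int reg, int size, int sign)
	{
		printf("load  r%d, %d byte(s)%s\n", reg, size,
		       sign ? " sign-extended" : "");
	}

	static void handle_store(int reg, int size)
	{
		printf("store r%d, %d byte(s)\n", reg, size);
	}

	static void emulate(const struct instruction_op *op)
	{
		if (op->type == LOAD)
			handle_load(op->reg, op->size, op->flags & SIGNEXT);
		else
			handle_store(op->reg, op->size);

		/* update forms (lhau, stwu, ...) write the EA back into RA */
		if (op->flags & UPDATE)
			printf("r%d <- 0x%lx\n", op->update_reg, op->ea);
	}

	int main(void)
	{
		struct instruction_op lhau = { LOAD, SIGNEXT | UPDATE, 5, 4, 0x1000, 2 };

		emulate(&lhau);	/* behaves like the old OP_31_XOP_LHAUX case */
		return 0;
	}

One decode step thus covers size, sign extension, byte reversal and RA update uniformly, which is why so many switch cases could be deleted.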
From patchwork Mon May 7 06:20:12 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909532
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 06/10] KVM: PPC: add giveup_ext() hook for PPC KVM ops
Date: Mon, 7 May 2018 14:20:12 +0800
Message-Id: <1525674016-6703-7-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

Currently HV KVM saves the math regs (FP/VEC/VSX) when trapping into the host, but PR KVM only saves them when the QEMU task is switched off the CPU or when returning from QEMU code. To emulate an FP/VEC/VSX MMIO load, PR KVM needs to make sure the math regs have been flushed first, so that the saved VCPU FPR/VEC/VSX area can be updated reliably.

This patch adds a giveup_ext() field to the KVM ops structure, and PR KVM provides a non-NULL giveup_ext() implementation. kvmppc_complete_mmio_load() invokes that hook (when not NULL) to flush the math regs before updating the saved register values. The same flush is also necessary for STORE emulation, which a later patch in this series covers.

Signed-off-by: Simon Guo
---
 arch/powerpc/include/asm/kvm_ppc.h | 1 + arch/powerpc/kvm/book3s_pr.c | 1 + arch/powerpc/kvm/powerpc.c | 9 +++++++++ 3 files changed, 11 insertions(+) diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index 139cdf0..1f087c4 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -324,6 +324,7 @@ struct kvmppc_ops { int (*get_rmmu_info)(struct kvm *kvm, struct kvm_ppc_rmmu_info *info); int (*set_smt_mode)(struct kvm *kvm, unsigned long mode, unsigned long flags); + void (*giveup_ext)(struct kvm_vcpu *vcpu, ulong msr); }; extern struct kvmppc_ops *kvmppc_hv_ops; diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c index 67061d3..be26636 100644 --- a/arch/powerpc/kvm/book3s_pr.c +++ b/arch/powerpc/kvm/book3s_pr.c @@ -1782,6 +1782,7 @@ static long kvm_arch_vm_ioctl_pr(struct file *filp, #ifdef CONFIG_PPC_BOOK3S_64 .hcall_implemented = kvmppc_hcall_impl_pr, #endif + .giveup_ext = kvmppc_giveup_ext, }; diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c index 45daf3b..8ce9e7b 100644 --- a/arch/powerpc/kvm/powerpc.c +++ b/arch/powerpc/kvm/powerpc.c @@ -1061,6 +1061,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu, kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr); break; case KVM_MMIO_REG_FPR: + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP); + VCPU_FPR(vcpu, vcpu->arch.io_gpr & KVM_MMIO_REG_MASK) = gpr; break; #ifdef CONFIG_PPC_BOOK3S @@ -1074,6 +1077,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu, #endif #ifdef CONFIG_VSX case KVM_MMIO_REG_VSX: + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VSX); + if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_DWORD) kvmppc_set_vsr_dword(vcpu, gpr); else if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_WORD) @@ -1088,6 +1094,9 @@ static
void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu, #endif #ifdef CONFIG_ALTIVEC case KVM_MMIO_REG_VMX: + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VEC); + kvmppc_set_vmx_dword(vcpu, gpr); break; #endif
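The NULL-checked hook pattern this patch introduces can be illustrated stand-alone (a minimal sketch with stub types and a placeholder MSR value, not the kernel code): only PR KVM installs the callback, while HV leaves it NULL because its math regs are already saved on trap.

	#include <stdio.h>

	#define MSR_FP 0x2000	/* placeholder value for the sketch */

	struct kvm_vcpu;	/* opaque in this sketch */

	struct kvmppc_ops {
		void (*giveup_ext)(struct kvm_vcpu *vcpu, unsigned long msr);
	};

	static void pr_giveup_ext(struct kvm_vcpu *vcpu, unsigned long msr)
	{
		(void)vcpu;
		printf("PR: flushing math regs for MSR bits 0x%lx\n", msr);
	}

	static struct kvmppc_ops hv_ops = { .giveup_ext = NULL };
	static struct kvmppc_ops pr_ops = { .giveup_ext = pr_giveup_ext };

	static void complete_mmio_load(struct kvmppc_ops *ops, struct kvm_vcpu *vcpu)
	{
		if (ops->giveup_ext)	/* only PR KVM needs the flush */
			ops->giveup_ext(vcpu, MSR_FP);
		/* ... now it is safe to update the saved FPR image ... */
	}

	int main(void)
	{
		complete_mmio_load(&hv_ops, NULL);	/* no-op for HV */
		complete_mmio_load(&pr_ops, NULL);	/* flushes for PR */
		return 0;
	}

Keeping the hook optional means the common MMIO completion path stays shared between HV and PR without an explicit mode check.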
From patchwork Mon May 7 06:20:13 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909531
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 07/10] KVM: PPC: reimplement LOAD_FP/STORE_FP instruction mmio emulation with analyse_instr() input
Date: Mon, 7 May 2018 14:20:13 +0800
Message-Id: <1525674016-6703-8-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

This patch reimplements LOAD_FP/STORE_FP instruction MMIO emulation with analyse_instr() input. It uses the FPCONV/UPDATE properties exported by analyse_instr() and invokes kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.

For FP store MMIO emulation, the FP regs need to be flushed first so that the correct FP reg values can be read from vcpu->arch.fpr and stored into the MMIO data.

Suggested-by: Paul Mackerras
Signed-off-by: Simon Guo
---
 arch/powerpc/kvm/emulate_loadstore.c | 197 +++++++---------------------------- 1 file changed, 40 insertions(+), 157 deletions(-) diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c index 2a91845..5a6571c 100644 --- a/arch/powerpc/kvm/emulate_loadstore.c +++ b/arch/powerpc/kvm/emulate_loadstore.c @@ -138,6 +138,22 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) break; } +#ifdef CONFIG_PPC_FPU + case LOAD_FP: + if (kvmppc_check_fp_disabled(vcpu)) + return EMULATE_DONE; + + if (op.type & FPCONV) + vcpu->arch.mmio_sp64_extend = 1; + + emulated = kvmppc_handle_load(run, vcpu, + KVM_MMIO_REG_FPR|op.reg, size, 1); + + if ((op.type & UPDATE) && (emulated != EMULATE_FAIL)) + kvmppc_set_gpr(vcpu, op.update_reg, op.ea); + + break; +#endif case STORE: /* if need byte reverse, op.val has been reversed by * analyse_instr(). @@ -149,6 +165,30 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) kvmppc_set_gpr(vcpu, op.update_reg, op.ea); break; +#ifdef CONFIG_PPC_FPU + case STORE_FP: + if (kvmppc_check_fp_disabled(vcpu)) + return EMULATE_DONE; + + /* The FP registers need to be flushed so that + * kvmppc_handle_store() can read actual FP vals + * from vcpu->arch. + */ + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, + MSR_FP); + + if (op.type & FPCONV) + vcpu->arch.mmio_sp64_extend = 1; + + emulated = kvmppc_handle_store(run, vcpu, + VCPU_FPR(vcpu, op.reg), size, 1); + + if ((op.type & UPDATE) && (emulated != EMULATE_FAIL)) + kvmppc_set_gpr(vcpu, op.update_reg, op.ea); + + break; +#endif case CACHEOP: /* Do nothing.
The guest is performing dcbi because * hardware DMA is not snooped by the dcache, but @@ -170,93 +210,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) switch (get_op(inst)) { case 31: switch (get_xop(inst)) { -#ifdef CONFIG_PPC_FPU - case OP_31_XOP_LFSX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - break; - - case OP_31_XOP_LFSUX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_LFDX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 8, 1); - break; - - case OP_31_XOP_LFDUX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_LFIWAX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_loads(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - break; - - case OP_31_XOP_LFIWZX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - break; - - case OP_31_XOP_STFSX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 4, 1); - break; - - case OP_31_XOP_STFSUX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_STFDX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 8, 1); - break; - - case OP_31_XOP_STFDUX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_31_XOP_STFIWX: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), 4, 1); - break; -#endif - #ifdef CONFIG_VSX case OP_31_XOP_LXSDX: if (kvmppc_check_vsx_disabled(vcpu)) @@ -421,76 +374,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) } break; -#ifdef CONFIG_PPC_FPU - case OP_STFS: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), - 4, 1); - break; - - case OP_STFSU: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), - 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_STFD: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), - 8, 1); - break; - - case OP_STFDU: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_store(run, vcpu, - VCPU_FPR(vcpu, rs), - 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_LFS: - if (kvmppc_check_fp_disabled(vcpu)) - return 
EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - break; - - case OP_LFSU: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 4, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; - - case OP_LFD: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 8, 1); - break; - - case OP_LFDU: - if (kvmppc_check_fp_disabled(vcpu)) - return EMULATE_DONE; - emulated = kvmppc_handle_load(run, vcpu, - KVM_MMIO_REG_FPR|rt, 8, 1); - kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed); - break; -#endif - default: emulated = EMULATE_FAIL; break;
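The FPCONV/mmio_sp64_extend handling that this patch keys off analyse_instr() boils down to widening a 4-byte MMIO single to the 64-bit double held in the FPR. A stand-alone sketch of that conversion (illustration only, not the kernel's conversion code):

	#include <stdio.h>
	#include <string.h>
	#include <stdint.h>

	static double fpr_from_mmio_single(const uint8_t buf[4])
	{
		float f;

		memcpy(&f, buf, sizeof(f));	/* raw 4 MMIO bytes -> IEEE single */
		return (double)f;		/* widen: the FPR always holds a double */
	}

	int main(void)
	{
		float f = 1.5f;
		uint8_t buf[4];

		memcpy(buf, &f, sizeof(buf));	/* pretend these bytes arrived via MMIO */
		printf("%f\n", fpr_from_mmio_single(buf));	/* prints 1.500000 */
		return 0;
	}

For lfd-style accesses no conversion is needed, which is why the flag is only set when analyse_instr() reports FPCONV.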
From patchwork Mon May 7 06:20:14 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909533
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 08/10] KVM: PPC: reimplement LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_instr() input
Date: Mon, 7 May 2018 14:20:14 +0800
Message-Id: <1525674016-6703-9-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

This patch reimplements LOAD_VSX/STORE_VSX instruction MMIO emulation with analyse_instr() input. It uses the VSX_FPCONV/VSX_SPLAT/SIGNEXT properties exported by analyse_instr() and handles each accordingly.

When emulating a VSX store, the VSX regs need to be flushed first so that the correct reg values can be retrieved before writing them to the MMIO memory.

Suggested-by: Paul Mackerras
Signed-off-by: Simon Guo
---
 arch/powerpc/kvm/emulate_loadstore.c | 227 ++++++++++++++--------------------- 1 file changed, 91 insertions(+), 136 deletions(-) diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c index 5a6571c..28e97c5 100644 --- a/arch/powerpc/kvm/emulate_loadstore.c +++ b/arch/powerpc/kvm/emulate_loadstore.c @@ -154,6 +154,54 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) break; #endif +#ifdef CONFIG_VSX + case LOAD_VSX: { + int io_size_each; + + if (op.vsx_flags & VSX_CHECK_VEC) { + if (kvmppc_check_altivec_disabled(vcpu)) + return EMULATE_DONE; + } else { + if (kvmppc_check_vsx_disabled(vcpu)) + return EMULATE_DONE; + } + + if (op.vsx_flags & VSX_FPCONV) + vcpu->arch.mmio_sp64_extend = 1; + + if (op.element_size == 8) { + if (op.vsx_flags & VSX_SPLAT) + vcpu->arch.mmio_vsx_copy_type = + KVMPPC_VSX_COPY_DWORD_LOAD_DUMP; + else + vcpu->arch.mmio_vsx_copy_type = + KVMPPC_VSX_COPY_DWORD; + } else if (op.element_size == 4) { + if (op.vsx_flags & VSX_SPLAT) + vcpu->arch.mmio_vsx_copy_type = + KVMPPC_VSX_COPY_WORD_LOAD_DUMP; + else + vcpu->arch.mmio_vsx_copy_type = + KVMPPC_VSX_COPY_WORD; + } else + break; + + if (size < op.element_size) { + /* precision convert case: lxsspx, etc */ + vcpu->arch.mmio_vsx_copy_nums = 1; + io_size_each = size; + } else { /* lxvw4x, lxvd2x, etc */ + vcpu->arch.mmio_vsx_copy_nums = + size/op.element_size; + io_size_each = op.element_size; + } + + emulated = kvmppc_handle_vsx_load(run, vcpu, + KVM_MMIO_REG_VSX|op.reg, io_size_each, + 1, op.type & SIGNEXT); + break; + } +#endif case STORE: /* if need byte reverse, op.val has been reversed by * analyse_instr().
@@ -189,6 +237,49 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) break; #endif +#ifdef CONFIG_VSX + case STORE_VSX: { + int io_size_each; + + if (op.vsx_flags & VSX_CHECK_VEC) { + if (kvmppc_check_altivec_disabled(vcpu)) + return EMULATE_DONE; + } else { + if (kvmppc_check_vsx_disabled(vcpu)) + return EMULATE_DONE; + } + + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, + MSR_VSX); + + if (op.vsx_flags & VSX_FPCONV) + vcpu->arch.mmio_sp64_extend = 1; + + if (op.element_size == 8) + vcpu->arch.mmio_vsx_copy_type = + KVMPPC_VSX_COPY_DWORD; + else if (op.element_size == 4) + vcpu->arch.mmio_vsx_copy_type = + KVMPPC_VSX_COPY_WORD; + else + break; + + if (size < op.element_size) { + /* precise conversion case, like stxsspx */ + vcpu->arch.mmio_vsx_copy_nums = 1; + io_size_each = size; + } else { /* stxvw4x, stxvd2x, etc */ + vcpu->arch.mmio_vsx_copy_nums = + size/op.element_size; + io_size_each = op.element_size; + } + + emulated = kvmppc_handle_vsx_store(run, vcpu, + op.reg, io_size_each, 1); + break; + } +#endif case CACHEOP: /* Do nothing. The guest is performing dcbi because * hardware DMA is not snooped by the dcache, but @@ -210,142 +301,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) switch (get_op(inst)) { case 31: switch (get_xop(inst)) { -#ifdef CONFIG_VSX - case OP_31_XOP_LXSDX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 8, 1, 0); - break; - - case OP_31_XOP_LXSSPX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 4, 1, 0); - break; - - case OP_31_XOP_LXSIWAX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 4, 1, 1); - break; - - case OP_31_XOP_LXSIWZX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 4, 1, 0); - break; - - case OP_31_XOP_LXVD2X: - /* - * In this case, the official load/store process is like this: - * Step1, exit from vm by page fault isr, then kvm save vsr. - * Please see guest_exit_cont->store_fp_state->SAVE_32VSRS - * as reference. - * - * Step2, copy data between memory and VCPU - * Notice: for LXVD2X/STXVD2X/LXVW4X/STXVW4X, we use - * 2copies*8bytes or 4copies*4bytes - * to simulate one copy of 16bytes. - * Also there is an endian issue here, we should notice the - * layout of memory. - * Please see MARCO of LXVD2X_ROT/STXVD2X_ROT as more reference. - * If host is little-endian, kvm will call XXSWAPD for - * LXVD2X_ROT/STXVD2X_ROT. - * So, if host is little-endian, - * the postion of memeory should be swapped. - * - * Step3, return to guest, kvm reset register. - * Please see kvmppc_hv_entry->load_fp_state->REST_32VSRS - * as reference. 
- */ - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 2; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 8, 1, 0); - break; - - case OP_31_XOP_LXVW4X: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 4; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 4, 1, 0); - break; - - case OP_31_XOP_LXVDSX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = - KVMPPC_VSX_COPY_DWORD_LOAD_DUMP; - emulated = kvmppc_handle_vsx_load(run, vcpu, - KVM_MMIO_REG_VSX|rt, 8, 1, 0); - break; - - case OP_31_XOP_STXSDX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 8, 1); - break; - - case OP_31_XOP_STXSSPX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - vcpu->arch.mmio_sp64_extend = 1; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 4, 1); - break; - - case OP_31_XOP_STXSIWX: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_offset = 1; - vcpu->arch.mmio_vsx_copy_nums = 1; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 4, 1); - break; - - case OP_31_XOP_STXVD2X: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 2; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 8, 1); - break; - - case OP_31_XOP_STXVW4X: - if (kvmppc_check_vsx_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.mmio_vsx_copy_nums = 4; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD; - emulated = kvmppc_handle_vsx_store(run, vcpu, - rs, 4, 1); - break; -#endif /* CONFIG_VSX */ - #ifdef CONFIG_ALTIVEC case OP_31_XOP_LVX: if (kvmppc_check_altivec_disabled(vcpu))
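The element-size bookkeeping in the LOAD_VSX/STORE_VSX cases of this patch follows one rule: a full-width access is split into size/element_size copies of element_size bytes, while a conversion form (access size smaller than the element, as in lxsspx) is a single copy of the access size. A stand-alone sketch of that computation (illustration only):

	#include <stdio.h>

	static void vsx_copy_plan(int size, int element_size,
				  int *copy_nums, int *io_size_each)
	{
		if (size < element_size) {	/* precision-convert case */
			*copy_nums = 1;
			*io_size_each = size;
		} else {			/* lxvd2x, lxvw4x, ... */
			*copy_nums = size / element_size;
			*io_size_each = element_size;
		}
	}

	int main(void)
	{
		int n, each;

		vsx_copy_plan(16, 8, &n, &each);	/* lxvd2x-like: 2 copies of 8 bytes */
		printf("%d x %d\n", n, each);
		vsx_copy_plan(4, 8, &n, &each);		/* lxsspx-like: 1 copy of 4 bytes */
		printf("%d x %d\n", n, each);
		return 0;
	}

This is how a single 16-byte vector access gets emulated as several smaller MMIO transactions while the copy type chosen above tells the completion path how to place each piece into the register.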
From patchwork Mon May 7 06:20:15 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909534
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 09/10] KVM: PPC: expand mmio_vsx_copy_type to mmio_copy_type to cover VMX load/store elem types
Date: Mon, 7 May 2018 14:20:15 +0800
Message-Id: <1525674016-6703-10-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

VSX MMIO emulation uses mmio_vsx_copy_type to represent the emulated element size/type, such as KVMPPC_VSX_COPY_DWORD_LOAD, etc. This patch expands it to also cover the VMX copy types, such as KVMPPC_VMX_COPY_BYTE (stvebx/lvebx), etc. As a result, mmio_vsx_copy_type is renamed to mmio_copy_type. This prepares for the reimplementation of VMX MMIO emulation.
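For illustration, the unified field can then discriminate the two families by value range (a minimal sketch using the VMX constants this patch defines; the VSX constants keep their existing, lower values):

	#include <stdio.h>

	#define KVMPPC_VMX_COPY_BYTE  8
	#define KVMPPC_VMX_COPY_HWORD 9
	#define KVMPPC_VMX_COPY_WORD  10
	#define KVMPPC_VMX_COPY_DWORD 11

	/* One u8 now carries both the VSX and the VMX copy types. */
	static int is_vmx_copy_type(unsigned char mmio_copy_type)
	{
		return mmio_copy_type >= KVMPPC_VMX_COPY_BYTE &&
		       mmio_copy_type <= KVMPPC_VMX_COPY_DWORD;
	}

	int main(void)
	{
		printf("%d\n", is_vmx_copy_type(KVMPPC_VMX_COPY_WORD));	/* 1 */
		printf("%d\n", is_vmx_copy_type(3));	/* 0: a VSX type (DWORD_LOAD_DUMP) */
		return 0;
	}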
Signed-off-by: Simon Guo --- arch/powerpc/include/asm/kvm_host.h | 9 +++++++-- arch/powerpc/kvm/emulate_loadstore.c | 14 +++++++------- arch/powerpc/kvm/powerpc.c | 10 +++++----- 3 files changed, 19 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 3fb5e8d..2c4382f 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -456,6 +456,11 @@ struct mmio_hpte_cache { #define KVMPPC_VSX_COPY_DWORD_LOAD_DUMP 3 #define KVMPPC_VSX_COPY_WORD_LOAD_DUMP 4 +#define KVMPPC_VMX_COPY_BYTE 8 +#define KVMPPC_VMX_COPY_HWORD 9 +#define KVMPPC_VMX_COPY_WORD 10 +#define KVMPPC_VMX_COPY_DWORD 11 + struct openpic; /* W0 and W1 of a XIVE thread management context */ @@ -678,16 +683,16 @@ struct kvm_vcpu_arch { * Number of simulations for vsx. * If we use 2*8bytes to simulate 1*16bytes, * then the number should be 2 and - * mmio_vsx_copy_type=KVMPPC_VSX_COPY_DWORD. + * mmio_copy_type=KVMPPC_VSX_COPY_DWORD. * If we use 4*4bytes to simulate 1*16bytes, * the number should be 4 and * mmio_vsx_copy_type=KVMPPC_VSX_COPY_WORD. */ u8 mmio_vsx_copy_nums; u8 mmio_vsx_offset; - u8 mmio_vsx_copy_type; u8 mmio_vsx_tx_sx_enabled; u8 mmio_vmx_copy_nums; + u8 mmio_copy_type; u8 osi_needed; u8 osi_enabled; u8 papr_enabled; diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c index 28e97c5..02304ca 100644 --- a/arch/powerpc/kvm/emulate_loadstore.c +++ b/arch/powerpc/kvm/emulate_loadstore.c @@ -109,7 +109,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) vcpu->arch.mmio_vsx_tx_sx_enabled = get_tx_or_sx(inst); vcpu->arch.mmio_vsx_copy_nums = 0; vcpu->arch.mmio_vsx_offset = 0; - vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_NONE; + vcpu->arch.mmio_copy_type = KVMPPC_VSX_COPY_NONE; vcpu->arch.mmio_sp64_extend = 0; vcpu->arch.mmio_sign_extend = 0; vcpu->arch.mmio_vmx_copy_nums = 0; @@ -171,17 +171,17 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) if (op.element_size == 8) { if (op.vsx_flags & VSX_SPLAT) - vcpu->arch.mmio_vsx_copy_type = + vcpu->arch.mmio_copy_type = KVMPPC_VSX_COPY_DWORD_LOAD_DUMP; else - vcpu->arch.mmio_vsx_copy_type = + vcpu->arch.mmio_copy_type = KVMPPC_VSX_COPY_DWORD; } else if (op.element_size == 4) { if (op.vsx_flags & VSX_SPLAT) - vcpu->arch.mmio_vsx_copy_type = + vcpu->arch.mmio_copy_type = KVMPPC_VSX_COPY_WORD_LOAD_DUMP; else - vcpu->arch.mmio_vsx_copy_type = + vcpu->arch.mmio_copy_type = KVMPPC_VSX_COPY_WORD; } else break; @@ -257,10 +257,10 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) vcpu->arch.mmio_sp64_extend = 1; if (op.element_size == 8) - vcpu->arch.mmio_vsx_copy_type = + vcpu->arch.mmio_copy_type = KVMPPC_VSX_COPY_DWORD; else if (op.element_size == 4) - vcpu->arch.mmio_vsx_copy_type = + vcpu->arch.mmio_copy_type = KVMPPC_VSX_COPY_WORD; else break; diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c index 8ce9e7b..1580bd2 100644 --- a/arch/powerpc/kvm/powerpc.c +++ b/arch/powerpc/kvm/powerpc.c @@ -1080,14 +1080,14 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu, if (vcpu->kvm->arch.kvm_ops->giveup_ext) vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VSX); - if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_DWORD) + if (vcpu->arch.mmio_copy_type == KVMPPC_VSX_COPY_DWORD) kvmppc_set_vsr_dword(vcpu, gpr); - else if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_WORD) + else if (vcpu->arch.mmio_copy_type == KVMPPC_VSX_COPY_WORD) kvmppc_set_vsr_word(vcpu, gpr); - else if (vcpu->arch.mmio_vsx_copy_type 
== + else if (vcpu->arch.mmio_copy_type == + KVMPPC_VSX_COPY_DWORD_LOAD_DUMP) kvmppc_set_vsr_dword_dump(vcpu, gpr); - else if (vcpu->arch.mmio_vsx_copy_type == + else if (vcpu->arch.mmio_copy_type == KVMPPC_VSX_COPY_WORD_LOAD_DUMP) kvmppc_set_vsr_word_dump(vcpu, gpr); break; @@ -1260,7 +1260,7 @@ static inline int kvmppc_get_vsr_data(struct kvm_vcpu *vcpu, int rs, u64 *val) u32 dword_offset, word_offset; union kvmppc_one_reg reg; int vsx_offset = 0; - int copy_type = vcpu->arch.mmio_vsx_copy_type; + int copy_type = vcpu->arch.mmio_copy_type; int result = 0; switch (copy_type) {
From patchwork Mon May 7 06:20:16 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909535
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Simon Guo
Subject: [PATCH v2 10/10] KVM: PPC: reimplement LOAD_VMX/STORE_VMX instruction mmio emulation with analyse_instr() input
Date: Mon, 7 May 2018 14:20:16 +0800
Message-Id: <1525674016-6703-11-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

This patch reimplements LOAD_VMX/STORE_VMX MMIO emulation with analyse_instr() input. When emulating a store, the VMX reg needs to be flushed first so that the correct reg value can be retrieved before writing to the MMIO memory.

This patch also adds MMIO emulation support for lvebx/lvehx/lvewx/stvebx/stvehx/stvewx. To meet the requirement of handling different element sizes, kvmppc_handle_load128_by2x64()/kvmppc_handle_store128_by2x64() are replaced with kvmppc_handle_vmx_load()/kvmppc_handle_vmx_store(). The framework is similar to the one used for VSX instruction MMIO emulation.

Suggested-by: Paul Mackerras
Signed-off-by: Simon Guo
---
 arch/powerpc/include/asm/kvm_host.h | 1 + arch/powerpc/include/asm/kvm_ppc.h | 10 +- arch/powerpc/kvm/emulate_loadstore.c | 124 +++++++++++------ arch/powerpc/kvm/powerpc.c | 259 ++++++++++++++++++++++++++++------- 4 files changed, 302 insertions(+), 92 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 2c4382f..5ab660d 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -692,6 +692,7 @@ struct kvm_vcpu_arch { u8 mmio_vsx_offset; u8 mmio_vsx_tx_sx_enabled; u8 mmio_vmx_copy_nums; + u8 mmio_vmx_offset; u8 mmio_copy_type; u8 osi_needed; u8 osi_enabled; diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index 1f087c4..e991821 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -81,10 +81,10 @@ extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu, extern int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu, unsigned int rt, unsigned int bytes, int is_default_endian, int mmio_sign_extend); -extern int kvmppc_handle_load128_by2x64(struct kvm_run *run, - struct kvm_vcpu *vcpu, unsigned int rt, int is_default_endian); -extern int kvmppc_handle_store128_by2x64(struct kvm_run *run, - struct kvm_vcpu *vcpu, unsigned int rs, int is_default_endian); +extern int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu, + unsigned int rt, unsigned int bytes, int is_default_endian); +extern int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu, + unsigned int rs, unsigned int bytes, int is_default_endian); extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu, u64 val, unsigned int bytes, int is_default_endian); @@ -265,6 +265,8 @@ extern int kvmppc_xics_get_xive(struct kvm *kvm, u32 irq, u32 *server, vector128 vval; u64 vsxval[2]; u32 vsx32val[4]; + u16 vsx16val[8]; + u8 vsx8val[16]; struct { u64 addr; u64 length; diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c index 02304ca..459f8fe 100644 --- a/arch/powerpc/kvm/emulate_loadstore.c +++
b/arch/powerpc/kvm/emulate_loadstore.c @@ -113,6 +113,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) vcpu->arch.mmio_sp64_extend = 0; vcpu->arch.mmio_sign_extend = 0; vcpu->arch.mmio_vmx_copy_nums = 0; + vcpu->arch.mmio_vmx_offset = 0; vcpu->arch.mmio_host_swabbed = 0; emulated = EMULATE_FAIL; @@ -154,6 +155,46 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) break; #endif +#ifdef CONFIG_ALTIVEC + case LOAD_VMX: + if (kvmppc_check_altivec_disabled(vcpu)) + return EMULATE_DONE; + + /* Hardware enforces alignment of VMX accesses */ + vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1); + vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1); + + if (size == 16) { /* lvx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_DWORD; + } else if (size == 4) { /* lvewx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_WORD; + } else if (size == 2) { /* lvehx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_HWORD; + } else if (size == 1) { /* lvebx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_BYTE; + } else + break; + + vcpu->arch.mmio_vmx_offset = + (vcpu->arch.vaddr_accessed & 0xf)/size; + + if (size == 16) { + vcpu->arch.mmio_vmx_copy_nums = 2; + emulated = kvmppc_handle_vmx_load(run, + vcpu, KVM_MMIO_REG_VMX|op.reg, + 8, 1); + } else { + vcpu->arch.mmio_vmx_copy_nums = 1; + emulated = kvmppc_handle_vmx_load(run, vcpu, + KVM_MMIO_REG_VMX|op.reg, + size, 1); + } + break; +#endif #ifdef CONFIG_VSX case LOAD_VSX: { int io_size_each; @@ -237,6 +278,48 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) break; #endif +#ifdef CONFIG_ALTIVEC + case STORE_VMX: + if (kvmppc_check_altivec_disabled(vcpu)) + return EMULATE_DONE; + + /* Hardware enforces alignment of VMX accesses. */ + vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1); + vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1); + + if (vcpu->kvm->arch.kvm_ops->giveup_ext) + vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, + MSR_VEC); + if (size == 16) { /* stvx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_DWORD; + } else if (size == 4) { /* stvewx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_WORD; + } else if (size == 2) { /* stvehx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_HWORD; + } else if (size == 1) { /* stvebx */ + vcpu->arch.mmio_copy_type = + KVMPPC_VMX_COPY_BYTE; + } else + break; + + vcpu->arch.mmio_vmx_offset = + (vcpu->arch.vaddr_accessed & 0xf)/size; + + if (size == 16) { + vcpu->arch.mmio_vmx_copy_nums = 2; + emulated = kvmppc_handle_vmx_store(run, + vcpu, op.reg, 8, 1); + } else { + vcpu->arch.mmio_vmx_copy_nums = 1; + emulated = kvmppc_handle_vmx_store(run, + vcpu, op.reg, size, 1); + } + + break; +#endif #ifdef CONFIG_VSX case STORE_VSX: { int io_size_each; @@ -294,47 +377,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu) } } - - if (emulated == EMULATE_DONE) - goto out; - - switch (get_op(inst)) { - case 31: - switch (get_xop(inst)) { -#ifdef CONFIG_ALTIVEC - case OP_31_XOP_LVX: - if (kvmppc_check_altivec_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.vaddr_accessed &= ~0xFULL; - vcpu->arch.paddr_accessed &= ~0xFULL; - vcpu->arch.mmio_vmx_copy_nums = 2; - emulated = kvmppc_handle_load128_by2x64(run, vcpu, - KVM_MMIO_REG_VMX|rt, 1); - break; - - case OP_31_XOP_STVX: - if (kvmppc_check_altivec_disabled(vcpu)) - return EMULATE_DONE; - vcpu->arch.vaddr_accessed &= ~0xFULL; - vcpu->arch.paddr_accessed &= ~0xFULL; - vcpu->arch.mmio_vmx_copy_nums = 2; - emulated = kvmppc_handle_store128_by2x64(run, vcpu, - rs, 1); - break; -#endif /* CONFIG_ALTIVEC */ - - default: - 
emulated = EMULATE_FAIL; - break; - } - break; - - default: - emulated = EMULATE_FAIL; - break; - } - -out: if (emulated == EMULATE_FAIL) { advance = 0; kvmppc_core_queue_program(vcpu, 0); diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c index 1580bd2..05eccdc 100644 --- a/arch/powerpc/kvm/powerpc.c +++ b/arch/powerpc/kvm/powerpc.c @@ -953,30 +953,110 @@ static inline void kvmppc_set_vsr_word(struct kvm_vcpu *vcpu, #endif /* CONFIG_VSX */ #ifdef CONFIG_ALTIVEC +static inline int kvmppc_get_vmx_offset_generic(struct kvm_vcpu *vcpu, + int index, int element_size) +{ + int offset; + int elts = sizeof(vector128)/element_size; + + if ((index < 0) || (index >= elts)) + return -1; + + if (kvmppc_need_byteswap(vcpu)) + offset = elts - index - 1; + else + offset = index; + + return offset; +} + +static inline int kvmppc_get_vmx_dword_offset(struct kvm_vcpu *vcpu, + int index) +{ + return kvmppc_get_vmx_offset_generic(vcpu, index, 8); +} + +static inline int kvmppc_get_vmx_word_offset(struct kvm_vcpu *vcpu, + int index) +{ + return kvmppc_get_vmx_offset_generic(vcpu, index, 4); +} + +static inline int kvmppc_get_vmx_hword_offset(struct kvm_vcpu *vcpu, + int index) +{ + return kvmppc_get_vmx_offset_generic(vcpu, index, 2); +} + +static inline int kvmppc_get_vmx_byte_offset(struct kvm_vcpu *vcpu, + int index) +{ + return kvmppc_get_vmx_offset_generic(vcpu, index, 1); +} + + static inline void kvmppc_set_vmx_dword(struct kvm_vcpu *vcpu, - u64 gpr) + u64 gpr) { + union kvmppc_one_reg val; + int offset = kvmppc_get_vmx_dword_offset(vcpu, + vcpu->arch.mmio_vmx_offset); int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK; - u32 hi, lo; - u32 di; -#ifdef __BIG_ENDIAN - hi = gpr >> 32; - lo = gpr & 0xffffffff; -#else - lo = gpr >> 32; - hi = gpr & 0xffffffff; -#endif + if (offset == -1) + return; + + val.vval = VCPU_VSX_VR(vcpu, index); + val.vsxval[offset] = gpr; + VCPU_VSX_VR(vcpu, index) = val.vval; +} + +static inline void kvmppc_set_vmx_word(struct kvm_vcpu *vcpu, + u32 gpr32) +{ + union kvmppc_one_reg val; + int offset = kvmppc_get_vmx_word_offset(vcpu, + vcpu->arch.mmio_vmx_offset); + int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK; + + if (offset == -1) + return; + + val.vval = VCPU_VSX_VR(vcpu, index); + val.vsx32val[offset] = gpr32; + VCPU_VSX_VR(vcpu, index) = val.vval; +} - di = 2 - vcpu->arch.mmio_vmx_copy_nums; /* doubleword index */ - if (di > 1) +static inline void kvmppc_set_vmx_hword(struct kvm_vcpu *vcpu, + u16 gpr16) +{ + union kvmppc_one_reg val; + int offset = kvmppc_get_vmx_hword_offset(vcpu, + vcpu->arch.mmio_vmx_offset); + int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK; + + if (offset == -1) return; - if (vcpu->arch.mmio_host_swabbed) - di = 1 - di; + val.vval = VCPU_VSX_VR(vcpu, index); + val.vsx16val[offset] = gpr16; + VCPU_VSX_VR(vcpu, index) = val.vval; +} + +static inline void kvmppc_set_vmx_byte(struct kvm_vcpu *vcpu, + u8 gpr8) +{ + union kvmppc_one_reg val; + int offset = kvmppc_get_vmx_byte_offset(vcpu, + vcpu->arch.mmio_vmx_offset); + int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK; - VCPU_VSX_VR(vcpu, index).u[di * 2] = hi; - VCPU_VSX_VR(vcpu, index).u[di * 2 + 1] = lo; + if (offset == -1) + return; + + val.vval = VCPU_VSX_VR(vcpu, index); + val.vsx8val[offset] = gpr8; + VCPU_VSX_VR(vcpu, index) = val.vval; } #endif /* CONFIG_ALTIVEC */ @@ -1097,7 +1177,16 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu, if (vcpu->kvm->arch.kvm_ops->giveup_ext) vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VEC); - kvmppc_set_vmx_dword(vcpu, gpr); + 
if (vcpu->arch.mmio_copy_type == KVMPPC_VMX_COPY_DWORD) + kvmppc_set_vmx_dword(vcpu, gpr); + else if (vcpu->arch.mmio_copy_type == KVMPPC_VMX_COPY_WORD) + kvmppc_set_vmx_word(vcpu, gpr); + else if (vcpu->arch.mmio_copy_type == + KVMPPC_VMX_COPY_HWORD) + kvmppc_set_vmx_hword(vcpu, gpr); + else if (vcpu->arch.mmio_copy_type == + KVMPPC_VMX_COPY_BYTE) + kvmppc_set_vmx_byte(vcpu, gpr); break; #endif default: @@ -1376,14 +1465,16 @@ static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu, #endif /* CONFIG_VSX */ #ifdef CONFIG_ALTIVEC -/* handle quadword load access in two halves */ -int kvmppc_handle_load128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu, - unsigned int rt, int is_default_endian) +int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu, + unsigned int rt, unsigned int bytes, int is_default_endian) { enum emulation_result emulated = EMULATE_DONE; + if (vcpu->arch.mmio_vsx_copy_nums > 2) + return EMULATE_FAIL; + while (vcpu->arch.mmio_vmx_copy_nums) { - emulated = __kvmppc_handle_load(run, vcpu, rt, 8, + emulated = __kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian, 0); if (emulated != EMULATE_DONE) @@ -1391,55 +1482,127 @@ int kvmppc_handle_load128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu, vcpu->arch.paddr_accessed += run->mmio.len; vcpu->arch.mmio_vmx_copy_nums--; + vcpu->arch.mmio_vmx_offset++; } return emulated; } -static inline int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val) +int kvmppc_get_vmx_dword(struct kvm_vcpu *vcpu, int index, u64 *val) { - vector128 vrs = VCPU_VSX_VR(vcpu, rs); - u32 di; - u64 w0, w1; + union kvmppc_one_reg reg; + int vmx_offset = 0; + int result = 0; + + vmx_offset = + kvmppc_get_vmx_dword_offset(vcpu, vcpu->arch.mmio_vmx_offset); - di = 2 - vcpu->arch.mmio_vmx_copy_nums; /* doubleword index */ - if (di > 1) + if (vmx_offset == -1) return -1; - if (kvmppc_need_byteswap(vcpu)) - di = 1 - di; + reg.vval = VCPU_VSX_VR(vcpu, index); + *val = reg.vsxval[vmx_offset]; - w0 = vrs.u[di * 2]; - w1 = vrs.u[di * 2 + 1]; + return result; +} -#ifdef __BIG_ENDIAN - *val = (w0 << 32) | w1; -#else - *val = (w1 << 32) | w0; -#endif - return 0; +int kvmppc_get_vmx_word(struct kvm_vcpu *vcpu, int index, u64 *val) +{ + union kvmppc_one_reg reg; + int vmx_offset = 0; + int result = 0; + + vmx_offset = + kvmppc_get_vmx_word_offset(vcpu, vcpu->arch.mmio_vmx_offset); + + if (vmx_offset == -1) + return -1; + + reg.vval = VCPU_VSX_VR(vcpu, index); + *val = reg.vsx32val[vmx_offset]; + + return result; +} + +int kvmppc_get_vmx_hword(struct kvm_vcpu *vcpu, int index, u64 *val) +{ + union kvmppc_one_reg reg; + int vmx_offset = 0; + int result = 0; + + vmx_offset = + kvmppc_get_vmx_hword_offset(vcpu, vcpu->arch.mmio_vmx_offset); + + if (vmx_offset == -1) + return -1; + + reg.vval = VCPU_VSX_VR(vcpu, index); + *val = reg.vsx16val[vmx_offset]; + + return result; +} + +int kvmppc_get_vmx_byte(struct kvm_vcpu *vcpu, int index, u64 *val) +{ + union kvmppc_one_reg reg; + int vmx_offset = 0; + int result = 0; + + vmx_offset = + kvmppc_get_vmx_byte_offset(vcpu, vcpu->arch.mmio_vmx_offset); + + if (vmx_offset == -1) + return -1; + + reg.vval = VCPU_VSX_VR(vcpu, index); + *val = reg.vsx8val[vmx_offset]; + + return result; } -/* handle quadword store in two halves */ -int kvmppc_handle_store128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu, - unsigned int rs, int is_default_endian) +int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu, + unsigned int rs, unsigned int bytes, int is_default_endian) { u64 val 
= 0; + unsigned int index = rs & KVM_MMIO_REG_MASK; enum emulation_result emulated = EMULATE_DONE; + if (vcpu->arch.mmio_vsx_copy_nums > 2) + return EMULATE_FAIL; + vcpu->arch.io_gpr = rs; while (vcpu->arch.mmio_vmx_copy_nums) { - if (kvmppc_get_vmx_data(vcpu, rs, &val) == -1) + switch (vcpu->arch.mmio_copy_type) { + case KVMPPC_VMX_COPY_DWORD: + if (kvmppc_get_vmx_dword(vcpu, index, &val) == -1) + return EMULATE_FAIL; + + break; + case KVMPPC_VMX_COPY_WORD: + if (kvmppc_get_vmx_word(vcpu, index, &val) == -1) + return EMULATE_FAIL; + break; + case KVMPPC_VMX_COPY_HWORD: + if (kvmppc_get_vmx_hword(vcpu, index, &val) == -1) + return EMULATE_FAIL; + break; + case KVMPPC_VMX_COPY_BYTE: + if (kvmppc_get_vmx_byte(vcpu, index, &val) == -1) + return EMULATE_FAIL; + break; + default: return EMULATE_FAIL; + } - emulated = kvmppc_handle_store(run, vcpu, val, 8, + emulated = kvmppc_handle_store(run, vcpu, val, bytes, is_default_endian); if (emulated != EMULATE_DONE) break; vcpu->arch.paddr_accessed += run->mmio.len; vcpu->arch.mmio_vmx_copy_nums--; + vcpu->arch.mmio_vmx_offset++; } return emulated; @@ -1454,11 +1617,11 @@ static int kvmppc_emulate_mmio_vmx_loadstore(struct kvm_vcpu *vcpu, vcpu->arch.paddr_accessed += run->mmio.len; if (!vcpu->mmio_is_write) { - emulated = kvmppc_handle_load128_by2x64(run, vcpu, - vcpu->arch.io_gpr, 1); + emulated = kvmppc_handle_vmx_load(run, vcpu, + vcpu->arch.io_gpr, run->mmio.len, 1); } else { - emulated = kvmppc_handle_store128_by2x64(run, vcpu, - vcpu->arch.io_gpr, 1); + emulated = kvmppc_handle_vmx_store(run, vcpu, + vcpu->arch.io_gpr, run->mmio.len, 1); } switch (emulated) { @@ -1602,8 +1765,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) } #endif #ifdef CONFIG_ALTIVEC - if (vcpu->arch.mmio_vmx_copy_nums > 0) + if (vcpu->arch.mmio_vmx_copy_nums > 0) { vcpu->arch.mmio_vmx_copy_nums--; + vcpu->arch.mmio_vmx_offset++; + } if (vcpu->arch.mmio_vmx_copy_nums > 0) { r = kvmppc_emulate_mmio_vmx_loadstore(vcpu, run);
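The per-element bookkeeping that replaces the old two-halves scheme can be modeled stand-alone: the element index inside the 16-byte VR is derived from the effective address, and big- and little-endian hosts index the vector from opposite ends. The sketch below mirrors kvmppc_get_vmx_offset_generic() in spirit only, not the kernel code. One detail worth noting: the guards at the top of kvmppc_handle_vmx_load()/kvmppc_handle_vmx_store() above test mmio_vsx_copy_nums, while the VMX path tracks its count in mmio_vmx_copy_nums, so the field name there looks like an oversight.

	#include <stdio.h>

	static int vmx_elem_offset(unsigned long vaddr, int size, int need_byteswap)
	{
		int index = (vaddr & 0xf) / size;	/* mmio_vmx_offset in the patch */
		int elts = 16 / size;			/* elements per vector128 */

		if (index < 0 || index >= elts)
			return -1;
		/* LE hosts index the vector from the opposite end */
		return need_byteswap ? elts - index - 1 : index;
	}

	int main(void)
	{
		/* lvewx at offset 0xc within a quadword: word element 3 (BE view) */
		printf("%d\n", vmx_elem_offset(0x100c, 4, 0));	/* 3 */
		printf("%d\n", vmx_elem_offset(0x100c, 4, 1));	/* 0 on an LE host */
		return 0;
	}

Because the offset is recomputed per element and advanced alongside mmio_vmx_copy_nums after each MMIO transaction, the same loop serves 1-, 2-, 4-, 8- and 16-byte VMX accesses.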