From patchwork Mon May  7 06:20:14 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 909533
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 Simon Guo
Subject: [PATCH v2 08/10] KVM: PPC: reimplement LOAD_VSX/STORE_VSX
 instruction MMIO emulation with analyse_instr() input
Date: Mon,  7 May 2018 14:20:14 +0800
Message-Id: <1525674016-6703-9-git-send-email-wei.guo.simon@gmail.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
References: <1525674016-6703-1-git-send-email-wei.guo.simon@gmail.com>
From: Simon Guo

This patch reimplements LOAD_VSX/STORE_VSX instruction MMIO emulation
with analyse_instr() input. It utilizes the VSX_FPCONV/VSX_SPLAT/SIGNEXT
flags exported by analyse_instr() and handles them accordingly.

When emulating a VSX store, the VSX register needs to be flushed first
so that the correct register value can be retrieved before it is written
to I/O memory.

Suggested-by: Paul Mackerras
Signed-off-by: Simon Guo
---
An illustrative user-space sketch of the copy-type selection is appended
after the diff.

 arch/powerpc/kvm/emulate_loadstore.c | 227 ++++++++++++++---------------------
 1 file changed, 91 insertions(+), 136 deletions(-)

diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 5a6571c..28e97c5 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -154,6 +154,54 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		break;
 #endif
 
+#ifdef CONFIG_VSX
+	case LOAD_VSX: {
+		int io_size_each;
+
+		if (op.vsx_flags & VSX_CHECK_VEC) {
+			if (kvmppc_check_altivec_disabled(vcpu))
+				return EMULATE_DONE;
+		} else {
+			if (kvmppc_check_vsx_disabled(vcpu))
+				return EMULATE_DONE;
+		}
+
+		if (op.vsx_flags & VSX_FPCONV)
+			vcpu->arch.mmio_sp64_extend = 1;
+
+		if (op.element_size == 8) {
+			if (op.vsx_flags & VSX_SPLAT)
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
+			else
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD;
+		} else if (op.element_size == 4) {
+			if (op.vsx_flags & VSX_SPLAT)
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD_LOAD_DUMP;
+			else
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD;
+		} else
+			break;
+
+		if (size < op.element_size) {
+			/* precision convert case: lxsspx, etc */
+			vcpu->arch.mmio_vsx_copy_nums = 1;
+			io_size_each = size;
+		} else { /* lxvw4x, lxvd2x, etc */
+			vcpu->arch.mmio_vsx_copy_nums =
+					size/op.element_size;
+			io_size_each = op.element_size;
+		}
+
+		emulated = kvmppc_handle_vsx_load(run, vcpu,
+				KVM_MMIO_REG_VSX|op.reg, io_size_each,
+				1, op.type & SIGNEXT);
+		break;
+	}
+#endif
 	case STORE:
 		/* if need byte reverse, op.val has been reversed by
 		 * analyse_instr().
@@ -189,6 +237,49 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		break;
 #endif
 
+#ifdef CONFIG_VSX
+	case STORE_VSX: {
+		int io_size_each;
+
+		if (op.vsx_flags & VSX_CHECK_VEC) {
+			if (kvmppc_check_altivec_disabled(vcpu))
+				return EMULATE_DONE;
+		} else {
+			if (kvmppc_check_vsx_disabled(vcpu))
+				return EMULATE_DONE;
+		}
+
+		if (vcpu->kvm->arch.kvm_ops->giveup_ext)
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
+					MSR_VSX);
+
+		if (op.vsx_flags & VSX_FPCONV)
+			vcpu->arch.mmio_sp64_extend = 1;
+
+		if (op.element_size == 8)
+			vcpu->arch.mmio_vsx_copy_type =
+					KVMPPC_VSX_COPY_DWORD;
+		else if (op.element_size == 4)
+			vcpu->arch.mmio_vsx_copy_type =
+					KVMPPC_VSX_COPY_WORD;
+		else
+			break;
+
+		if (size < op.element_size) {
+			/* precise conversion case, like stxsspx */
+			vcpu->arch.mmio_vsx_copy_nums = 1;
+			io_size_each = size;
+		} else { /* stxvw4x, stxvd2x, etc */
+			vcpu->arch.mmio_vsx_copy_nums =
+					size/op.element_size;
+			io_size_each = op.element_size;
+		}
+
+		emulated = kvmppc_handle_vsx_store(run, vcpu,
+				op.reg, io_size_each, 1);
+		break;
+	}
+#endif
 	case CACHEOP:
 		/* Do nothing. The guest is performing dcbi because
		 * hardware DMA is not snooped by the dcache, but
@@ -210,142 +301,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	switch (get_op(inst)) {
 	case 31:
 		switch (get_xop(inst)) {
-#ifdef CONFIG_VSX
-		case OP_31_XOP_LXSDX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_LXSSPX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXSIWAX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 1);
-			break;
-
-		case OP_31_XOP_LXSIWZX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVD2X:
-		/*
-		 * In this case, the official load/store process is like this:
-		 * Step1, exit from vm by page fault isr, then kvm save vsr.
-		 * Please see guest_exit_cont->store_fp_state->SAVE_32VSRS
-		 * as reference.
-		 *
-		 * Step2, copy data between memory and VCPU
-		 * Notice: for LXVD2X/STXVD2X/LXVW4X/STXVW4X, we use
-		 * 2copies*8bytes or 4copies*4bytes
-		 * to simulate one copy of 16bytes.
-		 * Also there is an endian issue here, we should notice the
-		 * layout of memory.
-		 * Please see MARCO of LXVD2X_ROT/STXVD2X_ROT as more reference.
-		 * If host is little-endian, kvm will call XXSWAPD for
-		 * LXVD2X_ROT/STXVD2X_ROT.
-		 * So, if host is little-endian,
-		 * the postion of memeory should be swapped.
-		 *
-		 * Step3, return to guest, kvm reset register.
-		 * Please see kvmppc_hv_entry->load_fp_state->REST_32VSRS
-		 * as reference.
-		 */
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 2;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVW4X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 4;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVDSX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type =
-				KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_STXSDX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-				rs, 8, 1);
-			break;
-
-		case OP_31_XOP_STXSSPX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-				rs, 4, 1);
-			break;
-
-		case OP_31_XOP_STXSIWX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_offset = 1;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-				rs, 4, 1);
-			break;
-
-		case OP_31_XOP_STXVD2X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 2;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-				rs, 8, 1);
-			break;
-
-		case OP_31_XOP_STXVW4X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 4;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-				rs, 4, 1);
-			break;
-#endif /* CONFIG_VSX */
-
 #ifdef CONFIG_ALTIVEC
 		case OP_31_XOP_LVX:
 			if (kvmppc_check_altivec_disabled(vcpu))
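
---
For illustration only, and not part of the patch: a minimal user-space
sketch of the copy-type and copy-count selection that the LOAD_VSX case
above performs. The flag values, enum constants, and struct fields below
are stand-ins mirroring the kernel's struct instruction_op and
KVMPPC_VSX_COPY_* names, not the kernel definitions themselves.

/*
 * Stand-in definitions; the kernel's real values live in
 * arch/powerpc/include/asm/sstep.h and the KVM PPC headers.
 */
#include <stdio.h>

#define VSX_FPCONV	0x01	/* assumed flag values for the sketch */
#define VSX_SPLAT	0x02

enum vsx_copy_type {
	COPY_NONE,
	COPY_DWORD,		/* ~ KVMPPC_VSX_COPY_DWORD */
	COPY_DWORD_LOAD_DUMP,	/* ~ KVMPPC_VSX_COPY_DWORD_LOAD_DUMP */
	COPY_WORD,		/* ~ KVMPPC_VSX_COPY_WORD */
	COPY_WORD_LOAD_DUMP,	/* ~ KVMPPC_VSX_COPY_WORD_LOAD_DUMP */
};

struct op {			/* stand-in for struct instruction_op */
	int element_size;	/* 8 for dword forms, 4 for word forms */
	int vsx_flags;
};

/* Mirrors the element_size/VSX_SPLAT decision table in the patch. */
static enum vsx_copy_type pick_copy_type(const struct op *op)
{
	if (op->element_size == 8)
		return (op->vsx_flags & VSX_SPLAT) ?
			COPY_DWORD_LOAD_DUMP : COPY_DWORD;
	if (op->element_size == 4)
		return (op->vsx_flags & VSX_SPLAT) ?
			COPY_WORD_LOAD_DUMP : COPY_WORD;
	return COPY_NONE;	/* unsupported size: emulation bails out */
}

/* Mirrors the copy_nums/io_size_each split: precision-convert vs. vector. */
static void pick_copy_nums(int size, int element_size,
			   int *copy_nums, int *io_size_each)
{
	if (size < element_size) {	/* e.g. lxsspx: one short access */
		*copy_nums = 1;
		*io_size_each = size;
	} else {			/* e.g. lxvd2x: 16 bytes as 2 x 8 */
		*copy_nums = size / element_size;
		*io_size_each = element_size;
	}
}

int main(void)
{
	struct op lxvd2x = { .element_size = 8, .vsx_flags = 0 };
	int nums, each;

	pick_copy_nums(16, lxvd2x.element_size, &nums, &each);
	printf("lxvd2x: copy_type=%d, %d copies of %d bytes\n",
	       pick_copy_type(&lxvd2x), nums, each);
	return 0;
}

With these stand-ins, a 16-byte lxvd2x decodes to two 8-byte copies with
COPY_DWORD, matching the 2copies*8bytes scheme described in the LXVD2X
comment removed by this patch.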