From patchwork Mon Apr 5 01:19:01 2021
X-Patchwork-Id: 1462172
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Fabiano Rosas
Subject: [PATCH v6 01/48] KVM: PPC: Book3S HV: Nested move LPCR sanitising to sanitise_hv_regs
Date: Mon, 5 Apr 2021 11:19:01 +1000
Message-Id: <20210405011948.675354-2-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

The LPCR sanitising will get a bit more complicated in future patches, so
move it into the helper function.

This change allows the L1 hypervisor to determine some of the LPCR bits
that the L0 is using to run it, which could be a privilege violation
(LPCR is HV-privileged), although the same problem already exists for
HFSCR, for example. Discussion of the HV privilege issue is ongoing and
can be resolved with a later change.

Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv_nested.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 0cd0e7aad588..3060e5deffc8 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -132,8 +132,27 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
 	}
 }
 
+/*
+ * This can result in some L0 HV register state being leaked to an L1
+ * hypervisor when the hv_guest_state is copied back to the guest after
+ * being modified here.
+ *
+ * There is no known problem with such a leak, and in many cases these
+ * register settings could be derived by the guest by observing behaviour
+ * and timing, interrupts, etc., but it is an issue to consider.
+ */
 static void sanitise_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
 {
+	struct kvmppc_vcore *vc = vcpu->arch.vcore;
+	u64 mask;
+
+	/*
+	 * Don't let L1 change LPCR bits for the L2 except these:
+	 */
+	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
+		LPCR_LPES | LPCR_MER;
+	hr->lpcr = (vc->lpcr & ~mask) | (hr->lpcr & mask);
+
 	/*
 	 * Don't let L1 enable features for L2 which we've disabled for L1,
 	 * but preserve the interrupt cause field.
@@ -271,8 +290,6 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	u64 hv_ptr, regs_ptr;
 	u64 hdec_exp;
 	s64 delta_purr, delta_spurr, delta_ic, delta_vtb;
-	u64 mask;
-	unsigned long lpcr;
 
 	if (vcpu->kvm->arch.l1_ptcr == 0)
 		return H_NOT_AVAILABLE;
@@ -321,9 +338,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	vcpu->arch.nested_vcpu_id = l2_hv.vcpu_token;
 	vcpu->arch.regs = l2_regs;
 	vcpu->arch.shregs.msr = vcpu->arch.regs.msr;
-	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
-		LPCR_LPES | LPCR_MER;
-	lpcr = (vc->lpcr & ~mask) | (l2_hv.lpcr & mask);
+
 	sanitise_hv_regs(vcpu, &l2_hv);
 	restore_hv_regs(vcpu, &l2_hv);
 
@@ -335,7 +350,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 			r = RESUME_HOST;
 			break;
 		}
-		r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr);
+		r = kvmhv_run_single_vcpu(vcpu, hdec_exp, l2_hv.lpcr);
 	} while (is_kvmppc_resume_guest(r));
 
 	/* save L2 state for return */

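The LPCR merge that sanitise_hv_regs() now performs can be looked at in
isolation. The helper below is an illustrative, standalone sketch with
made-up names (only the expression mirrors the patch): bits in the
guest-settable mask come from the L1-supplied value, everything else keeps
the value the L0 already uses for the vcore.

	#include <stdint.h>

	/* Illustrative sketch only, not kernel code. */
	static uint64_t merge_lpcr(uint64_t vcore_lpcr, uint64_t l1_lpcr,
				   uint64_t l1_settable_mask)
	{
		return (vcore_lpcr & ~l1_settable_mask) |
		       (l1_lpcr & l1_settable_mask);
	}
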
From patchwork Mon Apr 5 01:19:02 2021
X-Patchwork-Id: 1462173
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Fabiano Rosas
Subject: [PATCH v6 02/48] KVM: PPC: Book3S HV: Add a function to filter guest LPCR bits
Date: Mon, 5 Apr 2021 11:19:02 +1000
Message-Id: <20210405011948.675354-3-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

Guest LPCR depends on hardware type, and future changes will add
restrictions based on errata and guest MMU mode. Move this logic to a
common function and use it for the cases where the guest wants to update
its LPCR (or the LPCR of a nested guest).

This also adds a warning in other places that set or update LPCR if we
try to set something that would have been disallowed by the filter, as a
sanity check.

Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/include/asm/kvm_book3s.h |  2 +
 arch/powerpc/kvm/book3s_hv.c          | 68 ++++++++++++++++++++-------
 arch/powerpc/kvm/book3s_hv_nested.c   |  8 +++-
 3 files changed, 59 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 2f5f919f6cd3..c58121508157 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -258,6 +258,8 @@ extern long kvmppc_hv_get_dirty_log_hpt(struct kvm *kvm,
 extern void kvmppc_harvest_vpa_dirty(struct kvmppc_vpa *vpa,
 			struct kvm_memory_slot *memslot,
 			unsigned long *map);
+extern unsigned long kvmppc_filter_lpcr_hv(struct kvm *kvm,
+			unsigned long lpcr);
 extern void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr,
 			unsigned long mask);
 extern void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr);
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 13bad6bf4c95..d2c7626cb960 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1635,6 +1635,35 @@ static int kvm_arch_vcpu_ioctl_set_sregs_hv(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+/*
+ * Enforce limits on guest LPCR values based on hardware availability,
+ * guest configuration, and possibly hypervisor support and security
+ * concerns.
+ */
+unsigned long kvmppc_filter_lpcr_hv(struct kvm *kvm, unsigned long lpcr)
+{
+	/* On POWER8 and above, userspace can modify AIL */
+	if (!cpu_has_feature(CPU_FTR_ARCH_207S))
+		lpcr &= ~LPCR_AIL;
+
+	/*
+	 * On POWER9, allow userspace to enable large decrementer for the
+	 * guest, whether or not the host has it enabled.
+	 */
+	if (!cpu_has_feature(CPU_FTR_ARCH_300))
+		lpcr &= ~LPCR_LD;
+
+	return lpcr;
+}
+
+static void verify_lpcr(struct kvm *kvm, unsigned long lpcr)
+{
+	if (lpcr != kvmppc_filter_lpcr_hv(kvm, lpcr)) {
+		WARN_ONCE(1, "lpcr 0x%lx differs from filtered 0x%lx\n",
+			  lpcr, kvmppc_filter_lpcr_hv(kvm, lpcr));
+	}
+}
+
 static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
 		bool preserve_top32)
 {
@@ -1643,6 +1672,23 @@ static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
 	u64 mask;
 
 	spin_lock(&vc->lock);
+
+	/*
+	 * Userspace can only modify
+	 * DPFD (default prefetch depth), ILE (interrupt little-endian),
+	 * TC (translation control), AIL (alternate interrupt location),
+	 * LD (large decrementer).
+	 * These are subject to restrictions from kvmppc_filter_lcpr_hv().
+	 */
+	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD;
+
+	/* Broken 32-bit version of LPCR must not clear top bits */
+	if (preserve_top32)
+		mask &= 0xFFFFFFFF;
+
+	new_lpcr = kvmppc_filter_lpcr_hv(kvm,
+			(vc->lpcr & ~mask) | (new_lpcr & mask));
+
 	/*
 	 * If ILE (interrupt little-endian) has changed, update the
 	 * MSR_LE bit in the intr_msr for each vcpu in this vcore.
@@ -1661,25 +1707,8 @@ static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
 		}
 	}
 
-	/*
-	 * Userspace can only modify DPFD (default prefetch depth),
-	 * ILE (interrupt little-endian) and TC (translation control).
-	 * On POWER8 and POWER9 userspace can also modify AIL (alt. interrupt loc.).
-	 */
-	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC;
-	if (cpu_has_feature(CPU_FTR_ARCH_207S))
-		mask |= LPCR_AIL;
-	/*
-	 * On POWER9, allow userspace to enable large decrementer for the
-	 * guest, whether or not the host has it enabled.
-	 */
-	if (cpu_has_feature(CPU_FTR_ARCH_300))
-		mask |= LPCR_LD;
+	vc->lpcr = new_lpcr;
 
-	/* Broken 32-bit version of LPCR must not clear top bits */
-	if (preserve_top32)
-		mask &= 0xFFFFFFFF;
-	vc->lpcr = (vc->lpcr & ~mask) | (new_lpcr & mask);
 	spin_unlock(&vc->lock);
 }
 
@@ -4641,8 +4670,10 @@ void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr, unsigned long mask)
 		struct kvmppc_vcore *vc = kvm->arch.vcores[i];
 		if (!vc)
 			continue;
+
 		spin_lock(&vc->lock);
 		vc->lpcr = (vc->lpcr & ~mask) | lpcr;
+		verify_lpcr(kvm, vc->lpcr);
 		spin_unlock(&vc->lock);
 		if (++cores_done >= kvm->arch.online_vcores)
 			break;
@@ -4970,6 +5001,7 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
 		kvmppc_setup_partition_table(kvm);
 	}
 
+	verify_lpcr(kvm, lpcr);
 	kvm->arch.lpcr = lpcr;
 
 	/* Initialization for future HPT resizes */
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 3060e5deffc8..d14fe32f167b 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -151,7 +151,13 @@ static void sanitise_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
 	 */
 	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
		LPCR_LPES | LPCR_MER;
-	hr->lpcr = (vc->lpcr & ~mask) | (hr->lpcr & mask);
+
+	/*
+	 * Additional filtering is required depending on hardware
+	 * and configuration.
+	 */
+	hr->lpcr = kvmppc_filter_lpcr_hv(vcpu->kvm,
+			(vc->lpcr & ~mask) | (hr->lpcr & mask));
 
 	/*
 	 * Don't let L1 enable features for L2 which we've disabled for L1,

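The filter/verify split added above can be sketched standalone. The
restriction below is invented purely for illustration; the real checks are
the CPU feature tests inside kvmppc_filter_lpcr_hv().

	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical bit that this imaginary platform cannot support. */
	#define EX_UNSUPPORTED_BIT	(1ULL << 3)

	/* Clamp a requested value to what is allowed, like kvmppc_filter_lpcr_hv(). */
	static uint64_t example_filter_lpcr(uint64_t lpcr)
	{
		return lpcr & ~EX_UNSUPPORTED_BIT;
	}

	/* Like verify_lpcr(): warn when a caller installs an unfiltered value. */
	static void example_verify_lpcr(uint64_t lpcr)
	{
		uint64_t filtered = example_filter_lpcr(lpcr);

		if (lpcr != filtered)
			fprintf(stderr, "lpcr 0x%llx differs from filtered 0x%llx\n",
				(unsigned long long)lpcr,
				(unsigned long long)filtered);
	}
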
From patchwork Mon Apr 5 01:19:03 2021
X-Patchwork-Id: 1462174
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Paul Mackerras, Fabiano Rosas
Subject: [PATCH v6 03/48] KVM: PPC: Book3S HV: Disallow LPCR[AIL] to be set to 1 or 2
Date: Mon, 5 Apr 2021 11:19:03 +1000
Message-Id: <20210405011948.675354-4-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

These AIL modes are already disallowed when the guest requests them via
H_SET_MODE; also disallow them when the LPCR is updated directly. AIL
modes can affect the host interrupt behaviour while the guest LPCR value
is set, so filter them here too.

Acked-by: Paul Mackerras
Suggested-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index d2c7626cb960..daded8949a39 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -803,7 +803,10 @@ static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
 		vcpu->arch.dawrx1 = value2;
 		return H_SUCCESS;
 	case H_SET_MODE_RESOURCE_ADDR_TRANS_MODE:
-		/* KVM does not support mflags=2 (AIL=2) */
+		/*
+		 * KVM does not support mflags=2 (AIL=2) and AIL=1 is reserved.
+		 * Keep this in synch with kvmppc_filter_guest_lpcr_hv.
+		 */
 		if (mflags != 0 && mflags != 3)
 			return H_UNSUPPORTED_FLAG_START;
 		return H_TOO_HARD;
@@ -1645,6 +1648,8 @@ unsigned long kvmppc_filter_lpcr_hv(struct kvm *kvm, unsigned long lpcr)
 	/* On POWER8 and above, userspace can modify AIL */
 	if (!cpu_has_feature(CPU_FTR_ARCH_207S))
 		lpcr &= ~LPCR_AIL;
+	if ((lpcr & LPCR_AIL) != LPCR_AIL_3)
+		lpcr &= ~LPCR_AIL;	/* LPCR[AIL]=1/2 is disallowed */
 
 	/*
 	 * On POWER9, allow userspace to enable large decrementer for the

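Both the H_SET_MODE path and the new LPCR filter now enforce the same AIL
policy, which reduces to the following sketch (the AIL field is treated as
a plain 0-3 value here, for illustration only):

	/*
	 * Only AIL=0 and AIL=3 are accepted; AIL=1 (reserved) and AIL=2
	 * (unsupported by KVM) fall back to 0.
	 */
	static unsigned int example_filter_ail(unsigned int ail)
	{
		return (ail == 3) ? ail : 0;
	}
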
From patchwork Mon Apr 5 01:19:04 2021
X-Patchwork-Id: 1462175
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy
Subject: [PATCH v6 04/48] KVM: PPC: Book3S HV: Prevent radix guests setting LPCR[TC]
Date: Mon, 5 Apr 2021 11:19:04 +1000
Message-Id: <20210405011948.675354-5-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

Prevent radix guests from setting LPCR[TC]. This bit only applies to hash
partitions.
Reviewed-by: Alexey Kardashevskiy
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index daded8949a39..a6b5d79d9306 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1645,6 +1645,10 @@ static int kvm_arch_vcpu_ioctl_set_sregs_hv(struct kvm_vcpu *vcpu,
  */
 unsigned long kvmppc_filter_lpcr_hv(struct kvm *kvm, unsigned long lpcr)
 {
+	/* LPCR_TC only applies to HPT guests */
+	if (kvm_is_radix(kvm))
+		lpcr &= ~LPCR_TC;
+
 	/* On POWER8 and above, userspace can modify AIL */
 	if (!cpu_has_feature(CPU_FTR_ARCH_207S))
 		lpcr &= ~LPCR_AIL;

From patchwork Mon Apr 5 01:19:05 2021
X-Patchwork-Id: 1462176
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Paul Mackerras, Fabiano Rosas, Daniel Axtens
Subject: [PATCH v6 05/48] KVM: PPC: Book3S HV: Remove redundant mtspr PSPB
Date: Mon, 5 Apr 2021 11:19:05 +1000
Message-Id: <20210405011948.675354-6-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

This SPR is set to 0 twice when exiting the guest.

Acked-by: Paul Mackerras
Suggested-by: Fabiano Rosas
Reviewed-by: Daniel Axtens
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index a6b5d79d9306..8bc2a5ee9ece 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3787,7 +3787,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	mtspr(SPRN_DSCR, host_dscr);
 	mtspr(SPRN_TIDR, host_tidr);
 	mtspr(SPRN_IAMR, host_iamr);
-	mtspr(SPRN_PSPB, 0);
 
 	if (host_amr != vcpu->arch.amr)
 		mtspr(SPRN_AMR, host_amr);

From patchwork Mon Apr 5 01:19:06 2021
X-Patchwork-Id: 1462177
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Paul Mackerras, Daniel Axtens
Subject: [PATCH v6 06/48] KVM: PPC: Book3S HV: remove unused kvmppc_h_protect argument
Date: Mon, 5 Apr 2021 11:19:06 +1000
Message-Id: <20210405011948.675354-7-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

The va argument is not used in the function or set by its asm caller, so
remove it to be safe.
Acked-by: Paul Mackerras
Reviewed-by: Daniel Axtens
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/include/asm/kvm_ppc.h  | 3 +--
 arch/powerpc/kvm/book3s_hv_rm_mmu.c | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 8aacd76bb702..9531b1c1b190 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -767,8 +767,7 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags,
 		     unsigned long pte_index, unsigned long avpn);
 long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu);
 long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
-		      unsigned long pte_index, unsigned long avpn,
-		      unsigned long va);
+		      unsigned long pte_index, unsigned long avpn);
 long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
 		   unsigned long pte_index);
 long kvmppc_h_clear_ref(struct kvm_vcpu *vcpu, unsigned long flags,
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 88da2764c1bb..7af7c70f1468 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -673,8 +673,7 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 }
 
 long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
-		      unsigned long pte_index, unsigned long avpn,
-		      unsigned long va)
+		      unsigned long pte_index, unsigned long avpn)
 {
 	struct kvm *kvm = vcpu->kvm;
 	__be64 *hpte;

From patchwork Mon Apr 5 01:19:07 2021
X-Patchwork-Id: 1462178
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Paul Mackerras, Daniel Axtens
Subject: [PATCH v6 07/48] KVM: PPC: Book3S HV: Fix CONFIG_SPAPR_TCE_IOMMU=n default hcalls
Date: Mon, 5 Apr 2021 11:19:07 +1000
Message-Id: <20210405011948.675354-8-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

With this config option disabled, the warning in init_default_hcalls fires
because the TCE handlers are in the default hcall list but are not
implemented.
Acked-by: Paul Mackerras
Reviewed-by: Daniel Axtens
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 8bc2a5ee9ece..ed77aff9cdb6 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -5409,8 +5409,10 @@ static unsigned int default_hcall_list[] = {
 	H_READ,
 	H_PROTECT,
 	H_BULK_REMOVE,
+#ifdef CONFIG_SPAPR_TCE_IOMMU
 	H_GET_TCE,
 	H_PUT_TCE,
+#endif
 	H_SET_DABR,
 	H_SET_XDABR,
 	H_CEDE,

From patchwork Mon Apr 5 01:19:08 2021
X-Patchwork-Id: 1462180
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Paul Mackerras, Fabiano Rosas
Subject: [PATCH v6 08/48] powerpc/64s: Remove KVM handler support from CBE_RAS interrupts
Date: Mon, 5 Apr 2021 11:19:08 +1000
Message-Id: <20210405011948.675354-9-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

Cell does not support KVM.

Acked-by: Paul Mackerras
Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kernel/exceptions-64s.S | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 8082b690e874..a0515cb829c2 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -2530,8 +2530,6 @@ EXC_VIRT_NONE(0x5100, 0x100)
 INT_DEFINE_BEGIN(cbe_system_error)
 	IVEC=0x1200
 	IHSRR=1
-	IKVM_SKIP=1
-	IKVM_REAL=1
 INT_DEFINE_END(cbe_system_error)
 
 EXC_REAL_BEGIN(cbe_system_error, 0x1200, 0x100)
@@ -2701,8 +2699,6 @@ EXC_COMMON_BEGIN(denorm_exception_common)
 INT_DEFINE_BEGIN(cbe_maintenance)
 	IVEC=0x1600
 	IHSRR=1
-	IKVM_SKIP=1
-	IKVM_REAL=1
 INT_DEFINE_END(cbe_maintenance)
 
 EXC_REAL_BEGIN(cbe_maintenance, 0x1600, 0x100)
@@ -2754,8 +2750,6 @@ EXC_COMMON_BEGIN(altivec_assist_common)
 INT_DEFINE_BEGIN(cbe_thermal)
 	IVEC=0x1800
 	IHSRR=1
-	IKVM_SKIP=1
-	IKVM_REAL=1
 INT_DEFINE_END(cbe_thermal)
 
 EXC_REAL_BEGIN(cbe_thermal, 0x1800, 0x100)

From patchwork Mon Apr 5 01:19:09 2021
X-Patchwork-Id: 1462181
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Paul Mackerras, Daniel Axtens, Fabiano Rosas
Subject: [PATCH v6 09/48] powerpc/64s: remove KVM SKIP test from instruction breakpoint handler
Date: Mon, 5 Apr 2021 11:19:09 +1000
Message-Id: <20210405011948.675354-10-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

The code being executed in KVM_GUEST_MODE_SKIP is hypervisor code with
MSR[IR]=0, so the faults of concern are the d-side ones caused by access
to guest context by the hypervisor. Instruction breakpoint interrupts are
not a concern here. It's unlikely any good would come of causing breaks
in this code, but skipping the instruction that caused it won't help
matters (e.g., skip the mtmsr that sets MSR[DR]=0 or clears
KVM_GUEST_MODE_SKIP).

[Paul notes: the 0x1300 interrupt was dropped from the architecture a long
time ago and is not generated by P7, P8, P9 or P10.] In fact it does not
exist in ISA v2.01, which is the earliest supported now, but did exist in
600 series designs (some of the earliest 64-bit powerpcs), so it could
probably be removed entirely.
Acked-by: Paul Mackerras
Reviewed-by: Daniel Axtens
Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kernel/exceptions-64s.S | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index a0515cb829c2..c9c446ccff54 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -2553,7 +2553,6 @@ EXC_VIRT_NONE(0x5200, 0x100)
 INT_DEFINE_BEGIN(instruction_breakpoint)
 	IVEC=0x1300
 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-	IKVM_SKIP=1
 	IKVM_REAL=1
 #endif
 INT_DEFINE_END(instruction_breakpoint)

From patchwork Mon Apr 5 01:19:10 2021
X-Patchwork-Id: 1462182
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Paul Mackerras, Daniel Axtens, Fabiano Rosas
Subject: [PATCH v6 10/48] KVM: PPC: Book3S HV: Ensure MSR[ME] is always set in guest MSR
Date: Mon, 5 Apr 2021 11:19:10 +1000
Message-Id: <20210405011948.675354-11-npiggin@gmail.com>
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>

Rather than add the ME bit to the MSR at guest entry, make it clear that
the hypervisor does not allow the guest to clear the bit.

The ME set is kept in guest entry for now, but a future patch will warn
if it's not present.

Acked-by: Paul Mackerras
Reviewed-by: Daniel Axtens
Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv_builtin.c | 3 +++
 arch/powerpc/kvm/book3s_hv_nested.c  | 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 158d309b42a3..41cb03d0bde4 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -662,6 +662,9 @@ static void kvmppc_end_cede(struct kvm_vcpu *vcpu)
 
 void kvmppc_set_msr_hv(struct kvm_vcpu *vcpu, u64 msr)
 {
+	/* Guest must always run with ME enabled. */
+	msr = msr | MSR_ME;
+
 	/*
 	 * Check for illegal transactional state bit combination
 	 * and if we find it, force the TS field to a safe state.
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index d14fe32f167b..fb03085c902b 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -343,7 +343,9 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	vcpu->arch.nested = l2;
 	vcpu->arch.nested_vcpu_id = l2_hv.vcpu_token;
 	vcpu->arch.regs = l2_regs;
-	vcpu->arch.shregs.msr = vcpu->arch.regs.msr;
+
+	/* Guest must always run with ME enabled. */
+	vcpu->arch.shregs.msr = vcpu->arch.regs.msr | MSR_ME;
 
 	sanitise_hv_regs(vcpu, &l2_hv);
 	restore_hv_regs(vcpu, &l2_hv);

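The invariant amounts to a one-line fixup applied to whatever MSR value the
guest supplies. A standalone sketch, with an assumed bit position standing
in for the kernel's MSR_ME definition:

	#include <stdint.h>

	#define EX_MSR_ME	(1ULL << 12)	/* assumed position, for illustration */

	/* Whatever MSR the guest asks for, machine check exceptions stay enabled. */
	static uint64_t example_guest_msr(uint64_t msr)
	{
		return msr | EX_MSR_ME;
	}
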
Subject: [PATCH v6 11/48] KVM: PPC: Book3S HV: Ensure MSR[HV] is always clear in guest MSR Date: Mon, 5 Apr 2021 11:19:11 +1000 Message-Id: <20210405011948.675354-12-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rather than clear the HV bit from the MSR at guest entry, make it clear that the hypervisor does not allow the guest to set the bit. The HV clear is kept in guest entry for now, but a future patch will warn if it is set. Acked-by: Paul Mackerras Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_builtin.c | 4 ++-- arch/powerpc/kvm/book3s_hv_nested.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c index 41cb03d0bde4..7a0e33a9c980 100644 --- a/arch/powerpc/kvm/book3s_hv_builtin.c +++ b/arch/powerpc/kvm/book3s_hv_builtin.c @@ -662,8 +662,8 @@ static void kvmppc_end_cede(struct kvm_vcpu *vcpu) void kvmppc_set_msr_hv(struct kvm_vcpu *vcpu, u64 msr) { - /* Guest must always run with ME enabled. */ - msr = msr | MSR_ME; + /* Guest must always run with ME enabled, HV disabled. */ + msr = (msr | MSR_ME) & ~MSR_HV; /* * Check for illegal transactional state bit combination diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index fb03085c902b..60724f674421 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -344,8 +344,8 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) vcpu->arch.nested_vcpu_id = l2_hv.vcpu_token; vcpu->arch.regs = l2_regs; - /* Guest must always run with ME enabled. */ - vcpu->arch.shregs.msr = vcpu->arch.regs.msr | MSR_ME; + /* Guest must always run with ME enabled, HV disabled. 
*/ + vcpu->arch.shregs.msr = (vcpu->arch.regs.msr | MSR_ME) & ~MSR_HV; sanitise_hv_regs(vcpu, &l2_hv); restore_hv_regs(vcpu, &l2_hv); From patchwork Mon Apr 5 01:19:12 2021 X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462184 From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Paul Mackerras , Daniel Axtens , Fabiano Rosas Subject:
[PATCH v6 12/48] KVM: PPC: Book3S 64: move KVM interrupt entry to a common entry point Date: Mon, 5 Apr 2021 11:19:12 +1000 Message-Id: <20210405011948.675354-13-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rather than bifurcate the call depending on whether or not HV is possible, and have the HV entry test for PR, just make a single common point which does the demultiplexing. This makes it simpler to add another type of exit handler. Acked-by: Paul Mackerras Reviewed-by: Daniel Axtens Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 8 +----- arch/powerpc/kvm/Makefile | 3 +++ arch/powerpc/kvm/book3s_64_entry.S | 36 +++++++++++++++++++++++++ arch/powerpc/kvm/book3s_hv_rmhandlers.S | 11 ++------ 4 files changed, 42 insertions(+), 16 deletions(-) create mode 100644 arch/powerpc/kvm/book3s_64_entry.S diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index c9c446ccff54..162595af1ac7 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -208,7 +208,6 @@ do_define_int n .endm #ifdef CONFIG_KVM_BOOK3S_64_HANDLER -#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE /* * All interrupts which set HSRR registers, as well as SRESET and MCE and * syscall when invoked with "sc 1" switch to MSR[HV]=1 (HVMODE) to be taken, @@ -238,13 +237,8 @@ do_define_int n /* * If an interrupt is taken while a guest is running, it is immediately routed - * to KVM to handle. If both HV and PR KVM arepossible, KVM interrupts go first - * to kvmppc_interrupt_hv, which handles the PR guest case. + * to KVM to handle. */ -#define kvmppc_interrupt kvmppc_interrupt_hv -#else -#define kvmppc_interrupt kvmppc_interrupt_pr -#endif .macro KVMTEST name lbz r10,HSTATE_IN_GUEST(r13) diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile index 2bfeaa13befb..cdd119028f64 100644 --- a/arch/powerpc/kvm/Makefile +++ b/arch/powerpc/kvm/Makefile @@ -59,6 +59,9 @@ kvm-pr-y := \ kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) += \ tm.o +kvm-book3s_64-builtin-objs-y += \ + book3s_64_entry.o + ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) += \ book3s_rmhandlers.o diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S new file mode 100644 index 000000000000..7a039ea78f15 --- /dev/null +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#include +#include +#include +#include +#include +#include + +/* + * This is branched to from interrupt handlers in exception-64s.S which set + * IKVM_REAL or IKVM_VIRT, if HSTATE_IN_GUEST was found to be non-zero. 
+ */ +.global kvmppc_interrupt +.balign IFETCH_ALIGN_BYTES +kvmppc_interrupt: + /* + * Register contents: + * R12 = (guest CR << 32) | interrupt vector + * R13 = PACA + * guest R12 saved in shadow VCPU SCRATCH0 + * guest R13 saved in SPRN_SCRATCH0 + */ +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE + std r9,HSTATE_SCRATCH2(r13) + lbz r9,HSTATE_IN_GUEST(r13) + cmpwi r9,KVM_GUEST_MODE_HOST_HV + beq kvmppc_bad_host_intr +#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE + cmpwi r9,KVM_GUEST_MODE_GUEST + ld r9,HSTATE_SCRATCH2(r13) + beq kvmppc_interrupt_pr +#endif + b kvmppc_interrupt_hv +#else + b kvmppc_interrupt_pr +#endif diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 5e634db4809b..f976efb7e4a9 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1269,16 +1269,8 @@ kvmppc_interrupt_hv: * R13 = PACA * guest R12 saved in shadow VCPU SCRATCH0 * guest R13 saved in SPRN_SCRATCH0 + * guest R9 saved in HSTATE_SCRATCH2 */ - std r9, HSTATE_SCRATCH2(r13) - lbz r9, HSTATE_IN_GUEST(r13) - cmpwi r9, KVM_GUEST_MODE_HOST_HV - beq kvmppc_bad_host_intr -#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE - cmpwi r9, KVM_GUEST_MODE_GUEST - ld r9, HSTATE_SCRATCH2(r13) - beq kvmppc_interrupt_pr -#endif /* We're now back in the host but in guest MMU context */ li r9, KVM_GUEST_MODE_HOST_HV stb r9, HSTATE_IN_GUEST(r13) @@ -3280,6 +3272,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_P9_TM_HV_ASSIST) * cfar is saved in HSTATE_CFAR(r13) * ppr is saved in HSTATE_PPR(r13) */ +.global kvmppc_bad_host_intr kvmppc_bad_host_intr: /* * Switch to the emergency stack, but start half-way down in From patchwork Mon Apr 5 01:19:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462185 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=JLNGMtId; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCXT6bzjz9sWY for ; Mon, 5 Apr 2021 11:20:57 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231833AbhDEBVB (ORCPT ); Sun, 4 Apr 2021 21:21:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39820 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231831AbhDEBUz (ORCPT ); Sun, 4 Apr 2021 21:20:55 -0400 Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com [IPv6:2607:f8b0:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4CC1FC061756 for ; Sun, 4 Apr 2021 18:20:50 -0700 (PDT) Received: by mail-pf1-x436.google.com with SMTP id s11so7209237pfm.1 for ; Sun, 04 Apr 2021 18:20:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=QnAH6vLwG/CSAZMFY7f9tw3x1aO5kQ0YxBlmG9hsOF8=; b=JLNGMtIdz1cBZQKF3kU61i/A7O7kBaCf9iZd1s1l6/9fxh9NkIeOFrWbEvBCVJgAIi 25vibYtkhiyCUeEoBd0YyfirPDzH8molWv2OvvADM8o1vLT8k7k/UNXNMKPhzyt7rP5p 
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Daniel Axtens , Fabiano Rosas Subject: [PATCH v6 13/48] KVM: PPC: Book3S 64: Move GUEST_MODE_SKIP test into KVM Date: Mon, 5 Apr 2021 11:19:13 +1000 Message-Id: <20210405011948.675354-14-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the GUEST_MODE_SKIP logic into KVM code. This is quite a KVM internal detail that has no real need to be in common handlers. Also add a comment explaining why this thing exists.
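For context, the "skip" test this patch moves into KVM can be sketched in plain C roughly as below. The vector numbers mirror the usual Book3S trap vectors and every other name is an invented stand-in for illustration; the real logic is the assembly added to book3s_64_entry.S in the diff that follows.

/* Illustrative sketch only -- not code from this patch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VEC_MACHINE_CHECK  0x200
#define VEC_DATA_STORAGE   0x300
#define VEC_DATA_SEGMENT   0x380
#define VEC_H_DATA_STORAGE (0xe00 | 0x2) /* HSRR vectors have 0x2 added */

struct fake_regs {
        uint64_t nip; /* stands in for (H)SRR0 */
};

/* Only faults the guest-instruction load can raise are eligible for skipping. */
static bool skippable(uint32_t vec)
{
        return vec == VEC_MACHINE_CHECK || vec == VEC_DATA_STORAGE ||
               vec == VEC_DATA_SEGMENT || vec == VEC_H_DATA_STORAGE;
}

/*
 * If a fault is taken while KVM_GUEST_MODE_SKIP is set, step over the
 * faulting load; the caller then notices the load did not execute and
 * falls back to walking the guest page tables.
 */
static void handle_fault_in_skip_mode(struct fake_regs *regs, uint32_t vec)
{
        if (skippable(vec))
                regs->nip += 4;
        /* otherwise: continue to the normal KVM exit path */
}

int main(void)
{
        struct fake_regs regs = { .nip = 0x1000 };

        handle_fault_in_skip_mode(&regs, VEC_DATA_STORAGE);
        printf("nip after skipped DSI: 0x%llx\n", (unsigned long long)regs.nip);
        return 0;
}

In the real assembly the skip return is an RFI (HRFI for the HSRR variant) back to the interrupted hypervisor load, which a plain C sketch cannot express.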
Reviewed-by: Daniel Axtens Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 60 ---------------------------- arch/powerpc/kvm/book3s_64_entry.S | 59 ++++++++++++++++++++++++++- 2 files changed, 58 insertions(+), 61 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 162595af1ac7..16fbfde960e7 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -133,7 +133,6 @@ name: #define IBRANCH_TO_COMMON .L_IBRANCH_TO_COMMON_\name\() /* ENTRY branch to common */ #define IREALMODE_COMMON .L_IREALMODE_COMMON_\name\() /* Common runs in realmode */ #define IMASK .L_IMASK_\name\() /* IRQ soft-mask bit */ -#define IKVM_SKIP .L_IKVM_SKIP_\name\() /* Generate KVM skip handler */ #define IKVM_REAL .L_IKVM_REAL_\name\() /* Real entry tests KVM */ #define __IKVM_REAL(name) .L_IKVM_REAL_ ## name #define IKVM_VIRT .L_IKVM_VIRT_\name\() /* Virt entry tests KVM */ @@ -190,9 +189,6 @@ do_define_int n .ifndef IMASK IMASK=0 .endif - .ifndef IKVM_SKIP - IKVM_SKIP=0 - .endif .ifndef IKVM_REAL IKVM_REAL=0 .endif @@ -250,15 +246,10 @@ do_define_int n .balign IFETCH_ALIGN_BYTES \name\()_kvm: - .if IKVM_SKIP - cmpwi r10,KVM_GUEST_MODE_SKIP - beq 89f - .else BEGIN_FTR_SECTION ld r10,IAREA+EX_CFAR(r13) std r10,HSTATE_CFAR(r13) END_FTR_SECTION_IFSET(CPU_FTR_CFAR) - .endif ld r10,IAREA+EX_CTR(r13) mtctr r10 @@ -285,27 +276,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) ori r12,r12,(IVEC) .endif b kvmppc_interrupt - - .if IKVM_SKIP -89: mtocrf 0x80,r9 - ld r10,IAREA+EX_CTR(r13) - mtctr r10 - ld r9,IAREA+EX_R9(r13) - ld r10,IAREA+EX_R10(r13) - ld r11,IAREA+EX_R11(r13) - ld r12,IAREA+EX_R12(r13) - .if IHSRR_IF_HVMODE - BEGIN_FTR_SECTION - b kvmppc_skip_Hinterrupt - FTR_SECTION_ELSE - b kvmppc_skip_interrupt - ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) - .elseif IHSRR - b kvmppc_skip_Hinterrupt - .else - b kvmppc_skip_interrupt - .endif - .endif .endm #else @@ -1083,7 +1053,6 @@ INT_DEFINE_BEGIN(machine_check) ISET_RI=0 IDAR=1 IDSISR=1 - IKVM_SKIP=1 IKVM_REAL=1 INT_DEFINE_END(machine_check) @@ -1356,7 +1325,6 @@ INT_DEFINE_BEGIN(data_access) IVEC=0x300 IDAR=1 IDSISR=1 - IKVM_SKIP=1 IKVM_REAL=1 INT_DEFINE_END(data_access) @@ -1410,7 +1378,6 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) INT_DEFINE_BEGIN(data_access_slb) IVEC=0x380 IDAR=1 - IKVM_SKIP=1 IKVM_REAL=1 INT_DEFINE_END(data_access_slb) @@ -2080,7 +2047,6 @@ INT_DEFINE_BEGIN(h_data_storage) IHSRR=1 IDAR=1 IDSISR=1 - IKVM_SKIP=1 IKVM_REAL=1 IKVM_VIRT=1 INT_DEFINE_END(h_data_storage) @@ -3024,32 +2990,6 @@ EXPORT_SYMBOL(do_uaccess_flush) MASKED_INTERRUPT MASKED_INTERRUPT hsrr=1 -#ifdef CONFIG_KVM_BOOK3S_64_HANDLER -kvmppc_skip_interrupt: - /* - * Here all GPRs are unchanged from when the interrupt happened - * except for r13, which is saved in SPRG_SCRATCH0. - */ - mfspr r13, SPRN_SRR0 - addi r13, r13, 4 - mtspr SPRN_SRR0, r13 - GET_SCRATCH0(r13) - RFI_TO_KERNEL - b . - -kvmppc_skip_Hinterrupt: - /* - * Here all GPRs are unchanged from when the interrupt happened - * except for r13, which is saved in SPRG_SCRATCH0. - */ - mfspr r13, SPRN_HSRR0 - addi r13, r13, 4 - mtspr SPRN_HSRR0, r13 - GET_SCRATCH0(r13) - HRFI_TO_KERNEL - b . 
-#endif - /* * Relocation-on interrupts: A subset of the interrupts can be delivered * with IR=1/DR=1, if AIL==2 and MSR.HV won't be changed by delivering diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S index 7a039ea78f15..bf927e7a06af 100644 --- a/arch/powerpc/kvm/book3s_64_entry.S +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -1,6 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0-only */ #include #include +#include #include #include #include @@ -20,9 +21,12 @@ kvmppc_interrupt: * guest R12 saved in shadow VCPU SCRATCH0 * guest R13 saved in SPRN_SCRATCH0 */ -#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE std r9,HSTATE_SCRATCH2(r13) lbz r9,HSTATE_IN_GUEST(r13) + cmpwi r9,KVM_GUEST_MODE_SKIP + beq- .Lmaybe_skip +.Lno_skip: +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE cmpwi r9,KVM_GUEST_MODE_HOST_HV beq kvmppc_bad_host_intr #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE @@ -32,5 +36,58 @@ kvmppc_interrupt: #endif b kvmppc_interrupt_hv #else + ld r9,HSTATE_SCRATCH2(r13) b kvmppc_interrupt_pr #endif + +/* + * "Skip" interrupts are part of a trick KVM uses a with hash guests to load + * the faulting instruction in guest memory from the the hypervisor without + * walking page tables. + * + * When the guest takes a fault that requires the hypervisor to load the + * instruction (e.g., MMIO emulation), KVM is running in real-mode with HV=1 + * and the guest MMU context loaded. It sets KVM_GUEST_MODE_SKIP, and sets + * MSR[DR]=1 while leaving MSR[IR]=0, so it continues to fetch HV instructions + * but loads and stores will access the guest context. This is used to load + * the faulting instruction using the faulting guest effective address. + * + * However the guest context may not be able to translate, or it may cause a + * machine check or other issue, which results in a fault in the host + * (even with KVM-HV). + * + * These faults come here because KVM_GUEST_MODE_SKIP was set, so if they + * are (or are likely) caused by that load, the instruction is skipped by + * just returning with the PC advanced +4, where it is noticed the load did + * not execute and it goes to the slow path which walks the page tables to + * read guest memory. 
+ */ +.Lmaybe_skip: + cmpwi r12,BOOK3S_INTERRUPT_MACHINE_CHECK + beq 1f + cmpwi r12,BOOK3S_INTERRUPT_DATA_STORAGE + beq 1f + cmpwi r12,BOOK3S_INTERRUPT_DATA_SEGMENT + beq 1f +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE + /* HSRR interrupts get 2 added to interrupt number */ + cmpwi r12,BOOK3S_INTERRUPT_H_DATA_STORAGE | 0x2 + beq 2f +#endif + b .Lno_skip +1: mfspr r9,SPRN_SRR0 + addi r9,r9,4 + mtspr SPRN_SRR0,r9 + ld r12,HSTATE_SCRATCH0(r13) + ld r9,HSTATE_SCRATCH2(r13) + GET_SCRATCH0(r13) + RFI_TO_KERNEL +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +2: mfspr r9,SPRN_HSRR0 + addi r9,r9,4 + mtspr SPRN_HSRR0,r9 + ld r12,HSTATE_SCRATCH0(r13) + ld r9,HSTATE_SCRATCH2(r13) + GET_SCRATCH0(r13) + HRFI_TO_KERNEL +#endif From patchwork Mon Apr 5 01:19:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462187 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=e73LpyIJ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCXW5SZlz9sWX for ; Mon, 5 Apr 2021 11:20:59 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231754AbhDEBVC (ORCPT ); Sun, 4 Apr 2021 21:21:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39836 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231775AbhDEBVA (ORCPT ); Sun, 4 Apr 2021 21:21:00 -0400 Received: from mail-pg1-x536.google.com (mail-pg1-x536.google.com [IPv6:2607:f8b0:4864:20::536]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1FD94C0613E6 for ; Sun, 4 Apr 2021 18:20:54 -0700 (PDT) Received: by mail-pg1-x536.google.com with SMTP id b17so3514324pgh.7 for ; Sun, 04 Apr 2021 18:20:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Ed2GXWlaCaSp80Tc/mRVZswGXqjaQmSEJ2I3FJJrKHY=; b=e73LpyIJznudnCnZPgrkOx35aNoogsvfvId8gLSOXX8i9H0929f2Y321Dr7ONmL4p3 XFmd+Oteke/GyqaIXgHEo+dEPB1rpdcETZxXKemfK+yp30Jxq/gYTswMsy3kZTMX8zJ7 p6HyxTMyFjFWg39qIFjAuAEn6fZvz2fxXNhglAzE0O5Tpw6kRXxCMfRYhBXbIaP9Luj3 RBXq83fK9deVMfF71PF3nup3vRJ8E5A7fIwV6PnpuWLMNaol07REV8M+PW4/2u3XzZ0/ fCMcba3Ux9wgC+DDVs8+OgSCVmWlHKjxn0hJth7RSTmsiv53WnEkGsMlxkD16u9yIQDN ntlA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Ed2GXWlaCaSp80Tc/mRVZswGXqjaQmSEJ2I3FJJrKHY=; b=H+qgwvvadg4LaW613o83wfRoAI1Tkq1NMKAE+JUIYbeHDt5c4VNZsf3hIgu3RiAJj9 zhAZJIMo9dutQJYRlvMmqS/4bAtIC8LPT+ES8Bn2KCtPTtRhFldaGQnZ+zZawPYTy6Ln ZzGMdiLyruLV05JWEduysVPT7NC4cuvSklPobpXqNoQKfyx/uoPecWmlcl3z5Xe5kzeK 5oUjmj4wXD7nIWXg3Pi3dVdWTSnTdBUhdqtnjIU3TrXRgxaS/0UczEK/yq4z7O13C7fX Rqo1q2KoYIaEYHNturWyg+uGft5uWbBNXsJUmJCqv+ywQ8wKVP2ctGrte/hKe+CwNRFf 4NBQ== X-Gm-Message-State: AOAM533p9SWCgS7xLEmyDoq/p3fO34wFi2s8BDvRIKVHhVo6y57/Th4X mIdXUEuUo8gx6Ibow1RaJ0XOn1chu9Pc4g== X-Google-Smtp-Source: 
ABdhPJxXoU7kKdyUv7+TwJCZznBqV0rzxxPEglGyclVPCRi9MLLoA/a3wHM3mZvQuwuDW+M77gnT+Q== X-Received: by 2002:a63:c145:: with SMTP id p5mr14876597pgi.451.1617585653647; Sun, 04 Apr 2021 18:20:53 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.20.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:20:53 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Daniel Axtens , Fabiano Rosas Subject: [PATCH v6 14/48] KVM: PPC: Book3S 64: add hcall interrupt handler Date: Mon, 5 Apr 2021 11:19:14 +1000 Message-Id: <20210405011948.675354-15-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Add a separate hcall entry point. This can be used to deal with the different calling convention. Reviewed-by: Daniel Axtens Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 6 +++--- arch/powerpc/kvm/book3s_64_entry.S | 6 +++++- 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 16fbfde960e7..98bf73df0f57 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1989,16 +1989,16 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) ori r12,r12,0xc00 #ifdef CONFIG_RELOCATABLE /* - * Requires __LOAD_FAR_HANDLER beause kvmppc_interrupt lives + * Requires __LOAD_FAR_HANDLER beause kvmppc_hcall lives * outside the head section. */ - __LOAD_FAR_HANDLER(r10, kvmppc_interrupt) + __LOAD_FAR_HANDLER(r10, kvmppc_hcall) mtctr r10 ld r10,PACA_EXGEN+EX_R10(r13) bctr #else ld r10,PACA_EXGEN+EX_R10(r13) - b kvmppc_interrupt + b kvmppc_hcall #endif #endif diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S index bf927e7a06af..c21fa64059ef 100644 --- a/arch/powerpc/kvm/book3s_64_entry.S +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -8,9 +8,13 @@ #include /* - * This is branched to from interrupt handlers in exception-64s.S which set + * These are branched to from interrupt handlers in exception-64s.S which set * IKVM_REAL or IKVM_VIRT, if HSTATE_IN_GUEST was found to be non-zero. 
*/ +.global kvmppc_hcall +.balign IFETCH_ALIGN_BYTES +kvmppc_hcall: + .global kvmppc_interrupt .balign IFETCH_ALIGN_BYTES kvmppc_interrupt: From patchwork Mon Apr 5 01:19:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462188 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=XlpZ+f26; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCXX487tz9t10 for ; Mon, 5 Apr 2021 11:21:00 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231831AbhDEBVE (ORCPT ); Sun, 4 Apr 2021 21:21:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39844 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231775AbhDEBVD (ORCPT ); Sun, 4 Apr 2021 21:21:03 -0400 Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com [IPv6:2607:f8b0:4864:20::42f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 03B1EC061756 for ; Sun, 4 Apr 2021 18:20:57 -0700 (PDT) Received: by mail-pf1-x42f.google.com with SMTP id x26so465121pfn.0 for ; Sun, 04 Apr 2021 18:20:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=br9KsSPqAYwn2ChgsOdZxU/P/wvzrThXCx6ps2GjyIQ=; b=XlpZ+f26ijm5QlRP/XaP9w28migRZBSvTlRhklIMXIA/1xbiIiVZyjkmIW5KnqYWn6 61K5xN3hbUeSv4ILa9tHiQtxNaRo5Z7Zxr148EcPdSlT8GVmyIJ53g6plSY3sf1DV+xM KMX1ipJHs9ASxMdzf+/VsQB7oGartO0CZB/nLcTzkAJAQtErvF0VhcUzHUxGTfQ0pPXk zSe/He7O8Z7xiCyDCrm6PNWRav9bn6H9Wl4HGVUFSCpvirl0/H0zYmRpNAXe37eiEGNT 4On7KPueRKg35vfHa6JTqHY1jMtPBoJcMfAM2rPvyvX88J73q9ERp6MfJ3FlCtbHcA1j dOwg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=br9KsSPqAYwn2ChgsOdZxU/P/wvzrThXCx6ps2GjyIQ=; b=WSW7RevxX+mqTcLf/BvAxBK9+tlarymBq5uB7wBPSRY2y/hZzyPpp2E9gDLQPpUBa9 zlSTnnIVpaJm1D7EbzsYwNRb7PkUVuTufOOBqzB1FcgvgHIu1AvSJBUFd/RInySO5ZRo fYcO5S9lpjUgFSYzyEcyBtGl3PrtTvYLIpn2PjZ9+5yUfK36AePZ4AxCE24xX15dd/kM 1bnFdcMnuoabox/7cc7/hQOwabrtH9gAe525GHU9v/E9jxx9JUbKLCAmlOps4Vnltfa9 s+Y+Q4SIvXjPLe+PyE6hWuAwY5RdnzgDhD8L+49yS5RaHRZNdghiLROGJrcnh8xDTGEp 974A== X-Gm-Message-State: AOAM533xHjq+tCtpJoGkNn6OcjKxXVlFmpyHUgGYjWJgVj/Hgaww6bey kYxQE9g6ilUcrLIXhDqbCzUugLvbUlh2Cw== X-Google-Smtp-Source: ABdhPJwzNh485AOCMuE40Zaj15xORArf8KbiAhxqGToIMroIvkxSk2Bm8poytBRn4liBEE98JuihRQ== X-Received: by 2002:a63:1820:: with SMTP id y32mr20950285pgl.157.1617585656491; Sun, 04 Apr 2021 18:20:56 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.20.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:20:56 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 15/48] KVM: PPC: Book3S 64: Move 
hcall early register setup to KVM Date: Mon, 5 Apr 2021 11:19:15 +1000 Message-Id: <20210405011948.675354-16-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org System calls / hcalls have a different calling convention than other interrupts, so there is code in the KVMTEST to massage these into the same form as other interrupt handlers. Move this work into the KVM hcall handler. This means teaching KVM a little more about the low level interrupt handler setup, PACA save areas, etc., although that's not obviously worse than the current approach of coming up with an entirely different interrupt register / save convention. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/exception-64s.h | 13 ++++++++ arch/powerpc/kernel/exceptions-64s.S | 42 +----------------------- arch/powerpc/kvm/book3s_64_entry.S | 30 +++++++++++++++++ 3 files changed, 44 insertions(+), 41 deletions(-) diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h index c1a8aac01cf9..bb6f78fcf981 100644 --- a/arch/powerpc/include/asm/exception-64s.h +++ b/arch/powerpc/include/asm/exception-64s.h @@ -35,6 +35,19 @@ /* PACA save area size in u64 units (exgen, exmc, etc) */ #define EX_SIZE 10 +/* PACA save area offsets */ +#define EX_R9 0 +#define EX_R10 8 +#define EX_R11 16 +#define EX_R12 24 +#define EX_R13 32 +#define EX_DAR 40 +#define EX_DSISR 48 +#define EX_CCR 52 +#define EX_CFAR 56 +#define EX_PPR 64 +#define EX_CTR 72 + /* * maximum recursive depth of MCE exceptions */ diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 98bf73df0f57..a23feaa445b5 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -21,22 +21,6 @@ #include #include -/* PACA save area offsets (exgen, exmc, etc) */ -#define EX_R9 0 -#define EX_R10 8 -#define EX_R11 16 -#define EX_R12 24 -#define EX_R13 32 -#define EX_DAR 40 -#define EX_DSISR 48 -#define EX_CCR 52 -#define EX_CFAR 56 -#define EX_PPR 64 -#define EX_CTR 72 -.if EX_SIZE != 10 - .error "EX_SIZE is wrong" -.endif - /* * Following are fixed section helper macros. * @@ -1964,29 +1948,8 @@ EXC_VIRT_END(system_call, 0x4c00, 0x100) #ifdef CONFIG_KVM_BOOK3S_64_HANDLER TRAMP_REAL_BEGIN(system_call_kvm) - /* - * This is a hcall, so register convention is as above, with these - * differences: - * r13 = PACA - * ctr = orig r13 - * orig r10 saved in PACA - */ - /* - * Save the PPR (on systems that support it) before changing to - * HMT_MEDIUM. That allows the KVM code to save that value into the - * guest state (it is the guest's PPR value). - */ -BEGIN_FTR_SECTION - mfspr r10,SPRN_PPR - std r10,HSTATE_PPR(r13) -END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) - HMT_MEDIUM mfctr r10 - SET_SCRATCH0(r10) - mfcr r10 - std r12,HSTATE_SCRATCH0(r13) - sldi r12,r10,32 - ori r12,r12,0xc00 + SET_SCRATCH0(r10) /* Save r13 in SCRATCH0 */ #ifdef CONFIG_RELOCATABLE /* * Requires __LOAD_FAR_HANDLER beause kvmppc_hcall lives @@ -1994,15 +1957,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) */ __LOAD_FAR_HANDLER(r10, kvmppc_hcall) mtctr r10 - ld r10,PACA_EXGEN+EX_R10(r13) bctr #else - ld r10,PACA_EXGEN+EX_R10(r13) b kvmppc_hcall #endif #endif - /** * Interrupt 0xd00 - Trace Interrupt. 
* This is a synchronous interrupt in response to instruction step or diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S index c21fa64059ef..f527e16707db 100644 --- a/arch/powerpc/kvm/book3s_64_entry.S +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -14,6 +14,36 @@ .global kvmppc_hcall .balign IFETCH_ALIGN_BYTES kvmppc_hcall: + /* + * This is a hcall, so register convention is as + * Documentation/powerpc/papr_hcalls.rst, with these additions: + * R13 = PACA + * guest R13 saved in SPRN_SCRATCH0 + * R10 = free + * guest r10 saved in PACA_EXGEN + * + * This may also be a syscall from PR-KVM userspace that is to be + * reflected to the PR guest kernel, so registers may be set up for + * a system call rather than hcall. We don't currently clobber + * anything here, but the 0xc00 handler has already clobbered CTR + * and CR0, so PR-KVM can not support a guest kernel that preserves + * those registers across its system calls. + */ + /* + * Save the PPR (on systems that support it) before changing to + * HMT_MEDIUM. That allows the KVM code to save that value into the + * guest state (it is the guest's PPR value). + */ +BEGIN_FTR_SECTION + mfspr r10,SPRN_PPR + std r10,HSTATE_PPR(r13) +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) + HMT_MEDIUM + mfcr r10 + std r12,HSTATE_SCRATCH0(r13) + sldi r12,r10,32 + ori r12,r12,0xc00 + ld r10,PACA_EXGEN+EX_R10(r13) .global kvmppc_interrupt .balign IFETCH_ALIGN_BYTES From patchwork Mon Apr 5 01:19:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462189 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=aplDVAlQ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCXb6CnFz9sj1 for ; Mon, 5 Apr 2021 11:21:03 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231834AbhDEBVH (ORCPT ); Sun, 4 Apr 2021 21:21:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39858 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231775AbhDEBVF (ORCPT ); Sun, 4 Apr 2021 21:21:05 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3346BC061756 for ; Sun, 4 Apr 2021 18:21:00 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id x21-20020a17090a5315b029012c4a622e4aso5066632pjh.2 for ; Sun, 04 Apr 2021 18:21:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=avE4rxgWz6aTXDOw6GgSQr9BsN+ztXDr6CAMKn+LjvA=; b=aplDVAlQkLEUOyedizLpwjEHpqTJSvSbNbMoGRA2vDS92gyuJ2f4NeSbqF79pNNrLs ini39wnrIKPXcYkyTfHMqudn7qDLBHwihVEtto8qwX9ipLqCumR2LF7MnFt+rxXqNEP4 Sj+vqNmDQnj7G9mORVX7FeElDOJswGrbobw6iDUeNSMbOuIJAKi4TLUyK9allcEqRX4j rWr40xXoWnYEAQszrCQMwMcM+O97EPxnia4oDWp6OcNUrLNvFx7UVg6tDROrKK+xKgu7 
vxN/hhb/DxD17v0T+13kO6EkEA0KAbdtb37eDDWOkDgkHUD9K2T5Jo/0X9Is6xUC5MPs +J+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=avE4rxgWz6aTXDOw6GgSQr9BsN+ztXDr6CAMKn+LjvA=; b=YKyZl0a13T8cg18ukBlHBJ6nWCNeX+XT+VcPSQen3Rh9yKDljotIk3rhP8PF8Nrz1M SgSflCOwwb4ooUxDLPSMHYpRqNQgumJbqRZlYWT/OVi0BZmbwQtH9IhQwhiaQdHaLyAh p7MpYhqbRS+A7lpPTJdV2PFBxKI/QiP4B5GtBIL5UKV1kx7fWfgbFO/xz8ZEdA1I1qA3 nyTWOpG9e218BV4XUp2CDYpkq94EGALZY4UyqN2CytCslRnFqd8V2ntyiHPhTVYAtQr2 5kfuf38vCmX/pUEWopEmDZd08cAbWUVJwKGyQE7O4wyaWXQzpHF7taMueSzRc4WI0Yof r6mg== X-Gm-Message-State: AOAM532nV7FefPsBdCCJ5GiGWm11erN/+5Oqx1GLMhgwI44TBpvrKmIZ VnmF0gc3I9zxDgx1Qsk555dxHV3jmZX5GQ== X-Google-Smtp-Source: ABdhPJz7lTsYiKoh/QzaWFZHvP3u4kKAoENQ0q7ltKQvHiL43TuXG4EtfJdrz2Wp8c+gTAFVFULm+A== X-Received: by 2002:a17:90a:8417:: with SMTP id j23mr24295136pjn.224.1617585659574; Sun, 04 Apr 2021 18:20:59 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.20.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:20:59 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 16/48] KVM: PPC: Book3S 64: Move interrupt early register setup to KVM Date: Mon, 5 Apr 2021 11:19:16 +1000 Message-Id: <20210405011948.675354-17-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Like the earlier patch for hcalls, KVM interrupt entry requires a different calling convention than the Linux interrupt handlers set up. Move the code that converts from one to the other into KVM. Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 131 +++++---------------------- arch/powerpc/kvm/book3s_64_entry.S | 50 +++++++++- 2 files changed, 71 insertions(+), 110 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index a23feaa445b5..115cf79f3e82 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -187,7 +187,6 @@ do_define_int n .endif .endm -#ifdef CONFIG_KVM_BOOK3S_64_HANDLER /* * All interrupts which set HSRR registers, as well as SRESET and MCE and * syscall when invoked with "sc 1" switch to MSR[HV]=1 (HVMODE) to be taken, @@ -220,54 +219,25 @@ do_define_int n * to KVM to handle. 
*/ -.macro KVMTEST name +.macro KVMTEST name handler +#ifdef CONFIG_KVM_BOOK3S_64_HANDLER lbz r10,HSTATE_IN_GUEST(r13) cmpwi r10,0 - bne \name\()_kvm -.endm - -.macro GEN_KVM name - .balign IFETCH_ALIGN_BYTES -\name\()_kvm: - -BEGIN_FTR_SECTION - ld r10,IAREA+EX_CFAR(r13) - std r10,HSTATE_CFAR(r13) -END_FTR_SECTION_IFSET(CPU_FTR_CFAR) - - ld r10,IAREA+EX_CTR(r13) - mtctr r10 -BEGIN_FTR_SECTION - ld r10,IAREA+EX_PPR(r13) - std r10,HSTATE_PPR(r13) -END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) - ld r11,IAREA+EX_R11(r13) - ld r12,IAREA+EX_R12(r13) - std r12,HSTATE_SCRATCH0(r13) - sldi r12,r9,32 - ld r9,IAREA+EX_R9(r13) - ld r10,IAREA+EX_R10(r13) /* HSRR variants have the 0x2 bit added to their trap number */ .if IHSRR_IF_HVMODE BEGIN_FTR_SECTION - ori r12,r12,(IVEC + 0x2) + li r10,(IVEC + 0x2) FTR_SECTION_ELSE - ori r12,r12,(IVEC) + li r10,(IVEC) ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) .elseif IHSRR - ori r12,r12,(IVEC+ 0x2) + li r10,(IVEC + 0x2) .else - ori r12,r12,(IVEC) + li r10,(IVEC) .endif - b kvmppc_interrupt -.endm - -#else -.macro KVMTEST name -.endm -.macro GEN_KVM name -.endm + bne \handler #endif +.endm /* * This is the BOOK3S interrupt entry code macro. @@ -409,7 +379,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR) DEFINE_FIXED_SYMBOL(\name\()_common_real) \name\()_common_real: .if IKVM_REAL - KVMTEST \name + KVMTEST \name kvm_interrupt .endif ld r10,PACAKMSR(r13) /* get MSR value for kernel */ @@ -432,7 +402,7 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real) DEFINE_FIXED_SYMBOL(\name\()_common_virt) \name\()_common_virt: .if IKVM_VIRT - KVMTEST \name + KVMTEST \name kvm_interrupt 1: .endif .endif /* IVIRT */ @@ -446,7 +416,7 @@ DEFINE_FIXED_SYMBOL(\name\()_common_virt) DEFINE_FIXED_SYMBOL(\name\()_common_real) \name\()_common_real: .if IKVM_REAL - KVMTEST \name + KVMTEST \name kvm_interrupt .endif .endm @@ -967,8 +937,6 @@ EXC_COMMON_BEGIN(system_reset_common) EXCEPTION_RESTORE_REGS RFI_TO_USER_OR_KERNEL - GEN_KVM system_reset - /** * Interrupt 0x200 - Machine Check Interrupt (MCE). @@ -1132,7 +1100,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206) /* * Check if we are coming from guest. If yes, then run the normal * exception handler which will take the - * machine_check_kvm->kvmppc_interrupt branch to deliver the MC event + * machine_check_kvm->kvm_interrupt branch to deliver the MC event * to guest. */ lbz r11,HSTATE_IN_GUEST(r13) @@ -1203,8 +1171,6 @@ EXC_COMMON_BEGIN(machine_check_common) bl machine_check_exception b interrupt_return - GEN_KVM machine_check - #ifdef CONFIG_PPC_P7_NAP /* @@ -1339,8 +1305,6 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) REST_NVGPRS(r1) b interrupt_return - GEN_KVM data_access - /** * Interrupt 0x380 - Data Segment Interrupt (DSLB). @@ -1390,8 +1354,6 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) bl do_bad_slb_fault b interrupt_return - GEN_KVM data_access_slb - /** * Interrupt 0x400 - Instruction Storage Interrupt (ISI). @@ -1428,8 +1390,6 @@ MMU_FTR_SECTION_ELSE ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) b interrupt_return - GEN_KVM instruction_access - /** * Interrupt 0x480 - Instruction Segment Interrupt (ISLB). @@ -1474,8 +1434,6 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) bl do_bad_slb_fault b interrupt_return - GEN_KVM instruction_access_slb - /** * Interrupt 0x500 - External Interrupt. 
@@ -1521,8 +1479,6 @@ EXC_COMMON_BEGIN(hardware_interrupt_common) bl do_IRQ b interrupt_return - GEN_KVM hardware_interrupt - /** * Interrupt 0x600 - Alignment Interrupt @@ -1550,8 +1506,6 @@ EXC_COMMON_BEGIN(alignment_common) REST_NVGPRS(r1) /* instruction emulation may change GPRs */ b interrupt_return - GEN_KVM alignment - /** * Interrupt 0x700 - Program Interrupt (program check). @@ -1659,8 +1613,6 @@ EXC_COMMON_BEGIN(program_check_common) REST_NVGPRS(r1) /* instruction emulation may change GPRs */ b interrupt_return - GEN_KVM program_check - /* * Interrupt 0x800 - Floating-Point Unavailable Interrupt. @@ -1710,8 +1662,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM) b interrupt_return #endif - GEN_KVM fp_unavailable - /** * Interrupt 0x900 - Decrementer Interrupt. @@ -1751,8 +1701,6 @@ EXC_COMMON_BEGIN(decrementer_common) bl timer_interrupt b interrupt_return - GEN_KVM decrementer - /** * Interrupt 0x980 - Hypervisor Decrementer Interrupt. @@ -1798,8 +1746,6 @@ EXC_COMMON_BEGIN(hdecrementer_common) ld r13,PACA_EXGEN+EX_R13(r13) HRFI_TO_KERNEL - GEN_KVM hdecrementer - /** * Interrupt 0xa00 - Directed Privileged Doorbell Interrupt. @@ -1840,8 +1786,6 @@ EXC_COMMON_BEGIN(doorbell_super_common) #endif b interrupt_return - GEN_KVM doorbell_super - EXC_REAL_NONE(0xb00, 0x100) EXC_VIRT_NONE(0x4b00, 0x100) @@ -1891,7 +1835,7 @@ INT_DEFINE_END(system_call) GET_PACA(r13) std r10,PACA_EXGEN+EX_R10(r13) INTERRUPT_TO_KERNEL - KVMTEST system_call /* uses r10, branch to system_call_kvm */ + KVMTEST system_call kvm_hcall /* uses r10, branch to kvm_hcall */ mfctr r9 #else mr r9,r13 @@ -1947,7 +1891,7 @@ EXC_VIRT_BEGIN(system_call, 0x4c00, 0x100) EXC_VIRT_END(system_call, 0x4c00, 0x100) #ifdef CONFIG_KVM_BOOK3S_64_HANDLER -TRAMP_REAL_BEGIN(system_call_kvm) +TRAMP_REAL_BEGIN(kvm_hcall) mfctr r10 SET_SCRATCH0(r10) /* Save r13 in SCRATCH0 */ #ifdef CONFIG_RELOCATABLE @@ -1987,8 +1931,6 @@ EXC_COMMON_BEGIN(single_step_common) bl single_step_exception b interrupt_return - GEN_KVM single_step - /** * Interrupt 0xe00 - Hypervisor Data Storage Interrupt (HDSI). @@ -2027,8 +1969,6 @@ MMU_FTR_SECTION_ELSE ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_TYPE_RADIX) b interrupt_return - GEN_KVM h_data_storage - /** * Interrupt 0xe20 - Hypervisor Instruction Storage Interrupt (HISI). @@ -2054,8 +1994,6 @@ EXC_COMMON_BEGIN(h_instr_storage_common) bl unknown_exception b interrupt_return - GEN_KVM h_instr_storage - /** * Interrupt 0xe40 - Hypervisor Emulation Assistance Interrupt. @@ -2080,8 +2018,6 @@ EXC_COMMON_BEGIN(emulation_assist_common) REST_NVGPRS(r1) /* instruction emulation may change GPRs */ b interrupt_return - GEN_KVM emulation_assist - /** * Interrupt 0xe60 - Hypervisor Maintenance Interrupt (HMI). @@ -2153,8 +2089,6 @@ EXC_COMMON_BEGIN(hmi_exception_early_common) EXCEPTION_RESTORE_REGS hsrr=1 GEN_INT_ENTRY hmi_exception, virt=0 - GEN_KVM hmi_exception_early - EXC_COMMON_BEGIN(hmi_exception_common) GEN_COMMON hmi_exception FINISH_NAP @@ -2162,8 +2096,6 @@ EXC_COMMON_BEGIN(hmi_exception_common) bl handle_hmi_exception b interrupt_return - GEN_KVM hmi_exception - /** * Interrupt 0xe80 - Directed Hypervisor Doorbell Interrupt. @@ -2195,8 +2127,6 @@ EXC_COMMON_BEGIN(h_doorbell_common) #endif b interrupt_return - GEN_KVM h_doorbell - /** * Interrupt 0xea0 - Hypervisor Virtualization Interrupt. 
@@ -2224,8 +2154,6 @@ EXC_COMMON_BEGIN(h_virt_irq_common) bl do_IRQ b interrupt_return - GEN_KVM h_virt_irq - EXC_REAL_NONE(0xec0, 0x20) EXC_VIRT_NONE(0x4ec0, 0x20) @@ -2270,8 +2198,6 @@ EXC_COMMON_BEGIN(performance_monitor_common) bl performance_monitor_exception b interrupt_return - GEN_KVM performance_monitor - /** * Interrupt 0xf20 - Vector Unavailable Interrupt. @@ -2321,8 +2247,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) bl altivec_unavailable_exception b interrupt_return - GEN_KVM altivec_unavailable - /** * Interrupt 0xf40 - VSX Unavailable Interrupt. @@ -2371,8 +2295,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX) bl vsx_unavailable_exception b interrupt_return - GEN_KVM vsx_unavailable - /** * Interrupt 0xf60 - Facility Unavailable Interrupt. @@ -2401,8 +2323,6 @@ EXC_COMMON_BEGIN(facility_unavailable_common) REST_NVGPRS(r1) /* instruction emulation may change GPRs */ b interrupt_return - GEN_KVM facility_unavailable - /** * Interrupt 0xf60 - Hypervisor Facility Unavailable Interrupt. @@ -2431,8 +2351,6 @@ EXC_COMMON_BEGIN(h_facility_unavailable_common) REST_NVGPRS(r1) /* XXX Shouldn't be necessary in practice */ b interrupt_return - GEN_KVM h_facility_unavailable - EXC_REAL_NONE(0xfa0, 0x20) EXC_VIRT_NONE(0x4fa0, 0x20) @@ -2462,8 +2380,6 @@ EXC_COMMON_BEGIN(cbe_system_error_common) bl cbe_system_error_exception b interrupt_return - GEN_KVM cbe_system_error - #else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1200, 0x100) EXC_VIRT_NONE(0x5200, 0x100) @@ -2489,8 +2405,6 @@ EXC_COMMON_BEGIN(instruction_breakpoint_common) bl instruction_breakpoint_exception b interrupt_return - GEN_KVM instruction_breakpoint - EXC_REAL_NONE(0x1400, 0x100) EXC_VIRT_NONE(0x5400, 0x100) @@ -2611,8 +2525,6 @@ EXC_COMMON_BEGIN(denorm_exception_common) bl unknown_exception b interrupt_return - GEN_KVM denorm_exception - #ifdef CONFIG_CBE_RAS INT_DEFINE_BEGIN(cbe_maintenance) @@ -2630,8 +2542,6 @@ EXC_COMMON_BEGIN(cbe_maintenance_common) bl cbe_maintenance_exception b interrupt_return - GEN_KVM cbe_maintenance - #else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1600, 0x100) EXC_VIRT_NONE(0x5600, 0x100) @@ -2662,8 +2572,6 @@ EXC_COMMON_BEGIN(altivec_assist_common) #endif b interrupt_return - GEN_KVM altivec_assist - #ifdef CONFIG_CBE_RAS INT_DEFINE_BEGIN(cbe_thermal) @@ -2681,8 +2589,6 @@ EXC_COMMON_BEGIN(cbe_thermal_common) bl cbe_thermal_exception b interrupt_return - GEN_KVM cbe_thermal - #else /* CONFIG_CBE_RAS */ EXC_REAL_NONE(0x1800, 0x100) EXC_VIRT_NONE(0x5800, 0x100) @@ -2935,6 +2841,15 @@ TRAMP_REAL_BEGIN(rfscv_flush_fallback) USE_TEXT_SECTION() +#ifdef CONFIG_KVM_BOOK3S_64_HANDLER +kvm_interrupt: + /* + * The conditional branch in KVMTEST can't reach all the way, + * make a stub. 
+ */ + b kvmppc_interrupt +#endif + _GLOBAL(do_uaccess_flush) UACCESS_FLUSH_FIXUP_SECTION nop diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S index f527e16707db..2c9d106145e8 100644 --- a/arch/powerpc/kvm/book3s_64_entry.S +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -44,15 +44,61 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) sldi r12,r10,32 ori r12,r12,0xc00 ld r10,PACA_EXGEN+EX_R10(r13) + b do_kvm_interrupt +/* + * KVM interrupt entry occurs after GEN_INT_ENTRY runs, and follows that + * call convention: + * + * guest R9-R13, CTR, CFAR, PPR saved in PACA EX_xxx save area + * guest (H)DAR, (H)DSISR are also in the save area for relevant interrupts + * guest R13 also saved in SCRATCH0 + * R13 = PACA + * R11 = (H)SRR0 + * R12 = (H)SRR1 + * R9 = guest CR + * PPR is set to medium + * + * With the addition for KVM: + * R10 = trap vector + */ .global kvmppc_interrupt .balign IFETCH_ALIGN_BYTES kvmppc_interrupt: + li r11,PACA_EXGEN + cmpdi r10,0x200 + bgt+ 1f + li r11,PACA_EXMC + beq 1f + li r11,PACA_EXNMI +1: add r11,r11,r13 + +BEGIN_FTR_SECTION + ld r12,EX_CFAR(r11) + std r12,HSTATE_CFAR(r13) +END_FTR_SECTION_IFSET(CPU_FTR_CFAR) + ld r12,EX_CTR(r11) + mtctr r12 +BEGIN_FTR_SECTION + ld r12,EX_PPR(r11) + std r12,HSTATE_PPR(r13) +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) + ld r12,EX_R12(r11) + std r12,HSTATE_SCRATCH0(r13) + sldi r12,r9,32 + or r12,r12,r10 + ld r9,EX_R9(r11) + ld r10,EX_R10(r11) + ld r11,EX_R11(r11) + +do_kvm_interrupt: /* - * Register contents: + * Hcalls and other interrupts come here after normalising register + * contents and save locations: + * * R12 = (guest CR << 32) | interrupt vector * R13 = PACA - * guest R12 saved in shadow VCPU SCRATCH0 + * guest R12 saved in shadow HSTATE_SCRATCH0 * guest R13 saved in SPRN_SCRATCH0 */ std r9,HSTATE_SCRATCH2(r13) From patchwork Mon Apr 5 01:19:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462190 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=CBvEiTa8; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCXd3ygDz9sj0 for ; Mon, 5 Apr 2021 11:21:05 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231835AbhDEBVJ (ORCPT ); Sun, 4 Apr 2021 21:21:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39870 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231775AbhDEBVI (ORCPT ); Sun, 4 Apr 2021 21:21:08 -0400 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 212D5C061756 for ; Sun, 4 Apr 2021 18:21:03 -0700 (PDT) Received: by mail-pj1-x102a.google.com with SMTP id t23so2030826pjy.3 for ; Sun, 04 Apr 2021 18:21:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; 
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy Subject: [PATCH v6 17/48] KVM: PPC: Book3S 64: move bad_host_intr check to HV handler Date: Mon, 5 Apr 2021 11:19:17 +1000 Message-Id: <20210405011948.675354-18-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This is not used by PR KVM.
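Because the change is easier to follow as control flow than as a diff, a rough C rendering of where the check ends up is given below; the function and enum names are invented stand-ins, and the real change is the assembly in the diff that follows.

/* Illustrative sketch only -- not code from this patch. */
#include <stdio.h>

enum guest_mode { MODE_NONE, MODE_PR_GUEST, MODE_HOST_HV };

static void bad_host_intr(void) { puts("interrupt taken while in HOST_HV state"); }
static void interrupt_pr(void)  { puts("PR guest exit"); }

/*
 * After this patch the HOST_HV sanity check runs inside the HV handler,
 * so a PR-only configuration never references it.
 */
static void interrupt_hv(enum guest_mode mode)
{
        if (mode == MODE_HOST_HV) {
                bad_host_intr();
                return;
        }
        puts("HV guest exit");
}

/* The common entry point only demultiplexes PR vs HV. */
static void common_entry(enum guest_mode mode)
{
        if (mode == MODE_PR_GUEST)
                interrupt_pr();
        else
                interrupt_hv(mode);
}

int main(void)
{
        common_entry(MODE_PR_GUEST);
        common_entry(MODE_HOST_HV);
        return 0;
}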
Reviewed-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_64_entry.S | 4 ---- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 4 +++- arch/powerpc/kvm/book3s_segment.S | 3 +++ 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S index 2c9d106145e8..66170ea85bc2 100644 --- a/arch/powerpc/kvm/book3s_64_entry.S +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -107,16 +107,12 @@ do_kvm_interrupt: beq- .Lmaybe_skip .Lno_skip: #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE - cmpwi r9,KVM_GUEST_MODE_HOST_HV - beq kvmppc_bad_host_intr #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE cmpwi r9,KVM_GUEST_MODE_GUEST - ld r9,HSTATE_SCRATCH2(r13) beq kvmppc_interrupt_pr #endif b kvmppc_interrupt_hv #else - ld r9,HSTATE_SCRATCH2(r13) b kvmppc_interrupt_pr #endif diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index f976efb7e4a9..75405ef53238 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1265,6 +1265,7 @@ hdec_soon: kvmppc_interrupt_hv: /* * Register contents: + * R9 = HSTATE_IN_GUEST * R12 = (guest CR << 32) | interrupt vector * R13 = PACA * guest R12 saved in shadow VCPU SCRATCH0 @@ -1272,6 +1273,8 @@ kvmppc_interrupt_hv: * guest R9 saved in HSTATE_SCRATCH2 */ /* We're now back in the host but in guest MMU context */ + cmpwi r9,KVM_GUEST_MODE_HOST_HV + beq kvmppc_bad_host_intr li r9, KVM_GUEST_MODE_HOST_HV stb r9, HSTATE_IN_GUEST(r13) @@ -3272,7 +3275,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_P9_TM_HV_ASSIST) * cfar is saved in HSTATE_CFAR(r13) * ppr is saved in HSTATE_PPR(r13) */ -.global kvmppc_bad_host_intr kvmppc_bad_host_intr: /* * Switch to the emergency stack, but start half-way down in diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S index 1f492aa4c8d6..202046a83fc1 100644 --- a/arch/powerpc/kvm/book3s_segment.S +++ b/arch/powerpc/kvm/book3s_segment.S @@ -164,12 +164,15 @@ kvmppc_interrupt_pr: /* 64-bit entry. 
Register usage at this point: * * SPRG_SCRATCH0 = guest R13 + * R9 = HSTATE_IN_GUEST * R12 = (guest CR << 32) | exit handler id * R13 = PACA * HSTATE.SCRATCH0 = guest R12 + * HSTATE.SCRATCH2 = guest R9 */ #ifdef CONFIG_PPC64 /* Match 32-bit entry */ + ld r9,HSTATE_SCRATCH2(r13) rotldi r12, r12, 32 /* Flip R12 halves for stw */ stw r12, HSTATE_SCRATCH1(r13) /* CR is now in the low half */ srdi r12, r12, 32 /* shift trap into low half */ From patchwork Mon Apr 5 01:19:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462191 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=WV2XthWu; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCXk3w5wz9sWX for ; Mon, 5 Apr 2021 11:21:10 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231837AbhDEBVO (ORCPT ); Sun, 4 Apr 2021 21:21:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39888 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231775AbhDEBVN (ORCPT ); Sun, 4 Apr 2021 21:21:13 -0400 Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com [IPv6:2607:f8b0:4864:20::635]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A49C5C061756 for ; Sun, 4 Apr 2021 18:21:06 -0700 (PDT) Received: by mail-pl1-x635.google.com with SMTP id d8so4916171plh.11 for ; Sun, 04 Apr 2021 18:21:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Q/vrWSy5tKpNMm8JT9TVnUYhWIiLWI9uewuCHeVdsyY=; b=WV2XthWuPT9/cftO4jyAQMkQfgRRC4+QEmApAQRGSesXU3VuVt/e4xmYOXe6v84Btr maxSA/4kVRdGsRMnVDPixsG5K+KXrKYCqDUJ2GdwYd3Z56k5878Pidppxt13AYwKN7uk y7MkatCBilm0cVILibFBZdISeY4OSg8B38kNqLIiBXb84h0O8553BPiwYS26nBc5m9aI fUfwis/RNEjbwq1+pDp/DP4C90Co8+U3IjeiRioDAswcSXE3J26fxsGbe1fVFlKJDM+q SM9538Fn+iGoRnPbdu5OxBHWo53ODqUyYwlMvRgtFQHpjHN3vkaKU3cknG36XwZ3NI2o JChg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Q/vrWSy5tKpNMm8JT9TVnUYhWIiLWI9uewuCHeVdsyY=; b=aVazUsimR3CZDyKolAMo/1gmyxUiSHyP2Xhgs7k9U5+MSmj8ZtJbpSkJzVTHzQP0mS MGfqSmdR2g8nba2Gc2mmIdcXdYj//BVII4HCiD2AFIf4tJtZO8oFObuapmIFynV87QH3 lyW+/TSBfICNelkyN4QS+QzB0WhFixEih2J5Z3emOc5jt91EEFATUidf0avPSFMjHRB4 u6A9x5NiDsa0Q046G0fOHPGlxNsOAm7OWfd1hNlI9p23mw6/dYQeBhJoEEibCBhhbADK IQjzyLjVwRODEOHTK64/3rZ6nyi3NZEqKOqRkSxJqApxYLreEpdIPCmyK++TsgtZBoCR XFJA== X-Gm-Message-State: AOAM532SOOPaKQUjNDzRNyWTEBKYLvp8/LbILJnum63Dbnmm8E+1BUg+ VvvKO5SkFcYVpkwocXUfEgzoADH/bkH47A== X-Google-Smtp-Source: ABdhPJyYEF8rbZUxav/uIFRMoKTSByTeRmwh278CPs/pAoDCJFcyu9ZwUh5F0zUZ9zwBsi4D3r/d2Q== X-Received: by 2002:a17:90b:1802:: with SMTP id lw2mr13771910pjb.226.1617585666126; Sun, 04 Apr 2021 18:21:06 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with 
ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:05 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy Subject: [PATCH v6 18/48] KVM: PPC: Book3S 64: Minimise hcall handler calling convention differences Date: Mon, 5 Apr 2021 11:19:18 +1000 Message-Id: <20210405011948.675354-19-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This sets up the same calling convention from interrupt entry to KVM interrupt handler for system calls as exists for other interrupt types. This is a better API, it uses a save area rather than SPR, and it has more registers free to use. Using a single common API helps maintain it, and it becomes easier to use in C in a later patch. Reviewed-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/exceptions-64s.S | 21 +++++++++++- arch/powerpc/kvm/book3s_64_entry.S | 51 +++++++++++----------------- 2 files changed, 39 insertions(+), 33 deletions(-) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index 115cf79f3e82..4615057681c3 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1892,8 +1892,27 @@ EXC_VIRT_END(system_call, 0x4c00, 0x100) #ifdef CONFIG_KVM_BOOK3S_64_HANDLER TRAMP_REAL_BEGIN(kvm_hcall) + std r9,PACA_EXGEN+EX_R9(r13) + std r11,PACA_EXGEN+EX_R11(r13) + std r12,PACA_EXGEN+EX_R12(r13) + mfcr r9 mfctr r10 - SET_SCRATCH0(r10) /* Save r13 in SCRATCH0 */ + std r10,PACA_EXGEN+EX_R13(r13) + li r10,0 + std r10,PACA_EXGEN+EX_CFAR(r13) + std r10,PACA_EXGEN+EX_CTR(r13) + /* + * Save the PPR (on systems that support it) before changing to + * HMT_MEDIUM. That allows the KVM code to save that value into the + * guest state (it is the guest's PPR value). + */ +BEGIN_FTR_SECTION + mfspr r10,SPRN_PPR + std r10,PACA_EXGEN+EX_PPR(r13) +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) + + HMT_MEDIUM + #ifdef CONFIG_RELOCATABLE /* * Requires __LOAD_FAR_HANDLER beause kvmppc_hcall lives diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S index 66170ea85bc2..0c79c89c6a4b 100644 --- a/arch/powerpc/kvm/book3s_64_entry.S +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -11,40 +11,28 @@ * These are branched to from interrupt handlers in exception-64s.S which set * IKVM_REAL or IKVM_VIRT, if HSTATE_IN_GUEST was found to be non-zero. */ + +/* + * This is a hcall, so register convention is as + * Documentation/powerpc/papr_hcalls.rst. + * + * This may also be a syscall from PR-KVM userspace that is to be + * reflected to the PR guest kernel, so registers may be set up for + * a system call rather than hcall. We don't currently clobber + * anything here, but the 0xc00 handler has already clobbered CTR + * and CR0, so PR-KVM can not support a guest kernel that preserves + * those registers across its system calls. + * + * The state of registers is as kvmppc_interrupt, except CFAR is not + * saved, R13 is not in SCRATCH0, and R10 does not contain the trap. 
+ */ .global kvmppc_hcall .balign IFETCH_ALIGN_BYTES kvmppc_hcall: - /* - * This is a hcall, so register convention is as - * Documentation/powerpc/papr_hcalls.rst, with these additions: - * R13 = PACA - * guest R13 saved in SPRN_SCRATCH0 - * R10 = free - * guest r10 saved in PACA_EXGEN - * - * This may also be a syscall from PR-KVM userspace that is to be - * reflected to the PR guest kernel, so registers may be set up for - * a system call rather than hcall. We don't currently clobber - * anything here, but the 0xc00 handler has already clobbered CTR - * and CR0, so PR-KVM can not support a guest kernel that preserves - * those registers across its system calls. - */ - /* - * Save the PPR (on systems that support it) before changing to - * HMT_MEDIUM. That allows the KVM code to save that value into the - * guest state (it is the guest's PPR value). - */ -BEGIN_FTR_SECTION - mfspr r10,SPRN_PPR - std r10,HSTATE_PPR(r13) -END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) - HMT_MEDIUM - mfcr r10 - std r12,HSTATE_SCRATCH0(r13) - sldi r12,r10,32 - ori r12,r12,0xc00 - ld r10,PACA_EXGEN+EX_R10(r13) - b do_kvm_interrupt + ld r10,PACA_EXGEN+EX_R13(r13) + SET_SCRATCH0(r10) + li r10,0xc00 + /* Now we look like kvmppc_interrupt */ /* * KVM interrupt entry occurs after GEN_INT_ENTRY runs, and follows that @@ -91,7 +79,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) ld r10,EX_R10(r11) ld r11,EX_R11(r11) -do_kvm_interrupt: /* * Hcalls and other interrupts come here after normalising register * contents and save locations: From patchwork Mon Apr 5 01:19:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462192 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=emC1GlsJ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCXl4G0Sz9sj5 for ; Mon, 5 Apr 2021 11:21:11 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231840AbhDEBVP (ORCPT ); Sun, 4 Apr 2021 21:21:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39902 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231775AbhDEBVP (ORCPT ); Sun, 4 Apr 2021 21:21:15 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 97313C061756 for ; Sun, 4 Apr 2021 18:21:09 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id il9-20020a17090b1649b0290114bcb0d6c2so7101595pjb.0 for ; Sun, 04 Apr 2021 18:21:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=oFmaJjBp09ty+mQfvxR7OURCiJbmuhIiXXzajeTk/cc=; b=emC1GlsJVEoBKH3tpRm428lR5Q278lI0ICrGiVJe7NkjyOX0Bl8emZC0wATx2YWgEx hnoVt3ZwFnFxtq3wxFlLENUAcPa+LOXRPxatyLbvw3wYcafTntKow1UBSbToWr0xWugM a3XmrmzrXBwKuFMP2N8GekmvlEi7LXzJRg1rwr5GiW01u9n/Dexr3lx5IGG778+gcBPA 
gXuHuVkNsi7hNfvKBwI2GpcqNnEdHCpjYv0PoYRDJEkb8u67VFsPLqRzdISPYa0xxs3m KxWsN1h9clM1G41eZk/lFHksJk/wnKHpJ4fHo3PbUk0FEbF8E9IqzjIBnYix1L3S26om Il3A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=oFmaJjBp09ty+mQfvxR7OURCiJbmuhIiXXzajeTk/cc=; b=fzsZ0RDDdgFC+lAC1Jc4FbH6F/GNBQuDG7KPGUIRRtUoSOguRtqXS+OJlCXUQ1zYH5 qyv4QeulFy8FI1RNbYpXnV/vspgeYnavlRLHHSgdrjfi3tFr6I/Jfx+4W+3l0YreillY LwtcH5+dVZkYflYIqQv5hikcSXMormigTZaIT5BIg1jJuCtB9OXdgwTSQQFcIjQCdmnd Ae0M1L52Ng255dUaY/V4gWUGHcjD6GuQzejQp4P/GyHQZGp1dKsNUW0SVr4ULLl8NjAv Gi0Rr29lZiwxhNbAX83BKblGtCKw2L6ZpLqg7pUZ/0ZxEdXZFr4eW+GSp2swj806eDBO cb5Q== X-Gm-Message-State: AOAM530imJoFypf+gFjgN2Ob06HiA/9A/dUzL+h0YrrNu2XIa2qkg3z5 NY1Vu6kbiiFdX6KwtD/PMppc6DYFQ5qrnw== X-Google-Smtp-Source: ABdhPJxlXA2dMaC1bmiPAnD/aJpjmgpVbcqRhVvmwr0FERrSeu3xkAl+9eh7rOLt8mBiijtZJh+SlA== X-Received: by 2002:a17:90a:7786:: with SMTP id v6mr24154111pjk.16.1617585669004; Sun, 04 Apr 2021 18:21:09 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:08 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy Subject: [PATCH v6 19/48] KVM: PPC: Book3S HV P9: Move radix MMU switching instructions together Date: Mon, 5 Apr 2021 11:19:19 +1000 Message-Id: <20210405011948.675354-20-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Switching the MMU from radix<->radix mode is tricky particularly as the MMU can remain enabled and requires a certain sequence of SPR updates. Move these together into their own functions. This also includes the radix TLB check / flush because it's tied in to MMU switching due to tlbiel getting LPID from LPIDR. Reviewed-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 66 +++++++++++++++++++++++------------- 1 file changed, 43 insertions(+), 23 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index ed77aff9cdb6..3424b1bfa98e 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3478,12 +3478,49 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) trace_kvmppc_run_core(vc, 1); } +static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + struct kvm_nested_guest *nested = vcpu->arch.nested; + u32 lpid; + + lpid = nested ? nested->shadow_lpid : kvm->arch.lpid; + + /* + * All the isync()s are overkill but trivially follow the ISA + * requirements. Some can likely be replaced with justification + * comment for why they are not needed. + */ + isync(); + mtspr(SPRN_LPID, lpid); + isync(); + mtspr(SPRN_LPCR, lpcr); + isync(); + mtspr(SPRN_PID, vcpu->arch.pid); + isync(); + + /* TLBIEL must have LPIDR set, so set guest LPID before flushing. 
*/ + kvmppc_check_need_tlb_flush(kvm, vc->pcpu, nested); +} + +static void switch_mmu_to_host_radix(struct kvm *kvm, u32 pid) +{ + isync(); + mtspr(SPRN_PID, pid); + isync(); + mtspr(SPRN_LPID, kvm->arch.host_lpid); + isync(); + mtspr(SPRN_LPCR, kvm->arch.host_lpcr); + isync(); +} + /* * Load up hypervisor-mode registers on P9. */ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr) { + struct kvm *kvm = vcpu->kvm; struct kvmppc_vcore *vc = vcpu->arch.vcore; s64 hdec; u64 tb, purr, spurr; @@ -3506,12 +3543,12 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, * P8 and P9 suppress the HDEC exception when LPCR[HDICE] = 0, * so set HDICE before writing HDEC. */ - mtspr(SPRN_LPCR, vcpu->kvm->arch.host_lpcr | LPCR_HDICE); + mtspr(SPRN_LPCR, kvm->arch.host_lpcr | LPCR_HDICE); isync(); hdec = time_limit - mftb(); if (hdec < 0) { - mtspr(SPRN_LPCR, vcpu->kvm->arch.host_lpcr); + mtspr(SPRN_LPCR, kvm->arch.host_lpcr); isync(); return BOOK3S_INTERRUPT_HV_DECREMENTER; } @@ -3546,7 +3583,6 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, } mtspr(SPRN_CIABR, vcpu->arch.ciabr); mtspr(SPRN_IC, vcpu->arch.ic); - mtspr(SPRN_PID, vcpu->arch.pid); mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -3560,8 +3596,7 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, mtspr(SPRN_AMOR, ~0UL); - mtspr(SPRN_LPCR, lpcr); - isync(); + switch_mmu_to_guest_radix(kvm, vcpu, lpcr); kvmppc_xive_push_vcpu(vcpu); @@ -3600,7 +3635,6 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, mtspr(SPRN_DAWR1, host_dawr1); mtspr(SPRN_DAWRX1, host_dawrx1); } - mtspr(SPRN_PID, host_pidr); /* * Since this is radix, do a eieio; tlbsync; ptesync sequence in @@ -3615,9 +3649,6 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, if (cpu_has_feature(CPU_FTR_ARCH_31)) asm volatile(PPC_CP_ABORT); - mtspr(SPRN_LPID, vcpu->kvm->arch.host_lpid); /* restore host LPID */ - isync(); - vc->dpdes = mfspr(SPRN_DPDES); vc->vtb = mfspr(SPRN_VTB); mtspr(SPRN_DPDES, 0); @@ -3634,7 +3665,8 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, } mtspr(SPRN_HDEC, 0x7fffffff); - mtspr(SPRN_LPCR, vcpu->kvm->arch.host_lpcr); + + switch_mmu_to_host_radix(kvm, host_pidr); return trap; } @@ -4167,7 +4199,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, { struct kvm_run *run = vcpu->run; int trap, r, pcpu; - int srcu_idx, lpid; + int srcu_idx; struct kvmppc_vcore *vc; struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; @@ -4241,13 +4273,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vc->vcore_state = VCORE_RUNNING; trace_kvmppc_run_core(vc, 0); - if (cpu_has_feature(CPU_FTR_HVMODE)) { - lpid = nested ? 
nested->shadow_lpid : kvm->arch.lpid; - mtspr(SPRN_LPID, lpid); - isync(); - kvmppc_check_need_tlb_flush(kvm, pcpu, nested); - } - guest_enter_irqoff(); srcu_idx = srcu_read_lock(&kvm->srcu); @@ -4266,11 +4291,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, srcu_read_unlock(&kvm->srcu, srcu_idx); - if (cpu_has_feature(CPU_FTR_HVMODE)) { - mtspr(SPRN_LPID, kvm->arch.host_lpid); - isync(); - } - set_irq_happened(trap); kvmppc_set_host_core(pcpu); From patchwork Mon Apr 5 01:19:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462193 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=tnf57h3h; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCXr0VBZz9srY for ; Mon, 5 Apr 2021 11:21:16 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231842AbhDEBVT (ORCPT ); Sun, 4 Apr 2021 21:21:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39918 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231775AbhDEBVS (ORCPT ); Sun, 4 Apr 2021 21:21:18 -0400 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6C213C061756 for ; Sun, 4 Apr 2021 18:21:13 -0700 (PDT) Received: by mail-pj1-x1034.google.com with SMTP id j6-20020a17090adc86b02900cbfe6f2c96so5074397pjv.1 for ; Sun, 04 Apr 2021 18:21:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=iJ6ODLaZGRZoX5kuCv+hNtwVVZa1NGvMNfJW5zA4O7A=; b=tnf57h3hsYSAPnNoIVDX0DRnOFYV1zcFOYHbsShspfQ1OXRzvw6Cx0p5hn91bFEwuL jQkeo79AtuCmMr5eIxTRgO5a86edi3F2rjTi/b1lsy6klNLup8OcUcVTvcjW8+eAl73Y Vcu3AGuVKqJqsjyI9zXvgJvjjRWRGdpVkjpGKCOn5giuqzRzO/MsAM7vUrNBIn3pTrA8 Q350EFaEhPTWBQ2Ta5TU7B8Jx53/K4rdWhSnEIdtHxJcViIZPd26R7/d/HeWsNOmFn8r MvX5wjCqCqFCh25kvxdSsoervBlE5s6reuzgQNC7ER8ABBC2EpmNMxsZld2e0nfOSGx5 C8gQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iJ6ODLaZGRZoX5kuCv+hNtwVVZa1NGvMNfJW5zA4O7A=; b=Yfv6NTsgxg84HD0uttwsvzJeT07txhdu55L72kheVQ5gueJfbHwqC1HHfzw8O+jrA3 IQgBwOGLq45tE2sJ7w+m+vpk/2Va5zz9THdJW7XQxd93QTHxSHBTpUA02083ti4zPDds y/CWspLx+asPBY6jjRa7KtOGlX3Jp9J2FJOJh2HXfo9OlVSQstPhah5F7oH0NMj5TYse NPPfx4ZPSFdpAt1KB6PYl45JufoR39FiEnXpkp21X6jv3yQ0ECbAbD5+v3W+0gDVbeE8 mUA4KqSHqlbQaY6iTgIvIckpLoEnxuJu47BhryYVTNVshmBn56PaIT2yNMyBiOp74Yvz 1juA== X-Gm-Message-State: AOAM5315O7uxLzRfwT7Ou8T0vEvZeEHSn+5rziV7ducYg7Wn3rBhmDtl w0dXZdH4mbudoM9FcnYVgGnbXf0bWzg2MQ== X-Google-Smtp-Source: ABdhPJwqhx+v1wv801r8ZoWRW5tqE3CVkxURxTh6TXVJm/2NlVOoco3OJeN4roQWCu0HQumw+yjBqg== X-Received: by 2002:a17:902:9a0a:b029:e6:bf00:8a36 with SMTP id 
v10-20020a1709029a0ab02900e6bf008a36mr22035803plp.51.1617585672919; Sun, 04 Apr 2021 18:21:12 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:12 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, =?utf-8?q?C=C3=A9dric_Le_Goater?= , Alexey Kardashevskiy Subject: [PATCH v6 20/48] KVM: PPC: Book3S HV P9: implement kvmppc_xive_pull_vcpu in C Date: Mon, 5 Apr 2021 11:19:20 +1000 Message-Id: <20210405011948.675354-21-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This is more symmetric with kvmppc_xive_push_vcpu. The extra test in the asm will go away in a later change. Reviewed-by: Cédric Le Goater Reviewed-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_ppc.h | 2 ++ arch/powerpc/kvm/book3s_hv.c | 2 ++ arch/powerpc/kvm/book3s_hv_rmhandlers.S | 5 ++++ arch/powerpc/kvm/book3s_xive.c | 31 +++++++++++++++++++++++++ 4 files changed, 40 insertions(+) diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index 9531b1c1b190..73b1ca5a6471 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -672,6 +672,7 @@ extern int kvmppc_xive_set_icp(struct kvm_vcpu *vcpu, u64 icpval); extern int kvmppc_xive_set_irq(struct kvm *kvm, int irq_source_id, u32 irq, int level, bool line_status); extern void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu); +extern void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu); static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu) { @@ -712,6 +713,7 @@ static inline int kvmppc_xive_set_icp(struct kvm_vcpu *vcpu, u64 icpval) { retur static inline int kvmppc_xive_set_irq(struct kvm *kvm, int irq_source_id, u32 irq, int level, bool line_status) { return -ENODEV; } static inline void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu) { } +static inline void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu) { } static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu) { return 0; } diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 3424b1bfa98e..6ca47f26a397 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3605,6 +3605,8 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, trap = __kvmhv_vcpu_entry_p9(vcpu); + kvmppc_xive_pull_vcpu(vcpu); + /* Advance host PURR/SPURR by the amount used by guest */ purr = mfspr(SPRN_PURR); spurr = mfspr(SPRN_SPURR); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 75405ef53238..c11597f815e4 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1442,6 +1442,11 @@ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */ bl kvmhv_accumulate_time #endif #ifdef CONFIG_KVM_XICS + /* If we came in through the P9 short path, xive pull is done in C */ + lwz r0, STACK_SLOT_SHORT_PATH(r1) + cmpwi r0, 0 + bne 1f + /* We are exiting, pull the VP from the XIVE */ lbz r0, VCPU_XIVE_PUSHED(r9) cmpwi cr0, r0, 0 diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c index e7219b6f5f9a..741bf1f4387a 100644 --- a/arch/powerpc/kvm/book3s_xive.c 
+++ b/arch/powerpc/kvm/book3s_xive.c @@ -127,6 +127,37 @@ void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu) } EXPORT_SYMBOL_GPL(kvmppc_xive_push_vcpu); +/* + * Pull a vcpu's context from the XIVE on guest exit. + * This assumes we are in virtual mode (MMU on) + */ +void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu) +{ + void __iomem *tima = local_paca->kvm_hstate.xive_tima_virt; + + if (!vcpu->arch.xive_pushed) + return; + + /* + * Should not have been pushed if there is no tima + */ + if (WARN_ON(!tima)) + return; + + eieio(); + /* First load to pull the context, we ignore the value */ + __raw_readl(tima + TM_SPC_PULL_OS_CTX); + /* Second load to recover the context state (Words 0 and 1) */ + vcpu->arch.xive_saved_state.w01 = __raw_readq(tima + TM_QW1_OS); + + /* Fixup some of the state for the next load */ + vcpu->arch.xive_saved_state.lsmfb = 0; + vcpu->arch.xive_saved_state.ack = 0xff; + vcpu->arch.xive_pushed = 0; + eieio(); +} +EXPORT_SYMBOL_GPL(kvmppc_xive_pull_vcpu); + /* * This is a simple trigger for a generic XIVE IRQ. This must * only be called for interrupts that support a trigger page From patchwork Mon Apr 5 01:19:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462194 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=F7sn4PNt; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCXw1vTTz9sX5 for ; Mon, 5 Apr 2021 11:21:20 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231775AbhDEBVX (ORCPT ); Sun, 4 Apr 2021 21:21:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39930 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231539AbhDEBVW (ORCPT ); Sun, 4 Apr 2021 21:21:22 -0400 Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com [IPv6:2607:f8b0:4864:20::635]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F0EEC061756 for ; Sun, 4 Apr 2021 18:21:16 -0700 (PDT) Received: by mail-pl1-x635.google.com with SMTP id y2so4928608plg.5 for ; Sun, 04 Apr 2021 18:21:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=c0MvDZ1F9X/SZ94TT66c0+OApX1YU3GVKikmjXf/RuQ=; b=F7sn4PNtawZZ+0n4PkDaXml8Zr5ys7vuudGUP/2FwhYXAIbyXJliCSpcFYCnGYCYXc KJHx7+6GqcS/ZQgOxTYVc7Fn3HF8cXskQrPVL3mKu9G+qF4LSkfF3VMle+B70Ne3G2Wl PT7qdEeG7Pg3JfXUhCCvKMNvdSzH7BT9Rq3I/2YXO9FYO11PfKv41AuyOMkJuPby0FdV TRwJSfFNiRSGzcc4t4jIUYx28I0VJnLfbv5Z0/oyPwPt2BwVk2PXSz8/DAMf1ZQO7zeE KLm0gdKr1dg4SXO4ydC3vkrYmpfEVzO47cUsLaCo/muJAZpcdDrOx0cmlsI3qaCpOmRU S0Lg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=c0MvDZ1F9X/SZ94TT66c0+OApX1YU3GVKikmjXf/RuQ=; b=mlD6V7iRZT+G9C0pzKDgTGzuOR1dAnYq/bH9lICJ5+hZDKyTsfRzrbXbftzEnc5Zvp 
RYCyDXEA/zUaQmAmmTaEfqgaWq+7tOHJFPrHew6IAY8+rsecIVCfj4UJOKcPAud4noVm UBIDTugMWiQNuJvZUnq1vLc8id+Y0zafxvCPVERVgHktItdgXsk+NBkNi0Z9NL5HMIMU RT2g5FhK4t+a++ZjxnzN+qGlftlRGZOAfmyaVLI6c3O7ZnZtt3vzpEw/FvvxBmCOTItg jv/zJa4YceazfmRP2UVHLjE4vuQbultQCsnMLinob0DsV1Sy+WBhj/X2r53bRgOKW1vJ hhFQ== X-Gm-Message-State: AOAM531JELGLVUzpIAQnMtRdc/k2skdshHkFMEUMbqHsDk5umt5Fi6sF gbGUSIoeNtXOGMPXwCL0JfOWU9c2ghM0KA== X-Google-Smtp-Source: ABdhPJwtPrpV+zXiV4XPpDvj9qVH5USq3fAtYCWCwwtYuSrsWHqeWpm1iPvPJRRRGaU3gWu5FoOdMg== X-Received: by 2002:a17:90b:b0d:: with SMTP id bf13mr23848251pjb.7.1617585675964; Sun, 04 Apr 2021 18:21:15 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:15 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy Subject: [PATCH v6 21/48] KVM: PPC: Book3S HV P9: Move xive vcpu context management into kvmhv_p9_guest_entry Date: Mon, 5 Apr 2021 11:19:21 +1000 Message-Id: <20210405011948.675354-22-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the xive management up so the low level register switching can be pushed further down in a later patch. XIVE MMIO CI operations can run in higher level code with machine checks, tracing, etc., available. Reviewed-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 6ca47f26a397..2dc65d752f80 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3598,15 +3598,11 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, switch_mmu_to_guest_radix(kvm, vcpu, lpcr); - kvmppc_xive_push_vcpu(vcpu); - mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0); mtspr(SPRN_SRR1, vcpu->arch.shregs.srr1); trap = __kvmhv_vcpu_entry_p9(vcpu); - kvmppc_xive_pull_vcpu(vcpu); - /* Advance host PURR/SPURR by the amount used by guest */ purr = mfspr(SPRN_PURR); spurr = mfspr(SPRN_SPURR); @@ -3789,7 +3785,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, trap = 0; } } else { + kvmppc_xive_push_vcpu(vcpu); trap = kvmhv_load_hv_regs_and_go(vcpu, time_limit, lpcr); + kvmppc_xive_pull_vcpu(vcpu); + } vcpu->arch.slb_max = 0; From patchwork Mon Apr 5 01:19:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462196 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=F/bU20Uu; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCY11yz4z9sj1 for ; Mon, 5 Apr 2021 11:21:25 +1000 (AEST) Received: (majordomo@vger.kernel.org) by 
vger.kernel.org via listexpand id S231539AbhDEBV1 (ORCPT ); Sun, 4 Apr 2021 21:21:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39946 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231843AbhDEBVZ (ORCPT ); Sun, 4 Apr 2021 21:21:25 -0400 Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com [IPv6:2607:f8b0:4864:20::633]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 63F9EC061756 for ; Sun, 4 Apr 2021 18:21:20 -0700 (PDT) Received: by mail-pl1-x633.google.com with SMTP id ay2so4931566plb.3 for ; Sun, 04 Apr 2021 18:21:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=EA6+7hewDPlZEv+gkXz/qw0djkxgBFSzRN/Bgo3zSCA=; b=F/bU20UufpT4S+QCX9JprubxquGnxL31rK+MWm82bfNvgCf2vv/m3x6Rh++tc9Ek39 cn+9x/dD1uRUbtqUwXgvlbLdbkF6gXg+aleCrY3SabS2mTHR+ufIj2a9UlkcbIFKFo5N iJvqHD07yYMqhhgip22CP1wWCc6TbCvLTlXH2R+VntiCh/yl7zAQQa7KfMzeItbKigou jr38AXuIGyZykPZqKMblIsorNPIVqdKlv0hOePa4p8DA4G+xCx94C+x4T9m1CAdS0S05 5ptnMijO9aIZ19dj0IQs0VoO9kiiA+oHWNkY7zPSSmY/t+s0gph6RO+0kGcxz/hpi7DR XQiQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=EA6+7hewDPlZEv+gkXz/qw0djkxgBFSzRN/Bgo3zSCA=; b=ClyYfU6XaNiw6bxOKGATTY2sZajwRuddC0tgS5JBVAY8Fa3M1D15/z5L8M6XCqtZRA fQJ0fPNjcO5pfJTA50avMRDHIMmFGu2rZynnuDu+zNQKDwtpx3AgdFGe3JdydlmgnZWy rywzPx2hLY68CQjarZ6cf7dJONdwMrGSTH9eiJEtv/kVH2+6Nxe4NJt13g22kROGNNBJ hKnBBrC4tTf7QBbFmVdjumJFH1bDZh4KzXEJfT6pQmYdvgBOZTEuYqE4BU8xb7WYeOJO jGws8322cEGMNzOehxSoUut/AE+yDgTeTSEqu/JhZh8phETfXSxGVWj17m+sb6W12h+V s/dg== X-Gm-Message-State: AOAM531TDbChmZbeucW9ZBbO+2dPq9PVAKRCvuZL1/H1a4RLzaE/gtnY bVo+BmzN9X6D6MgXaCKGNk8jkQf2thgZYw== X-Google-Smtp-Source: ABdhPJzif4hTmxfQSWENI3mWO3OywBwmaf+JVZmzdPVcHf/dLMXPOKXITDhG/H9V8923oloZjj5sbw== X-Received: by 2002:a17:90a:c202:: with SMTP id e2mr23682662pjt.73.1617585679645; Sun, 04 Apr 2021 18:21:19 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:19 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy , =?utf-8?q?C=C3=A9dric_Le_Goater?= Subject: [PATCH v6 22/48] KVM: PPC: Book3S HV P9: Stop handling hcalls in real-mode in the P9 path Date: Mon, 5 Apr 2021 11:19:22 +1000 Message-Id: <20210405011948.675354-23-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org In the interest of minimising the amount of code that is run in "real-mode", don't handle hcalls in real mode in the P9 path. This requires some new handlers for H_CEDE and xics-on-xive to be added before xive is pulled or cede logic is checked. This introduces a change in radix guest behaviour where radix guests that execute 'sc 1' in userspace now get a privilege fault whereas previously the 'sc 1' would be reflected as a syscall interrupt to the guest kernel. That reflection is only required for hash guests that run PR KVM. 
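The behaviour change above boils down to a small dispatch decision. The following is an editor's sketch only, a standalone model of that decision rather than kernel code: the demo_* names and the enum are invented for illustration, the hcall numbers are the PAPR values from asm/hvcall.h, and the real logic lives in kvmppc_handle_exit_hv() and kvmhv_p9_guest_entry() in the diff below.

#include <stdbool.h>
#include <stdio.h>

#define DEMO_MSR_PR   0x4000UL   /* MSR[PR], problem (user) state */
#define DEMO_H_CEDE   0xE0UL
#define DEMO_H_EOI    0x64UL
#define DEMO_H_CPPR   0x68UL
#define DEMO_H_IPI    0x6CUL
#define DEMO_H_IPOLL  0x70UL
#define DEMO_H_XIRR   0x74UL
#define DEMO_H_XIRR_X 0x2FCUL

enum demo_action {
	DEMO_PROGRAM_CHECK,	/* reflect a privilege fault to the guest kernel */
	DEMO_CEDE,		/* kvmppc_cede() before the XIVE context is pulled */
	DEMO_XICS_VIRT,		/* kvmppc_xive_xics_hcall() in virtual mode */
	DEMO_EXIT_TO_HOST,	/* kvmppc_pseries_do_hcall() or punt to userspace */
};

/* mirrors hcall_is_xics() added by this patch */
static bool demo_hcall_is_xics(unsigned long req)
{
	return req == DEMO_H_EOI || req == DEMO_H_CPPR || req == DEMO_H_IPI ||
	       req == DEMO_H_IPOLL || req == DEMO_H_XIRR || req == DEMO_H_XIRR_X;
}

/* what the P9 path now does with an sc 1 taken from the guest */
static enum demo_action demo_dispatch_sc1(unsigned long msr, bool nested, unsigned long req)
{
	if (msr & DEMO_MSR_PR)
		return DEMO_PROGRAM_CHECK;	/* guest userspace: no more syscall reflection */
	if (!nested && req == DEMO_H_CEDE)
		return DEMO_CEDE;
	if (!nested && demo_hcall_is_xics(req))
		return DEMO_XICS_VIRT;
	return DEMO_EXIT_TO_HOST;
}

int main(void)
{
	/* guest userspace sc 1 -> privilege fault; guest kernel H_XIRR -> virtual-mode XICS */
	printf("%d %d\n", demo_dispatch_sc1(DEMO_MSR_PR, false, 0),
	       demo_dispatch_sc1(0, false, DEMO_H_XIRR));
	return 0;
}

The ordering matters because H_CEDE and the XICS hcalls must be resolved before kvmppc_xive_pull_vcpu() runs, while everything else can take the normal virtual-mode exit path.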
Background: In POWER8 and earlier processors, it is very expensive to exit from the HV real mode context of a guest hypervisor interrupt, and switch to host virtual mode. On those processors, guest->HV interrupts reach the hypervisor with the MMU off because the MMU is loaded with guest context (LPCR, SDR1, SLB), and the other threads in the sub-core need to be pulled out of the guest too. Then the primary must save off guest state, invalidate SLB and ERAT, and load up host state before the MMU can be enabled to run in host virtual mode (~= regular Linux mode). Hash guests also require a lot of hcalls to run due to the nature of the MMU architecture and paravirtualisation design. The XICS interrupt controller requires hcalls to run. So KVM traditionally tries hard to avoid the full exit, by handling hcalls and other interrupts in real mode as much as possible. By contrast, POWER9 has independent MMU context per-thread, and in radix mode the hypervisor is in host virtual memory mode when the HV interrupt is taken. Radix guests do not require significant hcalls to manage their translations, and xive guests don't need hcalls to handle interrupts. So it's much less important from a performance standpoint to handle hcalls in real mode in P9. Reviewed-by: Alexey Kardashevskiy Reviewed-by: Cédric Le Goater Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_ppc.h | 6 ++ arch/powerpc/kvm/book3s.c | 6 ++ arch/powerpc/kvm/book3s_hv.c | 76 +++++++++++++++++++++---- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 5 ++ arch/powerpc/kvm/book3s_xive.c | 64 +++++++++++++++++++++ 5 files changed, 146 insertions(+), 11 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index 73b1ca5a6471..cbd8deba2e8a 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -129,6 +129,7 @@ extern void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu); extern int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu); extern int kvmppc_core_pending_dec(struct kvm_vcpu *vcpu); extern void kvmppc_core_queue_machine_check(struct kvm_vcpu *vcpu, ulong flags); +extern void kvmppc_core_queue_syscall(struct kvm_vcpu *vcpu); extern void kvmppc_core_queue_program(struct kvm_vcpu *vcpu, ulong flags); extern void kvmppc_core_queue_fpunavail(struct kvm_vcpu *vcpu); extern void kvmppc_core_queue_vec_unavail(struct kvm_vcpu *vcpu); @@ -607,6 +608,7 @@ extern void kvmppc_free_pimap(struct kvm *kvm); extern int kvmppc_xics_rm_complete(struct kvm_vcpu *vcpu, u32 hcall); extern void kvmppc_xics_free_icp(struct kvm_vcpu *vcpu); extern int kvmppc_xics_hcall(struct kvm_vcpu *vcpu, u32 cmd); +extern int kvmppc_xive_xics_hcall(struct kvm_vcpu *vcpu, u32 req); extern u64 kvmppc_xics_get_icp(struct kvm_vcpu *vcpu); extern int kvmppc_xics_set_icp(struct kvm_vcpu *vcpu, u64 icpval); extern int kvmppc_xics_connect_vcpu(struct kvm_device *dev, @@ -639,6 +641,8 @@ static inline int kvmppc_xics_enabled(struct kvm_vcpu *vcpu) static inline void kvmppc_xics_free_icp(struct kvm_vcpu *vcpu) { } static inline int kvmppc_xics_hcall(struct kvm_vcpu *vcpu, u32 cmd) { return 0; } +static inline int kvmppc_xive_xics_hcall(struct kvm_vcpu *vcpu, u32 req) + { return 0; } #endif #ifdef CONFIG_KVM_XIVE @@ -673,6 +677,7 @@ extern int kvmppc_xive_set_irq(struct kvm *kvm, int irq_source_id, u32 irq, int level, bool line_status); extern void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu); extern void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu); +extern void kvmppc_xive_rearm_escalation(struct 
kvm_vcpu *vcpu); static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu) { @@ -714,6 +719,7 @@ static inline int kvmppc_xive_set_irq(struct kvm *kvm, int irq_source_id, u32 ir int level, bool line_status) { return -ENODEV; } static inline void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu) { } static inline void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu) { } +static inline void kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu) { } static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu) { return 0; } diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c index 44bf567b6589..1658f899e289 100644 --- a/arch/powerpc/kvm/book3s.c +++ b/arch/powerpc/kvm/book3s.c @@ -171,6 +171,12 @@ void kvmppc_core_queue_machine_check(struct kvm_vcpu *vcpu, ulong flags) } EXPORT_SYMBOL_GPL(kvmppc_core_queue_machine_check); +void kvmppc_core_queue_syscall(struct kvm_vcpu *vcpu) +{ + kvmppc_inject_interrupt(vcpu, BOOK3S_INTERRUPT_SYSCALL, 0); +} +EXPORT_SYMBOL(kvmppc_core_queue_syscall); + void kvmppc_core_queue_program(struct kvm_vcpu *vcpu, ulong flags) { /* might as well deliver this straight away */ diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 2dc65d752f80..782c02520741 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1142,12 +1142,13 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) } /* - * Handle H_CEDE in the nested virtualization case where we haven't - * called the real-mode hcall handlers in book3s_hv_rmhandlers.S. + * Handle H_CEDE in the P9 path where we don't call the real-mode hcall + * handlers in book3s_hv_rmhandlers.S. + * * This has to be done early, not in kvmppc_pseries_do_hcall(), so * that the cede logic in kvmppc_run_single_vcpu() works properly. */ -static void kvmppc_nested_cede(struct kvm_vcpu *vcpu) +static void kvmppc_cede(struct kvm_vcpu *vcpu) { vcpu->arch.shregs.msr |= MSR_EE; vcpu->arch.ceded = 1; @@ -1400,17 +1401,34 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, } case BOOK3S_INTERRUPT_SYSCALL: { - /* hcall - punt to userspace */ int i; - /* hypercall with MSR_PR has already been handled in rmode, - * and never reaches here. - */ + if (unlikely(vcpu->arch.shregs.msr & MSR_PR)) { + /* + * Guest userspace executed sc 1. This can only be + * reached by the P9 path because the old path + * handles this case in realmode hcall handlers. + * + * Radix guests can not run PR KVM or nested HV hash + * guests which might run PR KVM, so this is always + * a privilege fault. Send a program check to guest + * kernel. + */ + kvmppc_core_queue_program(vcpu, SRR1_PROGPRIV); + r = RESUME_GUEST; + break; + } + /* + * hcall - gather args and set exit_reason. This will next be + * handled by kvmppc_pseries_do_hcall which may be able to deal + * with it and resume guest, or may punt to userspace. + */ run->papr_hcall.nr = kvmppc_get_gpr(vcpu, 3); for (i = 0; i < 9; ++i) run->papr_hcall.args[i] = kvmppc_get_gpr(vcpu, 4 + i); run->exit_reason = KVM_EXIT_PAPR_HCALL; + /* Should this only be set if pseries_do_hcall fails? */ vcpu->arch.hcall_needed = 1; r = RESUME_HOST; break; @@ -3669,6 +3687,12 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, return trap; } +static inline bool hcall_is_xics(unsigned long req) +{ + return req == H_EOI || req == H_CPPR || req == H_IPI || + req == H_IPOLL || req == H_XIRR || req == H_XIRR_X; +} + /* * Virtual-mode guest entry for POWER9 and later when the host and * guest are both using the radix MMU. The LPIDR has already been set. 
@@ -3780,15 +3804,36 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && kvmppc_get_gpr(vcpu, 3) == H_CEDE) { - kvmppc_nested_cede(vcpu); + kvmppc_cede(vcpu); kvmppc_set_gpr(vcpu, 3, 0); trap = 0; } } else { kvmppc_xive_push_vcpu(vcpu); trap = kvmhv_load_hv_regs_and_go(vcpu, time_limit, lpcr); + if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && + !(vcpu->arch.shregs.msr & MSR_PR)) { + unsigned long req = kvmppc_get_gpr(vcpu, 3); + + /* H_CEDE has to be handled now, not later */ + if (req == H_CEDE) { + kvmppc_cede(vcpu); + kvmppc_xive_rearm_escalation(vcpu); /* may un-cede */ + kvmppc_set_gpr(vcpu, 3, 0); + trap = 0; + + /* XICS hcalls must be handled before xive is pulled */ + } else if (hcall_is_xics(req)) { + int ret; + + ret = kvmppc_xive_xics_hcall(vcpu, req); + if (ret != H_TOO_HARD) { + kvmppc_set_gpr(vcpu, 3, ret); + trap = 0; + } + } + } kvmppc_xive_pull_vcpu(vcpu); - } vcpu->arch.slb_max = 0; @@ -4448,8 +4493,17 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) else r = kvmppc_run_vcpu(vcpu); - if (run->exit_reason == KVM_EXIT_PAPR_HCALL && - !(vcpu->arch.shregs.msr & MSR_PR)) { + if (run->exit_reason == KVM_EXIT_PAPR_HCALL) { + if (WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_PR)) { + /* + * These should have been caught reflected + * into the guest by now. Final sanity check: + * don't allow userspace to execute hcalls in + * the hypervisor. + */ + r = RESUME_GUEST; + continue; + } trace_kvm_hcall_enter(vcpu); r = kvmppc_pseries_do_hcall(vcpu); trace_kvm_hcall_exit(vcpu, r); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index c11597f815e4..2d0d14ed1d92 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1397,9 +1397,14 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) mr r4,r9 bge fast_guest_return 2: + /* If we came in through the P9 short path, no real mode hcalls */ + lwz r0, STACK_SLOT_SHORT_PATH(r1) + cmpwi r0, 0 + bne no_try_real /* See if this is an hcall we can handle in real mode */ cmpwi r12,BOOK3S_INTERRUPT_SYSCALL beq hcall_try_real_mode +no_try_real: /* Hypervisor doorbell - exit only if host IPI flag set */ cmpwi r12, BOOK3S_INTERRUPT_H_DOORBELL diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c index 741bf1f4387a..24c07094651a 100644 --- a/arch/powerpc/kvm/book3s_xive.c +++ b/arch/powerpc/kvm/book3s_xive.c @@ -158,6 +158,40 @@ void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu) } EXPORT_SYMBOL_GPL(kvmppc_xive_pull_vcpu); +void kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu) +{ + void __iomem *esc_vaddr = (void __iomem *)vcpu->arch.xive_esc_vaddr; + + if (!esc_vaddr) + return; + + /* we are using XIVE with single escalation */ + + if (vcpu->arch.xive_esc_on) { + /* + * If we still have a pending escalation, abort the cede, + * and we must set PQ to 10 rather than 00 so that we don't + * potentially end up with two entries for the escalation + * interrupt in the XIVE interrupt queue. In that case + * we also don't want to set xive_esc_on to 1 here in + * case we race with xive_esc_irq(). + */ + vcpu->arch.ceded = 0; + /* + * The escalation interrupts are special as we don't EOI them. + * There is no need to use the load-after-store ordering offset + * to set PQ to 10 as we won't use StoreEOI. 
+ */ + __raw_readq(esc_vaddr + XIVE_ESB_SET_PQ_10); + } else { + vcpu->arch.xive_esc_on = true; + mb(); + __raw_readq(esc_vaddr + XIVE_ESB_SET_PQ_00); + } + mb(); +} +EXPORT_SYMBOL_GPL(kvmppc_xive_rearm_escalation); + /* * This is a simple trigger for a generic XIVE IRQ. This must * only be called for interrupts that support a trigger page @@ -2106,6 +2140,36 @@ static int kvmppc_xive_create(struct kvm_device *dev, u32 type) return 0; } +int kvmppc_xive_xics_hcall(struct kvm_vcpu *vcpu, u32 req) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + + /* The VM should have configured XICS mode before doing XICS hcalls. */ + if (!kvmppc_xics_enabled(vcpu)) + return H_TOO_HARD; + + switch (req) { + case H_XIRR: + return xive_vm_h_xirr(vcpu); + case H_CPPR: + return xive_vm_h_cppr(vcpu, kvmppc_get_gpr(vcpu, 4)); + case H_EOI: + return xive_vm_h_eoi(vcpu, kvmppc_get_gpr(vcpu, 4)); + case H_IPI: + return xive_vm_h_ipi(vcpu, kvmppc_get_gpr(vcpu, 4), + kvmppc_get_gpr(vcpu, 5)); + case H_IPOLL: + return xive_vm_h_ipoll(vcpu, kvmppc_get_gpr(vcpu, 4)); + case H_XIRR_X: + xive_vm_h_xirr(vcpu); + kvmppc_set_gpr(vcpu, 5, get_tb() + vc->tb_offset); + return H_SUCCESS; + } + + return H_UNSUPPORTED; +} +EXPORT_SYMBOL_GPL(kvmppc_xive_xics_hcall); + int kvmppc_xive_debug_show_queues(struct seq_file *m, struct kvm_vcpu *vcpu) { struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; From patchwork Mon Apr 5 01:19:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462197 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ZQRdDF6E; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCY41MJYz9shn for ; Mon, 5 Apr 2021 11:21:28 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231843AbhDEBVb (ORCPT ); Sun, 4 Apr 2021 21:21:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39962 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231841AbhDEBVa (ORCPT ); Sun, 4 Apr 2021 21:21:30 -0400 Received: from mail-pg1-x532.google.com (mail-pg1-x532.google.com [IPv6:2607:f8b0:4864:20::532]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DAE75C061756 for ; Sun, 4 Apr 2021 18:21:23 -0700 (PDT) Received: by mail-pg1-x532.google.com with SMTP id g35so2419218pgg.9 for ; Sun, 04 Apr 2021 18:21:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=QxLOBUUPsmcmYWvGNUQT+fwEBAPKT3eRXC/liKQ+kU0=; b=ZQRdDF6EbQOyNCo2zPq1E2pCr0+B8MsLkIjMQRtf6WbrZ0Po7KCFMt1nmNoACL2avU 8e+hk1fm+9hkq1LPlnBagOsPE/bPA/0Q8zhs26l98BU9CJjRSmh1dUXzuv8xu7noVZuU SH5yLQnqTkKArt25FSbMarOwHxnOG/52J2IG8jPeMazwrepy3XiA4IG54mx6nfWho9Xx j6T6zwgrEL0zxbz9+OpjYc+BFVXSA/ZKVpyiMLxqxXtGC7mevF0eFZiqiZeczt77ceDs WMaMU+ZizByZ6S8FSp22KmN0qIG32dClDL9AQ9RB2CkjRP0lP6Yr97DONk/zQ/xaRU2x Dxlg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=QxLOBUUPsmcmYWvGNUQT+fwEBAPKT3eRXC/liKQ+kU0=; b=r34yKCgtyEFrYtNy2fO93L/bTFivbstdId6Yk7VBR3xmgBLERAdJrPGYWYgLv2zMHN h5tY5vBYtaYUrTc6dC5Sy1xS5lkvpHxwZs1/dw8OM6R8JPDa3VEBGKXlZ+tPQdRyn/tR FLaWM0WFfM4VXiuBQ9ipHG3X35eCJse+lIEEXvxjjmoHKJSuoVyEyXX4uJlYNFrIWJWI JAhRMOwVO2uYD2+mHoYBF7Ua2BMXWaNRRkGPfEsncq0kANJmFDyNfY7tOuuNMk2y+iQf GssBLxFKGX2yIHGjUguzUqYskAIN53Zfuprhejg7bpLSLbpEGHoEXakbVAs4Pq+y58rk A0bw== X-Gm-Message-State: AOAM532tf+a9Hw81ytkWfrunrCUzxTZJfsm3fCDbWrba5k38mmJjr52o eVW2Vy97bepUWJ4f+gMMqXeqKVdpT6A5YQ== X-Google-Smtp-Source: ABdhPJyRe3Q+W1HOZzcCNDB6XF/GW8hCGHQBzfUis/17lKiAuhfokV8UwfRPKqEosOvGmtFJyJLO3Q== X-Received: by 2002:a62:82cb:0:b029:1f6:213b:6590 with SMTP id w194-20020a6282cb0000b02901f6213b6590mr21258088pfd.17.1617585683327; Sun, 04 Apr 2021 18:21:23 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:23 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy , Fabiano Rosas Subject: [PATCH v6 23/48] KVM: PPC: Book3S HV P9: Move setting HDEC after switching to guest LPCR Date: Mon, 5 Apr 2021 11:19:23 +1000 Message-Id: <20210405011948.675354-24-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org LPCR[HDICE]=0 suppresses hypervisor decrementer exceptions on some processors, so it must be enabled before HDEC is set. Rather than set it in the host LPCR then setting HDEC, move the HDEC update to after the guest MMU context (including LPCR) is loaded. There shouldn't be much concern with delaying HDEC by some 10s or 100s of nanoseconds by setting it a bit later. Reviewed-by: Alexey Kardashevskiy Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 19 +++++++------------ 1 file changed, 7 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 782c02520741..f2aefd478d8c 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3557,20 +3557,9 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, host_dawrx1 = mfspr(SPRN_DAWRX1); } - /* - * P8 and P9 suppress the HDEC exception when LPCR[HDICE] = 0, - * so set HDICE before writing HDEC. - */ - mtspr(SPRN_LPCR, kvm->arch.host_lpcr | LPCR_HDICE); - isync(); - hdec = time_limit - mftb(); - if (hdec < 0) { - mtspr(SPRN_LPCR, kvm->arch.host_lpcr); - isync(); + if (hdec < 0) return BOOK3S_INTERRUPT_HV_DECREMENTER; - } - mtspr(SPRN_HDEC, hdec); if (vc->tb_offset) { u64 new_tb = mftb() + vc->tb_offset; @@ -3616,6 +3605,12 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, switch_mmu_to_guest_radix(kvm, vcpu, lpcr); + /* + * P9 suppresses the HDEC exception when LPCR[HDICE] = 0, + * so set guest LPCR (with HDICE) before writing HDEC. 
+ */ + mtspr(SPRN_HDEC, hdec); + mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0); mtspr(SPRN_SRR1, vcpu->arch.shregs.srr1); From patchwork Mon Apr 5 01:19:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462198 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=d+yzFvdJ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCY61FXSz9sjD for ; Mon, 5 Apr 2021 11:21:30 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231844AbhDEBVe (ORCPT ); Sun, 4 Apr 2021 21:21:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39972 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231841AbhDEBVd (ORCPT ); Sun, 4 Apr 2021 21:21:33 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B3EA2C061756 for ; Sun, 4 Apr 2021 18:21:26 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id h20so4927144plr.4 for ; Sun, 04 Apr 2021 18:21:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=eg/LONsL/v52XjsVvWJ/gHvuI8LKJE3Kwm8pO3cmUh0=; b=d+yzFvdJsE6Udw/oxnGd3ktn5ac7k831rkFpUl8FtxS001A2LxpL5O7ag2EL8uzCX3 oNP9V76IiCg8Yt/mREBeiXENpwdVugAk1mYOOibYGRzjwfY4z35OqXiOQtQqTlzO6FKE SyjYiAygNkZNZNDg8mNnXq6bVYFkWPvmDrRnrp0pyRHGaDY+X2np7tMi+8n2E9Ybjujs hIOCTF6iS1bPz3WidEreQ2IbdV5terOn84aWpepT/gHuJF+7tHTPjDSA6XM2crwJwiAb KM5wrSLH/b4XtULkAs2qwB1XjUZO87BJ1zfrGzvLV24U11X42SLmEYC7Efj3q0cGu9rg CoCw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=eg/LONsL/v52XjsVvWJ/gHvuI8LKJE3Kwm8pO3cmUh0=; b=YHQGx38AdWtgfU8q4LeMlBAii59mNRFTRxHZm7rDyXRDWLgkiXcHRgJMe+pFTBCr3n Guk+OlAvZvqzSkQk6tULS4wvfIeFV0tQFAHap+e6IdAhu/qH6S5irFrKyCPmONApsNDP nmaHwoqFXpFkfwl/GWKtBoZ92XEbKOG7S4XspX8xjEcHreFzK6q2GnG0wL7D7xq695OG W8loxI++Q1VfncfUT8zhA/KMwlkoZez11S0T2JW+8ZHaLG9uaj3U3JOKrgbGfOhF2gMU rOF83GAFuCThz5QvzgAtI/8I5KDtyTSXGM4+Vy+ek6goNaaBT0kXVsiq1XZEVhsUgIXU 7lZw== X-Gm-Message-State: AOAM531QyDr1xu+8aB9uX+WScOZc4E85z6ooxs4Bs2FtsL7C9NrGeMhu Z0ophRY0yqm3EUX8j/Ih9+qIftjsaJqMKg== X-Google-Smtp-Source: ABdhPJwuC5NMjyPrlIoyrCpfwlN5Tc7wQeInmrVcBhfZFtuqGaK5f1sraFyGvnd4fEdyS7KGbpZQpw== X-Received: by 2002:a17:902:780c:b029:e6:9193:56e2 with SMTP id p12-20020a170902780cb02900e6919356e2mr21811789pll.39.1617585686267; Sun, 04 Apr 2021 18:21:26 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:26 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy 
Subject: [PATCH v6 24/48] KVM: PPC: Book3S HV P9: Use large decrementer for HDEC Date: Mon, 5 Apr 2021 11:19:24 +1000 Message-Id: <20210405011948.675354-25-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org On processors that don't suppress the HDEC exceptions when LPCR[HDICE]=0, this could help reduce needless guest exits due to leftover exceptions on entering the guest. Reviewed-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 2 ++ arch/powerpc/kernel/time.c | 1 + arch/powerpc/kvm/book3s_hv.c | 3 ++- 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index 8dd3cdb25338..68d94711811e 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -18,6 +18,8 @@ #include /* time.c */ +extern u64 decrementer_max; + extern unsigned long tb_ticks_per_jiffy; extern unsigned long tb_ticks_per_usec; extern unsigned long tb_ticks_per_sec; diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index b67d93a609a2..fc42594c8223 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -89,6 +89,7 @@ static struct clocksource clocksource_timebase = { #define DECREMENTER_DEFAULT_MAX 0x7FFFFFFF u64 decrementer_max = DECREMENTER_DEFAULT_MAX; +EXPORT_SYMBOL_GPL(decrementer_max); /* for KVM HDEC */ static int decrementer_set_next_event(unsigned long evt, struct clock_event_device *dev); diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f2aefd478d8c..3029ffb4b792 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3675,7 +3675,8 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, vc->tb_offset_applied = 0; } - mtspr(SPRN_HDEC, 0x7fffffff); + /* HDEC must be at least as large as DEC, so decrementer_max fits */ + mtspr(SPRN_HDEC, decrementer_max); switch_mmu_to_host_radix(kvm, host_pidr); From patchwork Mon Apr 5 01:19:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462199 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Y30UY1Og; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCY80M1fz9t0k for ; Mon, 5 Apr 2021 11:21:32 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231845AbhDEBVg (ORCPT ); Sun, 4 Apr 2021 21:21:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39986 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231841AbhDEBVf (ORCPT ); Sun, 4 Apr 2021 21:21:35 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EFDEAC061756 for ; Sun, 4 Apr 2021 18:21:29 -0700 
(PDT) Received: by mail-pl1-x62b.google.com with SMTP id a6so1597699pls.1 for ; Sun, 04 Apr 2021 18:21:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=/MfbnRN8dnuzRj+Aq/3W1T9BSSMMXrvqkGuD+Sv636E=; b=Y30UY1OgHfg+ZONfB5UWRtxeu1z7SAmRVSRRpwSWYu1X0M60zAQm/dxBMlFadC71Xn gFCwn0aAoi9Uz7cgoch+a4fOwIZt71Qj0R7R9Gz4T2voCbECnuPZQISATswKeUw7kBpn LO22W8C4/aUDRo15mnzq51q96lAbgtQ4POROIHidD+OeV60hA2ptBmJwu1nBbZcv3qYg EmLlAu1bKtBwHCrjsXywZXuJrkx0OyhUeZ8LkNv1DbzdjT+xfCMTyvC3gtCxZW7bL4Dj s4zHNWdxewjs0waBce7pteF4al5f/fLN4BLTiq2pnGnGXqGZ774f48hz0uGzIaMdJ71w kxag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=/MfbnRN8dnuzRj+Aq/3W1T9BSSMMXrvqkGuD+Sv636E=; b=cHbefwt+6/sXZ8V/vkQlQNd8dVcpQEZS/vj5L3NcKbsaMhEvyGTQstvMyoTejqYC5J b+z43vTeUEot1cvc5e5kz/avd34G7xxBxLziqAuaSObywKL6NpN2qo3HGXNh4JhgIxTs 5tdjxGv3AMua8a9xZF+gOR4bN+9lqO6g1W3igv6adDzqjndyuz7JxRaD5YaH+jTGDssL QqfP4GWP+qeJZk0/cHLcP7NxlyoCWlkRKrMxu9R5tfiu6pTStUNfAxPVm+GHhN/aDUnO aMqrocc0cO5haYLy+drUSp9LxhW7NTlegP7MJvI9KGx6f4/HcGdPIR0XIB6EkJsZ1jz0 Ecvw== X-Gm-Message-State: AOAM531FO1R06hCaWjBdMyuKjPIYz/GMa0YTZFfTD4xM9UGBokiIGt0V gOKK74+ZxU9YQgf42bTQbYKo9NwtA/hSVA== X-Google-Smtp-Source: ABdhPJxFZ92JUxArnB0Kjn4AsWGbEnW4X0TXT/LT+PNWX5v4UnSYf00JBDJV/2+536BmLm/nculj2Q== X-Received: by 2002:a17:90a:9f8d:: with SMTP id o13mr23645774pjp.25.1617585689484; Sun, 04 Apr 2021 18:21:29 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:29 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 25/48] KVM: PPC: Book3S HV P9: Use host timer accounting to avoid decrementer read Date: Mon, 5 Apr 2021 11:19:25 +1000 Message-Id: <20210405011948.675354-26-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org There is no need to save away the host DEC value, as it is derived from the host timer subsystem, which maintains the next timer time. 
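In outline, the change replaces a save/restore of SPRN_DEC with a recomputation from the per-CPU next-timer bookkeeping. The sketch below is an editor's illustration only: the demo_* names are invented, next_timer stands in for timer_get_next_tb() (the per-CPU decrementers_next_tb value exported by this patch) and tb for mftb(); the real hunks follow below.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Entry: instead of reading SPRN_DEC, compare the timebase with the host's
 * next timer. If it has already passed, return to the host immediately. */
static bool demo_host_timer_expired(uint64_t tb, uint64_t next_timer)
{
	return tb >= next_timer;	/* -> BOOK3S_INTERRUPT_HV_DECREMENTER */
}

/* Entry: clamp the guest's time limit to the host's next timer. */
static uint64_t demo_clamped_time_limit(uint64_t next_timer, uint64_t time_limit)
{
	return next_timer < time_limit ? next_timer : time_limit;
}

/* Exit: the host decrementer is reprogrammed from the same bookkeeping,
 * so no DEC value had to be saved across the guest run. */
static int64_t demo_host_dec_on_exit(uint64_t tb, uint64_t next_timer)
{
	return (int64_t)(next_timer - tb);	/* value written back to SPRN_DEC */
}

int main(void)
{
	uint64_t tb = 1000, next_timer = 5000, time_limit = 8000;

	printf("expired=%d limit=%llu dec=%lld\n",
	       demo_host_timer_expired(tb, next_timer),
	       (unsigned long long)demo_clamped_time_limit(next_timer, time_limit),
	       (long long)demo_host_dec_on_exit(tb + 2500, next_timer));
	return 0;
}

Nothing here is specific to KVM; it only restates the arithmetic the patch relies on, which is why dec_expires no longer needs to be stashed in kvm_hstate on this path.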
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 5 +++++ arch/powerpc/kernel/time.c | 1 + arch/powerpc/kvm/book3s_hv.c | 14 +++++++------- 3 files changed, 13 insertions(+), 7 deletions(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index 68d94711811e..0128cd9769bc 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -101,6 +101,11 @@ extern void __init time_init(void); DECLARE_PER_CPU(u64, decrementers_next_tb); +static inline u64 timer_get_next_tb(void) +{ + return __this_cpu_read(decrementers_next_tb); +} + /* Convert timebase ticks to nanoseconds */ unsigned long long tb_to_ns(unsigned long long tb_ticks); diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index fc42594c8223..8b9b38a8ce57 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -109,6 +109,7 @@ struct clock_event_device decrementer_clockevent = { EXPORT_SYMBOL(decrementer_clockevent); DEFINE_PER_CPU(u64, decrementers_next_tb); +EXPORT_SYMBOL_GPL(decrementers_next_tb); static DEFINE_PER_CPU(struct clock_event_device, decrementers); #define XSEC_PER_SEC (1024*1024) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 3029ffb4b792..5c4ccebce682 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3703,16 +3703,15 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long host_amr = mfspr(SPRN_AMR); unsigned long host_fscr = mfspr(SPRN_FSCR); s64 dec; - u64 tb; + u64 tb, next_timer; int trap, save_pmu; - dec = mfspr(SPRN_DEC); tb = mftb(); - if (dec < 0) + next_timer = timer_get_next_tb(); + if (tb >= next_timer) return BOOK3S_INTERRUPT_HV_DECREMENTER; - local_paca->kvm_hstate.dec_expires = dec + tb; - if (local_paca->kvm_hstate.dec_expires < time_limit) - time_limit = local_paca->kvm_hstate.dec_expires; + if (next_timer < time_limit) + time_limit = next_timer; vcpu->arch.ceded = 0; @@ -3895,7 +3894,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - mtspr(SPRN_DEC, local_paca->kvm_hstate.dec_expires - mftb()); + next_timer = timer_get_next_tb(); + mtspr(SPRN_DEC, next_timer - mftb()); mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); kvmhv_load_host_pmu(); From patchwork Mon Apr 5 01:19:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462200 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=tSmSaEhm; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYL4bJ1z9sWY for ; Mon, 5 Apr 2021 11:21:42 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231847AbhDEBVp (ORCPT ); Sun, 4 Apr 2021 21:21:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231841AbhDEBVk (ORCPT ); Sun, 4 Apr 2021 21:21:40 -0400 Received: from 
mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 77114C061756 for ; Sun, 4 Apr 2021 18:21:33 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id y2so4928828plg.5 for ; Sun, 04 Apr 2021 18:21:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=UTdFrL60rN3R2a4gJO4rGMS50+4Qry4ThLyQTVbs3+w=; b=tSmSaEhmk9tUFLcu1b7wpbjcIvl6TNiIA+rpbb9nvW7B9ScMmNz07Ey/y8XAsmiNw1 G7YKBEtHvp0pZ4b3F/xxk+Xk4E3di2GhxWEhcQg8gPQzdT2HhlMmdK8REEr48UHfbZAW 0ciO2fcd1J6cVtkMm9fde2IGaMfhyIJH7UjBlCNcQhbdiY49BX2LIIlGfY1e7eGEuSSN 4tdkYuQzzsGadDF+7nBm/TzqGOLa7Tof31Jmu4llRWEkritWt8cnIOmrMsAwUE01WhHd G/Imu4r0SZyiDJ8+X1hgwY1eRXEgTksVs82drRyx6LtkY8u/zIyYg1HnRtoALHbB6Xe+ 3suA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=UTdFrL60rN3R2a4gJO4rGMS50+4Qry4ThLyQTVbs3+w=; b=cTs2qaLSvyd+yQPnuBAdb7I7lrYu0yOKuFXjXYDkV+zq+nQgCEvZp7iMUYr8FFwC1q 87TbLmwjdCVEw8g0DiKT5IIree/aXBU2EJM+0m6kVRCkUCwNE9KUrbY8NR3UyRH7V8IR yqHwoBao9tdyJzVOlizaQtZ/saiJ/+6zSNZBO/QCongTlUvnYGAzXRjM/aGP2ER+g+bw Yl6fVJ4jXZEvtETjheJjQ29ncNyQ9XxhcqMhiFs4W7DVHpiIgr/aoYG8p7RpQ2/ta6qJ Hc0tdJycJTV/tS/4IElexSDGsRXzTY3SMhCFPGSi/hdXEpVGhmYpaGyKWH0TNcVfBb6y h7jg== X-Gm-Message-State: AOAM531tgQfi2s8aqnH8in+19V0frsReMnppWsOEn7uTb4jnVllVClqV a5yabwN/kx+dvl5Vd2ayOfe5MJZ493lwjQ== X-Google-Smtp-Source: ABdhPJzt4Nw7omWht0GkfIRhyfsUb8mGIt3gtOEO5he5gZ3Pvyl5AH54D5QTOU0D0ZDar/Lg6ND29A== X-Received: by 2002:a17:90a:5d14:: with SMTP id s20mr24457309pji.6.1617585692953; Sun, 04 Apr 2021 18:21:32 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:32 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v6 26/48] KVM: PPC: Book3S HV P9: Reduce mftb per guest entry/exit Date: Mon, 5 Apr 2021 11:19:26 +1000 Message-Id: <20210405011948.675354-27-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org mftb is serialising (dispatch next-to-complete) so it is heavy weight for a mfspr. Avoid reading it multiple times in the entry or exit paths. A small number of cycles delay to timers is tolerable. 
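Concretely, one serialising timebase read is done up front and the cached value is reused; a before/after sketch of the shape of the change (mirroring the diff that follows):

	/* Before: each consumer performs its own serialising mftb. */
	hdec = time_limit - mftb();
	...
	mtspr(SPRN_DEC, vcpu->arch.dec_expires - mftb());

	/* After: read the timebase once; the few cycles of skew in the
	 * programmed timer values is tolerable. */
	tb = mftb();
	hdec = time_limit - tb;
	...
	mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb);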
Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 5c4ccebce682..f4e5a64457e6 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3557,12 +3557,13 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, host_dawrx1 = mfspr(SPRN_DAWRX1); } - hdec = time_limit - mftb(); + tb = mftb(); + hdec = time_limit - tb; if (hdec < 0) return BOOK3S_INTERRUPT_HV_DECREMENTER; if (vc->tb_offset) { - u64 new_tb = mftb() + vc->tb_offset; + u64 new_tb = tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); tb = mftb(); if ((tb & 0xffffff) < (new_tb & 0xffffff)) @@ -3760,7 +3761,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, if (!(vcpu->arch.ctrl & 1)) mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1); - mtspr(SPRN_DEC, vcpu->arch.dec_expires - mftb()); + mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); if (kvmhv_on_pseries()) { /* @@ -3895,7 +3896,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->in_guest = 0; next_timer = timer_get_next_tb(); - mtspr(SPRN_DEC, next_timer - mftb()); + mtspr(SPRN_DEC, next_timer - tb); mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); kvmhv_load_host_pmu(); From patchwork Mon Apr 5 01:19:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462201 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=XqqXjBva; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYM216Fz9sj1 for ; Mon, 5 Apr 2021 11:21:43 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231841AbhDEBVr (ORCPT ); Sun, 4 Apr 2021 21:21:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40020 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231555AbhDEBVo (ORCPT ); Sun, 4 Apr 2021 21:21:44 -0400 Received: from mail-pg1-x530.google.com (mail-pg1-x530.google.com [IPv6:2607:f8b0:4864:20::530]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83E1AC0613E6 for ; Sun, 4 Apr 2021 18:21:36 -0700 (PDT) Received: by mail-pg1-x530.google.com with SMTP id w10so1676810pgh.5 for ; Sun, 04 Apr 2021 18:21:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=f+RLgCMRv6O15PdiYr5GNGDONEdfPWC0BUYjL4OnyJc=; b=XqqXjBvasFw+mvDlcQ/12Ge0gaQq5Ipff7tNUFiI3mRhX47BqpZrogoLW4VASjLw3j tL3M9YM3GURnZw2ivDP3VxuN0HOsRwtSQcY0APA+mrh2Jv292Afv0Y5Xgfz3//5zOlKo AYBcAeIzBKO2s8x3cqimf8adjZBB5tB0Zo+Dko78tC0g/wPhgmTfuCQJcmT6NzdnM0Q2 nSQHn/ZKxKNawFQ4OLk0D5X9CZ63MKrAs+aTIz+x82dufca/xjOo+vBWkkqD9GKktmsj 0pmi2e8IeXlB7vXQ+FUr/ng6M9whqx3GjWpvgRu3cYYSiR3xGyhd5HuQXzLh+3CeveT+ 1fBQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=f+RLgCMRv6O15PdiYr5GNGDONEdfPWC0BUYjL4OnyJc=; b=PDwL0bY+8LwV2C6UyjrQsS2mTZ3XT9j/7+XE8eazgzX5AczUuA+SkAIFza6Zd7vo47 9fcBioV3O/n2lkjYUJlRp8iog6rKSTA3ZbKf9ZxukAUKUo6mplBWotqBpW6/NSRnu2b2 IR9CX5Tps5iluHdoZMAqby/yF9Nu9Uzk0K831LrHRtBVcz9WNfxX3CLRHhCnYpUkNs85 p0P3KQC8rBwbTQMu6os3Duuo/dgwz3vv0I155/tFeyVYdugpLacRToc2QfxJfTHrZGMC xNLUyneGuY6dsmMkwUVDyPdy289maox1nadunWrNsNBH1zn/BweTGdR75Vf5JrRtHIrh vjsA== X-Gm-Message-State: AOAM533ikK2S4Pd8uhCrrsqdYKitxuw7VlfKqJfpb+IPFLA3hmjeNACk RMG/jg2yQu2/wTjjgubWZfEp9/Mrv4iYew== X-Google-Smtp-Source: ABdhPJxhaCnIkBD0fPONlRhruTAvtVBHOrnpyr67ZOIEYp3UTwNJ2MhnxAyUI+9q04Ln6f7WDgKoyQ== X-Received: by 2002:a65:41c6:: with SMTP id b6mr20794644pgq.7.1617585695980; Sun, 04 Apr 2021 18:21:35 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:35 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 27/48] KVM: PPC: Book3S HV P9: Reduce irq_work vs guest decrementer races Date: Mon, 5 Apr 2021 11:19:27 +1000 Message-Id: <20210405011948.675354-28-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org irq_work's use of the DEC SPR is racy with guest<->host switch and guest entry which flips the DEC interrupt to guest, which could lose a host work interrupt. This patch closes one race, and attempts to comment another class of races. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f4e5a64457e6..dae59f05ef50 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3761,6 +3761,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, if (!(vcpu->arch.ctrl & 1)) mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1); + /* + * When setting DEC, we must always deal with irq_work_raise via NMI vs + * setting DEC. The problem occurs right as we switch into guest mode + * if a NMI hits and sets pending work and sets DEC, then that will + * apply to the guest and not bring us back to the host. + * + * irq_work_raise could check a flag (or possibly LPCR[HDICE] for + * example) and set HDEC to 1? That wouldn't solve the nested hv + * case which needs to abort the hcall or zero the time limit. + * + * XXX: Another day's problem. 
+ */ mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); if (kvmhv_on_pseries()) { @@ -3897,6 +3909,9 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, next_timer = timer_get_next_tb(); mtspr(SPRN_DEC, next_timer - tb); + /* We may have raced with new irq work */ + if (test_irq_work_pending()) + set_dec(1); mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); kvmhv_load_host_pmu(); From patchwork Mon Apr 5 01:19:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462202 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=tTgscvMk; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYP5wWtz9sX5 for ; Mon, 5 Apr 2021 11:21:45 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231852AbhDEBVs (ORCPT ); Sun, 4 Apr 2021 21:21:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40028 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231851AbhDEBVr (ORCPT ); Sun, 4 Apr 2021 21:21:47 -0400 Received: from mail-pf1-x42c.google.com (mail-pf1-x42c.google.com [IPv6:2607:f8b0:4864:20::42c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C93F2C061756 for ; Sun, 4 Apr 2021 18:21:39 -0700 (PDT) Received: by mail-pf1-x42c.google.com with SMTP id q5so7186063pfh.10 for ; Sun, 04 Apr 2021 18:21:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+yz3nCHgncTH1iYxogo/O0czpRYa7H+0gjI15C0SIns=; b=tTgscvMkNalSwwTNq71/3p8BkL40aWAK/vQgyTG0WMQCn5gWjt9K6vG0CmxuIDZr/e 1etwZED7jEAYE7P5CChqkKaLiEYM+Jn+990O4da6mdvImhCuvSMu+pMjDGq9CX6LNzxy urBXqWrP5l43MRHkctsoxSPuFKRKk617Bo0WfySnBQLK8IJ0b1X9oZO/zvV7v7OkUF4q rZCwKXPlmDony8iIiegr9MmEO129bxBlpQxW08twke97y1uEUGd7+XB+zh/ysKGCOI+p fq4iV+GhdPk/C1oX/ySjB13HfTilsZS2ZISqv+FoAfM1lqBMwrU47TXYneKvUloeHUz+ ZFTA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+yz3nCHgncTH1iYxogo/O0czpRYa7H+0gjI15C0SIns=; b=gX977CVXcIjrS+ayS1KS/Ooh03LWYG6uQuRe0VnUBAFThthjT2oSceAxmhFsGv9fL/ jGeP/Xp31B34Y61qMc0bIAK8+bNgtn2FA0AxPmI6yLfZe2KVko6t73CMOqs+eT5YtZt4 UCfdhMbnadMqhEVfJHSHN3SWgemwegSFQHwNOOfRwWBjhKdvdeay+BaIYVPwPSeANSPa CupV1LPkJKAxKlRbjlvktu4EEFfPvxAoOalbGaydrTby0Ja4/ec9U/E7UVp79jzOtEPT fUhsZ21idaP0b4rvUxUd8RRUdPsllGEaEv2LrnEjAyY8pkGLtdCrbYwK5WV0x3nJi7J9 SgxQ== X-Gm-Message-State: AOAM531qRgTMgu1LX6SwqtMFQflX27qXWEU5DYAh38BeUQJ9kswKaBzr fTrACkxLbh7n9UJTDOwgo4eXJK88Zi/MQg== X-Google-Smtp-Source: ABdhPJxG/XX78xpw1PSF9I438oh7eHsSm3t8WDghRmuYLmmb29ptdu+WOVjzBD9Zt2FgGGlgszJo1Q== X-Received: by 2002:a62:8f4a:0:b029:20a:448e:7018 with SMTP id n71-20020a628f4a0000b029020a448e7018mr21215419pfd.62.1617585699303; Sun, 04 Apr 2021 18:21:39 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by 
smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:39 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy Subject: [PATCH v6 28/48] KMV: PPC: Book3S HV: Use set_dec to set decrementer to host Date: Mon, 5 Apr 2021 11:19:28 +1000 Message-Id: <20210405011948.675354-29-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The host Linux timer code arms the decrementer with the value 'decrementers_next_tb - current_tb' using set_dec(), which stores val - 1 on Book3S-64, which is not quite the same as what KVM does to re-arm the host decrementer when exiting the guest. This shouldn't be a significant change, but it makes the logic match and avoids this small extra change being brought into the next patch. Suggested-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index dae59f05ef50..65ddae3958ab 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3908,7 +3908,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->in_guest = 0; next_timer = timer_get_next_tb(); - mtspr(SPRN_DEC, next_timer - tb); + set_dec(next_timer - tb); /* We may have raced with new irq work */ if (test_irq_work_pending()) set_dec(1); From patchwork Mon Apr 5 01:19:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462204 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=LREuIrsa; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYS4t87z9shn for ; Mon, 5 Apr 2021 11:21:48 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231555AbhDEBVu (ORCPT ); Sun, 4 Apr 2021 21:21:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40038 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231848AbhDEBVs (ORCPT ); Sun, 4 Apr 2021 21:21:48 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1F4C2C0613E6 for ; Sun, 4 Apr 2021 18:21:43 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id t20so4905745plr.13 for ; Sun, 04 Apr 2021 18:21:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Cd6t9aTE+ivLiFaiPdzeNP2ex++q6hisetNhEX3OXsw=; b=LREuIrsaCjYT8AboqP4MfBoV3/XCl++F7ttb+RX3mwsD7vP8v3Oi5X4r2DkyjSN431 
kWuZhm87hRLZoDBzJ7Qk3mAS5tUITmf0Bx7rCWjlkNinF1Icsvp1y4Cm1+duLvyGJ8Q9 MIFUrWTaBhIaerG22AxSTpmupYLeDBIF0JE9R73FN6YUW8Nx4TfmTQ3928+YarBHemHH LQvAyO77o/tu+Ujdq/B3mjz7jmTmtT9LojRX93w4HFU5LB2EkW024lWque8r28DP9pEs LU6R5QthVPPq51WFAI7UnKjnuz3btLRbw22aQulnkrOBvV9Iy37OiZLnDMVGGezxxgQK p6fQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Cd6t9aTE+ivLiFaiPdzeNP2ex++q6hisetNhEX3OXsw=; b=ozuLVaI0oyjeFHDaXUp9bi0vnDw8X0xul8UecE+uy5klGzmB/WpN4e7/5YJuF+FOjB np2LAaPYlCHyvKtZZcFDuUS5qDQlI6uizOy1JiqUhtC/lo2YIMz2WQHNtimavHeYBlqs q2INGDteG7nKpwGjZeF1FafkZmZacQrx2dpGj2WFdFRMYLY4r7vxms6W/aXiq+8d6J4K LSqtp5vs7GsvORa1JMWQ3YviTXU2QglKBhvcXoHej/Py8TRudywLR1oTTOf+XVuZ9JH5 GE8AJu7HP/iPpzyvurLe1OKRyzx4SFf+ELVGxTyl4meLfXIQ2WAhLti5cqfE9zCfErQN dLUQ== X-Gm-Message-State: AOAM531a8IouN/PfTgQDUjRBhvmL3CDssG2O1e4/vMYCF6uGCdnGAP+w tRkHy9YVwEJnuzCiUsDGtS8FHEyvpwVPNw== X-Google-Smtp-Source: ABdhPJy6xdTPqpr/7ODLjZgW9V1v5ekNi6yVVZF4iPqdbpjZdSsV1KfahPb5OxVnyQg0k3af88ZEyg== X-Received: by 2002:a17:90b:4d0f:: with SMTP id mw15mr24363649pjb.92.1617585702565; Sun, 04 Apr 2021 18:21:42 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:42 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 29/48] powerpc/time: add API for KVM to re-arm the host timer/decrementer Date: Mon, 5 Apr 2021 11:19:29 +1000 Message-Id: <20210405011948.675354-30-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rather than have KVM look up the host timer and fiddle with the irq-work internal details, have the powerpc/time.c code provide a function for KVM to re-arm the Linux timer code when exiting a guest. This is implementation has an improvement over existing code of marking a decrementer interrupt as soft-pending if a timer has expired, rather than setting DEC to a -ve value, which tended to cause host timers to take two interrupts (first hdec to exit the guest, then the immediate dec). 
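A sketch of the behaviour described above, using names from the diff that follows (set_dec_or_work() re-arms DEC and rechecks irq_work; PACA_IRQ_DEC marks a soft-pending decrementer interrupt); this illustrates the described intent and is not a copy of the posted helper:

	void timer_rearm_host_dec(u64 now)
	{
		u64 next = __this_cpu_read(decrementers_next_tb);

		if (now >= next) {
			/* Timer has already expired: mark DEC soft-pending
			 * rather than loading a negative value that would
			 * fire a second interrupt right after the HDEC exit. */
			local_paca->irq_happened |= PACA_IRQ_DEC;
		} else {
			set_dec_or_work(next - now);	/* normal re-arm */
		}
	}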
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 4 ++++ arch/powerpc/kernel/time.c | 41 +++++++++++++++++++++++++-------- arch/powerpc/kvm/book3s_hv.c | 6 +---- 3 files changed, 37 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index 0128cd9769bc..924b2157882f 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -106,6 +106,10 @@ static inline u64 timer_get_next_tb(void) return __this_cpu_read(decrementers_next_tb); } +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +void timer_rearm_host_dec(u64 now); +#endif + /* Convert timebase ticks to nanoseconds */ unsigned long long tb_to_ns(unsigned long long tb_ticks); diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 8b9b38a8ce57..8bbcc6be40c0 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -563,13 +563,43 @@ void arch_irq_work_raise(void) preempt_enable(); } +static void set_dec_or_work(u64 val) +{ + set_dec(val); + /* We may have raced with new irq work */ + if (unlikely(test_irq_work_pending())) + set_dec(1); +} + #else /* CONFIG_IRQ_WORK */ #define test_irq_work_pending() 0 #define clear_irq_work_pending() +static void set_dec_or_work(u64 val) +{ + set_dec(val); +} #endif /* CONFIG_IRQ_WORK */ +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +void timer_rearm_host_dec(u64 now) +{ + u64 *next_tb = this_cpu_ptr(&decrementers_next_tb); + + WARN_ON_ONCE(!arch_irqs_disabled()); + WARN_ON_ONCE(mfmsr() & MSR_EE); + + if (now >= *next_tb) { + now = *next_tb - now; + set_dec_or_work(now); + } else { + local_paca->irq_happened |= PACA_IRQ_DEC; + } +} +EXPORT_SYMBOL_GPL(timer_rearm_host_dec); +#endif + /* * timer_interrupt - gets called when the decrementer overflows, * with interrupts disabled. 
@@ -630,10 +660,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt) } else { now = *next_tb - now; if (now <= decrementer_max) - set_dec(now); - /* We may have raced with new irq work */ - if (test_irq_work_pending()) - set_dec(1); + set_dec_or_work(now); __this_cpu_inc(irq_stat.timer_irqs_others); } @@ -875,11 +902,7 @@ static int decrementer_set_next_event(unsigned long evt, struct clock_event_device *dev) { __this_cpu_write(decrementers_next_tb, get_tb() + evt); - set_dec(evt); - - /* We may have raced with new irq work */ - if (test_irq_work_pending()) - set_dec(1); + set_dec_or_work(evt); return 0; } diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 65ddae3958ab..353a0c8b79fa 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3907,11 +3907,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - next_timer = timer_get_next_tb(); - set_dec(next_timer - tb); - /* We may have raced with new irq work */ - if (test_irq_work_pending()) - set_dec(1); + timer_rearm_host_dec(tb); mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); kvmhv_load_host_pmu(); From patchwork Mon Apr 5 01:19:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462205 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=bGLJe6vD; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYY1hmpz9sWX for ; Mon, 5 Apr 2021 11:21:53 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231855AbhDEBV4 (ORCPT ); Sun, 4 Apr 2021 21:21:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40058 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231851AbhDEBVy (ORCPT ); Sun, 4 Apr 2021 21:21:54 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 30D17C061756 for ; Sun, 4 Apr 2021 18:21:47 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id t24so1193591pjw.4 for ; Sun, 04 Apr 2021 18:21:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=OOS3vg69VMrSnHGJdUyrzz96D+4D9TIEWp0GTQUCqFY=; b=bGLJe6vDxk38IH+gSEbVD0AYE3gasdsTmCo9B079Aoqyea6DdyxIa2lEHysoguNKhg SW/EDdHWLmGDCFbSuyM1vdjW4AErjASPfDaTdWvUzqcW6XV8yDK0S0ght/9vmoUmgmta N61kHeucKUTrbbRGjR9Dv/YV3N/NhStvX0pidqgj/JJzf5icrEGXoV8IaN8QLsLPRUc+ hgJiNQEgkIIf8nAqgHKHBxZtiJufrCxLy/cMCQ/C9OLwbW5klKf3Er/6BzoTw3f6j5zM CuTGC+1K0SROIRoIrZjGZc9SGyVK2Ccmo8ndmpJBc8u/AnQfJGLvVfHtGlF70XX2OG8i AfpA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=OOS3vg69VMrSnHGJdUyrzz96D+4D9TIEWp0GTQUCqFY=; 
b=tKtyngmU+gIvM1f8t+oUtv/HwmzPy2a3ShCSqHk8vovg7PRFEbGCGljBydQCxPbh9m FjLfkRS3QYan4b4fyo9IFNy3i36LlxKawcSyeMnWzVb7qxKBb9CWVqZ7x+MT0TmBi0EO Sf26QHGLGy+H6LMaDkal1Ka2i+vYFOU8zBfipSqo4UMNFca5KuEwjRGP6IQeePxg6u+O NKlLYPheRI+CnH+wBYEPA5xb0CZVHqmHMm/Et3dwXZfDyXez6RTAfXWUbk6JWERcXFh6 HeZB08p2NsbiH8yxYVjXb96bPK6sdxaJ1A4/WZ526b/D2PCJh+f+KVIVAwW3CMSCxfmE T7Dg== X-Gm-Message-State: AOAM532Fn2vFOPf23UjF48XxTRsvQ/CprGylkCVip/1SkJ0RkWo+ARUj +8iElUDSNGqTXHoZegDuvcl8HfFonWO41g== X-Google-Smtp-Source: ABdhPJxVLH6tmCi9Z34N5grtwFuPzaM0zSZ7BB49MKjEP1uW4EE/EqpqFIbVVSJ3WsM6vHX/QLzwsQ== X-Received: by 2002:a17:902:6a87:b029:e6:6a3d:29e8 with SMTP id n7-20020a1709026a87b02900e66a3d29e8mr21269387plk.10.1617585706172; Sun, 04 Apr 2021 18:21:46 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:45 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 30/48] KVM: PPC: Book3S HV P9: Implement the rest of the P9 path in C Date: Mon, 5 Apr 2021 11:19:30 +1000 Message-Id: <20210405011948.675354-31-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Almost all logic is moved to C, by introducing a new in_guest mode for the P9 path that branches very early in the KVM interrupt handler to P9 exit code. The main P9 entry and exit assembly is now only about 160 lines of low level stack setup and register save/restore, plus a bad-interrupt handler. There are two motivations for this, the first is just make the code more maintainable being in C. The second is to reduce the amount of code running in a special KVM mode, "realmode". In quotes because with radix it is no longer necessarily real-mode in the MMU, but it still has to be treated specially because it may be in real-mode, and has various important registers like PID, DEC, TB, etc set to guest. This is hostile to the rest of Linux and can't use arbitrary kernel functionality or be instrumented well. This initial patch is a reasonably faithful conversion of the asm code, but it does lack any loop to return quickly back into the guest without switching out of realmode in the case of unimportant or easily handled interrupts. As explained in previous changes, handling HV interrupts in real mode is not so important for P9. Use of Linux 64s interrupt entry code register conventions including paca EX_ save areas are brought into the KVM code. There is no point shuffling things into different paca save areas and making up a different calling convention for KVM. 
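The resulting control flow, stripped to a skeleton (a sketch only; the real __kvmhv_vcpu_entry_p9() in the diff below additionally handles timing accumulation, the HDSISR canary, the EX_ save areas, SLB sanitising and TM emulation):

	int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu)	/* sketch */
	{
		unsigned long host_msr = mfmsr();
		int trap;

		/* Point HSRR0/1 at the guest so the hrfid in the small asm
		 * stub drops straight into it. */
		mtspr(SPRN_HSRR0, vcpu->arch.regs.nip);
		mtspr(SPRN_HSRR1, (vcpu->arch.shregs.msr & ~MSR_HV) | MSR_ME);

		/* The new mode makes kvmppc_interrupt branch very early to
		 * the P9 exit code on the next interrupt taken. */
		local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_GUEST_HV_FAST;
		kvmppc_p9_enter_guest(vcpu);	/* asm: GPR load + HRFI_TO_GUEST */
		local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE;

		/* The asm exit stub leaves the trap number (and a few GPRs)
		 * in the paca; the rest is recovered here in C. */
		trap = local_paca->kvm_hstate.scratch0 & ~0x2;

		__mtmsrd(host_msr, 0);		/* restore host MSR */
		return trap;
	}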
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/asm-prototypes.h | 3 +- arch/powerpc/include/asm/kvm_asm.h | 3 +- arch/powerpc/include/asm/kvm_book3s_64.h | 8 + arch/powerpc/include/asm/kvm_host.h | 7 +- arch/powerpc/kernel/security.c | 5 +- arch/powerpc/kvm/Makefile | 1 + arch/powerpc/kvm/book3s_64_entry.S | 247 ++++++++++++++++++++++ arch/powerpc/kvm/book3s_hv.c | 9 +- arch/powerpc/kvm/book3s_hv_interrupt.c | 218 +++++++++++++++++++ arch/powerpc/kvm/book3s_hv_rmhandlers.S | 125 +---------- 10 files changed, 501 insertions(+), 125 deletions(-) create mode 100644 arch/powerpc/kvm/book3s_hv_interrupt.c diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h index 939f3c94c8f3..7c74c80ed994 100644 --- a/arch/powerpc/include/asm/asm-prototypes.h +++ b/arch/powerpc/include/asm/asm-prototypes.h @@ -122,6 +122,7 @@ extern s32 patch__call_flush_branch_caches3; extern s32 patch__flush_count_cache_return; extern s32 patch__flush_link_stack_return; extern s32 patch__call_kvm_flush_link_stack; +extern s32 patch__call_kvm_flush_link_stack_p9; extern s32 patch__memset_nocache, patch__memcpy_nocache; extern long flush_branch_caches; @@ -142,7 +143,7 @@ void kvmhv_load_host_pmu(void); void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use); void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu); -int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu); +void kvmppc_p9_enter_guest(struct kvm_vcpu *vcpu); long kvmppc_h_set_dabr(struct kvm_vcpu *vcpu, unsigned long dabr); long kvmppc_h_set_xdabr(struct kvm_vcpu *vcpu, unsigned long dabr, diff --git a/arch/powerpc/include/asm/kvm_asm.h b/arch/powerpc/include/asm/kvm_asm.h index a3633560493b..b4f9996bd331 100644 --- a/arch/powerpc/include/asm/kvm_asm.h +++ b/arch/powerpc/include/asm/kvm_asm.h @@ -146,7 +146,8 @@ #define KVM_GUEST_MODE_GUEST 1 #define KVM_GUEST_MODE_SKIP 2 #define KVM_GUEST_MODE_GUEST_HV 3 -#define KVM_GUEST_MODE_HOST_HV 4 +#define KVM_GUEST_MODE_GUEST_HV_FAST 4 /* ISA v3.0 with host radix mode */ +#define KVM_GUEST_MODE_HOST_HV 5 #define KVM_INST_FETCH_FAILED -1 diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index 9bb9bb370b53..c214bcffb441 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -153,9 +153,17 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu) return radix; } +int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu); + #define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */ #endif +/* + * Invalid HDSISR value which is used to indicate when HW has not set the reg. + * Used to work around an errata. + */ +#define HDSISR_CANARY 0x7fff + /* * We use a lock bit in HPTE dword 0 to synchronize updates and * accesses to each HPTE, and another bit to indicate non-present diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 05fb00d37609..fa0083345b11 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -690,7 +690,12 @@ struct kvm_vcpu_arch { ulong fault_dar; u32 fault_dsisr; unsigned long intr_msr; - ulong fault_gpa; /* guest real address of page fault (POWER9) */ + /* + * POWER9 and later, fault_gpa contains the guest real address of page + * fault for a radix guest, or segment descriptor (equivalent to result + * from slbmfev of SLB entry that translated the EA) for hash guests. 
+ */ + ulong fault_gpa; #endif #ifdef CONFIG_BOOKE diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c index e4e1a94ccf6a..3a607c11f20f 100644 --- a/arch/powerpc/kernel/security.c +++ b/arch/powerpc/kernel/security.c @@ -430,16 +430,19 @@ device_initcall(stf_barrier_debugfs_init); static void update_branch_cache_flush(void) { - u32 *site; + u32 *site, __maybe_unused *site2; #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE site = &patch__call_kvm_flush_link_stack; + site2 = &patch__call_kvm_flush_link_stack_p9; // This controls the branch from guest_exit_cont to kvm_flush_link_stack if (link_stack_flush_type == BRANCH_CACHE_FLUSH_NONE) { patch_instruction_site(site, ppc_inst(PPC_INST_NOP)); + patch_instruction_site(site2, ppc_inst(PPC_INST_NOP)); } else { // Could use HW flush, but that could also flush count cache patch_branch_site(site, (u64)&kvm_flush_link_stack, BRANCH_SET_LINK); + patch_branch_site(site2, (u64)&kvm_flush_link_stack, BRANCH_SET_LINK); } #endif diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile index cdd119028f64..ca7c86aa9360 100644 --- a/arch/powerpc/kvm/Makefile +++ b/arch/powerpc/kvm/Makefile @@ -88,6 +88,7 @@ kvm-book3s_64-builtin-tm-objs-$(CONFIG_PPC_TRANSACTIONAL_MEM) += \ ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) += \ + book3s_hv_interrupt.o \ book3s_hv_hmi.o \ book3s_hv_rmhandlers.o \ book3s_hv_rm_mmu.o \ diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S index 0c79c89c6a4b..d98ad580fd98 100644 --- a/arch/powerpc/kvm/book3s_64_entry.S +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -1,11 +1,16 @@ /* SPDX-License-Identifier: GPL-2.0-only */ #include #include +#include #include +#include #include #include +#include #include +#include #include +#include /* * These are branched to from interrupt handlers in exception-64s.S which set @@ -29,10 +34,15 @@ .global kvmppc_hcall .balign IFETCH_ALIGN_BYTES kvmppc_hcall: + lbz r10,HSTATE_IN_GUEST(r13) + cmpwi r10,KVM_GUEST_MODE_GUEST_HV_FAST + beq kvmppc_p9_exit_hcall ld r10,PACA_EXGEN+EX_R13(r13) SET_SCRATCH0(r10) li r10,0xc00 /* Now we look like kvmppc_interrupt */ + li r11,PACA_EXGEN + b 1f /* * KVM interrupt entry occurs after GEN_INT_ENTRY runs, and follows that @@ -53,6 +63,12 @@ kvmppc_hcall: .global kvmppc_interrupt .balign IFETCH_ALIGN_BYTES kvmppc_interrupt: + std r10,HSTATE_SCRATCH0(r13) + lbz r10,HSTATE_IN_GUEST(r13) + cmpwi r10,KVM_GUEST_MODE_GUEST_HV_FAST + beq kvmppc_p9_exit_interrupt + ld r10,HSTATE_SCRATCH0(r13) + lbz r11,HSTATE_IN_GUEST(r13) li r11,PACA_EXGEN cmpdi r10,0x200 bgt+ 1f @@ -154,3 +170,234 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) GET_SCRATCH0(r13) HRFI_TO_KERNEL #endif + +/* Stack frame offsets for kvmppc_hv_entry */ +#define SFS (144 + STACK_FRAME_MIN_SIZE) +#define STACK_SLOT_NVGPRS (SFS - 144) /* 18 gprs */ + +/* + * void kvmppc_p9_enter_guest(struct vcpu *vcpu); + * + * Enter the guest on a ISAv3.0 or later system where we have exactly + * one vcpu per vcore, and both the host and guest are radix, and threads + * are set to "indepdent mode". 
+ */ +.balign IFETCH_ALIGN_BYTES +_GLOBAL(kvmppc_p9_enter_guest) +EXPORT_SYMBOL_GPL(kvmppc_p9_enter_guest) + mflr r0 + std r0,PPC_LR_STKOFF(r1) + stdu r1,-SFS(r1) + + std r1,HSTATE_HOST_R1(r13) + + mfcr r4 + stw r4,SFS+8(r1) + + reg = 14 + .rept 18 + std reg,STACK_SLOT_NVGPRS + ((reg - 14) * 8)(r1) + reg = reg + 1 + .endr + + ld r4,VCPU_LR(r3) + mtlr r4 + ld r4,VCPU_CTR(r3) + mtctr r4 + ld r4,VCPU_XER(r3) + mtspr SPRN_XER,r4 + + ld r1,VCPU_CR(r3) + +BEGIN_FTR_SECTION + ld r4,VCPU_CFAR(r3) + mtspr SPRN_CFAR,r4 +END_FTR_SECTION_IFSET(CPU_FTR_CFAR) +BEGIN_FTR_SECTION + ld r4,VCPU_PPR(r3) + mtspr SPRN_PPR,r4 +END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) + + reg = 4 + .rept 28 + ld reg,__VCPU_GPR(reg)(r3) + reg = reg + 1 + .endr + + ld r4,VCPU_KVM(r3) + lbz r4,KVM_SECURE_GUEST(r4) + cmpdi r4,0 + ld r4,VCPU_GPR(R4)(r3) + bne .Lret_to_ultra + + mtcr r1 + + ld r0,VCPU_GPR(R0)(r3) + ld r1,VCPU_GPR(R1)(r3) + ld r2,VCPU_GPR(R2)(r3) + ld r3,VCPU_GPR(R3)(r3) + + HRFI_TO_GUEST + b . + + /* + * Use UV_RETURN ultracall to return control back to the Ultravisor + * after processing an hypercall or interrupt that was forwarded + * (a.k.a. reflected) to the Hypervisor. + * + * All registers have already been reloaded except the ucall requires: + * R0 = hcall result + * R2 = SRR1, so UV can detect a synthesized interrupt (if any) + * R3 = UV_RETURN + */ +.Lret_to_ultra: + mtcr r1 + ld r1,VCPU_GPR(R1)(r3) + + ld r0,VCPU_GPR(R3)(r3) + mfspr r2,SPRN_SRR1 + LOAD_REG_IMMEDIATE(r3, UV_RETURN) + sc 2 + +/* + * kvmppc_p9_exit_hcall and kvmppc_p9_exit_interrupt are branched to from + * above if the interrupt was taken for a guest that was entered via + * kvmppc_p9_enter_guest(). + * + * The exit code recovers the host stack and vcpu pointer, saves all guest GPRs + * and CR, LR, XER as well as guest MSR and NIA into the VCPU, then re- + * establishes the host stack and registers to return from the + * kvmppc_p9_enter_guest() function, which saves CTR and other guest registers + * (SPRs and FP, VEC, etc). + */ +.balign IFETCH_ALIGN_BYTES +kvmppc_p9_exit_hcall: + mfspr r11,SPRN_SRR0 + mfspr r12,SPRN_SRR1 + li r10,0xc00 + std r10,HSTATE_SCRATCH0(r13) + +.balign IFETCH_ALIGN_BYTES +kvmppc_p9_exit_interrupt: + /* + * If set to KVM_GUEST_MODE_GUEST_HV_FAST but we're still in the + * hypervisor, that means we can't return from the entry stack. + */ + rldicl. r10,r12,64-MSR_HV_LG,63 + bne- kvmppc_p9_bad_interrupt + + std r1,HSTATE_SCRATCH1(r13) + std r3,HSTATE_SCRATCH2(r13) + ld r1,HSTATE_HOST_R1(r13) + ld r3,HSTATE_KVM_VCPU(r13) + + std r9,VCPU_CR(r3) + +1: + std r11,VCPU_PC(r3) + std r12,VCPU_MSR(r3) + + reg = 14 + .rept 18 + std reg,__VCPU_GPR(reg)(r3) + reg = reg + 1 + .endr + + /* r1, r3, r9-r13 are saved to vcpu by C code */ + std r0,VCPU_GPR(R0)(r3) + std r2,VCPU_GPR(R2)(r3) + reg = 4 + .rept 5 + std reg,__VCPU_GPR(reg)(r3) + reg = reg + 1 + .endr + + ld r2,PACATOC(r13) + + mflr r4 + std r4,VCPU_LR(r3) + mfspr r4,SPRN_XER + std r4,VCPU_XER(r3) + + reg = 14 + .rept 18 + ld reg,STACK_SLOT_NVGPRS + ((reg - 14) * 8)(r1) + reg = reg + 1 + .endr + + lwz r4,SFS+8(r1) + mtcr r4 + + /* + * Flush the link stack here, before executing the first blr on the + * way out of the guest. + * + * The link stack won't match coming out of the guest anyway so the + * only cost is the flush itself. The call clobbers r0. + */ +1: nop + patch_site 1b patch__call_kvm_flush_link_stack_p9 + + addi r1,r1,SFS + ld r0,PPC_LR_STKOFF(r1) + mtlr r0 + blr + +/* + * Took an interrupt somewhere right before HRFID to guest, so registers are + * in a bad way. 
Return things hopefully enough to run host virtual code and + * run the Linux interrupt handler (SRESET or MCE) to print something useful. + * + * We could be really clever and save all host registers in known locations + * before setting HSTATE_IN_GUEST, then restoring them all here, and setting + * return address to a fixup that sets them up again. But that's a lot of + * effort for a small bit of code. Lots of other things to do first. + */ +kvmppc_p9_bad_interrupt: + /* + * Set GUEST_MODE_NONE so the handler won't branch to KVM, and clear + * MSR_RI in r12 ([H]SRR1) so the handler won't try to return. + */ + li r10,KVM_GUEST_MODE_NONE + stb r10,HSTATE_IN_GUEST(r13) + li r10,MSR_RI + andc r12,r12,r10 + + /* + * Clean up guest registers to give host a chance to run. + */ + li r10,0 + mtspr SPRN_AMR,r10 + mtspr SPRN_IAMR,r10 + mtspr SPRN_CIABR,r10 + mtspr SPRN_DAWRX0,r10 +BEGIN_FTR_SECTION + mtspr SPRN_DAWRX1,r10 +END_FTR_SECTION_IFSET(CPU_FTR_DAWR1) + mtspr SPRN_PID,r10 + + /* + * Switch to host MMU mode + */ + ld r10, HSTATE_KVM_VCPU(r13) + ld r10, VCPU_KVM(r10) + lwz r10, KVM_HOST_LPID(r10) + mtspr SPRN_LPID,r10 + + ld r10, HSTATE_KVM_VCPU(r13) + ld r10, VCPU_KVM(r10) + ld r10, KVM_HOST_LPCR(r10) + mtspr SPRN_LPCR,r10 + + /* + * Go back to interrupt handler + */ + ld r10,HSTATE_SCRATCH0(r13) + cmpwi r10,BOOK3S_INTERRUPT_MACHINE_CHECK + beq machine_check_common + + ld r10,HSTATE_SCRATCH0(r13) + cmpwi r10,BOOK3S_INTERRUPT_SYSTEM_RESET + beq system_reset_common + + b . diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 353a0c8b79fa..9a1476daca8c 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1442,6 +1442,8 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, */ case BOOK3S_INTERRUPT_H_DATA_STORAGE: r = RESUME_PAGE_FAULT; + if (vcpu->arch.fault_dsisr == HDSISR_CANARY) + r = RESUME_GUEST; /* Just retry if it's the canary */ break; case BOOK3S_INTERRUPT_H_INST_STORAGE: vcpu->arch.fault_dar = kvmppc_get_pc(vcpu); @@ -3707,6 +3709,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, u64 tb, next_timer; int trap, save_pmu; + WARN_ON_ONCE(vcpu->arch.ceded); + tb = mftb(); next_timer = timer_get_next_tb(); if (tb >= next_timer) @@ -3714,8 +3718,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, if (next_timer < time_limit) time_limit = next_timer; - vcpu->arch.ceded = 0; - kvmhv_save_host_pmu(); /* saves it to PACA kvm_hstate */ kvmppc_subcore_enter_guest(); @@ -3842,9 +3844,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } } kvmppc_xive_pull_vcpu(vcpu); + + vcpu->arch.slb_max = 0; } - vcpu->arch.slb_max = 0; dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c new file mode 100644 index 000000000000..3c507daed064 --- /dev/null +++ b/arch/powerpc/kvm/book3s_hv_interrupt.c @@ -0,0 +1,218 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include +#include +#include +#include +#include + +#ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING +static void __start_timing(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + u64 tb = mftb() - vc->tb_offset_applied; + + vcpu->arch.cur_activity = next; + vcpu->arch.cur_tb_start = tb; +} + +static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + 
struct kvmhv_tb_accumulator *curr; + u64 tb = mftb() - vc->tb_offset_applied; + u64 prev_tb; + u64 delta; + u64 seq; + + curr = vcpu->arch.cur_activity; + vcpu->arch.cur_activity = next; + prev_tb = vcpu->arch.cur_tb_start; + vcpu->arch.cur_tb_start = tb; + + if (!curr) + return; + + delta = tb - prev_tb; + + seq = curr->seqcount; + curr->seqcount = seq + 1; + smp_wmb(); + curr->tb_total += delta; + if (seq == 0 || delta < curr->tb_min) + curr->tb_min = delta; + if (delta > curr->tb_max) + curr->tb_max = delta; + smp_wmb(); + curr->seqcount = seq + 2; +} + +#define start_timing(vcpu, next) __start_timing(vcpu, next) +#define end_timing(vcpu) __start_timing(vcpu, NULL) +#define accumulate_time(vcpu, next) __accumulate_time(vcpu, next) +#else +#define start_timing(vcpu, next) do {} while (0) +#define end_timing(vcpu) do {} while (0) +#define accumulate_time(vcpu, next) do {} while (0) +#endif + +static inline void mfslb(unsigned int idx, u64 *slbee, u64 *slbev) +{ + asm volatile("slbmfev %0,%1" : "=r" (*slbev) : "r" (idx)); + asm volatile("slbmfee %0,%1" : "=r" (*slbee) : "r" (idx)); +} + +static inline void mtslb(unsigned int idx, u64 slbee, u64 slbev) +{ + BUG_ON((slbee & 0xfff) != idx); + + asm volatile("slbmte %0,%1" :: "r" (slbev), "r" (slbee)); +} + +/* + * Malicious or buggy radix guests may have inserted SLB entries + * (only 0..3 because radix always runs with UPRT=1), so these must + * be cleared here to avoid side-channels. slbmte is used rather + * than slbia, as it won't clear cached translations. + */ +static void radix_clear_slb(void) +{ + u64 slbee, slbev; + int i; + + for (i = 0; i < 4; i++) { + mfslb(i, &slbee, &slbev); + if (unlikely(slbee || slbev)) { + slbee = i; + slbev = 0; + mtslb(i, slbee, slbev); + } + } +} + +int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu) +{ + u64 *exsave; + unsigned long msr = mfmsr(); + int trap; + + start_timing(vcpu, &vcpu->arch.rm_entry); + + vcpu->arch.ceded = 0; + + WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_HV); + WARN_ON_ONCE(!(vcpu->arch.shregs.msr & MSR_ME)); + + mtspr(SPRN_HSRR0, vcpu->arch.regs.nip); + mtspr(SPRN_HSRR1, (vcpu->arch.shregs.msr & ~MSR_HV) | MSR_ME); + + /* + * On POWER9 DD2.1 and below, sometimes on a Hypervisor Data Storage + * Interrupt (HDSI) the HDSISR is not be updated at all. + * + * To work around this we put a canary value into the HDSISR before + * returning to a guest and then check for this canary when we take a + * HDSI. If we find the canary on a HDSI, we know the hardware didn't + * update the HDSISR. In this case we return to the guest to retake the + * HDSI which should correctly update the HDSISR the second time HDSI + * entry. + * + * Just do this on all p9 processors for now. 
+ */ + mtspr(SPRN_HDSISR, HDSISR_CANARY); + + accumulate_time(vcpu, &vcpu->arch.guest_time); + + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_GUEST_HV_FAST; + kvmppc_p9_enter_guest(vcpu); + // Radix host and guest means host never runs with guest MMU state + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; + + accumulate_time(vcpu, &vcpu->arch.rm_intr); + + /* XXX: Could get these from r11/12 and paca exsave instead */ + vcpu->arch.shregs.srr0 = mfspr(SPRN_SRR0); + vcpu->arch.shregs.srr1 = mfspr(SPRN_SRR1); + vcpu->arch.shregs.dar = mfspr(SPRN_DAR); + vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); + + /* 0x2 bit for HSRR is only used by PR and P7/8 HV paths, clear it */ + trap = local_paca->kvm_hstate.scratch0 & ~0x2; + if (likely(trap > BOOK3S_INTERRUPT_MACHINE_CHECK)) { + exsave = local_paca->exgen; + } else if (trap == BOOK3S_INTERRUPT_SYSTEM_RESET) { + exsave = local_paca->exnmi; + } else { /* trap == 0x200 */ + exsave = local_paca->exmc; + } + + vcpu->arch.regs.gpr[1] = local_paca->kvm_hstate.scratch1; + vcpu->arch.regs.gpr[3] = local_paca->kvm_hstate.scratch2; + vcpu->arch.regs.gpr[9] = exsave[EX_R9/sizeof(u64)]; + vcpu->arch.regs.gpr[10] = exsave[EX_R10/sizeof(u64)]; + vcpu->arch.regs.gpr[11] = exsave[EX_R11/sizeof(u64)]; + vcpu->arch.regs.gpr[12] = exsave[EX_R12/sizeof(u64)]; + vcpu->arch.regs.gpr[13] = exsave[EX_R13/sizeof(u64)]; + vcpu->arch.ppr = exsave[EX_PPR/sizeof(u64)]; + vcpu->arch.cfar = exsave[EX_CFAR/sizeof(u64)]; + vcpu->arch.regs.ctr = exsave[EX_CTR/sizeof(u64)]; + + vcpu->arch.last_inst = KVM_INST_FETCH_FAILED; + + if (unlikely(trap == BOOK3S_INTERRUPT_MACHINE_CHECK)) { + vcpu->arch.fault_dar = exsave[EX_DAR/sizeof(u64)]; + vcpu->arch.fault_dsisr = exsave[EX_DSISR/sizeof(u64)]; + kvmppc_realmode_machine_check(vcpu); + + } else if (unlikely(trap == BOOK3S_INTERRUPT_HMI)) { + kvmppc_realmode_hmi_handler(); + + } else if (trap == BOOK3S_INTERRUPT_H_EMUL_ASSIST) { + vcpu->arch.emul_inst = mfspr(SPRN_HEIR); + + } else if (trap == BOOK3S_INTERRUPT_H_DATA_STORAGE) { + vcpu->arch.fault_dar = exsave[EX_DAR/sizeof(u64)]; + vcpu->arch.fault_dsisr = exsave[EX_DSISR/sizeof(u64)]; + vcpu->arch.fault_gpa = mfspr(SPRN_ASDR); + + } else if (trap == BOOK3S_INTERRUPT_H_INST_STORAGE) { + vcpu->arch.fault_gpa = mfspr(SPRN_ASDR); + + } else if (trap == BOOK3S_INTERRUPT_H_FAC_UNAVAIL) { + vcpu->arch.hfscr = mfspr(SPRN_HFSCR); + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + /* + * Softpatch interrupt for transactional memory emulation cases + * on POWER9 DD2.2. This is early in the guest exit path - we + * haven't saved registers or done a treclaim yet. + */ + } else if (trap == BOOK3S_INTERRUPT_HV_SOFTPATCH) { + vcpu->arch.emul_inst = mfspr(SPRN_HEIR); + + /* + * The cases we want to handle here are those where the guest + * is in real suspend mode and is trying to transition to + * transactional mode. + */ + if (local_paca->kvm_hstate.fake_suspend && + (vcpu->arch.shregs.msr & MSR_TS_S)) { + if (kvmhv_p9_tm_emulation_early(vcpu)) { + /* Prevent it being handled again. 
*/ + trap = 0; + } + } +#endif + } + + radix_clear_slb(); + + __mtmsrd(msr, 0); + mtspr(SPRN_CTRLT, 1); + + accumulate_time(vcpu, &vcpu->arch.rm_exit); + + end_timing(vcpu); + + return trap; +} +EXPORT_SYMBOL_GPL(__kvmhv_vcpu_entry_p9); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 2d0d14ed1d92..85c2595ead8d 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -44,9 +44,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) #define NAPPING_UNSPLIT 3 /* Stack frame offsets for kvmppc_hv_entry */ -#define SFS 208 +#define SFS 160 #define STACK_SLOT_TRAP (SFS-4) -#define STACK_SLOT_SHORT_PATH (SFS-8) #define STACK_SLOT_TID (SFS-16) #define STACK_SLOT_PSSCR (SFS-24) #define STACK_SLOT_PID (SFS-32) @@ -59,8 +58,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) #define STACK_SLOT_UAMOR (SFS-88) #define STACK_SLOT_DAWR1 (SFS-96) #define STACK_SLOT_DAWRX1 (SFS-104) -/* the following is used by the P9 short path */ -#define STACK_SLOT_NVGPRS (SFS-152) /* 18 gprs */ /* * Call kvmppc_hv_entry in real mode. @@ -1008,9 +1005,6 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_TYPE_RADIX) no_xive: #endif /* CONFIG_KVM_XICS */ - li r0, 0 - stw r0, STACK_SLOT_SHORT_PATH(r1) - deliver_guest_interrupt: /* r4 = vcpu, r13 = paca */ /* Check if we can deliver an external or decrementer interrupt now */ ld r0, VCPU_PENDING_EXC(r4) @@ -1030,7 +1024,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) mtspr SPRN_SRR0, r6 mtspr SPRN_SRR1, r7 -fast_guest_entry_c: ld r10, VCPU_PC(r4) ld r11, VCPU_MSR(r4) /* r11 = vcpu->arch.msr & ~MSR_HV */ @@ -1135,97 +1128,6 @@ ret_to_ultra: ld r4, VCPU_GPR(R4)(r4) sc 2 -/* - * Enter the guest on a P9 or later system where we have exactly - * one vcpu per vcore and we don't need to go to real mode - * (which implies that host and guest are both using radix MMU mode). - * r3 = vcpu pointer - * Most SPRs and all the VSRs have been loaded already. - */ -_GLOBAL(__kvmhv_vcpu_entry_p9) -EXPORT_SYMBOL_GPL(__kvmhv_vcpu_entry_p9) - mflr r0 - std r0, PPC_LR_STKOFF(r1) - stdu r1, -SFS(r1) - - li r0, 1 - stw r0, STACK_SLOT_SHORT_PATH(r1) - - std r3, HSTATE_KVM_VCPU(r13) - mfcr r4 - stw r4, SFS+8(r1) - - std r1, HSTATE_HOST_R1(r13) - - reg = 14 - .rept 18 - std reg, STACK_SLOT_NVGPRS + ((reg - 14) * 8)(r1) - reg = reg + 1 - .endr - - reg = 14 - .rept 18 - ld reg, __VCPU_GPR(reg)(r3) - reg = reg + 1 - .endr - - mfmsr r10 - std r10, HSTATE_HOST_MSR(r13) - - mr r4, r3 - b fast_guest_entry_c -guest_exit_short_path: - /* - * Malicious or buggy radix guests may have inserted SLB entries - * (only 0..3 because radix always runs with UPRT=1), so these must - * be cleared here to avoid side-channels. slbmte is used rather - * than slbia, as it won't clear cached translations. - */ - li r0,0 - slbmte r0,r0 - li r4,1 - slbmte r0,r4 - li r4,2 - slbmte r0,r4 - li r4,3 - slbmte r0,r4 - - li r0, KVM_GUEST_MODE_NONE - stb r0, HSTATE_IN_GUEST(r13) - - reg = 14 - .rept 18 - std reg, __VCPU_GPR(reg)(r9) - reg = reg + 1 - .endr - - reg = 14 - .rept 18 - ld reg, STACK_SLOT_NVGPRS + ((reg - 14) * 8)(r1) - reg = reg + 1 - .endr - - lwz r4, SFS+8(r1) - mtcr r4 - - mr r3, r12 /* trap number */ - - addi r1, r1, SFS - ld r0, PPC_LR_STKOFF(r1) - mtlr r0 - - /* If we are in real mode, do a rfid to get back to the caller */ - mfmsr r4 - andi. 
r5, r4, MSR_IR - bnelr - rldicl r5, r4, 64 - MSR_TS_S_LG, 62 /* extract TS field */ - mtspr SPRN_SRR0, r0 - ld r10, HSTATE_HOST_MSR(r13) - rldimi r10, r5, MSR_TS_S_LG, 63 - MSR_TS_T_LG - mtspr SPRN_SRR1, r10 - RFI_TO_KERNEL - b . - secondary_too_late: li r12, 0 stw r12, STACK_SLOT_TRAP(r1) @@ -1397,14 +1299,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR) mr r4,r9 bge fast_guest_return 2: - /* If we came in through the P9 short path, no real mode hcalls */ - lwz r0, STACK_SLOT_SHORT_PATH(r1) - cmpwi r0, 0 - bne no_try_real /* See if this is an hcall we can handle in real mode */ cmpwi r12,BOOK3S_INTERRUPT_SYSCALL beq hcall_try_real_mode -no_try_real: /* Hypervisor doorbell - exit only if host IPI flag set */ cmpwi r12, BOOK3S_INTERRUPT_H_DOORBELL @@ -1447,11 +1344,6 @@ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */ bl kvmhv_accumulate_time #endif #ifdef CONFIG_KVM_XICS - /* If we came in through the P9 short path, xive pull is done in C */ - lwz r0, STACK_SLOT_SHORT_PATH(r1) - cmpwi r0, 0 - bne 1f - /* We are exiting, pull the VP from the XIVE */ lbz r0, VCPU_XIVE_PUSHED(r9) cmpwi cr0, r0, 0 @@ -1491,16 +1383,11 @@ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */ /* * Possibly flush the link stack here, before we do a blr in - * guest_exit_short_path. + * kvmhv_switch_to_host. */ 1: nop patch_site 1b patch__call_kvm_flush_link_stack - /* If we came in through the P9 short path, go back out to C now */ - lwz r0, STACK_SLOT_SHORT_PATH(r1) - cmpwi r0, 0 - bne guest_exit_short_path - /* For hash guest, read the guest SLB and save it away */ ld r5, VCPU_KVM(r9) lbz r0, KVM_RADIX(r5) @@ -1548,8 +1435,10 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX) b guest_bypass 0: /* - * Sanitise radix guest SLB, see guest_exit_short_path comment. - * We clear vcpu->arch.slb_max to match earlier behaviour. + * Malicious or buggy radix guests may have inserted SLB entries + * (only 0..3 because radix always runs with UPRT=1), so these must + * be cleared here to avoid side-channels. slbmte is used rather + * than slbia, as it won't clear cached translations. */ li r0,0 stw r0,VCPU_SLB_MAX(r9) @@ -3362,7 +3251,7 @@ BEGIN_FTR_SECTION mtspr SPRN_DAWRX1, r0 END_FTR_SECTION_IFSET(CPU_FTR_DAWR1) - /* Clear hash and radix guest SLB, see guest_exit_short_path comment. */ + /* Clear hash and radix guest SLB. 
*/ slbmte r0, r0 PPC_SLBIA(6) From patchwork Mon Apr 5 01:19:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462206 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=jyF7Pig6; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYZ2B9Bz9sj1 for ; Mon, 5 Apr 2021 11:21:54 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231848AbhDEBV5 (ORCPT ); Sun, 4 Apr 2021 21:21:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40068 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231853AbhDEBVz (ORCPT ); Sun, 4 Apr 2021 21:21:55 -0400 Received: from mail-pg1-x52f.google.com (mail-pg1-x52f.google.com [IPv6:2607:f8b0:4864:20::52f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 56DD1C0613E6 for ; Sun, 4 Apr 2021 18:21:50 -0700 (PDT) Received: by mail-pg1-x52f.google.com with SMTP id t22so17708pgu.0 for ; Sun, 04 Apr 2021 18:21:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=NIvhSKFNHbWq24cTYZ8ztE/61/g5rbwdjqPFlD602MI=; b=jyF7Pig69ubEIbVEY4OzKMBnU89qYrXvyM6J7Xi0fn+pW2K/yuACITDHrC1w776ZDR eCyC5zZ8cDkp6jXmH4pubJvQjniub+4cPiueYERc/MP+rzrw6euJB+XZo7zlmKX2QyeJ 5gRuBpvOlLo1WwI9HPc7T2LULvlzSeif22DQDM/Y8Zv5cvwCUznFl/lBIcLpCQz8nXlP 6uTPnPm6DtzxfFEedIzZS0io+27T561wMWTrB7Du+DhNl4X+rMiGByMSHHqyf0SUI+IF ejTlM1ETQDEwbn5jUJuEL2bWJJSl1jH9gfn4GL9Yg/2Mp7/X5/xTvSDmQTpt4qAKnJ/E zTLw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=NIvhSKFNHbWq24cTYZ8ztE/61/g5rbwdjqPFlD602MI=; b=l+rus/IOXYT+eN9AMc+HdwC3HKsuDDwp6v0DibBB59IRrZc/eZ0Jx6O4NkdyjYVTtl AaFW9kZhWVJJgmrIpST/y8oeiNB/6mprF4tS9ozMUSFYZKOgqUSa0TPaR+Q1/A3CrQS7 x3iuNDpxfdpo61wke7mGH7cEw0a8r4FwbUMHbS/Dv7YAnGmTn6l6aIVKyEwN2XqUkMdy 7cYBWcH4jGOzVShriVMmhcUva681UJ6Ftokb871Yysh5RcwqP5GrjsHDw5nXLJvaEBvZ /2l0o5NuKgd0njdPRBS48rQPzYmD0ra5nMJefijixY+0x26lBiqegBx8T8dinvjixN29 ar/A== X-Gm-Message-State: AOAM532XTqAdEcd5mtp+eXNetH8blfforTFPvSBqJKlbR3gMEVhoXf7g n+pzXStixLa4G2JKTydf0V1laXu5SeQ14w== X-Google-Smtp-Source: ABdhPJzOxEht77AnszZ6Bu+SLUYJSA4Dw/nEpElUUVjnJcsQ7y4HvcH1hZJ4JyJnqjTheGxmj0zFOA== X-Received: by 2002:a65:6645:: with SMTP id z5mr20664322pgv.273.1617585709524; Sun, 04 Apr 2021 18:21:49 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:49 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v6 31/48] KVM: PPC: Book3S HV P9: inline kvmhv_load_hv_regs_and_go into __kvmhv_vcpu_entry_p9 Date: Mon, 5 Apr 2021 11:19:31 +1000 
Message-Id: <20210405011948.675354-32-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Now the initial C implementation is done, inline more HV code to make rearranging things easier. And rename __kvmhv_vcpu_entry_p9 to drop the leading underscores as it's now C, and is now a more complete vcpu entry. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 2 +- arch/powerpc/kvm/book3s_hv.c | 192 +---------------------- arch/powerpc/kvm/book3s_hv_interrupt.c | 179 ++++++++++++++++++++- 3 files changed, 180 insertions(+), 193 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index c214bcffb441..eaf3a562bf1e 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -153,7 +153,7 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu) return radix; } -int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu); +int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr); #define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */ #endif diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 9a1476daca8c..d6eecedaa5a5 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3498,194 +3498,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) trace_kvmppc_run_core(vc, 1); } -static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr) -{ - struct kvmppc_vcore *vc = vcpu->arch.vcore; - struct kvm_nested_guest *nested = vcpu->arch.nested; - u32 lpid; - - lpid = nested ? nested->shadow_lpid : kvm->arch.lpid; - - /* - * All the isync()s are overkill but trivially follow the ISA - * requirements. Some can likely be replaced with justification - * comment for why they are not needed. - */ - isync(); - mtspr(SPRN_LPID, lpid); - isync(); - mtspr(SPRN_LPCR, lpcr); - isync(); - mtspr(SPRN_PID, vcpu->arch.pid); - isync(); - - /* TLBIEL must have LPIDR set, so set guest LPID before flushing. */ - kvmppc_check_need_tlb_flush(kvm, vc->pcpu, nested); -} - -static void switch_mmu_to_host_radix(struct kvm *kvm, u32 pid) -{ - isync(); - mtspr(SPRN_PID, pid); - isync(); - mtspr(SPRN_LPID, kvm->arch.host_lpid); - isync(); - mtspr(SPRN_LPCR, kvm->arch.host_lpcr); - isync(); -} - -/* - * Load up hypervisor-mode registers on P9. 
- */ -static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, - unsigned long lpcr) -{ - struct kvm *kvm = vcpu->kvm; - struct kvmppc_vcore *vc = vcpu->arch.vcore; - s64 hdec; - u64 tb, purr, spurr; - int trap; - unsigned long host_hfscr = mfspr(SPRN_HFSCR); - unsigned long host_ciabr = mfspr(SPRN_CIABR); - unsigned long host_dawr0 = mfspr(SPRN_DAWR0); - unsigned long host_dawrx0 = mfspr(SPRN_DAWRX0); - unsigned long host_psscr = mfspr(SPRN_PSSCR); - unsigned long host_pidr = mfspr(SPRN_PID); - unsigned long host_dawr1 = 0; - unsigned long host_dawrx1 = 0; - - if (cpu_has_feature(CPU_FTR_DAWR1)) { - host_dawr1 = mfspr(SPRN_DAWR1); - host_dawrx1 = mfspr(SPRN_DAWRX1); - } - - tb = mftb(); - hdec = time_limit - tb; - if (hdec < 0) - return BOOK3S_INTERRUPT_HV_DECREMENTER; - - if (vc->tb_offset) { - u64 new_tb = tb + vc->tb_offset; - mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); - vc->tb_offset_applied = vc->tb_offset; - } - - if (vc->pcr) - mtspr(SPRN_PCR, vc->pcr | PCR_MASK); - mtspr(SPRN_DPDES, vc->dpdes); - mtspr(SPRN_VTB, vc->vtb); - - local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); - local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); - mtspr(SPRN_PURR, vcpu->arch.purr); - mtspr(SPRN_SPURR, vcpu->arch.spurr); - - if (dawr_enabled()) { - mtspr(SPRN_DAWR0, vcpu->arch.dawr0); - mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0); - if (cpu_has_feature(CPU_FTR_DAWR1)) { - mtspr(SPRN_DAWR1, vcpu->arch.dawr1); - mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1); - } - } - mtspr(SPRN_CIABR, vcpu->arch.ciabr); - mtspr(SPRN_IC, vcpu->arch.ic); - - mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | - (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); - - mtspr(SPRN_HFSCR, vcpu->arch.hfscr); - - mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0); - mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1); - mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2); - mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3); - - mtspr(SPRN_AMOR, ~0UL); - - switch_mmu_to_guest_radix(kvm, vcpu, lpcr); - - /* - * P9 suppresses the HDEC exception when LPCR[HDICE] = 0, - * so set guest LPCR (with HDICE) before writing HDEC. 
- */ - mtspr(SPRN_HDEC, hdec); - - mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0); - mtspr(SPRN_SRR1, vcpu->arch.shregs.srr1); - - trap = __kvmhv_vcpu_entry_p9(vcpu); - - /* Advance host PURR/SPURR by the amount used by guest */ - purr = mfspr(SPRN_PURR); - spurr = mfspr(SPRN_SPURR); - mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr + - purr - vcpu->arch.purr); - mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr + - spurr - vcpu->arch.spurr); - vcpu->arch.purr = purr; - vcpu->arch.spurr = spurr; - - vcpu->arch.ic = mfspr(SPRN_IC); - vcpu->arch.pid = mfspr(SPRN_PID); - vcpu->arch.psscr = mfspr(SPRN_PSSCR) & PSSCR_GUEST_VIS; - - vcpu->arch.shregs.sprg0 = mfspr(SPRN_SPRG0); - vcpu->arch.shregs.sprg1 = mfspr(SPRN_SPRG1); - vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); - vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); - - /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ - mtspr(SPRN_PSSCR, host_psscr | - (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); - mtspr(SPRN_HFSCR, host_hfscr); - mtspr(SPRN_CIABR, host_ciabr); - mtspr(SPRN_DAWR0, host_dawr0); - mtspr(SPRN_DAWRX0, host_dawrx0); - if (cpu_has_feature(CPU_FTR_DAWR1)) { - mtspr(SPRN_DAWR1, host_dawr1); - mtspr(SPRN_DAWRX1, host_dawrx1); - } - - /* - * Since this is radix, do a eieio; tlbsync; ptesync sequence in - * case we interrupted the guest between a tlbie and a ptesync. - */ - asm volatile("eieio; tlbsync; ptesync"); - - /* - * cp_abort is required if the processor supports local copy-paste - * to clear the copy buffer that was under control of the guest. - */ - if (cpu_has_feature(CPU_FTR_ARCH_31)) - asm volatile(PPC_CP_ABORT); - - vc->dpdes = mfspr(SPRN_DPDES); - vc->vtb = mfspr(SPRN_VTB); - mtspr(SPRN_DPDES, 0); - if (vc->pcr) - mtspr(SPRN_PCR, PCR_MASK); - - if (vc->tb_offset_applied) { - u64 new_tb = mftb() - vc->tb_offset_applied; - mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); - vc->tb_offset_applied = 0; - } - - /* HDEC must be at least as large as DEC, so decrementer_max fits */ - mtspr(SPRN_HDEC, decrementer_max); - - switch_mmu_to_host_radix(kvm, host_pidr); - - return trap; -} - static inline bool hcall_is_xics(unsigned long req) { return req == H_EOI || req == H_CPPR || req == H_IPI || @@ -3782,7 +3594,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, * We need to save and restore the guest visible part of the * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor * doesn't do this for us. Note only required if pseries since - * this is done in kvmhv_load_hv_regs_and_go() below otherwise. + * this is done in kvmhv_vcpu_entry_p9() below otherwise. 
*/ unsigned long host_psscr; /* call our hypervisor to load up HV regs and go */ @@ -3820,7 +3632,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } } else { kvmppc_xive_push_vcpu(vcpu); - trap = kvmhv_load_hv_regs_and_go(vcpu, time_limit, lpcr); + trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr); if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && !(vcpu->arch.shregs.msr & MSR_PR)) { unsigned long req = kvmppc_get_gpr(vcpu, 3); diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c index 3c507daed064..6fdd93936e16 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupt.c +++ b/arch/powerpc/kvm/book3s_hv_interrupt.c @@ -55,6 +55,42 @@ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator #define accumulate_time(vcpu, next) do {} while (0) #endif +static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + struct kvm_nested_guest *nested = vcpu->arch.nested; + u32 lpid; + + lpid = nested ? nested->shadow_lpid : kvm->arch.lpid; + + /* + * All the isync()s are overkill but trivially follow the ISA + * requirements. Some can likely be replaced with justification + * comment for why they are not needed. + */ + isync(); + mtspr(SPRN_LPID, lpid); + isync(); + mtspr(SPRN_LPCR, lpcr); + isync(); + mtspr(SPRN_PID, vcpu->arch.pid); + isync(); + + /* TLBIEL must have LPIDR set, so set guest LPID before flushing. */ + kvmppc_check_need_tlb_flush(kvm, vc->pcpu, nested); +} + +static void switch_mmu_to_host_radix(struct kvm *kvm, u32 pid) +{ + isync(); + mtspr(SPRN_PID, pid); + isync(); + mtspr(SPRN_LPID, kvm->arch.host_lpid); + isync(); + mtspr(SPRN_LPCR, kvm->arch.host_lpcr); + isync(); +} + static inline void mfslb(unsigned int idx, u64 *slbee, u64 *slbev) { asm volatile("slbmfev %0,%1" : "=r" (*slbev) : "r" (idx)); @@ -89,11 +125,86 @@ static void radix_clear_slb(void) } } -int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu) +int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr) { + struct kvm *kvm = vcpu->kvm; + struct kvmppc_vcore *vc = vcpu->arch.vcore; + s64 hdec; + u64 tb, purr, spurr; u64 *exsave; unsigned long msr = mfmsr(); int trap; + unsigned long host_hfscr = mfspr(SPRN_HFSCR); + unsigned long host_ciabr = mfspr(SPRN_CIABR); + unsigned long host_dawr0 = mfspr(SPRN_DAWR0); + unsigned long host_dawrx0 = mfspr(SPRN_DAWRX0); + unsigned long host_psscr = mfspr(SPRN_PSSCR); + unsigned long host_pidr = mfspr(SPRN_PID); + unsigned long host_dawr1 = 0; + unsigned long host_dawrx1 = 0; + + if (cpu_has_feature(CPU_FTR_DAWR1)) { + host_dawr1 = mfspr(SPRN_DAWR1); + host_dawrx1 = mfspr(SPRN_DAWRX1); + } + + tb = mftb(); + hdec = time_limit - tb; + if (hdec < 0) + return BOOK3S_INTERRUPT_HV_DECREMENTER; + + if (vc->tb_offset) { + u64 new_tb = tb + vc->tb_offset; + mtspr(SPRN_TBU40, new_tb); + tb = mftb(); + if ((tb & 0xffffff) < (new_tb & 0xffffff)) + mtspr(SPRN_TBU40, new_tb + 0x1000000); + vc->tb_offset_applied = vc->tb_offset; + } + + if (vc->pcr) + mtspr(SPRN_PCR, vc->pcr | PCR_MASK); + mtspr(SPRN_DPDES, vc->dpdes); + mtspr(SPRN_VTB, vc->vtb); + + local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); + local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); + mtspr(SPRN_PURR, vcpu->arch.purr); + mtspr(SPRN_SPURR, vcpu->arch.spurr); + + if (dawr_enabled()) { + mtspr(SPRN_DAWR0, vcpu->arch.dawr0); + mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0); + if (cpu_has_feature(CPU_FTR_DAWR1)) { + mtspr(SPRN_DAWR1, 
vcpu->arch.dawr1); + mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1); + } + } + mtspr(SPRN_CIABR, vcpu->arch.ciabr); + mtspr(SPRN_IC, vcpu->arch.ic); + + mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | + (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + + mtspr(SPRN_HFSCR, vcpu->arch.hfscr); + + mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0); + mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1); + mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2); + mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3); + + mtspr(SPRN_AMOR, ~0UL); + + switch_mmu_to_guest_radix(kvm, vcpu, lpcr); + + /* + * P9 suppresses the HDEC exception when LPCR[HDICE] = 0, + * so set guest LPCR (with HDICE) before writing HDEC. + */ + mtspr(SPRN_HDEC, hdec); + + mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0); + mtspr(SPRN_SRR1, vcpu->arch.shregs.srr1); start_timing(vcpu, &vcpu->arch.rm_entry); @@ -213,6 +324,70 @@ int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu) end_timing(vcpu); + /* Advance host PURR/SPURR by the amount used by guest */ + purr = mfspr(SPRN_PURR); + spurr = mfspr(SPRN_SPURR); + mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr + + purr - vcpu->arch.purr); + mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr + + spurr - vcpu->arch.spurr); + vcpu->arch.purr = purr; + vcpu->arch.spurr = spurr; + + vcpu->arch.ic = mfspr(SPRN_IC); + vcpu->arch.pid = mfspr(SPRN_PID); + vcpu->arch.psscr = mfspr(SPRN_PSSCR) & PSSCR_GUEST_VIS; + + vcpu->arch.shregs.sprg0 = mfspr(SPRN_SPRG0); + vcpu->arch.shregs.sprg1 = mfspr(SPRN_SPRG1); + vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); + vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); + + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ + mtspr(SPRN_PSSCR, host_psscr | + (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + mtspr(SPRN_HFSCR, host_hfscr); + mtspr(SPRN_CIABR, host_ciabr); + mtspr(SPRN_DAWR0, host_dawr0); + mtspr(SPRN_DAWRX0, host_dawrx0); + if (cpu_has_feature(CPU_FTR_DAWR1)) { + mtspr(SPRN_DAWR1, host_dawr1); + mtspr(SPRN_DAWRX1, host_dawrx1); + } + + /* + * Since this is radix, do a eieio; tlbsync; ptesync sequence in + * case we interrupted the guest between a tlbie and a ptesync. + */ + asm volatile("eieio; tlbsync; ptesync"); + + /* + * cp_abort is required if the processor supports local copy-paste + * to clear the copy buffer that was under control of the guest. 
+ */ + if (cpu_has_feature(CPU_FTR_ARCH_31)) + asm volatile(PPC_CP_ABORT); + + vc->dpdes = mfspr(SPRN_DPDES); + vc->vtb = mfspr(SPRN_VTB); + mtspr(SPRN_DPDES, 0); + if (vc->pcr) + mtspr(SPRN_PCR, PCR_MASK); + + if (vc->tb_offset_applied) { + u64 new_tb = mftb() - vc->tb_offset_applied; + mtspr(SPRN_TBU40, new_tb); + tb = mftb(); + if ((tb & 0xffffff) < (new_tb & 0xffffff)) + mtspr(SPRN_TBU40, new_tb + 0x1000000); + vc->tb_offset_applied = 0; + } + + /* HDEC must be at least as large as DEC, so decrementer_max fits */ + mtspr(SPRN_HDEC, decrementer_max); + + switch_mmu_to_host_radix(kvm, host_pidr); + return trap; } -EXPORT_SYMBOL_GPL(__kvmhv_vcpu_entry_p9); +EXPORT_SYMBOL_GPL(kvmhv_vcpu_entry_p9); From patchwork Mon Apr 5 01:19:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462207 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=a4+R02gh; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYc235Sz9sXh for ; Mon, 5 Apr 2021 11:21:56 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231853AbhDEBWA (ORCPT ); Sun, 4 Apr 2021 21:22:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40084 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231851AbhDEBV7 (ORCPT ); Sun, 4 Apr 2021 21:21:59 -0400 Received: from mail-pf1-x435.google.com (mail-pf1-x435.google.com [IPv6:2607:f8b0:4864:20::435]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 308FFC061756 for ; Sun, 4 Apr 2021 18:21:53 -0700 (PDT) Received: by mail-pf1-x435.google.com with SMTP id q5so7186346pfh.10 for ; Sun, 04 Apr 2021 18:21:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=7CRYSeLbLSFKCpaARUVAd/gYhdLsjf/87RHZe+ieXjk=; b=a4+R02ghC5IGcnLFxjva5NOWLMSCHlJhKR/N39h0/8CN7svlkQLAkcLFC34tHvOLab Jl0uEBf0zWLTPiazifbX75Zpyj2JTZNtJ7fJwBPBT78lq3aa9KkBQg9m9O0aJ5CQjumc UHx/1n4tHX/5Q+kdrDB2asKhqwr4P3kgKJwz6cGlWtGV87pjrf4biX0bfTfDIJGMGM99 MNXuf0oyEPSbGLUe4cQqEzFVREgwBThgIWvPuxlbb+ytNhaDBDoas7OlqFZewIFLb/TZ oedBjSLklKMUb4T7AslqG5KaKPO+Zi6nuQ0IYGk3FMq9mCxKcHdBZGtctY6NNwHlFReP Larg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=7CRYSeLbLSFKCpaARUVAd/gYhdLsjf/87RHZe+ieXjk=; b=UBLfNXsdd2NHrWOOkPSq4ZAvWCPSoDGlzUuH+ql8gLvm8rHEz7/P19KWMl+yyqoJtL /rgIMW5BHROC81cH1jpKEoqWE9gqZfR92TGAMmygZ127UoSrShDLhq+31rzM3MN1MFLM 3oB+Cj2/yYmf+JfKuvRpVEuU279Iw5StkX/nZEQ3bZ0bbOK7HOqSRmc2M5xUw2Skq/wt FAIoOQRkgZ/xaDLNrVIff8/UuznVm77bW17yTlp+6LwYisOn2116WBGSr0WQeYbRJPYj X7y+SOPxj+9wa7Zx0Vx9Hw7UYl1rWMip20bES3ZV0AMdDmMHK4qDN6bUaVr6Qm6hDPod zTmQ== X-Gm-Message-State: AOAM5335TM1ysnnxZkaRGF7DL08F4Qi2rghwMpUuGvoA/Swt1N3zBDoE XsTCv66NaGAG1hpCttitCsaeiPwFjD9Vuw== X-Google-Smtp-Source: 
ABdhPJxqlZK8B9g43oq/0t2e1NVJIauqhsnsLNNk+8vld7hfWKA+uOT3ZaxvFEFX9owcVZ9FRTDY9Q== X-Received: by 2002:aa7:8702:0:b029:200:50a8:2354 with SMTP id b2-20020aa787020000b029020050a82354mr21726128pfo.72.1617585712609; Sun, 04 Apr 2021 18:21:52 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:52 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 32/48] KVM: PPC: Book3S HV P9: Read machine check registers while MSR[RI] is 0 Date: Mon, 5 Apr 2021 11:19:32 +1000 Message-Id: <20210405011948.675354-33-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org SRR0/1, DAR, DSISR must all be protected from machine check which can clobber them. Ensure MSR[RI] is clear while they are live. Signed-off-by: Nicholas Piggin Reviewed-by: Alexey Kardashevskiy --- arch/powerpc/kvm/book3s_hv.c | 11 +++++++-- arch/powerpc/kvm/book3s_hv_interrupt.c | 33 +++++++++++++++++++++++--- arch/powerpc/kvm/book3s_hv_ras.c | 2 ++ 3 files changed, 41 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index d6eecedaa5a5..5f0ac6567a06 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3567,11 +3567,16 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, mtspr(SPRN_BESCR, vcpu->arch.bescr); mtspr(SPRN_WORT, vcpu->arch.wort); mtspr(SPRN_TIDR, vcpu->arch.tid); - mtspr(SPRN_DAR, vcpu->arch.shregs.dar); - mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); mtspr(SPRN_AMR, vcpu->arch.amr); mtspr(SPRN_UAMOR, vcpu->arch.uamor); + /* + * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] + * clear (or hstate set appropriately to catch those registers + * being clobbered if we take a MCE or SRESET), so those are done + * later. 
+ */ + if (!(vcpu->arch.ctrl & 1)) mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1); @@ -3614,6 +3619,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, hvregs.vcpu_token = vcpu->vcpu_id; } hvregs.hdec_expiry = time_limit; + mtspr(SPRN_DAR, vcpu->arch.shregs.dar); + mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), __pa(&vcpu->arch.regs)); kvmhv_restore_hv_return_state(vcpu, &hvregs); diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c index 6fdd93936e16..e93d2a6456ff 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupt.c +++ b/arch/powerpc/kvm/book3s_hv_interrupt.c @@ -132,6 +132,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc s64 hdec; u64 tb, purr, spurr; u64 *exsave; + bool ri_set; unsigned long msr = mfmsr(); int trap; unsigned long host_hfscr = mfspr(SPRN_HFSCR); @@ -203,9 +204,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); - mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0); - mtspr(SPRN_SRR1, vcpu->arch.shregs.srr1); - start_timing(vcpu, &vcpu->arch.rm_entry); vcpu->arch.ceded = 0; @@ -231,6 +229,13 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDSISR, HDSISR_CANARY); + __mtmsrd(0, 1); /* clear RI */ + + mtspr(SPRN_DAR, vcpu->arch.shregs.dar); + mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); + mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0); + mtspr(SPRN_SRR1, vcpu->arch.shregs.srr1); + accumulate_time(vcpu, &vcpu->arch.guest_time); local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_GUEST_HV_FAST; @@ -248,7 +253,13 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc /* 0x2 bit for HSRR is only used by PR and P7/8 HV paths, clear it */ trap = local_paca->kvm_hstate.scratch0 & ~0x2; + + /* HSRR interrupts leave MSR[RI] unchanged, SRR interrupts clear it. */ + ri_set = false; if (likely(trap > BOOK3S_INTERRUPT_MACHINE_CHECK)) { + if (trap != BOOK3S_INTERRUPT_SYSCALL && + (vcpu->arch.shregs.msr & MSR_RI)) + ri_set = true; exsave = local_paca->exgen; } else if (trap == BOOK3S_INTERRUPT_SYSTEM_RESET) { exsave = local_paca->exnmi; @@ -258,6 +269,22 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.regs.gpr[1] = local_paca->kvm_hstate.scratch1; vcpu->arch.regs.gpr[3] = local_paca->kvm_hstate.scratch2; + + /* + * Only set RI after reading machine check regs (DAR, DSISR, SRR0/1) + * and hstate scratch (which we need to move into exsave to make + * re-entrant vs SRESET/MCE) + */ + if (ri_set) { + if (unlikely(!(mfmsr() & MSR_RI))) { + __mtmsrd(MSR_RI, 1); + WARN_ON_ONCE(1); + } + } else { + WARN_ON_ONCE(mfmsr() & MSR_RI); + __mtmsrd(MSR_RI, 1); + } + vcpu->arch.regs.gpr[9] = exsave[EX_R9/sizeof(u64)]; vcpu->arch.regs.gpr[10] = exsave[EX_R10/sizeof(u64)]; vcpu->arch.regs.gpr[11] = exsave[EX_R11/sizeof(u64)]; diff --git a/arch/powerpc/kvm/book3s_hv_ras.c b/arch/powerpc/kvm/book3s_hv_ras.c index d4bca93b79f6..8d8a4d5f0b55 100644 --- a/arch/powerpc/kvm/book3s_hv_ras.c +++ b/arch/powerpc/kvm/book3s_hv_ras.c @@ -199,6 +199,8 @@ static void kvmppc_tb_resync_done(void) * know about the exact state of the TB value. Resync TB call will * restore TB to host timebase. * + * This could use the new OPAL_HANDLE_HMI2 to avoid resyncing TB every time. 
+ * * Things to consider: * - On TB error, HMI interrupt is reported on all the threads of the core * that has encountered TB error irrespective of split-core mode. From patchwork Mon Apr 5 01:19:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462208 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=NWlVgz69; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYg1Qdkz9sWX for ; Mon, 5 Apr 2021 11:21:59 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231854AbhDEBWC (ORCPT ); Sun, 4 Apr 2021 21:22:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40092 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231851AbhDEBWB (ORCPT ); Sun, 4 Apr 2021 21:22:01 -0400 Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [IPv6:2607:f8b0:4864:20::62a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CFEA7C061756 for ; Sun, 4 Apr 2021 18:21:55 -0700 (PDT) Received: by mail-pl1-x62a.google.com with SMTP id z12so863831plb.9 for ; Sun, 04 Apr 2021 18:21:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=S1/wLv6JH1sf1fLvgZJI8q0x4TVI1vTVyQKpt0OY8xU=; b=NWlVgz69yj6AWGgJ6H1qkeg0I17SG6FhY6bKCuPGszzvL8bcvL/M5+2KtsT6lcNMO+ H0it5QvZ0jh8d4zObf/uupQXBA5VK+LR/dBQO9lloi3fXMdVvpBRQJY3aumB6zYM9rpn /86H4PlqdyAlIWHKGnhGQMMHM0NsdJIDjOYqPsiegRzoIS4Gh3NoMhkjB010vjXdVGqF HiCxuw6IJEWhTDZV+rxCGC5FedTtKu34W8IKLtGFR0b83jl6eILYquxil6JFJLV8/obd etuJABWBuOJGTGoKp1rHhrR/Sld9tmTzx4CkvA0RqTsT4eShLtpxO9pnL/LWEsMq2W9t AiAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=S1/wLv6JH1sf1fLvgZJI8q0x4TVI1vTVyQKpt0OY8xU=; b=jPAsEd5VSZNKDBcGIEZwFGExeO5EqKiCoE13GnmNlPd9fPBBxpeQ62IT/3K0TQPwgl 1QBCHYgK+MphxgwjmOU1gLEbrqg2HdQc8duNzJbWXQlw5DOKisqmXzbR63OzeYnlfuMp wn2w1+UvdSQXgraxEq1Crhi8DWF/U4/533wGYLmtmpx0tLIsuCjY5qx2fw+Il4xAGcaX qjHtIxy82tqIUN2D3MOFI4pX/K7+qQbIrDhtUe1AHrsc+htxKRptwiaGJhN6xmWMLKaM gpJOIouzKfVHy9s1uKqt+h4CfPKHCZ/I0DTnIQqDGEECM5LzncnSHRBDUN5sA1jfI5/L XNRw== X-Gm-Message-State: AOAM533JEj4jsxGY0DvdfY1/UzaomneNKIW/FteuvmAD4VkvW105j98U yvY3wQqeHAqNJX4Tmxz/XAhjdrY0aam/Wg== X-Google-Smtp-Source: ABdhPJxkEQxlb7eZ2ibgPEgugM6FftrbQed36RZUUwugq3af2SEZ/9DUcu58amm9JQIaj8nFxO/BVw== X-Received: by 2002:a17:902:704b:b029:e9:b5e:5333 with SMTP id h11-20020a170902704bb02900e90b5e5333mr2890781plt.78.1617585715349; Sun, 04 Apr 2021 18:21:55 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:55 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , 
linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 33/48] KVM: PPC: Book3S HV P9: Improve exit timing accounting coverage Date: Mon, 5 Apr 2021 11:19:33 +1000 Message-Id: <20210405011948.675354-34-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The C conversion caused exit timing to become a bit cramped. Expand it to cover more of the entry and exit code. Signed-off-by: Nicholas Piggin Reviewed-by: Alexey Kardashevskiy --- arch/powerpc/kvm/book3s_hv_interrupt.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c index e93d2a6456ff..44c77f907f91 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupt.c +++ b/arch/powerpc/kvm/book3s_hv_interrupt.c @@ -154,6 +154,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (hdec < 0) return BOOK3S_INTERRUPT_HV_DECREMENTER; + start_timing(vcpu, &vcpu->arch.rm_entry); + if (vc->tb_offset) { u64 new_tb = tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); @@ -204,8 +206,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); - start_timing(vcpu, &vcpu->arch.rm_entry); - vcpu->arch.ceded = 0; WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_HV); @@ -349,8 +349,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc accumulate_time(vcpu, &vcpu->arch.rm_exit); - end_timing(vcpu); - /* Advance host PURR/SPURR by the amount used by guest */ purr = mfspr(SPRN_PURR); spurr = mfspr(SPRN_SPURR); @@ -415,6 +413,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc switch_mmu_to_host_radix(kvm, host_pidr); + end_timing(vcpu); + return trap; } EXPORT_SYMBOL_GPL(kvmhv_vcpu_entry_p9); From patchwork Mon Apr 5 01:19:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462209 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=muOHp/vI; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYk0rBwz9sWD for ; Mon, 5 Apr 2021 11:22:02 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231856AbhDEBWF (ORCPT ); Sun, 4 Apr 2021 21:22:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40104 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231851AbhDEBWE (ORCPT ); Sun, 4 Apr 2021 21:22:04 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C4CE4C061756 for ; Sun, 4 Apr 2021 18:21:58 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id cl21-20020a17090af695b02900c61ac0f0e9so7623056pjb.1 for ; Sun, 04 Apr 2021 18:21:58 -0700 (PDT) DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=TGKHP+bbZ5HpeqjL0ziVqhKR6G70TwNSVTHqxmQ2sOI=; b=muOHp/vI+RhL5vOz6Bu5jATo+MrWev1hms+tcMLsQUtMe+ZR1ua4Mx9NKk05l0798I UGSB3/BtoGPV6sMhIBTBiIeJMj0cioS81x4Hu5oT2TFkpL/Php2RE5C3rxQR4iwDlHUg WZ7E8hC1NBhTvy/5NqcUFjc11ZwRXkhu33wVqMoI2ZaqIlGEkzB6OJg+WfOTosvvPQML QCPHp+yYB4FX48toTHfpjjU/Y0Okw4cvhjJXeIXTM0YSyAYTHNLbtGICaqiaxBN0gazF COs6Chf8mZhflp7796ngJMJMqj47sd7WZhyj8KO56f5NkeArip96XsjTnFmIN3ZNMl+w xZUw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=TGKHP+bbZ5HpeqjL0ziVqhKR6G70TwNSVTHqxmQ2sOI=; b=EmQztZZ6e50aHqYO0w17nvxC5VGyjUZGtONC6WVdYkW1C2LcoSA6nkpNk8pS4GR9tJ KbtXcOxmTfYyJ6XRFxwE3eC8PBQZfmJY3jGZhEV5Q2bjRkD29Ft0gtilRJOod3AeOhga yC2cTzbdiMj88A6f8QdgBi6tzpKw0zUquSDS6o7L/zzZAxKwKnv0JO4xPelfJHzfwQdy rrZOtHZYxltEYlXce4PD/k2eLZQjuA3WBsg2s7o154tgJD45VAH5HKbV9XbX8u19fjF3 0V+cphrp1EyndLcDfyFId4a7OE9S+DJOfdiITqj8OLLg4AO3IyFyWJFEUCg/qe43pje1 LW7w== X-Gm-Message-State: AOAM533Zj7byXS+/sXGUvqkD+UwEnfnE50Kc9wwqWTfwXarmCZlTDK2M anzm6IQt3JIQkiZ9hSfq5/A5TaiHLaMVgA== X-Google-Smtp-Source: ABdhPJxcGNfJM1qoYqtN8hzaJAWk6fODQ6qJicUk+aq23Cv/V0CpmV8cqsQSUTETOoBV1Q84lo69vw== X-Received: by 2002:a17:902:c407:b029:e7:2272:d12e with SMTP id k7-20020a170902c407b02900e72272d12emr21804963plk.52.1617585718333; Sun, 04 Apr 2021 18:21:58 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:21:58 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 34/48] KVM: PPC: Book3S HV P9: Move SPR loading after expiry time check Date: Mon, 5 Apr 2021 11:19:34 +1000 Message-Id: <20210405011948.675354-35-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This is wasted work if the time limit is exceeded. 
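To make the reordering concrete, here is a condensed, illustrative sketch (not part of the patch; trimmed from the hunk below) of the resulting flow at the top of kvmhv_vcpu_entry_p9(), where the expiry check now runs before any host SPR snapshots:

    tb = mftb();
    hdec = time_limit - tb;
    if (hdec < 0)
        return BOOK3S_INTERRUPT_HV_DECREMENTER;  /* bail out before touching host SPRs */

    /* ... timing start and TB offset handling elided ... */

    msr = mfmsr();
    host_hfscr = mfspr(SPRN_HFSCR);
    host_ciabr = mfspr(SPRN_CIABR);
    /* ... and so on for the remaining host_* SPR snapshots, as in the diff below ... */
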
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_interrupt.c | 36 ++++++++++++++++---------- 1 file changed, 22 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c index 44c77f907f91..b12bf7c01460 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupt.c +++ b/arch/powerpc/kvm/book3s_hv_interrupt.c @@ -133,21 +133,16 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc u64 tb, purr, spurr; u64 *exsave; bool ri_set; - unsigned long msr = mfmsr(); int trap; - unsigned long host_hfscr = mfspr(SPRN_HFSCR); - unsigned long host_ciabr = mfspr(SPRN_CIABR); - unsigned long host_dawr0 = mfspr(SPRN_DAWR0); - unsigned long host_dawrx0 = mfspr(SPRN_DAWRX0); - unsigned long host_psscr = mfspr(SPRN_PSSCR); - unsigned long host_pidr = mfspr(SPRN_PID); - unsigned long host_dawr1 = 0; - unsigned long host_dawrx1 = 0; - - if (cpu_has_feature(CPU_FTR_DAWR1)) { - host_dawr1 = mfspr(SPRN_DAWR1); - host_dawrx1 = mfspr(SPRN_DAWRX1); - } + unsigned long msr; + unsigned long host_hfscr; + unsigned long host_ciabr; + unsigned long host_dawr0; + unsigned long host_dawrx0; + unsigned long host_psscr; + unsigned long host_pidr; + unsigned long host_dawr1; + unsigned long host_dawrx1; tb = mftb(); hdec = time_limit - tb; @@ -165,6 +160,19 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = vc->tb_offset; } + msr = mfmsr(); + + host_hfscr = mfspr(SPRN_HFSCR); + host_ciabr = mfspr(SPRN_CIABR); + host_dawr0 = mfspr(SPRN_DAWR0); + host_dawrx0 = mfspr(SPRN_DAWRX0); + host_psscr = mfspr(SPRN_PSSCR); + host_pidr = mfspr(SPRN_PID); + if (cpu_has_feature(CPU_FTR_DAWR1)) { + host_dawr1 = mfspr(SPRN_DAWR1); + host_dawrx1 = mfspr(SPRN_DAWRX1); + } + if (vc->pcr) mtspr(SPRN_PCR, vc->pcr | PCR_MASK); mtspr(SPRN_DPDES, vc->dpdes); From patchwork Mon Apr 5 01:19:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462210 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=K6snk4fx; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYr0DPfz9sX2 for ; Mon, 5 Apr 2021 11:22:08 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231851AbhDEBWL (ORCPT ); Sun, 4 Apr 2021 21:22:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40120 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231857AbhDEBWH (ORCPT ); Sun, 4 Apr 2021 21:22:07 -0400 Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com [IPv6:2607:f8b0:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 00744C061756 for ; Sun, 4 Apr 2021 18:22:01 -0700 (PDT) Received: by mail-pf1-x436.google.com with SMTP id a12so7200378pfc.7 for ; Sun, 04 Apr 2021 18:22:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references 
:mime-version:content-transfer-encoding; bh=n409qqh2Jq+wZToH5finj1D0nqz7fE20cnCS/z/mo98=; b=K6snk4fxGzLAdrYwoJOU3nIPAhCvz+YL9x5Kg1AD3ZKM24esz/INk/cHlJWcEAp/si p+L4u1jCKpy/dOEpmyVQWDmG4nhjf+KmAuJEDao3fTAH8NdHpHrCZOWbUbPpZA4AaZ1z 0DTTwSXdscxTdwOzdqbu71R2umOvE+5DBQa49/AspBl8ft477jsOP8UoG8K0OCSs7ivF LVCVmM5fnx1RkCDoEEaPnX1oOUhEw1HNnvc1q0533BWeDm1oWGHWdaLMDRdl6kag0Ed1 H/tIfluqvtr9RksKhpCNlm/Zq5TNjFKqUjIXECwZ9x6HFe892mYsy2fgTE17G/nneD1S E8dw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=n409qqh2Jq+wZToH5finj1D0nqz7fE20cnCS/z/mo98=; b=svpJ7WfaJuyG8r20AQgPa2GXXLoT7Eiux9LgOwKhvub+7NPYlvTPaGkRPDo4abEEG3 7wTFJutU+F69hqPCmfax5nKlIdqV/p4Z/rm6yFlxNoVlKnfsOydr5kFD8kzMBcfqkvp+ 0wlZVg533QCXIyHyLs6sE/ra9Ln5O5PP3NpsMRigAAeXrF4Uah4vRAIuI/0bP7K/AU2E o+INXs05LMGmedWCeyQEba5FgnDRwA3jhq8aSxOhkmOYW4DhDuuVp5EhmI/SDeKHsLG8 xHBEVwRK7wAelgqt3ABTK2K0HfceL3yIg5qzPWmVT0XW4NGmATuiiITRhuRBJgWQC/99 s9+Q== X-Gm-Message-State: AOAM5336Ox25cU1hfmphbZutlbLXyaeju7jdCHGDGy5U+GZH6EmoXw3i PDBFiWkbTzGbWFqf98SVExZHBo4iyPxnzg== X-Google-Smtp-Source: ABdhPJxVGQhHDwv12L1B/4NaN/OOvXeT4JBd8BCHiomki2GWTnFOqaPHBs6UVkLm0vcK0ldqP3CMzA== X-Received: by 2002:a05:6a00:1ad4:b029:216:aa9d:dcea with SMTP id f20-20020a056a001ad4b0290216aa9ddceamr21025779pfv.47.1617585721451; Sun, 04 Apr 2021 18:22:01 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.21.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:01 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 35/48] KVM: PPC: Book3S HV P9: Add helpers for OS SPR handling Date: Mon, 5 Apr 2021 11:19:35 +1000 Message-Id: <20210405011948.675354-36-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This is a first step to wrapping supervisor and user SPR saving and loading up into helpers, which will then be called independently in bare metal and nested HV cases in order to optimise SPR access. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 141 ++++++++++++++++++++++------------- 1 file changed, 89 insertions(+), 52 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 5f0ac6567a06..c2098464eb5e 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3498,6 +3498,89 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) trace_kvmppc_run_core(vc, 1); } +static void load_spr_state(struct kvm_vcpu *vcpu) +{ + mtspr(SPRN_DSCR, vcpu->arch.dscr); + mtspr(SPRN_IAMR, vcpu->arch.iamr); + mtspr(SPRN_PSPB, vcpu->arch.pspb); + mtspr(SPRN_FSCR, vcpu->arch.fscr); + mtspr(SPRN_TAR, vcpu->arch.tar); + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + mtspr(SPRN_BESCR, vcpu->arch.bescr); + mtspr(SPRN_WORT, vcpu->arch.wort); + mtspr(SPRN_TIDR, vcpu->arch.tid); + mtspr(SPRN_AMR, vcpu->arch.amr); + mtspr(SPRN_UAMOR, vcpu->arch.uamor); + + /* + * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] + * clear (or hstate set appropriately to catch those registers + * being clobbered if we take a MCE or SRESET), so those are done + * later. 
+ */ + + if (!(vcpu->arch.ctrl & 1)) + mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1); +} + +static void store_spr_state(struct kvm_vcpu *vcpu) +{ + vcpu->arch.ctrl = mfspr(SPRN_CTRLF); + + vcpu->arch.iamr = mfspr(SPRN_IAMR); + vcpu->arch.pspb = mfspr(SPRN_PSPB); + vcpu->arch.fscr = mfspr(SPRN_FSCR); + vcpu->arch.tar = mfspr(SPRN_TAR); + vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); + vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); + vcpu->arch.bescr = mfspr(SPRN_BESCR); + vcpu->arch.wort = mfspr(SPRN_WORT); + vcpu->arch.tid = mfspr(SPRN_TIDR); + vcpu->arch.amr = mfspr(SPRN_AMR); + vcpu->arch.uamor = mfspr(SPRN_UAMOR); + vcpu->arch.dscr = mfspr(SPRN_DSCR); +} + +/* + * Privileged (non-hypervisor) host registers to save. + */ +struct p9_host_os_sprs { + unsigned long dscr; + unsigned long tidr; + unsigned long iamr; + unsigned long amr; + unsigned long fscr; +}; + +static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) +{ + host_os_sprs->dscr = mfspr(SPRN_DSCR); + host_os_sprs->tidr = mfspr(SPRN_TIDR); + host_os_sprs->iamr = mfspr(SPRN_IAMR); + host_os_sprs->amr = mfspr(SPRN_AMR); + host_os_sprs->fscr = mfspr(SPRN_FSCR); +} + +/* vcpu guest regs must already be saved */ +static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + mtspr(SPRN_PSPB, 0); + mtspr(SPRN_WORT, 0); + mtspr(SPRN_UAMOR, 0); + + mtspr(SPRN_DSCR, host_os_sprs->dscr); + mtspr(SPRN_TIDR, host_os_sprs->tidr); + mtspr(SPRN_IAMR, host_os_sprs->iamr); + + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, host_os_sprs->amr); + + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, host_os_sprs->fscr); +} + static inline bool hcall_is_xics(unsigned long req) { return req == H_EOI || req == H_CPPR || req == H_IPI || @@ -3512,11 +3595,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr) { struct kvmppc_vcore *vc = vcpu->arch.vcore; - unsigned long host_dscr = mfspr(SPRN_DSCR); - unsigned long host_tidr = mfspr(SPRN_TIDR); - unsigned long host_iamr = mfspr(SPRN_IAMR); - unsigned long host_amr = mfspr(SPRN_AMR); - unsigned long host_fscr = mfspr(SPRN_FSCR); + struct p9_host_os_sprs host_os_sprs; s64 dec; u64 tb, next_timer; int trap, save_pmu; @@ -3530,6 +3609,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, if (next_timer < time_limit) time_limit = next_timer; + save_p9_host_os_sprs(&host_os_sprs); + kvmhv_save_host_pmu(); /* saves it to PACA kvm_hstate */ kvmppc_subcore_enter_guest(); @@ -3557,28 +3638,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, #endif mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - mtspr(SPRN_DSCR, vcpu->arch.dscr); - mtspr(SPRN_IAMR, vcpu->arch.iamr); - mtspr(SPRN_PSPB, vcpu->arch.pspb); - mtspr(SPRN_FSCR, vcpu->arch.fscr); - mtspr(SPRN_TAR, vcpu->arch.tar); - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); - mtspr(SPRN_WORT, vcpu->arch.wort); - mtspr(SPRN_TIDR, vcpu->arch.tid); - mtspr(SPRN_AMR, vcpu->arch.amr); - mtspr(SPRN_UAMOR, vcpu->arch.uamor); - - /* - * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] - * clear (or hstate set appropriately to catch those registers - * being clobbered if we take a MCE or SRESET), so those are done - * later. 
- */ - - if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1); + load_spr_state(vcpu); /* * When setting DEC, we must always deal with irq_work_raise via NMI vs @@ -3674,33 +3734,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.dec_expires = dec + tb; vcpu->cpu = -1; vcpu->arch.thread_cpu = -1; - vcpu->arch.ctrl = mfspr(SPRN_CTRLF); - - vcpu->arch.iamr = mfspr(SPRN_IAMR); - vcpu->arch.pspb = mfspr(SPRN_PSPB); - vcpu->arch.fscr = mfspr(SPRN_FSCR); - vcpu->arch.tar = mfspr(SPRN_TAR); - vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); - vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); - vcpu->arch.bescr = mfspr(SPRN_BESCR); - vcpu->arch.wort = mfspr(SPRN_WORT); - vcpu->arch.tid = mfspr(SPRN_TIDR); - vcpu->arch.amr = mfspr(SPRN_AMR); - vcpu->arch.uamor = mfspr(SPRN_UAMOR); - vcpu->arch.dscr = mfspr(SPRN_DSCR); - - mtspr(SPRN_PSPB, 0); - mtspr(SPRN_WORT, 0); - mtspr(SPRN_UAMOR, 0); - mtspr(SPRN_DSCR, host_dscr); - mtspr(SPRN_TIDR, host_tidr); - mtspr(SPRN_IAMR, host_iamr); - if (host_amr != vcpu->arch.amr) - mtspr(SPRN_AMR, host_amr); + store_spr_state(vcpu); - if (host_fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, host_fscr); + restore_p9_host_os_sprs(vcpu, &host_os_sprs); msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); store_fp_state(&vcpu->arch.fp); From patchwork Mon Apr 5 01:19:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462211 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=CvoKXKqB; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYr4Mznz9shx for ; Mon, 5 Apr 2021 11:22:08 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231656AbhDEBWM (ORCPT ); Sun, 4 Apr 2021 21:22:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40134 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231819AbhDEBWK (ORCPT ); Sun, 4 Apr 2021 21:22:10 -0400 Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com [IPv6:2607:f8b0:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E3413C061756 for ; Sun, 4 Apr 2021 18:22:04 -0700 (PDT) Received: by mail-pf1-x436.google.com with SMTP id c17so7196347pfn.6 for ; Sun, 04 Apr 2021 18:22:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ELleIfITWKdYl/6JRZ1bTw1HIDto3+vH9d6zAZjOFfA=; b=CvoKXKqB0mduB54xAuGzHReLCSvsf8NkTPhYy05NE3+u8LPDlsi5uRXZYqL5TvmsY0 m40f3L9opavU30fAP+EAA+ZZrnThUUYBIpRmJfGoPdXCW+x+PrFI9W0kCyAMyMWFCfLG 4hs8gXiXj4No3l8YxfVOQvskaTLk67RDDjIs+T5nF77R3n10GFVB8DLaMCdY4FJotO/w gTrz/opoSnKk4IC7wYT1qpfG53nxH+VUXEVweLyoP6/UMh+QLMh6NjXVKkIjY4GXzkZO zNfhwuXIuSpTqYG5o52C0dD9Mfd2jrlxbWPP0CQsKCGwbq6a/Y8jt3CM0pgrDaQ2RDTK 6mQg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
:references:mime-version:content-transfer-encoding; bh=ELleIfITWKdYl/6JRZ1bTw1HIDto3+vH9d6zAZjOFfA=; b=dWx+qW3/Wpo/eqOdRk5oPK/1AI3nnnkvWV/v4b0yF/6zgX8txSyV7Yo122OJnl6+CB XT/DbTVkKJ+jXNUCOr+chSKwObscwyHODQfJtOHXQgDRO7TkH8ikrrTwYbK6K2V8clfb 8im6chzSpR/zGKajupJn2nd16hu/fBlxfvAp2DeKwgYz7qJCClK0EYve+2H4uMiyAU6l J1wZyZjg+bQPgWQqg0zt3W7DG4Sl1KjlRID9USN/RMzEhGgytTP8AYC7MQN4VfMcV9Dn 3BC+Qof5TZHWSRb57ILjGBL+7Nt9YwS7szawn4qT1pv0ICg4U4s9PekVnBAnJZRR3jq0 xeNA== X-Gm-Message-State: AOAM531MnxSgou/QPXuixTpKMS4hsHXj4pBTjS+47FLj+6P+sW9oCP0I CrGGHyvQk/Q7ZdS2756rabw1N4h1kH7EMw== X-Google-Smtp-Source: ABdhPJyfvKxwVAuMt2iSIbkOrUQMNymo5IttS8bHrxfmb/HgpZvgpNsglEmUpT4W8ZFYq/n7GxICZQ== X-Received: by 2002:a63:c48:: with SMTP id 8mr20794300pgm.74.1617585724463; Sun, 04 Apr 2021 18:22:04 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:04 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 36/48] KVM: PPC: Book3S HV P9: Switch to guest MMU context as late as possible Date: Mon, 5 Apr 2021 11:19:36 +1000 Message-Id: <20210405011948.675354-37-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move MMU context switch as late as reasonably possible to minimise code running with guest context switched in. This becomes more important when this code may run in real-mode, with later changes. Move WARN_ON as early as possible so program check interrupts are less likely to tangle everything up. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_interrupt.c | 40 +++++++++++++------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c index b12bf7c01460..a430cefb822a 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupt.c +++ b/arch/powerpc/kvm/book3s_hv_interrupt.c @@ -149,8 +149,13 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (hdec < 0) return BOOK3S_INTERRUPT_HV_DECREMENTER; + WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_HV); + WARN_ON_ONCE(!(vcpu->arch.shregs.msr & MSR_ME)); + start_timing(vcpu, &vcpu->arch.rm_entry); + vcpu->arch.ceded = 0; + if (vc->tb_offset) { u64 new_tb = tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); @@ -199,26 +204,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_HFSCR, vcpu->arch.hfscr); - mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0); - mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1); - mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2); - mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3); - - mtspr(SPRN_AMOR, ~0UL); - - switch_mmu_to_guest_radix(kvm, vcpu, lpcr); - - /* - * P9 suppresses the HDEC exception when LPCR[HDICE] = 0, - * so set guest LPCR (with HDICE) before writing HDEC. 
- */ - mtspr(SPRN_HDEC, hdec); - - vcpu->arch.ceded = 0; - - WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_HV); - WARN_ON_ONCE(!(vcpu->arch.shregs.msr & MSR_ME)); - mtspr(SPRN_HSRR0, vcpu->arch.regs.nip); mtspr(SPRN_HSRR1, (vcpu->arch.shregs.msr & ~MSR_HV) | MSR_ME); @@ -237,6 +222,21 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDSISR, HDSISR_CANARY); + mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0); + mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1); + mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2); + mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3); + + mtspr(SPRN_AMOR, ~0UL); + + switch_mmu_to_guest_radix(kvm, vcpu, lpcr); + + /* + * P9 suppresses the HDEC exception when LPCR[HDICE] = 0, + * so set guest LPCR (with HDICE) before writing HDEC. + */ + mtspr(SPRN_HDEC, hdec); + __mtmsrd(0, 1); /* clear RI */ mtspr(SPRN_DAR, vcpu->arch.shregs.dar); From patchwork Mon Apr 5 01:19:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462212 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=djnwdAS9; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYv44M3z9sWX for ; Mon, 5 Apr 2021 11:22:11 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231809AbhDEBWP (ORCPT ); Sun, 4 Apr 2021 21:22:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40142 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231779AbhDEBWN (ORCPT ); Sun, 4 Apr 2021 21:22:13 -0400 Received: from mail-pg1-x52b.google.com (mail-pg1-x52b.google.com [IPv6:2607:f8b0:4864:20::52b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 243BFC061756 for ; Sun, 4 Apr 2021 18:22:08 -0700 (PDT) Received: by mail-pg1-x52b.google.com with SMTP id q10so7182145pgj.2 for ; Sun, 04 Apr 2021 18:22:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=xVpGx6J8jXeCgXQCNAslVFUsqG92W7pP5cB+3xfiUn4=; b=djnwdAS9xmKlDalzFRs023aq2vqUPJoJYntf3+xFG41axbI52b506ma1RKh9JOQgLU rYo4JsjxKOidObi3f9Q9BsmH27bgOzyQEucm/YXCAgpmNR+DLorP9dtexfN6rTcE7qq4 aoIpgF2Bwlz2jBNlWPNFlJucOvO1tcwNE984juPYT2SZJXT0O6QFc3ebBZTXCBY9AV/D 5UAO8Y9xpFSbjSuIaxF5L/8e4llgq8mAaHmUlN1Z9sueEVQ8fttCODYOTp924LKDHJWw c75qBRayrGg1lIcssbsRGPAYndkh02kP+yTVq5x4XOzFBTrDjDK3I1TU2+AWAxwq7/3K nbmw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=xVpGx6J8jXeCgXQCNAslVFUsqG92W7pP5cB+3xfiUn4=; b=M9Dy0PbeUguRye54tQKHxrTJBScCJvbMDk8TSB4BxKLFGTskexXokKsUcXl8R8M3KX FSwUpP1xxODr+RIBXG2P4htZfFc7O41pGyO0lJTHxOo0oEpYf+U9OXDI4QWovL9wAo7z sbSueIt8oUNle4GeacNfuBVNpwN/8Afzf4nGaOvy4uUUYrVzmyWKWcVmsX8I0jxAp4Wq nKAP/op6ilFE4n7zafpPf9P+HSuCgEetmwFdEuzqGVH3ACQVds1LqwYXxfZ0oEMTAIMN 
dMTQ0JtgH4SLVSG0z+9/UJLIpJXDPif6VkAmj77Faa5v0ksL2V6JYA0yNJpb2UwXnQRH Qm+g== X-Gm-Message-State: AOAM532rq2l0A9rFvbcFmfz8+RgiInugiFYh6hRtOafy1jThxAkAw3/0 Gf5V3vsimrxkvIJzy8isY2dJRQjUUxuS5Q== X-Google-Smtp-Source: ABdhPJwzbOM6VB4idDT9tXRCEXRHzad6hvzta3ZnEKS8puVyFaFX+ueXOK+dRSZtRXKwk5HbpE6IPQ== X-Received: by 2002:a63:570b:: with SMTP id l11mr21148326pgb.193.1617585727459; Sun, 04 Apr 2021 18:22:07 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:07 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 37/48] KVM: PPC: Book3S HV: Implement radix prefetch workaround by disabling MMU Date: Mon, 5 Apr 2021 11:19:37 +1000 Message-Id: <20210405011948.675354-38-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rather than partition the guest PID space + flush a rogue guest PID to work around this problem, instead fix it by always disabling the MMU when switching in or out of guest MMU context in HV mode. This may be a bit less efficient, but it is a lot less complicated and allows the P9 path to trivally implement the workaround too. Newer CPUs are not subject to this issue. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/mmu_context.h | 6 ---- arch/powerpc/kvm/book3s_hv.c | 21 +++++++---- arch/powerpc/kvm/book3s_hv_interrupt.c | 16 ++++++--- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 34 ------------------ arch/powerpc/mm/book3s64/radix_pgtable.c | 27 +++++--------- arch/powerpc/mm/book3s64/radix_tlb.c | 46 ------------------------ arch/powerpc/mm/mmu_context.c | 4 +-- 7 files changed, 36 insertions(+), 118 deletions(-) diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h index 652ce85f9410..bb5c7e5e142e 100644 --- a/arch/powerpc/include/asm/mmu_context.h +++ b/arch/powerpc/include/asm/mmu_context.h @@ -122,12 +122,6 @@ static inline bool need_extra_context(struct mm_struct *mm, unsigned long ea) } #endif -#if defined(CONFIG_KVM_BOOK3S_HV_POSSIBLE) && defined(CONFIG_PPC_RADIX_MMU) -extern void radix_kvm_prefetch_workaround(struct mm_struct *mm); -#else -static inline void radix_kvm_prefetch_workaround(struct mm_struct *mm) { } -#endif - extern void switch_cop(struct mm_struct *next); extern int use_cop(unsigned long acop, struct mm_struct *mm); extern void drop_cop(unsigned long acop, struct mm_struct *mm); diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index c2098464eb5e..21e13f93235a 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -809,6 +809,9 @@ static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags, */ if (mflags != 0 && mflags != 3) return H_UNSUPPORTED_FLAG_START; + if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG) && + kvmhv_vcpu_is_radix(vcpu) && mflags == 3) + return H_UNSUPPORTED_FLAG_START; return H_TOO_HARD; default: return H_TOO_HARD; @@ -1674,6 +1677,14 @@ unsigned long kvmppc_filter_lpcr_hv(struct kvm *kvm, unsigned long lpcr) lpcr &= ~LPCR_AIL; if ((lpcr & LPCR_AIL) != LPCR_AIL_3) lpcr &= ~LPCR_AIL; /* LPCR[AIL]=1/2 is disallowed */ + /* + * On some POWER9s we force AIL off for radix guests to prevent + * executing in 
MSR[HV]=1 mode with the MMU enabled and PIDR set to + * guest, which can result in Q0 translations with LPID=0 PID=PIDR to + * be cached, which the host TLB management does not expect. + */ + if (kvm_is_radix(kvm) && cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + lpcr &= ~LPCR_AIL; /* * On POWER9, allow userspace to enable large decrementer for the @@ -4349,12 +4360,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; do { - /* - * The TLB prefetch bug fixup is only in the kvmppc_run_vcpu - * path, which also handles hash and dependent threads mode. - */ - if (kvm->arch.threads_indep && kvm_is_radix(kvm) && - !cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + if (kvm->arch.threads_indep && kvm_is_radix(kvm)) r = kvmhv_run_single_vcpu(vcpu, ~(u64)0, vcpu->arch.vcore->lpcr); else @@ -4985,6 +4991,9 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm) if (!indep_threads_mode && !cpu_has_feature(CPU_FTR_HVMODE)) { pr_warn("KVM: Ignoring indep_threads_mode=N in nested hypervisor\n"); kvm->arch.threads_indep = true; + } else if (!indep_threads_mode && cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) { + pr_warn("KVM: Ignoring indep_threads_mode=N on pre-DD2.2 POWER9\n"); + kvm->arch.threads_indep = true; } else { kvm->arch.threads_indep = indep_threads_mode; } diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c index a430cefb822a..f09b11ea2033 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupt.c +++ b/arch/powerpc/kvm/book3s_hv_interrupt.c @@ -229,6 +229,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_AMOR, ~0UL); + if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + __mtmsrd(msr & ~(MSR_IR|MSR_DR|MSR_RI), 0); + switch_mmu_to_guest_radix(kvm, vcpu, lpcr); /* @@ -237,7 +240,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); - __mtmsrd(0, 1); /* clear RI */ + if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + __mtmsrd(0, 1); /* clear RI */ mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); @@ -352,9 +356,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc radix_clear_slb(); - __mtmsrd(msr, 0); - mtspr(SPRN_CTRLT, 1); - accumulate_time(vcpu, &vcpu->arch.rm_exit); /* Advance host PURR/SPURR by the amount used by guest */ @@ -421,6 +422,13 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc switch_mmu_to_host_radix(kvm, host_pidr); + /* + * If we are in real mode, only switch MMU on after the MMU is + * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. + */ + __mtmsrd(msr, 0); + mtspr(SPRN_CTRLT, 1); + end_timing(vcpu); return trap; diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 85c2595ead8d..9fd7e9e7fda6 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1710,40 +1710,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) eieio tlbsync ptesync - -BEGIN_FTR_SECTION - /* Radix: Handle the case where the guest used an illegal PID */ - LOAD_REG_ADDR(r4, mmu_base_pid) - lwz r3, VCPU_GUEST_PID(r9) - lwz r5, 0(r4) - cmpw cr0,r3,r5 - blt 2f - - /* - * Illegal PID, the HW might have prefetched and cached in the TLB - * some translations for the LPID 0 / guest PID combination which - * Linux doesn't know about, so we need to flush that PID out of - * the TLB. 
First we need to set LPIDR to 0 so tlbiel applies to - * the right context. - */ - li r0,0 - mtspr SPRN_LPID,r0 - isync - - /* Then do a congruence class local flush */ - ld r6,VCPU_KVM(r9) - lwz r0,KVM_TLB_SETS(r6) - mtctr r0 - li r7,0x400 /* IS field = 0b01 */ - ptesync - sldi r0,r3,32 /* RS has PID */ -1: PPC_TLBIEL(7,0,2,1,1) /* RIC=2, PRS=1, R=1 */ - addi r7,r7,0x1000 - bdnz 1b - ptesync -END_FTR_SECTION_IFSET(CPU_FTR_P9_RADIX_PREFETCH_BUG) - -2: #endif /* CONFIG_PPC_RADIX_MMU */ /* diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c index 98f0b243c1ab..1ea95891a79e 100644 --- a/arch/powerpc/mm/book3s64/radix_pgtable.c +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c @@ -357,30 +357,19 @@ static void __init radix_init_pgtable(void) } /* Find out how many PID bits are supported */ - if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) { - if (!mmu_pid_bits) - mmu_pid_bits = 20; - mmu_base_pid = 1; - } else if (cpu_has_feature(CPU_FTR_HVMODE)) { - if (!mmu_pid_bits) - mmu_pid_bits = 20; -#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE + if (!cpu_has_feature(CPU_FTR_HVMODE) && + cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) { /* - * When KVM is possible, we only use the top half of the - * PID space to avoid collisions between host and guest PIDs - * which can cause problems due to prefetch when exiting the - * guest with AIL=3 + * Older versions of KVM on these machines perfer if the + * guest only uses the low 19 PID bits. */ - mmu_base_pid = 1 << (mmu_pid_bits - 1); -#else - mmu_base_pid = 1; -#endif - } else { - /* The guest uses the bottom half of the PID space */ if (!mmu_pid_bits) mmu_pid_bits = 19; - mmu_base_pid = 1; + } else { + if (!mmu_pid_bits) + mmu_pid_bits = 20; } + mmu_base_pid = 1; /* * Allocate Partition table and process table for the diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c index 409e61210789..312236a6b085 100644 --- a/arch/powerpc/mm/book3s64/radix_tlb.c +++ b/arch/powerpc/mm/book3s64/radix_tlb.c @@ -1336,49 +1336,3 @@ void radix__flush_tlb_all(void) : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(0) : "memory"); asm volatile("eieio; tlbsync; ptesync": : :"memory"); } - -#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE -extern void radix_kvm_prefetch_workaround(struct mm_struct *mm) -{ - unsigned long pid = mm->context.id; - - if (unlikely(pid == MMU_NO_CONTEXT)) - return; - - if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) - return; - - /* - * If this context hasn't run on that CPU before and KVM is - * around, there's a slim chance that the guest on another - * CPU just brought in obsolete translation into the TLB of - * this CPU due to a bad prefetch using the guest PID on - * the way into the hypervisor. - * - * We work around this here. If KVM is possible, we check if - * any sibling thread is in KVM. If it is, the window may exist - * and thus we flush that PID from the core. - * - * A potential future improvement would be to mark which PIDs - * have never been used on the system and avoid it if the PID - * is new and the process has no other cpumask bit set. 
- */ - if (cpu_has_feature(CPU_FTR_HVMODE) && radix_enabled()) { - int cpu = smp_processor_id(); - int sib = cpu_first_thread_sibling(cpu); - bool flush = false; - - for (; sib <= cpu_last_thread_sibling(cpu) && !flush; sib++) { - if (sib == cpu) - continue; - if (!cpu_possible(sib)) - continue; - if (paca_ptrs[sib]->kvm_hstate.kvm_vcpu) - flush = true; - } - if (flush) - _tlbiel_pid(pid, RIC_FLUSH_ALL); - } -} -EXPORT_SYMBOL_GPL(radix_kvm_prefetch_workaround); -#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */ diff --git a/arch/powerpc/mm/mmu_context.c b/arch/powerpc/mm/mmu_context.c index 18f20da0d348..7479d39976c9 100644 --- a/arch/powerpc/mm/mmu_context.c +++ b/arch/powerpc/mm/mmu_context.c @@ -81,9 +81,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next, if (cpu_has_feature(CPU_FTR_ALTIVEC)) asm volatile ("dssall"); - if (new_on_cpu) - radix_kvm_prefetch_workaround(next); - else + if (!new_on_cpu) membarrier_arch_switch_mm(prev, next, tsk); /* From patchwork Mon Apr 5 01:19:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462213 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ubfLRA8v; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCYy6m2pz9sX2 for ; Mon, 5 Apr 2021 11:22:14 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231819AbhDEBWS (ORCPT ); Sun, 4 Apr 2021 21:22:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40154 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231779AbhDEBWQ (ORCPT ); Sun, 4 Apr 2021 21:22:16 -0400 Received: from mail-pg1-x52e.google.com (mail-pg1-x52e.google.com [IPv6:2607:f8b0:4864:20::52e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A9628C061756 for ; Sun, 4 Apr 2021 18:22:10 -0700 (PDT) Received: by mail-pg1-x52e.google.com with SMTP id q10so7182196pgj.2 for ; Sun, 04 Apr 2021 18:22:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=fU8rVkLPEHoGeyqW2eB3kTtDzLVfSoiwh9umkMxq39o=; b=ubfLRA8vtVLCQI8/PfHtoiTKhRJinzORY7U2/Iqf8xWIR0GYMU0rXJlAhdj7PfSpsQ yHTISvePqMMRvSaJekCbtD2nj6EhpCLMUOL6U+V9EUDrcUFywhGf+aJolKXXp8Mrzige IjNbSpiJcoB2Hpg1Yp/U/g/7sJo3qH76Eg33x155S34TvrIabm0yZm/W8bWDEpIINfN2 BYTrAZawl54NLW11HyGiZBQfkOHmvdsESEjPvputVuxSuMPfv6AgtbRl88uzpHHgD7am LJ3V31yf5ogQYnpgFNkFV039doZbMzKqfGUPwg3XsCqe97d9kyssU9VSl7C2F7qOLAn8 tehA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=fU8rVkLPEHoGeyqW2eB3kTtDzLVfSoiwh9umkMxq39o=; b=uCIEM8jksfwMZSswRL1U0ib1/mvus+GK/PNkh2sp/1hRFdQ8gZhSlyiAftD92Qlhv0 tnc9BH0BTqZNdM9TopRPB6lMIFKEzHqwBLV/YlX1tnmHBnGV/7Yi9FvIQ29MtHg8qt8r azTD48ZNa5XRTwYOSBnp0HoJRuYZZKoTJYBbN4OCtFdkgzEV9gPJn1bSsDSfSAyX+jjl 
jGp1e7MhI8BSqNuDARkkQ6fqiB7iUh5Ec/PSYJt/ZEzCxVUP799RDv159ZMf5P+dUmLJ KcPkVerlnGwSjh/zyJR00Tcm82ycMQDpJvKPqjHyTd/wogf56RWBhHVx8lctfiRg7YIt TlzQ== X-Gm-Message-State: AOAM533a+ApUXPgZIgvqlHZ7xb/Pxk8imovbeteslGszZVV0K3waacQV P8LAT4XzJ2lKYjyeJqfUXmjCHzL7LtX9LA== X-Google-Smtp-Source: ABdhPJxpbUPbFpaB42uNqOZR+mV9rupEDrjjm2d5eLCjRD47aw1Sypjfg9qPTgWZwcO+2VM4AiHQRw== X-Received: by 2002:a65:640f:: with SMTP id a15mr20435369pgv.121.1617585730176; Sun, 04 Apr 2021 18:22:10 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:10 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 38/48] KVM: PPC: Book3S HV: Remove support for dependent threads mode on P9 Date: Mon, 5 Apr 2021 11:19:38 +1000 Message-Id: <20210405011948.675354-39-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Radix guest support will be removed from the P7/8 path, so disallow dependent threads mode on P9. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_host.h | 1 - arch/powerpc/kvm/book3s_hv.c | 27 +++++---------------------- 2 files changed, 5 insertions(+), 23 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index fa0083345b11..102928811da9 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -304,7 +304,6 @@ struct kvm_arch { u8 fwnmi_enabled; u8 secure_guest; u8 svm_enabled; - bool threads_indep; bool nested_enable; bool dawr1_enabled; pgd_t *pgtable; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 21e13f93235a..20ced6c5edfd 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -103,13 +103,9 @@ static int target_smt_mode; module_param(target_smt_mode, int, 0644); MODULE_PARM_DESC(target_smt_mode, "Target threads per core (0 = max)"); -static bool indep_threads_mode = true; -module_param(indep_threads_mode, bool, S_IRUGO | S_IWUSR); -MODULE_PARM_DESC(indep_threads_mode, "Independent-threads mode (only on POWER9)"); - static bool one_vm_per_core; module_param(one_vm_per_core, bool, S_IRUGO | S_IWUSR); -MODULE_PARM_DESC(one_vm_per_core, "Only run vCPUs from the same VM on a core (requires indep_threads_mode=N)"); +MODULE_PARM_DESC(one_vm_per_core, "Only run vCPUs from the same VM on a core (requires POWER8 or older)"); #ifdef CONFIG_KVM_XICS static const struct kernel_param_ops module_param_ops = { @@ -2264,7 +2260,7 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, */ static int threads_per_vcore(struct kvm *kvm) { - if (kvm->arch.threads_indep) + if (cpu_has_feature(CPU_FTR_ARCH_300)) return 1; return threads_per_subcore; } @@ -4360,7 +4356,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; do { - if (kvm->arch.threads_indep && kvm_is_radix(kvm)) + if (kvm_is_radix(kvm)) r = kvmhv_run_single_vcpu(vcpu, ~(u64)0, vcpu->arch.vcore->lpcr); else @@ -4984,21 +4980,8 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm) /* * Track that we now have a HV mode VM active. This blocks secondary * CPU threads from coming online. 
- * On POWER9, we only need to do this if the "indep_threads_mode" - * module parameter has been set to N. */ - if (cpu_has_feature(CPU_FTR_ARCH_300)) { - if (!indep_threads_mode && !cpu_has_feature(CPU_FTR_HVMODE)) { - pr_warn("KVM: Ignoring indep_threads_mode=N in nested hypervisor\n"); - kvm->arch.threads_indep = true; - } else if (!indep_threads_mode && cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) { - pr_warn("KVM: Ignoring indep_threads_mode=N on pre-DD2.2 POWER9\n"); - kvm->arch.threads_indep = true; - } else { - kvm->arch.threads_indep = indep_threads_mode; - } - } - if (!kvm->arch.threads_indep) + if (!cpu_has_feature(CPU_FTR_ARCH_300)) kvm_hv_vm_activated(); /* @@ -5039,7 +5022,7 @@ static void kvmppc_core_destroy_vm_hv(struct kvm *kvm) { debugfs_remove_recursive(kvm->arch.debugfs_dir); - if (!kvm->arch.threads_indep) + if (!cpu_has_feature(CPU_FTR_ARCH_300)) kvm_hv_vm_deactivated(); kvmppc_free_vcores(kvm); From patchwork Mon Apr 5 01:19:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462214 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ZVNP/laJ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCZ210JZz9sWX for ; Mon, 5 Apr 2021 11:22:18 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231811AbhDEBWV (ORCPT ); Sun, 4 Apr 2021 21:22:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40168 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231857AbhDEBWS (ORCPT ); Sun, 4 Apr 2021 21:22:18 -0400 Received: from mail-pg1-x52f.google.com (mail-pg1-x52f.google.com [IPv6:2607:f8b0:4864:20::52f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F79CC061756 for ; Sun, 4 Apr 2021 18:22:13 -0700 (PDT) Received: by mail-pg1-x52f.google.com with SMTP id b17so3516205pgh.7 for ; Sun, 04 Apr 2021 18:22:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Bgj6ZVQ1/Pl35CJQp9ZmWiGl78cl6oSaKcj2HrE6tyU=; b=ZVNP/laJhtVaKvXRjxshoPIdoljBUNtxTvK9sao0rTs9jEpxJ/c62v0of6Xl1psgZC IUZg00342UB2oOWeBCEJ9fptDd+2DuUiwKNEXXCA0jHCx2gJ/cULsSyc4qkzAEbT84L9 Rfm8+cpZwrCxdT2+WtFyn6bOOU72UCvaQzJGPwK4PaWm79EiK4yhtuSeuQvquj2aYT/6 yo6zzMoq1Zke0Q7FNuvtxzxQAlZS555g6BGlfkNKhmKM6+0J8ifOE1KOdXNww18M5RuY fmkFF18xj83mwfWYo9KDBjq5vXfrO0VGTBnEcPLGjLtxYnIjynsVEJNko4bT3unApw9/ /hyA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Bgj6ZVQ1/Pl35CJQp9ZmWiGl78cl6oSaKcj2HrE6tyU=; b=HtrYOqq/5bQmwV4Xr4d0FvIrHpusg9NsIP8G+0861QsNfI/w/xh+fY7ipfG4WjT2mX zz8OAE7R+PcvIPK3ME+Yyy5x04EKB4SSpxL3PcFd92qdVOXE1MhiW4E9zsrKFNzApv+t VNMUNJByjJINIFRo0r2UC/sc/jBw3m/WkzlK32fujRYOGd+rKGUAAK2a+KvAmweMemEe 
FnmWCROKLdJGbQib0ShDsdV/tYawBWN1Ay1I4FbY18t1NkkUMU6covyZzROcY+hfCXXs m4GUTVm1g3r3oIt6V5d2GQg2+3Rg6l4wkT8pNb0VkSVLa5U+fScRMuSpTHCgvKE9CgMK guOQ== X-Gm-Message-State: AOAM5337Bd98qPPWxkWVy/p5t4QSzXsED5mq2JgewjpefnXoILupnN2c 8P11YFGNb34NAIeqHUKiCXYKTqBEVFyrLw== X-Google-Smtp-Source: ABdhPJyuYqDj0Zk+VuhdDqhUapBontLUeX9LzAYDIfH8Y+Y0A1K+GOtpjXIU6nxsdLIsn//5eau/dA== X-Received: by 2002:a62:7ad6:0:b029:20f:eaee:deed with SMTP id v205-20020a627ad60000b029020feaeedeedmr21346691pfc.37.1617585732756; Sun, 04 Apr 2021 18:22:12 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:12 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 39/48] KVM: PPC: Book3S HV: Remove radix guest support from P7/8 path Date: Mon, 5 Apr 2021 11:19:39 +1000 Message-Id: <20210405011948.675354-40-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The P9 path now runs all supported radix guest combinations, so remove radix guest support from the P7/8 path. Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/asm-offsets.c | 1 - arch/powerpc/kvm/book3s_hv_rmhandlers.S | 103 +----------------------- 2 files changed, 3 insertions(+), 101 deletions(-) diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c index f3a662201a9f..0da926258ab2 100644 --- a/arch/powerpc/kernel/asm-offsets.c +++ b/arch/powerpc/kernel/asm-offsets.c @@ -551,7 +551,6 @@ int main(void) OFFSET(VCPU_SLB_NR, kvm_vcpu, arch.slb_nr); OFFSET(VCPU_FAULT_DSISR, kvm_vcpu, arch.fault_dsisr); OFFSET(VCPU_FAULT_DAR, kvm_vcpu, arch.fault_dar); - OFFSET(VCPU_FAULT_GPA, kvm_vcpu, arch.fault_gpa); OFFSET(VCPU_INTR_MSR, kvm_vcpu, arch.intr_msr); OFFSET(VCPU_LAST_INST, kvm_vcpu, arch.last_inst); OFFSET(VCPU_TRAP, kvm_vcpu, arch.trap); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 9fd7e9e7fda6..87fa8bda3515 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -133,15 +133,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) /* Return the trap number on this thread as the return value */ mr r3, r12 - /* - * If we came back from the guest via a relocation-on interrupt, - * we will be in virtual mode at this point, which makes it a - * little easier to get back to the caller. - */ - mfmsr r0 - andi. r0, r0, MSR_IR /* in real mode? 
*/ - bne .Lvirt_return - /* RFI into the highmem handler */ mfmsr r6 li r0, MSR_RI @@ -151,11 +142,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) mtsrr1 r7 RFI_TO_KERNEL - /* Virtual-mode return */ -.Lvirt_return: - mtlr r8 - blr - kvmppc_primary_no_guest: /* We handle this much like a ceded vcpu */ /* put the HDEC into the DEC, since HDEC interrupts don't wake us */ @@ -899,11 +885,6 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300) cmpdi r3, 512 /* 1 microsecond */ blt hdec_soon - ld r6, VCPU_KVM(r4) - lbz r0, KVM_RADIX(r6) - cmpwi r0, 0 - bne 9f - /* For hash guest, clear out and reload the SLB */ BEGIN_MMU_FTR_SECTION /* Radix host won't have populated the SLB, so no need to clear */ @@ -1091,12 +1072,8 @@ BEGIN_FTR_SECTION mtspr SPRN_HDSISR, r0 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) - ld r6, VCPU_KVM(r4) - lbz r7, KVM_SECURE_GUEST(r6) - cmpdi r7, 0 ld r6, VCPU_GPR(R6)(r4) ld r7, VCPU_GPR(R7)(r4) - bne ret_to_ultra ld r0, VCPU_CR(r4) mtcr r0 @@ -1107,26 +1084,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) ld r4, VCPU_GPR(R4)(r4) HRFI_TO_GUEST b . -/* - * Use UV_RETURN ultracall to return control back to the Ultravisor after - * processing an hypercall or interrupt that was forwarded (a.k.a. reflected) - * to the Hypervisor. - * - * All registers have already been loaded, except: - * R0 = hcall result - * R2 = SRR1, so UV can detect a synthesized interrupt (if any) - * R3 = UV_RETURN - */ -ret_to_ultra: - ld r0, VCPU_CR(r4) - mtcr r0 - - ld r0, VCPU_GPR(R3)(r4) - mfspr r2, SPRN_SRR1 - li r3, 0 - ori r3, r3, UV_RETURN - ld r4, VCPU_GPR(R4)(r4) - sc 2 secondary_too_late: li r12, 0 @@ -1389,11 +1346,7 @@ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */ patch_site 1b patch__call_kvm_flush_link_stack /* For hash guest, read the guest SLB and save it away */ - ld r5, VCPU_KVM(r9) - lbz r0, KVM_RADIX(r5) li r5, 0 - cmpwi r0, 0 - bne 0f /* for radix, save 0 entries */ lwz r0,VCPU_SLB_NR(r9) /* number of entries in SLB */ mtctr r0 li r6,0 @@ -1432,23 +1385,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX) slbmte r6,r5 1: addi r8,r8,16 .endr - b guest_bypass - -0: /* - * Malicious or buggy radix guests may have inserted SLB entries - * (only 0..3 because radix always runs with UPRT=1), so these must - * be cleared here to avoid side-channels. slbmte is used rather - * than slbia, as it won't clear cached translations. - */ - li r0,0 - stw r0,VCPU_SLB_MAX(r9) - slbmte r0,r0 - li r4,1 - slbmte r0,r4 - li r4,2 - slbmte r0,r4 - li r4,3 - slbmte r0,r4 guest_bypass: stw r12, STACK_SLOT_TRAP(r1) @@ -1694,24 +1630,6 @@ BEGIN_FTR_SECTION mtspr SPRN_PID, r7 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) -#ifdef CONFIG_PPC_RADIX_MMU - /* - * Are we running hash or radix ? - */ - ld r5, VCPU_KVM(r9) - lbz r0, KVM_RADIX(r5) - cmpwi cr2, r0, 0 - beq cr2, 2f - - /* - * Radix: do eieio; tlbsync; ptesync sequence in case we - * interrupted the guest between a tlbie and a ptesync. - */ - eieio - tlbsync - ptesync -#endif /* CONFIG_PPC_RADIX_MMU */ - /* * cp_abort is required if the processor supports local copy-paste * to clear the copy buffer that was under control of the guest. @@ -1970,8 +1888,6 @@ kvmppc_tm_emul: * reflect the HDSI to the guest as a DSI. */ kvmppc_hdsi: - ld r3, VCPU_KVM(r9) - lbz r0, KVM_RADIX(r3) mfspr r4, SPRN_HDAR mfspr r6, SPRN_HDSISR BEGIN_FTR_SECTION @@ -1979,8 +1895,6 @@ BEGIN_FTR_SECTION cmpdi r6, 0x7fff beq 6f END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) - cmpwi r0, 0 - bne .Lradix_hdsi /* on radix, just save DAR/DSISR/ASDR */ /* HPTE not found fault or protection fault? */ andis. 
r0, r6, (DSISR_NOHPTE | DSISR_PROTFAULT)@h beq 1f /* if not, send it to the guest */ @@ -2057,23 +1971,11 @@ fast_interrupt_c_return: stb r0, HSTATE_IN_GUEST(r13) b guest_exit_cont -.Lradix_hdsi: - std r4, VCPU_FAULT_DAR(r9) - stw r6, VCPU_FAULT_DSISR(r9) -.Lradix_hisi: - mfspr r5, SPRN_ASDR - std r5, VCPU_FAULT_GPA(r9) - b guest_exit_cont - /* * Similarly for an HISI, reflect it to the guest as an ISI unless * it is an HPTE not found fault for a page that we have paged out. */ kvmppc_hisi: - ld r3, VCPU_KVM(r9) - lbz r0, KVM_RADIX(r3) - cmpwi r0, 0 - bne .Lradix_hisi /* for radix, just save ASDR */ andis. r0, r11, SRR1_ISI_NOPT@h beq 1f andi. r0, r11, MSR_IR /* instruction relocation enabled? */ @@ -3217,15 +3119,16 @@ BEGIN_FTR_SECTION mtspr SPRN_DAWRX1, r0 END_FTR_SECTION_IFSET(CPU_FTR_DAWR1) - /* Clear hash and radix guest SLB. */ + /* Clear guest SLB. */ slbmte r0, r0 PPC_SLBIA(6) + ptesync BEGIN_MMU_FTR_SECTION b 4f END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX) - ptesync + /* load host SLB entries */ ld r8, PACA_SLBSHADOWPTR(r13) .rept SLB_NUM_BOLTED li r3, SLBSHADOW_SAVEAREA From patchwork Mon Apr 5 01:19:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462216 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=uOJB+Ghy; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCZ447HVz9sWD for ; Mon, 5 Apr 2021 11:22:20 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231857AbhDEBWX (ORCPT ); Sun, 4 Apr 2021 21:22:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40178 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231858AbhDEBWW (ORCPT ); Sun, 4 Apr 2021 21:22:22 -0400 Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com [IPv6:2607:f8b0:4864:20::635]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BFF7BC061756 for ; Sun, 4 Apr 2021 18:22:16 -0700 (PDT) Received: by mail-pl1-x635.google.com with SMTP id a6so1598898pls.1 for ; Sun, 04 Apr 2021 18:22:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=x1na3vKWwEVLh/wkax9yvasb6uioVZX1qkzSylCi2Js=; b=uOJB+Ghy53Ghc8UzNECb4Nf0+lGPr+JWma/TJ6fF/BGpu+5++Wg+HMFjkZO+WYbe1j OKCQUkIZlz02/1D6MawQDCEQrLRQe8rbUYM7NXGpu0OkQn9ziJl1i0r/IAm6CjoSCpgj IGv4LuFe5geIjpkpP7ZBAJSkvvT9Mx9eVIxHi9EqhvSfVbP9YgYHkXbPd2rhqyMTjlTa ylvz86VqhbIFn8mGn4kAZhknIGmi/Q8RaGnS4JB+e/sUh8zZBfCO7RNnrGfleGBquSbM BZuvupBGyCe3BIAzh/OvxMsqzGaZ/La+/KqIQN0ORpzprM9XzXMFU1nUN75UlgjDRkaU tyog== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=x1na3vKWwEVLh/wkax9yvasb6uioVZX1qkzSylCi2Js=; b=rkN3qZe0/BBC8ZNLVkp7j+QagmUFo+3CByIW3mdPDbaH5p1dnEboKN4b2ek0z+quMi 
BYgzSm2+Sl7oc3+phI3oK3hc9j2hcr82oAAKDmWn93yfbcbcks9DLIX5gHqiUUkLQ7qg SIz+FOutJuWq2o3CsROUED8FNDS9i7n4T5iQ4TGi3ifR64W2KwbyM9wNcPx0lHK8+IR9 3NpPfQi/mZoJdDYrxwaxcaSi6doTNkeD8jnwI3r7VXSUdXyKmOjCh/skkfWM8Ib45kz6 SVx1KKH4qgw1UMSjGIAtv0YyBBIWW+R2YAzXGtUf14ILif1NodIvhSHJHHkjix3qD6Ib y7zw== X-Gm-Message-State: AOAM532P+bJfM9w8QJXkJ8pftZ5BLwqt8fJKVYWJdaFTXfLnvG19gXKy TmD6YOrtQJQKFr2YHlOUcGU085m94hdn/A== X-Google-Smtp-Source: ABdhPJwcO3PSZjxZHKnHQzXBvAAaNe2vJZtVwpFsI++nW9t9O5Pc9xDjIHBlaCON+A97iNPEwVOWnA== X-Received: by 2002:a17:90b:360b:: with SMTP id ml11mr23488807pjb.98.1617585736153; Sun, 04 Apr 2021 18:22:16 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:15 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, =?utf-8?q?C=C3=A9dric_Le_Goater?= Subject: [PATCH v6 40/48] KVM: PPC: Book3S HV: Remove virt mode checks from real mode handlers Date: Mon, 5 Apr 2021 11:19:40 +1000 Message-Id: <20210405011948.675354-41-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Now that the P7/8 path no longer supports radix, real-mode handlers do not need to deal with being called in virt mode. This change effectively reverts commit acde25726bc6 ("KVM: PPC: Book3S HV: Add radix checks in real-mode hypercall handlers"). It removes a few more real-mode tests in rm hcall handlers, which allows the indirect ops for the xive module to be removed from the built-in xics rm handlers. kvmppc_h_random is renamed to kvmppc_rm_h_random to be a bit more descriptive and consistent with other rm handlers. 
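As a quick illustration of the shape of this change, here is a minimal stand-alone C model of one of the affected handlers (kvmppc_rm_h_xirr). It is not kernel code: the stub functions and numeric values are placeholders invented for illustration, and only the control flow mirrors the hunk further down, where the is_rm() test and the module-populated __xive_vm_h_xirr fallback collapse into a direct call to the real-mode XIVE handler.

/*
 * Stand-alone model of the dispatch simplification; builds with any C compiler.
 * Not kernel code: stub bodies and return values are placeholders only.
 */
#include <stdbool.h>
#include <stdio.h>

#define H_SUCCESS        0
#define H_NOT_AVAILABLE  1   /* placeholder, not the real hcall return code */

static long xive_rm_h_xirr(void) { return H_SUCCESS; }  /* real-mode XIVE handler */
static long xics_rm_h_xirr(void) { return H_SUCCESS; }  /* real-mode XICS handler */
static long (*__xive_vm_h_xirr)(void);                  /* was filled in by the XIVE module */

static bool xics_on_xive(void) { return true; }
static bool is_rm(void)        { return true; }         /* old test: entered in real mode? */

/* Before: the built-in handler had to cope with being entered in virtual mode. */
static long old_kvmppc_rm_h_xirr(void)
{
	if (xics_on_xive()) {
		if (is_rm())
			return xive_rm_h_xirr();
		if (!__xive_vm_h_xirr)          /* XIVE module not loaded */
			return H_NOT_AVAILABLE;
		return __xive_vm_h_xirr();      /* indirect virt-mode op */
	}
	return xics_rm_h_xirr();
}

/* After: only real-mode entry is possible, so the indirection disappears. */
static long new_kvmppc_rm_h_xirr(void)
{
	if (xics_on_xive())
		return xive_rm_h_xirr();
	return xics_rm_h_xirr();
}

int main(void)
{
	printf("old path: %ld, new path: %ld\n",
	       old_kvmppc_rm_h_xirr(), new_kvmppc_rm_h_xirr());
	return 0;
}

With radix guests no longer entering the P7/8 real-mode path, these handlers are only ever reached in real mode, which is what makes deleting the virt-mode branch and the indirect ops safe.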
Reviewed-by: Cédric Le Goater Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_ppc.h | 10 +-- arch/powerpc/kvm/book3s.c | 11 +-- arch/powerpc/kvm/book3s_64_vio_hv.c | 12 ---- arch/powerpc/kvm/book3s_hv_builtin.c | 91 ++++++------------------- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 2 +- arch/powerpc/kvm/book3s_xive.c | 18 ----- arch/powerpc/kvm/book3s_xive.h | 7 -- arch/powerpc/kvm/book3s_xive_native.c | 10 --- 8 files changed, 23 insertions(+), 138 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index cbd8deba2e8a..896e309431fb 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -660,8 +660,6 @@ extern int kvmppc_xive_get_xive(struct kvm *kvm, u32 irq, u32 *server, u32 *priority); extern int kvmppc_xive_int_on(struct kvm *kvm, u32 irq); extern int kvmppc_xive_int_off(struct kvm *kvm, u32 irq); -extern void kvmppc_xive_init_module(void); -extern void kvmppc_xive_exit_module(void); extern int kvmppc_xive_connect_vcpu(struct kvm_device *dev, struct kvm_vcpu *vcpu, u32 cpu); @@ -687,8 +685,6 @@ static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu) extern int kvmppc_xive_native_connect_vcpu(struct kvm_device *dev, struct kvm_vcpu *vcpu, u32 cpu); extern void kvmppc_xive_native_cleanup_vcpu(struct kvm_vcpu *vcpu); -extern void kvmppc_xive_native_init_module(void); -extern void kvmppc_xive_native_exit_module(void); extern int kvmppc_xive_native_get_vp(struct kvm_vcpu *vcpu, union kvmppc_one_reg *val); extern int kvmppc_xive_native_set_vp(struct kvm_vcpu *vcpu, @@ -702,8 +698,6 @@ static inline int kvmppc_xive_get_xive(struct kvm *kvm, u32 irq, u32 *server, u32 *priority) { return -1; } static inline int kvmppc_xive_int_on(struct kvm *kvm, u32 irq) { return -1; } static inline int kvmppc_xive_int_off(struct kvm *kvm, u32 irq) { return -1; } -static inline void kvmppc_xive_init_module(void) { } -static inline void kvmppc_xive_exit_module(void) { } static inline int kvmppc_xive_connect_vcpu(struct kvm_device *dev, struct kvm_vcpu *vcpu, u32 cpu) { return -EBUSY; } @@ -726,8 +720,6 @@ static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu) static inline int kvmppc_xive_native_connect_vcpu(struct kvm_device *dev, struct kvm_vcpu *vcpu, u32 cpu) { return -EBUSY; } static inline void kvmppc_xive_native_cleanup_vcpu(struct kvm_vcpu *vcpu) { } -static inline void kvmppc_xive_native_init_module(void) { } -static inline void kvmppc_xive_native_exit_module(void) { } static inline int kvmppc_xive_native_get_vp(struct kvm_vcpu *vcpu, union kvmppc_one_reg *val) { return 0; } @@ -763,7 +755,7 @@ long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu, unsigned long tce_value, unsigned long npages); long int kvmppc_rm_h_confer(struct kvm_vcpu *vcpu, int target, unsigned int yield_count); -long kvmppc_h_random(struct kvm_vcpu *vcpu); +long kvmppc_rm_h_random(struct kvm_vcpu *vcpu); void kvmhv_commence_exit(int trap); void kvmppc_realmode_machine_check(struct kvm_vcpu *vcpu); void kvmppc_subcore_enter_guest(void); diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c index 1658f899e289..d3bd509b4093 100644 --- a/arch/powerpc/kvm/book3s.c +++ b/arch/powerpc/kvm/book3s.c @@ -1052,13 +1052,10 @@ static int kvmppc_book3s_init(void) #ifdef CONFIG_KVM_XICS #ifdef CONFIG_KVM_XIVE if (xics_on_xive()) { - kvmppc_xive_init_module(); kvm_register_device_ops(&kvm_xive_ops, KVM_DEV_TYPE_XICS); - if (kvmppc_xive_native_supported()) { - kvmppc_xive_native_init_module(); + if 
(kvmppc_xive_native_supported()) kvm_register_device_ops(&kvm_xive_native_ops, KVM_DEV_TYPE_XIVE); - } } else #endif kvm_register_device_ops(&kvm_xics_ops, KVM_DEV_TYPE_XICS); @@ -1068,12 +1065,6 @@ static int kvmppc_book3s_init(void) static void kvmppc_book3s_exit(void) { -#ifdef CONFIG_KVM_XICS - if (xics_on_xive()) { - kvmppc_xive_exit_module(); - kvmppc_xive_native_exit_module(); - } -#endif #ifdef CONFIG_KVM_BOOK3S_32_HANDLER kvmppc_book3s_exit_pr(); #endif diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c index 083a4e037718..dc6591548f0c 100644 --- a/arch/powerpc/kvm/book3s_64_vio_hv.c +++ b/arch/powerpc/kvm/book3s_64_vio_hv.c @@ -391,10 +391,6 @@ long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn, /* udbg_printf("H_PUT_TCE(): liobn=0x%lx ioba=0x%lx, tce=0x%lx\n", */ /* liobn, ioba, tce); */ - /* For radix, we might be in virtual mode, so punt */ - if (kvm_is_radix(vcpu->kvm)) - return H_TOO_HARD; - stt = kvmppc_find_table(vcpu->kvm, liobn); if (!stt) return H_TOO_HARD; @@ -489,10 +485,6 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu, bool prereg = false; struct kvmppc_spapr_tce_iommu_table *stit; - /* For radix, we might be in virtual mode, so punt */ - if (kvm_is_radix(vcpu->kvm)) - return H_TOO_HARD; - /* * used to check for invalidations in progress */ @@ -602,10 +594,6 @@ long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu, long i, ret; struct kvmppc_spapr_tce_iommu_table *stit; - /* For radix, we might be in virtual mode, so punt */ - if (kvm_is_radix(vcpu->kvm)) - return H_TOO_HARD; - stt = kvmppc_find_table(vcpu->kvm, liobn); if (!stt) return H_TOO_HARD; diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c index 7a0e33a9c980..8d669a0e15f8 100644 --- a/arch/powerpc/kvm/book3s_hv_builtin.c +++ b/arch/powerpc/kvm/book3s_hv_builtin.c @@ -34,21 +34,6 @@ #include "book3s_xics.h" #include "book3s_xive.h" -/* - * The XIVE module will populate these when it loads - */ -unsigned long (*__xive_vm_h_xirr)(struct kvm_vcpu *vcpu); -unsigned long (*__xive_vm_h_ipoll)(struct kvm_vcpu *vcpu, unsigned long server); -int (*__xive_vm_h_ipi)(struct kvm_vcpu *vcpu, unsigned long server, - unsigned long mfrr); -int (*__xive_vm_h_cppr)(struct kvm_vcpu *vcpu, unsigned long cppr); -int (*__xive_vm_h_eoi)(struct kvm_vcpu *vcpu, unsigned long xirr); -EXPORT_SYMBOL_GPL(__xive_vm_h_xirr); -EXPORT_SYMBOL_GPL(__xive_vm_h_ipoll); -EXPORT_SYMBOL_GPL(__xive_vm_h_ipi); -EXPORT_SYMBOL_GPL(__xive_vm_h_cppr); -EXPORT_SYMBOL_GPL(__xive_vm_h_eoi); - /* * Hash page table alignment on newer cpus(CPU_FTR_ARCH_206) * should be power of 2. 
@@ -196,16 +181,9 @@ int kvmppc_hwrng_present(void) } EXPORT_SYMBOL_GPL(kvmppc_hwrng_present); -long kvmppc_h_random(struct kvm_vcpu *vcpu) +long kvmppc_rm_h_random(struct kvm_vcpu *vcpu) { - int r; - - /* Only need to do the expensive mfmsr() on radix */ - if (kvm_is_radix(vcpu->kvm) && (mfmsr() & MSR_IR)) - r = powernv_get_random_long(&vcpu->arch.regs.gpr[4]); - else - r = powernv_get_random_real_mode(&vcpu->arch.regs.gpr[4]); - if (r) + if (powernv_get_random_real_mode(&vcpu->arch.regs.gpr[4])) return H_SUCCESS; return H_HARDWARE; @@ -541,22 +519,13 @@ static long kvmppc_read_one_intr(bool *again) } #ifdef CONFIG_KVM_XICS -static inline bool is_rm(void) -{ - return !(mfmsr() & MSR_DR); -} - unsigned long kvmppc_rm_h_xirr(struct kvm_vcpu *vcpu) { if (!kvmppc_xics_enabled(vcpu)) return H_TOO_HARD; - if (xics_on_xive()) { - if (is_rm()) - return xive_rm_h_xirr(vcpu); - if (unlikely(!__xive_vm_h_xirr)) - return H_NOT_AVAILABLE; - return __xive_vm_h_xirr(vcpu); - } else + if (xics_on_xive()) + return xive_rm_h_xirr(vcpu); + else return xics_rm_h_xirr(vcpu); } @@ -565,13 +534,9 @@ unsigned long kvmppc_rm_h_xirr_x(struct kvm_vcpu *vcpu) if (!kvmppc_xics_enabled(vcpu)) return H_TOO_HARD; vcpu->arch.regs.gpr[5] = get_tb(); - if (xics_on_xive()) { - if (is_rm()) - return xive_rm_h_xirr(vcpu); - if (unlikely(!__xive_vm_h_xirr)) - return H_NOT_AVAILABLE; - return __xive_vm_h_xirr(vcpu); - } else + if (xics_on_xive()) + return xive_rm_h_xirr(vcpu); + else return xics_rm_h_xirr(vcpu); } @@ -579,13 +544,9 @@ unsigned long kvmppc_rm_h_ipoll(struct kvm_vcpu *vcpu, unsigned long server) { if (!kvmppc_xics_enabled(vcpu)) return H_TOO_HARD; - if (xics_on_xive()) { - if (is_rm()) - return xive_rm_h_ipoll(vcpu, server); - if (unlikely(!__xive_vm_h_ipoll)) - return H_NOT_AVAILABLE; - return __xive_vm_h_ipoll(vcpu, server); - } else + if (xics_on_xive()) + return xive_rm_h_ipoll(vcpu, server); + else return H_TOO_HARD; } @@ -594,13 +555,9 @@ int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server, { if (!kvmppc_xics_enabled(vcpu)) return H_TOO_HARD; - if (xics_on_xive()) { - if (is_rm()) - return xive_rm_h_ipi(vcpu, server, mfrr); - if (unlikely(!__xive_vm_h_ipi)) - return H_NOT_AVAILABLE; - return __xive_vm_h_ipi(vcpu, server, mfrr); - } else + if (xics_on_xive()) + return xive_rm_h_ipi(vcpu, server, mfrr); + else return xics_rm_h_ipi(vcpu, server, mfrr); } @@ -608,13 +565,9 @@ int kvmppc_rm_h_cppr(struct kvm_vcpu *vcpu, unsigned long cppr) { if (!kvmppc_xics_enabled(vcpu)) return H_TOO_HARD; - if (xics_on_xive()) { - if (is_rm()) - return xive_rm_h_cppr(vcpu, cppr); - if (unlikely(!__xive_vm_h_cppr)) - return H_NOT_AVAILABLE; - return __xive_vm_h_cppr(vcpu, cppr); - } else + if (xics_on_xive()) + return xive_rm_h_cppr(vcpu, cppr); + else return xics_rm_h_cppr(vcpu, cppr); } @@ -622,13 +575,9 @@ int kvmppc_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr) { if (!kvmppc_xics_enabled(vcpu)) return H_TOO_HARD; - if (xics_on_xive()) { - if (is_rm()) - return xive_rm_h_eoi(vcpu, xirr); - if (unlikely(!__xive_vm_h_eoi)) - return H_NOT_AVAILABLE; - return __xive_vm_h_eoi(vcpu, xirr); - } else + if (xics_on_xive()) + return xive_rm_h_eoi(vcpu, xirr); + else return xics_rm_h_eoi(vcpu, xirr); } #endif /* CONFIG_KVM_XICS */ diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 87fa8bda3515..1bc5a3805a26 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -2299,7 +2299,7 @@ hcall_real_table: #else .long 0 /* 0x2fc - 
H_XIRR_X*/ #endif - .long DOTSYM(kvmppc_h_random) - hcall_real_table + .long DOTSYM(kvmppc_rm_h_random) - hcall_real_table .globl hcall_real_table_end hcall_real_table_end: diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c index 24c07094651a..9268d386b128 100644 --- a/arch/powerpc/kvm/book3s_xive.c +++ b/arch/powerpc/kvm/book3s_xive.c @@ -2352,21 +2352,3 @@ struct kvm_device_ops kvm_xive_ops = { .get_attr = xive_get_attr, .has_attr = xive_has_attr, }; - -void kvmppc_xive_init_module(void) -{ - __xive_vm_h_xirr = xive_vm_h_xirr; - __xive_vm_h_ipoll = xive_vm_h_ipoll; - __xive_vm_h_ipi = xive_vm_h_ipi; - __xive_vm_h_cppr = xive_vm_h_cppr; - __xive_vm_h_eoi = xive_vm_h_eoi; -} - -void kvmppc_xive_exit_module(void) -{ - __xive_vm_h_xirr = NULL; - __xive_vm_h_ipoll = NULL; - __xive_vm_h_ipi = NULL; - __xive_vm_h_cppr = NULL; - __xive_vm_h_eoi = NULL; -} diff --git a/arch/powerpc/kvm/book3s_xive.h b/arch/powerpc/kvm/book3s_xive.h index 86c24a4ad809..afe9eeac6d56 100644 --- a/arch/powerpc/kvm/book3s_xive.h +++ b/arch/powerpc/kvm/book3s_xive.h @@ -289,13 +289,6 @@ extern int xive_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server, extern int xive_rm_h_cppr(struct kvm_vcpu *vcpu, unsigned long cppr); extern int xive_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr); -extern unsigned long (*__xive_vm_h_xirr)(struct kvm_vcpu *vcpu); -extern unsigned long (*__xive_vm_h_ipoll)(struct kvm_vcpu *vcpu, unsigned long server); -extern int (*__xive_vm_h_ipi)(struct kvm_vcpu *vcpu, unsigned long server, - unsigned long mfrr); -extern int (*__xive_vm_h_cppr)(struct kvm_vcpu *vcpu, unsigned long cppr); -extern int (*__xive_vm_h_eoi)(struct kvm_vcpu *vcpu, unsigned long xirr); - /* * Common Xive routines for XICS-over-XIVE and XIVE native */ diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c index 76800c84f2a3..1253666dd4d8 100644 --- a/arch/powerpc/kvm/book3s_xive_native.c +++ b/arch/powerpc/kvm/book3s_xive_native.c @@ -1281,13 +1281,3 @@ struct kvm_device_ops kvm_xive_native_ops = { .has_attr = kvmppc_xive_native_has_attr, .mmap = kvmppc_xive_native_mmap, }; - -void kvmppc_xive_native_init_module(void) -{ - ; -} - -void kvmppc_xive_native_exit_module(void) -{ - ; -} From patchwork Mon Apr 5 01:19:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462217 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=JvJIGJzo; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCZ754YHz9sWY for ; Mon, 5 Apr 2021 11:22:23 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231779AbhDEBW1 (ORCPT ); Sun, 4 Apr 2021 21:22:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40192 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231820AbhDEBWZ (ORCPT ); Sun, 4 Apr 2021 21:22:25 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 608E1C061756 for ; Sun, 4 Apr 2021 18:22:20 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id ha17so5270968pjb.2 for ; Sun, 04 Apr 2021 18:22:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=YHRWFD3iHSSdLF286j0f1zwOIMoz2IhYKJedkP23IR8=; b=JvJIGJzomKpzMbmRIRPSXtRrRHIxLRz5l/L5tuzTn38pX8rVLd1Kro+AiOuze3gKWh 9KcYPQT3CQeNHuvKguiIooQV9u/Cxpfo6j0Hxez3tsNW1KM6l95i/IM0KoYLvdbhsM37 5qEHzRZYEYDmVIcJRzgmAGlLlOBnRmSd8VFoxrjVJPbURN64k8OlMeail2+HCP/DO1Up 4s/OT7vuqtWP6MBNNDUzYhIcEhuY02Bf9foRfUqaY7s3NgDrSmv982BFBi9yjrSqS0Tz Ynyqn204HCaq5tx4EL9wa9ZyZYSU9ioSHKuniD9l4yi8UBYpOCK2shslRwzWJv28OmOv bUBw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=YHRWFD3iHSSdLF286j0f1zwOIMoz2IhYKJedkP23IR8=; b=n9Vv23SwnUvayXrtopEWuH0xQdjlQSHAuG5P7iWNMKdHVSZ38vq63XWSNV1cVusU2h Su9Mv8zM9U8FgR1OECUjdhPX+4nDnotqQATzM19BNgH+W9uiqfkfsVGOy/dpwvjUDWgN L+HzLdCZ7ghsipoMBqtuC/JuTv5R+H5j8G8md/ZFc/lv31uHWTy6/qXRsmAYnoYbjKrb D1rAetGcy2u4CBdjqOTif1P5YCujo2n3V9Y5C0901UDRFxpGjDvals7M0G/nrEfCApVY Q4aZXlHWErgh5/HJgBbQ7zDK+rBKSkukZDe2JMs3h4dZ6B70cvBt6XwHbB71CTbqwj7t zdhQ== X-Gm-Message-State: AOAM531Xo3ABxVSR5+RjPY6NQqPQlAYRxiUVIWYCog7vtf6Q54sYWVe2 s7DBxT8iP+dRLkRcOghnczpTvcwAJ3B70Q== X-Google-Smtp-Source: ABdhPJxAjwm6qLDiYzk1zZD0bQiuwzBDB0BDdHsYixr0VCHB7L9/1GoyZnnAVX/pEb6veMAcbjlFBQ== X-Received: by 2002:a17:90b:f93:: with SMTP id ft19mr3828835pjb.135.1617585739856; Sun, 04 Apr 2021 18:22:19 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:19 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, =?utf-8?q?C=C3=A9dric_Le_Goater?= Subject: [PATCH v6 41/48] KVM: PPC: Book3S HV: Remove unused nested HV tests in XICS emulation Date: Mon, 5 Apr 2021 11:19:41 +1000 Message-Id: <20210405011948.675354-42-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Commit f3c18e9342a44 ("KVM: PPC: Book3S HV: Use XICS hypercalls when running as a nested hypervisor") added nested HV tests in XICS hypercalls, but not all are required. * icp_eoi is only called by kvmppc_deliver_irq_passthru which is only called by kvmppc_check_passthru which is only caled by kvmppc_read_one_intr. * kvmppc_read_one_intr is only called by kvmppc_read_intr which is only called by the L0 HV rmhandlers code. * kvmhv_rm_send_ipi is called by: - kvmhv_interrupt_vcore which is only called by kvmhv_commence_exit which is only called by the L0 HV rmhandlers code. - icp_send_hcore_msg which is only called by icp_rm_set_vcpu_irq. - icp_rm_set_vcpu_irq which is only called by icp_rm_try_update - icp_rm_set_vcpu_irq is not nested HV safe because it writes to LPCR directly without a kvmhv_on_pseries test. Nested handlers should not in general be using the rm handlers. The important test seems to be in kvmppc_ipi_thread, which sends the virt-mode H_IPI handler kick to use smp_call_function rather than msgsnd. 
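The call-chain argument above amounts to deleting branches that can no longer be reached. The stand-alone C sketch below models that for the IPI path; it is not kernel code (the helper bodies are invented stubs and the feature test is a stand-in for the CPU_FTR_ARCH_300 check), but the branch being removed mirrors the kvmhv_rm_send_ipi hunk in the diff that follows.

/*
 * Stand-alone sketch of the dead-branch removal; builds with any C compiler.
 * Not kernel code: helper bodies are invented stubs for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

static bool kvmhv_on_pseries(void) { return false; }  /* true only when running as a nested (L1) HV */
static bool cpu_has_msgsnd(void)   { return true; }   /* stand-in for cpu_has_feature(CPU_FTR_ARCH_300) */

static void hcall_h_ipi(int cpu)   { printf("H_IPI hcall -> cpu %d\n", cpu); }
static void msgsnd_ipi(int cpu)    { printf("msgsnd doorbell -> cpu %d\n", cpu); }
static void xics_mmio_ipi(int cpu) { printf("XICS MFRR write -> cpu %d\n", cpu); }

/* Before: the real-mode IPI path carried a nested-HV branch "just in case". */
static void old_rm_send_ipi(int cpu)
{
	if (kvmhv_on_pseries()) {   /* dead in practice: only L0 rmhandlers code calls this */
		hcall_h_ipi(cpu);
		return;
	}
	if (cpu_has_msgsnd())
		msgsnd_ipi(cpu);
	else
		xics_mmio_ipi(cpu);
}

/* After: the caller is always the L0 real-mode path, so the branch is gone. */
static void new_rm_send_ipi(int cpu)
{
	if (cpu_has_msgsnd())
		msgsnd_ipi(cpu);
	else
		xics_mmio_ipi(cpu);
}

int main(void)
{
	old_rm_send_ipi(1);
	new_rm_send_ipi(1);
	return 0;
}
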
Cc: Cédric Le Goater Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_builtin.c | 44 +++++----------------------- arch/powerpc/kvm/book3s_hv_rm_xics.c | 15 ---------- 2 files changed, 8 insertions(+), 51 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c index 8d669a0e15f8..259492bb4153 100644 --- a/arch/powerpc/kvm/book3s_hv_builtin.c +++ b/arch/powerpc/kvm/book3s_hv_builtin.c @@ -199,15 +199,6 @@ void kvmhv_rm_send_ipi(int cpu) void __iomem *xics_phys; unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER); - /* For a nested hypervisor, use the XICS via hcall */ - if (kvmhv_on_pseries()) { - unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; - - plpar_hcall_raw(H_IPI, retbuf, get_hard_smp_processor_id(cpu), - IPI_PRIORITY); - return; - } - /* On POWER9 we can use msgsnd for any destination cpu. */ if (cpu_has_feature(CPU_FTR_ARCH_300)) { msg |= get_hard_smp_processor_id(cpu); @@ -420,19 +411,12 @@ static long kvmppc_read_one_intr(bool *again) return 1; /* Now read the interrupt from the ICP */ - if (kvmhv_on_pseries()) { - unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; - - rc = plpar_hcall_raw(H_XIRR, retbuf, 0xFF); - xirr = cpu_to_be32(retbuf[0]); - } else { - xics_phys = local_paca->kvm_hstate.xics_phys; - rc = 0; - if (!xics_phys) - rc = opal_int_get_xirr(&xirr, false); - else - xirr = __raw_rm_readl(xics_phys + XICS_XIRR); - } + xics_phys = local_paca->kvm_hstate.xics_phys; + rc = 0; + if (!xics_phys) + rc = opal_int_get_xirr(&xirr, false); + else + xirr = __raw_rm_readl(xics_phys + XICS_XIRR); if (rc < 0) return 1; @@ -461,13 +445,7 @@ static long kvmppc_read_one_intr(bool *again) */ if (xisr == XICS_IPI) { rc = 0; - if (kvmhv_on_pseries()) { - unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; - - plpar_hcall_raw(H_IPI, retbuf, - hard_smp_processor_id(), 0xff); - plpar_hcall_raw(H_EOI, retbuf, h_xirr); - } else if (xics_phys) { + if (xics_phys) { __raw_rm_writeb(0xff, xics_phys + XICS_MFRR); __raw_rm_writel(xirr, xics_phys + XICS_XIRR); } else { @@ -493,13 +471,7 @@ static long kvmppc_read_one_intr(bool *again) /* We raced with the host, * we need to resend that IPI, bummer */ - if (kvmhv_on_pseries()) { - unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; - - plpar_hcall_raw(H_IPI, retbuf, - hard_smp_processor_id(), - IPI_PRIORITY); - } else if (xics_phys) + if (xics_phys) __raw_rm_writeb(IPI_PRIORITY, xics_phys + XICS_MFRR); else diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c index c2c9c733f359..0a11ec88a0ae 100644 --- a/arch/powerpc/kvm/book3s_hv_rm_xics.c +++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c @@ -141,13 +141,6 @@ static void icp_rm_set_vcpu_irq(struct kvm_vcpu *vcpu, return; } - if (xive_enabled() && kvmhv_on_pseries()) { - /* No XICS access or hypercalls available, too hard */ - this_icp->rm_action |= XICS_RM_KICK_VCPU; - this_icp->rm_kick_target = vcpu; - return; - } - /* * Check if the core is loaded, * if not, find an available host core to post to wake the VCPU, @@ -771,14 +764,6 @@ static void icp_eoi(struct irq_chip *c, u32 hwirq, __be32 xirr, bool *again) void __iomem *xics_phys; int64_t rc; - if (kvmhv_on_pseries()) { - unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; - - iosync(); - plpar_hcall_raw(H_EOI, retbuf, hwirq); - return; - } - rc = pnv_opal_pci_msi_eoi(c, hwirq); if (rc) From patchwork Mon Apr 5 01:19:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462218 Return-Path: 
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 42/48] KVM: PPC: Book3S HV P9: Allow all P9 processors to enable nested HV Date: Mon, 5 Apr 2021 11:19:42 +1000 Message-Id: <20210405011948.675354-43-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List:
kvm-ppc@vger.kernel.org All radix guests go via the P9 path now, so there is no need to limit nested HV to processors that support "mixed mode" MMU. Remove the restriction. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 20ced6c5edfd..5ef43d9b19bc 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -5443,7 +5443,7 @@ static int kvmhv_enable_nested(struct kvm *kvm) { if (!nested) return -EPERM; - if (!cpu_has_feature(CPU_FTR_ARCH_300) || no_mixing_hpt_and_radix) + if (!cpu_has_feature(CPU_FTR_ARCH_300)) return -ENODEV; /* kvm == NULL means the caller is testing if the capability exists */ From patchwork Mon Apr 5 01:19:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462219 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=HHbOVKHi; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCZG7576z9sWX for ; Mon, 5 Apr 2021 11:22:30 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231830AbhDEBWd (ORCPT ); Sun, 4 Apr 2021 21:22:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40220 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231820AbhDEBWc (ORCPT ); Sun, 4 Apr 2021 21:22:32 -0400 Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com [IPv6:2607:f8b0:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2FAEEC061756 for ; Sun, 4 Apr 2021 18:22:26 -0700 (PDT) Received: by mail-pg1-x535.google.com with SMTP id w10so1678187pgh.5 for ; Sun, 04 Apr 2021 18:22:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=pdmRKC0+A/lxg5MleIGjZCeeIx1sWRVwvGao9UDQW6o=; b=HHbOVKHiOBWr+G8xFW7RIRSZv8tUFBfxIdYuJfm3IEJ25RcDfky9eSniF+vD1y6Yd+ 0Z+yFBaSDjBo+WM2/UNo+8mG5L9h9vpITNQ6a71YAd4lckcH2ocjiFhY+YFnk24+b0ih M/C60OQ0YC4on9bSP7XR/BcL5jJmzt4A/QQvEJ6RxYKAO1OPE1YfyQ6xLH21ZckjAkMs W6gpPix22tLER1Pb/H635cRroX5jnLygmNTIF3mgn8h4b1QL/hAWfFrVXiJPF1BuyTdr +vfQDIi6DCBpZPzdtebaRC+iIst8Mnvao/XHrYzpKSCog5nkagjCuRor4LblZWtLWwB7 at1A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=pdmRKC0+A/lxg5MleIGjZCeeIx1sWRVwvGao9UDQW6o=; b=DD/zU5wbHIdc3PUYoU58G52ardl+dPTMFgAMsy3pl+ETARcQ15BP08P+6YISnOTA+N +j1JwEV2r4DvhwnoLv+eoVmFqbXOq1Im1gosMU23j/lbLKuDXS097tuu+Z7xWGaj3IVq KhDWRzLMiVvOUUVI51MB+bN0Z+EVqJono3HGrCSWZfmucfYUX2ptcKbRUy2HqN15goz2 LdA0sfWy/kJsdjMVlL2c1aX1i/QDiLoPiVz6/aWqzdnsf/UFyiypVR0fMUIbha2fOAPq 0ZA/A8Vl94s4eN8VziM8HwwP5GRWHVqBPJPxMCHa/iKGN/sBm+2jcbX7aL3cFzZrhmoN 6/Aw== X-Gm-Message-State: AOAM532GW/21KLAR8h5Qa5GHuGN3cvAjOPu8XD90WocyEz3cdCk14tj8 
BisIaBUlcGE8kA9hrtwNxZztfzTzKf3LcQ== X-Google-Smtp-Source: ABdhPJwSIFx8ukfEtjjf+im29WHCk6IgpkFeZtw1PSGg5YUWzCZXJbLote7uXh0iWV1wQg1JU6SGGw== X-Received: by 2002:a63:4415:: with SMTP id r21mr21215311pga.222.1617585745681; Sun, 04 Apr 2021 18:22:25 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:25 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 43/48] KVM: PPC: Book3S HV: small pseries_do_hcall cleanup Date: Mon, 5 Apr 2021 11:19:43 +1000 Message-Id: <20210405011948.675354-44-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Functionality should not be changed. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 29 +++++++++++++++-------------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 5ef43d9b19bc..4b4250c04117 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -925,6 +925,7 @@ static int kvmppc_get_yield_count(struct kvm_vcpu *vcpu) int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) { + struct kvm *kvm = vcpu->kvm; unsigned long req = kvmppc_get_gpr(vcpu, 3); unsigned long target, ret = H_SUCCESS; int yield_count; @@ -940,7 +941,7 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) break; case H_PROD: target = kvmppc_get_gpr(vcpu, 4); - tvcpu = kvmppc_find_vcpu(vcpu->kvm, target); + tvcpu = kvmppc_find_vcpu(kvm, target); if (!tvcpu) { ret = H_PARAMETER; break; @@ -954,7 +955,7 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) target = kvmppc_get_gpr(vcpu, 4); if (target == -1) break; - tvcpu = kvmppc_find_vcpu(vcpu->kvm, target); + tvcpu = kvmppc_find_vcpu(kvm, target); if (!tvcpu) { ret = H_PARAMETER; break; @@ -970,12 +971,12 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) kvmppc_get_gpr(vcpu, 6)); break; case H_RTAS: - if (list_empty(&vcpu->kvm->arch.rtas_tokens)) + if (list_empty(&kvm->arch.rtas_tokens)) return RESUME_HOST; - idx = srcu_read_lock(&vcpu->kvm->srcu); + idx = srcu_read_lock(&kvm->srcu); rc = kvmppc_rtas_hcall(vcpu); - srcu_read_unlock(&vcpu->kvm->srcu, idx); + srcu_read_unlock(&kvm->srcu, idx); if (rc == -ENOENT) return RESUME_HOST; @@ -1062,12 +1063,12 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) case H_SET_PARTITION_TABLE: ret = H_FUNCTION; - if (nesting_enabled(vcpu->kvm)) + if (nesting_enabled(kvm)) ret = kvmhv_set_partition_table(vcpu); break; case H_ENTER_NESTED: ret = H_FUNCTION; - if (!nesting_enabled(vcpu->kvm)) + if (!nesting_enabled(kvm)) break; ret = kvmhv_enter_nested_guest(vcpu); if (ret == H_INTERRUPT) { @@ -1082,12 +1083,12 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) break; case H_TLB_INVALIDATE: ret = H_FUNCTION; - if (nesting_enabled(vcpu->kvm)) + if (nesting_enabled(kvm)) ret = kvmhv_do_nested_tlbie(vcpu); break; case H_COPY_TOFROM_GUEST: ret = H_FUNCTION; - if (nesting_enabled(vcpu->kvm)) + if (nesting_enabled(kvm)) ret = kvmhv_copy_tofrom_guest_nested(vcpu); break; case H_PAGE_INIT: @@ -1098,7 +1099,7 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) case H_SVM_PAGE_IN: ret = H_UNSUPPORTED; if (kvmppc_get_srr1(vcpu) & MSR_S) - ret = 
kvmppc_h_svm_page_in(vcpu->kvm, + ret = kvmppc_h_svm_page_in(kvm, kvmppc_get_gpr(vcpu, 4), kvmppc_get_gpr(vcpu, 5), kvmppc_get_gpr(vcpu, 6)); @@ -1106,7 +1107,7 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) case H_SVM_PAGE_OUT: ret = H_UNSUPPORTED; if (kvmppc_get_srr1(vcpu) & MSR_S) - ret = kvmppc_h_svm_page_out(vcpu->kvm, + ret = kvmppc_h_svm_page_out(kvm, kvmppc_get_gpr(vcpu, 4), kvmppc_get_gpr(vcpu, 5), kvmppc_get_gpr(vcpu, 6)); @@ -1114,12 +1115,12 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) case H_SVM_INIT_START: ret = H_UNSUPPORTED; if (kvmppc_get_srr1(vcpu) & MSR_S) - ret = kvmppc_h_svm_init_start(vcpu->kvm); + ret = kvmppc_h_svm_init_start(kvm); break; case H_SVM_INIT_DONE: ret = H_UNSUPPORTED; if (kvmppc_get_srr1(vcpu) & MSR_S) - ret = kvmppc_h_svm_init_done(vcpu->kvm); + ret = kvmppc_h_svm_init_done(kvm); break; case H_SVM_INIT_ABORT: /* @@ -1129,7 +1130,7 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) * Instead the kvm->arch.secure_guest flag is checked inside * kvmppc_h_svm_init_abort(). */ - ret = kvmppc_h_svm_init_abort(vcpu->kvm); + ret = kvmppc_h_svm_init_abort(kvm); break; default: From patchwork Mon Apr 5 01:19:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462220 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=KzyyK6fh; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCZN2qnPz9sWD for ; Mon, 5 Apr 2021 11:22:36 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231859AbhDEBWg (ORCPT ); Sun, 4 Apr 2021 21:22:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40228 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231820AbhDEBWe (ORCPT ); Sun, 4 Apr 2021 21:22:34 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3B5C6C061756 for ; Sun, 4 Apr 2021 18:22:29 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id il9-20020a17090b1649b0290114bcb0d6c2so7103132pjb.0 for ; Sun, 04 Apr 2021 18:22:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=w9g9tEIvKA9BaaOhyCdKJhH/7pnlKO+LumCOdT40cHo=; b=KzyyK6fh7FXEEP8X3gGPI33ooxIx35VOAyAdSCryB43BwpT2glwfDa13nPmxX8qQln u4G5pBZVKNdk8wan5cAzAtdRv07y4Pz5TCm++vbCFHQYMUbEilfVRPEnO1ZPbf10+GUk qI2y6NaGRC94F5z6S6AByfnJZJOBeMGJZsBS85iUZv3MYbx74n1KS3Kp4pOH4k2FmDgc ixefxxXK071ieGA/VooY7deZknTWpB18/ddCUTLqaeDkKW4xfEzHD+oxk8FWVPup1bqD 9xsI9LTDJhdhZH624O9pEVGXN7qVRekKMqY7MPRTulJxPn4J3s1Ov+/6b7PHUzyylkck Gzow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=w9g9tEIvKA9BaaOhyCdKJhH/7pnlKO+LumCOdT40cHo=; 
b=CUsLrEBGIQ3tokhQau1uDJ83arxAmU4mCmIdUJxZIjpkD7HuiQ7cUXC7GDoxLfWIaW RFoTZnT07i9xsEO8fjjEW5sIt/Wx+eMjVpOdSSAzGTyO2Ogq/9/mqgac0VOaa6c2GmZf RitqP1zWr3c8kLy7JyAuvTmGQ2v3f0fFgrpD2g178n+vEhWKDXOOsYNYn4W6JmHvjmBp WD4M5tRg2gX9g4AV+60O3BIAdP9uKm4Kit6pEYTXYlnhNygXCEMxeziT6pfHbZuLdW1r Ak8Nj3u13xL/cp1XdQFvhF+36Obd1ScRa0EoemNNSpd0XmNmAR3/wb4qTBeZp8H8IHrD 516Q== X-Gm-Message-State: AOAM5311MUCk6rfigGDAJLmIQ1c/XKyCDTvvVClduaJP0x9+ETrXYYaS +IAoumYYW3Vgtzq60aEbHnNj/sX1oWmZdw== X-Google-Smtp-Source: ABdhPJzJfsoPPimRtez0ZO7paZfzCo4qu1VuzIH3E1IyOQ/H7hbN2C22TJlHDr6s43pnz2fHHBbjzA== X-Received: by 2002:a17:902:263:b029:e7:35d8:4554 with SMTP id 90-20020a1709020263b02900e735d84554mr21596890plc.83.1617585748664; Sun, 04 Apr 2021 18:22:28 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:28 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 44/48] KVM: PPC: Book3S HV: add virtual mode handlers for HPT hcalls and page faults Date: Mon, 5 Apr 2021 11:19:44 +1000 Message-Id: <20210405011948.675354-45-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org In order to support hash guests in the P9 path (which does not do real mode hcalls or page fault handling), these real-mode hash specific interrupts need to be implemented in virt mode. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 145 ++++++++++++++++++++++++++-- arch/powerpc/kvm/book3s_hv_rm_mmu.c | 8 ++ 2 files changed, 144 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 4b4250c04117..d8e36f1ea66d 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -937,6 +937,52 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) return RESUME_HOST; switch (req) { + case H_REMOVE: + ret = kvmppc_h_remove(vcpu, kvmppc_get_gpr(vcpu, 4), + kvmppc_get_gpr(vcpu, 5), + kvmppc_get_gpr(vcpu, 6)); + if (ret == H_TOO_HARD) + return RESUME_HOST; + break; + case H_ENTER: + ret = kvmppc_h_enter(vcpu, kvmppc_get_gpr(vcpu, 4), + kvmppc_get_gpr(vcpu, 5), + kvmppc_get_gpr(vcpu, 6), + kvmppc_get_gpr(vcpu, 7)); + if (ret == H_TOO_HARD) + return RESUME_HOST; + break; + case H_READ: + ret = kvmppc_h_read(vcpu, kvmppc_get_gpr(vcpu, 4), + kvmppc_get_gpr(vcpu, 5)); + if (ret == H_TOO_HARD) + return RESUME_HOST; + break; + case H_CLEAR_MOD: + ret = kvmppc_h_clear_mod(vcpu, kvmppc_get_gpr(vcpu, 4), + kvmppc_get_gpr(vcpu, 5)); + if (ret == H_TOO_HARD) + return RESUME_HOST; + break; + case H_CLEAR_REF: + ret = kvmppc_h_clear_ref(vcpu, kvmppc_get_gpr(vcpu, 4), + kvmppc_get_gpr(vcpu, 5)); + if (ret == H_TOO_HARD) + return RESUME_HOST; + break; + case H_PROTECT: + ret = kvmppc_h_protect(vcpu, kvmppc_get_gpr(vcpu, 4), + kvmppc_get_gpr(vcpu, 5), + kvmppc_get_gpr(vcpu, 6)); + if (ret == H_TOO_HARD) + return RESUME_HOST; + break; + case H_BULK_REMOVE: + ret = kvmppc_h_bulk_remove(vcpu); + if (ret == H_TOO_HARD) + return RESUME_HOST; + break; + case H_CEDE: break; case H_PROD: @@ -1136,6 +1182,7 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu) default: return RESUME_HOST; } + WARN_ON_ONCE(ret == H_TOO_HARD); kvmppc_set_gpr(vcpu, 3, ret); vcpu->arch.hcall_needed = 0; 
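	/*
	 * Every H_TOO_HARD case above returns RESUME_HOST instead of
	 * falling through, so ret should never be H_TOO_HARD here; the
	 * WARN_ON_ONCE above asserts exactly that.  A hypothetical new
	 * entry in this switch would follow the same shape (illustrative
	 * only; H_EXAMPLE and kvmppc_h_example() are not real):
	 *
	 *	case H_EXAMPLE:
	 *		ret = kvmppc_h_example(vcpu, kvmppc_get_gpr(vcpu, 4));
	 *		if (ret == H_TOO_HARD)
	 *			return RESUME_HOST;
	 *		break;
	 *
	 * The final status is passed back to the guest in r3 and the
	 * pending-hcall flag is cleared before re-entering the guest.
	 */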
return RESUME_GUEST; @@ -1437,22 +1484,102 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, * We get these next two if the guest accesses a page which it thinks * it has mapped but which is not actually present, either because * it is for an emulated I/O device or because the corresonding - * host page has been paged out. Any other HDSI/HISI interrupts - * have been handled already. + * host page has been paged out. + * + * Any other HDSI/HISI interrupts have been handled already for P7/8 + * guests. For POWER9 hash guests not using rmhandlers, basic hash + * fault handling is done here. */ - case BOOK3S_INTERRUPT_H_DATA_STORAGE: - r = RESUME_PAGE_FAULT; - if (vcpu->arch.fault_dsisr == HDSISR_CANARY) + case BOOK3S_INTERRUPT_H_DATA_STORAGE: { + unsigned long vsid; + long err; + + if (vcpu->arch.fault_dsisr == HDSISR_CANARY) { r = RESUME_GUEST; /* Just retry if it's the canary */ + break; + } + + if (kvm_is_radix(vcpu->kvm) || !cpu_has_feature(CPU_FTR_ARCH_300)) { + /* + * Radix doesn't require anything, and pre-ISAv3.0 hash + * already attempted to handle this in rmhandlers. The + * hash fault handling below is v3 only (it uses ASDR + * via fault_gpa). + */ + r = RESUME_PAGE_FAULT; + break; + } + + if (!(vcpu->arch.fault_dsisr & (DSISR_NOHPTE | DSISR_PROTFAULT))) { + kvmppc_core_queue_data_storage(vcpu, + vcpu->arch.fault_dar, vcpu->arch.fault_dsisr); + r = RESUME_GUEST; + break; + } + + if (!(vcpu->arch.shregs.msr & MSR_DR)) + vsid = vcpu->kvm->arch.vrma_slb_v; + else + vsid = vcpu->arch.fault_gpa; + + err = kvmppc_hpte_hv_fault(vcpu, vcpu->arch.fault_dar, + vsid, vcpu->arch.fault_dsisr, true); + if (err == 0) { + r = RESUME_GUEST; + } else if (err == -1 || err == -2) { + r = RESUME_PAGE_FAULT; + } else { + kvmppc_core_queue_data_storage(vcpu, + vcpu->arch.fault_dar, err); + r = RESUME_GUEST; + } break; - case BOOK3S_INTERRUPT_H_INST_STORAGE: + } + case BOOK3S_INTERRUPT_H_INST_STORAGE: { + unsigned long vsid; + long err; + vcpu->arch.fault_dar = kvmppc_get_pc(vcpu); vcpu->arch.fault_dsisr = vcpu->arch.shregs.msr & DSISR_SRR1_MATCH_64S; - if (vcpu->arch.shregs.msr & HSRR1_HISI_WRITE) - vcpu->arch.fault_dsisr |= DSISR_ISSTORE; - r = RESUME_PAGE_FAULT; + if (kvm_is_radix(vcpu->kvm) || !cpu_has_feature(CPU_FTR_ARCH_300)) { + /* + * Radix doesn't require anything, and pre-ISAv3.0 hash + * already attempted to handle this in rmhandlers. The + * hash fault handling below is v3 only (it uses ASDR + * via fault_gpa). + */ + if (vcpu->arch.shregs.msr & HSRR1_HISI_WRITE) + vcpu->arch.fault_dsisr |= DSISR_ISSTORE; + r = RESUME_PAGE_FAULT; + break; + } + + if (!(vcpu->arch.fault_dsisr & SRR1_ISI_NOPT)) { + kvmppc_core_queue_inst_storage(vcpu, + vcpu->arch.fault_dsisr); + r = RESUME_GUEST; + break; + } + + if (!(vcpu->arch.shregs.msr & MSR_IR)) + vsid = vcpu->kvm->arch.vrma_slb_v; + else + vsid = vcpu->arch.fault_gpa; + + err = kvmppc_hpte_hv_fault(vcpu, vcpu->arch.fault_dar, + vsid, vcpu->arch.fault_dsisr, false); + if (err == 0) { + r = RESUME_GUEST; + } else if (err == -1) { + r = RESUME_PAGE_FAULT; + } else { + kvmppc_core_queue_inst_storage(vcpu, err); + r = RESUME_GUEST; + } break; + } + /* * This occurs if the guest executes an illegal instruction. 
* If the guest debug is disabled, generate a program interrupt diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c index 7af7c70f1468..8cc73abbf42b 100644 --- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c +++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c @@ -409,6 +409,7 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags, vcpu->arch.pgdir, true, &vcpu->arch.regs.gpr[4]); } +EXPORT_SYMBOL_GPL(kvmppc_h_enter); #ifdef __BIG_ENDIAN__ #define LOCK_TOKEN (*(u32 *)(&get_paca()->lock_token)) @@ -553,6 +554,7 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags, return kvmppc_do_h_remove(vcpu->kvm, flags, pte_index, avpn, &vcpu->arch.regs.gpr[4]); } +EXPORT_SYMBOL_GPL(kvmppc_h_remove); long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu) { @@ -671,6 +673,7 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu) return ret; } +EXPORT_SYMBOL_GPL(kvmppc_h_bulk_remove); long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags, unsigned long pte_index, unsigned long avpn) @@ -741,6 +744,7 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags, return H_SUCCESS; } +EXPORT_SYMBOL_GPL(kvmppc_h_protect); long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags, unsigned long pte_index) @@ -781,6 +785,7 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags, } return H_SUCCESS; } +EXPORT_SYMBOL_GPL(kvmppc_h_read); long kvmppc_h_clear_ref(struct kvm_vcpu *vcpu, unsigned long flags, unsigned long pte_index) @@ -829,6 +834,7 @@ long kvmppc_h_clear_ref(struct kvm_vcpu *vcpu, unsigned long flags, unlock_hpte(hpte, v & ~HPTE_V_HVLOCK); return ret; } +EXPORT_SYMBOL_GPL(kvmppc_h_clear_ref); long kvmppc_h_clear_mod(struct kvm_vcpu *vcpu, unsigned long flags, unsigned long pte_index) @@ -876,6 +882,7 @@ long kvmppc_h_clear_mod(struct kvm_vcpu *vcpu, unsigned long flags, unlock_hpte(hpte, v & ~HPTE_V_HVLOCK); return ret; } +EXPORT_SYMBOL_GPL(kvmppc_h_clear_mod); static int kvmppc_get_hpa(struct kvm_vcpu *vcpu, unsigned long mmu_seq, unsigned long gpa, int writing, unsigned long *hpa, @@ -1294,3 +1301,4 @@ long kvmppc_hpte_hv_fault(struct kvm_vcpu *vcpu, unsigned long addr, return -1; /* send fault up to host kernel mode */ } +EXPORT_SYMBOL_GPL(kvmppc_hpte_hv_fault); From patchwork Mon Apr 5 01:19:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462222 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=UxFd1Tip; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCZW4nQNz9sWX for ; Mon, 5 Apr 2021 11:22:43 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231823AbhDEBWr (ORCPT ); Sun, 4 Apr 2021 21:22:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40242 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231858AbhDEBWk (ORCPT ); Sun, 4 Apr 2021 21:22:40 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 08339C061756 for ; Sun, 4 Apr 2021 18:22:32 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id kk2-20020a17090b4a02b02900c777aa746fso5060109pjb.3 for ; Sun, 04 Apr 2021 18:22:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cVXXJ3+5eVqba9QwRIR5tmg1j34fukVhUGM2jj9knwc=; b=UxFd1TiplL4W9TdnF/sfU53l0ZIm0lAMZUDx586IddSoDglNkbBjurTTb2jD/AhcfF vPb2ju32VhDtbAvA+TL1CP9eFFhmD4qdRlPCOySJBjm5pyCe1EReVkS2pgAgGRz1qXBd 5/nuhxcWK1a8BLblxevjuX4oo7lbaSoRd+vF6VOMM3vU9aoVrHHugjzCFLk64Oav63TZ CoBdFzIA02joFVcbOzrnwUeqLKUsdJBMSnsBhME+dMk/sMSIS3V+1MQ54vHiqTyNH1BG cZOojORn6+0CnoOvhdRsoA5nVccISmXsg/xuRARGeZxG78jfk4Nmlb4gGEADtLgXm7a1 w0CA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cVXXJ3+5eVqba9QwRIR5tmg1j34fukVhUGM2jj9knwc=; b=ew81iKyHKtFcHmE4wEXkAujnn6YMTQA65W6bTRetuqMc0R7+w+zzl6+heF7+e7McPi aSYwEa66dDjxBurhEcAAvqOCXH8QEvpON38yx2wOJ1I9sPP/kGM6PzqFS4Gfmi0kwADj Jg50z2WhEO9Evlnj9rgtXX+JLF4DkrAO2/FhOPDdRmAWlRgpZeXiq1LokweSuw9qaVW5 3a01wPoGRNNfFUBEV49YzPeY6nIdLSGmHD0vwSeOF2r7C87ei3QNVxqR5CtB1oSXO5V2 1HwNUitbDLryCu81G2dtlY2/wNEmIJrG21msnPquaaf/vbUmpTlNON3h2dYw+EuxNrop M3rQ== X-Gm-Message-State: AOAM530euBEMKvBTF6Cd+Om7EFKTz+l9aOylejtTnDw5v6twYQsMCS3O nuxEz0RsBissEkb3qdLdEiVOVGX5Sa14HQ== X-Google-Smtp-Source: ABdhPJyiPcFFg3gpUzlKShVVdy22TWMQoFv79DmwmX+Zci1OAOGbGS+7VdZgO1zPtQU9sdcmg3FtzA== X-Received: by 2002:a17:90b:1b42:: with SMTP id nv2mr24097499pjb.190.1617585751512; Sun, 04 Apr 2021 18:22:31 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:31 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 45/48] KVM: PPC: Book3S HV P9: Reflect userspace hcalls to hash guests to support PR KVM Date: Mon, 5 Apr 2021 11:19:45 +1000 Message-Id: <20210405011948.675354-46-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The reflection of sc 1 interrupts from guest PR=1 to the guest kernel is required to support a hash guest running PR KVM where its guest is making hcalls with sc 1. In preparation for hash guest support, add this hcall reflection to the P9 path. The P7/8 path does this in its realmode hcall handler (sc_1_fast_return). Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 22 ++++++++++++++++------ 1 file changed, 16 insertions(+), 6 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index d8e36f1ea66d..7cd97e6757e5 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1455,13 +1455,23 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, * Guest userspace executed sc 1. This can only be * reached by the P9 path because the old path * handles this case in realmode hcall handlers. - * - * Radix guests can not run PR KVM or nested HV hash - * guests which might run PR KVM, so this is always - * a privilege fault. 
Send a program check to guest - * kernel. */ - kvmppc_core_queue_program(vcpu, SRR1_PROGPRIV); + if (!kvmhv_vcpu_is_radix(vcpu)) { + /* + * A guest could be running PR KVM, so this + * may be a PR KVM hcall. It must be reflected + * to the guest kernel as a sc interrupt. + */ + kvmppc_core_queue_syscall(vcpu); + } else { + /* + * Radix guests can not run PR KVM or nested HV + * hash guests which might run PR KVM, so this + * is always a privilege fault. Send a program + * check to guest kernel. + */ + kvmppc_core_queue_program(vcpu, SRR1_PROGPRIV); + } r = RESUME_GUEST; break; } From patchwork Mon Apr 5 01:19:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462223 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=kY0kGBFY; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCZX1RqMz9sX2 for ; Mon, 5 Apr 2021 11:22:44 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231820AbhDEBWs (ORCPT ); Sun, 4 Apr 2021 21:22:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40268 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231695AbhDEBWo (ORCPT ); Sun, 4 Apr 2021 21:22:44 -0400 Received: from mail-pg1-x52c.google.com (mail-pg1-x52c.google.com [IPv6:2607:f8b0:4864:20::52c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 55528C0613E6 for ; Sun, 4 Apr 2021 18:22:35 -0700 (PDT) Received: by mail-pg1-x52c.google.com with SMTP id l76so7158306pga.6 for ; Sun, 04 Apr 2021 18:22:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=gtNbRijnhbrDIky97fx0/Aav8opUD86uP1RkTln38AQ=; b=kY0kGBFYSuhibVSMJFgTUr2FrO1anThOmp77W5CPdIInYl/2h2aZ5xrtjnqe5gJwt2 KBPaR/SGhqslgC0t7Aojm3/Fxb0GJeJJRy96hGKIVkTbWm09IlDHtA33U3tLs1//fT+a aC4IlYz1kTVlsXkZAVWw3RYRg9GG2rtm9syOG1HctTWEBcyomjnxCAnWha4wD32LKXrR J9Z/zrPf07Zl1ZNCZCUcIhkKZiA09TgkjMIeqqdyAeEZXjKLGA2jBgAoG4h9jt+2jDy1 lu+Gh+grNDhAx524VwPvhF0Pg+CVzrlL/PSnBKyEFbE8u36mgyYEOGEDS1ubT9rKFs5k YopQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=gtNbRijnhbrDIky97fx0/Aav8opUD86uP1RkTln38AQ=; b=uEiD0ld0b2Y+P1FKo1MoJdnPgvQ1E8bR7O9A7Td3O5gHSwKTh97JUoY1fyoHkm5evU V33dzGKswZsL/Kqh9XhqejAk5n4Be8iP/jSsm3o6oLwFmvV+k426GpoioNolD2Pa24sp 0HxVmTC+Sv6piE4cyCygV81CLnfmnHRPXys1DUAX3RUdEdGRBEH/TPQX+H+l8AF3+Yb/ E4NFHyDPVa6kHLQHn7gJ+t3A1UGXDMQJ6rAQ3rOySNMgToX4hNmtn1LW0rk5mzNhBFIk 4y4nD4cNIHKHh0KT8hoielagK1h7aGOtZcoTBMhm+g7iOJ8j+hIcsEqygYBbxbcbqk5d C2aw== X-Gm-Message-State: AOAM530wdBW3iHkYYn4gaYh/UK2NcYwKLQtc0DgKqE6j6cqnuuTg96z8 EAOypVJLvxZxC8oNZ/nwKeMb6MzP+DMKAw== X-Google-Smtp-Source: ABdhPJyNJ+WTZqNjYZAjVc/BIKuf+0kz99tOix273Qx162uTH3U5yw6ix8l+qU/wdk/wvgBhBmrZEA== X-Received: by 2002:a63:2507:: with SMTP id 
l7mr21155678pgl.198.1617585754665; Sun, 04 Apr 2021 18:22:34 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:34 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 46/48] KVM: PPC: Book3S HV P9: implement hash guest support Date: Mon, 5 Apr 2021 11:19:46 +1000 Message-Id: <20210405011948.675354-47-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Guest entry/exit has to restore and save/clear the SLB, plus several other bits to accommodate hash guests in the P9 path. Radix host, hash guest support is removed from the P7/8 path. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 20 ++- arch/powerpc/kvm/book3s_hv_interrupt.c | 156 +++++++++++++++++------- arch/powerpc/kvm/book3s_hv_rm_mmu.c | 4 + arch/powerpc/kvm/book3s_hv_rmhandlers.S | 14 +-- 4 files changed, 132 insertions(+), 62 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 7cd97e6757e5..4d0bb5b31307 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3869,7 +3869,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } kvmppc_xive_pull_vcpu(vcpu); - vcpu->arch.slb_max = 0; + if (kvm_is_radix(vcpu->kvm)) + vcpu->arch.slb_max = 0; } dec = mfspr(SPRN_DEC); @@ -4101,7 +4102,6 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) /* * This never fails for a radix guest, as none of the operations it does * for a radix guest can fail or have a way to report failure. - * kvmhv_run_single_vcpu() relies on this fact. 
*/ static int kvmhv_setup_mmu(struct kvm_vcpu *vcpu) { @@ -4280,8 +4280,15 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vc->runner = vcpu; /* See if the MMU is ready to go */ - if (!kvm->arch.mmu_ready) - kvmhv_setup_mmu(vcpu); + if (!kvm->arch.mmu_ready) { + r = kvmhv_setup_mmu(vcpu); + if (r) { + run->exit_reason = KVM_EXIT_FAIL_ENTRY; + run->fail_entry.hardware_entry_failure_reason = 0; + vcpu->arch.ret = r; + return r; + } + } if (need_resched()) cond_resched(); @@ -4294,7 +4301,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, preempt_disable(); pcpu = smp_processor_id(); vc->pcpu = pcpu; - kvmppc_prepare_radix_vcpu(vcpu, pcpu); + if (kvm_is_radix(kvm)) + kvmppc_prepare_radix_vcpu(vcpu, pcpu); local_irq_disable(); hard_irq_disable(); @@ -4494,7 +4502,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; do { - if (kvm_is_radix(kvm)) + if (radix_enabled()) r = kvmhv_run_single_vcpu(vcpu, ~(u64)0, vcpu->arch.vcore->lpcr); else diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c index f09b11ea2033..a878cb5ec1b8 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupt.c +++ b/arch/powerpc/kvm/book3s_hv_interrupt.c @@ -4,6 +4,7 @@ #include #include #include +#include #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING static void __start_timing(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next) @@ -55,6 +56,50 @@ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator #define accumulate_time(vcpu, next) do {} while (0) #endif +static inline void mfslb(unsigned int idx, u64 *slbee, u64 *slbev) +{ + asm volatile("slbmfev %0,%1" : "=r" (*slbev) : "r" (idx)); + asm volatile("slbmfee %0,%1" : "=r" (*slbee) : "r" (idx)); +} + +static inline void __mtslb(u64 slbee, u64 slbev) +{ + asm volatile("slbmte %0,%1" :: "r" (slbev), "r" (slbee)); +} + +static inline void mtslb(unsigned int idx, u64 slbee, u64 slbev) +{ + BUG_ON((slbee & 0xfff) != idx); + + __mtslb(slbee, slbev); +} + +static inline void slb_invalidate(unsigned int ih) +{ + asm volatile(PPC_SLBIA(%0) :: "i"(ih)); +} + +/* + * Malicious or buggy radix guests may have inserted SLB entries + * (only 0..3 because radix always runs with UPRT=1), so these must + * be cleared here to avoid side-channels. slbmte is used rather + * than slbia, as it won't clear cached translations. + */ +static void radix_clear_slb(void) +{ + u64 slbee, slbev; + int i; + + for (i = 0; i < 4; i++) { + mfslb(i, &slbee, &slbev); + if (unlikely(slbee || slbev)) { + slbee = i; + slbev = 0; + mtslb(i, slbee, slbev); + } + } +} + static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr) { struct kvmppc_vcore *vc = vcpu->arch.vcore; @@ -80,6 +125,31 @@ static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u6 kvmppc_check_need_tlb_flush(kvm, vc->pcpu, nested); } +static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr) +{ + struct kvm_nested_guest *nested = vcpu->arch.nested; + u32 lpid; + int i; + + BUG_ON(nested); + + lpid = kvm->arch.lpid; + + mtspr(SPRN_LPID, lpid); + mtspr(SPRN_LPCR, lpcr); + mtspr(SPRN_PID, vcpu->arch.pid); + + for (i = 0; i < vcpu->arch.slb_max; i++) + __mtslb(vcpu->arch.slb[i].orige, vcpu->arch.slb[i].origv); + + isync(); + + /* + * TLBIEL is not virtualised for HPT guests, so check_need_tlb_flush + * is not required here. 
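+	 * (check_need_tlb_flush() exists to perform a deferred local flush
+	 * when guest translations may have been invalidated with tlbiel on
+	 * another CPU; the guest HPT on this path is invalidated with
+	 * broadcast tlbie, see the global_invalidates() change later in
+	 * this patch)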
+ */ +} + static void switch_mmu_to_host_radix(struct kvm *kvm, u32 pid) { isync(); @@ -91,37 +161,30 @@ static void switch_mmu_to_host_radix(struct kvm *kvm, u32 pid) isync(); } -static inline void mfslb(unsigned int idx, u64 *slbee, u64 *slbev) -{ - asm volatile("slbmfev %0,%1" : "=r" (*slbev) : "r" (idx)); - asm volatile("slbmfee %0,%1" : "=r" (*slbee) : "r" (idx)); -} - -static inline void mtslb(unsigned int idx, u64 slbee, u64 slbev) -{ - BUG_ON((slbee & 0xfff) != idx); - - asm volatile("slbmte %0,%1" :: "r" (slbev), "r" (slbee)); -} - -/* - * Malicious or buggy radix guests may have inserted SLB entries - * (only 0..3 because radix always runs with UPRT=1), so these must - * be cleared here to avoid side-channels. slbmte is used rather - * than slbia, as it won't clear cached translations. - */ -static void radix_clear_slb(void) +static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) { - u64 slbee, slbev; - int i; + if (kvm_is_radix(kvm)) { + radix_clear_slb(); + } else { + int i; + int nr = 0; - for (i = 0; i < 4; i++) { - mfslb(i, &slbee, &slbev); - if (unlikely(slbee || slbev)) { - slbee = i; - slbev = 0; - mtslb(i, slbee, slbev); + /* + * This must run before switching to host (radix host can't + * access all SLBs). + */ + for (i = 0; i < vcpu->arch.slb_nr; i++) { + u64 slbee, slbev; + mfslb(i, &slbee, &slbev); + if (slbee & SLB_ESID_V) { + vcpu->arch.slb[nr].orige = slbee | i; + vcpu->arch.slb[nr].origv = slbev; + nr++; + } } + vcpu->arch.slb_max = nr; + mtslb(0, 0, 0); + slb_invalidate(6); } } @@ -229,10 +292,18 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_AMOR, ~0UL); - if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) - __mtmsrd(msr & ~(MSR_IR|MSR_DR|MSR_RI), 0); + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_GUEST_HV_FAST; + if (kvm_is_radix(kvm)) { + if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + __mtmsrd(msr & ~(MSR_IR|MSR_DR|MSR_RI), 0); + switch_mmu_to_guest_radix(kvm, vcpu, lpcr); + if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + __mtmsrd(0, 1); /* clear RI */ - switch_mmu_to_guest_radix(kvm, vcpu, lpcr); + } else { + __mtmsrd(msr & ~(MSR_IR|MSR_DR|MSR_RI), 0); + switch_mmu_to_guest_hpt(kvm, vcpu, lpcr); + } /* * P9 suppresses the HDEC exception when LPCR[HDICE] = 0, @@ -240,9 +311,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); - if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) - __mtmsrd(0, 1); /* clear RI */ - mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0); @@ -250,10 +318,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc accumulate_time(vcpu, &vcpu->arch.guest_time); - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_GUEST_HV_FAST; kvmppc_p9_enter_guest(vcpu); - // Radix host and guest means host never runs with guest MMU state - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; accumulate_time(vcpu, &vcpu->arch.rm_intr); @@ -354,8 +419,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc #endif } - radix_clear_slb(); - accumulate_time(vcpu, &vcpu->arch.rm_exit); /* Advance host PURR/SPURR by the amount used by guest */ @@ -389,11 +452,14 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_DAWRX1, host_dawrx1); } - /* - * Since this is radix, do a eieio; tlbsync; ptesync sequence in - * case we interrupted the guest 
between a tlbie and a ptesync. - */ - asm volatile("eieio; tlbsync; ptesync"); + if (kvm_is_radix(kvm)) { + /* + * Since this is radix, do a eieio; tlbsync; ptesync sequence + * in case we interrupted the guest between a tlbie and a + * ptesync. + */ + asm volatile("eieio; tlbsync; ptesync"); + } /* * cp_abort is required if the processor supports local copy-paste @@ -420,7 +486,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc /* HDEC must be at least as large as DEC, so decrementer_max fits */ mtspr(SPRN_HDEC, decrementer_max); + save_clear_guest_mmu(kvm, vcpu); switch_mmu_to_host_radix(kvm, host_pidr); + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; /* * If we are in real mode, only switch MMU on after the MMU is diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c index 8cc73abbf42b..f487ebb3a70a 100644 --- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c +++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c @@ -57,6 +57,10 @@ static int global_invalidates(struct kvm *kvm) else global = 1; + /* LPID has been switched to host if in virt mode so can't do local */ + if (!global && (mfmsr() & (MSR_IR|MSR_DR))) + global = 1; + if (!global) { /* any other core might now have stale TLB entries... */ smp_wmb(); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 1bc5a3805a26..8a2efe2118a5 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -885,14 +885,11 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300) cmpdi r3, 512 /* 1 microsecond */ blt hdec_soon - /* For hash guest, clear out and reload the SLB */ -BEGIN_MMU_FTR_SECTION - /* Radix host won't have populated the SLB, so no need to clear */ + /* Clear out and reload the SLB */ li r6, 0 slbmte r6, r6 PPC_SLBIA(6) ptesync -END_MMU_FTR_SECTION_IFCLR(MMU_FTR_TYPE_RADIX) /* Load up guest SLB entries (N.B. 
slb_max will be 0 for radix) */ lwz r5,VCPU_SLB_MAX(r4) @@ -1370,9 +1367,6 @@ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */ stw r5,VCPU_SLB_MAX(r9) /* load host SLB entries */ -BEGIN_MMU_FTR_SECTION - b guest_bypass -END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX) ld r8,PACA_SLBSHADOWPTR(r13) .rept SLB_NUM_BOLTED @@ -3124,10 +3118,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_DAWR1) PPC_SLBIA(6) ptesync -BEGIN_MMU_FTR_SECTION - b 4f -END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX) - /* load host SLB entries */ ld r8, PACA_SLBSHADOWPTR(r13) .rept SLB_NUM_BOLTED @@ -3141,7 +3131,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX) 3: addi r8, r8, 16 .endr -4: lwz r7, KVM_HOST_LPID(r10) + lwz r7, KVM_HOST_LPID(r10) mtspr SPRN_LPID, r7 mtspr SPRN_PID, r0 ld r8, KVM_HOST_LPCR(r10) From patchwork Mon Apr 5 01:19:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462224 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=jXgo5euk; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCZY0yb4z9sj5 for ; Mon, 5 Apr 2021 11:22:45 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231695AbhDEBWs (ORCPT ); Sun, 4 Apr 2021 21:22:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40270 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231793AbhDEBWo (ORCPT ); Sun, 4 Apr 2021 21:22:44 -0400 Received: from mail-pf1-x431.google.com (mail-pf1-x431.google.com [IPv6:2607:f8b0:4864:20::431]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 73102C061788 for ; Sun, 4 Apr 2021 18:22:38 -0700 (PDT) Received: by mail-pf1-x431.google.com with SMTP id x126so2205403pfc.13 for ; Sun, 04 Apr 2021 18:22:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=lT40J0V9NBeZrLgQ+b5j14kA1GasZu+lHS7oD02caS4=; b=jXgo5eukd93CxmPdudi1Zn0FIJaqPSsy1qHvxo75HcVhOEH7W+hEhBRb1OHItsXyyc bOubrUn/wR7nXIHrYfam7CHwoPG5ynGStpqzErf2svhEE64yxcL/c+cR+UntSdufwB40 tYTNIwxkcX0/mB5p1bJXdOnF9JMk5ExupVExCl9Yl/vqyj/noUh84hSlqcp5Kro7J7Xk pSXgb/CidGfElyMub/GoKEau9tTnFMSDK3puqtFn5nWk0B5el4V3t85L8+B71jDcYCkF xmmfw9yRpdcP51VUFVUHplm3bvIRWfMkWFXLtge31hluxG0l5nasQ0W7lNPFNTzt8L0I g5mA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=lT40J0V9NBeZrLgQ+b5j14kA1GasZu+lHS7oD02caS4=; b=CDrZNqbREEifEL5W1m0YSmuyr0mCwdVVjbeVXVOwl/if2qa4nZcRNbGwRTxSMOo/Me DJ7zdD+tq7jQPb3OhkA6kJ+ovkcEfDqI+Zj3rfPymlvmeJoiMV8XQIf8Tq/C+WMRsYJ2 r9jfSxjMPw+Lp+gRByiyX/PDsMlFdy6Nfoa+LVjU1y002hLUnGV691GnY8fPMlT+Qgrq VsOclO7Khbutpqlx43jpOVTl1Tzt76fHTNH7Dos4GKux8LBAZ5T7FGYVlCiSd9G35KUX yi8RzWOQKwlATqviLJKfoMSrR5ktGADqjViC2rK/6ZTkh8ShKqFnV7IoVJshYgRkfYog 6kjg== X-Gm-Message-State: 
AOAM530Z3Hzgw93Y+7fqR9e99WzocByqDUVyrRdv3GTdbBkhFVPmYSNX LSD48xwgiaG9dic/+l6h6AktsM2AZwQVSw== X-Google-Smtp-Source: ABdhPJxoA8OIPEFbQEQagSInxYL96zfWuvboWl4YHV7EzCzMQjkCQ05iYPmJk7B0/tVcy6mOAvi3lw== X-Received: by 2002:a63:4763:: with SMTP id w35mr21011033pgk.226.1617585757754; Sun, 04 Apr 2021 18:22:37 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:37 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 47/48] KVM: PPC: Book3S HV P9: implement hash host / hash guest support Date: Mon, 5 Apr 2021 11:19:47 +1000 Message-Id: <20210405011948.675354-48-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This additionally has to save and restore the host SLB, and also ensure that the MMU is off while switching into the guest SLB. P9 and later CPUs now always go via the P9 path. The "fast" guest mode is now renamed to the P9 mode, which is consistent with functionality and naming. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_asm.h | 2 +- arch/powerpc/kvm/book3s_64_entry.S | 12 ++++++--- arch/powerpc/kvm/book3s_hv.c | 4 ++- arch/powerpc/kvm/book3s_hv_interrupt.c | 37 +++++++++++++++++++++----- 4 files changed, 44 insertions(+), 11 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_asm.h b/arch/powerpc/include/asm/kvm_asm.h index b4f9996bd331..b996002882b1 100644 --- a/arch/powerpc/include/asm/kvm_asm.h +++ b/arch/powerpc/include/asm/kvm_asm.h @@ -146,7 +146,7 @@ #define KVM_GUEST_MODE_GUEST 1 #define KVM_GUEST_MODE_SKIP 2 #define KVM_GUEST_MODE_GUEST_HV 3 -#define KVM_GUEST_MODE_GUEST_HV_FAST 4 /* ISA v3.0 with host radix mode */ +#define KVM_GUEST_MODE_GUEST_HV_P9 4 /* ISA >= v3.0 path */ #define KVM_GUEST_MODE_HOST_HV 5 #define KVM_INST_FETCH_FAILED -1 diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S index d98ad580fd98..5d7eca29b471 100644 --- a/arch/powerpc/kvm/book3s_64_entry.S +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -35,7 +35,7 @@ .balign IFETCH_ALIGN_BYTES kvmppc_hcall: lbz r10,HSTATE_IN_GUEST(r13) - cmpwi r10,KVM_GUEST_MODE_GUEST_HV_FAST + cmpwi r10,KVM_GUEST_MODE_GUEST_HV_P9 beq kvmppc_p9_exit_hcall ld r10,PACA_EXGEN+EX_R13(r13) SET_SCRATCH0(r10) @@ -65,7 +65,7 @@ kvmppc_hcall: kvmppc_interrupt: std r10,HSTATE_SCRATCH0(r13) lbz r10,HSTATE_IN_GUEST(r13) - cmpwi r10,KVM_GUEST_MODE_GUEST_HV_FAST + cmpwi r10,KVM_GUEST_MODE_GUEST_HV_P9 beq kvmppc_p9_exit_interrupt ld r10,HSTATE_SCRATCH0(r13) lbz r11,HSTATE_IN_GUEST(r13) @@ -280,7 +280,7 @@ kvmppc_p9_exit_hcall: .balign IFETCH_ALIGN_BYTES kvmppc_p9_exit_interrupt: /* - * If set to KVM_GUEST_MODE_GUEST_HV_FAST but we're still in the + * If set to KVM_GUEST_MODE_GUEST_HV_P9 but we're still in the * hypervisor, that means we can't return from the entry stack. */ rldicl. r10,r12,64-MSR_HV_LG,63 @@ -354,6 +354,12 @@ kvmppc_p9_exit_interrupt: * effort for a small bit of code. Lots of other things to do first. */ kvmppc_p9_bad_interrupt: +BEGIN_MMU_FTR_SECTION + /* + * Hash host doesn't try to recover MMU (requires host SLB reload) + */ + b . 
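+	/* branch to self: recovery would require the host SLB to be reloaded first */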
+END_MMU_FTR_SECTION_IFCLR(MMU_FTR_TYPE_RADIX) /* * Set GUEST_MODE_NONE so the handler won't branch to KVM, and clear * MSR_RI in r12 ([H]SRR1) so the handler won't try to return. diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 4d0bb5b31307..e8d9843a134d 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4502,7 +4502,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; do { - if (radix_enabled()) + if (cpu_has_feature(CPU_FTR_ARCH_300)) r = kvmhv_run_single_vcpu(vcpu, ~(u64)0, vcpu->arch.vcore->lpcr); else @@ -5591,6 +5591,8 @@ static int kvmhv_enable_nested(struct kvm *kvm) return -EPERM; if (!cpu_has_feature(CPU_FTR_ARCH_300)) return -ENODEV; + if (!radix_enabled()) + return -ENODEV; /* kvm == NULL means the caller is testing if the capability exists */ if (kvm) diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c index a878cb5ec1b8..db24962f52f5 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupt.c +++ b/arch/powerpc/kvm/book3s_hv_interrupt.c @@ -150,7 +150,7 @@ static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 */ } -static void switch_mmu_to_host_radix(struct kvm *kvm, u32 pid) +static void switch_mmu_to_host(struct kvm *kvm, u32 pid) { isync(); mtspr(SPRN_PID, pid); @@ -159,6 +159,23 @@ static void switch_mmu_to_host_radix(struct kvm *kvm, u32 pid) isync(); mtspr(SPRN_LPCR, kvm->arch.host_lpcr); isync(); + + if (!radix_enabled()) + slb_restore_bolted_realmode(); +} + +static void save_clear_host_mmu(struct kvm *kvm) +{ + if (!radix_enabled()) { + /* + * Hash host could save and restore host SLB entries to + * reduce SLB fault overheads of VM exits, but for now the + * existing code clears all entries and restores just the + * bolted ones when switching back to host. + */ + mtslb(0, 0, 0); + slb_invalidate(6); + } } static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) @@ -292,16 +309,24 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_AMOR, ~0UL); - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_GUEST_HV_FAST; + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_GUEST_HV_P9; + + /* + * Hash host, hash guest, or radix guest with prefetch bug, all have + * to disable the MMU before switching to guest MMU state. 
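+	 * (i.e. only a radix host running a radix guest on a CPU without
+	 * the prefetch bug can leave translation enabled across this switch)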
+ */ + if (!radix_enabled() || !kvm_is_radix(kvm) || + cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + __mtmsrd(msr & ~(MSR_IR|MSR_DR|MSR_RI), 0); + + save_clear_host_mmu(kvm); + if (kvm_is_radix(kvm)) { - if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) - __mtmsrd(msr & ~(MSR_IR|MSR_DR|MSR_RI), 0); switch_mmu_to_guest_radix(kvm, vcpu, lpcr); if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) __mtmsrd(0, 1); /* clear RI */ } else { - __mtmsrd(msr & ~(MSR_IR|MSR_DR|MSR_RI), 0); switch_mmu_to_guest_hpt(kvm, vcpu, lpcr); } @@ -487,7 +512,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_HDEC, decrementer_max); save_clear_guest_mmu(kvm, vcpu); - switch_mmu_to_host_radix(kvm, host_pidr); + switch_mmu_to_host(kvm, host_pidr); local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; /* From patchwork Mon Apr 5 01:19:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1462225 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ubgkAdJ9; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FDCZY6VvYz9t0J for ; Mon, 5 Apr 2021 11:22:45 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231793AbhDEBWt (ORCPT ); Sun, 4 Apr 2021 21:22:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40284 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231799AbhDEBWr (ORCPT ); Sun, 4 Apr 2021 21:22:47 -0400 Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com [IPv6:2607:f8b0:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B4889C061756 for ; Sun, 4 Apr 2021 18:22:41 -0700 (PDT) Received: by mail-pf1-x436.google.com with SMTP id x26so467344pfn.0 for ; Sun, 04 Apr 2021 18:22:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+4cI0GpYhQjhQhmYRyyIsQU6k1HFSyNxyC5kwfKaemg=; b=ubgkAdJ9g0m0WswmtWm4yVekok5n5VcH+ktNH0hxgDBSS/albMpS6Nh2Ekd1an1iVs p1sY5m46+TN0Kdu51MFPohjbx1ipfDf78GYBxiUtywCVATdd2B/icvXFCLK3bVIG0/3q qzSQrd7i5HFvhzPQl4NKmGG8yMKPSR5lDC3ePmLbk0PICnrtynsR+ZJlbUvVmhlMrbnn N5MJNObnijo/zSEPMImjavAwsHRsVzpaNxRKvJ6pW8r2Rxo2w4EXtEFLFHaBXDgnuHip GDGCuRYfQ9pcE6ndnUpHrcsjHBqFcRYev3upXHoQbtQ42FN2dIlr5/7F+3k0pYG8UFyu IMug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+4cI0GpYhQjhQhmYRyyIsQU6k1HFSyNxyC5kwfKaemg=; b=kZ+8MdK2gPrkweuChpLDRpGb1yjNTB1k2es0qd/fdD9Cgx8MnfcYeBBWF54tajgSrE 3OLztnhQbzLzkxuduz1C9MSEkZNCn/5ewYKsXn6g4DLhk+nMxuV6zVPcn5vSg9LgRfJr Lh1jXJNO3a7WxfnZtdktgrqmOoKVo/L30i0cbnBGsjggxb89jvII9410oSCEK7DPx/TQ NIF4EMwWDeGpOuX5cYZ0yQ4ONee5Amhv6lb1BaWouIaiDz2inJBfywBj66pRJ18D4nMX CWrxB3kNBP+/Jexn2GLo1LNZf5N0L61jYdIu999NgOsUv4JFG6MF9046B7PWb1XeAJkC f09Q== 
X-Gm-Message-State: AOAM5337c82R7H09I6TjrkswkB/S3t++/cAnxkzWR5KnHWWISimVke0/ 867rw9IObiEiRClVKpOUV/IypXDkip6KuQ== X-Google-Smtp-Source: ABdhPJz6TgrVsHW1s+isSfrf609SbMz7IvXmSD+YMFACsu4hcGMd750DtTcS9EsIoAKgW9ZnpAUGoQ== X-Received: by 2002:a62:2cce:0:b029:21d:97da:833e with SMTP id s197-20020a622cce0000b029021d97da833emr21003614pfs.40.1617585760607; Sun, 04 Apr 2021 18:22:40 -0700 (PDT) Received: from bobo.ibm.com ([1.132.215.134]) by smtp.gmail.com with ESMTPSA id e3sm14062536pfm.43.2021.04.04.18.22.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Apr 2021 18:22:40 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 48/48] KVM: PPC: Book3S HV: remove ISA v3.0 and v3.1 support from P7/8 path Date: Mon, 5 Apr 2021 11:19:48 +1000 Message-Id: <20210405011948.675354-49-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com> References: <20210405011948.675354-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org POWER9 and later processors always go via the P9 guest entry path now. Remove the remaining support from the P7/8 path. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 62 ++-- arch/powerpc/kvm/book3s_hv_interrupts.S | 9 +- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 419 +----------------------- arch/powerpc/platforms/powernv/idle.c | 52 +-- 4 files changed, 42 insertions(+), 500 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index e8d9843a134d..cd9e2940d1c6 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -130,9 +130,6 @@ static inline bool nesting_enabled(struct kvm *kvm) return kvm->arch.nested_enable && kvm_is_radix(kvm); } -/* If set, the threads on each CPU core have to be in the same MMU mode */ -static bool no_mixing_hpt_and_radix __read_mostly; - static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu); /* @@ -3132,9 +3129,6 @@ static void prepare_threads(struct kvmppc_vcore *vc) for_each_runnable_thread(i, vcpu, vc) { if (signal_pending(vcpu->arch.run_task)) vcpu->arch.ret = -EINTR; - else if (no_mixing_hpt_and_radix && - kvm_is_radix(vc->kvm) != radix_enabled()) - vcpu->arch.ret = -EINVAL; else if (vcpu->arch.vpa.update_pending || vcpu->arch.slb_shadow.update_pending || vcpu->arch.dtl.update_pending) @@ -3341,6 +3335,9 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) int trap; bool is_power8; + if (WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300))) + return; + /* * Remove from the list any threads that have a signal pending * or need a VPA update done @@ -3368,9 +3365,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) * Make sure we are running on primary threads, and that secondary * threads are offline. Also check if the number of threads in this * guest are greater than the current system threads per guest. - * On POWER9, we need to be not in independent-threads mode if - * this is a HPT guest on a radix host machine where the - * CPU threads may not be in different MMU modes. */ if ((controlled_threads > 1) && ((vc->num_threads > threads_per_subcore) || !on_primary_thread())) { @@ -3394,18 +3388,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) if (vc->num_threads < target_threads) collect_piggybacks(&core_info, target_threads); - /* - * On radix, arrange for TLB flushing if necessary. 
- * This has to be done before disabling interrupts since - * it uses smp_call_function(). - */ - pcpu = smp_processor_id(); - if (kvm_is_radix(vc->kvm)) { - for (sub = 0; sub < core_info.n_subcores; ++sub) - for_each_runnable_thread(i, vcpu, core_info.vc[sub]) - kvmppc_prepare_radix_vcpu(vcpu, pcpu); - } - /* * Hard-disable interrupts, and check resched flag and signals. * If we need to reschedule or deliver a signal, clean up @@ -3438,8 +3420,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) cmd_bit = stat_bit = 0; split = core_info.n_subcores; sip = NULL; - is_power8 = cpu_has_feature(CPU_FTR_ARCH_207S) - && !cpu_has_feature(CPU_FTR_ARCH_300); + is_power8 = cpu_has_feature(CPU_FTR_ARCH_207S); if (split > 1) { sip = &split_info; @@ -3733,8 +3714,7 @@ static inline bool hcall_is_xics(unsigned long req) } /* - * Virtual-mode guest entry for POWER9 and later when the host and - * guest are both using the radix MMU. The LPIDR has already been set. + * Guest entry for POWER9 and later CPUs. */ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr) @@ -5754,11 +5734,25 @@ static int kvmhv_enable_dawr1(struct kvm *kvm) static bool kvmppc_hash_v3_possible(void) { - if (radix_enabled() && no_mixing_hpt_and_radix) + if (!cpu_has_feature(CPU_FTR_ARCH_300)) return false; - return cpu_has_feature(CPU_FTR_ARCH_300) && - cpu_has_feature(CPU_FTR_HVMODE); + if (!cpu_has_feature(CPU_FTR_HVMODE)) + return false; + + /* + * POWER9 chips before version 2.02 can't have some threads in + * HPT mode and some in radix mode on the same core. + */ + if (radix_enabled()) { + unsigned int pvr = mfspr(SPRN_PVR); + if ((pvr >> 16) == PVR_POWER9 && + (((pvr & 0xe000) == 0 && (pvr & 0xfff) < 0x202) || + ((pvr & 0xe000) == 0x2000 && (pvr & 0xfff) < 0x101))) + return false; + } + + return true; } static struct kvmppc_ops kvm_ops_hv = { @@ -5902,18 +5896,6 @@ static int kvmppc_book3s_init_hv(void) if (kvmppc_radix_possible()) r = kvmppc_radix_init(); - /* - * POWER9 chips before version 2.02 can't have some threads in - * HPT mode and some in radix mode on the same core. - */ - if (cpu_has_feature(CPU_FTR_ARCH_300)) { - unsigned int pvr = mfspr(SPRN_PVR); - if ((pvr >> 16) == PVR_POWER9 && - (((pvr & 0xe000) == 0 && (pvr & 0xfff) < 0x202) || - ((pvr & 0xe000) == 0x2000 && (pvr & 0xfff) < 0x101))) - no_mixing_hpt_and_radix = true; - } - r = kvmppc_uvmem_init(); if (r < 0) pr_err("KVM-HV: kvmppc_uvmem_init failed %d\n", r); diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S b/arch/powerpc/kvm/book3s_hv_interrupts.S index 327417d79eac..4444f83cb133 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupts.S +++ b/arch/powerpc/kvm/book3s_hv_interrupts.S @@ -58,7 +58,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) /* * Put whatever is in the decrementer into the * hypervisor decrementer. - * Because of a hardware deviation in P8 and P9, + * Because of a hardware deviation in P8, * we need to set LPCR[HDICE] before writing HDEC. */ ld r5, HSTATE_KVM_VCORE(r13) @@ -67,15 +67,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) ori r8, r9, LPCR_HDICE mtspr SPRN_LPCR, r8 isync - andis. 
r0, r9, LPCR_LD@h mfspr r8,SPRN_DEC mftb r7 -BEGIN_FTR_SECTION - /* On POWER9, don't sign-extend if host LPCR[LD] bit is set */ - bne 32f -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) extsw r8,r8 -32: mtspr SPRN_HDEC,r8 + mtspr SPRN_HDEC,r8 add r8,r8,r7 std r8,HSTATE_DECEXP(r13) diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 8a2efe2118a5..ba0d6674fb9a 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -25,18 +25,10 @@ #include #include #include -#include #include #include #include #include -#include - -/* Sign-extend HDEC if not on POWER9 */ -#define EXTEND_HDEC(reg) \ -BEGIN_FTR_SECTION; \ - extsw reg, reg; \ -END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) /* Values in HSTATE_NAPPING(r13) */ #define NAPPING_CEDE 1 @@ -56,8 +48,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) #define STACK_SLOT_HFSCR (SFS-72) #define STACK_SLOT_AMR (SFS-80) #define STACK_SLOT_UAMOR (SFS-88) -#define STACK_SLOT_DAWR1 (SFS-96) -#define STACK_SLOT_DAWRX1 (SFS-104) /* * Call kvmppc_hv_entry in real mode. @@ -228,7 +218,7 @@ kvm_novcpu_wakeup: /* See if our timeslice has expired (HDEC is negative) */ mfspr r0, SPRN_HDEC - EXTEND_HDEC(r0) + extsw r0, r0 li r12, BOOK3S_INTERRUPT_HV_DECREMENTER cmpdi r0, 0 blt kvm_novcpu_exit @@ -330,10 +320,8 @@ kvm_secondary_got_guest: lbz r4, HSTATE_PTID(r13) cmpwi r4, 0 bne 63f - LOAD_REG_ADDR(r6, decrementer_max) - ld r6, 0(r6) + lis r6,0x7fff /* MAX_INT@h */ mtspr SPRN_HDEC, r6 -BEGIN_FTR_SECTION /* and set per-LPAR registers, if doing dynamic micro-threading */ ld r6, HSTATE_SPLIT_MODE(r13) cmpdi r6, 0 @@ -345,7 +333,6 @@ BEGIN_FTR_SECTION ld r0, KVM_SPLIT_LDBAR(r6) mtspr SPRN_LDBAR, r0 isync -END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) 63: /* Order load of vcpu after load of vcore */ lwsync @@ -416,7 +403,6 @@ kvm_no_guest: blr 53: -BEGIN_FTR_SECTION HMT_LOW ld r5, HSTATE_KVM_VCORE(r13) cmpdi r5, 0 @@ -431,14 +417,6 @@ BEGIN_FTR_SECTION b kvm_unsplit_nap 60: HMT_MEDIUM b kvm_secondary_got_guest -FTR_SECTION_ELSE - HMT_LOW - ld r5, HSTATE_KVM_VCORE(r13) - cmpdi r5, 0 - beq kvm_no_guest - HMT_MEDIUM - b kvm_secondary_got_guest -ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300) 54: li r0, KVM_HWTHREAD_IN_KVM stb r0, HSTATE_HWTHREAD_STATE(r13) @@ -564,13 +542,11 @@ kvmppc_hv_entry: bne 10f lwz r7,KVM_LPID(r9) -BEGIN_FTR_SECTION ld r6,KVM_SDR1(r9) li r0,LPID_RSVD /* switch to reserved LPID */ mtspr SPRN_LPID,r0 ptesync mtspr SPRN_SDR1,r6 /* switch to partition page table */ -END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) mtspr SPRN_LPID,r7 isync @@ -650,16 +626,6 @@ kvmppc_got_guest: mtspr SPRN_SPURR,r8 /* Save host values of some registers */ -BEGIN_FTR_SECTION - mfspr r5, SPRN_TIDR - mfspr r6, SPRN_PSSCR - mfspr r7, SPRN_PID - std r5, STACK_SLOT_TID(r1) - std r6, STACK_SLOT_PSSCR(r1) - std r7, STACK_SLOT_PID(r1) - mfspr r5, SPRN_HFSCR - std r5, STACK_SLOT_HFSCR(r1) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) BEGIN_FTR_SECTION mfspr r5, SPRN_CIABR mfspr r6, SPRN_DAWR0 @@ -670,12 +636,6 @@ BEGIN_FTR_SECTION std r7, STACK_SLOT_DAWRX0(r1) std r8, STACK_SLOT_IAMR(r1) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - mfspr r6, SPRN_DAWR1 - mfspr r7, SPRN_DAWRX1 - std r6, STACK_SLOT_DAWR1(r1) - std r7, STACK_SLOT_DAWRX1(r1) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S | CPU_FTR_DAWR1) mfspr r5, SPRN_AMR std r5, STACK_SLOT_AMR(r1) @@ -693,13 +653,9 @@ BEGIN_FTR_SECTION END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) #ifdef CONFIG_PPC_TRANSACTIONAL_MEM -/* - * Branch around the call if both CPU_FTR_TM and - * 
CPU_FTR_P9_TM_HV_ASSIST are off.
- */
 BEGIN_FTR_SECTION
 	b	91f
-END_FTR_SECTION(CPU_FTR_TM | CPU_FTR_P9_TM_HV_ASSIST, 0)
+END_FTR_SECTION_IFCLR(CPU_FTR_TM)
 	/*
 	 * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS (but not CR)
 	 */
@@ -766,12 +722,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
 	ld	r6, VCPU_DAWRX0(r4)
 	mtspr	SPRN_DAWR0, r5
 	mtspr	SPRN_DAWRX0, r6
-BEGIN_FTR_SECTION
-	ld	r5, VCPU_DAWR1(r4)
-	ld	r6, VCPU_DAWRX1(r4)
-	mtspr	SPRN_DAWR1, r5
-	mtspr	SPRN_DAWRX1, r6
-END_FTR_SECTION_IFSET(CPU_FTR_DAWR1)
 1:
 	ld	r7, VCPU_CIABR(r4)
 	ld	r8, VCPU_TAR(r4)
@@ -789,7 +739,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_DAWR1)
 	mtspr	SPRN_BESCR, r6
 	mtspr	SPRN_PID, r7
 	mtspr	SPRN_WORT, r8
-BEGIN_FTR_SECTION
 	/* POWER8-only registers */
 	ld	r5, VCPU_TCSCR(r4)
 	ld	r6, VCPU_ACOP(r4)
@@ -800,18 +749,6 @@ BEGIN_FTR_SECTION
 	mtspr	SPRN_CSIGR, r7
 	mtspr	SPRN_TACR, r8
 	nop
-FTR_SECTION_ELSE
-	/* POWER9-only registers */
-	ld	r5, VCPU_TID(r4)
-	ld	r6, VCPU_PSSCR(r4)
-	lbz	r8, HSTATE_FAKE_SUSPEND(r13)
-	oris	r6, r6, PSSCR_EC@h	/* This makes stop trap to HV */
-	rldimi	r6, r8, PSSCR_FAKE_SUSPEND_LG, 63 - PSSCR_FAKE_SUSPEND_LG
-	ld	r7, VCPU_HFSCR(r4)
-	mtspr	SPRN_TIDR, r5
-	mtspr	SPRN_PSSCR, r6
-	mtspr	SPRN_HFSCR, r7
-ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
 8:
 	ld	r5, VCPU_SPRG0(r4)
@@ -881,7 +818,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
 	/* Check if HDEC expires soon */
 	mfspr	r3, SPRN_HDEC
-	EXTEND_HDEC(r3)
+	extsw	r3, r3
 	cmpdi	r3, 512		/* 1 microsecond */
 	blt	hdec_soon
@@ -904,93 +841,9 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
 	bdnz	1b
 9:
-#ifdef CONFIG_KVM_XICS
-	/* We are entering the guest on that thread, push VCPU to XIVE */
-	ld	r11, VCPU_XIVE_SAVED_STATE(r4)
-	li	r9, TM_QW1_OS
-	lwz	r8, VCPU_XIVE_CAM_WORD(r4)
-	cmpwi	r8, 0
-	beq	no_xive
-	li	r7, TM_QW1_OS + TM_WORD2
-	mfmsr	r0
-	andi.	r0, r0, MSR_DR		/* in real mode? */
-	beq	2f
-	ld	r10, HSTATE_XIVE_TIMA_VIRT(r13)
-	cmpldi	cr1, r10, 0
-	beq	cr1, no_xive
-	eieio
-	stdx	r11,r9,r10
-	stwx	r8,r7,r10
-	b	3f
-2:	ld	r10, HSTATE_XIVE_TIMA_PHYS(r13)
-	cmpldi	cr1, r10, 0
-	beq	cr1, no_xive
-	eieio
-	stdcix	r11,r9,r10
-	stwcix	r8,r7,r10
-3:	li	r9, 1
-	stb	r9, VCPU_XIVE_PUSHED(r4)
-	eieio
-
-	/*
-	 * We clear the irq_pending flag. There is a small chance of a
-	 * race vs. the escalation interrupt happening on another
-	 * processor setting it again, but the only consequence is to
-	 * cause a spurrious wakeup on the next H_CEDE which is not an
-	 * issue.
-	 */
-	li	r0,0
-	stb	r0, VCPU_IRQ_PENDING(r4)
-
-	/*
-	 * In single escalation mode, if the escalation interrupt is
-	 * on, we mask it.
-	 */
-	lbz	r0, VCPU_XIVE_ESC_ON(r4)
-	cmpwi	cr1, r0,0
-	beq	cr1, 1f
-	li	r9, XIVE_ESB_SET_PQ_01
-	beq	4f	/* in real mode? */
-	ld	r10, VCPU_XIVE_ESC_VADDR(r4)
-	ldx	r0, r10, r9
-	b	5f
-4:	ld	r10, VCPU_XIVE_ESC_RADDR(r4)
-	ldcix	r0, r10, r9
-5:	sync
-
-	/* We have a possible subtle race here: The escalation interrupt might
-	 * have fired and be on its way to the host queue while we mask it,
-	 * and if we unmask it early enough (re-cede right away), there is
-	 * a theorical possibility that it fires again, thus landing in the
-	 * target queue more than once which is a big no-no.
-	 *
-	 * Fortunately, solving this is rather easy. If the above load setting
-	 * PQ to 01 returns a previous value where P is set, then we know the
-	 * escalation interrupt is somewhere on its way to the host. In that
-	 * case we simply don't clear the xive_esc_on flag below. It will be
-	 * eventually cleared by the handler for the escalation interrupt.
-	 *
-	 * Then, when doing a cede, we check that flag again before re-enabling
-	 * the escalation interrupt, and if set, we abort the cede.
-	 */
-	andi.	r0, r0, XIVE_ESB_VAL_P
-	bne-	1f
-
-	/* Now P is 0, we can clear the flag */
-	li	r0, 0
-	stb	r0, VCPU_XIVE_ESC_ON(r4)
-1:
-no_xive:
-#endif /* CONFIG_KVM_XICS */
-
 deliver_guest_interrupt:	/* r4 = vcpu, r13 = paca */
 	/* Check if we can deliver an external or decrementer interrupt now */
 	ld	r0, VCPU_PENDING_EXC(r4)
-BEGIN_FTR_SECTION
-	/* On POWER9, also check for emulated doorbell interrupt */
-	lbz	r3, VCPU_DBELL_REQ(r4)
-	or	r0, r0, r3
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	cmpdi	r0, 0
 	beq	71f
 	mr	r3, r4
@@ -1063,12 +916,6 @@ BEGIN_FTR_SECTION
 	mtspr	SPRN_PPR, r0
 END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
-/* Move canary into DSISR to check for later */
-BEGIN_FTR_SECTION
-	li	r0, 0x7fff
-	mtspr	SPRN_HDSISR, r0
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
-
 	ld	r6, VCPU_GPR(R6)(r4)
 	ld	r7, VCPU_GPR(R7)(r4)
@@ -1248,7 +1095,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	cmpwi	r12,BOOK3S_INTERRUPT_HV_DECREMENTER
 	bne	2f
 	mfspr	r3,SPRN_HDEC
-	EXTEND_HDEC(r3)
+	extsw	r3, r3
 	cmpdi	r3,0
 	mr	r4,r9
 	bge	fast_guest_return
@@ -1260,14 +1107,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	/* Hypervisor doorbell - exit only if host IPI flag set */
 	cmpwi	r12, BOOK3S_INTERRUPT_H_DOORBELL
 	bne	3f
-BEGIN_FTR_SECTION
-	PPC_MSGSYNC
-	lwsync
-	/* always exit if we're running a nested guest */
-	ld	r0, VCPU_NESTED(r9)
-	cmpdi	r0, 0
-	bne	guest_exit_cont
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	lbz	r0, HSTATE_HOST_IPI(r13)
 	cmpwi	r0, 0
 	beq	maybe_reenter_guest
@@ -1297,43 +1136,6 @@ guest_exit_cont:		/* r9 = vcpu, r12 = trap, r13 = paca */
 	mr	r4, r9
 	bl	kvmhv_accumulate_time
 #endif
-#ifdef CONFIG_KVM_XICS
-	/* We are exiting, pull the VP from the XIVE */
-	lbz	r0, VCPU_XIVE_PUSHED(r9)
-	cmpwi	cr0, r0, 0
-	beq	1f
-	li	r7, TM_SPC_PULL_OS_CTX
-	li	r6, TM_QW1_OS
-	mfmsr	r0
-	andi.	r0, r0, MSR_DR		/* in real mode? */
-	beq	2f
-	ld	r10, HSTATE_XIVE_TIMA_VIRT(r13)
-	cmpldi	cr0, r10, 0
-	beq	1f
-	/* First load to pull the context, we ignore the value */
-	eieio
-	lwzx	r11, r7, r10
-	/* Second load to recover the context state (Words 0 and 1) */
-	ldx	r11, r6, r10
-	b	3f
-2:	ld	r10, HSTATE_XIVE_TIMA_PHYS(r13)
-	cmpldi	cr0, r10, 0
-	beq	1f
-	/* First load to pull the context, we ignore the value */
-	eieio
-	lwzcix	r11, r7, r10
-	/* Second load to recover the context state (Words 0 and 1) */
-	ldcix	r11, r6, r10
-3:	std	r11, VCPU_XIVE_SAVED_STATE(r9)
-	/* Fixup some of the state for the next load */
-	li	r10, 0
-	li	r0, 0xff
-	stb	r10, VCPU_XIVE_PUSHED(r9)
-	stb	r10, (VCPU_XIVE_SAVED_STATE+3)(r9)
-	stb	r0, (VCPU_XIVE_SAVED_STATE+4)(r9)
-	eieio
-1:
-#endif /* CONFIG_KVM_XICS */
 	/*
 	 * Possibly flush the link stack here, before we do a blr in
@@ -1388,12 +1190,6 @@ guest_bypass:
 	ld	r3, HSTATE_KVM_VCORE(r13)
 	mfspr	r5,SPRN_DEC
 	mftb	r6
-	/* On P9, if the guest has large decr enabled, don't sign extend */
-BEGIN_FTR_SECTION
-	ld	r4, VCORE_LPCR(r3)
-	andis.	r4, r4, LPCR_LD@h
-	bne	16f
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	extsw	r5,r5
 16:	add	r5,r5,r6
 	/* r5 is a guest timebase value here, convert to host TB */
@@ -1467,7 +1263,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
 	std	r6, VCPU_BESCR(r9)
 	stw	r7, VCPU_GUEST_PID(r9)
 	std	r8, VCPU_WORT(r9)
-BEGIN_FTR_SECTION
 	mfspr	r5, SPRN_TCSCR
 	mfspr	r6, SPRN_ACOP
 	mfspr	r7, SPRN_CSIGR
@@ -1476,17 +1271,6 @@ BEGIN_FTR_SECTION
 	std	r6, VCPU_ACOP(r9)
 	std	r7, VCPU_CSIGR(r9)
 	std	r8, VCPU_TACR(r9)
-FTR_SECTION_ELSE
-	mfspr	r5, SPRN_TIDR
-	mfspr	r6, SPRN_PSSCR
-	std	r5, VCPU_TID(r9)
-	rldicl	r6, r6, 4, 50		/* r6 &= PSSCR_GUEST_VIS */
-	rotldi	r6, r6, 60
-	std	r6, VCPU_PSSCR(r9)
-	/* Restore host HFSCR value */
-	ld	r7, STACK_SLOT_HFSCR(r1)
-	mtspr	SPRN_HFSCR, r7
-ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
 	/*
 	 * Restore various registers to 0, where non-zero values
 	 * set by the guest could disrupt the host.
@@ -1494,13 +1278,11 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
 	li	r0, 0
 	mtspr	SPRN_PSPB, r0
 	mtspr	SPRN_WORT, r0
-BEGIN_FTR_SECTION
 	mtspr	SPRN_TCSCR, r0
 	/* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */
 	li	r0, 1
 	sldi	r0, r0, 31
 	mtspr	SPRN_MMCRS, r0
-END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
 	/* Save and restore AMR, IAMR and UAMOR before turning on the MMU */
 	ld	r8, STACK_SLOT_IAMR(r1)
@@ -1557,13 +1339,9 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
 	bl	kvmppc_save_fp
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-/*
- * Branch around the call if both CPU_FTR_TM and
- * CPU_FTR_P9_TM_HV_ASSIST are off.
- */
 BEGIN_FTR_SECTION
 	b	91f
-END_FTR_SECTION(CPU_FTR_TM | CPU_FTR_P9_TM_HV_ASSIST, 0)
+END_FTR_SECTION_IFCLR(CPU_FTR_TM)
 	/*
 	 * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS (but not CR)
 	 */
@@ -1609,28 +1387,6 @@ BEGIN_FTR_SECTION
 	mtspr	SPRN_DAWR0, r6
 	mtspr	SPRN_DAWRX0, r7
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
-BEGIN_FTR_SECTION
-	ld	r6, STACK_SLOT_DAWR1(r1)
-	ld	r7, STACK_SLOT_DAWRX1(r1)
-	mtspr	SPRN_DAWR1, r6
-	mtspr	SPRN_DAWRX1, r7
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S | CPU_FTR_DAWR1)
-BEGIN_FTR_SECTION
-	ld	r5, STACK_SLOT_TID(r1)
-	ld	r6, STACK_SLOT_PSSCR(r1)
-	ld	r7, STACK_SLOT_PID(r1)
-	mtspr	SPRN_TIDR, r5
-	mtspr	SPRN_PSSCR, r6
-	mtspr	SPRN_PID, r7
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
-
-	/*
-	 * cp_abort is required if the processor supports local copy-paste
-	 * to clear the copy buffer that was under control of the guest.
-	 */
-BEGIN_FTR_SECTION
-	PPC_CP_ABORT
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
 /*
  * POWER7/POWER8 guest -> host partition switch code.
@@ -1667,13 +1423,11 @@ kvmhv_switch_to_host:
 	/* Primary thread switches back to host partition */
 	lwz	r7,KVM_HOST_LPID(r4)
-BEGIN_FTR_SECTION
 	ld	r6,KVM_HOST_SDR1(r4)
 	li	r8,LPID_RSVD		/* switch to reserved LPID */
 	mtspr	SPRN_LPID,r8
 	ptesync
 	mtspr	SPRN_SDR1,r6		/* switch to host page table */
-END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
 	mtspr	SPRN_LPID,r7
 	isync
@@ -1884,20 +1638,11 @@ kvmppc_tm_emul:
 kvmppc_hdsi:
 	mfspr	r4, SPRN_HDAR
 	mfspr	r6, SPRN_HDSISR
-BEGIN_FTR_SECTION
-	/* Look for DSISR canary. If we find it, retry instruction */
-	cmpdi	r6, 0x7fff
-	beq	6f
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	/* HPTE not found fault or protection fault? */
 	andis.	r0, r6, (DSISR_NOHPTE | DSISR_PROTFAULT)@h
 	beq	1f			/* if not, send it to the guest */
 	andi.	r0, r11, MSR_DR		/* data relocation enabled? */
 	beq	3f
-BEGIN_FTR_SECTION
-	mfspr	r5, SPRN_ASDR		/* on POWER9, use ASDR to get VSID */
-	b	4f
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	clrrdi	r0, r4, 28
 	PPC_SLBFEE_DOT(R5, R0)		/* if so, look up SLB */
 	li	r0, BOOK3S_INTERRUPT_DATA_SEGMENT
@@ -1974,10 +1719,6 @@ kvmppc_hisi:
 	beq	1f
 	andi.	r0, r11, MSR_IR		/* instruction relocation enabled? */
 	beq	3f
-BEGIN_FTR_SECTION
-	mfspr	r5, SPRN_ASDR		/* on POWER9, use ASDR to get VSID */
-	b	4f
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	clrrdi	r0, r10, 28
 	PPC_SLBFEE_DOT(R5, R0)		/* if so, look up SLB */
 	li	r0, BOOK3S_INTERRUPT_INST_SEGMENT
@@ -2025,10 +1766,6 @@ hcall_try_real_mode:
 	andi.	r0,r11,MSR_PR
 	/* sc 1 from userspace - reflect to guest syscall */
 	bne	sc_1_fast_return
-	/* sc 1 from nested guest - give it to L1 to handle */
-	ld	r0, VCPU_NESTED(r9)
-	cmpdi	r0, 0
-	bne	guest_exit_cont
 	clrrdi	r3,r3,2
 	cmpldi	r3,hcall_real_table_end - hcall_real_table
 	bge	guest_exit_cont
@@ -2424,13 +2161,9 @@ _GLOBAL(kvmppc_h_cede)		/* r3 = vcpu pointer, r11 = msr, r13 = paca */
 	bl	kvmppc_save_fp
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-/*
- * Branch around the call if both CPU_FTR_TM and
- * CPU_FTR_P9_TM_HV_ASSIST are off.
- */
 BEGIN_FTR_SECTION
 	b	91f
-END_FTR_SECTION(CPU_FTR_TM | CPU_FTR_P9_TM_HV_ASSIST, 0)
+END_FTR_SECTION_IFCLR(CPU_FTR_TM)
 	/*
 	 * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS (but not CR)
 	 */
@@ -2450,15 +2183,8 @@ END_FTR_SECTION(CPU_FTR_TM | CPU_FTR_P9_TM_HV_ASSIST, 0)
 	mfspr	r3, SPRN_DEC
 	mfspr	r4, SPRN_HDEC
 	mftb	r5
-BEGIN_FTR_SECTION
-	/* On P9 check whether the guest has large decrementer mode enabled */
-	ld	r6, HSTATE_KVM_VCORE(r13)
-	ld	r6, VCORE_LPCR(r6)
-	andis.	r6, r6, LPCR_LD@h
-	bne	68f
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	extsw	r3, r3
-68:	EXTEND_HDEC(r4)
+	extsw	r4, r4
 	cmpd	r3, r4
 	ble	67f
 	mtspr	SPRN_DEC, r4
@@ -2503,28 +2229,11 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 kvm_nap_sequence:		/* desired LPCR value in r5 */
-BEGIN_FTR_SECTION
-	/*
-	 * PSSCR bits:	exit criterion = 1 (wakeup based on LPCR at sreset)
-	 *		enable state loss = 1 (allow SMT mode switch)
-	 *		requested level = 0 (just stop dispatching)
-	 */
-	lis	r3, (PSSCR_EC | PSSCR_ESL)@h
-	/* Set LPCR_PECE_HVEE bit to enable wakeup by HV interrupts */
-	li	r4, LPCR_PECE_HVEE@higher
-	sldi	r4, r4, 32
-	or	r5, r5, r4
-FTR_SECTION_ELSE
 	li	r3, PNV_THREAD_NAP
-ALT_FTR_SECTION_END_IFSET(CPU_FTR_ARCH_300)
 	mtspr	SPRN_LPCR,r5
 	isync
-BEGIN_FTR_SECTION
-	bl	isa300_idle_stop_mayloss
-FTR_SECTION_ELSE
 	bl	isa206_idle_insn_mayloss
-ALT_FTR_SECTION_END_IFSET(CPU_FTR_ARCH_300)
 	mfspr	r0, SPRN_CTRLF
 	ori	r0, r0, 1
@@ -2543,10 +2252,8 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_ARCH_300)
 	beq	kvm_end_cede
 	cmpwi	r0, NAPPING_NOVCPU
 	beq	kvm_novcpu_wakeup
-BEGIN_FTR_SECTION
 	cmpwi	r0, NAPPING_UNSPLIT
 	beq	kvm_unsplit_wakeup
-END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
 	twi	31,0,0 /* Nap state must not be zero */
 33:	mr	r4, r3
@@ -2566,13 +2273,9 @@ kvm_end_cede:
 #endif
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-/*
- * Branch around the call if both CPU_FTR_TM and
- * CPU_FTR_P9_TM_HV_ASSIST are off.
- */
 BEGIN_FTR_SECTION
 	b	91f
-END_FTR_SECTION(CPU_FTR_TM | CPU_FTR_P9_TM_HV_ASSIST, 0)
+END_FTR_SECTION_IFCLR(CPU_FTR_TM)
 	/*
 	 * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS (but not CR)
 	 */
@@ -2662,47 +2365,7 @@ kvm_cede_prodded:
 	/* we've ceded but we want to give control to the host */
 kvm_cede_exit:
 	ld	r9, HSTATE_KVM_VCPU(r13)
-#ifdef CONFIG_KVM_XICS
-	/* are we using XIVE with single escalation? */
-	ld	r10, VCPU_XIVE_ESC_VADDR(r9)
-	cmpdi	r10, 0
-	beq	3f
-	li	r6, XIVE_ESB_SET_PQ_00
-	/*
-	 * If we still have a pending escalation, abort the cede,
-	 * and we must set PQ to 10 rather than 00 so that we don't
-	 * potentially end up with two entries for the escalation
-	 * interrupt in the XIVE interrupt queue. In that case
-	 * we also don't want to set xive_esc_on to 1 here in
-	 * case we race with xive_esc_irq().
-	 */
-	lbz	r5, VCPU_XIVE_ESC_ON(r9)
-	cmpwi	r5, 0
-	beq	4f
-	li	r0, 0
-	stb	r0, VCPU_CEDED(r9)
-	/*
-	 * The escalation interrupts are special as we don't EOI them.
-	 * There is no need to use the load-after-store ordering offset
-	 * to set PQ to 10 as we won't use StoreEOI.
-	 */
-	li	r6, XIVE_ESB_SET_PQ_10
-	b	5f
-4:	li	r0, 1
-	stb	r0, VCPU_XIVE_ESC_ON(r9)
-	/* make sure store to xive_esc_on is seen before xive_esc_irq runs */
-	sync
-5:	/* Enable XIVE escalation */
-	mfmsr	r0
-	andi.	r0, r0, MSR_DR		/* in real mode? */
-	beq	1f
-	ldx	r0, r10, r6
-	b	2f
-1:	ld	r10, VCPU_XIVE_ESC_RADDR(r9)
-	ldcix	r0, r10, r6
-2:	sync
-#endif /* CONFIG_KVM_XICS */
-3:	b	guest_exit_cont
+	b	guest_exit_cont
 	/* Try to do machine check recovery in real mode */
 machine_check_realmode:
@@ -2779,10 +2442,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 	PPC_MSGCLR(6)
 	/* see if it's a host IPI */
 	li	r3, 1
-BEGIN_FTR_SECTION
-	PPC_MSGSYNC
-	lwsync
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	lbz	r0, HSTATE_HOST_IPI(r13)
 	cmpwi	r0, 0
 	bnelr
@@ -3091,70 +2750,12 @@ kvmppc_bad_host_intr:
 	std	r3, STACK_FRAME_OVERHEAD-16(r1)
 	/*
-	 * On POWER9 do a minimal restore of the MMU and call C code,
-	 * which will print a message and panic.
 	 * XXX On POWER7 and POWER8, we just spin here since we don't
 	 * know what the other threads are doing (and we don't want to
 	 * coordinate with them) - but at least we now have register state
 	 * in memory that we might be able to look at from another CPU.
 	 */
-BEGIN_FTR_SECTION
 	b	.
-END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
-	ld	r9, HSTATE_KVM_VCPU(r13)
-	ld	r10, VCPU_KVM(r9)
-
-	li	r0, 0
-	mtspr	SPRN_AMR, r0
-	mtspr	SPRN_IAMR, r0
-	mtspr	SPRN_CIABR, r0
-	mtspr	SPRN_DAWRX0, r0
-BEGIN_FTR_SECTION
-	mtspr	SPRN_DAWRX1, r0
-END_FTR_SECTION_IFSET(CPU_FTR_DAWR1)
-
-	/* Clear guest SLB. */
-	slbmte	r0, r0
-	PPC_SLBIA(6)
-	ptesync
-
-	/* load host SLB entries */
-	ld	r8, PACA_SLBSHADOWPTR(r13)
-	.rept	SLB_NUM_BOLTED
-	li	r3, SLBSHADOW_SAVEAREA
-	LDX_BE	r5, r8, r3
-	addi	r3, r3, 8
-	LDX_BE	r6, r8, r3
-	andis.	r7, r5, SLB_ESID_V@h
-	beq	3f
-	slbmte	r6, r5
-3:	addi	r8, r8, 16
-	.endr
-
-	lwz	r7, KVM_HOST_LPID(r10)
-	mtspr	SPRN_LPID, r7
-	mtspr	SPRN_PID, r0
-	ld	r8, KVM_HOST_LPCR(r10)
-	mtspr	SPRN_LPCR, r8
-	isync
-	li	r0, KVM_GUEST_MODE_NONE
-	stb	r0, HSTATE_IN_GUEST(r13)
-
-	/*
-	 * Turn on the MMU and jump to C code
-	 */
-	bcl	20, 31, .+4
-5:	mflr	r3
-	addi	r3, r3, 9f - 5b
-	li	r4, -1
-	rldimi	r3, r4, 62, 0	/* ensure 0xc000000000000000 bits are set */
-	ld	r4, PACAKMSR(r13)
-	mtspr	SPRN_SRR0, r3
-	mtspr	SPRN_SRR1, r4
-	RFI_TO_KERNEL
-9:	addi	r3, r1, STACK_FRAME_OVERHEAD
-	bl	kvmppc_bad_interrupt
-	b	9b
 /*
  * This mimics the MSR transition on IRQ delivery. The new guest MSR is taken
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 999997d9e9a9..528a7e0cf83a 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -604,7 +604,7 @@ struct p9_sprs {
 	u64 uamor;
 };
-static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
+static unsigned long power9_idle_stop(unsigned long psscr)
 {
 	int cpu = raw_smp_processor_id();
 	int first = cpu_first_thread_sibling(cpu);
@@ -620,8 +620,6 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
 	if (!(psscr & (PSSCR_EC|PSSCR_ESL))) {
 		/* EC=ESL=0 case */
-		BUG_ON(!mmu_on);
-
 		/*
 		 * Wake synchronously. SRESET via xscom may still cause
 		 * a 0x100 powersave wakeup with SRR1 reason!
@@ -803,8 +801,7 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
 		__slb_restore_bolted_realmode();
 out:
-	if (mmu_on)
-		mtmsr(MSR_KERNEL);
+	mtmsr(MSR_KERNEL);
 	return srr1;
 }
@@ -895,7 +892,7 @@ struct p10_sprs {
 	 */
 };
-static unsigned long power10_idle_stop(unsigned long psscr, bool mmu_on)
+static unsigned long power10_idle_stop(unsigned long psscr)
 {
 	int cpu = raw_smp_processor_id();
 	int first = cpu_first_thread_sibling(cpu);
@@ -909,8 +906,6 @@ static unsigned long power10_idle_stop(unsigned long psscr, bool mmu_on)
 	if (!(psscr & (PSSCR_EC|PSSCR_ESL))) {
 		/* EC=ESL=0 case */
-		BUG_ON(!mmu_on);
-
 		/*
 		 * Wake synchronously. SRESET via xscom may still cause
 		 * a 0x100 powersave wakeup with SRR1 reason!
@@ -991,8 +986,7 @@ static unsigned long power10_idle_stop(unsigned long psscr, bool mmu_on)
 		__slb_restore_bolted_realmode();
 out:
-	if (mmu_on)
-		mtmsr(MSR_KERNEL);
+	mtmsr(MSR_KERNEL);
 	return srr1;
 }
@@ -1002,40 +996,10 @@ static unsigned long arch300_offline_stop(unsigned long psscr)
 {
 	unsigned long srr1;
-#ifndef CONFIG_KVM_BOOK3S_HV_POSSIBLE
-	__ppc64_runlatch_off();
 	if (cpu_has_feature(CPU_FTR_ARCH_31))
-		srr1 = power10_idle_stop(psscr, true);
+		srr1 = power10_idle_stop(psscr);
 	else
-		srr1 = power9_idle_stop(psscr, true);
-	__ppc64_runlatch_on();
-#else
-	/*
-	 * Tell KVM we're entering idle.
-	 * This does not have to be done in real mode because the P9 MMU
-	 * is independent per-thread. Some steppings share radix/hash mode
-	 * between threads, but in that case KVM has a barrier sync in real
-	 * mode before and after switching between radix and hash.
-	 *
-	 * kvm_start_guest must still be called in real mode though, hence
-	 * the false argument.
-	 */
-	local_paca->kvm_hstate.hwthread_state = KVM_HWTHREAD_IN_IDLE;
-
-	__ppc64_runlatch_off();
-	if (cpu_has_feature(CPU_FTR_ARCH_31))
-		srr1 = power10_idle_stop(psscr, false);
-	else
-		srr1 = power9_idle_stop(psscr, false);
-	__ppc64_runlatch_on();
-
-	local_paca->kvm_hstate.hwthread_state = KVM_HWTHREAD_IN_KERNEL;
-	/* Order setting hwthread_state vs. testing hwthread_req */
-	smp_mb();
-	if (local_paca->kvm_hstate.hwthread_req)
-		srr1 = idle_kvm_start_guest(srr1);
-	mtmsr(MSR_KERNEL);
-#endif
+		srr1 = power9_idle_stop(psscr);
 	return srr1;
 }
@@ -1055,9 +1019,9 @@ void arch300_idle_type(unsigned long stop_psscr_val,
 	__ppc64_runlatch_off();
 	if (cpu_has_feature(CPU_FTR_ARCH_31))
-		srr1 = power10_idle_stop(psscr, true);
+		srr1 = power10_idle_stop(psscr);
 	else
-		srr1 = power9_idle_stop(psscr, true);
+		srr1 = power9_idle_stop(psscr);
 	__ppc64_runlatch_on();
 	fini_irq_for_idle_irqsoff();