From patchwork Mon Jul 26 03:49:42 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1509702
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v1 01/55] KVM: PPC: Book3S HV: Remove TM emulation from POWER7/8 path
Date: Mon, 26 Jul 2021 13:49:42 +1000
Message-Id: <20210726035036.739609-2-npiggin@gmail.com>
In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com>
References: <20210726035036.739609-1-npiggin@gmail.com>

TM fake-suspend emulation is only used by POWER9. Remove it from the
old code path.

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 42 -------------------------
 1 file changed, 42 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 8dd437d7a2c6..75079397c2a5 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1088,12 +1088,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	cmpwi	r12, BOOK3S_INTERRUPT_H_INST_STORAGE
 	beq	kvmppc_hisi
 
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-	/* For softpatch interrupt, go off and do TM instruction emulation */
-	cmpwi	r12, BOOK3S_INTERRUPT_HV_SOFTPATCH
-	beq	kvmppc_tm_emul
-#endif
-
 	/* See if this is a leftover HDEC interrupt */
 	cmpwi	r12,BOOK3S_INTERRUPT_HV_DECREMENTER
 	bne	2f
@@ -1599,42 +1593,6 @@ maybe_reenter_guest:
 	blt	deliver_guest_interrupt
 	b	guest_exit_cont
 
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-/*
- * Softpatch interrupt for transactional memory emulation cases
- * on POWER9 DD2.2.  This is early in the guest exit path - we
- * haven't saved registers or done a treclaim yet.
- */
-kvmppc_tm_emul:
-	/* Save instruction image in HEIR */
-	mfspr	r3, SPRN_HEIR
-	stw	r3, VCPU_HEIR(r9)
-
-	/*
-	 * The cases we want to handle here are those where the guest
-	 * is in real suspend mode and is trying to transition to
-	 * transactional mode.
-	 */
-	lbz	r0, HSTATE_FAKE_SUSPEND(r13)
-	cmpwi	r0, 0		/* keep exiting guest if in fake suspend */
-	bne	guest_exit_cont
-	rldicl	r3, r11, 64 - MSR_TS_S_LG, 62
-	cmpwi	r3, 1		/* or if not in suspend state */
-	bne	guest_exit_cont
-
-	/* Call C code to do the emulation */
-	mr	r3, r9
-	bl	kvmhv_p9_tm_emulation_early
-	nop
-	ld	r9, HSTATE_KVM_VCPU(r13)
-	li	r12, BOOK3S_INTERRUPT_HV_SOFTPATCH
-	cmpwi	r3, 0
-	beq	guest_exit_cont		/* continue exiting if not handled */
-	ld	r10, VCPU_PC(r9)
-	ld	r11, VCPU_MSR(r9)
-	b	fast_interrupt_c_return	/* go back to guest if handled */
-#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-
 /*
  * Check whether an HDSI is an HPTE not found fault or something else.
  * If it is an HPTE not found fault that is due to the guest accessing
From patchwork Mon Jul 26 03:49:43 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1509703
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.50.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:50:48 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 02/55] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt Date: Mon, 26 Jul 2021 13:49:43 +1000 Message-Id: <20210726035036.739609-3-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The softpatch interrupt sets HSRR0 to the faulting instruction +4, so it should subtract 4 for the faulting instruction address. Also have it emulate and deliver HFAC interrupts correctly, which is important for nested HV and facility demand-faulting in future. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/reg.h | 3 +- arch/powerpc/kvm/book3s_hv.c | 35 ++++++++++++-------- arch/powerpc/kvm/book3s_hv_tm.c | 57 +++++++++++++++++++++------------ 3 files changed, 61 insertions(+), 34 deletions(-) diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h index be85cf156a1f..e9d27265253b 100644 --- a/arch/powerpc/include/asm/reg.h +++ b/arch/powerpc/include/asm/reg.h @@ -415,6 +415,7 @@ #define FSCR_TAR __MASK(FSCR_TAR_LG) #define FSCR_EBB __MASK(FSCR_EBB_LG) #define FSCR_DSCR __MASK(FSCR_DSCR_LG) +#define FSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56) /* interrupt cause */ #define SPRN_HFSCR 0xbe /* HV=1 Facility Status & Control Register */ #define HFSCR_PREFIX __MASK(FSCR_PREFIX_LG) #define HFSCR_MSGP __MASK(FSCR_MSGP_LG) @@ -426,7 +427,7 @@ #define HFSCR_DSCR __MASK(FSCR_DSCR_LG) #define HFSCR_VECVSX __MASK(FSCR_VECVSX_LG) #define HFSCR_FP __MASK(FSCR_FP_LG) -#define HFSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56) /* interrupt cause */ +#define HFSCR_INTR_CAUSE FSCR_INTR_CAUSE #define SPRN_TAR 0x32f /* Target Address Register */ #define SPRN_LPCR 0x13E /* LPAR Control Register */ #define LPCR_VPM0 ASM_CONST(0x8000000000000000) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index ce7ff12cfc03..adac1a6431a0 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1682,6 +1682,21 @@ XXX benchmark guest exits r = RESUME_GUEST; } break; + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + case BOOK3S_INTERRUPT_HV_SOFTPATCH: + /* + * This occurs for various TM-related instructions that + * we need to emulate on POWER9 DD2.2. We have already + * handled the cases where the guest was in real-suspend + * mode and was transitioning to transactional state. + */ + r = kvmhv_p9_tm_emulation(vcpu); + if (r != -1) + break; + fallthrough; /* go to facility unavailable handler */ +#endif + /* * This occurs if the guest (kernel or userspace), does something that * is prohibited by HFSCR. @@ -1700,18 +1715,6 @@ XXX benchmark guest exits } break; -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM - case BOOK3S_INTERRUPT_HV_SOFTPATCH: - /* - * This occurs for various TM-related instructions that - * we need to emulate on POWER9 DD2.2. We have already - * handled the cases where the guest was in real-suspend - * mode and was transitioning to transactional state. 
- */ - r = kvmhv_p9_tm_emulation(vcpu); - break; -#endif - case BOOK3S_INTERRUPT_HV_RM_HARD: r = RESUME_PASSTHROUGH; break; @@ -1814,9 +1817,15 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu) * mode and was transitioning to transactional state. */ r = kvmhv_p9_tm_emulation(vcpu); - break; + if (r != -1) + break; + fallthrough; /* go to facility unavailable handler */ #endif + case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: + r = RESUME_HOST; + break; + case BOOK3S_INTERRUPT_HV_RM_HARD: vcpu->arch.trap = 0; r = RESUME_GUEST; diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c index cc90b8b82329..e4fd4a9dee08 100644 --- a/arch/powerpc/kvm/book3s_hv_tm.c +++ b/arch/powerpc/kvm/book3s_hv_tm.c @@ -74,19 +74,23 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu) case PPC_INST_RFEBB: if ((msr & MSR_PR) && (vcpu->arch.vcore->pcr & PCR_ARCH_206)) { /* generate an illegal instruction interrupt */ + vcpu->arch.regs.nip -= 4; kvmppc_core_queue_program(vcpu, SRR1_PROGILL); return RESUME_GUEST; } /* check EBB facility is available */ if (!(vcpu->arch.hfscr & HFSCR_EBB)) { - /* generate an illegal instruction interrupt */ - kvmppc_core_queue_program(vcpu, SRR1_PROGILL); - return RESUME_GUEST; + vcpu->arch.regs.nip -= 4; + vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE; + vcpu->arch.hfscr |= (u64)FSCR_EBB_LG << 56; + vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL; + return -1; /* rerun host interrupt handler */ } if ((msr & MSR_PR) && !(vcpu->arch.fscr & FSCR_EBB)) { /* generate a facility unavailable interrupt */ - vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) | - ((u64)FSCR_EBB_LG << 56); + vcpu->arch.regs.nip -= 4; + vcpu->arch.fscr &= ~FSCR_INTR_CAUSE; + vcpu->arch.fscr |= (u64)FSCR_EBB_LG << 56; kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_FAC_UNAVAIL); return RESUME_GUEST; } @@ -123,19 +127,23 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu) /* check for PR=1 and arch 2.06 bit set in PCR */ if ((msr & MSR_PR) && (vcpu->arch.vcore->pcr & PCR_ARCH_206)) { /* generate an illegal instruction interrupt */ + vcpu->arch.regs.nip -= 4; kvmppc_core_queue_program(vcpu, SRR1_PROGILL); return RESUME_GUEST; } /* check for TM disabled in the HFSCR or MSR */ if (!(vcpu->arch.hfscr & HFSCR_TM)) { - /* generate an illegal instruction interrupt */ - kvmppc_core_queue_program(vcpu, SRR1_PROGILL); - return RESUME_GUEST; + vcpu->arch.regs.nip -= 4; + vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE; + vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56; + vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL; + return -1; /* rerun host interrupt handler */ } if (!(msr & MSR_TM)) { /* generate a facility unavailable interrupt */ - vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) | - ((u64)FSCR_TM_LG << 56); + vcpu->arch.regs.nip -= 4; + vcpu->arch.fscr &= ~FSCR_INTR_CAUSE; + vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56; kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_FAC_UNAVAIL); return RESUME_GUEST; @@ -158,20 +166,24 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu) case (PPC_INST_TRECLAIM & PO_XOP_OPCODE_MASK): /* check for TM disabled in the HFSCR or MSR */ if (!(vcpu->arch.hfscr & HFSCR_TM)) { - /* generate an illegal instruction interrupt */ - kvmppc_core_queue_program(vcpu, SRR1_PROGILL); - return RESUME_GUEST; + vcpu->arch.regs.nip -= 4; + vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE; + vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56; + vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL; + return -1; /* rerun host interrupt handler */ } if (!(msr & MSR_TM)) { /* generate a facility unavailable interrupt 
*/ - vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) | - ((u64)FSCR_TM_LG << 56); + vcpu->arch.regs.nip -= 4; + vcpu->arch.fscr &= ~FSCR_INTR_CAUSE; + vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56; kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_FAC_UNAVAIL); return RESUME_GUEST; } /* If no transaction active, generate TM bad thing */ if (!MSR_TM_ACTIVE(msr)) { + vcpu->arch.regs.nip -= 4; kvmppc_core_queue_program(vcpu, SRR1_PROGTM); return RESUME_GUEST; } @@ -196,20 +208,24 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu) /* XXX do we need to check for PR=0 here? */ /* check for TM disabled in the HFSCR or MSR */ if (!(vcpu->arch.hfscr & HFSCR_TM)) { - /* generate an illegal instruction interrupt */ - kvmppc_core_queue_program(vcpu, SRR1_PROGILL); - return RESUME_GUEST; + vcpu->arch.regs.nip -= 4; + vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE; + vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56; + vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL; + return -1; /* rerun host interrupt handler */ } if (!(msr & MSR_TM)) { /* generate a facility unavailable interrupt */ - vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) | - ((u64)FSCR_TM_LG << 56); + vcpu->arch.regs.nip -= 4; + vcpu->arch.fscr &= ~FSCR_INTR_CAUSE; + vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56; kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_FAC_UNAVAIL); return RESUME_GUEST; } /* If transaction active or TEXASR[FS] = 0, bad thing */ if (MSR_TM_ACTIVE(msr) || !(vcpu->arch.texasr & TEXASR_FS)) { + vcpu->arch.regs.nip -= 4; kvmppc_core_queue_program(vcpu, SRR1_PROGTM); return RESUME_GUEST; } @@ -224,6 +240,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu) } /* What should we do here? We didn't recognize the instruction */ + vcpu->arch.regs.nip -= 4; kvmppc_core_queue_program(vcpu, SRR1_PROGILL); pr_warn_ratelimited("Unrecognized TM-related instruction %#x for emulation", instr); From patchwork Mon Jul 26 03:49:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509704 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=noIJf7Zl; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Yp3FGqz9svs for ; Mon, 26 Jul 2021 13:50:54 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231250AbhGZDKY (ORCPT ); Sun, 25 Jul 2021 23:10:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53030 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDKX (ORCPT ); Sun, 25 Jul 2021 23:10:23 -0400 Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B501C061757 for ; Sun, 25 Jul 2021 20:50:52 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id mt6so11146099pjb.1 for ; Sun, 25 Jul 2021 20:50:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references 
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Fabiano Rosas
Subject: [PATCH v1 03/55] KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path
Date: Mon, 26 Jul 2021 13:49:44 +1000
Message-Id: <20210726035036.739609-4-npiggin@gmail.com>
In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com>
References: <20210726035036.739609-1-npiggin@gmail.com>

From: Fabiano Rosas

As one of the arguments of the H_ENTER_NESTED hypercall, the nested
hypervisor (L1) prepares a structure containing the values of various
hypervisor-privileged registers with which it wants the nested guest
(L2) to run. Since the nested HV runs in supervisor mode, it needs the
host to write to these registers.

To stop a nested HV from manipulating this mechanism and using a nested
guest as a proxy to access a facility that has been made unavailable to
it, we have a routine that sanitises the values of the HV registers
before copying them into the nested guest's vcpu struct.

However, when coming out of the guest the values are copied back into
L1 memory as they were, which means that any sanitisation we did during
guest entry will be exposed to L1 after H_ENTER_NESTED returns.

This patch applies the sanitisation to the vcpu->arch registers
directly, before entering and after exiting the guest, leaving the
structure that is copied back into L1 unchanged (except when we really
want L1 to access the value, e.g. the Cause bits of HFSCR).
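
In outline, the approach described above masks the requested value on
entry and copies back only the cause field on exit. A minimal
standalone sketch of that rule (illustrative only; the type and helper
names below are stand-ins, not the patch's code):

#include <stdint.h>

#define HFSCR_INTR_CAUSE	(0xffULL << 56)	/* interrupt cause field */

struct hv_regs {				/* illustrative stand-in */
	uint64_t hfscr;
};

/* Entry: the live value L2 runs with is limited by what L1 was granted. */
static uint64_t load_l2_hfscr(const struct hv_regs *l2_req,
			      const struct hv_regs *l1_granted)
{
	return l2_req->hfscr & (HFSCR_INTR_CAUSE | l1_granted->hfscr);
}

/* Exit: only the cause bits flow back into the state L1 can read. */
static void save_l2_hfscr(struct hv_regs *ret_to_l1, uint64_t live_hfscr)
{
	ret_to_l1->hfscr = (ret_to_l1->hfscr & ~HFSCR_INTR_CAUSE) |
			   (live_hfscr & HFSCR_INTR_CAUSE);
}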
Signed-off-by: Fabiano Rosas --- arch/powerpc/kvm/book3s_hv_nested.c | 100 +++++++++++++++------------- 1 file changed, 52 insertions(+), 48 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index 898f942eb198..9bb0788d312c 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -104,8 +104,17 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap, { struct kvmppc_vcore *vc = vcpu->arch.vcore; + /* + * When loading the hypervisor-privileged registers to run L2, + * we might have used bits from L1 state to restrict what the + * L2 state is allowed to be. Since L1 is not allowed to read + * the HV registers, do not include these modifications in the + * return state. + */ + hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) | + (HFSCR_INTR_CAUSE & vcpu->arch.hfscr)); + hr->dpdes = vc->dpdes; - hr->hfscr = vcpu->arch.hfscr; hr->purr = vcpu->arch.purr; hr->spurr = vcpu->arch.spurr; hr->ic = vcpu->arch.ic; @@ -134,49 +143,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap, } } -/* - * This can result in some L0 HV register state being leaked to an L1 - * hypervisor when the hv_guest_state is copied back to the guest after - * being modified here. - * - * There is no known problem with such a leak, and in many cases these - * register settings could be derived by the guest by observing behaviour - * and timing, interrupts, etc., but it is an issue to consider. - */ -static void sanitise_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr) -{ - struct kvmppc_vcore *vc = vcpu->arch.vcore; - u64 mask; - - /* - * Don't let L1 change LPCR bits for the L2 except these: - */ - mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD | - LPCR_LPES | LPCR_MER; - - /* - * Additional filtering is required depending on hardware - * and configuration. - */ - hr->lpcr = kvmppc_filter_lpcr_hv(vcpu->kvm, - (vc->lpcr & ~mask) | (hr->lpcr & mask)); - - /* - * Don't let L1 enable features for L2 which we've disabled for L1, - * but preserve the interrupt cause field. - */ - hr->hfscr &= (HFSCR_INTR_CAUSE | vcpu->arch.hfscr); - - /* Don't let data address watchpoint match in hypervisor state */ - hr->dawrx0 &= ~DAWRX_HYP; - hr->dawrx1 &= ~DAWRX_HYP; - - /* Don't let completed instruction address breakpt match in HV state */ - if ((hr->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER) - hr->ciabr &= ~CIABR_PRIV; -} - -static void restore_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr) +static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state *hr) { struct kvmppc_vcore *vc = vcpu->arch.vcore; @@ -288,6 +255,43 @@ static int kvmhv_write_guest_state_and_regs(struct kvm_vcpu *vcpu, sizeof(struct pt_regs)); } +static void load_l2_hv_regs(struct kvm_vcpu *vcpu, + const struct hv_guest_state *l2_hv, + const struct hv_guest_state *l1_hv, u64 *lpcr) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + u64 mask; + + restore_hv_regs(vcpu, l2_hv); + + /* + * Don't let L1 change LPCR bits for the L2 except these: + */ + mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD | + LPCR_LPES | LPCR_MER; + + /* + * Additional filtering is required depending on hardware + * and configuration. + */ + *lpcr = kvmppc_filter_lpcr_hv(vcpu->kvm, + (vc->lpcr & ~mask) | (*lpcr & mask)); + + /* + * Don't let L1 enable features for L2 which we've disabled for L1, + * but preserve the interrupt cause field. 
+ */ + vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | l1_hv->hfscr); + + /* Don't let data address watchpoint match in hypervisor state */ + vcpu->arch.dawrx0 = l2_hv->dawrx0 & ~DAWRX_HYP; + vcpu->arch.dawrx1 = l2_hv->dawrx1 & ~DAWRX_HYP; + + /* Don't let completed instruction address breakpt match in HV state */ + if ((l2_hv->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER) + vcpu->arch.ciabr = l2_hv->ciabr & ~CIABR_PRIV; +} + long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) { long int err, r; @@ -296,7 +300,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) struct hv_guest_state l2_hv = {0}, saved_l1_hv; struct kvmppc_vcore *vc = vcpu->arch.vcore; u64 hv_ptr, regs_ptr; - u64 hdec_exp; + u64 hdec_exp, lpcr; s64 delta_purr, delta_spurr, delta_ic, delta_vtb; if (vcpu->kvm->arch.l1_ptcr == 0) @@ -369,8 +373,8 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) /* Guest must always run with ME enabled, HV disabled. */ vcpu->arch.shregs.msr = (vcpu->arch.regs.msr | MSR_ME) & ~MSR_HV; - sanitise_hv_regs(vcpu, &l2_hv); - restore_hv_regs(vcpu, &l2_hv); + lpcr = l2_hv.lpcr; + load_l2_hv_regs(vcpu, &l2_hv, &saved_l1_hv, &lpcr); vcpu->arch.ret = RESUME_GUEST; vcpu->arch.trap = 0; @@ -380,7 +384,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) r = RESUME_HOST; break; } - r = kvmhv_run_single_vcpu(vcpu, hdec_exp, l2_hv.lpcr); + r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr); } while (is_kvmppc_resume_guest(r)); /* save L2 state for return */ From patchwork Mon Jul 26 03:49:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509705 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ZZgsfapa; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Yr3g4Dz9snk for ; Mon, 26 Jul 2021 13:50:56 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231280AbhGZDK0 (ORCPT ); Sun, 25 Jul 2021 23:10:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53038 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDKZ (ORCPT ); Sun, 25 Jul 2021 23:10:25 -0400 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7DE58C061757 for ; Sun, 25 Jul 2021 20:50:54 -0700 (PDT) Received: by mail-pj1-x102a.google.com with SMTP id mt6so11146207pjb.1 for ; Sun, 25 Jul 2021 20:50:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=8UBTM45PptEgGbJ6eGLIibMu8Q6Rsmel7GFQgZufaPE=; b=ZZgsfapa4LOcRIARu+vtQhK/XfomCmdBHNAdRZ7FFxh1y0vUuyc4F54Pt0xv3aXqcG +9OLJea0LQQttph5g54UKKcYZLlu/ZJTrheWz1tqJ3aCfSoLFDMM39ShhmDdDRS1HeU5 3YkNpynzrSlqs/nmc5jwqGywnJCJbrwzBI+nzMMUaWznKlbxyvceap2yGOQgJuNx5R2y Mfdu34B8nR5X5+gtPJ8WYKSwXxwmjs4IlhwVQC0QkMxPDAU80vbo2yWHZQnFdBE1UeKu 
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Fabiano Rosas
Subject: [PATCH v1 04/55] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1
Date: Mon, 26 Jul 2021 13:49:45 +1000
Message-Id: <20210726035036.739609-5-npiggin@gmail.com>
In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com>
References: <20210726035036.739609-1-npiggin@gmail.com>

From: Fabiano Rosas

If the nested hypervisor has no access to a facility because it has
been disabled by the host, it should also not be able to see the
Hypervisor Facility Unavailable interrupt that arises when one of its
guests tries to access the facility.

This patch turns an HFU that happened in L2 into a Hypervisor Emulation
Assistance interrupt and forwards it to L1 for handling. The ones that
happened because L1 explicitly disabled the facility for L2 are still
let through, along with the corresponding Cause bits in the HFSCR.
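
The forwarding decision described above boils down to: if L1 was never
given the facility, the fault cannot be reported to it as an HFU. A
simplified sketch (illustrative stand-ins, not the patch's code):

#include <stdint.h>

#define HFSCR_INTR_CAUSE	(0xffULL << 56)

enum l1_trap { TRAP_H_FAC_UNAVAIL, TRAP_H_EMUL_ASSIST };

/*
 * l2_req_hfscr is the HFSCR value L1 asked for L2; live_hfscr is the
 * host-restricted value the fault was actually taken with.
 */
static enum l1_trap classify_hfu(uint64_t live_hfscr, uint64_t *l2_req_hfscr)
{
	uint8_t cause = live_hfscr >> 56;	/* facility that faulted */

	if (!(*l2_req_hfscr & (1ULL << cause)))
		return TRAP_H_FAC_UNAVAIL;	/* L1 disabled it: forward the HFU */

	/* The host disabled it, so it does not exist from L1's view. */
	*l2_req_hfscr &= ~HFSCR_INTR_CAUSE;	/* don't leak the cause bits */
	return TRAP_H_EMUL_ASSIST;
}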
Signed-off-by: Fabiano Rosas --- arch/powerpc/kvm/book3s_hv_nested.c | 27 ++++++++++++++++++++++++--- 1 file changed, 24 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index 9bb0788d312c..983628ed4376 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -99,7 +99,7 @@ static void byteswap_hv_regs(struct hv_guest_state *hr) hr->dawrx1 = swab64(hr->dawrx1); } -static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap, +static void save_hv_return_state(struct kvm_vcpu *vcpu, struct hv_guest_state *hr) { struct kvmppc_vcore *vc = vcpu->arch.vcore; @@ -128,7 +128,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap, hr->pidr = vcpu->arch.pid; hr->cfar = vcpu->arch.cfar; hr->ppr = vcpu->arch.ppr; - switch (trap) { + switch (vcpu->arch.trap) { case BOOK3S_INTERRUPT_H_DATA_STORAGE: hr->hdar = vcpu->arch.fault_dar; hr->hdsisr = vcpu->arch.fault_dsisr; @@ -137,6 +137,27 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap, case BOOK3S_INTERRUPT_H_INST_STORAGE: hr->asdr = vcpu->arch.fault_gpa; break; + case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: + { + u8 cause = vcpu->arch.hfscr >> 56; + + WARN_ON_ONCE(cause >= BITS_PER_LONG); + + if (!(hr->hfscr & (1UL << cause))) + break; + + /* + * We have disabled this facility, so it does not + * exist from L1's perspective. Turn it into a HEAI. + */ + vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST; + kvmppc_load_last_inst(vcpu, INST_GENERIC, &vcpu->arch.emul_inst); + + /* Don't leak the cause field */ + hr->hfscr &= ~HFSCR_INTR_CAUSE; + + fallthrough; + } case BOOK3S_INTERRUPT_H_EMUL_ASSIST: hr->heir = vcpu->arch.emul_inst; break; @@ -394,7 +415,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) delta_spurr = vcpu->arch.spurr - l2_hv.spurr; delta_ic = vcpu->arch.ic - l2_hv.ic; delta_vtb = vc->vtb - l2_hv.vtb; - save_hv_return_state(vcpu, vcpu->arch.trap, &l2_hv); + save_hv_return_state(vcpu, &l2_hv); /* restore L1 state */ vcpu->arch.nested = NULL; From patchwork Mon Jul 26 03:49:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509706 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=l/PcRICZ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Yv0YKJz9t10 for ; Mon, 26 Jul 2021 13:50:59 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231371AbhGZDK3 (ORCPT ); Sun, 25 Jul 2021 23:10:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53048 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDK2 (ORCPT ); Sun, 25 Jul 2021 23:10:28 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B176C061757 for ; Sun, 25 Jul 2021 20:50:57 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id 
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Fabiano Rosas
Subject: [PATCH v1 05/55] KVM: PPC: Book3S HV Nested: Reflect guest PMU in-use to L0 when guest SPRs are live
Date: Mon, 26 Jul 2021 13:49:46 +1000
Message-Id: <20210726035036.739609-6-npiggin@gmail.com>
In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com>
References: <20210726035036.739609-1-npiggin@gmail.com>

After the L1 saves its PMU SPRs but before loading the L2's PMU SPRs,
switch the pmcregs_in_use field in the L1 lppaca to the value advertised
by the L2 in its VPA. On the way out of the L2, set it back after saving
the L2 PMU registers (if they were in-use).

This transfers the PMU liveness indication between the L1 and L2 at the
points where the registers are not live.

This fixes the nested HV bug for which a workaround was added to the L0
HV by commit 63279eeb7f93a ("KVM: PPC: Book3S HV: Always save guest pmu
for guest capable of nesting"), which explains the problem in detail.
That workaround is no longer required for guests that include this bug
fix.
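
A simplified model of the handoff described above (illustrative only;
the real lppaca/VPA handling in the kernel is more involved than this
stand-in):

#include <stdbool.h>
#include <stdint.h>

struct shared_area {			/* stand-in for the lppaca / VPA field */
	uint8_t pmcregs_in_use;
};

/* L1's own PMU SPRs are already saved when this runs. */
static void publish_l2_pmu_state(struct shared_area *l1_lppaca,
				 const struct shared_area *l2_vpa)
{
	/* If L2 registered no VPA, conservatively assume its PMU is in use. */
	l1_lppaca->pmcregs_in_use = l2_vpa ? l2_vpa->pmcregs_in_use : 1;
}

/* L2's PMU SPRs have been saved again when this runs. */
static void restore_l1_pmu_state(struct shared_area *l1_lppaca,
				 bool l1_pmu_in_use)
{
	l1_lppaca->pmcregs_in_use = l1_pmu_in_use;
}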
Fixes: 360cae313702 ("KVM: PPC: Book3S HV: Nested guest entry via hypercall") Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/pmc.h | 7 +++++++ arch/powerpc/kvm/book3s_hv.c | 20 ++++++++++++++++++++ 2 files changed, 27 insertions(+) diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h index c6bbe9778d3c..3c09109e708e 100644 --- a/arch/powerpc/include/asm/pmc.h +++ b/arch/powerpc/include/asm/pmc.h @@ -34,6 +34,13 @@ static inline void ppc_set_pmu_inuse(int inuse) #endif } +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +static inline int ppc_get_pmu_inuse(void) +{ + return get_paca()->pmcregs_in_use; +} +#endif + extern void power4_enable_pmcs(void); #else /* CONFIG_PPC64 */ diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index adac1a6431a0..c743020837e7 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -59,6 +59,7 @@ #include #include #include +#include #include #include #include @@ -3864,6 +3865,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + if (vcpu->arch.vpa.pinned_addr) { + struct lppaca *lp = vcpu->arch.vpa.pinned_addr; + get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; + } else { + get_lppaca()->pmcregs_in_use = 1; + } + barrier(); + } +#endif kvmhv_load_guest_pmu(vcpu); msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); @@ -3998,6 +4011,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_pmu |= nesting_enabled(vcpu->kvm); kvmhv_save_guest_pmu(vcpu, save_pmu); +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); + barrier(); + } +#endif vc->entry_exit_map = 0x101; vc->in_guest = 0; From patchwork Mon Jul 26 03:49:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509707 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=gKo0r927; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Yz6mdvz9t5H for ; Mon, 26 Jul 2021 13:51:01 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231419AbhGZDKb (ORCPT ); Sun, 25 Jul 2021 23:10:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53062 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDKb (ORCPT ); Sun, 25 Jul 2021 23:10:31 -0400 Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com [IPv6:2607:f8b0:4864:20::102d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BBB32C061757 for ; Sun, 25 Jul 2021 20:50:59 -0700 (PDT) Received: by mail-pj1-x102d.google.com with SMTP id g23-20020a17090a5797b02901765d605e14so12346711pji.5 for ; Sun, 25 Jul 2021 20:50:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; 
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Fabiano Rosas
Subject: [PATCH v1 06/55] powerpc/64s: Remove WORT SPR from POWER9/10
Date: Mon, 26 Jul 2021 13:49:47 +1000
Message-Id: <20210726035036.739609-7-npiggin@gmail.com>
In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com>
References: <20210726035036.739609-1-npiggin@gmail.com>

This register is not architected and is not implemented in POWER9 or
POWER10; it just reads back zeroes for compatibility.
-78 cycles (9255) cycles POWER9 virt-mode NULL hcall Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 3 --- arch/powerpc/platforms/powernv/idle.c | 2 -- 2 files changed, 5 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index c743020837e7..905bf29940ea 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3740,7 +3740,6 @@ static void load_spr_state(struct kvm_vcpu *vcpu) mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); mtspr(SPRN_BESCR, vcpu->arch.bescr); - mtspr(SPRN_WORT, vcpu->arch.wort); mtspr(SPRN_TIDR, vcpu->arch.tid); mtspr(SPRN_AMR, vcpu->arch.amr); mtspr(SPRN_UAMOR, vcpu->arch.uamor); @@ -3767,7 +3766,6 @@ static void store_spr_state(struct kvm_vcpu *vcpu) vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); vcpu->arch.bescr = mfspr(SPRN_BESCR); - vcpu->arch.wort = mfspr(SPRN_WORT); vcpu->arch.tid = mfspr(SPRN_TIDR); vcpu->arch.amr = mfspr(SPRN_AMR); vcpu->arch.uamor = mfspr(SPRN_UAMOR); @@ -3799,7 +3797,6 @@ static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { mtspr(SPRN_PSPB, 0); - mtspr(SPRN_WORT, 0); mtspr(SPRN_UAMOR, 0); mtspr(SPRN_DSCR, host_os_sprs->dscr); diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c index 1e908536890b..df19e2ff9d3c 100644 --- a/arch/powerpc/platforms/powernv/idle.c +++ b/arch/powerpc/platforms/powernv/idle.c @@ -667,7 +667,6 @@ static unsigned long power9_idle_stop(unsigned long psscr) sprs.purr = mfspr(SPRN_PURR); sprs.spurr = mfspr(SPRN_SPURR); sprs.dscr = mfspr(SPRN_DSCR); - sprs.wort = mfspr(SPRN_WORT); sprs.ciabr = mfspr(SPRN_CIABR); sprs.mmcra = mfspr(SPRN_MMCRA); @@ -785,7 +784,6 @@ static unsigned long power9_idle_stop(unsigned long psscr) mtspr(SPRN_PURR, sprs.purr); mtspr(SPRN_SPURR, sprs.spurr); mtspr(SPRN_DSCR, sprs.dscr); - mtspr(SPRN_WORT, sprs.wort); mtspr(SPRN_CIABR, sprs.ciabr); mtspr(SPRN_MMCRA, sprs.mmcra); From patchwork Mon Jul 26 03:49:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509708 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=h/yBtb1l; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Z13jGPz9ssD for ; Mon, 26 Jul 2021 13:51:05 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231421AbhGZDKe (ORCPT ); Sun, 25 Jul 2021 23:10:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53072 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDKd (ORCPT ); Sun, 25 Jul 2021 23:10:33 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3741BC061757 for ; Sun, 25 Jul 2021 20:51:02 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id k4-20020a17090a5144b02901731c776526so17810726pjm.4 
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy
Subject: [PATCH v1 07/55] KVM: PPC: Book3S HV P9: Use set_dec to set decrementer to host
Date: Mon, 26 Jul 2021 13:49:48 +1000
Message-Id: <20210726035036.739609-8-npiggin@gmail.com>
In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com>
References: <20210726035036.739609-1-npiggin@gmail.com>

The host Linux timer code arms the decrementer with the value
'decrementers_next_tb - current_tb' using set_dec(), which stores
val - 1 on Book3S-64. That is not quite the same as what KVM does to
re-arm the host decrementer when exiting the guest.

This shouldn't be a significant change, but it makes the logic match
and avoids this small difference being carried into the next patch.
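
A small model of the off-by-one difference described above
(illustrative only, not the kernel's set_dec() source):

#include <stdint.h>

static uint64_t dec_reg;			/* stand-in for SPRN_DEC */

static void model_mtspr_dec(uint64_t val)	/* raw write, what KVM did before */
{
	dec_reg = val;
}

static void model_set_dec(uint64_t val)		/* what the host timer code uses */
{
	model_mtspr_dec(val - 1);		/* Book3S-64 behaviour per the text above */
}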
Suggested-by: Alexey Kardashevskiy
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 905bf29940ea..7020cbbf3aa1 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4019,7 +4019,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	vc->entry_exit_map = 0x101;
 	vc->in_guest = 0;
 
-	mtspr(SPRN_DEC, local_paca->kvm_hstate.dec_expires - mftb());
+	set_dec(local_paca->kvm_hstate.dec_expires - mftb());
 	/* We may have raced with new irq work */
 	if (test_irq_work_pending())
 		set_dec(1);

From patchwork Mon Jul 26 03:49:49 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1509709
ABdhPJxo9hVzjn7hW3VmNYfRG/qy/icGPi79DwZhCNr6qvpgl+mPOUebyZqm/10pXKdzhDyuGjIcYg== X-Received: by 2002:a17:90b:3905:: with SMTP id ob5mr1018792pjb.211.1627271463935; Sun, 25 Jul 2021 20:51:03 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:03 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 08/55] KVM: PPC: Book3S HV P9: Use host timer accounting to avoid decrementer read Date: Mon, 26 Jul 2021 13:49:49 +1000 Message-Id: <20210726035036.739609-9-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org There is no need to save away the host DEC value, as it is derived from the host timer subsystem which maintains the next timer time, so it can be restored from there. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 5 +++++ arch/powerpc/kernel/time.c | 1 + arch/powerpc/kvm/book3s_hv.c | 14 +++++++------- 3 files changed, 13 insertions(+), 7 deletions(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index 8c2c3dd4ddba..fd09b4797fd7 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -111,6 +111,11 @@ static inline unsigned long test_irq_work_pending(void) DECLARE_PER_CPU(u64, decrementers_next_tb); +static inline u64 timer_get_next_tb(void) +{ + return __this_cpu_read(decrementers_next_tb); +} + /* Convert timebase ticks to nanoseconds */ unsigned long long tb_to_ns(unsigned long long tb_ticks); diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index e45ce427bffb..01df89918aa4 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -108,6 +108,7 @@ struct clock_event_device decrementer_clockevent = { EXPORT_SYMBOL(decrementer_clockevent); DEFINE_PER_CPU(u64, decrementers_next_tb); +EXPORT_SYMBOL_GPL(decrementers_next_tb); static DEFINE_PER_CPU(struct clock_event_device, decrementers); #define XSEC_PER_SEC (1024*1024) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 7020cbbf3aa1..82976f734bd1 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3829,18 +3829,17 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, struct kvmppc_vcore *vc = vcpu->arch.vcore; struct p9_host_os_sprs host_os_sprs; s64 dec; - u64 tb; + u64 tb, next_timer; int trap, save_pmu; WARN_ON_ONCE(vcpu->arch.ceded); - dec = mfspr(SPRN_DEC); tb = mftb(); - if (dec < 0) + next_timer = timer_get_next_tb(); + if (tb >= next_timer) return BOOK3S_INTERRUPT_HV_DECREMENTER; - local_paca->kvm_hstate.dec_expires = dec + tb; - if (local_paca->kvm_hstate.dec_expires < time_limit) - time_limit = local_paca->kvm_hstate.dec_expires; + if (next_timer < time_limit) + time_limit = next_timer; save_p9_host_os_sprs(&host_os_sprs); @@ -4019,7 +4018,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - set_dec(local_paca->kvm_hstate.dec_expires - mftb()); + next_timer = timer_get_next_tb(); + set_dec(next_timer - mftb()); /* We may have raced with new irq work */ if (test_irq_work_pending()) set_dec(1); From 
patchwork Mon Jul 26 03:49:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509710 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=T4JDfWBF; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Z46Vd6z9ssD for ; Mon, 26 Jul 2021 13:51:08 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231476AbhGZDKi (ORCPT ); Sun, 25 Jul 2021 23:10:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53092 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDKi (ORCPT ); Sun, 25 Jul 2021 23:10:38 -0400 Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F2AB6C061757 for ; Sun, 25 Jul 2021 20:51:06 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id q17-20020a17090a2e11b02901757deaf2c8so12487604pjd.0 for ; Sun, 25 Jul 2021 20:51:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=kGMO3KzFWfaiWdxRFMMRg8fmrR2Jvox/ePS+dlDySeM=; b=T4JDfWBFUi5OWwQq4UEJXRpK2IbKJs+ukhR/hf+kauIbzD2aPqKEOVbgZhsFvBh+JX RUB/cueg7stI0g0lVEr44BhZ9g2exoOzev9rG4PVl9G7z99TF4FXpinMtBt0RCfYSktX S8ugeHUqJ8sS7HQkbyQl6Go9yuFOyKodhq17sRR62ijrc7aY4jSsgQBLehJ+sdyovj2E Efwt3iCTkPxucLqHM+NCszZM5/7n+NeGMl3XFJfspJfEKcYFa9w3yOaayonDWr2efAy0 ProqW08pW75r3xoSuAu18ttes0P1GX+/yoL+fnULmsruHzLEY3CgA5GpdHGdYtBgvIEg 6OMQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kGMO3KzFWfaiWdxRFMMRg8fmrR2Jvox/ePS+dlDySeM=; b=ck+pUgGQ4BQ3Xn/I0heQ1u+voeOvJnNNAErEaqOXCZ6kuTOsLkybaQfhCeYdeVQPeX dIS+4ah/0fBse+jRw+oMQ/EZlRgybmG+0GQd9DqrvDBv3q9yjf6Zu3m10wtMTQpJgsLI 7cuftXOBeFRAfwaiHoBhW6lHOiuW1L+FmEiiRrIFIV3/QrzHRGmfjCIIYRexRLQhkdS3 7kpCXuqYayhvvPeqUPPSBYdmCUa1wY5yHsIYTv4TM9wkxVBmG+tkUG0uCRnYO2Ar8uPd hSB42R3SpHk7hyLOCjooNPfl/Jws2SxE6Pzl2UJWoMVfqcQ7UlGeMxrp/TKyk8/fmG9d fIkA== X-Gm-Message-State: AOAM532TUsm2L8/3cC9S+LtojK77YehVvF2XV4iAqhRQMSITn0qGCYi+ VeZGwAXx+ja7lab9upYgykPqqeZtKDI= X-Google-Smtp-Source: ABdhPJwEJ+Et8P+gZNwFbeoqExsDuLzIlB7g/K+Y2zrsv2h1lyvvqSoiCwxddrMfMn7eS3fbGtCJ3Q== X-Received: by 2002:a17:90b:11d4:: with SMTP id gv20mr15488659pjb.200.1627271466497; Sun, 25 Jul 2021 20:51:06 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:06 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy Subject: [PATCH v1 09/55] KVM: PPC: Book3S HV P9: Use large decrementer for HDEC Date: Mon, 26 Jul 2021 13:49:50 +1000 Message-Id: <20210726035036.739609-10-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org On processors that don't suppress the HDEC exceptions when LPCR[HDICE]=0, this could help reduce needless guest exits due to leftover exceptions on entering the guest. Reviewed-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 2 ++ arch/powerpc/kernel/time.c | 1 + arch/powerpc/kvm/book3s_hv_p9_entry.c | 3 ++- 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index fd09b4797fd7..69b6be617772 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -18,6 +18,8 @@ #include /* time.c */ +extern u64 decrementer_max; + extern unsigned long tb_ticks_per_jiffy; extern unsigned long tb_ticks_per_usec; extern unsigned long tb_ticks_per_sec; diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 01df89918aa4..72d872b49167 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -89,6 +89,7 @@ static struct clocksource clocksource_timebase = { #define DECREMENTER_DEFAULT_MAX 0x7FFFFFFF u64 decrementer_max = DECREMENTER_DEFAULT_MAX; +EXPORT_SYMBOL_GPL(decrementer_max); /* for KVM HDEC */ static int decrementer_set_next_event(unsigned long evt, struct clock_event_device *dev); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 961b3d70483c..0ff9ddb5e7ca 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -504,7 +504,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = 0; } - mtspr(SPRN_HDEC, 0x7fffffff); + /* HDEC must be at least as large as DEC, so decrementer_max fits */ + mtspr(SPRN_HDEC, decrementer_max); save_clear_guest_mmu(kvm, vcpu); switch_mmu_to_host(kvm, host_pidr); From patchwork Mon Jul 26 03:49:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509712 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=XzrO6kzk; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Z925ySz9sXJ for ; Mon, 26 Jul 2021 13:51:13 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231489AbhGZDKl (ORCPT ); Sun, 25 Jul 2021 23:10:41 -0400 Received: from 
lindbergh.monkeyblade.net ([23.128.96.19]:53100 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDKk (ORCPT ); Sun, 25 Jul 2021 23:10:40 -0400 Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com [IPv6:2607:f8b0:4864:20::102d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A68BC061757 for ; Sun, 25 Jul 2021 20:51:09 -0700 (PDT) Received: by mail-pj1-x102d.google.com with SMTP id ds11-20020a17090b08cbb0290172f971883bso17851740pjb.1 for ; Sun, 25 Jul 2021 20:51:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=EgZ4lpFfP2pWHmRv1UKIWaTNAZ0Rl+EpOrIWSU3SiM0=; b=XzrO6kzkK2HwLUUSSSBJzaIK9uqXtBDKZRhxDjYmwEa2A+BO7maUk5GqoxrL6ZAg5L QFjX4zYm2+S7/i/xcWeGC3RVwtBkXMxYiwetxongZEqhUMgt/q0LI9AynUyeU34Qc1GM pcOqWtXh+s5tJdBwLfRrRzDjGdVbWGGPw6IndAZS27ZC/rMcaokeGA2JCRg1S3lTh9TA jcFmA4//GKObolRxpmJ5JSiR7v/aFhr1D64UMRabmOw3+SqIIXVb/6XbRHZiGXVu/zxZ YUMHCMb+1EyfCaPWU/wlcdgkyAMvlFum+3dyQHqfLm3xu0ddY0+vIZeLg4rjtKgo+DM1 7sFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=EgZ4lpFfP2pWHmRv1UKIWaTNAZ0Rl+EpOrIWSU3SiM0=; b=f7gA0AZfxO+hF+CrABBi/fIEyakrnrSJ8vJgt9UD3x3BhZry5fBnPaWjbdWqZxmE0v L9wsshefCc37kGSpGFat6B844Wgff0FWB01JwBB+RR1PRnqL6eTXH6AhijWj/8TsNQnG KOSN/qtRvWkeAsKQoCwqlm5rLtFAOGweEQi7pKAbja8tJJGCJFZsElPZAaL4qmZ2NZ3+ ogZIOk6DZl58GRg6tv3kwFHygXv2OyOEIS6n1Dd4hoGfXWuuwpdx5zGIzC/ERMlevIGb RinpQsfaJ3gzAlTqbbCqDp7jNfJj/aJq+VJ8Wo5i8HNILrKkNRiSHSFzexyfRDGuSsld HJVg== X-Gm-Message-State: AOAM533rq5KhC1wF7L2KFxXchu2QysKHko5PXF1TFq6DmEDswu+I6W8N x8kIzyQGbUbxv8Irb6FVoHHxJuv94uE= X-Google-Smtp-Source: ABdhPJxJe/TQNIbWluVSSKqWVVzlk1rNVgOoBs3cG3lcm2X/e4R7v/2XpBTZqoQ5lREEQFjwzZzohA== X-Received: by 2002:a62:cfc4:0:b029:2fe:eaf8:8012 with SMTP id b187-20020a62cfc40000b02902feeaf88012mr15652003pfg.45.1627271469004; Sun, 25 Jul 2021 20:51:09 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:08 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v1 10/55] KVM: PPC: Book3S HV P9: Reduce mftb per guest entry/exit Date: Mon, 26 Jul 2021 13:49:51 +1000 Message-Id: <20210726035036.739609-11-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org mftb is serialising (dispatch next-to-complete) so it is heavy weight for a mfspr. Avoid reading it multiple times in the entry or exit paths. A small number of cycles delay to timers is tolerable. 
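The change itself is mechanical: take one timebase sample per entry/exit and hand the cached value to every consumer. Roughly (a sketch of the pattern, reusing the names that already appear in the P9 entry path below):

        /* before: each consumer does its own serialising mftb */
        hdec = time_limit - mftb();
        ...
        new_tb = mftb() + vc->tb_offset;

        /* after: sample the timebase once and reuse it */
        tb = mftb();
        hdec = time_limit - tb;
        ...
        new_tb = tb + vc->tb_offset;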
-118 cycles (9137) POWER9 virt-mode NULL hcall Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 4 ++-- arch/powerpc/kvm/book3s_hv_p9_entry.c | 5 +++-- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 82976f734bd1..6e6cfb10e9bb 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3896,7 +3896,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, * * XXX: Another day's problem. */ - mtspr(SPRN_DEC, vcpu->arch.dec_expires - mftb()); + mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); if (kvmhv_on_pseries()) { /* @@ -4019,7 +4019,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->in_guest = 0; next_timer = timer_get_next_tb(); - set_dec(next_timer - mftb()); + set_dec(next_timer - tb); /* We may have raced with new irq work */ if (test_irq_work_pending()) set_dec(1); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 0ff9ddb5e7ca..bd8cf0a65ce8 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -203,7 +203,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_dawr1; unsigned long host_dawrx1; - hdec = time_limit - mftb(); + tb = mftb(); + hdec = time_limit - tb; if (hdec < 0) return BOOK3S_INTERRUPT_HV_DECREMENTER; @@ -215,7 +216,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; if (vc->tb_offset) { - u64 new_tb = mftb() + vc->tb_offset; + u64 new_tb = tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); tb = mftb(); if ((tb & 0xffffff) < (new_tb & 0xffffff)) From patchwork Mon Jul 26 03:49:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509713 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=lBOAk8ys; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5ZB0Ftnz9t8j for ; Mon, 26 Jul 2021 13:51:14 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231491AbhGZDKn (ORCPT ); Sun, 25 Jul 2021 23:10:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53114 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDKn (ORCPT ); Sun, 25 Jul 2021 23:10:43 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D2A5DC061757 for ; Sun, 25 Jul 2021 20:51:11 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id ch6so2114925pjb.5 for ; Sun, 25 Jul 2021 20:51:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=U+hxFadF7rDD737thjtwTXg1szqkFi6egNx7RN8wOw8=; 
b=lBOAk8ysarc37c5RgJdg+ncx9+W7XJTFZnI0pdmZH5q/UvOOHorvI48Z5LsZ1UaQiU rjKl2iMAy+n5Cy2T93ZKIckWqIHl4I4afXgLtgC1YgdMPkfF8Eq5lC6NlY8QJgMYihHL LnbOlEDw+BlpFHcyo6wvdFC/fE+wrcOfp3WNnrgsg8aHR5QYLwFYJ0VKg2cCHvgsYsN6 9L2vqPXOZoRLWTWm8pH8G0+x75lor/VzFieUdoV0Sl4WYrlbqrmJ8ItbY3jDw8rgjEXJ 0Dxe48kPj1l1uKDn4zsw1wgYl4hZO1rjATeZCePf3Abb3vfgh4mCvs9+IaCK1qtPO8Y4 nT1A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=U+hxFadF7rDD737thjtwTXg1szqkFi6egNx7RN8wOw8=; b=aVcjj/7nu5K7/Oiv18eMk6uMR0bPDlrtGdbEzk1+hAJoOIgQe9HtUKTMQCHb1o5/mO OhqCS1ik54ShPKrmXb6Z4sl1hDID0C4nE/rOvEWjFVRgvzZ+r/oijkxuZvnpSMwb2OMf VPh89GUngX/ZNAjLYPnCBju18dRgy3w5AO9i2L2IUngLSym9uTYBdT2EJjLXTdH+s80k US+pNYMpW/u0eScbmPr6iD7ksRMk6VBeQrSHOjcbkEJBFbnKTY6ANIXdy9trUrEq4ViE zVrJFuT0VeYDv4EWI1Mc2jQC5dFWrmKws5ZZCUVu5VFC+i1uvaf8//9NlqmKXvWKdAUk fWTQ== X-Gm-Message-State: AOAM532khtyWNW5ybOOKzJjSO4ZjDwKcX1k9WSHVoD4XOeLIR//YAFf0 uHZ2AWBgfM8VMAjv2HoJCBAmeH+vSzQ= X-Google-Smtp-Source: ABdhPJyERyuOio4qvoqLtz/rUqcHpCZaHWROFblTuPaA1JSruI9r5uXy4KMCWGiK/KExivVaqPDNZw== X-Received: by 2002:a65:61ab:: with SMTP id i11mr16390663pgv.168.1627271471238; Sun, 25 Jul 2021 20:51:11 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:11 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 11/55] powerpc/time: add API for KVM to re-arm the host timer/decrementer Date: Mon, 26 Jul 2021 13:49:52 +1000 Message-Id: <20210726035036.739609-12-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rather than have KVM look up the host timer and fiddle with the irq-work internal details, have the powerpc/time.c code provide a function for KVM to re-arm the Linux timer code when exiting a guest. This is implementation has an improvement over existing code of marking a decrementer interrupt as soft-pending if a timer has expired, rather than setting DEC to a -ve value, which tended to cause host timers to take two interrupts (first hdec to exit the guest, then the immediate dec). 
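In outline (the full version, with the irqs-disabled sanity checks, is in the diff below), the new helper does:

        void timer_rearm_host_dec(u64 now)
        {
                u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);

                if (now >= *next_tb) {
                        /* Timer already expired: mark DEC soft-pending so the
                         * host replays it when interrupts are re-enabled. */
                        local_paca->irq_happened |= PACA_IRQ_DEC;
                } else if (*next_tb - now <= decrementer_max) {
                        set_dec_or_work(*next_tb - now);
                }
        }

and the KVM P9 exit path simply calls timer_rearm_host_dec(tb) instead of programming DEC itself.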
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 16 +++------- arch/powerpc/kernel/time.c | 52 +++++++++++++++++++++++++++------ arch/powerpc/kvm/book3s_hv.c | 7 ++--- 3 files changed, 49 insertions(+), 26 deletions(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index 69b6be617772..924b2157882f 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -99,18 +99,6 @@ extern void div128_by_32(u64 dividend_high, u64 dividend_low, extern void secondary_cpu_time_init(void); extern void __init time_init(void); -#ifdef CONFIG_PPC64 -static inline unsigned long test_irq_work_pending(void) -{ - unsigned long x; - - asm volatile("lbz %0,%1(13)" - : "=r" (x) - : "i" (offsetof(struct paca_struct, irq_work_pending))); - return x; -} -#endif - DECLARE_PER_CPU(u64, decrementers_next_tb); static inline u64 timer_get_next_tb(void) @@ -118,6 +106,10 @@ static inline u64 timer_get_next_tb(void) return __this_cpu_read(decrementers_next_tb); } +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +void timer_rearm_host_dec(u64 now); +#endif + /* Convert timebase ticks to nanoseconds */ unsigned long long tb_to_ns(unsigned long long tb_ticks); diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 72d872b49167..016828b7401b 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -499,6 +499,16 @@ EXPORT_SYMBOL(profile_pc); * 64-bit uses a byte in the PACA, 32-bit uses a per-cpu variable... */ #ifdef CONFIG_PPC64 +static inline unsigned long test_irq_work_pending(void) +{ + unsigned long x; + + asm volatile("lbz %0,%1(13)" + : "=r" (x) + : "i" (offsetof(struct paca_struct, irq_work_pending))); + return x; +} + static inline void set_irq_work_pending_flag(void) { asm volatile("stb %0,%1(13)" : : @@ -542,13 +552,44 @@ void arch_irq_work_raise(void) preempt_enable(); } +static void set_dec_or_work(u64 val) +{ + set_dec(val); + /* We may have raced with new irq work */ + if (unlikely(test_irq_work_pending())) + set_dec(1); +} + #else /* CONFIG_IRQ_WORK */ #define test_irq_work_pending() 0 #define clear_irq_work_pending() +static void set_dec_or_work(u64 val) +{ + set_dec(val); +} #endif /* CONFIG_IRQ_WORK */ +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +void timer_rearm_host_dec(u64 now) +{ + u64 *next_tb = this_cpu_ptr(&decrementers_next_tb); + + WARN_ON_ONCE(!arch_irqs_disabled()); + WARN_ON_ONCE(mfmsr() & MSR_EE); + + if (now >= *next_tb) { + local_paca->irq_happened |= PACA_IRQ_DEC; + } else { + now = *next_tb - now; + if (now <= decrementer_max) + set_dec_or_work(now); + } +} +EXPORT_SYMBOL_GPL(timer_rearm_host_dec); +#endif + /* * timer_interrupt - gets called when the decrementer overflows, * with interrupts disabled. 
@@ -609,10 +650,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt) } else { now = *next_tb - now; if (now <= decrementer_max) - set_dec(now); - /* We may have raced with new irq work */ - if (test_irq_work_pending()) - set_dec(1); + set_dec_or_work(now); __this_cpu_inc(irq_stat.timer_irqs_others); } @@ -854,11 +892,7 @@ static int decrementer_set_next_event(unsigned long evt, struct clock_event_device *dev) { __this_cpu_write(decrementers_next_tb, get_tb() + evt); - set_dec(evt); - - /* We may have raced with new irq work */ - if (test_irq_work_pending()) - set_dec(1); + set_dec_or_work(evt); return 0; } diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 6e6cfb10e9bb..0cef578930f9 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4018,11 +4018,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - next_timer = timer_get_next_tb(); - set_dec(next_timer - tb); - /* We may have raced with new irq work */ - if (test_irq_work_pending()) - set_dec(1); + timer_rearm_host_dec(tb); + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); kvmhv_load_host_pmu(); From patchwork Mon Jul 26 03:49:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509714 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=XJpB6Q5O; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5ZD3Ykzz9sxS for ; Mon, 26 Jul 2021 13:51:16 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231517AbhGZDKq (ORCPT ); Sun, 25 Jul 2021 23:10:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDKp (ORCPT ); Sun, 25 Jul 2021 23:10:45 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 68CA7C061757 for ; Sun, 25 Jul 2021 20:51:14 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id gv20-20020a17090b11d4b0290173b9578f1cso11984362pjb.0 for ; Sun, 25 Jul 2021 20:51:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=nHXorjUusC91+jyzJOv3GiElWAcAvxrDfngAVt6HQtc=; b=XJpB6Q5OVySTQUMREJ36E7xsrRDoAhmruwwzWwAmOkgS62otiHnGIeNEqF5jvtcteA lkMqHfOflQoYtWHUxMcxEH1ehmbiqfDJrpZbkqo34eFlre+hi3qHGZya1riGmLphf3zW Wl1/CKIFX9rUj+rhQJC4eTaLnuI66TPWTgXr1J/HIJHZFbAPZrmrDbdpP3P1TVqGxJ04 CezxFJSNq/SdHEI0owOXHJHbfnrzEurtDXN+mLfEViAFNa5cDvvPemf4tL9gHvoOhhOJ FMWhbkFtbOkbEqckP4nUnarVJQpErOeKDPPISkrPVhNbQc6Ub/ezfBIaQbezJvXlPvBG 2y/Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=nHXorjUusC91+jyzJOv3GiElWAcAvxrDfngAVt6HQtc=; b=GZ0SP0Ge2ud1fd7N8ymGbn9plHiWnULObQf1pcESk1jBZ7KMvnws198n7ju7Fvsz9d s1kkPVi0p1y6REQFQwj1S6Ukw2Z4L3hRl6a7I634IHkGyqpPcfOtRK6GCFaMunWkPCXd eLDxXtQvRpgygH2gu2AaIKQMPiwkE/z8rqF6V1CVj8wh3fUWAR+QXJrZLsQiSXiSdz80 QmrhXmeumFWOAkq3mwo3BNhSr4mYDh5bWP9d7e0X4fn7exkUXN10y9MpPipxImNhWycu nNyALqqirFD9GSllvvjNB/EIgSmKbSMPWITCAgk1zHjpzRB/1VJQTMetq1cix/22SLUt PysQ== X-Gm-Message-State: AOAM532BqsAduT00tHA3Ratfsr/IpCXpdHHcVSUr4x5RR/GP5HRjh4Sf A8N2sjghLTcSTsbhzZL+HGqnwRSZ6C8= X-Google-Smtp-Source: ABdhPJwGI8F7bs93/D8S+wnBK4/3Kk44t+pq4eUUgGYplkQR8R/Ji5RnzvPuewdwOYF/YWdvm83CNw== X-Received: by 2002:a17:90a:3048:: with SMTP id q8mr3507857pjl.120.1627271473907; Sun, 25 Jul 2021 20:51:13 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:13 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v1 12/55] KVM: PPC: Book3S HV: POWER10 enable HAIL when running radix guests Date: Mon, 26 Jul 2021 13:49:53 +1000 Message-Id: <20210726035036.739609-13-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org HV interrupts may be taken with the MMU enabled when radix guests are running. Enable LPCR[HAIL] on ISA v3.1 processors for radix guests. Make this depend on the host LPCR[HAIL] being enabled. Currently that is always enabled, but having this test means any issue that might require LPCR[HAIL] to be disabled in the host will not have to be duplicated in KVM. 
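The gating applied when a VM is created with, or switched to, the radix MMU is, in short (as in the hunks below):

        if (cpu_has_feature(CPU_FTR_ARCH_31)) {
                lpcr_mask |= LPCR_HAIL;
                if (cpu_has_feature(CPU_FTR_HVMODE) &&
                    (kvm->arch.host_lpcr & LPCR_HAIL))
                        lpcr |= LPCR_HAIL;
        }

so HPT guests keep HAIL clear, and radix guests only get it when the host itself runs with HAIL set.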
-1380 cycles on P10 NULL hcall entry+exit Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 29 +++++++++++++++++++++++++---- 1 file changed, 25 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 0cef578930f9..e7f8cc04944b 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -5004,6 +5004,8 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu) */ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm) { + unsigned long lpcr, lpcr_mask; + if (nesting_enabled(kvm)) kvmhv_release_all_nested(kvm); kvmppc_rmap_reset(kvm); @@ -5013,8 +5015,13 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm) kvm->arch.radix = 0; spin_unlock(&kvm->mmu_lock); kvmppc_free_radix(kvm); - kvmppc_update_lpcr(kvm, LPCR_VPM1, - LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR); + + lpcr = LPCR_VPM1; + lpcr_mask = LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR; + if (cpu_has_feature(CPU_FTR_ARCH_31)) + lpcr_mask |= LPCR_HAIL; + kvmppc_update_lpcr(kvm, lpcr, lpcr_mask); + return 0; } @@ -5024,6 +5031,7 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm) */ int kvmppc_switch_mmu_to_radix(struct kvm *kvm) { + unsigned long lpcr, lpcr_mask; int err; err = kvmppc_init_vm_radix(kvm); @@ -5035,8 +5043,17 @@ int kvmppc_switch_mmu_to_radix(struct kvm *kvm) kvm->arch.radix = 1; spin_unlock(&kvm->mmu_lock); kvmppc_free_hpt(&kvm->arch.hpt); - kvmppc_update_lpcr(kvm, LPCR_UPRT | LPCR_GTSE | LPCR_HR, - LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR); + + lpcr = LPCR_UPRT | LPCR_GTSE | LPCR_HR; + lpcr_mask = LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + lpcr_mask |= LPCR_HAIL; + if (cpu_has_feature(CPU_FTR_HVMODE) && + (kvm->arch.host_lpcr & LPCR_HAIL)) + lpcr |= LPCR_HAIL; + } + kvmppc_update_lpcr(kvm, lpcr, lpcr_mask); + return 0; } @@ -5200,6 +5217,10 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm) kvm->arch.mmu_ready = 1; lpcr &= ~LPCR_VPM1; lpcr |= LPCR_UPRT | LPCR_GTSE | LPCR_HR; + if (cpu_has_feature(CPU_FTR_HVMODE) && + cpu_has_feature(CPU_FTR_ARCH_31) && + (kvm->arch.host_lpcr & LPCR_HAIL)) + lpcr |= LPCR_HAIL; ret = kvmppc_init_vm_radix(kvm); if (ret) { kvmppc_free_lpid(kvm->arch.lpid); From patchwork Mon Jul 26 03:49:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509717 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=g3EmeckY; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5ZQ622bz9t6g for ; Mon, 26 Jul 2021 13:51:26 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230321AbhGZDKz (ORCPT ); Sun, 25 Jul 2021 23:10:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53134 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231529AbhGZDKy (ORCPT ); Sun, 25 Jul 2021 23:10:54 -0400 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 098AAC061757 for ; Sun, 25 Jul 2021 20:51:17 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id c16so4439020plh.7 for ; Sun, 25 Jul 2021 20:51:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=iWjyTSlQNF6ZmccM9wHFHsF0l3hw0FGKI/julB9LAE4=; b=g3EmeckYA66BwYW2N63mT4UBIVOqDLpmhSZhTFX6wSVr9wOiHD1zSJE4q40h6XhsN2 k95y680XeNZJo5oz06xNwpEPJ2TWPENhfC7u+lEYZ1frn6XabWS3X6TDtTgRPIwmo2xQ fOvVV6iuPVRsOpb1hnto6yDyIeYqkWZ9oKr+WkKfzxOOUvPQ50tnQ833IemjphtCSln4 h2qu+o2vXdQndMh4yBiEQJClCYPaQI0XUs4KA4ct77epqYQrO+qNqC+q24SRlsfu9UYx SjSlmKnz6QyFNvmbO7r8OhVocWmPsNvgKoQQfGZoOWqD3CE/kj+hCWH4SMP9CBwYDVK3 /+ig== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iWjyTSlQNF6ZmccM9wHFHsF0l3hw0FGKI/julB9LAE4=; b=R6LZwO23B55PTg0pkwa579upRH/yuXPaXeDcac6CPbjgow8REEidMHzknFE+axnds/ 3aSlfXTRFVxRc57RAiYuXHdwsL+Z2Ml4Cqsspj7Ex4cM1jchfuuWgnR5nb+mHjXfee9K GydrT4F1uhtuoFQL/w0MzcxC87iyYygvdVgsfynZhKtJP91Rb61xbRfND/FmxGuWrZw7 QC7kqZJd6X9viVpxjNV6SvgSVgOyIvP2aiHWP1UUUZkP5BvfL+jLmaJi3zgHF278+RJc mbs9vHh0na3I6ceYC0KPlzx65GpgZcO5sJnG/MUvG5z6B7/8EdmF/0IRwieJ2Gh6Im6u /FOA== X-Gm-Message-State: AOAM530urj6EzCLonTTPkIJ1iASIqv9KXmRUbPHDPPAoCVy7FTF9XY79 YTYlezyBUi+Brm63ifYk3P7lC1h7gTU= X-Google-Smtp-Source: ABdhPJyEEuAV7WyIqY8/vzugZp5TK9pjx2056cC9UIzBR3iATW4hubwtOpb35rytXBA5yFxRZvp0YQ== X-Received: by 2002:a05:6a00:158e:b029:32b:9de5:a199 with SMTP id u14-20020a056a00158eb029032b9de5a199mr15566647pfk.76.1627271476466; Sun, 25 Jul 2021 20:51:16 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:16 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v1 13/55] powerpc/64s: Keep AMOR SPR a constant ~0 at runtime Date: Mon, 26 Jul 2021 13:49:54 +1000 Message-Id: <20210726035036.739609-14-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This register controls supervisor SPR modifications, and as such is only relevant for KVM. KVM always sets AMOR to ~0 on guest entry, and never restores it coming back out to the host, so it can be kept constant and avoid the mtSPR in KVM guest entry. 
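The invariant is established once per CPU at setup/restore time and on idle wakeup, so the guest entry paths can drop their mtspr. Sketch of the per-CPU setup sequence touched below:

        mtspr(SPRN_LPID, 0);
        mtspr(SPRN_AMOR, ~0);   /* AMOR stays all-ones from here on */
        mtspr(SPRN_PCR, PCR_MASK);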
-21 cycles (9116) cycles POWER9 virt-mode NULL hcall Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/cpu_setup_power.c | 8 ++++++++ arch/powerpc/kernel/dt_cpu_ftrs.c | 2 ++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 2 -- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 2 -- arch/powerpc/mm/book3s64/radix_pgtable.c | 15 --------------- arch/powerpc/platforms/powernv/idle.c | 8 +++----- 6 files changed, 13 insertions(+), 24 deletions(-) diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c index 3cca88ee96d7..a29dc8326622 100644 --- a/arch/powerpc/kernel/cpu_setup_power.c +++ b/arch/powerpc/kernel/cpu_setup_power.c @@ -137,6 +137,7 @@ void __setup_cpu_power7(unsigned long offset, struct cpu_spec *t) return; mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH); } @@ -150,6 +151,7 @@ void __restore_cpu_power7(void) return; mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH); } @@ -164,6 +166,7 @@ void __setup_cpu_power8(unsigned long offset, struct cpu_spec *t) return; mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */ init_HFSCR(); @@ -184,6 +187,7 @@ void __restore_cpu_power8(void) return; mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */ init_HFSCR(); @@ -202,6 +206,7 @@ void __setup_cpu_power9(unsigned long offset, struct cpu_spec *t) mtspr(SPRN_PSSCR, 0); mtspr(SPRN_LPID, 0); mtspr(SPRN_PID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0); @@ -223,6 +228,7 @@ void __restore_cpu_power9(void) mtspr(SPRN_PSSCR, 0); mtspr(SPRN_LPID, 0); mtspr(SPRN_PID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0); @@ -242,6 +248,7 @@ void __setup_cpu_power10(unsigned long offset, struct cpu_spec *t) mtspr(SPRN_PSSCR, 0); mtspr(SPRN_LPID, 0); mtspr(SPRN_PID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0); @@ -264,6 +271,7 @@ void __restore_cpu_power10(void) mtspr(SPRN_PSSCR, 0); mtspr(SPRN_LPID, 0); mtspr(SPRN_PID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0); diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c index af95f337e54b..38ea20fadc4a 100644 --- a/arch/powerpc/kernel/dt_cpu_ftrs.c +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c @@ -80,6 +80,7 @@ static void __restore_cpu_cpufeatures(void) mtspr(SPRN_LPCR, system_registers.lpcr); if (hv_mode) { mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_HFSCR, system_registers.hfscr); mtspr(SPRN_PCR, system_registers.pcr); } @@ -216,6 +217,7 @@ static int __init feat_enable_hv(struct dt_cpu_feature *f) } mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); lpcr = mfspr(SPRN_LPCR); lpcr &= ~LPCR_LPES0; /* HV external interrupts */ diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 
bd8cf0a65ce8..a7f63082b4e3 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -286,8 +286,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2); mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3); - mtspr(SPRN_AMOR, ~0UL); - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9; /* diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 75079397c2a5..9021052f1579 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -772,10 +772,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) /* Restore AMR and UAMOR, set AMOR to all 1s */ ld r5,VCPU_AMR(r4) ld r6,VCPU_UAMOR(r4) - li r7,-1 mtspr SPRN_AMR,r5 mtspr SPRN_UAMOR,r6 - mtspr SPRN_AMOR,r7 /* Restore state of CTRL run bit; assume 1 on entry */ lwz r5,VCPU_CTRL(r4) diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c index e50ddf129c15..5aebd70ef66a 100644 --- a/arch/powerpc/mm/book3s64/radix_pgtable.c +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c @@ -572,18 +572,6 @@ void __init radix__early_init_devtree(void) return; } -static void radix_init_amor(void) -{ - /* - * In HV mode, we init AMOR (Authority Mask Override Register) so that - * the hypervisor and guest can setup IAMR (Instruction Authority Mask - * Register), enable key 0 and set it to 1. - * - * AMOR = 0b1100 .... 0000 (Mask for key 0 is 11) - */ - mtspr(SPRN_AMOR, (3ul << 62)); -} - void __init radix__early_init_mmu(void) { unsigned long lpcr; @@ -644,7 +632,6 @@ void __init radix__early_init_mmu(void) lpcr = mfspr(SPRN_LPCR); mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR); radix_init_partition_table(); - radix_init_amor(); } else { radix_init_pseries(); } @@ -668,8 +655,6 @@ void radix__early_init_mmu_secondary(void) set_ptcr_when_no_uv(__pa(partition_tb) | (PATB_SIZE_SHIFT - 12)); - - radix_init_amor(); } radix__switch_mmu_context(NULL, &init_mm); diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c index df19e2ff9d3c..721ac4f7e2d1 100644 --- a/arch/powerpc/platforms/powernv/idle.c +++ b/arch/powerpc/platforms/powernv/idle.c @@ -306,8 +306,8 @@ struct p7_sprs { /* per thread SPRs that get lost in shallow states */ u64 amr; u64 iamr; - u64 amor; u64 uamor; + /* amor is restored to constant ~0 */ }; static unsigned long power7_idle_insn(unsigned long type) @@ -378,7 +378,6 @@ static unsigned long power7_idle_insn(unsigned long type) if (cpu_has_feature(CPU_FTR_ARCH_207S)) { sprs.amr = mfspr(SPRN_AMR); sprs.iamr = mfspr(SPRN_IAMR); - sprs.amor = mfspr(SPRN_AMOR); sprs.uamor = mfspr(SPRN_UAMOR); } @@ -397,7 +396,7 @@ static unsigned long power7_idle_insn(unsigned long type) */ mtspr(SPRN_AMR, sprs.amr); mtspr(SPRN_IAMR, sprs.iamr); - mtspr(SPRN_AMOR, sprs.amor); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_UAMOR, sprs.uamor); } } @@ -687,7 +686,6 @@ static unsigned long power9_idle_stop(unsigned long psscr) sprs.amr = mfspr(SPRN_AMR); sprs.iamr = mfspr(SPRN_IAMR); - sprs.amor = mfspr(SPRN_AMOR); sprs.uamor = mfspr(SPRN_UAMOR); srr1 = isa300_idle_stop_mayloss(psscr); /* go idle */ @@ -708,7 +706,7 @@ static unsigned long power9_idle_stop(unsigned long psscr) */ mtspr(SPRN_AMR, sprs.amr); mtspr(SPRN_IAMR, sprs.iamr); - mtspr(SPRN_AMOR, sprs.amor); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_UAMOR, sprs.uamor); /* From patchwork Mon Jul 26 03:49:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509715 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=MHfKxkF5; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5ZP2KDzz9snk for ; Mon, 26 Jul 2021 13:51:25 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231533AbhGZDKz (ORCPT ); Sun, 25 Jul 2021 23:10:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53144 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDKy (ORCPT ); Sun, 25 Jul 2021 23:10:54 -0400 Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2C990C061760 for ; Sun, 25 Jul 2021 20:51:19 -0700 (PDT) Received: by mail-pl1-x631.google.com with SMTP id n10so2352931plc.2 for ; Sun, 25 Jul 2021 20:51:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=shePQEuLHDgx7Ejl/VppXX+u34JQZkGKzR8cfPeQyKk=; b=MHfKxkF55s08XB75KxHKhdcaY5/fpLsPhhoMr0Wk4mmzkck0NIX6A04iWbCL0qy4vP vqyNVPXL3ULhDFHj7hjhE1K8DRg/EINisFXAdmuCcWea/3JG5XZHxdA8hbdAOjdm96ml T5rnyPRHKdCaCyDuFK8tHsQQ9DOFM6KjWsu3dXYei/FM1rfUw4+TaBCwwIZUBxWZqKjx zRraE7uZB4gm+cZ5wpg/03GfZsLWJJuKzwolXfW32s16b8ogttyx5LcPhVFUN7ZVOaZ0 VshU77xzlPiwpi5A8qXEzjNCJZebgM+qab78CAm33neuN5ixq0nxVsLDbZYcpFDUgUlh 4+3g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=shePQEuLHDgx7Ejl/VppXX+u34JQZkGKzR8cfPeQyKk=; b=fdMsHe0wUY7EfPRug/bXAaPlf9xJ7b8Nvow1u3H+dkBcXIOqiXfzXCE2PNPPI+QVAj 1oFfEPO/yU4JSV4bUdElvBHO9vNzmK8SpGvdUJK3x0+v1RJEXq6FVKJ0n7sOtAoaml91 JmYkJNX6S0zJrD1CbiP2/bQHVToUpScHs0EqoDRfUMD3beBAak7C2bMJRyuh527of65H Df0r7/s6Q95IhVW2x9ScoNq79azv2DJSJ99rMdP2si3+1wCrqWG3AM0J+cw1FdDLv+Tc 45OUVbkI+6Bvg5qn1GYG5o8NhHyNcMbLsz3irEWUD4RqoPKF6Dkctr1OgnRQdXnCzj3i VOBQ== X-Gm-Message-State: AOAM5336HeKcAm5RD/8MOP4Mf/pLa9Im+vMxKW2eoceVQawi16EZcxyi Zy46gLQbaDKDMskfzou6oX8qZeffhH0= X-Google-Smtp-Source: ABdhPJxukfXi22mn2P1qUJcY5oYKgsPpb+58bBenCrGHt4BbANGyK8mJ1f5SFPmFlcI5ZbBGxNxY+w== X-Received: by 2002:a17:902:c20c:b029:12a:edee:a7fa with SMTP id 12-20020a170902c20cb029012aedeea7famr13124967pll.2.1627271478693; Sun, 25 Jul 2021 20:51:18 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:18 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 14/55] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting Date: Mon, 26 Jul 2021 13:49:55 +1000 Message-Id: <20210726035036.739609-15-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Revert the workaround added by commit 63279eeb7f93a ("KVM: PPC: Book3S HV: Always save guest pmu for guest capable of nesting"). Nested capable guests running with the earlier commit ("KVM: PPC: Book3S HV Nested: Indicate guest PMU in-use in VPA") will now indicate the PMU in-use status of their guests, which means the parent does not need to unconditionally save the PMU for nested capable guests. This will cause the PMU to break for nested guests when running older nested hypervisor guests under a kernel with this change. It's unclear there's an easy way to avoid that, so this could wait for a release or so for the fix to filter into stable kernels. -134 cycles (8982) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index e7f8cc04944b..ab89db561c85 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4003,8 +4003,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.vpa.dirty = 1; save_pmu = lp->pmcregs_in_use; } - /* Must save pmu if this guest is capable of running nested guests */ - save_pmu |= nesting_enabled(vcpu->kvm); kvmhv_save_guest_pmu(vcpu, save_pmu); #ifdef CONFIG_PPC_PSERIES From patchwork Mon Jul 26 03:49:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509716 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=MrnqtCfO; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5ZP5TBHz9t0T for ; Mon, 26 Jul 2021 13:51:25 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231536AbhGZDKz (ORCPT ); Sun, 25 Jul 2021 23:10:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53158 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230321AbhGZDKy (ORCPT ); Sun, 25 Jul 2021 23:10:54 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8C475C061764 for ; Sun, 25 Jul 2021 20:51:21 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id q17-20020a17090a2e11b02901757deaf2c8so12488187pjd.0 
for ; Sun, 25 Jul 2021 20:51:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=RUY4DjhoVcwh8vw/6xuD0mzIiPpFrZzyIYMqsoNMJh4=; b=MrnqtCfOgs+2lfNq5vjfN03YUCQ80Z5XjAbdu7GXoNslUaPkTt9YcHIAEeP8u13s1q KicdBW6jZIbfzLI6tf00YeDcUfHAe+7n9oSPWfFDuXEhs/6jp63syOVVHgjURoS9jsFo f0+ASg5h2+u7ZEkK+NmLx3L6VRruJ3DRipafFdxETRAs6enf6AuLqXVg2dCXuTh+GNiH bWgGTnXJfESeG6VfGCizzM69vNKE9jowHfjxSF4nhdiyj3q9RI+iclc9zrcBBC615ceS aONS9zhY+fMP1zKcyt9ngJoEd5MLB1ka0njsyl8kNSYJvxzEis2bnGBLCEYTBq3KV2vU mKbg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=RUY4DjhoVcwh8vw/6xuD0mzIiPpFrZzyIYMqsoNMJh4=; b=H9J0mJpN6Tz0dm4kGROAmENDWW4kcuIemjuIvDyhpK/DhxABDUk2qF+AWSW1k1zeAz VUTGO5wbvDainddklhJj+KWsAoki0Gl1UzaWJ+4LNDY3l7niUs+vz/XlqWx5G4H+m1Qz 0q46yaTf11p2eU/FxaaY85sftuR6FHd8kKKmrhmStMRYDZPm58OibIFEZp4VcDQD6xS8 r1NLWpXlcEtkV02Y44+KdS8UPWeJXp8uCZDSBmQrAjjy/GAnvjnBad+EnxcPyeqVpM75 6q/nqpq9ptN5qQtAiZVakTEdS+ftyrjjgULbBWhJaHAZxG7QO/A8zcbtnYgHWSUOsmD2 bGKA== X-Gm-Message-State: AOAM533NLeguqcwOYajN4hr5QK2SAa2L0EXml0/f1MY35KG+DTnIYWsV v6KMh/R0dHFIBGt+wqzBfHf18LbmyFM= X-Google-Smtp-Source: ABdhPJxrwfLnqWKH2gIYKRX/ffRt6PcNWY15ekXVP+YAfRQvKiVKQGSJV9ZzXL8DItYzdaoU+5Dwhg== X-Received: by 2002:a63:ae48:: with SMTP id e8mr16599544pgp.0.1627271481104; Sun, 25 Jul 2021 20:51:21 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:20 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 15/55] powerpc/64s: Always set PMU control registers to frozen/disabled when not in use Date: Mon, 26 Jul 2021 13:49:56 +1000 Message-Id: <20210726035036.739609-16-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org KVM PMU management code looks for particular frozen/disabled bits in the PMU registers so it knows whether it must clear them when coming out of a guest or not. Setting this up helps KVM make these optimisations without getting confused. Longer term the better approach might be to move guest/host PMU switching to the perf subsystem. 
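Concretely, the "not in use" state written at boot (and mirrored into a vcpu's initial PMU register values) is MMCR0[FC] set, plus MMCR0[PMCCEXT] and MMCRA[BHRB_DISABLE] on ISA v3.1, as in the init_PMU_ISA31() hunk below:

        mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
        mtspr(SPRN_MMCR0, MMCR0_FC | MMCR0_PMCCEXT);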
Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/cpu_setup_power.c | 4 ++-- arch/powerpc/kernel/dt_cpu_ftrs.c | 6 +++--- arch/powerpc/kvm/book3s_hv.c | 5 +++++ 3 files changed, 10 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c index a29dc8326622..3dc61e203f37 100644 --- a/arch/powerpc/kernel/cpu_setup_power.c +++ b/arch/powerpc/kernel/cpu_setup_power.c @@ -109,7 +109,7 @@ static void init_PMU_HV_ISA207(void) static void init_PMU(void) { mtspr(SPRN_MMCRA, 0); - mtspr(SPRN_MMCR0, 0); + mtspr(SPRN_MMCR0, MMCR0_FC); mtspr(SPRN_MMCR1, 0); mtspr(SPRN_MMCR2, 0); } @@ -123,7 +123,7 @@ static void init_PMU_ISA31(void) { mtspr(SPRN_MMCR3, 0); mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE); - mtspr(SPRN_MMCR0, MMCR0_PMCCEXT); + mtspr(SPRN_MMCR0, MMCR0_FC | MMCR0_PMCCEXT); } /* diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c index 38ea20fadc4a..a6bb0ee179cd 100644 --- a/arch/powerpc/kernel/dt_cpu_ftrs.c +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c @@ -353,7 +353,7 @@ static void init_pmu_power8(void) } mtspr(SPRN_MMCRA, 0); - mtspr(SPRN_MMCR0, 0); + mtspr(SPRN_MMCR0, MMCR0_FC); mtspr(SPRN_MMCR1, 0); mtspr(SPRN_MMCR2, 0); mtspr(SPRN_MMCRS, 0); @@ -392,7 +392,7 @@ static void init_pmu_power9(void) mtspr(SPRN_MMCRC, 0); mtspr(SPRN_MMCRA, 0); - mtspr(SPRN_MMCR0, 0); + mtspr(SPRN_MMCR0, MMCR0_FC); mtspr(SPRN_MMCR1, 0); mtspr(SPRN_MMCR2, 0); } @@ -428,7 +428,7 @@ static void init_pmu_power10(void) mtspr(SPRN_MMCR3, 0); mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE); - mtspr(SPRN_MMCR0, MMCR0_PMCCEXT); + mtspr(SPRN_MMCR0, MMCR0_FC | MMCR0_PMCCEXT); } static int __init feat_enable_pmu_power10(struct dt_cpu_feature *f) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index ab89db561c85..2eef708c4354 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -2691,6 +2691,11 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) #endif #endif vcpu->arch.mmcr[0] = MMCR0_FC; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + vcpu->arch.mmcr[0] |= MMCR0_PMCCEXT; + vcpu->arch.mmcra = MMCRA_BHRB_DISABLE; + } + vcpu->arch.ctrl = CTRL_RUNLATCH; /* default to host PVR, since we can't spoof it */ kvmppc_set_pvr_hv(vcpu, mfspr(SPRN_PVR)); From patchwork Mon Jul 26 03:49:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509718 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=q80DHRZ+; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5ZR5n0Fz9t0T for ; Mon, 26 Jul 2021 13:51:27 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231529AbhGZDK4 (ORCPT ); Sun, 25 Jul 2021 23:10:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53166 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDKz (ORCPT ); Sun, 25 Jul 2021 23:10:55 -0400 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com 
[IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DBA23C061765 for ; Sun, 25 Jul 2021 20:51:23 -0700 (PDT) Received: by mail-pl1-x62e.google.com with SMTP id d17so9978944plh.10 for ; Sun, 25 Jul 2021 20:51:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=7DsHP7bEgsQeWVqOdKD4J0H7e5jZZDtouJKkt2bv9y4=; b=q80DHRZ+j/5oGFd8p2r+7t8/6URh2MCVRukwkrr2FzTHLY8A+Wb4PzUUB4J6jiy+QQ YtZztDSSuGAVZDm+XbyiyhTd+6s+r/DY2u8vbCpWxZ5x+Pl7FevjgYigP3Ueo3CvqGyk dV1CXHddFIvgDfoZ3eASIqmrJjux3ZweQNo7DGKUGBXiJpWX8m7k8vigYc7m9+ulAIGM g7l+2RYNBg2L+F+YI7oVxoXvuAL5uucY7l44KRaVl2973NZzevssOJ/xcILM+37OTZ5e mb51zJ+7YYffOWBV/TzukzhczHN5Yc06x885Pwe223fLCPJy7xcBZf5awVU0L8+QJE7+ +xyg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=7DsHP7bEgsQeWVqOdKD4J0H7e5jZZDtouJKkt2bv9y4=; b=KgjNRpDGVhGOeZA0GMGHnTUhqt5OA+usy/SNSJxhFwhAHVuVTM/kOaO+cACGJ11w/A mCjV1gHyY5p9s7UeqML05948/odoB6IsvNj5yBTDO6X3NfzB3CpSW9VME9D/v6OVeBOK gEOUu2RAJKjJo9NnaFP+UdIhphCG8eqMfQnan/8Rbl5HVRPDL/OsHSrQeoeW5Jb0ou2y 9pQ5clGeL8KK59hII+nSuvSAXuafIIyinXHe0/Hra/HJlpokNqxVi/KgVizRiO+XOp32 CnCB/Qkax2/4YRpOwPTCX4VzalbYAa5Y/SCDgX1cHYfIQlq3Z/xEeoPf51QtyI2eczSL nm8Q== X-Gm-Message-State: AOAM531ebt9v1JvA6/CDDPYmJguXgYQKxR7oyjOEIZditmWvHZBKe8p+ H83nGICEYmA5FMZanjJeI3VXE8c4Xnk= X-Google-Smtp-Source: ABdhPJz61NMHUJ9SfTGmWPSMM0Xm/MVj7Ndh+xqpp743u8kDhk9xa9Ov9ISIxt7fs+xvDXooV0LwUg== X-Received: by 2002:aa7:804f:0:b029:334:4951:da88 with SMTP id y15-20020aa7804f0000b02903344951da88mr15789879pfm.29.1627271483402; Sun, 25 Jul 2021 20:51:23 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:23 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option Date: Mon, 26 Jul 2021 13:49:57 +1000 Message-Id: <20210726035036.739609-17-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org It can be useful in simulators (with very constrained environments) to allow some PMCs to run from boot so they can be sampled directly by a test harness, rather than having to run perf. A previous change freezes counters at boot by default, so provide a boot time option to un-freeze (plus a bit more flexibility). Signed-off-by: Nicholas Piggin Reviewed-by: Athira Rajeev --- .../admin-guide/kernel-parameters.txt | 7 ++++ arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++ 2 files changed, 42 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index bdb22006f713..96b7d0ebaa40 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4089,6 +4089,13 @@ Override pmtimer IOPort with a hex value. e.g. pmtmr=0x508 + pmu= [PPC] Manually enable the PMU. + Enable the PMU by setting MMCR0 to 0 (clear FC bit). 
+ This option is implemented for Book3S processors. + If a number is given, then MMCR1 is set to that number, + otherwise (e.g., 'pmu=on'), it is left 0. The perf + subsystem is disabled if this option is used. + pm_debug_messages [SUSPEND,KNL] Enable suspend/resume debug messages during boot up. diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c index 65795cadb475..e7cef4fe17d7 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -2428,8 +2428,24 @@ int register_power_pmu(struct power_pmu *pmu) } #ifdef CONFIG_PPC64 +static bool pmu_override = false; +static unsigned long pmu_override_val; +static void do_pmu_override(void *data) +{ + ppc_set_pmu_inuse(1); + if (pmu_override_val) + mtspr(SPRN_MMCR1, pmu_override_val); + mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC); +} + static int __init init_ppc64_pmu(void) { + if (cpu_has_feature(CPU_FTR_HVMODE) && pmu_override) { + printk(KERN_WARNING "perf: disabling perf due to pmu= command line option.\n"); + on_each_cpu(do_pmu_override, NULL, 1); + return 0; + } + /* run through all the pmu drivers one at a time */ if (!init_power5_pmu()) return 0; @@ -2451,4 +2467,23 @@ static int __init init_ppc64_pmu(void) return init_generic_compat_pmu(); } early_initcall(init_ppc64_pmu); + +static int __init pmu_setup(char *str) +{ + unsigned long val; + + if (!early_cpu_has_feature(CPU_FTR_HVMODE)) + return 0; + + pmu_override = true; + + if (kstrtoul(str, 0, &val)) + val = 0; + + pmu_override_val = val; + + return 1; +} +__setup("pmu=", pmu_setup); + #endif From patchwork Mon Jul 26 03:49:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509719 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=nsWis7NM; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5ZS3zczz9snk for ; Mon, 26 Jul 2021 13:51:28 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231537AbhGZDK6 (ORCPT ); Sun, 25 Jul 2021 23:10:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53176 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDK6 (ORCPT ); Sun, 25 Jul 2021 23:10:58 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 32086C061757 for ; Sun, 25 Jul 2021 20:51:26 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id e21so5526382pla.5 for ; Sun, 25 Jul 2021 20:51:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=nsKjkYQbz8FX95pkXyGd6VAp8RRdGTKp7DBhTCJG5lQ=; b=nsWis7NMvufv63ehzRTu/OTK7XgWV1Han6VRRjTjINaapQMrWbI9NxreQ6BhtE/+fd nJO2DV/8h1zP9JQWahBsgy/9Ntk/ZvvyMLdWFuEtEg+3R0u+e/OKS+PB7cfDmQaQnH/f rOj7jxOKwACIPRAyDE/55d1McfIqL6vQKkcYj+5Eoefd0MD82qiTm6j/Zu+hiZ9wfOCE 
mn9Nri6dpsBqPh4sApjeQeEcp7uN9InWWy6yznOVcx5ZvvK9X9r1MpZYbTa9YF53ClAi 01c7OTb4Vc6JqzyGEUQDCEEd3R4fvMf6bBY2FOE49BzF0p+HGZ+wLEeOTlwrMWMsgbK6 SQxw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=nsKjkYQbz8FX95pkXyGd6VAp8RRdGTKp7DBhTCJG5lQ=; b=PsStlZUol4cx370HbNOZ0UKHFwvEnXr7kvkTWu0SWwf/uf/DpxoaUhfRcwmoGRrdo8 7Xbil+WV9QHqYCMV8oFQVEvY/78z2wdDFTortkn9MPLN9nvmqHmZn9mgDWaEs/83TniH wryTjtVMpigsBAlxlrPUfOigexm8q+jl8qnXJ0TEiTnOav01Lbf11GGd1t6IXpX/7Zv0 HOynYEjTFdXPN4CjZcEQrw62mvaNDWj/Qnz+4eoGt77u3mL0SO4EBrk+fU2Hz63rk1YR 1plD8sLgPe2APA6kZPrnq6E+8XBpEllbm8d02h5yQc55QI8cfP/h2Fk0W7YRxXrSsQ1W XRLw== X-Gm-Message-State: AOAM532FcDxZzqCsrSnpa/hcAic1RQOm2BvEK1BF/hOZfx9GPxjk+vDI r6U6n1VRulub8RHj8jd/CgjWEF//0to= X-Google-Smtp-Source: ABdhPJwqmxOZV1gaiQx0vHb2orssJuxVFLpCfBg3lpsNn6HU2wSP1VnbR5CBiAlgAGkhjRfovIab1Q== X-Received: by 2002:a63:e62:: with SMTP id 34mr10851030pgo.189.1627271485673; Sun, 25 Jul 2021 20:51:25 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:25 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 17/55] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C Date: Mon, 26 Jul 2021 13:49:58 +1000 Message-Id: <20210726035036.739609-18-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Implement the P9 path PMU save/restore code in C, and remove the POWER9/10 code from the P7/8 path assembly. -449 cycles (8533) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin Reviewed-by: Athira Rajeev --- arch/powerpc/include/asm/asm-prototypes.h | 5 - arch/powerpc/kvm/book3s_hv.c | 205 ++++++++++++++++++++-- arch/powerpc/kvm/book3s_hv_interrupts.S | 13 +- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 43 +---- 4 files changed, 200 insertions(+), 66 deletions(-) diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h index 222823861a67..41b8a1e1144a 100644 --- a/arch/powerpc/include/asm/asm-prototypes.h +++ b/arch/powerpc/include/asm/asm-prototypes.h @@ -141,11 +141,6 @@ static inline void kvmppc_restore_tm_hv(struct kvm_vcpu *vcpu, u64 msr, bool preserve_nv) { } #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ -void kvmhv_save_host_pmu(void); -void kvmhv_load_host_pmu(void); -void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use); -void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu); - void kvmppc_p9_enter_guest(struct kvm_vcpu *vcpu); long kvmppc_h_set_dabr(struct kvm_vcpu *vcpu, unsigned long dabr); diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 2eef708c4354..d20b579ddcdf 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3735,6 +3735,188 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) trace_kvmppc_run_core(vc, 1); } +/* + * Privileged (non-hypervisor) host registers to save. 
+ */ +struct p9_host_os_sprs { + unsigned long dscr; + unsigned long tidr; + unsigned long iamr; + unsigned long amr; + unsigned long fscr; + + unsigned int pmc1; + unsigned int pmc2; + unsigned int pmc3; + unsigned int pmc4; + unsigned int pmc5; + unsigned int pmc6; + unsigned long mmcr0; + unsigned long mmcr1; + unsigned long mmcr2; + unsigned long mmcr3; + unsigned long mmcra; + unsigned long siar; + unsigned long sier1; + unsigned long sier2; + unsigned long sier3; + unsigned long sdar; +}; + +static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) +{ + if (!(mmcr0 & MMCR0_FC)) + goto do_freeze; + if (mmcra & MMCRA_SAMPLE_ENABLE) + goto do_freeze; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + if (!(mmcr0 & MMCR0_PMCCEXT)) + goto do_freeze; + if (!(mmcra & MMCRA_BHRB_DISABLE)) + goto do_freeze; + } + return; + +do_freeze: + mmcr0 = MMCR0_FC; + mmcra = 0; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mmcr0 |= MMCR0_PMCCEXT; + mmcra = MMCRA_BHRB_DISABLE; + } + + mtspr(SPRN_MMCR0, mmcr0); + mtspr(SPRN_MMCRA, mmcra); + isync(); +} + +static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) +{ + if (ppc_get_pmu_inuse()) { + /* + * It might be better to put PMU handling (at least for the + * host) in the perf subsystem because it knows more about what + * is being used. + */ + + /* POWER9, POWER10 do not implement HPMC or SPMC */ + + host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0); + host_os_sprs->mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra); + + host_os_sprs->pmc1 = mfspr(SPRN_PMC1); + host_os_sprs->pmc2 = mfspr(SPRN_PMC2); + host_os_sprs->pmc3 = mfspr(SPRN_PMC3); + host_os_sprs->pmc4 = mfspr(SPRN_PMC4); + host_os_sprs->pmc5 = mfspr(SPRN_PMC5); + host_os_sprs->pmc6 = mfspr(SPRN_PMC6); + host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1); + host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2); + host_os_sprs->sdar = mfspr(SPRN_SDAR); + host_os_sprs->siar = mfspr(SPRN_SIAR); + host_os_sprs->sier1 = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3); + host_os_sprs->sier2 = mfspr(SPRN_SIER2); + host_os_sprs->sier3 = mfspr(SPRN_SIER3); + } + } +} + +static void load_p9_guest_pmu(struct kvm_vcpu *vcpu) +{ + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); + mtspr(SPRN_SDAR, vcpu->arch.sdar); + mtspr(SPRN_SIAR, vcpu->arch.siar); + mtspr(SPRN_SIER, vcpu->arch.sier[0]); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); + mtspr(SPRN_SIER2, vcpu->arch.sier[1]); + mtspr(SPRN_SIER3, vcpu->arch.sier[2]); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, vcpu->arch.mmcra); + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); + /* No isync necessary because we're starting counters */ +} + +static void save_p9_guest_pmu(struct kvm_vcpu *vcpu) +{ + struct lppaca *lp; + int save_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + save_pmu = lp->pmcregs_in_use; + + if (save_pmu) { + vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0); + vcpu->arch.mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra); + + vcpu->arch.pmc[0] = mfspr(SPRN_PMC1); + vcpu->arch.pmc[1] = mfspr(SPRN_PMC2); + vcpu->arch.pmc[2] = mfspr(SPRN_PMC3); + vcpu->arch.pmc[3] = mfspr(SPRN_PMC4); + vcpu->arch.pmc[4] = mfspr(SPRN_PMC5); + vcpu->arch.pmc[5] = 
mfspr(SPRN_PMC6); + vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1); + vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2); + vcpu->arch.sdar = mfspr(SPRN_SDAR); + vcpu->arch.siar = mfspr(SPRN_SIAR); + vcpu->arch.sier[0] = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3); + vcpu->arch.sier[1] = mfspr(SPRN_SIER2); + vcpu->arch.sier[2] = mfspr(SPRN_SIER3); + } + } else { + freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); + } +} + +static void load_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) +{ + if (ppc_get_pmu_inuse()) { + mtspr(SPRN_PMC1, host_os_sprs->pmc1); + mtspr(SPRN_PMC2, host_os_sprs->pmc2); + mtspr(SPRN_PMC3, host_os_sprs->pmc3); + mtspr(SPRN_PMC4, host_os_sprs->pmc4); + mtspr(SPRN_PMC5, host_os_sprs->pmc5); + mtspr(SPRN_PMC6, host_os_sprs->pmc6); + mtspr(SPRN_MMCR1, host_os_sprs->mmcr1); + mtspr(SPRN_MMCR2, host_os_sprs->mmcr2); + mtspr(SPRN_SDAR, host_os_sprs->sdar); + mtspr(SPRN_SIAR, host_os_sprs->siar); + mtspr(SPRN_SIER, host_os_sprs->sier1); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, host_os_sprs->mmcr3); + mtspr(SPRN_SIER2, host_os_sprs->sier2); + mtspr(SPRN_SIER3, host_os_sprs->sier3); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, host_os_sprs->mmcra); + mtspr(SPRN_MMCR0, host_os_sprs->mmcr0); + isync(); + } +} + static void load_spr_state(struct kvm_vcpu *vcpu) { mtspr(SPRN_DSCR, vcpu->arch.dscr); @@ -3777,17 +3959,6 @@ static void store_spr_state(struct kvm_vcpu *vcpu) vcpu->arch.dscr = mfspr(SPRN_DSCR); } -/* - * Privileged (non-hypervisor) host registers to save. - */ -struct p9_host_os_sprs { - unsigned long dscr; - unsigned long tidr; - unsigned long iamr; - unsigned long amr; - unsigned long fscr; -}; - static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { host_os_sprs->dscr = mfspr(SPRN_DSCR); @@ -3835,7 +4006,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, struct p9_host_os_sprs host_os_sprs; s64 dec; u64 tb, next_timer; - int trap, save_pmu; + int trap; WARN_ON_ONCE(vcpu->arch.ceded); @@ -3848,7 +4019,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_p9_host_os_sprs(&host_os_sprs); - kvmhv_save_host_pmu(); /* saves it to PACA kvm_hstate */ + save_p9_host_pmu(&host_os_sprs); kvmppc_subcore_enter_guest(); @@ -3878,7 +4049,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, barrier(); } #endif - kvmhv_load_guest_pmu(vcpu); + load_p9_guest_pmu(vcpu); msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); load_fp_state(&vcpu->arch.fp); @@ -4000,16 +4171,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - save_pmu = 1; if (vcpu->arch.vpa.pinned_addr) { struct lppaca *lp = vcpu->arch.vpa.pinned_addr; u32 yield_count = be32_to_cpu(lp->yield_count) + 1; lp->yield_count = cpu_to_be32(yield_count); vcpu->arch.vpa.dirty = 1; - save_pmu = lp->pmcregs_in_use; } - kvmhv_save_guest_pmu(vcpu, save_pmu); + save_p9_guest_pmu(vcpu); #ifdef CONFIG_PPC_PSERIES if (kvmhv_on_pseries()) { barrier(); @@ -4025,7 +4194,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - kvmhv_load_host_pmu(); + load_p9_host_pmu(&host_os_sprs); kvmppc_subcore_exit_guest(); diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S b/arch/powerpc/kvm/book3s_hv_interrupts.S index 4444f83cb133..59d89e4b154a 100644 --- 
a/arch/powerpc/kvm/book3s_hv_interrupts.S +++ b/arch/powerpc/kvm/book3s_hv_interrupts.S @@ -104,7 +104,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) mtlr r0 blr -_GLOBAL(kvmhv_save_host_pmu) +/* + * void kvmhv_save_host_pmu(void) + */ +kvmhv_save_host_pmu: BEGIN_FTR_SECTION /* Work around P8 PMAE bug */ li r3, -1 @@ -138,14 +141,6 @@ BEGIN_FTR_SECTION std r8, HSTATE_MMCR2(r13) std r9, HSTATE_SIER(r13) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - mfspr r5, SPRN_MMCR3 - mfspr r6, SPRN_SIER2 - mfspr r7, SPRN_SIER3 - std r5, HSTATE_MMCR3(r13) - std r6, HSTATE_SIER2(r13) - std r7, HSTATE_SIER3(r13) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) mfspr r3, SPRN_PMC1 mfspr r5, SPRN_PMC2 mfspr r6, SPRN_PMC3 diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 9021052f1579..551ce223b40c 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -2738,10 +2738,11 @@ kvmppc_msr_interrupt: blr /* + * void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu) + * * Load up guest PMU state. R3 points to the vcpu struct. */ -_GLOBAL(kvmhv_load_guest_pmu) -EXPORT_SYMBOL_GPL(kvmhv_load_guest_pmu) +kvmhv_load_guest_pmu: mr r4, r3 mflr r0 li r3, 1 @@ -2775,27 +2776,17 @@ END_FTR_SECTION_IFSET(CPU_FTR_PMAO_BUG) mtspr SPRN_MMCRA, r6 mtspr SPRN_SIAR, r7 mtspr SPRN_SDAR, r8 -BEGIN_FTR_SECTION - ld r5, VCPU_MMCR + 24(r4) - ld r6, VCPU_SIER + 8(r4) - ld r7, VCPU_SIER + 16(r4) - mtspr SPRN_MMCR3, r5 - mtspr SPRN_SIER2, r6 - mtspr SPRN_SIER3, r7 -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) BEGIN_FTR_SECTION ld r5, VCPU_MMCR + 16(r4) ld r6, VCPU_SIER(r4) mtspr SPRN_MMCR2, r5 mtspr SPRN_SIER, r6 -BEGIN_FTR_SECTION_NESTED(96) lwz r7, VCPU_PMC + 24(r4) lwz r8, VCPU_PMC + 28(r4) ld r9, VCPU_MMCRS(r4) mtspr SPRN_SPMC1, r7 mtspr SPRN_SPMC2, r8 mtspr SPRN_MMCRS, r9 -END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) mtspr SPRN_MMCR0, r3 isync @@ -2803,10 +2794,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) blr /* + * void kvmhv_load_host_pmu(void) + * * Reload host PMU state saved in the PACA by kvmhv_save_host_pmu. */ -_GLOBAL(kvmhv_load_host_pmu) -EXPORT_SYMBOL_GPL(kvmhv_load_host_pmu) +kvmhv_load_host_pmu: mflr r0 lbz r4, PACA_PMCINUSE(r13) /* is the host using the PMU? */ cmpwi r4, 0 @@ -2844,25 +2836,18 @@ BEGIN_FTR_SECTION mtspr SPRN_MMCR2, r8 mtspr SPRN_SIER, r9 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - ld r5, HSTATE_MMCR3(r13) - ld r6, HSTATE_SIER2(r13) - ld r7, HSTATE_SIER3(r13) - mtspr SPRN_MMCR3, r5 - mtspr SPRN_SIER2, r6 - mtspr SPRN_SIER3, r7 -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) mtspr SPRN_MMCR0, r3 isync mtlr r0 23: blr /* + * void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use) + * * Save guest PMU state into the vcpu struct. 
* r3 = vcpu, r4 = full save flag (PMU in use flag set in VPA) */ -_GLOBAL(kvmhv_save_guest_pmu) -EXPORT_SYMBOL_GPL(kvmhv_save_guest_pmu) +kvmhv_save_guest_pmu: mr r9, r3 mr r8, r4 BEGIN_FTR_SECTION @@ -2911,14 +2896,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) BEGIN_FTR_SECTION std r10, VCPU_MMCR + 16(r9) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - mfspr r5, SPRN_MMCR3 - mfspr r6, SPRN_SIER2 - mfspr r7, SPRN_SIER3 - std r5, VCPU_MMCR + 24(r9) - std r6, VCPU_SIER + 8(r9) - std r7, VCPU_SIER + 16(r9) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) std r7, VCPU_SIAR(r9) std r8, VCPU_SDAR(r9) mfspr r3, SPRN_PMC1 @@ -2936,7 +2913,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) BEGIN_FTR_SECTION mfspr r5, SPRN_SIER std r5, VCPU_SIER(r9) -BEGIN_FTR_SECTION_NESTED(96) mfspr r6, SPRN_SPMC1 mfspr r7, SPRN_SPMC2 mfspr r8, SPRN_MMCRS @@ -2945,7 +2921,6 @@ BEGIN_FTR_SECTION_NESTED(96) std r8, VCPU_MMCRS(r9) lis r4, 0x8000 mtspr SPRN_MMCRS, r4 -END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 22: blr From patchwork Mon Jul 26 03:49:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509720 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=LQ8LcWXE; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5ZV3bvKz9t25 for ; Mon, 26 Jul 2021 13:51:30 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231538AbhGZDLA (ORCPT ); Sun, 25 Jul 2021 23:11:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDK7 (ORCPT ); Sun, 25 Jul 2021 23:10:59 -0400 Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7F48FC061757 for ; Sun, 25 Jul 2021 20:51:28 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id ds11-20020a17090b08cbb0290172f971883bso17852605pjb.1 for ; Sun, 25 Jul 2021 20:51:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=IPqcBmWn1RQ5ScRaMQkK0PgFL0uqvjQZGq+S5mHe7GY=; b=LQ8LcWXESFN7E7UDjn68MfXx3Rm2glVyKpczCyDBpsllBugNqgH5unWXyTQmcdEhN/ r/Ikvb4S/5WIjBs5EdTt/QrweUp0MF78vQaBBhzOq+kS2nrcUPoPXG8HvIqI3owQ71zN ZYY8jPzHJsXw/JNY4BdUIsQp9e+4JCzqdwd/Hvb8yd1wHelhN/YmkIcjs5g112l8CDx4 ojRjeuecSE7kA2msC0notNCv3V21i5CvHubBMzABd58kK3hEfRCxuA79/XXIDpQ9yjt7 0cDBr5XigENUfhRyJ8wizS3n+nmbsoxWfAfWQfgXoOXvtLeIhxQZboldizEXhQJ/OQD0 VImg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=IPqcBmWn1RQ5ScRaMQkK0PgFL0uqvjQZGq+S5mHe7GY=; b=PqUqoSFA7A99Hb2KkM1y/i1e3T7Nth4Ft66TapSjQ+/pUpHxCs/HX1gt4IXCGMjbv2 
q0a8AxZXsRE4cuWyXmC8tn5N9tdr1g4BkLxMKbjA2gj9ShPBieZydwDg69RKYbaMwNaP F1HKvUsVmtTIFvwd3lpH5qnYqLbI2v0A1elgAT8MYAj0pwdDv6Q15uAVzDgCBz8CG9oc suxrlWiaNBZ9iTM4fxC3omx7ix9ZfbrzJ+n3CiVb12Ks3ZcfPEa3TnomynYh3NVWWLPO sEbBuR68dqHxqTVfJ4CCjfSQGK0zI+rPsJf3hf63GZT2bUdAlNrx5fQr30CuYH4Drgut OpPw== X-Gm-Message-State: AOAM533qxvfU1cYnD9QRpEBiQ3BeWLRCY0YPBUTOcL62x2xGbYrfCEZ9 4UBuYrYkLIWsPD4GhRV/qOtWoMxuT4E= X-Google-Smtp-Source: ABdhPJx9wM3iZBEIyRCfm4LJw+CiX3l99fEWgebiSSIYaGlO+n8yORHHgkzIx2S+of1qLJjr2hNhKg== X-Received: by 2002:a63:1205:: with SMTP id h5mr16044475pgl.204.1627271487880; Sun, 25 Jul 2021 20:51:27 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:27 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 18/55] KVM: PPC: Book3S HV P9: Factor PMU save/load into context switch functions Date: Mon, 26 Jul 2021 13:49:59 +1000 Message-Id: <20210726035036.739609-19-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rather than guest/host save/restore functions, implement context switch functions that take care of details like the VPA update for nested guests. The reason to split this kind of helper into explicit save/load functions is mainly to schedule SPR accesses nicely, but the PMU is a special case: the load requires mtSPR (to stop counters) and has other difficulties, so there is less opportunity to schedule those accesses nicely. The SPR accesses also have side-effects if the PMU is running, and in later changes we keep the host PMU running as long as possible so this code can be better profiled, which further complicates scheduling.
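As a rough illustration of the ordering the new helpers settle on (save host PMU state, publish pmcregs_in_use, load guest PMU state, and the reverse on exit), here is a minimal stand-alone C model. It is not kernel code: pmu_state, hw_pmu and the plain pmcregs_in_use flag are invented for the sketch, and the pseries/nested VPA handling is collapsed into a single boolean; the real helpers operate on SPRs via mfspr/mtspr and on the lppaca field.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the PMU register set; the real code reads/writes SPRs directly. */
struct pmu_state {
        unsigned long mmcr0;
        unsigned long mmcra;
        unsigned int pmc[6];
};

static struct pmu_state host_pmu, guest_pmu, hw_pmu; /* hw_pmu models the hardware registers */
static bool pmcregs_in_use;                          /* models the lppaca pmcregs_in_use byte */

static void switch_pmu_to_guest(void)
{
        host_pmu = hw_pmu;       /* save host state first */
        pmcregs_in_use = true;   /* publish PMU-in-use before loading guest values (simplified) */
        hw_pmu = guest_pmu;      /* then load the guest state */
}

static void switch_pmu_to_host(void)
{
        guest_pmu = hw_pmu;      /* save guest state */
        pmcregs_in_use = false;  /* simplified; the real code restores the host's in-use state */
        hw_pmu = host_pmu;       /* reload the host state */
}

int main(void)
{
        hw_pmu.mmcr0 = 0x1;      /* pretend the host had something loaded */
        guest_pmu.mmcr0 = 0x2;

        switch_pmu_to_guest();
        /* ... guest would run here ... */
        switch_pmu_to_host();

        printf("host MMCR0 back in hardware: %#lx, pmcregs_in_use=%d\n",
               hw_pmu.mmcr0, (int)pmcregs_in_use);
        return 0;
}

Pairing the save and the load in one function is what lets the pmcregs_in_use update sit between them without an extra call site, which is the point of the refactor.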
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 61 +++++++++++++++++------------------- 1 file changed, 28 insertions(+), 33 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index d20b579ddcdf..091b67ef6eba 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3790,7 +3790,8 @@ static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) isync(); } -static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) +static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) { if (ppc_get_pmu_inuse()) { /* @@ -3824,10 +3825,21 @@ static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) host_os_sprs->sier3 = mfspr(SPRN_SIER3); } } -} -static void load_p9_guest_pmu(struct kvm_vcpu *vcpu) -{ +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + if (vcpu->arch.vpa.pinned_addr) { + struct lppaca *lp = vcpu->arch.vpa.pinned_addr; + get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; + } else { + get_lppaca()->pmcregs_in_use = 1; + } + barrier(); + } +#endif + + /* load guest */ mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); @@ -3852,7 +3864,8 @@ static void load_p9_guest_pmu(struct kvm_vcpu *vcpu) /* No isync necessary because we're starting counters */ } -static void save_p9_guest_pmu(struct kvm_vcpu *vcpu) +static void switch_pmu_to_host(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) { struct lppaca *lp; int save_pmu = 1; @@ -3887,10 +3900,15 @@ static void save_p9_guest_pmu(struct kvm_vcpu *vcpu) } else { freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); } -} -static void load_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) -{ +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); + barrier(); + } +#endif + if (ppc_get_pmu_inuse()) { mtspr(SPRN_PMC1, host_os_sprs->pmc1); mtspr(SPRN_PMC2, host_os_sprs->pmc2); @@ -4019,8 +4037,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_p9_host_os_sprs(&host_os_sprs); - save_p9_host_pmu(&host_os_sprs); - kvmppc_subcore_enter_guest(); vc->entry_exit_map = 1; @@ -4037,19 +4053,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); -#ifdef CONFIG_PPC_PSERIES - if (kvmhv_on_pseries()) { - barrier(); - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; - } else { - get_lppaca()->pmcregs_in_use = 1; - } - barrier(); - } -#endif - load_p9_guest_pmu(vcpu); + switch_pmu_to_guest(vcpu, &host_os_sprs); msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); load_fp_state(&vcpu->arch.fp); @@ -4178,14 +4182,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.vpa.dirty = 1; } - save_p9_guest_pmu(vcpu); -#ifdef CONFIG_PPC_PSERIES - if (kvmhv_on_pseries()) { - barrier(); - get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); - barrier(); - } -#endif + switch_pmu_to_host(vcpu, &host_os_sprs); vc->entry_exit_map = 0x101; vc->in_guest = 0; @@ -4194,8 +4191,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - load_p9_host_pmu(&host_os_sprs); - kvmppc_subcore_exit_guest(); return trap; From patchwork Mon Jul 26 03:50:00 2021 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509721 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=oZeer18z; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5ZX6bTGz9t56 for ; Mon, 26 Jul 2021 13:51:32 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231543AbhGZDLC (ORCPT ); Sun, 25 Jul 2021 23:11:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53196 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDLC (ORCPT ); Sun, 25 Jul 2021 23:11:02 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E9EB6C061757 for ; Sun, 25 Jul 2021 20:51:30 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id n10so10140319plf.4 for ; Sun, 25 Jul 2021 20:51:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=wdQy5WVPVHkw2p4fx6chDgQPompALYeXuTxR1zPIu0g=; b=oZeer18zQFuxRKlTNP/mtAFcbCybd6TpntQ97n7+QJR5+pHeL11SRljlWLY1v1/n5I c/5noMQ9ulEvCCE3swtFVbI++fN2Y4CoaCNbW8gBPflze6yVYZnTsrtCQ7BiBE0UtE8F 10rN518OGFbb5N7RpeD9Nyj9q5+W9HZeMjTvkpFapuqxZXHPV+IIF5Mzbdwqrv2O/NgE P/rlRdZhaMpa7A/9QN0JK/RWoM6CqYTQshfWIiW8qVjQJjs3Dt58oFz5EybfUA5HE1Mc 4lyTJK/9rvWfoWyP/IEbx1zgYWhM1nQETzLgWjykRAJTA1CJg7Oqcowlvo+YzUZBWmUr WrQg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=wdQy5WVPVHkw2p4fx6chDgQPompALYeXuTxR1zPIu0g=; b=ODdq5rNcwLSsScEF2XRk7VUUTJ2eSDc3gckCwLJxeDRdI+EYzlZAX17ty5lUK+bVwS DIFegBhWMO+vZUEZg9ZFITgbRCgTALDcXReVLPWcSRusBjz+afeu+oLUjohgTc08c4+t 9wGxBrqGiCCLjbc6sUtVDUBue5j3P/BmvB/REDOC+YT6UL8MIwulR6ZvhZm7FwfhKQnO 0G46UkPY5JUdlmsqlLR+Ozm0b80FDFsttBkj+DZ+qt1R3zT+zvYXlqIug0noVCZqilOq bQIaQFpD5SUlJHRXUDjfYIBRmFMa11qMCVRI0/Jg8MwI1Ll6DSU1aJW77HBiyCaTHiBZ LloQ== X-Gm-Message-State: AOAM532vi60dR8TGMuhiDt/kMyaOk/7Yeg6BhiUIQjjyV3KVc6qmBfBo FBLN9sjd28qnLqxmAq/2SJgqSrh34qs= X-Google-Smtp-Source: ABdhPJyVr+zx0lsxfM8F0WR+H0hYNABFHeXFaPiU/cEkKLnxaPK1BsGkLii2okQDS1CMDBcE92d+Kg== X-Received: by 2002:a65:64cf:: with SMTP id t15mr16109876pgv.131.1627271490277; Sun, 25 Jul 2021 20:51:30 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:30 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 19/55] KVM: PPC: Book3S HV P9: Demand fault PMU SPRs when marked not inuse Date: Mon, 26 Jul 2021 13:50:00 +1000 Message-Id: <20210726035036.739609-20-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The pmcregs_in_use field in the guest VPA can not be trusted to reflect what the guest is doing with PMU SPRs, so the PMU must always be managed (stopped) when exiting the guest, and SPR values set when entering the guest to ensure it can't cause a covert channel or otherwise cause other guests or the host to misbehave. So prevent guest access to the PMU with HFSCR[PM] if pmcregs_in_use is clear, and avoid the PMU SPR access on every partition switch. Guests that set pmcregs_in_use incorrectly or when first setting it and using the PMU will take a hypervisor facility unavailable interrupt that will bring in the PMU SPRs. -774 cycles (7759) cycles POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 1 + arch/powerpc/include/asm/kvm_host.h | 1 + arch/powerpc/kvm/book3s_hv.c | 133 +++++++++++++++++------ arch/powerpc/kvm/book3s_hv_nested.c | 38 +++---- 4 files changed, 119 insertions(+), 54 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index eaf3a562bf1e..df6bed4b2a46 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -39,6 +39,7 @@ struct kvm_nested_guest { pgd_t *shadow_pgtable; /* our page table for this guest */ u64 l1_gr_to_hr; /* L1's addr of part'n-scoped table */ u64 process_table; /* process table entry for this guest */ + u64 hfscr; /* L1's HFSCR */ long refcnt; /* number of pointers to this struct */ struct mutex tlb_lock; /* serialize page faults and tlbies */ struct kvm_nested_guest *next; diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 9f52f282b1aa..aee41edcfe6b 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -804,6 +804,7 @@ struct kvm_vcpu_arch { struct kvmppc_vpa slb_shadow; spinlock_t tbacct_lock; + u64 hfscr_permitted; /* A mask of permitted HFSCR facilities */ u64 busy_stolen; u64 busy_preempt; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 091b67ef6eba..7c75f63648d6 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1421,6 +1421,23 @@ static int kvmppc_emulate_doorbell_instr(struct kvm_vcpu *vcpu) return RESUME_GUEST; } +/* + * If the lppaca had pmcregs_in_use clear when we exited the guest, then + * HFSCR_PM is cleared for next entry. If the guest then tries to access + * the PMU SPRs, we get this facility unavailable interrupt. Putting HFSCR_PM + * back in the guest HFSCR will cause the next entry to load the PMU SPRs and + * allow the guest access to continue. 
+ */ +static int kvmppc_pmu_unavailable(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.hfscr_permitted & HFSCR_PM)) + return EMULATE_FAIL; + + vcpu->arch.hfscr |= HFSCR_PM; + + return RESUME_GUEST; +} + static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { @@ -1705,16 +1722,22 @@ XXX benchmark guest exits * to emulate. * Otherwise, we just generate a program interrupt to the guest. */ - case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: + case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: { r = EMULATE_FAIL; - if (((vcpu->arch.hfscr >> 56) == FSCR_MSGP_LG) && - cpu_has_feature(CPU_FTR_ARCH_300)) - r = kvmppc_emulate_doorbell_instr(vcpu); + if (cpu_has_feature(CPU_FTR_ARCH_300)) { + unsigned long cause = vcpu->arch.hfscr >> 56; + + if (cause == FSCR_MSGP_LG) + r = kvmppc_emulate_doorbell_instr(vcpu); + if (cause == FSCR_PM_LG) + r = kvmppc_pmu_unavailable(vcpu); + } if (r == EMULATE_FAIL) { kvmppc_core_queue_program(vcpu, SRR1_PROGILL); r = RESUME_GUEST; } break; + } case BOOK3S_INTERRUPT_HV_RM_HARD: r = RESUME_PASSTHROUGH; @@ -2723,6 +2746,13 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) if (cpu_has_feature(CPU_FTR_TM_COMP)) vcpu->arch.hfscr |= HFSCR_TM; + vcpu->arch.hfscr_permitted = vcpu->arch.hfscr; + + /* + * PM is demand-faulted so start with it clear. + */ + vcpu->arch.hfscr &= ~HFSCR_PM; + kvmppc_mmu_book3s_hv_init(vcpu); vcpu->arch.state = KVMPPC_VCPU_NOTREADY; @@ -3793,6 +3823,14 @@ static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + struct lppaca *lp; + int load_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + load_pmu = lp->pmcregs_in_use; + + /* Save host */ if (ppc_get_pmu_inuse()) { /* * It might be better to put PMU handling (at least for the @@ -3827,41 +3865,47 @@ static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, } #ifdef CONFIG_PPC_PSERIES + /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */ if (kvmhv_on_pseries()) { barrier(); - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; - } else { - get_lppaca()->pmcregs_in_use = 1; - } + get_lppaca()->pmcregs_in_use = load_pmu; barrier(); } #endif - /* load guest */ - mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); - mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); - mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); - mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); - mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); - mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); - mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); - mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); - mtspr(SPRN_SDAR, vcpu->arch.sdar); - mtspr(SPRN_SIAR, vcpu->arch.siar); - mtspr(SPRN_SIER, vcpu->arch.sier[0]); + /* + * Load guest. If the VPA said the PMCs are not in use but the guest + * tried to access them anyway, HFSCR[PM] will be set by the HFAC + * fault so we can make forward progress. 
+ */ + if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) { + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); + mtspr(SPRN_SDAR, vcpu->arch.sdar); + mtspr(SPRN_SIAR, vcpu->arch.siar); + mtspr(SPRN_SIER, vcpu->arch.sier[0]); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); + mtspr(SPRN_SIER2, vcpu->arch.sier[1]); + mtspr(SPRN_SIER3, vcpu->arch.sier[2]); + } - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); - mtspr(SPRN_SIER2, vcpu->arch.sier[1]); - mtspr(SPRN_SIER3, vcpu->arch.sier[2]); - } + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, vcpu->arch.mmcra); + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); + /* No isync necessary because we're starting counters */ - /* Set MMCRA then MMCR0 last */ - mtspr(SPRN_MMCRA, vcpu->arch.mmcra); - mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); - /* No isync necessary because we're starting counters */ + if (!vcpu->arch.nested && + (vcpu->arch.hfscr_permitted & HFSCR_PM)) + vcpu->arch.hfscr |= HFSCR_PM; + } } static void switch_pmu_to_host(struct kvm_vcpu *vcpu, @@ -3897,9 +3941,32 @@ static void switch_pmu_to_host(struct kvm_vcpu *vcpu, vcpu->arch.sier[1] = mfspr(SPRN_SIER2); vcpu->arch.sier[2] = mfspr(SPRN_SIER3); } - } else { + + } else if (vcpu->arch.hfscr & HFSCR_PM) { + /* + * The guest accessed PMC SPRs without specifying they should + * be preserved, or it cleared pmcregs_in_use after the last + * access. Just ensure they are frozen. + */ freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); - } + + /* + * Demand-fault PMU register access in the guest. + * + * This is used to grab the guest's VPA pmcregs_in_use value + * and reflect it into the host's VPA in the case of a nested + * hypervisor. + * + * It also avoids having to zero-out SPRs after each guest + * exit to avoid side-channels when. + * + * This is cleared here when we exit the guest, so later HFSCR + * interrupt handling can add it back to run the guest with + * PM enabled next time. + */ + if (!vcpu->arch.nested) + vcpu->arch.hfscr &= ~HFSCR_PM; + } /* otherwise the PMU should still be frozen */ #ifdef CONFIG_PPC_PSERIES if (kvmhv_on_pseries()) { diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index 983628ed4376..3ffc63ffebc5 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -104,16 +104,6 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, { struct kvmppc_vcore *vc = vcpu->arch.vcore; - /* - * When loading the hypervisor-privileged registers to run L2, - * we might have used bits from L1 state to restrict what the - * L2 state is allowed to be. Since L1 is not allowed to read - * the HV registers, do not include these modifications in the - * return state. 
- */ - hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) | - (HFSCR_INTR_CAUSE & vcpu->arch.hfscr)); - hr->dpdes = vc->dpdes; hr->purr = vcpu->arch.purr; hr->spurr = vcpu->arch.spurr; @@ -137,14 +127,23 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, case BOOK3S_INTERRUPT_H_INST_STORAGE: hr->asdr = vcpu->arch.fault_gpa; break; - case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: - { - u8 cause = vcpu->arch.hfscr >> 56; + case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: { + u64 cause = vcpu->arch.hfscr >> 56; WARN_ON_ONCE(cause >= BITS_PER_LONG); - if (!(hr->hfscr & (1UL << cause))) + /* + * When loading the hypervisor-privileged registers to run L2, + * we might have used bits from L1 state to restrict what the + * L2 state is allowed to be. Since L1 is not allowed to read + * the HV registers, do not include these modifications in the + * return state. + */ + hr->hfscr &= ~HFSCR_INTR_CAUSE; + if (!(hr->hfscr & (1UL << cause))) { + hr->hfscr |= vcpu->arch.hfscr & HFSCR_INTR_CAUSE; break; + } /* * We have disabled this facility, so it does not @@ -152,10 +151,6 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, */ vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST; kvmppc_load_last_inst(vcpu, INST_GENERIC, &vcpu->arch.emul_inst); - - /* Don't leak the cause field */ - hr->hfscr &= ~HFSCR_INTR_CAUSE; - fallthrough; } case BOOK3S_INTERRUPT_H_EMUL_ASSIST: @@ -299,10 +294,10 @@ static void load_l2_hv_regs(struct kvm_vcpu *vcpu, (vc->lpcr & ~mask) | (*lpcr & mask)); /* - * Don't let L1 enable features for L2 which we've disabled for L1, - * but preserve the interrupt cause field. + * Don't let L1 enable features for L2 which we disallow for L1. + * Preserve the interrupt cause field. */ - vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | l1_hv->hfscr); + vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | vcpu->arch.hfscr_permitted); /* Don't let data address watchpoint match in hypervisor state */ vcpu->arch.dawrx0 = l2_hv->dawrx0 & ~DAWRX_HYP; @@ -389,6 +384,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) /* set L1 state to L2 state */ vcpu->arch.nested = l2; vcpu->arch.nested_vcpu_id = l2_hv.vcpu_token; + l2->hfscr = l2_hv.hfscr; vcpu->arch.regs = l2_regs; /* Guest must always run with ME enabled, HV disabled. 
*/ From patchwork Mon Jul 26 03:50:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509722 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=L4k3j5Dp; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Zb0fL2z9t2g for ; Mon, 26 Jul 2021 13:51:35 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231548AbhGZDLF (ORCPT ); Sun, 25 Jul 2021 23:11:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53204 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDLE (ORCPT ); Sun, 25 Jul 2021 23:11:04 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3BE49C061757 for ; Sun, 25 Jul 2021 20:51:33 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id e21so5526606pla.5 for ; Sun, 25 Jul 2021 20:51:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=3EPodwAY04xAoBbUr0lK1mnu3c10gNvTZRm9usPrYgg=; b=L4k3j5Dp6o7T71/jhIrr7oSzW9uROFemXyfUu/avw1s4K6QadE4NS6lf4IbtDRhCoQ SKNp2Tdz3f1oGUZE/hXPX+uENx5dnXf4WUS/gNJ4w8ZywCvxGsH3hEYo0cw2WUD9QXxb Uf4RKznOSArw6xMXW6jCOTUvvT/fyul3ImbUePg5bt85uD8AhKPn7E8vITIEsmq0+Ofe N3AyEXisbbIZRiKCSdE+wxFkQ19UlEhGiWA3eZRhKHZhdyFxxLoMM1O2XYEOUw4HGIeu dgc2+tY1G0PHMkCywQptVvw3MSDnCum7n/hoANT4dl7BGNHhZodrC3LjVw1ofYb4UXNg O2Zg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=3EPodwAY04xAoBbUr0lK1mnu3c10gNvTZRm9usPrYgg=; b=B8oaCzfSm5q9ppFY554yqYLUImRPP+KgqiEpfeGHHLnRf9wCDtIAp/qL0G3hgyYXh5 PwZ6cWaCgQPQuctasFq5YwCfZYdyTbTPXZzWUgW3bl8SgSU0cSg1QVW/PKbOQV0q3C1I L2AGYXVpLtST7L2Ec2QyryXNdEpp6g0LEh8yIDjJZAkVKPykake6H/F04OgjdkIf2cn8 bkIHhb/WM4pXESkoNSDnvsHxsyeZFu6kwL3J3q/iTF3//QqiDWDivR27iOcO7Yz/cOeB b/OTr+GsE0A/120oELSWB8WBzKhRFdRX6qpRXvvrMo5eE2IUBeBM0Kc2XRMmx2t+gEbN Qa7w== X-Gm-Message-State: AOAM532YCJw7Mc4eg4aE9CukZUROZZMkyN/W0bJ/cbNu3HETPbYKnifa hdzFxWBL3yio5H/0Zz2kSboEflqV244= X-Google-Smtp-Source: ABdhPJylr8Y1mDqJPxXyJH8MJieXqvQy65R5784ZTRSdS5SquIZmSZAh/czNwnmyPbehP0baKjar9A== X-Received: by 2002:a17:90a:19c2:: with SMTP id 2mr15305821pjj.233.1627271492764; Sun, 25 Jul 2021 20:51:32 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:32 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v1 20/55] KVM: PPC: Book3S HV P9: Factor out yield_count increment Date: Mon, 26 Jul 2021 13:50:01 +1000 Message-Id: <20210726035036.739609-21-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Factor duplicated code into a helper function. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 7c75f63648d6..772f1e6c93e1 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4081,6 +4081,16 @@ static inline bool hcall_is_xics(unsigned long req) req == H_IPOLL || req == H_XIRR || req == H_XIRR_X; } +static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu) +{ + struct lppaca *lp = vcpu->arch.vpa.pinned_addr; + if (lp) { + u32 yield_count = be32_to_cpu(lp->yield_count) + 1; + lp->yield_count = cpu_to_be32(yield_count); + vcpu->arch.vpa.dirty = 1; + } +} + /* * Guest entry for POWER9 and later CPUs. */ @@ -4109,12 +4119,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 1; vc->in_guest = 1; - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - u32 yield_count = be32_to_cpu(lp->yield_count) + 1; - lp->yield_count = cpu_to_be32(yield_count); - vcpu->arch.vpa.dirty = 1; - } + vcpu_vpa_increment_dispatch(vcpu); if (cpu_has_feature(CPU_FTR_TM) || cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) @@ -4242,12 +4247,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - u32 yield_count = be32_to_cpu(lp->yield_count) + 1; - lp->yield_count = cpu_to_be32(yield_count); - vcpu->arch.vpa.dirty = 1; - } + vcpu_vpa_increment_dispatch(vcpu); switch_pmu_to_host(vcpu, &host_os_sprs); From patchwork Mon Jul 26 03:50:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509723 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=HRDm27uA; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Zd1r66z9t2g for ; Mon, 26 Jul 2021 13:51:37 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231558AbhGZDLH (ORCPT ); Sun, 25 Jul 2021 23:11:07 -0400 Received: from lindbergh.monkeyblade.net 
([23.128.96.19]:53214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDLG (ORCPT ); Sun, 25 Jul 2021 23:11:06 -0400 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 77989C061757 for ; Sun, 25 Jul 2021 20:51:35 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id d1so2753529pll.1 for ; Sun, 25 Jul 2021 20:51:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=F6EXfh7wwFLDO6tYfgdCgo9uI2whqhTz3oD65TnzIBk=; b=HRDm27uAj54KL0N0O6EShyPejA2HVwuvz4TbRwkcPJKpHvUCkyuCaecubM0laavj+X 9jb8VbAtA7+bu6bDElTvi6k/sQNplhrZvNTpnxzEgaZZLkqfoG+cSoInODlyRN7/z1pQ GzY07TqLgIg3k2kDI1M/zKDLyHLZz1eSVMTFAiPGSMNXvxGenOCT3dssH/eIqEYuaox+ OnE9zdw9TS1HU5yBT8MR2VewR44h4N3dIH44kPZlFRtvcofauX3qLcqYLNezjc7z8GLk vlvAdCug89gsQ9PxbFSyCTrpiH670cYsXKwAqM+n4hWokJjLC/wgAYLaHwBhI4HWnXPA zeaA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=F6EXfh7wwFLDO6tYfgdCgo9uI2whqhTz3oD65TnzIBk=; b=ayUvNVkZI+/iPp5IzYrikL41Bdn55AxVxgffEvVACPtVlotrjMsJC/H7emc1qvu7EL Fy1Ja7ZsbHTQB/5Fl6GrWz0RtHkjtdiF7l5/RKnYI+GCk0PEJG1RZaAcyiW4OzhrdyFk ckulh9SLG0+vyie+DCDyhR/dPlA/NELWwpkOSwr5/2mICeaOtnaQ8u60L1bH6/rDk/7i VYt308alg7W6Z1hBfJb6i2JWpHn6HkzWmGFTIlWJA1vyPmGsXAnJEAHSES+2GRKNFSii vDUHsbISR/fpDszJnHQMMiN1gjkhsLudoln0sx+OGRYtrjSsyS01VJEwgz02CcqDQ/gC p8IA== X-Gm-Message-State: AOAM533tqD0iVPzm40GLivWNa+JfhyyVWUVXDxgXHJAqQhu5mH2YxRP5 ik3vnGTk6qwHBngUAii4lrkMYAYa6AA= X-Google-Smtp-Source: ABdhPJyEqODVeSC6Sg+g/8PLrsHLPa3K65zRCYAfuCQMV7V+iFZkeSlfLPF/sDlCcFMkaf5H5DWWbg== X-Received: by 2002:a17:902:7598:b029:12b:e9ca:dfd5 with SMTP id j24-20020a1709027598b029012be9cadfd5mr8682207pll.12.1627271494995; Sun, 25 Jul 2021 20:51:34 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:34 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 21/55] KVM: PPC: Book3S HV: CTRL SPR does not require read-modify-write Date: Mon, 26 Jul 2021 13:50:02 +1000 Message-Id: <20210726035036.739609-22-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Processors that support KVM HV do not require read-modify-write of the CTRL SPR to set/clear their thread's runlatch. Just write 1 or 0 to it. 
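To make the before/after concrete, here is a small stand-alone C sketch (not kernel code; the CTRL SPR and its mfspr/mtspr accessors are modelled with a plain variable and stub functions, and only the runlatch bit is represented):

#include <assert.h>

static unsigned long ctrl = 1;   /* models CTRL; bit 0 is the runlatch */

static unsigned long mfspr_ctrlf(void) { return ctrl; }
static void mtspr_ctrlt(unsigned long val) { ctrl = val; }

/* Old pattern: read CTRLF, clear the runlatch bit, write the result back. */
static void clear_runlatch_rmw(void)
{
        mtspr_ctrlt(mfspr_ctrlf() & ~1UL);
}

/* New pattern: just write the value we want, no read needed. */
static void clear_runlatch_direct(void)
{
        mtspr_ctrlt(0);
}

int main(void)
{
        clear_runlatch_rmw();
        assert(ctrl == 0);

        ctrl = 1;
        clear_runlatch_direct();
        assert(ctrl == 0);

        return 0;
}

Both variants leave the runlatch clear; dropping the mfspr simply removes an SPR read from the guest entry/exit path.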
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 2 +- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 15 ++++++--------- 2 files changed, 7 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 772f1e6c93e1..f212d5013622 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4024,7 +4024,7 @@ static void load_spr_state(struct kvm_vcpu *vcpu) */ if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1); + mtspr(SPRN_CTRLT, 0); } static void store_spr_state(struct kvm_vcpu *vcpu) diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 551ce223b40c..05be8648937d 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -775,12 +775,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) mtspr SPRN_AMR,r5 mtspr SPRN_UAMOR,r6 - /* Restore state of CTRL run bit; assume 1 on entry */ + /* Restore state of CTRL run bit; the host currently has it set to 1 */ lwz r5,VCPU_CTRL(r4) andi. r5,r5,1 bne 4f - mfspr r6,SPRN_CTRLF - clrrdi r6,r6,1 + li r6,0 mtspr SPRN_CTRLT,r6 4: /* Secondary threads wait for primary to have done partition switch */ @@ -1203,12 +1202,12 @@ guest_bypass: stw r0, VCPU_CPU(r9) stw r0, VCPU_THREAD_CPU(r9) - /* Save guest CTRL register, set runlatch to 1 */ + /* Save guest CTRL register, set runlatch to 1 if it was clear */ mfspr r6,SPRN_CTRLF stw r6,VCPU_CTRL(r9) andi. r0,r6,1 bne 4f - ori r6,r6,1 + li r6,1 mtspr SPRN_CTRLT,r6 4: /* @@ -2178,8 +2177,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM) * Also clear the runlatch bit before napping. */ kvm_do_nap: - mfspr r0, SPRN_CTRLF - clrrdi r0, r0, 1 + li r0,0 mtspr SPRN_CTRLT, r0 li r0,1 @@ -2198,8 +2196,7 @@ kvm_nap_sequence: /* desired LPCR value in r5 */ bl isa206_idle_insn_mayloss - mfspr r0, SPRN_CTRLF - ori r0, r0, 1 + li r0,1 mtspr SPRN_CTRLT, r0 mtspr SPRN_SRR1, r3 From patchwork Mon Jul 26 03:50:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509724 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=g+HajHr1; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Zh1Wk0z9t6h for ; Mon, 26 Jul 2021 13:51:39 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231575AbhGZDLJ (ORCPT ); Sun, 25 Jul 2021 23:11:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53228 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDLJ (ORCPT ); Sun, 25 Jul 2021 23:11:09 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B3B01C061757 for ; Sun, 25 Jul 2021 20:51:37 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id j1so11099426pjv.3 for ; Sun, 25 Jul 2021 20:51:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; 
h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=TlyaSEq0t+PRD6ftzOjtBOST1TcfxMZ4B/R2htdQQDg=; b=g+HajHr1SO9Bsn/F4YCZmiWqX6zSkdUF1UemGJUPQoZqwW0CMXOzECMOG5Xq89mtGM 9XDq3crVgP8t3JwF+1RlJQWBph82iwHg3E+KAQvy6DfiHm+KnCqKe/aL+GsDXV9sxCiZ DXmkQAdY2JKrJZyuHD9BngEPY3aBKJpL9gv8swydw7i9pqEbHnsUJbUuGWN7ESGVmNtu 3m9IZU9BII2G25kuXl7ap56cWYqmYHEJGsqV6K+cBAeayglVb//oQ7Y7gzDhiRrQJpqQ VBh2rGBWNNLNhHiAmoofkBs4lBxmGAZI3qJOXUrqbQNKiYzQWpu6BoOwW+1Eh8iT9TXK E7wQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=TlyaSEq0t+PRD6ftzOjtBOST1TcfxMZ4B/R2htdQQDg=; b=eLrnIhDKbQ8KijQXBoIYJW3ToNan+cffc9dLofncMVbT3mqN5NvIB6NWDw83i3N+jS iM8ehwWY0yZOf3a+n03lMO5K2hiC7HaNHdAygTFcV5/C921DFNCP5TZ7R0g/S4DgzfbD GMtDUbL2+tYZj3vnnPY9hFL3LEzyzMe4gI4Go6xILMfR4XfsCEtRzC5fyoir/7S6fdPf ZJ6ZglLRKgn4i8wx/By7MtMNU5+g3PMYdRXYfAOhPefI7jg0sQIMdouISoz5C+XSLVPV hyaCEb5zm/W/okTwEh5HZuHmaFosVIiwFHMI9yIrlZNTz6uwqj6oOBPN3YoL7ySoqnlE BUxg== X-Gm-Message-State: AOAM532CfL5SDC83yhzQBF2OFTozPsJa4M1PNcmABifKkj7tPGJF543w 2pXxjKfqrU+wHs5YQZ7Ai43BLes/Q6E= X-Google-Smtp-Source: ABdhPJzDx+e6KcW+rc1cBJfLlZ9+HLdDjj/ZOf3LfPcyJbtU7eVuwRkC9WAFWEqtFw5wkpez6U4NcA== X-Received: by 2002:a17:902:a9c1:b029:12b:8ae3:e077 with SMTP id b1-20020a170902a9c1b029012b8ae3e077mr12921049plr.75.1627271497200; Sun, 25 Jul 2021 20:51:37 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:37 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 22/55] KVM: PPC: Book3S HV P9: Move SPRG restore to restore_p9_host_os_sprs Date: Mon, 26 Jul 2021 13:50:03 +1000 Message-Id: <20210726035036.739609-23-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the SPR update into its relevant helper function. This will help with SPR scheduling improvements in later changes. 
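As a sketch of the shape of the refactor (stand-alone C, not the real function; the SPR numbers, the struct name and the mtspr stub are placeholders invented here), the point is simply that the SPRG_VDSO write now sits with the rest of the host SPR restores inside the one helper:

#include <stdio.h>

/* Placeholder SPR numbers for the sketch only; the real values live in asm/reg.h. */
enum { SPRN_SPRG_VDSO_WRITE = 1, SPRN_PSPB = 2, SPRN_UAMOR = 3 };

struct p9_host_os_sprs_model {
        unsigned long sprg_vdso;
};

static void mtspr_model(int sprn, unsigned long val)
{
        printf("mtspr(%d, %#lx)\n", sprn, val);
}

/* All host-side SPR restores now sit in the one helper. */
static void restore_p9_host_os_sprs_model(const struct p9_host_os_sprs_model *host)
{
        mtspr_model(SPRN_SPRG_VDSO_WRITE, host->sprg_vdso); /* moved in from the caller */
        mtspr_model(SPRN_PSPB, 0);
        mtspr_model(SPRN_UAMOR, 0);
}

int main(void)
{
        struct p9_host_os_sprs_model host = { .sprg_vdso = 0x1234UL };
        restore_p9_host_os_sprs_model(&host);
        return 0;
}

Keeping every host SPR write in one function gives later patches a single place to reorder or batch those writes.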
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f212d5013622..2e966d62a583 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4057,6 +4057,8 @@ static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); + mtspr(SPRN_PSPB, 0); mtspr(SPRN_UAMOR, 0); @@ -4256,8 +4258,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, timer_rearm_host_dec(tb); - mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - kvmppc_subcore_exit_guest(); return trap; From patchwork Mon Jul 26 03:50:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509726 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=S6/W3WJS; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Zj6Yz5z9t18 for ; Mon, 26 Jul 2021 13:51:41 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231577AbhGZDLL (ORCPT ); Sun, 25 Jul 2021 23:11:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDLL (ORCPT ); Sun, 25 Jul 2021 23:11:11 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DFBE0C061757 for ; Sun, 25 Jul 2021 20:51:39 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id b6so11101105pji.4 for ; Sun, 25 Jul 2021 20:51:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=yrQqMqJQy0pD69345VC9XUGRd5IuhvhW0vV2AuIPX00=; b=S6/W3WJS8IFExRdExqTAvDDz6z0UTUOsILfUtj+NLBTHO/LV3RWAyVHeK2HZJ008/U zLkS19ZdH4MF2L55rd5X4oAVDSBqCYBjg8T+7BJ8U8utfoegtEHdjRXonYpCkGkkYzYr 7RBm9lSC6Lpmy9TzULKFViXqEMWbKNgOhTiS6g3WPqsEM1U1zAPFK1GGZN2mKrmBX3xd cebU/hVXh/u4ISsvtlgPXa/Wl8Y+xeefHABFXYrm9Cwiaq/7FYG2/GRqi15I30oqmgo9 oEq2FuXu0of0g+B+wg01+bUPX/tuRmvt52PajX2DZHSHDaNNV3k+zmbQ3DwosEjjCX8G MCIg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=yrQqMqJQy0pD69345VC9XUGRd5IuhvhW0vV2AuIPX00=; b=kMeQCGOx5kXaQpT/5h1O9u6Pdq3NZDxsy/odnqKntswtyoKadYlaKGDPwQgt2yBbN9 o1wubBRn3yfp3QZgjhtlgKoNi95x5SrHDqZu94Boaj1s2v8A5qUIYZ0+mArPqJa+hufo gLj9I/UUAyvRoKlGHdWDoXnCSY/daALv33kXDOuaAEDE1fAFSGl3rwKhCI6o/XfSEwEy RpD9c+hJYNDO0UPqE+4oFUnuyJllPb+i0D7a+GlmFFEkp6mhbeQwHOJYyVq6ejXP7IPi LC6Xzm65DAVGNA06eJVfACOFJhK+OL07o7+4bAmZWViiYoHEzK5YnX9L4aD8B+4VArjK 4Yuw== X-Gm-Message-State: 
AOAM533XCQ89G3almU8lcisi3hoprJHVLXM2no6jEaGOfIyQWgx8mrSP xzBosrqgROSGvrD2CgFdedEyCEPJObw= X-Google-Smtp-Source: ABdhPJyvmfU7zAJVedynlPUw2XNSDPiMja3RDY2SXQXjITzUPKoZrA/HjwMtqnySk+q1TQa1GiahNQ== X-Received: by 2002:a17:90a:7884:: with SMTP id x4mr25345156pjk.53.1627271499434; Sun, 25 Jul 2021 20:51:39 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:39 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 23/55] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs Date: Mon, 26 Jul 2021 13:50:04 +1000 Message-Id: <20210726035036.739609-24-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This reduces the number of mtmsrd required to enable facility bits when saving/restoring registers, by having the KVM code set all bits up front rather than using individual facility functions that set their particular MSR bits. -42 cycles (7803) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin Reported-by: kernel test robot Reported-by: kernel test robot --- arch/powerpc/kernel/process.c | 24 +++++++++++ arch/powerpc/kvm/book3s_hv.c | 61 ++++++++++++++++++--------- arch/powerpc/kvm/book3s_hv_p9_entry.c | 1 + 3 files changed, 67 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 185beb290580..00b55b38a460 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -593,6 +593,30 @@ static void save_all(struct task_struct *tsk) msr_check_and_clear(msr_all_available); } +void save_user_regs_kvm(void) +{ + unsigned long usermsr; + + if (!current->thread.regs) + return; + + usermsr = current->thread.regs->msr; + + if (usermsr & MSR_FP) + save_fpu(current); + + if (usermsr & MSR_VEC) + save_altivec(current); + + if (usermsr & MSR_TM) { + current->thread.tm_tfhar = mfspr(SPRN_TFHAR); + current->thread.tm_tfiar = mfspr(SPRN_TFIAR); + current->thread.tm_texasr = mfspr(SPRN_TEXASR); + current->thread.regs->msr &= ~MSR_TM; + } +} +EXPORT_SYMBOL_GPL(save_user_regs_kvm); + void flush_all_to_thread(struct task_struct *tsk) { if (tsk->thread.regs) { diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 2e966d62a583..dedcf3ddba3b 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4103,6 +4103,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, struct p9_host_os_sprs host_os_sprs; s64 dec; u64 tb, next_timer; + unsigned long msr; int trap; WARN_ON_ONCE(vcpu->arch.ceded); @@ -4114,8 +4115,23 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, if (next_timer < time_limit) time_limit = next_timer; + vcpu->arch.ceded = 0; + save_p9_host_os_sprs(&host_os_sprs); + /* MSR bits may have been cleared by context switch */ + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + kvmppc_subcore_enter_guest(); 
vc->entry_exit_map = 1; @@ -4124,12 +4140,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + msr = mfmsr(); /* TM restore can update msr */ + } switch_pmu_to_guest(vcpu, &host_os_sprs); - msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); load_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC load_vr_state(&vcpu->arch.vr); @@ -4238,7 +4255,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, restore_p9_host_os_sprs(vcpu, &host_os_sprs); - msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); store_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC store_vr_state(&vcpu->arch.vr); @@ -4767,6 +4783,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, goto done; } +void save_user_regs_kvm(void); + static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) { struct kvm_run *run = vcpu->run; @@ -4776,19 +4794,24 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) unsigned long user_tar = 0; unsigned int user_vrsave; struct kvm *kvm; + unsigned long msr; if (!vcpu->arch.sane) { run->exit_reason = KVM_EXIT_INTERNAL_ERROR; return -EINVAL; } + /* No need to go into the guest when all we'll do is come back out */ + if (signal_pending(current)) { + run->exit_reason = KVM_EXIT_INTR; + return -EINTR; + } + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM /* * Don't allow entry with a suspended transaction, because * the guest entry/exit code will lose it. - * If the guest has TM enabled, save away their TM-related SPRs - * (they will get restored by the TM unavailable interrupt). */ -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM if (cpu_has_feature(CPU_FTR_TM) && current->thread.regs && (current->thread.regs->msr & MSR_TM)) { if (MSR_TM_ACTIVE(current->thread.regs->msr)) { @@ -4796,12 +4819,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) run->fail_entry.hardware_entry_failure_reason = 0; return -EINVAL; } - /* Enable TM so we can read the TM SPRs */ - mtmsr(mfmsr() | MSR_TM); - current->thread.tm_tfhar = mfspr(SPRN_TFHAR); - current->thread.tm_tfiar = mfspr(SPRN_TFIAR); - current->thread.tm_texasr = mfspr(SPRN_TEXASR); - current->thread.regs->msr &= ~MSR_TM; } #endif @@ -4816,18 +4833,24 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) kvmppc_core_prepare_to_enter(vcpu); - /* No need to go into the guest when all we'll do is come back out */ - if (signal_pending(current)) { - run->exit_reason = KVM_EXIT_INTR; - return -EINTR; - } - kvm = vcpu->kvm; atomic_inc(&kvm->arch.vcpus_running); /* Order vcpus_running vs. 
mmu_ready, see kvmppc_alloc_reset_hpt */ smp_mb(); - flush_all_to_thread(current); + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + + save_user_regs_kvm(); /* Save userspace EBB and other register values */ if (cpu_has_feature(CPU_FTR_ARCH_207S)) { diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index a7f63082b4e3..fb9cb34445ea 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -224,6 +224,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = vc->tb_offset; } + /* Could avoid mfmsr by passing around, but probably no big deal */ msr = mfmsr(); host_hfscr = mfspr(SPRN_HFSCR); From patchwork Mon Jul 26 03:50:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509727 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=vOPZwS02; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Zm0JpXz9sjJ for ; Mon, 26 Jul 2021 13:51:44 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231579AbhGZDLO (ORCPT ); Sun, 25 Jul 2021 23:11:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53246 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDLN (ORCPT ); Sun, 25 Jul 2021 23:11:13 -0400 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1BC69C061757 for ; Sun, 25 Jul 2021 20:51:42 -0700 (PDT) Received: by mail-pj1-x1034.google.com with SMTP id ch6so2116167pjb.5 for ; Sun, 25 Jul 2021 20:51:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=dcPpW+gVoT4ynH1L1M9O+sLZoAI7u1MA7ldnBV8oR7Y=; b=vOPZwS02yoJOGLWz4Iy/PiyeyCbWOXHv/4at/U8Vh8SFm4GtkI73RdS+WYVbzP/Z+t kAMa95542GpqHu8SyBCfHmBP2lwv5IP/DZZAx/BKSYuIChwaJ3ktqb0ofgrXb7UEicRJ BFqspS/o4M1MqtrBynYhzdhci7t9QS/Fu6kSi/u1aQt1LnZRqAiXyPVTnxfQeCtKGCqA JLtZwLYhcdHdtpoF/NQAPnDt/3AlHKVXOKYVT0XoWib3y/3Nuna8yBezK69OCtgoz87/ wro8AYkpcESiUIkUDfyUZG3N89oFJxpRVeoy1pBTOv891E+7cqumPG+tty8rmqloIJuy Zlow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=dcPpW+gVoT4ynH1L1M9O+sLZoAI7u1MA7ldnBV8oR7Y=; b=paHYjtlmR78F7uBRu5uMh1h2M6SmhQPrucrGIjPYoT+UcnYsx5lCbVXC3re9Re0OFy bL9wLJva4EdGmGnaEh5OW9R0VZ+AzzCYPQya5anmCqTSCDeUcMgyO0N7HIQb8xvuq5Zr hH91T3bD7Rl+udsxH47ytXaeOx8D+ovd0tOLIMWQ45wmmuEfgKPVa/QQY/wACtx2T2Fe 
AFFGmq70nocD+JrM7gdml0jslRfiYyAzue/iWFPptsltO3i+lsUYd95rYJuFBRcT8bR3 JMjDQNxGS45R5n+AeXN5U+egCQy4j4cjIxg45Fw69ZRCP89HbVsWGItpLfgBV6m4TXj+ 6PXQ== X-Gm-Message-State: AOAM530+H5xjTVTQ/Hg+dVhmPjaBw7/R/dE1pd+SV2YJA7LvNtrdAtMc 9UzuIICndLmInrYB9W2G78pMrtyF14U= X-Google-Smtp-Source: ABdhPJypQXxe5aG6GybNm+5ublhlaXyP4/3pV0wcw8Z9Sgbhefsl3Umgsds3uIkyF80QP+hKmJy0UQ== X-Received: by 2002:a63:4c03:: with SMTP id z3mr16084182pga.130.1627271501608; Sun, 25 Jul 2021 20:51:41 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:41 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 24/55] KVM: PPC: Book3S HV P9: Improve mtmsrd scheduling by delaying MSR[EE] disable Date: Mon, 26 Jul 2021 13:50:05 +1000 Message-Id: <20210726035036.739609-25-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Moving the mtmsrd after the host SPRs are saved and before the guest SPRs start to be loaded can prevent an SPR scoreboard stall (because the mtmsrd is L=1 type which does not cause context synchronisation. This is also now more convenient to combined with the mtmsrd L=0 instruction to enable facilities just below, but that is not done yet. -12 cycles (7791) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index dedcf3ddba3b..7654235c1507 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4119,6 +4119,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_p9_host_os_sprs(&host_os_sprs); + /* + * This could be combined with MSR[RI] clearing, but that expands + * the unrecoverable window. It would be better to cover unrecoverable + * with KVM bad interrupt handling rather than use MSR[RI] at all. + * + * Much more difficult and less worthwhile to combine with IR/DR + * disable. 
+ */ + hard_irq_disable(); + if (lazy_irq_pending()) + return 0; + /* MSR bits may have been cleared by context switch */ msr = 0; if (IS_ENABLED(CONFIG_PPC_FPU)) @@ -4618,6 +4630,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, struct kvmppc_vcore *vc; struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; + unsigned long flags; trace_kvmppc_run_vcpu_enter(vcpu); @@ -4661,11 +4674,11 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, if (kvm_is_radix(kvm)) kvmppc_prepare_radix_vcpu(vcpu, pcpu); - local_irq_disable(); - hard_irq_disable(); + /* flags save not required, but irq_pmu has no disable/enable API */ + powerpc_local_irq_pmu_save(flags); if (signal_pending(current)) goto sigpend; - if (lazy_irq_pending() || need_resched() || !kvm->arch.mmu_ready) + if (need_resched() || !kvm->arch.mmu_ready) goto out; if (!nested) { @@ -4720,7 +4733,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, guest_exit_irqoff(); - local_irq_enable(); + powerpc_local_irq_pmu_restore(flags); cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest); @@ -4778,7 +4791,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; out: - local_irq_enable(); + powerpc_local_irq_pmu_restore(flags); preempt_enable(); goto done; } From patchwork Mon Jul 26 03:50:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509728 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=IoOAmkNr; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Zp4cRCz9svs for ; Mon, 26 Jul 2021 13:51:46 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231582AbhGZDLQ (ORCPT ); Sun, 25 Jul 2021 23:11:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53256 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231530AbhGZDLQ (ORCPT ); Sun, 25 Jul 2021 23:11:16 -0400 Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com [IPv6:2607:f8b0:4864:20::633]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A35BCC061757 for ; Sun, 25 Jul 2021 20:51:44 -0700 (PDT) Received: by mail-pl1-x633.google.com with SMTP id k1so9914351plt.12 for ; Sun, 25 Jul 2021 20:51:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=9VyGXv1qn0uoS5eA7BXzADizNISqN4ZTKzrUeAY//9w=; b=IoOAmkNrLBRNWr3ilu/SFJPoljfPjGQJCKRHtxFDUFSada09gMtmFGAOSihQ2WsX/m Ryz9vUvSkidgc4ZYqw/dJSTvBoiMIDb/IW3dTCou4f2i19wGaN4GHwYDkZknpWlsbcdQ Tb/mZJ5HLNkzvzwqz7C75ZWyCipmIn9Ok1NcT7+UQvrYQAk4/n6k5ieEvZfCNO8Sx85x v6GZyHcQJ9awqVBqsYK4E3Rkgctd9rTch0GHD4jhAt8odBtXGUHyPJlf6az5ys+iCByR 4xAyBQuAlRSe6kBfAHuzgR89DdRQSaPIrku7EnaGDDZ+FGsHZVI7ZzGcluhBycJLB6qL 2aPA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=9VyGXv1qn0uoS5eA7BXzADizNISqN4ZTKzrUeAY//9w=; b=ZNY9/gewEDNF1efbj0b3U1x/kNRj2TStC73J8ZeF2NytthXJfqQhctYNAMnaCIvmc8 nMzZ4XzYzJ6P51ofsjAdUbIyc5GrfCM4qB5PtllY2b5bXgBWbimUkHsQ/TEchfo+tdCr di4UOzH0QmWbJtSJC6v9Jurjq74XEioHntcdn+ZevzM2zZgfFTgJaaX1ahr6HbejDXHX zzWeUw7HMr/8aIvQcWf5wjbzGFe9Wm0oFrIlGjvSJAptysINKXwZo8A3Ih1M91vYMqha bxefgmcUg2ZDNNnLH5MJpqiDicTEsS9FSfDen8+TYG3LwHTMRJNYhtgtTiCK1i2UhHh7 uJAg== X-Gm-Message-State: AOAM532xe/R+q6xbIvJa8OX4Fn2h3cOlg7jcunkXYrt0b1JbbGixlWvP cLaL8rywrx2dltgpq/tP9v8fO/3kCPA= X-Google-Smtp-Source: ABdhPJyyepy2WnT5E3LdywhsJvWyzOZZkZDfMg020dcPBKUi6Vq51xKSXDckbl76dj55YUhmPjVPww== X-Received: by 2002:a17:90a:8410:: with SMTP id j16mr15415744pjn.111.1627271504109; Sun, 25 Jul 2021 20:51:44 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:43 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v1 25/55] KVM: PPC: Book3S HV P9: Add kvmppc_stop_thread to match kvmppc_start_thread Date: Mon, 26 Jul 2021 13:50:06 +1000 Message-Id: <20210726035036.739609-26-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Small cleanup makes it a bit easier to match up entry and exit operations. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 7654235c1507..4d757e4904c4 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3045,6 +3045,13 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) kvmppc_ipi_thread(cpu); } +/* Old path does this in asm */ +static void kvmppc_stop_thread(struct kvm_vcpu *vcpu) +{ + vcpu->cpu = -1; + vcpu->arch.thread_cpu = -1; +} + static void kvmppc_wait_for_nap(int n_threads) { int cpu = smp_processor_id(); @@ -4260,8 +4267,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, dec = (s32) dec; tb = mftb(); vcpu->arch.dec_expires = dec + tb; - vcpu->cpu = -1; - vcpu->arch.thread_cpu = -1; store_spr_state(vcpu); @@ -4733,6 +4738,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, guest_exit_irqoff(); + kvmppc_stop_thread(vcpu); + powerpc_local_irq_pmu_restore(flags); cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest); From patchwork Mon Jul 26 03:50:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509729 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 
header.b=cX3KIRrc; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5Zz6pNXz9sXJ for ; Mon, 26 Jul 2021 13:51:55 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231630AbhGZDLT (ORCPT ); Sun, 25 Jul 2021 23:11:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53266 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231598AbhGZDLS (ORCPT ); Sun, 25 Jul 2021 23:11:18 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E7751C061757 for ; Sun, 25 Jul 2021 20:51:46 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id u9-20020a17090a1f09b029017554809f35so17809466pja.5 for ; Sun, 25 Jul 2021 20:51:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=thTL8AGCjyxwkLctqzhVOZ7LNT4QrcA8E1+CvVxt834=; b=cX3KIRrcghco6dIa5CID12wYVIw5EQOJNc8OCdeE1sS6bP5IHQQK3WekLofI5dQJqs wbT77Reh5NGozEMcUs2Y7vvxUYSbZjO8h0iknsiVUSyx8HEOy6d+KeTJjt9q1x53lnvZ Vs7sdzCBKQOwxw2y0FJ2k4dw5o3laDsbHDwMbXaCgmi0zjjuXmN3BJpxWGN9zqxWAPZY TKKhU6b0cxKusIVaoYqoIT5GDjTnlWcoCE6NOZuEPmrEMqY88HOHI/ses76wEsp1KDgo emOiEcyNgaquwH+S3xlET2gvCur4h/kjYNgc1trz5xhACNpDI3dDWBsl5OGC4XQEmIND IJyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=thTL8AGCjyxwkLctqzhVOZ7LNT4QrcA8E1+CvVxt834=; b=McCZC2uN5pH8O4GnIiE8mn7o0fQhg19M6JEQAQxIsM9SR+GYArdOM+IwcIFrJfLHQP rWgb4HeYLP53idHiBx011ra8OQqe/iz7XOXysNQBH58xKExfK4IrOu8radlbeadTzIbS IKtFmGrI3xjKYjiwPdgzPo7CLcdN2NAQgPsubkiAIs47iZ9xCvBBieQAv3BJKI2qUA4r PeX11NrOOpVB+3RHD9urP0CWkyQsMqnQaB8nqC+VFkQE+Sd/3McNBpk8+4LoAya4q6eW RXK/pOHEMZv+Ik8JSPQgmVRbDlAq5QZVZZoltlZEheZpgy6AQoxNE8g1xY4kPCD3xNlx KsCA== X-Gm-Message-State: AOAM531s3Vuw5OOD8m0jiTOaG7bHPuLwBUjKbfMLZDYwLLg9Xty502Tp gtGV0chZjAD1XCrL4oFht4z1lIm5dVs= X-Google-Smtp-Source: ABdhPJzDFBOez+aII4t9yaTo+Vu40KYUPyU5+27/yhLju+ponGQo2AJuaG9W3o7DShr7R8xOzCJPmw== X-Received: by 2002:a17:90a:f690:: with SMTP id cl16mr11567309pjb.164.1627271506392; Sun, 25 Jul 2021 20:51:46 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:46 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 26/55] KVM: PPC: Book3S HV: Change dec_expires to be relative to guest timebase Date: Mon, 26 Jul 2021 13:50:07 +1000 Message-Id: <20210726035036.739609-27-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Change dec_expires to be relative to the guest timebase, and allow it to be moved into low level P9 guest entry functions, to improve SPR access scheduling. 
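As a rough illustration of the new convention (a sketch only, not code from this patch: kvmppc_dec_expires_host_tb() is the helper the patch actually adds, while guest_tb_to_host_tb() and guest_dec_has_expired() are made-up names), dec_expires is now kept in guest-timebase units and converted with the vcore's tb_offset whenever it is compared against the host timebase:

static inline u64 guest_tb_to_host_tb(u64 guest_tb, u64 tb_offset)
{
	/* host TB = guest TB minus the offset applied for this vcore */
	return guest_tb - tb_offset;
}

static inline bool guest_dec_has_expired(u64 dec_expires_guest_tb,
					 u64 tb_offset, u64 host_tb_now)
{
	/* mirrors the "now > kvmppc_dec_expires_host_tb(vcpu)" check in kvmppc_set_timer() */
	return host_tb_now > guest_tb_to_host_tb(dec_expires_guest_tb, tb_offset);
}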
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s.h | 6 +++ arch/powerpc/include/asm/kvm_host.h | 2 +- arch/powerpc/kvm/book3s_hv.c | 58 +++++++++++++------------ arch/powerpc/kvm/book3s_hv_nested.c | 3 ++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 10 ++++- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 14 ------ 6 files changed, 49 insertions(+), 44 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h index caaa0f592d8e..15b573671f99 100644 --- a/arch/powerpc/include/asm/kvm_book3s.h +++ b/arch/powerpc/include/asm/kvm_book3s.h @@ -406,6 +406,12 @@ static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu) return vcpu->arch.fault_dar; } +/* Expiry time of vcpu DEC relative to host TB */ +static inline u64 kvmppc_dec_expires_host_tb(struct kvm_vcpu *vcpu) +{ + return vcpu->arch.dec_expires - vcpu->arch.vcore->tb_offset; +} + static inline bool is_kvmppc_resume_guest(int r) { return (r == RESUME_GUEST || r == RESUME_GUEST_NV); diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index aee41edcfe6b..f105eaeb4521 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -742,7 +742,7 @@ struct kvm_vcpu_arch { struct hrtimer dec_timer; u64 dec_jiffies; - u64 dec_expires; + u64 dec_expires; /* Relative to guest timebase. */ unsigned long pending_exceptions; u8 ceded; u8 prodded; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 4d757e4904c4..027ae0b60e70 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -2237,8 +2237,7 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, *val = get_reg_val(id, vcpu->arch.vcore->arch_compat); break; case KVM_REG_PPC_DEC_EXPIRY: - *val = get_reg_val(id, vcpu->arch.dec_expires + - vcpu->arch.vcore->tb_offset); + *val = get_reg_val(id, vcpu->arch.dec_expires); break; case KVM_REG_PPC_ONLINE: *val = get_reg_val(id, vcpu->arch.online); @@ -2490,8 +2489,7 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, r = kvmppc_set_arch_compat(vcpu, set_reg_val(id, *val)); break; case KVM_REG_PPC_DEC_EXPIRY: - vcpu->arch.dec_expires = set_reg_val(id, *val) - - vcpu->arch.vcore->tb_offset; + vcpu->arch.dec_expires = set_reg_val(id, *val); break; case KVM_REG_PPC_ONLINE: i = set_reg_val(id, *val); @@ -2877,13 +2875,13 @@ static void kvmppc_set_timer(struct kvm_vcpu *vcpu) unsigned long dec_nsec, now; now = get_tb(); - if (now > vcpu->arch.dec_expires) { + if (now > kvmppc_dec_expires_host_tb(vcpu)) { /* decrementer has already gone negative */ kvmppc_core_queue_dec(vcpu); kvmppc_core_prepare_to_enter(vcpu); return; } - dec_nsec = tb_to_ns(vcpu->arch.dec_expires - now); + dec_nsec = tb_to_ns(kvmppc_dec_expires_host_tb(vcpu) - now); hrtimer_start(&vcpu->arch.dec_timer, dec_nsec, HRTIMER_MODE_REL); vcpu->arch.timer_running = 1; } @@ -3355,7 +3353,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master) */ spin_unlock(&vc->lock); /* cancel pending dec exception if dec is positive */ - if (now < vcpu->arch.dec_expires && + if (now < kvmppc_dec_expires_host_tb(vcpu) && kvmppc_core_pending_dec(vcpu)) kvmppc_core_dequeue_dec(vcpu); @@ -4174,20 +4172,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, load_spr_state(vcpu); - /* - * When setting DEC, we must always deal with irq_work_raise via NMI vs - * setting DEC. 
The problem occurs right as we switch into guest mode - * if a NMI hits and sets pending work and sets DEC, then that will - * apply to the guest and not bring us back to the host. - * - * irq_work_raise could check a flag (or possibly LPCR[HDICE] for - * example) and set HDEC to 1? That wouldn't solve the nested hv - * case which needs to abort the hcall or zero the time limit. - * - * XXX: Another day's problem. - */ - mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); - if (kvmhv_on_pseries()) { /* * We need to save and restore the guest visible part of the @@ -4213,6 +4197,23 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, hvregs.vcpu_token = vcpu->vcpu_id; } hvregs.hdec_expiry = time_limit; + + /* + * When setting DEC, we must always deal with irq_work_raise + * via NMI vs setting DEC. The problem occurs right as we + * switch into guest mode if a NMI hits and sets pending work + * and sets DEC, then that will apply to the guest and not + * bring us back to the host. + * + * irq_work_raise could check a flag (or possibly LPCR[HDICE] + * for example) and set HDEC to 1? That wouldn't solve the + * nested hv case which needs to abort the hcall or zero the + * time limit. + * + * XXX: Another day's problem. + */ + mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - tb); + mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), @@ -4224,6 +4225,12 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); mtspr(SPRN_PSSCR_PR, host_psscr); + dec = mfspr(SPRN_DEC); + if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ + dec = (s32) dec; + tb = mftb(); + vcpu->arch.dec_expires = dec + (tb + vc->tb_offset); + /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && kvmppc_get_gpr(vcpu, 3) == H_CEDE) { @@ -4231,6 +4238,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_set_gpr(vcpu, 3, 0); trap = 0; } + } else { kvmppc_xive_push_vcpu(vcpu); trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr); @@ -4262,12 +4270,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.slb_max = 0; } - dec = mfspr(SPRN_DEC); - if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ - dec = (s32) dec; - tb = mftb(); - vcpu->arch.dec_expires = dec + tb; - store_spr_state(vcpu); restore_p9_host_os_sprs(vcpu, &host_os_sprs); @@ -4752,7 +4754,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, * by L2 and the L1 decrementer is provided in hdec_expires */ if (kvmppc_core_pending_dec(vcpu) && - ((get_tb() < vcpu->arch.dec_expires) || + ((get_tb() < kvmppc_dec_expires_host_tb(vcpu)) || (trap == BOOK3S_INTERRUPT_SYSCALL && kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED))) kvmppc_core_dequeue_dec(vcpu); diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index 3ffc63ffebc5..fad7bc8736ea 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -380,6 +380,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) /* convert TB values/offsets to host (L0) values */ hdec_exp = l2_hv.hdec_expiry - vc->tb_offset; vc->tb_offset += l2_hv.tb_offset; + vcpu->arch.dec_expires += l2_hv.tb_offset; /* set L1 state to L2 state */ vcpu->arch.nested = l2; @@ -421,6 +422,8 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) if (l2_regs.msr & 
MSR_TS_MASK) vcpu->arch.shregs.msr |= MSR_TS_S; vc->tb_offset = saved_l1_hv.tb_offset; + /* XXX: is this always the same delta as saved_l1_hv.tb_offset? */ + vcpu->arch.dec_expires -= l2_hv.tb_offset; restore_hv_regs(vcpu, &saved_l1_hv); vcpu->arch.purr += delta_purr; vcpu->arch.spurr += delta_spurr; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index fb9cb34445ea..814b0dfd590f 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -188,7 +188,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; struct kvmppc_vcore *vc = vcpu->arch.vcore; - s64 hdec; + s64 hdec, dec; u64 tb, purr, spurr; u64 *exsave; bool ri_set; @@ -317,6 +317,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); + mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM tm_return_to_guest: #endif @@ -461,6 +463,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); + dec = mfspr(SPRN_DEC); + if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ + dec = (s32) dec; + tb = mftb(); + vcpu->arch.dec_expires = dec + tb; + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 05be8648937d..16cb3240df52 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -808,10 +808,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) * Set the decrementer to the guest decrementer. 
*/ ld r8,VCPU_DEC_EXPIRES(r4) - /* r8 is a host timebase value here, convert to guest TB */ - ld r5,HSTATE_KVM_VCORE(r13) - ld r6,VCORE_TB_OFFSET_APPL(r5) - add r8,r8,r6 mftb r7 subf r3,r7,r8 mtspr SPRN_DEC,r3 @@ -1186,9 +1182,6 @@ guest_bypass: mftb r6 extsw r5,r5 16: add r5,r5,r6 - /* r5 is a guest timebase value here, convert to host TB */ - ld r4,VCORE_TB_OFFSET_APPL(r3) - subf r5,r4,r5 std r5,VCPU_DEC_EXPIRES(r9) /* Increment exit count, poke other threads to exit */ @@ -2153,10 +2146,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM) 67: /* save expiry time of guest decrementer */ add r3, r3, r5 - ld r4, HSTATE_KVM_VCPU(r13) - ld r5, HSTATE_KVM_VCORE(r13) - ld r6, VCORE_TB_OFFSET_APPL(r5) - subf r3, r6, r3 /* convert to host TB value */ std r3, VCPU_DEC_EXPIRES(r4) #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING @@ -2253,9 +2242,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM) /* Restore guest decrementer */ ld r3, VCPU_DEC_EXPIRES(r4) - ld r5, HSTATE_KVM_VCORE(r13) - ld r6, VCORE_TB_OFFSET_APPL(r5) - add r3, r3, r6 /* convert host TB to guest TB value */ mftb r7 subf r3, r7, r3 mtspr SPRN_DEC, r3 From patchwork Mon Jul 26 03:50:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509733 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=lnOzFb4Y; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bD6KYCz9snk for ; Mon, 26 Jul 2021 13:52:08 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231598AbhGZDLe (ORCPT ); Sun, 25 Jul 2021 23:11:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53274 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231653AbhGZDLU (ORCPT ); Sun, 25 Jul 2021 23:11:20 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 13FA7C061757 for ; Sun, 25 Jul 2021 20:51:49 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id c16so4439995plh.7 for ; Sun, 25 Jul 2021 20:51:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=InnIngdCZKWqNjgAIB+VqHmGiblNFRvsyL7J2HN5wII=; b=lnOzFb4YJpYCfhT+K1PMdbx9Nfbf1/IjFRcap6NDJB/eSyMcf0i9r2cFi7yHEcf9Lk XUY9jpj3hk4Dh/bNFCvTMaogclMUopmicvdC6yu62jiws/hQnsFhprXg4BZQZzoi+2on oZskw7fOeWlxdEz1xWDwcMIDwJ64ZoWQnkYZDlDeFDtKT2pQVsi1LxJ9hhjahnwaBd4/ /8e2geN76hB7FTYGmPL0+tq2sYYWrxoQtMxIwMWSSQir57FBQYmvpgYdsEGnorqtoXGt +glVlDmKb9rJ9sTcRsf9gdEKssTnxtBo3lq6LCDbRqtm7Ffews80srcjUB1ey3Ym9+HH yuWw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=InnIngdCZKWqNjgAIB+VqHmGiblNFRvsyL7J2HN5wII=; b=F8QK25xiTMNqUYqfnKDwpvCoZsjiw2vNrlYGBmPP7pYNHRzSTsoLcCb6gU5oRFyXnF 
Ev+A6GVtK16EuEhd0l5SmCxN6PMDHVeKaZv04hzZwcV3///iUeAV83WAxpqZcyj4jC7v htl07SaJszMefYK7Jb4cHOoK1Ho0bHIIX62He9+xisGcmFFE1hIgjuK7o2TjLYV5DbBu X/Xz5VBT3+dlRNlmv5CgH43gCbksRf2idBEN3LtVQJwYj9P8FHsppoMIjsQratk2oZDG 2MQ+PyJVImPZ9qPZ9+fELIU37jh7gF+zYGqMkuyw9vlq4Gx9ken7EfqnYfHBALpkYoKY 1xjQ== X-Gm-Message-State: AOAM533j4EAnvRSvUJWsDyxUIi4AprOD7ldpQtIthxJZKvsq7Us7xHHJ 0NPgpngfyy+d1BUBaSqL/v3oZvTBHsE= X-Google-Smtp-Source: ABdhPJzs+7y0r81P3TH9r8iY5j4AV0YnhCSDQP1XzTu7Q2upvYgtQi2kw3DNSbHWVaLRRPEqPIwk+Q== X-Received: by 2002:a65:63d0:: with SMTP id n16mr6256947pgv.432.1627271508565; Sun, 25 Jul 2021 20:51:48 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:48 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 27/55] KVM: PPC: Book3S HV P9: Move TB updates Date: Mon, 26 Jul 2021 13:50:08 +1000 Message-Id: <20210726035036.739609-28-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the TB updates between saving and loading guest and host SPRs, to improve scheduling by keeping issue-NTC operations together as much as possible. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 36 +++++++++++++-------------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 814b0dfd590f..e7793bb806eb 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -215,15 +215,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; - if (vc->tb_offset) { - u64 new_tb = tb + vc->tb_offset; - mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); - vc->tb_offset_applied = vc->tb_offset; - } - /* Could avoid mfmsr by passing around, but probably no big deal */ msr = mfmsr(); @@ -238,6 +229,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_dawrx1 = mfspr(SPRN_DAWRX1); } + if (vc->tb_offset) { + u64 new_tb = tb + vc->tb_offset; + mtspr(SPRN_TBU40, new_tb); + tb = mftb(); + if ((tb & 0xffffff) < (new_tb & 0xffffff)) + mtspr(SPRN_TBU40, new_tb + 0x1000000); + vc->tb_offset_applied = vc->tb_offset; + } + if (vc->pcr) mtspr(SPRN_PCR, vc->pcr | PCR_MASK); mtspr(SPRN_DPDES, vc->dpdes); @@ -469,6 +469,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc tb = mftb(); vcpu->arch.dec_expires = dec + tb; + if (vc->tb_offset_applied) { + u64 new_tb = tb - vc->tb_offset_applied; + mtspr(SPRN_TBU40, new_tb); + tb = mftb(); + if ((tb & 0xffffff) < (new_tb & 0xffffff)) + mtspr(SPRN_TBU40, new_tb + 0x1000000); + vc->tb_offset_applied = 0; + } + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -503,15 +512,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); - if (vc->tb_offset_applied) { - u64 new_tb = mftb() - vc->tb_offset_applied; 
- mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); - vc->tb_offset_applied = 0; - } - /* HDEC must be at least as large as DEC, so decrementer_max fits */ mtspr(SPRN_HDEC, decrementer_max); From patchwork Mon Jul 26 03:50:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509734 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=VkQ9n7wZ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bF3B2hz9t9b for ; Mon, 26 Jul 2021 13:52:09 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231653AbhGZDLf (ORCPT ); Sun, 25 Jul 2021 23:11:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53284 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231657AbhGZDLW (ORCPT ); Sun, 25 Jul 2021 23:11:22 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E3CDC061757 for ; Sun, 25 Jul 2021 20:51:51 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id q17-20020a17090a2e11b02901757deaf2c8so12489284pjd.0 for ; Sun, 25 Jul 2021 20:51:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=szuQrLWzW02rwRsW+/XMURC8pEP/QBSmKnBix5HOfIQ=; b=VkQ9n7wZIMhurrbfXwlj1heawXKrkEgiWTM5eWgmmxQ95Fzfyu4tsjB8XjoOr4t8+N SlhFgn8AmBhPBIlmHYXaUPnB6jn9qL8sVqHxwyU5RJbNaq4bbiObg8VhxDvEtM6sAV0/ 2jd/qNFq5v07yrRC1Fmi76reMmEN4vlbLMupyoaLQw1ma23NOWdBJngQaBwualSqmQci 0M45gLFt/Tb1d5PvqllCxdBvBdYzRzZ/cHUMTHnbF3NS++LSQIkLz77/PgYFi5lSKNJ5 rO3X+CAaEEtX6lVLovbsTpil9xhEbmlBo+/3xNAqpPeYM+3DfYBsiegGh2DXesCrkneh x/bQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=szuQrLWzW02rwRsW+/XMURC8pEP/QBSmKnBix5HOfIQ=; b=EUWnKssIlfGVNoU4Tz/D+RRwYY0gwY/AjnVgPHLUuHkIsoH+VI0Kc+eqKI//xVsARx i7iyT/seMfKMMj+H8DrHFnUV/6TlxrtdSEhO8Vk7xT6YX3+aZ6blmW0At5UFg0wNGswY gUdIOVo8dcSVQuM9J4taVVUjoMddm4xVz4qVw7u3BKSXc+HQ+SN8JAs7HdsUxc6gfIxx wLAbPYMwrx9x/5ii8kEep3C7DDRsFOcO3XM5tZAW249QderP7Zsyooe/25lLwFdtiG5z lQEGB6srpfde07L6HKmSoZoLV9BQfsfZBir5II/t6bIzUiCIlnwKFnZ3LbTJBipldr3O K9GA== X-Gm-Message-State: AOAM532zVn3Te4AvMmgEsmJkMFz5MQyaiQbTbRr2TTXRvDyrYHhkv9zS MenNpFtmH0qd7/bhtPQi4HeH1oFicGc= X-Google-Smtp-Source: ABdhPJzu8C7eJu08vNCrRAuj+Ff7PY6Mep+yasDh1pwXPqvnFKvafcLCZaDBCgOey85HuMpR6NpLVA== X-Received: by 2002:a63:f712:: with SMTP id x18mr16446362pgh.389.1627271510841; Sun, 25 Jul 2021 20:51:50 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:50 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 28/55] KVM: PPC: Book3S HV P9: Optimise timebase reads Date: Mon, 26 Jul 2021 13:50:09 +1000 Message-Id: <20210726035036.739609-29-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Reduce the number of mfTB executed by passing the current timebase around entry and exit code rather than read it multiple times. -213 cycles (7578) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 2 +- arch/powerpc/kvm/book3s_hv.c | 88 +++++++++++++----------- arch/powerpc/kvm/book3s_hv_p9_entry.c | 33 +++++---- 3 files changed, 65 insertions(+), 58 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index df6bed4b2a46..52e2b7a352c7 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -154,7 +154,7 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu) return radix; } -int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr); +int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb); #define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */ #endif diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 027ae0b60e70..fa44bbca75e4 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -276,22 +276,22 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu) * they should never fail.) */ -static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc) +static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; spin_lock_irqsave(&vc->stoltb_lock, flags); - vc->preempt_tb = mftb(); + vc->preempt_tb = tb; spin_unlock_irqrestore(&vc->stoltb_lock, flags); } -static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc) +static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; spin_lock_irqsave(&vc->stoltb_lock, flags); if (vc->preempt_tb != TB_NIL) { - vc->stolen_tb += mftb() - vc->preempt_tb; + vc->stolen_tb += tb - vc->preempt_tb; vc->preempt_tb = TB_NIL; } spin_unlock_irqrestore(&vc->stoltb_lock, flags); @@ -301,6 +301,7 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; + u64 now = mftb(); /* * We can test vc->runner without taking the vcore lock, @@ -309,12 +310,12 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) * ever sets it to NULL. 
*/ if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING) - kvmppc_core_end_stolen(vc); + kvmppc_core_end_stolen(vc, now); spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST && vcpu->arch.busy_preempt != TB_NIL) { - vcpu->arch.busy_stolen += mftb() - vcpu->arch.busy_preempt; + vcpu->arch.busy_stolen += now - vcpu->arch.busy_preempt; vcpu->arch.busy_preempt = TB_NIL; } spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); @@ -324,13 +325,14 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; + u64 now = mftb(); if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING) - kvmppc_core_start_stolen(vc); + kvmppc_core_start_stolen(vc, now); spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST) - vcpu->arch.busy_preempt = mftb(); + vcpu->arch.busy_preempt = now; spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); } @@ -685,7 +687,7 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now) } static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, - struct kvmppc_vcore *vc) + struct kvmppc_vcore *vc, u64 tb) { struct dtl_entry *dt; struct lppaca *vpa; @@ -696,7 +698,7 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, dt = vcpu->arch.dtl_ptr; vpa = vcpu->arch.vpa.pinned_addr; - now = mftb(); + now = tb; core_stolen = vcore_stolen_time(vc, now); stolen = core_stolen - vcpu->arch.stolen_logged; vcpu->arch.stolen_logged = core_stolen; @@ -2889,14 +2891,14 @@ static void kvmppc_set_timer(struct kvm_vcpu *vcpu) extern int __kvmppc_vcore_entry(void); static void kvmppc_remove_runnable(struct kvmppc_vcore *vc, - struct kvm_vcpu *vcpu) + struct kvm_vcpu *vcpu, u64 tb) { u64 now; if (vcpu->arch.state != KVMPPC_VCPU_RUNNABLE) return; spin_lock_irq(&vcpu->arch.tbacct_lock); - now = mftb(); + now = tb; vcpu->arch.busy_stolen += vcore_stolen_time(vc, now) - vcpu->arch.stolen_logged; vcpu->arch.busy_preempt = now; @@ -3147,14 +3149,14 @@ static void kvmppc_vcore_preempt(struct kvmppc_vcore *vc) } /* Start accumulating stolen time */ - kvmppc_core_start_stolen(vc); + kvmppc_core_start_stolen(vc, mftb()); } static void kvmppc_vcore_end_preempt(struct kvmppc_vcore *vc) { struct preempted_vcore_list *lp; - kvmppc_core_end_stolen(vc); + kvmppc_core_end_stolen(vc, mftb()); if (!list_empty(&vc->preempt_list)) { lp = &per_cpu(preempted_vcores, vc->pcpu); spin_lock(&lp->lock); @@ -3281,7 +3283,7 @@ static void prepare_threads(struct kvmppc_vcore *vc) vcpu->arch.ret = RESUME_GUEST; else continue; - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); wake_up(&vcpu->arch.cpu_run); } } @@ -3300,7 +3302,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads) list_del_init(&pvc->preempt_list); if (pvc->runner == NULL) { pvc->vcore_state = VCORE_INACTIVE; - kvmppc_core_end_stolen(pvc); + kvmppc_core_end_stolen(pvc, mftb()); } spin_unlock(&pvc->lock); continue; @@ -3309,7 +3311,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads) spin_unlock(&pvc->lock); continue; } - kvmppc_core_end_stolen(pvc); + kvmppc_core_end_stolen(pvc, mftb()); pvc->vcore_state = VCORE_PIGGYBACK; if (cip->total_threads >= target_threads) break; @@ -3376,7 +3378,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master) else ++still_running; } else { - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); wake_up(&vcpu->arch.cpu_run); 
} } @@ -3385,7 +3387,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master) kvmppc_vcore_preempt(vc); } else if (vc->runner) { vc->vcore_state = VCORE_PREEMPT; - kvmppc_core_start_stolen(vc); + kvmppc_core_start_stolen(vc, mftb()); } else { vc->vcore_state = VCORE_INACTIVE; } @@ -3516,7 +3518,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) ((vc->num_threads > threads_per_subcore) || !on_primary_thread())) { for_each_runnable_thread(i, vcpu, vc) { vcpu->arch.ret = -EBUSY; - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); wake_up(&vcpu->arch.cpu_run); } goto out; @@ -3648,7 +3650,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) pvc->pcpu = pcpu + thr; for_each_runnable_thread(i, vcpu, pvc) { kvmppc_start_thread(vcpu, pvc); - kvmppc_create_dtl_entry(vcpu, pvc); + kvmppc_create_dtl_entry(vcpu, pvc, mftb()); trace_kvm_guest_enter(vcpu); if (!vcpu->arch.ptid) thr0_done = true; @@ -4102,20 +4104,17 @@ static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu) * Guest entry for POWER9 and later CPUs. */ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, - unsigned long lpcr) + unsigned long lpcr, u64 *tb) { struct kvmppc_vcore *vc = vcpu->arch.vcore; struct p9_host_os_sprs host_os_sprs; s64 dec; - u64 tb, next_timer; + u64 next_timer; unsigned long msr; int trap; - WARN_ON_ONCE(vcpu->arch.ceded); - - tb = mftb(); next_timer = timer_get_next_tb(); - if (tb >= next_timer) + if (*tb >= next_timer) return BOOK3S_INTERRUPT_HV_DECREMENTER; if (next_timer < time_limit) time_limit = next_timer; @@ -4212,7 +4211,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, * * XXX: Another day's problem. */ - mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - tb); + mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb); mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); @@ -4228,8 +4227,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; - tb = mftb(); - vcpu->arch.dec_expires = dec + (tb + vc->tb_offset); + *tb = mftb(); + vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && @@ -4241,7 +4240,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } else { kvmppc_xive_push_vcpu(vcpu); - trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr); + trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb); if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && !(vcpu->arch.shregs.msr & MSR_PR)) { unsigned long req = kvmppc_get_gpr(vcpu, 3); @@ -4272,6 +4271,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, store_spr_state(vcpu); + timer_rearm_host_dec(*tb); + restore_p9_host_os_sprs(vcpu, &host_os_sprs); store_fp_state(&vcpu->arch.fp); @@ -4291,8 +4292,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - timer_rearm_host_dec(tb); - kvmppc_subcore_exit_guest(); return trap; @@ -4534,7 +4533,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) if ((vc->vcore_state == VCORE_PIGGYBACK || vc->vcore_state == VCORE_RUNNING) && !VCORE_IS_EXITING(vc)) { - kvmppc_create_dtl_entry(vcpu, vc); + kvmppc_create_dtl_entry(vcpu, vc, mftb()); kvmppc_start_thread(vcpu, vc); trace_kvm_guest_enter(vcpu); } else if 
(vc->vcore_state == VCORE_SLEEPING) { @@ -4569,7 +4568,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) for_each_runnable_thread(i, v, vc) { kvmppc_core_prepare_to_enter(v); if (signal_pending(v->arch.run_task)) { - kvmppc_remove_runnable(vc, v); + kvmppc_remove_runnable(vc, v, mftb()); v->stat.signal_exits++; v->run->exit_reason = KVM_EXIT_INTR; v->arch.ret = -EINTR; @@ -4610,7 +4609,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) kvmppc_vcore_end_preempt(vc); if (vcpu->arch.state == KVMPPC_VCPU_RUNNABLE) { - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); vcpu->stat.signal_exits++; run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; @@ -4638,6 +4637,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; unsigned long flags; + u64 tb; trace_kvmppc_run_vcpu_enter(vcpu); @@ -4648,7 +4648,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vc = vcpu->arch.vcore; vcpu->arch.ceded = 0; vcpu->arch.run_task = current; - vcpu->arch.stolen_logged = vcore_stolen_time(vc, mftb()); vcpu->arch.state = KVMPPC_VCPU_RUNNABLE; vcpu->arch.busy_preempt = TB_NIL; vcpu->arch.last_inst = KVM_INST_FETCH_FAILED; @@ -4673,7 +4672,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_update_vpas(vcpu); init_vcore_to_run(vc); - vc->preempt_tb = TB_NIL; preempt_disable(); pcpu = smp_processor_id(); @@ -4683,6 +4681,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, /* flags save not required, but irq_pmu has no disable/enable API */ powerpc_local_irq_pmu_save(flags); + if (signal_pending(current)) goto sigpend; if (need_resched() || !kvm->arch.mmu_ready) @@ -4705,12 +4704,17 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, goto out; } + tb = mftb(); + + vcpu->arch.stolen_logged = vcore_stolen_time(vc, tb); + vc->preempt_tb = TB_NIL; + kvmppc_clear_host_core(pcpu); local_paca->kvm_hstate.napping = 0; local_paca->kvm_hstate.kvm_split_mode = NULL; kvmppc_start_thread(vcpu, vc); - kvmppc_create_dtl_entry(vcpu, vc); + kvmppc_create_dtl_entry(vcpu, vc, tb); trace_kvm_guest_enter(vcpu); vc->vcore_state = VCORE_RUNNING; @@ -4725,7 +4729,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, /* Tell lockdep that we're about to enable interrupts */ trace_hardirqs_on(); - trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr); + trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr, &tb); vcpu->arch.trap = trap; trace_hardirqs_off(); @@ -4754,7 +4758,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, * by L2 and the L1 decrementer is provided in hdec_expires */ if (kvmppc_core_pending_dec(vcpu) && - ((get_tb() < kvmppc_dec_expires_host_tb(vcpu)) || + ((tb < kvmppc_dec_expires_host_tb(vcpu)) || (trap == BOOK3S_INTERRUPT_SYSCALL && kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED))) kvmppc_core_dequeue_dec(vcpu); @@ -4790,7 +4794,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, trace_kvmppc_run_core(vc, 1); done: - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, tb); trace_kvmppc_run_vcpu_exit(vcpu); return vcpu->arch.ret; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index e7793bb806eb..2bd96d8256d1 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -183,13 +183,13 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) } } -int 
kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr) +int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; struct kvmppc_vcore *vc = vcpu->arch.vcore; s64 hdec, dec; - u64 tb, purr, spurr; + u64 purr, spurr; u64 *exsave; bool ri_set; int trap; @@ -203,8 +203,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_dawr1; unsigned long host_dawrx1; - tb = mftb(); - hdec = time_limit - tb; + hdec = time_limit - *tb; if (hdec < 0) return BOOK3S_INTERRUPT_HV_DECREMENTER; @@ -230,11 +229,13 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc } if (vc->tb_offset) { - u64 new_tb = tb + vc->tb_offset; + u64 new_tb = *tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + *tb = new_tb; vc->tb_offset_applied = vc->tb_offset; } @@ -317,7 +318,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); - mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); + mtspr(SPRN_DEC, vcpu->arch.dec_expires - *tb); #ifdef CONFIG_PPC_TRANSACTIONAL_MEM tm_return_to_guest: @@ -466,15 +467,17 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; - tb = mftb(); - vcpu->arch.dec_expires = dec + tb; + *tb = mftb(); + vcpu->arch.dec_expires = dec + *tb; if (vc->tb_offset_applied) { - u64 new_tb = tb - vc->tb_offset_applied; + u64 new_tb = *tb - vc->tb_offset_applied; mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + *tb = new_tb; vc->tb_offset_applied = 0; } From patchwork Mon Jul 26 03:50:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509735 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ebr+W46B; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bK16RWz9tB1 for ; Mon, 26 Jul 2021 13:52:11 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231689AbhGZDLg (ORCPT ); Sun, 25 Jul 2021 23:11:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53296 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231664AbhGZDLZ (ORCPT ); Sun, 25 Jul 2021 23:11:25 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 92A40C061757 for ; Sun, 25 Jul 2021 
20:51:53 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id k4-20020a17090a5144b02901731c776526so17812798pjm.4 for ; Sun, 25 Jul 2021 20:51:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=dkosrLJncWeVp01oPT/L/8v9iInUM0LnXzW1YmzkeCg=; b=ebr+W46BWAkrvxze47hy6Dw2mtEoNd77bXIc+lm1eJkO8/rFAAYX2AVbvQKIefbFN8 bXS3LF+wchbeRMkHzvGw8uQd6NXJwxZp53r9ag0dYZLmXDu7Fu8RAP6+9s6uipPcw5km Pmu8u7yQm2e6yXJ+t7GwakZ075KRrUnymW35ugql1r1HDLdkyuBTWlRcGWD+Y2UyM1y6 oriScHpLVwTO1Od6rKHnLnShrlqY0UCFlOjjlQjqCTdqnx40yWJwQZB0hl5fu23+8TQo I+cta2zlsxcJV4z/mkNwyNNpdaS6xb8bQ0Qp6s5+c2oYUf/e34bKhSpQ70SmcPbUV8k2 3DsQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=dkosrLJncWeVp01oPT/L/8v9iInUM0LnXzW1YmzkeCg=; b=sCqYiBwu+k6wXEtWEbrn9ZaHAaimTEcI0chcdiiMo5Gg3c4mYvEM0mlntmQ9vaIiT/ aAfl0Hj6SaRAJBsZwyeEsv7i+s/Ip1aTwJ8YK/AdY06VmXbid+fftiI80tJV59MuhP4g G5AVxmxrhBsRIqd5d5P3IM2Z4oLkbQoyUFAuFv/vlYzbA1JhGUvT15+MFvp7UZlEkzjR rvk/IOtyI1fm3gcnQ0p4eLa5lGpMao0En/vI21tYFSlRvPDaZQlaCC0T+GvuDila0Rxz o3xkWgsZh9kPmRT8pEAPRsxLOKP652C830GSAgzAmP2OIaaUxqGmZBWYuatEKNUYb4AA 4yOA== X-Gm-Message-State: AOAM530XGkMUYCAUoILnzk1ZQX+WkKITVLRsVNFaVkqFkracLoblB65L BAwHx9Uw4Dgz3vkn8p8WkYAhReTzkSA= X-Google-Smtp-Source: ABdhPJw9Omqh9W5+iCAZK7Gajq6QtnmP3F/tcC+JwM+NbCWFKQTQ8yJaoNlq5fJWBfBFDdeVqJFHDA== X-Received: by 2002:aa7:82cb:0:b029:2e6:f397:d248 with SMTP id f11-20020aa782cb0000b02902e6f397d248mr16056472pfn.52.1627271513035; Sun, 25 Jul 2021 20:51:53 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:52 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 29/55] KVM: PPC: Book3S HV P9: Avoid SPR scoreboard stalls Date: Mon, 26 Jul 2021 13:50:10 +1000 Message-Id: <20210726035036.739609-30-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Avoid interleaving mfSPR and mtSPR. 
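The change below boils down to grouping the mfSPR reads together and the mtSPR writes together instead of alternating them, so writes are not held up behind the SPR scoreboard waiting on the immediately preceding read. A minimal sketch of the pattern (illustrative only; SPRN_FOO, SPRN_BAR and struct spr_set are placeholder names, not kernel symbols):

/* Interleaved: each mtspr may stall on the mfspr issued just before it. */
static void swap_sprs_interleaved(struct spr_set *host, struct spr_set *guest)
{
	host->foo = mfspr(SPRN_FOO);
	mtspr(SPRN_FOO, guest->foo);
	host->bar = mfspr(SPRN_BAR);
	mtspr(SPRN_BAR, guest->bar);
}

/* Grouped: issue all reads first, then all writes, as this patch arranges. */
static void swap_sprs_grouped(struct spr_set *host, struct spr_set *guest)
{
	host->foo = mfspr(SPRN_FOO);
	host->bar = mfspr(SPRN_BAR);

	mtspr(SPRN_FOO, guest->foo);
	mtspr(SPRN_BAR, guest->bar);
}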
-151 cycles (7427) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 8 ++++---- arch/powerpc/kvm/book3s_hv_p9_entry.c | 19 +++++++++++-------- 2 files changed, 15 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index fa44bbca75e4..0d97138e6fa4 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4271,10 +4271,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, store_spr_state(vcpu); - timer_rearm_host_dec(*tb); - - restore_p9_host_os_sprs(vcpu, &host_os_sprs); - store_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC store_vr_state(&vcpu->arch.vr); @@ -4289,6 +4285,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, switch_pmu_to_host(vcpu, &host_os_sprs); + timer_rearm_host_dec(*tb); + + restore_p9_host_os_sprs(vcpu, &host_os_sprs); + vc->entry_exit_map = 0x101; vc->in_guest = 0; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 2bd96d8256d1..bd0021cd3a67 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -228,6 +228,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_dawrx1 = mfspr(SPRN_DAWRX1); } + local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); + local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); + if (vc->tb_offset) { u64 new_tb = *tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); @@ -244,8 +247,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_DPDES, vc->dpdes); mtspr(SPRN_VTB, vc->vtb); - local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); - local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); mtspr(SPRN_PURR, vcpu->arch.purr); mtspr(SPRN_SPURR, vcpu->arch.spurr); @@ -448,10 +449,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc /* Advance host PURR/SPURR by the amount used by guest */ purr = mfspr(SPRN_PURR); spurr = mfspr(SPRN_SPURR); - mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr + - purr - vcpu->arch.purr); - mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr + - spurr - vcpu->arch.spurr); + local_paca->kvm_hstate.host_purr += purr - vcpu->arch.purr; + local_paca->kvm_hstate.host_spurr += spurr - vcpu->arch.spurr; vcpu->arch.purr = purr; vcpu->arch.spurr = spurr; @@ -464,6 +463,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); + vc->dpdes = mfspr(SPRN_DPDES); + vc->vtb = mfspr(SPRN_VTB); + dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; @@ -481,6 +483,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = 0; } + mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr); + mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr); + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -509,8 +514,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (cpu_has_feature(CPU_FTR_ARCH_31)) asm volatile(PPC_CP_ABORT); - vc->dpdes = mfspr(SPRN_DPDES); - vc->vtb = mfspr(SPRN_VTB); mtspr(SPRN_DPDES, 0); if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); From patchwork Mon Jul 26 03:50:11 2021 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509736 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=pckFsxcy; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bK5tK3z9t0T for ; Mon, 26 Jul 2021 13:52:13 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231664AbhGZDLg (ORCPT ); Sun, 25 Jul 2021 23:11:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53306 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231672AbhGZDL1 (ORCPT ); Sun, 25 Jul 2021 23:11:27 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B35E0C061757 for ; Sun, 25 Jul 2021 20:51:55 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id mt6so11148677pjb.1 for ; Sun, 25 Jul 2021 20:51:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=7tAT3HvikoAXKDhRxpUtJx0vEQZUJEB9w4l+mRfsano=; b=pckFsxcyvVWskwDhpjF09B6N0xznPbT+bhpxLFexQIFt/JPw0PlfD2Ztiggi7DdIPz a8ca7OKQuAyP0S/QjHHdcwv5+O7Pgphz2GiIwZ31+vxa3FwLPTG3mq6e7nrOvYqUPAyB IU+VX3dDNwM0bZ6SkJmAypOS1HvrIalSpn0m3/7r3mPoEo3K1Ru2fy8MfSv8y8OvP1B8 S55Z5SEerfEFDhN7Hs3Xu+l5g2J0FX6r4oKWo8YUYr5m2QSDg4tNcw7H/9xnABrbsL/k Z5x7PCHgU/0mmXg2SfD4RmNsFeQXRTHsvo4fg1xYwAUn/y8D/sVQhFVYuzDuPSDunOdk NvJg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=7tAT3HvikoAXKDhRxpUtJx0vEQZUJEB9w4l+mRfsano=; b=j7Bb5+Ez5351Rd5OgjVAoiYEPslKb2ueajeGk6GjpHOAfRd5sPi34B9QjA4keKNu6h lBx1YKIqWky3eqaGZZIVmmupwNVjpLPocuWHiHHCk4/GPunmc25A1EnzsQuyUmEzkFu1 mQecJx3WVJ0d9WF+nr6BxmuamHkkLGPCdN1CZZakPADyqNwYC/hcZPPYpb55fByjn74p Vk4oGt2NSJwncIMQEK/r74vzHs2KcU5GLXrfoh2J5BiwJofGwG3rAZDIrr87QHs7N3GN yLC7rggF1/sFMNFKn7xm0mpOY0/JJZIK3yxL2lManCc0W14xpzhRACNCQ63V77GGWDxB DGAg== X-Gm-Message-State: AOAM532suWuXc+o8ZCRBbRqR+1TcAGEBQiJLdjjuK0mj2MMI7T/1myxo qkbsRiSQu1c4Zyu06HA0IcunzDisvhs= X-Google-Smtp-Source: ABdhPJy/lqE+wtpGnXPzD4SZ39YUCx/UMRezY6naKzzpLBBBh9dr1yv2Xn6pcPDAns81LmS8IxIkNw== X-Received: by 2002:a63:ed47:: with SMTP id m7mr16430085pgk.194.1627271515220; Sun, 25 Jul 2021 20:51:55 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:55 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 30/55] KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed Date: Mon, 26 Jul 2021 13:50:11 +1000 Message-Id: <20210726035036.739609-31-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Keep better track of the current SPR value in places where they are to be loaded with a new context, to reduce expensive mtSPR operations. -73 cycles (7354) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin Reviewed-by: Fabiano Rosas --- arch/powerpc/kvm/book3s_hv.c | 64 ++++++++++++++++++++++-------------- 1 file changed, 39 insertions(+), 25 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 0d97138e6fa4..56429b53f4dc 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4009,19 +4009,28 @@ static void switch_pmu_to_host(struct kvm_vcpu *vcpu, } } -static void load_spr_state(struct kvm_vcpu *vcpu) +static void load_spr_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) { - mtspr(SPRN_DSCR, vcpu->arch.dscr); - mtspr(SPRN_IAMR, vcpu->arch.iamr); - mtspr(SPRN_PSPB, vcpu->arch.pspb); - mtspr(SPRN_FSCR, vcpu->arch.fscr); mtspr(SPRN_TAR, vcpu->arch.tar); mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); mtspr(SPRN_BESCR, vcpu->arch.bescr); - mtspr(SPRN_TIDR, vcpu->arch.tid); - mtspr(SPRN_AMR, vcpu->arch.amr); - mtspr(SPRN_UAMOR, vcpu->arch.uamor); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + mtspr(SPRN_TIDR, vcpu->arch.tid); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, vcpu->arch.iamr); + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, vcpu->arch.amr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, vcpu->arch.uamor); + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, vcpu->arch.fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, vcpu->arch.dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, vcpu->arch.pspb); /* * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] @@ -4036,28 +4045,31 @@ static void load_spr_state(struct kvm_vcpu *vcpu) static void store_spr_state(struct kvm_vcpu *vcpu) { - vcpu->arch.ctrl = mfspr(SPRN_CTRLF); - - vcpu->arch.iamr = mfspr(SPRN_IAMR); - vcpu->arch.pspb = mfspr(SPRN_PSPB); - vcpu->arch.fscr = mfspr(SPRN_FSCR); vcpu->arch.tar = mfspr(SPRN_TAR); vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); vcpu->arch.bescr = mfspr(SPRN_BESCR); - vcpu->arch.tid = mfspr(SPRN_TIDR); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + vcpu->arch.tid = mfspr(SPRN_TIDR); + vcpu->arch.iamr = mfspr(SPRN_IAMR); vcpu->arch.amr = mfspr(SPRN_AMR); vcpu->arch.uamor = mfspr(SPRN_UAMOR); + vcpu->arch.fscr = mfspr(SPRN_FSCR); vcpu->arch.dscr = mfspr(SPRN_DSCR); + vcpu->arch.pspb = mfspr(SPRN_PSPB); + + vcpu->arch.ctrl = mfspr(SPRN_CTRLF); } static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { - host_os_sprs->dscr = mfspr(SPRN_DSCR); - host_os_sprs->tidr = mfspr(SPRN_TIDR); + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + host_os_sprs->tidr = mfspr(SPRN_TIDR); 
host_os_sprs->iamr = mfspr(SPRN_IAMR); host_os_sprs->amr = mfspr(SPRN_AMR); host_os_sprs->fscr = mfspr(SPRN_FSCR); + host_os_sprs->dscr = mfspr(SPRN_DSCR); } /* vcpu guest regs must already be saved */ @@ -4066,18 +4078,20 @@ static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, { mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - mtspr(SPRN_PSPB, 0); - mtspr(SPRN_UAMOR, 0); - - mtspr(SPRN_DSCR, host_os_sprs->dscr); - mtspr(SPRN_TIDR, host_os_sprs->tidr); - mtspr(SPRN_IAMR, host_os_sprs->iamr); - + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + mtspr(SPRN_TIDR, host_os_sprs->tidr); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, host_os_sprs->iamr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, 0); if (host_os_sprs->amr != vcpu->arch.amr) mtspr(SPRN_AMR, host_os_sprs->amr); - if (host_os_sprs->fscr != vcpu->arch.fscr) mtspr(SPRN_FSCR, host_os_sprs->fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, host_os_sprs->dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, 0); /* Save guest CTRL register, set runlatch to 1 */ if (!(vcpu->arch.ctrl & 1)) @@ -4169,7 +4183,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, #endif mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - load_spr_state(vcpu); + load_spr_state(vcpu, &host_os_sprs); if (kvmhv_on_pseries()) { /* From patchwork Mon Jul 26 03:50:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509737 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=HOKqpTPk; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bL4gZRz9tjs for ; Mon, 26 Jul 2021 13:52:14 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231695AbhGZDLh (ORCPT ); Sun, 25 Jul 2021 23:11:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53316 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231674AbhGZDL3 (ORCPT ); Sun, 25 Jul 2021 23:11:29 -0400 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E0092C061757 for ; Sun, 25 Jul 2021 20:51:57 -0700 (PDT) Received: by mail-pj1-x1034.google.com with SMTP id p5-20020a17090a8685b029015d1a9a6f1aso11972650pjn.1 for ; Sun, 25 Jul 2021 20:51:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=4FMW47dARaup+eOqBMjIjGjKfF+ltOIcNH9DSadt5C4=; b=HOKqpTPk5lE5fjYw+uVPSwYFwNbq92SAWi8giqUyT9LLGQSBP9CPlEVg6B7jxa2QME 4ckogK4xix6yhsQzIiFvajyHAdKGK/bUONAXdvqZofKGGA+6Q6gVNhUbILRxNDAp0Yyc aLkU/JwQLSk/9UUhwEm4zn5/45TuDir4OKZiu96YnarcMYYnxATog9gQaQpoS2ak8I6L CNJ78E0Yd3SlYBtmEj4ebqdL71AZhG9ZQ5pcV92xuBnDi5FSdZ0pNs729k2e7UIeIjg+ IhwmNIUlXyJtdz9obRQI0HzLLLNjfLpUDqgZ/vnuhcSGHCzFJlAFmTUvnpB8P+7KKgXQ UV8A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=4FMW47dARaup+eOqBMjIjGjKfF+ltOIcNH9DSadt5C4=; b=OZ/FlsdbBnrIamwUvLNUwLEaLpkMFvNwrjUq5l+n5vJ10gs44SfaM0Hno/K4bmEDUo 9QZ99pRM64jQ47rGpIaLtJgYxcXP/p7GMgHnYsaHGKg9CUvi9lddsH0aOdR2fptnZYrr v/JgOTgeIkArEA0zRWhBpo9R7ueKffYyWx8DOmV+e56FTwNc9wuYdcsd80IlAwZasFpi ak5or/dnIHLIKf1IUY3shnKsFO4TOuYTm4qkXCxiuGoipQjChldSHuMYx5ifwbSRu1EG wKEr5Z+sGV5NiQJvZXUjO4E5DfZWS0wPbd/IZ0tnmGzIIjEgQnL+/g0VlpbaPduHk3g2 4sew== X-Gm-Message-State: AOAM532ImHAEiNIHHjjOK1wLwftH1DuSzEupjWdtfVEk5j5naSFgmH7V c0ZaFyFkECfZSlnjbqLGZ7U/V+nKWcM= X-Google-Smtp-Source: ABdhPJwHn4E6yTCWXtErPNc+BeOQyynG2dD/DRZCLKaW97xyd4T92AoQfu8XAB7VKDUHnQbRlMCpQw== X-Received: by 2002:a63:1f24:: with SMTP id f36mr16264020pgf.151.1627271517416; Sun, 25 Jul 2021 20:51:57 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.51.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:51:57 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 31/55] KVM: PPC: Book3S HV P9: Juggle SPR switching around Date: Mon, 26 Jul 2021 13:50:12 +1000 Message-Id: <20210726035036.739609-32-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This juggles SPR switching on the entry and exit sides to be more symmetric, which makes the next refactoring patch possible with no functional change. Signed-off-by: Nicholas Piggin Reviewed-by: Fabiano Rosas --- arch/powerpc/kvm/book3s_hv.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 56429b53f4dc..c2c72875fca9 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4175,7 +4175,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, msr = mfmsr(); /* TM restore can update msr */ } - switch_pmu_to_guest(vcpu, &host_os_sprs); + load_spr_state(vcpu, &host_os_sprs); load_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC @@ -4183,7 +4183,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, #endif mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - load_spr_state(vcpu, &host_os_sprs); + switch_pmu_to_guest(vcpu, &host_os_sprs); if (kvmhv_on_pseries()) { /* @@ -4283,6 +4283,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.slb_max = 0; } + switch_pmu_to_host(vcpu, &host_os_sprs); + store_spr_state(vcpu); store_fp_state(&vcpu->arch.fp); @@ -4297,8 +4299,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - switch_pmu_to_host(vcpu, &host_os_sprs); - timer_rearm_host_dec(*tb); restore_p9_host_os_sprs(vcpu, &host_os_sprs); From patchwork Mon Jul 26 03:50:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509738 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; 
envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 32/55] KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions Date: Mon, 26 Jul 2021 13:50:13 +1000 Message-Id: <20210726035036.739609-33-npiggin@gmail.com> In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This should be no functional difference but makes the caller easier to read.
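A reduced sketch of the caller shape this produces (names follow the patch, but the bodies and types here are stubs, not the kernel sources): the load helper reports whether the MSR may have been changed by TM restore, so the caller only re-reads it when needed.

#include <stdbool.h>
#include <stdio.h>

struct vcpu_stub { bool tm_active; };           /* stand-in for struct kvm_vcpu */

static unsigned long mfmsr_stub(void) { return 0x9033UL; }  /* arbitrary stub value */

/* Returns true if the current MSR may have changed (e.g. TM restore ran). */
static bool load_vcpu_state(struct vcpu_stub *v)
{
	bool msr_may_change = false;

	if (v->tm_active) {
		/* ... restore transactional state ... */
		msr_may_change = true;
	}
	/* ... load guest SPRs and FP/VEC state ... */
	return msr_may_change;
}

static void store_vcpu_state(struct vcpu_stub *v)
{
	(void)v;
	/* ... save guest SPRs, FP/VEC and TM state ... */
}

int main(void)
{
	struct vcpu_stub v = { .tm_active = true };
	unsigned long msr = mfmsr_stub();

	if (load_vcpu_state(&v))        /* re-read MSR only when it may differ */
		msr = mfmsr_stub();
	/* ... enter guest, take an exit ... */
	store_vcpu_state(&v);

	printf("msr=%#lx\n", msr);
	return 0;
}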
Signed-off-by: Nicholas Piggin Reviewed-by: Fabiano Rosas --- arch/powerpc/kvm/book3s_hv.c | 65 +++++++++++++++++++++++------------- 1 file changed, 41 insertions(+), 24 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index c2c72875fca9..45211458ac05 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4062,6 +4062,44 @@ static void store_spr_state(struct kvm_vcpu *vcpu) vcpu->arch.ctrl = mfspr(SPRN_CTRLF); } +/* Returns true if current MSR and/or guest MSR may have changed */ +static bool load_vcpu_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + bool ret = false; + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + ret = true; + } + + load_spr_state(vcpu, host_os_sprs); + + load_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + load_vr_state(&vcpu->arch.vr); +#endif + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); + + return ret; +} + +static void store_vcpu_state(struct kvm_vcpu *vcpu) +{ + store_spr_state(vcpu); + + store_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + store_vr_state(&vcpu->arch.vr); +#endif + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); +} + static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { if (!cpu_has_feature(CPU_FTR_ARCH_31)) @@ -4169,19 +4207,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - msr = mfmsr(); /* TM restore can update msr */ - } - - load_spr_state(vcpu, &host_os_sprs); - - load_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - load_vr_state(&vcpu->arch.vr); -#endif - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) + msr = mfmsr(); /* MSR may have been updated */ switch_pmu_to_guest(vcpu, &host_os_sprs); @@ -4285,17 +4312,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, switch_pmu_to_host(vcpu, &host_os_sprs); - store_spr_state(vcpu); - - store_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - store_vr_state(&vcpu->arch.vr); -#endif - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); - - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + store_vcpu_state(vcpu); vcpu_vpa_increment_dispatch(vcpu); From patchwork Mon Jul 26 03:50:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509739 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=PEVQZA5V; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bM4HCRz9t0T for ; Mon, 26 Jul 2021 13:52:15 +1000 (AEST) Received: (majordomo@vger.kernel.org) 
by vger.kernel.org via listexpand id S231752AbhGZDLj (ORCPT ); Sun, 25 Jul 2021 23:11:39 -0400 From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 33/55] KVM: PPC: Book3S HV P9: Move host OS save/restore functions to built-in Date: Mon, 26 Jul 2021 13:50:14 +1000 Message-Id: <20210726035036.739609-34-npiggin@gmail.com> In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the P9 guest/host register switching functions to the built-in P9 entry code, and export it for nested to use as well. This allows more flexibility in scheduling these supervisor privileged SPR accesses with the HV privileged and PR SPR accesses in the low level entry code.
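A condensed view of the split the diff below implements, as a standalone sketch (the _stub types and names are placeholders; the real code uses struct kvm_vcpu, struct p9_host_os_sprs and EXPORT_SYMBOL_GPL()): a small shared header declares the switch helpers, the built-in entry file defines and exports them, and callers elsewhere simply use them.

#include <stdio.h>

struct kvm_vcpu_stub { int unused; };           /* stand-in for struct kvm_vcpu */
struct p9_sprs_stub { int unused; };            /* stand-in for p9_host_os_sprs */

/* "book3s_hv.h": declaration shared by built-in code and the HV module */
void switch_pmu_to_guest_stub(struct kvm_vcpu_stub *vcpu, struct p9_sprs_stub *host);

/* "book3s_hv_p9_entry.c": built-in definition; in the kernel it is followed
 * by EXPORT_SYMBOL_GPL(switch_pmu_to_guest) so modular and nested users can
 * reach it. */
void switch_pmu_to_guest_stub(struct kvm_vcpu_stub *vcpu, struct p9_sprs_stub *host)
{
	(void)vcpu; (void)host;
	/* ... save host PMU state, load guest PMU state ... */
}

/* "book3s_hv.c" or the nested entry path: a plain call, no local copy */
int main(void)
{
	struct kvm_vcpu_stub vcpu = { 0 };
	struct p9_sprs_stub host = { 0 };

	switch_pmu_to_guest_stub(&vcpu, &host);
	puts("switched");
	return 0;
}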
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 365 +------------------------- arch/powerpc/kvm/book3s_hv.h | 39 +++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 345 ++++++++++++++++++++++++ 3 files changed, 385 insertions(+), 364 deletions(-) create mode 100644 arch/powerpc/kvm/book3s_hv.h diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 45211458ac05..977712eb74e0 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -80,6 +80,7 @@ #include #include "book3s.h" +#include "book3s_hv.h" #define CREATE_TRACE_POINTS #include "trace_hv.h" @@ -3772,370 +3773,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) trace_kvmppc_run_core(vc, 1); } -/* - * Privileged (non-hypervisor) host registers to save. - */ -struct p9_host_os_sprs { - unsigned long dscr; - unsigned long tidr; - unsigned long iamr; - unsigned long amr; - unsigned long fscr; - - unsigned int pmc1; - unsigned int pmc2; - unsigned int pmc3; - unsigned int pmc4; - unsigned int pmc5; - unsigned int pmc6; - unsigned long mmcr0; - unsigned long mmcr1; - unsigned long mmcr2; - unsigned long mmcr3; - unsigned long mmcra; - unsigned long siar; - unsigned long sier1; - unsigned long sier2; - unsigned long sier3; - unsigned long sdar; -}; - -static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) -{ - if (!(mmcr0 & MMCR0_FC)) - goto do_freeze; - if (mmcra & MMCRA_SAMPLE_ENABLE) - goto do_freeze; - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - if (!(mmcr0 & MMCR0_PMCCEXT)) - goto do_freeze; - if (!(mmcra & MMCRA_BHRB_DISABLE)) - goto do_freeze; - } - return; - -do_freeze: - mmcr0 = MMCR0_FC; - mmcra = 0; - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mmcr0 |= MMCR0_PMCCEXT; - mmcra = MMCRA_BHRB_DISABLE; - } - - mtspr(SPRN_MMCR0, mmcr0); - mtspr(SPRN_MMCRA, mmcra); - isync(); -} - -static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - struct lppaca *lp; - int load_pmu = 1; - - lp = vcpu->arch.vpa.pinned_addr; - if (lp) - load_pmu = lp->pmcregs_in_use; - - /* Save host */ - if (ppc_get_pmu_inuse()) { - /* - * It might be better to put PMU handling (at least for the - * host) in the perf subsystem because it knows more about what - * is being used. - */ - - /* POWER9, POWER10 do not implement HPMC or SPMC */ - - host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0); - host_os_sprs->mmcra = mfspr(SPRN_MMCRA); - - freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra); - - host_os_sprs->pmc1 = mfspr(SPRN_PMC1); - host_os_sprs->pmc2 = mfspr(SPRN_PMC2); - host_os_sprs->pmc3 = mfspr(SPRN_PMC3); - host_os_sprs->pmc4 = mfspr(SPRN_PMC4); - host_os_sprs->pmc5 = mfspr(SPRN_PMC5); - host_os_sprs->pmc6 = mfspr(SPRN_PMC6); - host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1); - host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2); - host_os_sprs->sdar = mfspr(SPRN_SDAR); - host_os_sprs->siar = mfspr(SPRN_SIAR); - host_os_sprs->sier1 = mfspr(SPRN_SIER); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3); - host_os_sprs->sier2 = mfspr(SPRN_SIER2); - host_os_sprs->sier3 = mfspr(SPRN_SIER3); - } - } - -#ifdef CONFIG_PPC_PSERIES - /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */ - if (kvmhv_on_pseries()) { - barrier(); - get_lppaca()->pmcregs_in_use = load_pmu; - barrier(); - } -#endif - - /* - * Load guest. If the VPA said the PMCs are not in use but the guest - * tried to access them anyway, HFSCR[PM] will be set by the HFAC - * fault so we can make forward progress. 
- */ - if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) { - mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); - mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); - mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); - mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); - mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); - mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); - mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); - mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); - mtspr(SPRN_SDAR, vcpu->arch.sdar); - mtspr(SPRN_SIAR, vcpu->arch.siar); - mtspr(SPRN_SIER, vcpu->arch.sier[0]); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); - mtspr(SPRN_SIER2, vcpu->arch.sier[1]); - mtspr(SPRN_SIER3, vcpu->arch.sier[2]); - } - - /* Set MMCRA then MMCR0 last */ - mtspr(SPRN_MMCRA, vcpu->arch.mmcra); - mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); - /* No isync necessary because we're starting counters */ - - if (!vcpu->arch.nested && - (vcpu->arch.hfscr_permitted & HFSCR_PM)) - vcpu->arch.hfscr |= HFSCR_PM; - } -} - -static void switch_pmu_to_host(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - struct lppaca *lp; - int save_pmu = 1; - - lp = vcpu->arch.vpa.pinned_addr; - if (lp) - save_pmu = lp->pmcregs_in_use; - - if (save_pmu) { - vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0); - vcpu->arch.mmcra = mfspr(SPRN_MMCRA); - - freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra); - - vcpu->arch.pmc[0] = mfspr(SPRN_PMC1); - vcpu->arch.pmc[1] = mfspr(SPRN_PMC2); - vcpu->arch.pmc[2] = mfspr(SPRN_PMC3); - vcpu->arch.pmc[3] = mfspr(SPRN_PMC4); - vcpu->arch.pmc[4] = mfspr(SPRN_PMC5); - vcpu->arch.pmc[5] = mfspr(SPRN_PMC6); - vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1); - vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2); - vcpu->arch.sdar = mfspr(SPRN_SDAR); - vcpu->arch.siar = mfspr(SPRN_SIAR); - vcpu->arch.sier[0] = mfspr(SPRN_SIER); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3); - vcpu->arch.sier[1] = mfspr(SPRN_SIER2); - vcpu->arch.sier[2] = mfspr(SPRN_SIER3); - } - - } else if (vcpu->arch.hfscr & HFSCR_PM) { - /* - * The guest accessed PMC SPRs without specifying they should - * be preserved, or it cleared pmcregs_in_use after the last - * access. Just ensure they are frozen. - */ - freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); - - /* - * Demand-fault PMU register access in the guest. - * - * This is used to grab the guest's VPA pmcregs_in_use value - * and reflect it into the host's VPA in the case of a nested - * hypervisor. - * - * It also avoids having to zero-out SPRs after each guest - * exit to avoid side-channels when. - * - * This is cleared here when we exit the guest, so later HFSCR - * interrupt handling can add it back to run the guest with - * PM enabled next time. 
- */ - if (!vcpu->arch.nested) - vcpu->arch.hfscr &= ~HFSCR_PM; - } /* otherwise the PMU should still be frozen */ - -#ifdef CONFIG_PPC_PSERIES - if (kvmhv_on_pseries()) { - barrier(); - get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); - barrier(); - } -#endif - - if (ppc_get_pmu_inuse()) { - mtspr(SPRN_PMC1, host_os_sprs->pmc1); - mtspr(SPRN_PMC2, host_os_sprs->pmc2); - mtspr(SPRN_PMC3, host_os_sprs->pmc3); - mtspr(SPRN_PMC4, host_os_sprs->pmc4); - mtspr(SPRN_PMC5, host_os_sprs->pmc5); - mtspr(SPRN_PMC6, host_os_sprs->pmc6); - mtspr(SPRN_MMCR1, host_os_sprs->mmcr1); - mtspr(SPRN_MMCR2, host_os_sprs->mmcr2); - mtspr(SPRN_SDAR, host_os_sprs->sdar); - mtspr(SPRN_SIAR, host_os_sprs->siar); - mtspr(SPRN_SIER, host_os_sprs->sier1); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mtspr(SPRN_MMCR3, host_os_sprs->mmcr3); - mtspr(SPRN_SIER2, host_os_sprs->sier2); - mtspr(SPRN_SIER3, host_os_sprs->sier3); - } - - /* Set MMCRA then MMCR0 last */ - mtspr(SPRN_MMCRA, host_os_sprs->mmcra); - mtspr(SPRN_MMCR0, host_os_sprs->mmcr0); - isync(); - } -} - -static void load_spr_state(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - mtspr(SPRN_TAR, vcpu->arch.tar); - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); - - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - mtspr(SPRN_TIDR, vcpu->arch.tid); - if (host_os_sprs->iamr != vcpu->arch.iamr) - mtspr(SPRN_IAMR, vcpu->arch.iamr); - if (host_os_sprs->amr != vcpu->arch.amr) - mtspr(SPRN_AMR, vcpu->arch.amr); - if (vcpu->arch.uamor != 0) - mtspr(SPRN_UAMOR, vcpu->arch.uamor); - if (host_os_sprs->fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, vcpu->arch.fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) - mtspr(SPRN_DSCR, vcpu->arch.dscr); - if (vcpu->arch.pspb != 0) - mtspr(SPRN_PSPB, vcpu->arch.pspb); - - /* - * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] - * clear (or hstate set appropriately to catch those registers - * being clobbered if we take a MCE or SRESET), so those are done - * later. 
- */ - - if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, 0); -} - -static void store_spr_state(struct kvm_vcpu *vcpu) -{ - vcpu->arch.tar = mfspr(SPRN_TAR); - vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); - vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); - vcpu->arch.bescr = mfspr(SPRN_BESCR); - - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - vcpu->arch.tid = mfspr(SPRN_TIDR); - vcpu->arch.iamr = mfspr(SPRN_IAMR); - vcpu->arch.amr = mfspr(SPRN_AMR); - vcpu->arch.uamor = mfspr(SPRN_UAMOR); - vcpu->arch.fscr = mfspr(SPRN_FSCR); - vcpu->arch.dscr = mfspr(SPRN_DSCR); - vcpu->arch.pspb = mfspr(SPRN_PSPB); - - vcpu->arch.ctrl = mfspr(SPRN_CTRLF); -} - -/* Returns true if current MSR and/or guest MSR may have changed */ -static bool load_vcpu_state(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - bool ret = false; - - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - ret = true; - } - - load_spr_state(vcpu, host_os_sprs); - - load_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - load_vr_state(&vcpu->arch.vr); -#endif - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - - return ret; -} - -static void store_vcpu_state(struct kvm_vcpu *vcpu) -{ - store_spr_state(vcpu); - - store_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - store_vr_state(&vcpu->arch.vr); -#endif - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); - - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); -} - -static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) -{ - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - host_os_sprs->tidr = mfspr(SPRN_TIDR); - host_os_sprs->iamr = mfspr(SPRN_IAMR); - host_os_sprs->amr = mfspr(SPRN_AMR); - host_os_sprs->fscr = mfspr(SPRN_FSCR); - host_os_sprs->dscr = mfspr(SPRN_DSCR); -} - -/* vcpu guest regs must already be saved */ -static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - mtspr(SPRN_TIDR, host_os_sprs->tidr); - if (host_os_sprs->iamr != vcpu->arch.iamr) - mtspr(SPRN_IAMR, host_os_sprs->iamr); - if (vcpu->arch.uamor != 0) - mtspr(SPRN_UAMOR, 0); - if (host_os_sprs->amr != vcpu->arch.amr) - mtspr(SPRN_AMR, host_os_sprs->amr); - if (host_os_sprs->fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, host_os_sprs->fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) - mtspr(SPRN_DSCR, host_os_sprs->dscr); - if (vcpu->arch.pspb != 0) - mtspr(SPRN_PSPB, 0); - - /* Save guest CTRL register, set runlatch to 1 */ - if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, 1); -} - static inline bool hcall_is_xics(unsigned long req) { return req == H_EOI || req == H_CPPR || req == H_IPI || diff --git a/arch/powerpc/kvm/book3s_hv.h b/arch/powerpc/kvm/book3s_hv.h new file mode 100644 index 000000000000..a9065a380547 --- /dev/null +++ b/arch/powerpc/kvm/book3s_hv.h @@ -0,0 +1,39 @@ + +/* + * Privileged (non-hypervisor) host registers to save. 
+ */ +struct p9_host_os_sprs { + unsigned long dscr; + unsigned long tidr; + unsigned long iamr; + unsigned long amr; + unsigned long fscr; + + unsigned int pmc1; + unsigned int pmc2; + unsigned int pmc3; + unsigned int pmc4; + unsigned int pmc5; + unsigned int pmc6; + unsigned long mmcr0; + unsigned long mmcr1; + unsigned long mmcr2; + unsigned long mmcr3; + unsigned long mmcra; + unsigned long siar; + unsigned long sier1; + unsigned long sier2; + unsigned long sier3; + unsigned long sdar; +}; + +bool load_vcpu_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); +void store_vcpu_state(struct kvm_vcpu *vcpu); +void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs); +void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); +void switch_pmu_to_guest(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); +void switch_pmu_to_host(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index bd0021cd3a67..5a34f0199bfe 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -4,8 +4,353 @@ #include #include #include +#include #include +#include "book3s_hv.h" + +static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) +{ + if (!(mmcr0 & MMCR0_FC)) + goto do_freeze; + if (mmcra & MMCRA_SAMPLE_ENABLE) + goto do_freeze; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + if (!(mmcr0 & MMCR0_PMCCEXT)) + goto do_freeze; + if (!(mmcra & MMCRA_BHRB_DISABLE)) + goto do_freeze; + } + return; + +do_freeze: + mmcr0 = MMCR0_FC; + mmcra = 0; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mmcr0 |= MMCR0_PMCCEXT; + mmcra = MMCRA_BHRB_DISABLE; + } + + mtspr(SPRN_MMCR0, mmcr0); + mtspr(SPRN_MMCRA, mmcra); + isync(); +} + +void switch_pmu_to_guest(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + struct lppaca *lp; + int load_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + load_pmu = lp->pmcregs_in_use; + + /* Save host */ + if (ppc_get_pmu_inuse()) { + /* + * It might be better to put PMU handling (at least for the + * host) in the perf subsystem because it knows more about what + * is being used. + */ + + /* POWER9, POWER10 do not implement HPMC or SPMC */ + + host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0); + host_os_sprs->mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra); + + host_os_sprs->pmc1 = mfspr(SPRN_PMC1); + host_os_sprs->pmc2 = mfspr(SPRN_PMC2); + host_os_sprs->pmc3 = mfspr(SPRN_PMC3); + host_os_sprs->pmc4 = mfspr(SPRN_PMC4); + host_os_sprs->pmc5 = mfspr(SPRN_PMC5); + host_os_sprs->pmc6 = mfspr(SPRN_PMC6); + host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1); + host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2); + host_os_sprs->sdar = mfspr(SPRN_SDAR); + host_os_sprs->siar = mfspr(SPRN_SIAR); + host_os_sprs->sier1 = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3); + host_os_sprs->sier2 = mfspr(SPRN_SIER2); + host_os_sprs->sier3 = mfspr(SPRN_SIER3); + } + } + +#ifdef CONFIG_PPC_PSERIES + /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */ + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = load_pmu; + barrier(); + } +#endif + + /* + * Load guest. If the VPA said the PMCs are not in use but the guest + * tried to access them anyway, HFSCR[PM] will be set by the HFAC + * fault so we can make forward progress. 
+ */ + if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) { + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); + mtspr(SPRN_SDAR, vcpu->arch.sdar); + mtspr(SPRN_SIAR, vcpu->arch.siar); + mtspr(SPRN_SIER, vcpu->arch.sier[0]); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); + mtspr(SPRN_SIER2, vcpu->arch.sier[1]); + mtspr(SPRN_SIER3, vcpu->arch.sier[2]); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, vcpu->arch.mmcra); + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); + /* No isync necessary because we're starting counters */ + + if (!vcpu->arch.nested && + (vcpu->arch.hfscr_permitted & HFSCR_PM)) + vcpu->arch.hfscr |= HFSCR_PM; + } +} +EXPORT_SYMBOL_GPL(switch_pmu_to_guest); + +void switch_pmu_to_host(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + struct lppaca *lp; + int save_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + save_pmu = lp->pmcregs_in_use; + + if (save_pmu) { + vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0); + vcpu->arch.mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra); + + vcpu->arch.pmc[0] = mfspr(SPRN_PMC1); + vcpu->arch.pmc[1] = mfspr(SPRN_PMC2); + vcpu->arch.pmc[2] = mfspr(SPRN_PMC3); + vcpu->arch.pmc[3] = mfspr(SPRN_PMC4); + vcpu->arch.pmc[4] = mfspr(SPRN_PMC5); + vcpu->arch.pmc[5] = mfspr(SPRN_PMC6); + vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1); + vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2); + vcpu->arch.sdar = mfspr(SPRN_SDAR); + vcpu->arch.siar = mfspr(SPRN_SIAR); + vcpu->arch.sier[0] = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3); + vcpu->arch.sier[1] = mfspr(SPRN_SIER2); + vcpu->arch.sier[2] = mfspr(SPRN_SIER3); + } + + } else if (vcpu->arch.hfscr & HFSCR_PM) { + /* + * The guest accessed PMC SPRs without specifying they should + * be preserved, or it cleared pmcregs_in_use after the last + * access. Just ensure they are frozen. + */ + freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); + + /* + * Demand-fault PMU register access in the guest. + * + * This is used to grab the guest's VPA pmcregs_in_use value + * and reflect it into the host's VPA in the case of a nested + * hypervisor. + * + * It also avoids having to zero-out SPRs after each guest + * exit to avoid side-channels when. + * + * This is cleared here when we exit the guest, so later HFSCR + * interrupt handling can add it back to run the guest with + * PM enabled next time. 
+ */ + if (!vcpu->arch.nested) + vcpu->arch.hfscr &= ~HFSCR_PM; + } /* otherwise the PMU should still be frozen */ + +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); + barrier(); + } +#endif + + if (ppc_get_pmu_inuse()) { + mtspr(SPRN_PMC1, host_os_sprs->pmc1); + mtspr(SPRN_PMC2, host_os_sprs->pmc2); + mtspr(SPRN_PMC3, host_os_sprs->pmc3); + mtspr(SPRN_PMC4, host_os_sprs->pmc4); + mtspr(SPRN_PMC5, host_os_sprs->pmc5); + mtspr(SPRN_PMC6, host_os_sprs->pmc6); + mtspr(SPRN_MMCR1, host_os_sprs->mmcr1); + mtspr(SPRN_MMCR2, host_os_sprs->mmcr2); + mtspr(SPRN_SDAR, host_os_sprs->sdar); + mtspr(SPRN_SIAR, host_os_sprs->siar); + mtspr(SPRN_SIER, host_os_sprs->sier1); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, host_os_sprs->mmcr3); + mtspr(SPRN_SIER2, host_os_sprs->sier2); + mtspr(SPRN_SIER3, host_os_sprs->sier3); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, host_os_sprs->mmcra); + mtspr(SPRN_MMCR0, host_os_sprs->mmcr0); + isync(); + } +} +EXPORT_SYMBOL_GPL(switch_pmu_to_host); + +static void load_spr_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + mtspr(SPRN_TAR, vcpu->arch.tar); + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + mtspr(SPRN_BESCR, vcpu->arch.bescr); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + mtspr(SPRN_TIDR, vcpu->arch.tid); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, vcpu->arch.iamr); + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, vcpu->arch.amr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, vcpu->arch.uamor); + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, vcpu->arch.fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, vcpu->arch.dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, vcpu->arch.pspb); + + /* + * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] + * clear (or hstate set appropriately to catch those registers + * being clobbered if we take a MCE or SRESET), so those are done + * later. 
+ */ + + if (!(vcpu->arch.ctrl & 1)) + mtspr(SPRN_CTRLT, 0); +} + +static void store_spr_state(struct kvm_vcpu *vcpu) +{ + vcpu->arch.tar = mfspr(SPRN_TAR); + vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); + vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); + vcpu->arch.bescr = mfspr(SPRN_BESCR); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + vcpu->arch.tid = mfspr(SPRN_TIDR); + vcpu->arch.iamr = mfspr(SPRN_IAMR); + vcpu->arch.amr = mfspr(SPRN_AMR); + vcpu->arch.uamor = mfspr(SPRN_UAMOR); + vcpu->arch.fscr = mfspr(SPRN_FSCR); + vcpu->arch.dscr = mfspr(SPRN_DSCR); + vcpu->arch.pspb = mfspr(SPRN_PSPB); + + vcpu->arch.ctrl = mfspr(SPRN_CTRLF); +} + +/* Returns true if current MSR and/or guest MSR may have changed */ +bool load_vcpu_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + bool ret = false; + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + ret = true; + } + + load_spr_state(vcpu, host_os_sprs); + + load_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + load_vr_state(&vcpu->arch.vr); +#endif + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); + + return ret; +} +EXPORT_SYMBOL_GPL(load_vcpu_state); + +void store_vcpu_state(struct kvm_vcpu *vcpu) +{ + store_spr_state(vcpu); + + store_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + store_vr_state(&vcpu->arch.vr); +#endif + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); +} +EXPORT_SYMBOL_GPL(store_vcpu_state); + +void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) +{ + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + host_os_sprs->tidr = mfspr(SPRN_TIDR); + host_os_sprs->iamr = mfspr(SPRN_IAMR); + host_os_sprs->amr = mfspr(SPRN_AMR); + host_os_sprs->fscr = mfspr(SPRN_FSCR); + host_os_sprs->dscr = mfspr(SPRN_DSCR); +} +EXPORT_SYMBOL_GPL(save_p9_host_os_sprs); + +/* vcpu guest regs must already be saved */ +void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + mtspr(SPRN_TIDR, host_os_sprs->tidr); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, host_os_sprs->iamr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, 0); + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, host_os_sprs->amr); + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, host_os_sprs->fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, host_os_sprs->dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, 0); + + /* Save guest CTRL register, set runlatch to 1 */ + if (!(vcpu->arch.ctrl & 1)) + mtspr(SPRN_CTRLT, 1); +} +EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs); + #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING static void __start_timing(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next) { From patchwork Mon Jul 26 03:50:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509740 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) 
header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=YBpupHco; dkim-atps=neutral From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 34/55] KVM: PPC: Book3S HV P9: Move nested guest entry into its own function Date: Mon, 26 Jul 2021 13:50:15 +1000 Message-Id: <20210726035036.739609-35-npiggin@gmail.com> In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This is just refactoring.
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 125 +++++++++++++++++++---------------- 1 file changed, 67 insertions(+), 58 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 977712eb74e0..cb66c9534dbf 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3789,6 +3789,72 @@ static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu) } } +/* call our hypervisor to load up HV regs and go */ +static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + unsigned long host_psscr; + struct hv_guest_state hvregs; + int trap; + s64 dec; + + /* + * We need to save and restore the guest visible part of the + * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor + * doesn't do this for us. Note only required if pseries since + * this is done in kvmhv_vcpu_entry_p9() below otherwise. + */ + host_psscr = mfspr(SPRN_PSSCR_PR); + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + kvmhv_save_hv_regs(vcpu, &hvregs); + hvregs.lpcr = lpcr; + vcpu->arch.regs.msr = vcpu->arch.shregs.msr; + hvregs.version = HV_GUEST_STATE_VERSION; + if (vcpu->arch.nested) { + hvregs.lpid = vcpu->arch.nested->shadow_lpid; + hvregs.vcpu_token = vcpu->arch.nested_vcpu_id; + } else { + hvregs.lpid = vcpu->kvm->arch.lpid; + hvregs.vcpu_token = vcpu->vcpu_id; + } + hvregs.hdec_expiry = time_limit; + + /* + * When setting DEC, we must always deal with irq_work_raise + * via NMI vs setting DEC. The problem occurs right as we + * switch into guest mode if a NMI hits and sets pending work + * and sets DEC, then that will apply to the guest and not + * bring us back to the host. + * + * irq_work_raise could check a flag (or possibly LPCR[HDICE] + * for example) and set HDEC to 1? That wouldn't solve the + * nested hv case which needs to abort the hcall or zero the + * time limit. + * + * XXX: Another day's problem. + */ + mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb); + + mtspr(SPRN_DAR, vcpu->arch.shregs.dar); + mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); + trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), + __pa(&vcpu->arch.regs)); + kvmhv_restore_hv_return_state(vcpu, &hvregs); + vcpu->arch.shregs.msr = vcpu->arch.regs.msr; + vcpu->arch.shregs.dar = mfspr(SPRN_DAR); + vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); + vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); + mtspr(SPRN_PSSCR_PR, host_psscr); + + dec = mfspr(SPRN_DEC); + if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ + dec = (s32) dec; + *tb = mftb(); + vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); + + return trap; +} + /* * Guest entry for POWER9 and later CPUs. */ @@ -3797,7 +3863,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, { struct kvmppc_vcore *vc = vcpu->arch.vcore; struct p9_host_os_sprs host_os_sprs; - s64 dec; u64 next_timer; unsigned long msr; int trap; @@ -3850,63 +3915,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, switch_pmu_to_guest(vcpu, &host_os_sprs); if (kvmhv_on_pseries()) { - /* - * We need to save and restore the guest visible part of the - * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor - * doesn't do this for us. Note only required if pseries since - * this is done in kvmhv_vcpu_entry_p9() below otherwise. 
- */ - unsigned long host_psscr; - /* call our hypervisor to load up HV regs and go */ - struct hv_guest_state hvregs; - - host_psscr = mfspr(SPRN_PSSCR_PR); - mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); - kvmhv_save_hv_regs(vcpu, &hvregs); - hvregs.lpcr = lpcr; - vcpu->arch.regs.msr = vcpu->arch.shregs.msr; - hvregs.version = HV_GUEST_STATE_VERSION; - if (vcpu->arch.nested) { - hvregs.lpid = vcpu->arch.nested->shadow_lpid; - hvregs.vcpu_token = vcpu->arch.nested_vcpu_id; - } else { - hvregs.lpid = vcpu->kvm->arch.lpid; - hvregs.vcpu_token = vcpu->vcpu_id; - } - hvregs.hdec_expiry = time_limit; - - /* - * When setting DEC, we must always deal with irq_work_raise - * via NMI vs setting DEC. The problem occurs right as we - * switch into guest mode if a NMI hits and sets pending work - * and sets DEC, then that will apply to the guest and not - * bring us back to the host. - * - * irq_work_raise could check a flag (or possibly LPCR[HDICE] - * for example) and set HDEC to 1? That wouldn't solve the - * nested hv case which needs to abort the hcall or zero the - * time limit. - * - * XXX: Another day's problem. - */ - mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb); - - mtspr(SPRN_DAR, vcpu->arch.shregs.dar); - mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); - trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), - __pa(&vcpu->arch.regs)); - kvmhv_restore_hv_return_state(vcpu, &hvregs); - vcpu->arch.shregs.msr = vcpu->arch.regs.msr; - vcpu->arch.shregs.dar = mfspr(SPRN_DAR); - vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); - vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); - mtspr(SPRN_PSSCR_PR, host_psscr); - - dec = mfspr(SPRN_DEC); - if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ - dec = (s32) dec; - *tb = mftb(); - vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); + trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb); /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && From patchwork Mon Jul 26 03:50:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509741 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=gHYbre+z; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bN6QS1z9tk0 for ; Mon, 26 Jul 2021 13:52:16 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231684AbhGZDLl (ORCPT ); Sun, 25 Jul 2021 23:11:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53354 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231721AbhGZDLi (ORCPT ); Sun, 25 Jul 2021 23:11:38 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E0636C061757 for ; Sun, 25 Jul 2021 20:52:06 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id k4-20020a17090a5144b02901731c776526so17813324pjm.4 for ; Sun, 25 Jul 2021 20:52:06 -0700 (PDT) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+MWm14uY7P1fSf/YhIqEtrAZmMe0JMxZyWqzL2Rlp0w=; b=gHYbre+zoSHTn/bjhWokit9lnR7JT8muvJl+YYCdx5Cos5LSAStwhkNyIAzohFOUmE C8hUF6M130O6Vucz17nLFzuzw64xFw/eqvmT7Ob1f6ccXJNH/yH4SsHCbbn0d/zDhLsO FFXN8shSrxQQnFXwUxbnbt4wGopxzcc8weRMvAFJ9tTutK9hvZKamFKlK3r+66P3kRcP WbRJPJqznSD5BqjQe+CL/kliDpl9p9yG2Yj3ooaFVGGrK3bHxWwJtSEHauC925FIgBzO 0HT6F5R22F2eaqk3XSa9dtTa7bvcmp1qPPI0n+wKYWKdUuGQQpsV7GDTP8JBtPI0Rfx+ ReAg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+MWm14uY7P1fSf/YhIqEtrAZmMe0JMxZyWqzL2Rlp0w=; b=Qpl48HQmkkIzSD4n30NngYbOHrCFWIWPSoLAbfaeJWZKmBw8e+VgmtcMd4Bend83Vl evLhxhIZ/axlJZOrIZnWZoiyuMb6LN7J+4rgmMUO5T7srXee6ck6TMjdVZN0PQg1VOCn 50kU34m0UzNpoqmhLle+fr4H9cbknvpTOlTv5VcvQAET6WhA6qoOUOiU/WNyrYgYmasV wCMurKRyxtIfpNHFa+QAj6okEe/PsFNZkla5ZnvFmxOZWGEW4/1ZFpfqExE6pHXNGYwZ 5AY9oQCzsMfIjzMSdgFTTVmQKqZKs4R4E5MXSN3fMKsJPOD7PO6Nv/VbV8UD7zY/wRkw E3cA== X-Gm-Message-State: AOAM530EJrmrYR5Z1NQM6fOGeZuJ1xOd8z29tiqy5BQXNa2iLkO3XNGd 9kOkXAWtW0RCqodqZuhJclDvh5ajH+4= X-Google-Smtp-Source: ABdhPJxhXQUApJnv5htINwSDdCfylRkOyd7ywK9QL5xMbUpxqB8lvuBxIbI3nmkF5q93a1gRzyooqA== X-Received: by 2002:a62:6103:0:b029:396:f515:94bf with SMTP id v3-20020a6261030000b0290396f51594bfmr4151205pfb.4.1627271526346; Sun, 25 Jul 2021 20:52:06 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:06 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 35/55] KVM: PPC: Book3S HV P9: Move remaining SPR and MSR access into low level entry Date: Mon, 26 Jul 2021 13:50:16 +1000 Message-Id: <20210726035036.739609-36-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move register saving and loading from kvmhv_p9_guest_entry() into the HV and nested entry handlers. Accesses are scheduled to reduce mtSPR / mfSPR interleaving which reduces SPR scoreboard stalls. 
XXX +212 cycles here somewhere (7566), investigate POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 79 ++++++++++------------ arch/powerpc/kvm/book3s_hv_p9_entry.c | 96 ++++++++++++++++++++------- 2 files changed, 109 insertions(+), 66 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index cb66c9534dbf..8c1c93ebd669 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3794,9 +3794,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long host_psscr; + unsigned long msr; struct hv_guest_state hvregs; - int trap; + struct p9_host_os_sprs host_os_sprs; s64 dec; + int trap; + + switch_pmu_to_guest(vcpu, &host_os_sprs); + + save_p9_host_os_sprs(&host_os_sprs); /* * We need to save and restore the guest visible part of the @@ -3805,6 +3811,27 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns * this is done in kvmhv_vcpu_entry_p9() below otherwise. */ host_psscr = mfspr(SPRN_PSSCR_PR); + + hard_irq_disable(); + if (lazy_irq_pending()) + return 0; + + /* MSR bits may have been cleared by context switch */ + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) + msr = mfmsr(); /* TM restore can update msr */ + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); kvmhv_save_hv_regs(vcpu, &hvregs); hvregs.lpcr = lpcr; @@ -3846,12 +3873,20 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); mtspr(SPRN_PSSCR_PR, host_psscr); + store_vcpu_state(vcpu); + dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; *tb = mftb(); vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); + timer_rearm_host_dec(*tb); + + restore_p9_host_os_sprs(vcpu, &host_os_sprs); + + switch_pmu_to_host(vcpu, &host_os_sprs); + return trap; } @@ -3862,9 +3897,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { struct kvmppc_vcore *vc = vcpu->arch.vcore; - struct p9_host_os_sprs host_os_sprs; u64 next_timer; - unsigned long msr; int trap; next_timer = timer_get_next_tb(); @@ -3875,33 +3908,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.ceded = 0; - save_p9_host_os_sprs(&host_os_sprs); - - /* - * This could be combined with MSR[RI] clearing, but that expands - * the unrecoverable window. It would be better to cover unrecoverable - * with KVM bad interrupt handling rather than use MSR[RI] at all. - * - * Much more difficult and less worthwhile to combine with IR/DR - * disable. 
- */ - hard_irq_disable(); - if (lazy_irq_pending()) - return 0; - - /* MSR bits may have been cleared by context switch */ - msr = 0; - if (IS_ENABLED(CONFIG_PPC_FPU)) - msr |= MSR_FP; - if (cpu_has_feature(CPU_FTR_ALTIVEC)) - msr |= MSR_VEC; - if (cpu_has_feature(CPU_FTR_VSX)) - msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - msr |= MSR_TM; - msr = msr_check_and_set(msr); - kvmppc_subcore_enter_guest(); vc->entry_exit_map = 1; @@ -3909,11 +3915,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) - msr = mfmsr(); /* MSR may have been updated */ - - switch_pmu_to_guest(vcpu, &host_os_sprs); - if (kvmhv_on_pseries()) { trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb); @@ -3956,16 +3957,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.slb_max = 0; } - switch_pmu_to_host(vcpu, &host_os_sprs); - - store_vcpu_state(vcpu); - vcpu_vpa_increment_dispatch(vcpu); - timer_rearm_host_dec(*tb); - - restore_p9_host_os_sprs(vcpu, &host_os_sprs); - vc->entry_exit_map = 0x101; vc->in_guest = 0; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 5a34f0199bfe..ea531f76f116 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -530,6 +530,7 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { + struct p9_host_os_sprs host_os_sprs; struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; struct kvmppc_vcore *vc = vcpu->arch.vcore; @@ -559,9 +560,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; - /* Could avoid mfmsr by passing around, but probably no big deal */ - msr = mfmsr(); - host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); host_dawr0 = mfspr(SPRN_DAWR0); @@ -576,6 +574,41 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); + switch_pmu_to_guest(vcpu, &host_os_sprs); + + save_p9_host_os_sprs(&host_os_sprs); + + /* + * This could be combined with MSR[RI] clearing, but that expands + * the unrecoverable window. It would be better to cover unrecoverable + * with KVM bad interrupt handling rather than use MSR[RI] at all. + * + * Much more difficult and less worthwhile to combine with IR/DR + * disable. + */ + hard_irq_disable(); + if (lazy_irq_pending()) { + trap = 0; + goto out; + } + + /* MSR bits may have been cleared by context switch */ + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + /* Save MSR for restore. This is after hard disable, so EE is clear. 
*/ + + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) + msr = mfmsr(); /* MSR may have been updated */ + if (vc->tb_offset) { u64 new_tb = *tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); @@ -634,6 +667,14 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2); mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3); + /* + * It might be preferable to load_vcpu_state here, in order to get the + * GPR/FP register loads executing in parallel with the previous mtSPR + * instructions, but for now that can't be done because the TM handling + * in load_vcpu_state can change some SPRs and vcpu state (nip, msr). + * But TM could be split out if this would be a significant benefit. + */ + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9; /* @@ -811,6 +852,20 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->dpdes = mfspr(SPRN_DPDES); vc->vtb = mfspr(SPRN_VTB); + save_clear_guest_mmu(kvm, vcpu); + switch_mmu_to_host(kvm, host_pidr); + + /* + * If we are in real mode, only switch MMU on after the MMU is + * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. + */ + if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && + vcpu->arch.shregs.msr & MSR_TS_MASK) + msr |= MSR_TS_S; + __mtmsrd(msr, 0); + + store_vcpu_state(vcpu); + dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; @@ -843,6 +898,19 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_DAWRX1, host_dawrx1); } + mtspr(SPRN_DPDES, 0); + if (vc->pcr) + mtspr(SPRN_PCR, PCR_MASK); + + /* HDEC must be at least as large as DEC, so decrementer_max fits */ + mtspr(SPRN_HDEC, decrementer_max); + + timer_rearm_host_dec(*tb); + + restore_p9_host_os_sprs(vcpu, &host_os_sprs); + + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; + if (kvm_is_radix(kvm)) { /* * Since this is radix, do a eieio; tlbsync; ptesync sequence @@ -859,26 +927,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (cpu_has_feature(CPU_FTR_ARCH_31)) asm volatile(PPC_CP_ABORT); - mtspr(SPRN_DPDES, 0); - if (vc->pcr) - mtspr(SPRN_PCR, PCR_MASK); - - /* HDEC must be at least as large as DEC, so decrementer_max fits */ - mtspr(SPRN_HDEC, decrementer_max); - - save_clear_guest_mmu(kvm, vcpu); - switch_mmu_to_host(kvm, host_pidr); - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; - - /* - * If we are in real mode, only switch MMU on after the MMU is - * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. 
- */ - if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && - vcpu->arch.shregs.msr & MSR_TS_MASK) - msr |= MSR_TS_S; - - __mtmsrd(msr, 0); +out: + switch_pmu_to_host(vcpu, &host_os_sprs); end_timing(vcpu); From patchwork Mon Jul 26 03:50:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509742 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=SX6rBbXm; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bP3Rqlz9tB1 for ; Mon, 26 Jul 2021 13:52:17 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231731AbhGZDLp (ORCPT ); Sun, 25 Jul 2021 23:11:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53370 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231759AbhGZDLk (ORCPT ); Sun, 25 Jul 2021 23:11:40 -0400 Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com [IPv6:2607:f8b0:4864:20::62c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 10FC1C061757 for ; Sun, 25 Jul 2021 20:52:09 -0700 (PDT) Received: by mail-pl1-x62c.google.com with SMTP id n10so10141462plf.4 for ; Sun, 25 Jul 2021 20:52:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=I/TJyOnmABEE9AkNTcVjeb806PCZeTTaeMWQWBzIZXw=; b=SX6rBbXmeohLYDiGBmdYikgQxBkPOKkSixP00Rb1Nm7DLPofm/Hn7FVKh7zTRTvoug gJiw4ubHPfOde/XFILOW6YdWCSjHCoeh4IjxHnyqoRan0zxBrNnTB7aZN/mNw0EkLBph M3x4gSvR6KGwLwgPnzioW3xibNHcBnzsXAm73pJmANcQBwlbOx/YgACZGm32NkMrYxkk L2ghSu4xOzJ1W0h5SYnPIuToFvt4MTf56IJXxbLHDcDKSr5S0n4N+OOtjVEdohQY0FCm W5t7YlpWo/QZpadXlWIb0dBsLmB2IPBYNwl4CeXOEeVz2AlNGSHZNQbKbYHQYOTlb6+a RNoQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=I/TJyOnmABEE9AkNTcVjeb806PCZeTTaeMWQWBzIZXw=; b=TKFN8hK5H6O7WL6Yzf2BXU3pUj5EUOEU2+8Ezqp9xOhiYnNnhUeVJi+VKtB19NHypR iNcdgKGbjEYPpD3Vu3CbQw5nItdkmvMiutyzq0JN9iM99W5p7Zk/WVhSk/gwWZWPJMVt 9Tm34s6jeO4bE+7+gGdov8nwTYY30B54p1Q3zQ5fU4GjsCwBv4gmzqEXEsNcoHcQFNz8 YzqZL+SFs7BGSZrZApWQU0TKNcXrp3gSZ67sK7cpNDH6aZjZqyuaqyIbyJmh0e7PXKxc tBQkGz7jpMUkmkVhX6TCw9JqjjMAAshoKSTUcSTh7y/qMkOqYjhubKKJaVOFrAiswC+D 9jSw== X-Gm-Message-State: AOAM532UYBG1T3s3gmj8fQInURuJjDju5zowG0W2DZxIUjtd+66F7J/+ 7yDi2ZjgyrkbV5eYMfWKp5ne1Y18I70= X-Google-Smtp-Source: ABdhPJwbjlo5VbE7IwK1qAiTSiFySOjd7sqAjrEDnb5ZUOg4q6E0ClnLPxIIyKgTP3ns+CZ23MwUqA== X-Received: by 2002:a62:cfc4:0:b029:2fe:eaf8:8012 with SMTP id b187-20020a62cfc40000b02902feeaf88012mr15654662pfg.45.1627271528553; Sun, 25 Jul 2021 20:52:08 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:08 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 36/55] KVM: PPC: Book3S HV P9: Implement TM fastpath for guest entry/exit Date: Mon, 26 Jul 2021 13:50:17 +1000 Message-Id: <20210726035036.739609-37-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org If TM is not active, only TM register state needs to be saved. -348 cycles (7218) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 23 +++++++++++++++++++---- 1 file changed, 19 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index ea531f76f116..2e7498817b2e 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -281,8 +281,15 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu, if (cpu_has_feature(CPU_FTR_TM) || cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - ret = true; + unsigned long guest_msr = vcpu->arch.shregs.msr; + if (MSR_TM_ACTIVE(guest_msr)) { + kvmppc_restore_tm_hv(vcpu, guest_msr, true); + ret = true; + } else { + mtspr(SPRN_TEXASR, vcpu->arch.texasr); + mtspr(SPRN_TFHAR, vcpu->arch.tfhar); + mtspr(SPRN_TFIAR, vcpu->arch.tfiar); + } } load_spr_state(vcpu, host_os_sprs); @@ -308,8 +315,16 @@ void store_vcpu_state(struct kvm_vcpu *vcpu) vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + unsigned long guest_msr = vcpu->arch.shregs.msr; + if (MSR_TM_ACTIVE(guest_msr)) { + kvmppc_save_tm_hv(vcpu, guest_msr, true); + } else { + vcpu->arch.texasr = mfspr(SPRN_TEXASR); + vcpu->arch.tfhar = mfspr(SPRN_TFHAR); + vcpu->arch.tfiar = mfspr(SPRN_TFIAR); + } + } } EXPORT_SYMBOL_GPL(store_vcpu_state); From patchwork Mon Jul 26 03:50:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509743 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ml2SPelv; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bQ0Rn4z9t56 for ; Mon, 26 Jul 2021 13:52:18 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231712AbhGZDLq (ORCPT ); Sun, 25 Jul 2021 23:11:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53372 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231674AbhGZDLm (ORCPT ); Sun, 25 Jul 2021 23:11:42 -0400 Received: from 
mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 410D3C061760 for ; Sun, 25 Jul 2021 20:52:11 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id pf12-20020a17090b1d8cb0290175c085e7a5so17861758pjb.0 for ; Sun, 25 Jul 2021 20:52:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Fj7EqZ1a9KKmPKWWgZnBHqfjaGtCWGrav4QAo3a930k=; b=ml2SPelvbj9Nffp52p9OVwG9OFrnF8xxapcL+Eq6SenWfhry3RuMimkYnAMsuvzAY3 gFl4ZNUsPQ19Fr/xgsK1pqUyKL1WKPhLhYygbYjfAy9mCuAyerEWB/GezhVco3ax5vHA Asj+GM7CaTpiud9YBVswsFjelaH8NZgsF+e+9wdSOAW+MyC1RabTsFFo9VNb7+rQO5wh b9rGTpzt2RN8yPZKd1iFsQujMUHo7URKFussIlBqV4vl3u/XOPbkIF52Y8rIasn/jJEa nYAznWMLEb9m2m3kUpJNZ4hYW9dQKxHdGQk6MC7RcYvMgJXAJp+4hymG1Gfg0V2ReKZF qMRA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Fj7EqZ1a9KKmPKWWgZnBHqfjaGtCWGrav4QAo3a930k=; b=P9XwwN0OD6zOgK8if7uhcXHAXA5lN9Ino5obvRN4BNMN2EvH0IE/MTiS5cKr0uvkz1 0r+S/AsiCm6pjjZNW8KfMvoN/bNpZysok8umfA/Mt+4JcspMJ3ldrTyFTPrTB1QljMl3 Y4Yr/mNDTXUs3A2W1IN2iNF7toUeLHm4wct2FVH9I+iEIqFcCZPJOXNqlocV382Dh45T kMj5TvqHt+fKwRqojq3V26X+yeTAFa/vfBFn/YtPWCM3ngCavsCqYLXcLonF5PhlJw9w /2qw0wA0gNXnU3dfKVw9Fgb2jSSbu2xcHd2rk9ArH8pjOCeFtLQNTlIr839I1ElI2YWh WNJg== X-Gm-Message-State: AOAM530X4SgAaZqJIBN1xmRA7jQc2D51PJyKFyufxG/t5F06lSLDu3Q2 HD4jFUPcLvt4oxKk77CHUURo2kL5rfc= X-Google-Smtp-Source: ABdhPJwURJ7v1fon0Xcds+Pd8NEDmZUqscIezSGHTil3VDPjh8f7u4NLsAarbim2nbvo4LJ1/QaEWg== X-Received: by 2002:a05:6a00:1582:b029:333:a366:fe47 with SMTP id u2-20020a056a001582b0290333a366fe47mr16104658pfk.0.1627271530761; Sun, 25 Jul 2021 20:52:10 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:10 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 37/55] KVM: PPC: Book3S HV P9: Switch PMU to guest as late as possible Date: Mon, 26 Jul 2021 13:50:18 +1000 Message-Id: <20210726035036.739609-38-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This moves PMU switch to guest as late as possible in entry, and switch back to host as early as possible at exit. This helps the host get the most perf coverage of KVM entry/exit code as possible. This is slightly suboptimal for SPR scheduling point of view when the PMU is enabled, but when perf is disabled there is no real difference. 
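The ordering idea can be sketched outside the kernel. The self-contained C program below is illustrative only: pmu_to_guest()/pmu_to_host() are hypothetical stand-ins for switch_pmu_to_guest()/switch_pmu_to_host(), and none of it is the real KVM code. The point is simply that bracketing only the actual guest run with the PMU switch leaves the surrounding entry/exit work visible to the host PMU.

  #include <stdio.h>

  static int host_pmu_on = 1;

  static void pmu_to_guest(void) { host_pmu_on = 0; }
  static void pmu_to_host(void)  { host_pmu_on = 1; }

  static void entry_work(void) { printf("entry work, host PMU %s\n", host_pmu_on ? "on" : "off"); }
  static void run_guest(void)  { printf("in guest,   host PMU %s\n", host_pmu_on ? "on" : "off"); }
  static void exit_work(void)  { printf("exit work,  host PMU %s\n", host_pmu_on ? "on" : "off"); }

  int main(void)
  {
          /* after this change, only the guest itself runs with the host PMU switched away */
          entry_work();           /* still covered by host perf */
          pmu_to_guest();
          run_guest();
          pmu_to_host();
          exit_work();            /* covered by host perf again */
          return 0;
  }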
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 6 ++---- arch/powerpc/kvm/book3s_hv_p9_entry.c | 6 ++---- 2 files changed, 4 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 8c1c93ebd669..e7dfc33e2b38 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3800,8 +3800,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns s64 dec; int trap; - switch_pmu_to_guest(vcpu, &host_os_sprs); - save_p9_host_os_sprs(&host_os_sprs); /* @@ -3864,9 +3862,11 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); + switch_pmu_to_guest(vcpu, &host_os_sprs); trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), __pa(&vcpu->arch.regs)); kvmhv_restore_hv_return_state(vcpu, &hvregs); + switch_pmu_to_host(vcpu, &host_os_sprs); vcpu->arch.shregs.msr = vcpu->arch.regs.msr; vcpu->arch.shregs.dar = mfspr(SPRN_DAR); vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); @@ -3885,8 +3885,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns restore_p9_host_os_sprs(vcpu, &host_os_sprs); - switch_pmu_to_host(vcpu, &host_os_sprs); - return trap; } diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 2e7498817b2e..737d4eaf74bc 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -589,8 +589,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); - switch_pmu_to_guest(vcpu, &host_os_sprs); - save_p9_host_os_sprs(&host_os_sprs); /* @@ -732,7 +730,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc accumulate_time(vcpu, &vcpu->arch.guest_time); + switch_pmu_to_guest(vcpu, &host_os_sprs); kvmppc_p9_enter_guest(vcpu); + switch_pmu_to_host(vcpu, &host_os_sprs); accumulate_time(vcpu, &vcpu->arch.rm_intr); @@ -943,8 +943,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc asm volatile(PPC_CP_ABORT); out: - switch_pmu_to_host(vcpu, &host_os_sprs); - end_timing(vcpu); return trap; From patchwork Mon Jul 26 03:50:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509744 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=iII4mvQZ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bQ4bztz9tkB for ; Mon, 26 Jul 2021 13:52:18 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231679AbhGZDLq (ORCPT ); Sun, 25 Jul 2021 23:11:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53408 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231738AbhGZDLn (ORCPT ); Sun, 25 Jul 2021 23:11:43 -0400 Received: from 
mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 69774C0613C1 for ; Sun, 25 Jul 2021 20:52:13 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id o44-20020a17090a0a2fb0290176ca3e5a2fso5068803pjo.1 for ; Sun, 25 Jul 2021 20:52:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=NSmVzL4c1YQEwgpeygYKTRCpdKgworBmM1lrd2e93zY=; b=iII4mvQZd7A61sKYSg5blG9nCxBOJkE7sLfMABwTu+6BvfEiPRQODXjf/zRpZZUqBy xJEwiYBOK6WeH4DEmYUuc95c0LnX3lvLzuy/7aXLaG1xWcZ2VCZxIgikJbjK8IvCQFos p2QdHepyt+z9gIkROglukNYu7WB5zmckhYpaB1Zc3p5+f1YdpFYzDG9Ykis1pc1H+2EK IVHqcGdgwMR7Hlqdre0sov8ELYbcpStB/PEvYzlSa8ym0+cslmpFkNWKmZpck686B0ia upCOoaAZEBD00wC2/r5hIYldjrJRFh9u3/mFpERSI1QqJvA8lcDVuquCEb5WML3S0fbP VEEw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=NSmVzL4c1YQEwgpeygYKTRCpdKgworBmM1lrd2e93zY=; b=lY+nnCyPjfVzgMf5AcZ/dbHlGdKymOg4lO6XN2epAZQqrjwpnG+Ai7Oyq9XMSMt6Wd Aw1DNSQUo76wVovEupCGNhYOghhZVHNV+dXVKU/0TTEBviy7H+/0df5Hpr/WCiF2wodR zlC8ELbvwvb9GpppKnWidGDkIjjAt0UmiAcPsA6ioar6LFt+nbprL/nqE3KhrtnQlnER s5A/6uppMs3EFOYc7VHKNPNpXVBMz/VmiNKG7OPtfpvJ5WTIXm/1JbREzFnWCogwomcV XWZVbeJ2L1WxRZ/BwtQA0RAsS2qVrlGnK4o5MNT9rgiR5JcUZLGkhFiMxaErHMKstj+4 8JcA== X-Gm-Message-State: AOAM530sQgjvmrsSW7gZg9+TFO0Qk2fq+7OGkCK9c3IcfGWB3aAMHea0 +rxF1tY1gxvGAqbKzr+wUXUZaPRV9i8= X-Google-Smtp-Source: ABdhPJwqbi7Bm1VNXw0rR05N1QF7VUxQw7HRIu5SUrB3z0DuugVhc7yinaOnxi7co67WY/jHhzsJiw== X-Received: by 2002:aa7:978c:0:b029:32a:403e:88cc with SMTP id o12-20020aa7978c0000b029032a403e88ccmr16006240pfp.7.1627271532915; Sun, 25 Jul 2021 20:52:12 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:12 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 38/55] KVM: PPC: Book3S HV P9: Restrict DSISR canary workaround to processors that require it Date: Mon, 26 Jul 2021 13:50:19 +1000 Message-Id: <20210726035036.739609-39-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use CPU_FTR_P9_RADIX_PREFETCH_BUG for this, to test for DD2.1 and below processors. 
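As a rough illustration of the change (a hedged, standalone sketch, not the kernel code: has_prefetch_bug and write_canary are hypothetical stand-ins for cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG) and the mtspr of HDSISR_CANARY), the workaround write becomes conditional on the CPU feature instead of being done unconditionally on every guest entry:

  #include <stdbool.h>
  #include <stdio.h>

  #define CANARY 0x7fffUL         /* stand-in for HDSISR_CANARY */

  static unsigned long hdsisr;    /* pretend SPR */

  static void write_canary(void) { hdsisr = CANARY; }

  static void guest_entry(bool has_prefetch_bug)
  {
          /* before: the canary was written on all P9 CPUs; now only where needed */
          if (has_prefetch_bug)
                  write_canary();
          printf("hdsisr=%#lx\n", hdsisr);
  }

  int main(void)
  {
          guest_entry(false);     /* later revisions: skip the extra SPR write */
          guest_entry(true);      /* DD2.1 and below: keep the workaround */
          return 0;
  }

The matching hunk on the exit side makes the fault_dsisr == HDSISR_CANARY retry test conditional on the same feature bit, so unaffected processors pay nothing for the workaround in either direction.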
-43 cycles (7178) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 3 ++- arch/powerpc/kvm/book3s_hv_p9_entry.c | 6 ++++-- 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index e7dfc33e2b38..47ccea5ffba2 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1598,7 +1598,8 @@ XXX benchmark guest exits unsigned long vsid; long err; - if (vcpu->arch.fault_dsisr == HDSISR_CANARY) { + if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG) && + unlikely(vcpu->arch.fault_dsisr == HDSISR_CANARY)) { r = RESUME_GUEST; /* Just retry if it's the canary */ break; } diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 737d4eaf74bc..d83b5d4d02c1 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -671,9 +671,11 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc * HDSI which should correctly update the HDSISR the second time HDSI * entry. * - * Just do this on all p9 processors for now. + * The "radix prefetch bug" test can be used to test for this bug, as + * it also exists fo DD2.1 and below. */ - mtspr(SPRN_HDSISR, HDSISR_CANARY); + if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + mtspr(SPRN_HDSISR, HDSISR_CANARY); mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0); mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1); From patchwork Mon Jul 26 03:50:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509745 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=C7TZA/rA; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bR4sPrz9tkM for ; Mon, 26 Jul 2021 13:52:19 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231674AbhGZDLr (ORCPT ); Sun, 25 Jul 2021 23:11:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53372 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231759AbhGZDLr (ORCPT ); Sun, 25 Jul 2021 23:11:47 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A58F0C061757 for ; Sun, 25 Jul 2021 20:52:15 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id o44-20020a17090a0a2fb0290176ca3e5a2fso5068888pjo.1 for ; Sun, 25 Jul 2021 20:52:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=86e4Ir+GEs/5DHLdJaHdInDBlUze8Yk+Kpd4tTCHNYc=; b=C7TZA/rAlCWxwMf2EMMbByyBRWsKpwk2yPXEv1z4YRMODsMclb3dwX3IL7zMpV2ckg w7AKpzx+a4H6Ryom1TPUc1UTI5xS2V52JjPz09kI9HfXf/8nYQWmmTq3D0OM3Y61lDq7 P3T8NlyxnawOjX4r51u0M5DUD/hhO0WpAteCg/nVlgVY3I9O4LsBzobFx5/yqsdFL215 BojwqjFjwio3tvdfPU//KRAhXCnVMr2VLQ+tjI7fxLPbahNw78eL86/JlecTdecd/C3f 
7lh8pfxS7Oa8pv4MEcSJ+wnHnn9HnrZZhF2F48iaP0JmIuwd0f4nTu24HNon/IDxkoRu XDCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=86e4Ir+GEs/5DHLdJaHdInDBlUze8Yk+Kpd4tTCHNYc=; b=i58z/UsEGDPouJMm3OzRAIpKUv/dwMwBziIWKFoqaUC5bs8Up8ptNiafQ7aY0srwWQ eF5p7BDbLEvdpX4DPbyJNS9f2Ts9qAx2h+EteLeRrxParmjKck47AK8sOKLde9Etbrel Fbxrv2bNA6Bjc5WGWjH9CQthLZYwHqj8VFfQfjFn3f1+UVfktOj+6B3H5O+/+WktG/2K 2f+fCTO7GiqnR/7Firp5VdqbTMvwlf9CXlu92QCZ/EeuKJizULuf5w/JUivIsNFPmlhN cq/R3nAMMtd8t7dyUL3NaV8zIlsP87NKp9zWQyXJKd3T8uNenmikVAM3d/vI+1E6lbGT FCkQ== X-Gm-Message-State: AOAM531/3oFkJzzo5oax3Uxi5wHCyOpNSIhwXO9nPbm/BuQc9Fekv6j5 EZwhNwxWNrStvLTVCJ2+6+eC/++2D94= X-Google-Smtp-Source: ABdhPJyvbE/mgqiqH4BsDllLyRe4rI83EOMWmvwbKGvzlbeAozLtB/VuSyR7CWJxYbpK+s7rHaKmxQ== X-Received: by 2002:a63:34a:: with SMTP id 71mr16330513pgd.289.1627271535149; Sun, 25 Jul 2021 20:52:15 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:14 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 39/55] KVM: PPC: Book3S HV P9: More SPR speed improvements Date: Mon, 26 Jul 2021 13:50:20 +1000 Message-Id: <20210726035036.739609-40-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This avoids more scoreboard stalls and reduces mtSPRs. 
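The main trick in the diff is to skip an mtspr when the value to be loaded already matches what is in the register (the host value on entry, or the guest value on exit). As a hedged, standalone illustration only (set_reg below is a hypothetical stand-in for mtspr; none of this is the kernel code), the pattern looks like this:

  #include <stdio.h>

  static unsigned long reg;       /* pretend SPR */
  static int writes;              /* count the expensive operations */

  static void set_reg(unsigned long v) { reg = v; writes++; }     /* ~mtspr */

  static void switch_value(unsigned long cur, unsigned long want)
  {
          if (want != cur)        /* only pay for the write when it changes something */
                  set_reg(want);
  }

  int main(void)
  {
          unsigned long host_val = 0, guest_val = 0;

          switch_value(host_val, guest_val);      /* equal: no write issued */
          guest_val = 42;
          switch_value(host_val, guest_val);      /* different: one write */
          printf("writes=%d reg=%lu\n", writes, reg);
          return 0;
  }

Most of the hunks in this patch are instances of that check, applied to SPRs such as CIABR, DAWR0/1 and DPDES that rarely differ between host and guest.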
-193 cycles (6985) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 73 ++++++++++++++++----------- 1 file changed, 43 insertions(+), 30 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index d83b5d4d02c1..c4e93167d120 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -633,24 +633,29 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = vc->tb_offset; } - if (vc->pcr) - mtspr(SPRN_PCR, vc->pcr | PCR_MASK); - mtspr(SPRN_DPDES, vc->dpdes); mtspr(SPRN_VTB, vc->vtb); - mtspr(SPRN_PURR, vcpu->arch.purr); mtspr(SPRN_SPURR, vcpu->arch.spurr); + if (vc->pcr) + mtspr(SPRN_PCR, vc->pcr | PCR_MASK); + if (vc->dpdes) + mtspr(SPRN_DPDES, vc->dpdes); + if (dawr_enabled()) { - mtspr(SPRN_DAWR0, vcpu->arch.dawr0); - mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0); + if (vcpu->arch.dawr0 != host_dawr0) + mtspr(SPRN_DAWR0, vcpu->arch.dawr0); + if (vcpu->arch.dawrx0 != host_dawrx0) + mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0); if (cpu_has_feature(CPU_FTR_DAWR1)) { - mtspr(SPRN_DAWR1, vcpu->arch.dawr1); - mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1); + if (vcpu->arch.dawr1 != host_dawr1) + mtspr(SPRN_DAWR1, vcpu->arch.dawr1); + if (vcpu->arch.dawrx1 != host_dawrx1) + mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1); } } - mtspr(SPRN_CIABR, vcpu->arch.ciabr); - mtspr(SPRN_IC, vcpu->arch.ic); + if (vcpu->arch.ciabr != host_ciabr) + mtspr(SPRN_CIABR, vcpu->arch.ciabr); mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -869,20 +874,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->dpdes = mfspr(SPRN_DPDES); vc->vtb = mfspr(SPRN_VTB); - save_clear_guest_mmu(kvm, vcpu); - switch_mmu_to_host(kvm, host_pidr); - - /* - * If we are in real mode, only switch MMU on after the MMU is - * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. - */ - if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && - vcpu->arch.shregs.msr & MSR_TS_MASK) - msr |= MSR_TS_S; - __mtmsrd(msr, 0); - - store_vcpu_state(vcpu); - dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; @@ -900,6 +891,22 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = 0; } + save_clear_guest_mmu(kvm, vcpu); + switch_mmu_to_host(kvm, host_pidr); + + /* + * Enable MSR here in order to have facilities enabled to save + * guest registers. This enables MMU (if we were in realmode), so + * only switch MMU on after the MMU is switched to host, to avoid + * the P9_RADIX_PREFETCH_BUG or hash guest context. 
+ */ + if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && + vcpu->arch.shregs.msr & MSR_TS_MASK) + msr |= MSR_TS_S; + __mtmsrd(msr, 0); + + store_vcpu_state(vcpu); + mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr); mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr); @@ -907,15 +914,21 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); mtspr(SPRN_HFSCR, host_hfscr); - mtspr(SPRN_CIABR, host_ciabr); - mtspr(SPRN_DAWR0, host_dawr0); - mtspr(SPRN_DAWRX0, host_dawrx0); + if (vcpu->arch.ciabr != host_ciabr) + mtspr(SPRN_CIABR, host_ciabr); + if (vcpu->arch.dawr0 != host_dawr0) + mtspr(SPRN_DAWR0, host_dawr0); + if (vcpu->arch.dawrx0 != host_dawrx0) + mtspr(SPRN_DAWRX0, host_dawrx0); if (cpu_has_feature(CPU_FTR_DAWR1)) { - mtspr(SPRN_DAWR1, host_dawr1); - mtspr(SPRN_DAWRX1, host_dawrx1); + if (vcpu->arch.dawr1 != host_dawr1) + mtspr(SPRN_DAWR1, host_dawr1); + if (vcpu->arch.dawrx1 != host_dawrx1) + mtspr(SPRN_DAWRX1, host_dawrx1); } - mtspr(SPRN_DPDES, 0); + if (vc->dpdes) + mtspr(SPRN_DPDES, 0); if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); From patchwork Mon Jul 26 03:50:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509746 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=hHFz8NRy; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bS3mS0z9tjt for ; Mon, 26 Jul 2021 13:52:20 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231727AbhGZDLt (ORCPT ); Sun, 25 Jul 2021 23:11:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53432 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231597AbhGZDLs (ORCPT ); Sun, 25 Jul 2021 23:11:48 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 291F7C061757 for ; Sun, 25 Jul 2021 20:52:18 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id a20so10255392plm.0 for ; Sun, 25 Jul 2021 20:52:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1/EeEmpkENbYKzjTPkVIqeul9e2Dav0nMW6DA1lL2A8=; b=hHFz8NRyDXUDvGn4D0+zMSOjsQnKYUxL7wMs8t5FwC24qWwgMoJjDbNqjOWSMAyobH dfQgLAxBPWqLtTOciszdMRBzjDndhoNZP4DUxriVnPVLMdCle0VesG8VhHQ9aRFqhq+M 4ovAJm8VM90NigxCwN+ggFUnkW5mdNkSGtDd+n9OBZ+W1KH1p5m8q1K6w9lUWcBagaPD 3KC5+XvlmBmTXlsV4EFeuzvC6nS4j7Yp/bPaWcj5frLvFhki5r95RNV3NxlOBxkyQcPe RUI+zPlMUAYYjjD82UM9zN9nxzEIFQaOpASZprpWBHfv3fnXpKc7rxwP60Kb2cY6o3OX 448A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1/EeEmpkENbYKzjTPkVIqeul9e2Dav0nMW6DA1lL2A8=; 
b=Tej1909KeyfY3mKc2yfpD0HnU/cNiJ56NJrY+vajumQJjgTKhV08pbHlcBDKepxrGK avJa1ECl2EGKaLY8/AqG0mT3nHHHtgmuSK/rB7ooEttBaN07dGOcpCiG1mmbOOsiDFfO iEkYqwzS1rLo/dk9U6eKLzji9yXNidpje7UI+AVJ+Ll+XPRX/5apP97bj+a+VrjwaliU D7AR83JoxdYzr7mwbEMZ6R617nS2bKN/WO92+T74ccsk4vc2HPOn9hH/XVgT8uedyq/D lWZJTH3dlUAg62pjBGR5sncCl5e++U/NgUrMAgPckFKtNxT/s9fL5sw5rZB8yTr/rkaw WbKA== X-Gm-Message-State: AOAM531SYmJb1DvA1nDvuR7bkKJ/V0aV7RareHIE+B96zy5rHoH5I6r6 Y/L5v7jNwczTeEQ5kDb3w1DC+TgIdfw= X-Google-Smtp-Source: ABdhPJzPAa3Mr9lYPTKBxMrg3VVIeodf6rRE6BYENfYH0dIjf8/Zl93BHlns1HIHPh/+48B+tXCK6w== X-Received: by 2002:a17:902:ed95:b029:ee:aa46:547a with SMTP id e21-20020a170902ed95b02900eeaa46547amr12818233plj.27.1627271537659; Sun, 25 Jul 2021 20:52:17 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:17 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v1 40/55] KVM: PPC: Book3S HV P9: Demand fault EBB facility registers Date: Mon, 26 Jul 2021 13:50:21 +1000 Message-Id: <20210726035036.739609-41-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use HFSCR facility disabling to implement demand faulting for EBB, with a hysteresis counter similar to the load_fp etc counters in context switching that implement the equivalent demand faulting for userspace facilities. This speeds up guest entry/exit by avoiding the register save/restore when a guest is not frequently using them. When a guest does use them often, there will be some additional demand fault overhead, but these are not commonly used facilities. 
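The hysteresis scheme is the same one the context-switch code uses for load_fp: start with the facility disabled in HFSCR, enable it when the guest takes a facility-unavailable interrupt, and let a small wrapping counter turn it back off so an idle facility eventually stops being saved and restored. A hedged, self-contained sketch of just that counter logic follows; the names are made up for the illustration, only the idea mirrors the load_ebb handling in the diff below.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  struct vcpu {
          bool    ebb_enabled;    /* ~ HFSCR_EBB bit       */
          uint8_t load_ebb;       /* ~ vcpu->arch.load_ebb */
  };

  /* guest touched an EBB register while the facility was off */
  static void ebb_unavailable_fault(struct vcpu *v)
  {
          v->ebb_enabled = true;  /* grant the facility and re-enter the guest */
  }

  /* called on each guest exit while the facility is enabled */
  static void maybe_drop_ebb(struct vcpu *v)
  {
          if (!v->ebb_enabled)
                  return;
          if (++v->load_ebb == 0)         /* u8 wraps every 256 exits */
                  v->ebb_enabled = false; /* stop save/restore until the next fault */
  }

  int main(void)
  {
          struct vcpu v = { 0 };
          int exits, faults = 0;

          /* simulate a guest that would touch EBB on every exit */
          for (exits = 0; exits < 1000; exits++) {
                  if (!v.ebb_enabled) {
                          ebb_unavailable_fault(&v);
                          faults++;
                  }
                  maybe_drop_ebb(&v);
          }
          printf("exits=%d demand faults=%d\n", exits, faults);
          return 0;
  }

In the kernel version the fault side is kvmppc_ebb_unavailable() and the wrap check sits in store_spr_state(); a guest that never uses EBB never re-enables the facility, so its entry/exit path skips the EBBHR/EBBRR/BESCR accesses entirely.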
Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_host.h | 1 + arch/powerpc/kvm/book3s_hv.c | 16 +++++++++++++-- arch/powerpc/kvm/book3s_hv_p9_entry.c | 28 +++++++++++++++++++++------ 3 files changed, 37 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index f105eaeb4521..1c00c4a565f5 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -580,6 +580,7 @@ struct kvm_vcpu_arch { ulong cfar; ulong ppr; u32 pspb; + u8 load_ebb; ulong fscr; ulong shadow_fscr; ulong ebbhr; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 47ccea5ffba2..dd8199a423cf 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1441,6 +1441,16 @@ static int kvmppc_pmu_unavailable(struct kvm_vcpu *vcpu) return RESUME_GUEST; } +static int kvmppc_ebb_unavailable(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.hfscr_permitted & HFSCR_EBB)) + return EMULATE_FAIL; + + vcpu->arch.hfscr |= HFSCR_EBB; + + return RESUME_GUEST; +} + static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { @@ -1735,6 +1745,8 @@ XXX benchmark guest exits r = kvmppc_emulate_doorbell_instr(vcpu); if (cause == FSCR_PM_LG) r = kvmppc_pmu_unavailable(vcpu); + if (cause == FSCR_EBB_LG) + r = kvmppc_ebb_unavailable(vcpu); } if (r == EMULATE_FAIL) { kvmppc_core_queue_program(vcpu, SRR1_PROGILL); @@ -2751,9 +2763,9 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) vcpu->arch.hfscr_permitted = vcpu->arch.hfscr; /* - * PM is demand-faulted so start with it clear. + * PM, EBB is demand-faulted so start with it clear. */ - vcpu->arch.hfscr &= ~HFSCR_PM; + vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB); kvmppc_mmu_book3s_hv_init(vcpu); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index c4e93167d120..f68a3d107d04 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -224,9 +224,12 @@ static void load_spr_state(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { mtspr(SPRN_TAR, vcpu->arch.tar); - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); + + if (vcpu->arch.hfscr & HFSCR_EBB) { + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + mtspr(SPRN_BESCR, vcpu->arch.bescr); + } if (!cpu_has_feature(CPU_FTR_ARCH_31)) mtspr(SPRN_TIDR, vcpu->arch.tid); @@ -257,9 +260,22 @@ static void load_spr_state(struct kvm_vcpu *vcpu, static void store_spr_state(struct kvm_vcpu *vcpu) { vcpu->arch.tar = mfspr(SPRN_TAR); - vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); - vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); - vcpu->arch.bescr = mfspr(SPRN_BESCR); + + if (vcpu->arch.hfscr & HFSCR_EBB) { + vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); + vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); + vcpu->arch.bescr = mfspr(SPRN_BESCR); + /* + * This is like load_fp in context switching, turn off the + * facility after it wraps the u8 to try avoiding saving + * and restoring the registers each partition switch. 
+ */ + if (!vcpu->arch.nested) { + vcpu->arch.load_ebb++; + if (!vcpu->arch.load_ebb) + vcpu->arch.hfscr &= ~HFSCR_EBB; + } + } if (!cpu_has_feature(CPU_FTR_ARCH_31)) vcpu->arch.tid = mfspr(SPRN_TIDR); From patchwork Mon Jul 26 03:50:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509747 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=e8ofBwpR; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bX0B1dz9t9b for ; Mon, 26 Jul 2021 13:52:24 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231597AbhGZDLw (ORCPT ); Sun, 25 Jul 2021 23:11:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53442 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231616AbhGZDLw (ORCPT ); Sun, 25 Jul 2021 23:11:52 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AB428C061757 for ; Sun, 25 Jul 2021 20:52:20 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id c11so9967963plg.11 for ; Sun, 25 Jul 2021 20:52:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=tttWjAtBQpkJ27E6dJTO6j5NLUnzZ0bw5ZHHFoKnVA0=; b=e8ofBwpRuix3BZnZ/sCmRdVBeUzScf3k1D1QdscTeSA1CRauJ46tQxlDoaY+vg61W4 bwu3sltOwRzhmAUtqpYH4q9b7zd21lAQB3zWBwr35Izry7wgOLBV2PWjyEKpCnYr9jk+ 3z17gj+XN6WIs60U05ZTsaFeKpn0GlHXrxrg1YJSM74dyi3d4sMpu5IP6FqkfxN003pF gLdLUK5RE2PrQJELSUjs6lKp7pSFMqbzmmb2NWjra/UR4NPQ9ThoXgBhsRiov2tXZgxu JECRfoW28XaYnD9aoNkuMw2iNvr5bfCoVDrwjcjjtUAEd+essDE47dhXpND5v2Gccp+R oI6A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=tttWjAtBQpkJ27E6dJTO6j5NLUnzZ0bw5ZHHFoKnVA0=; b=OgFD2A78hVRLPo/MiEXM1/RmOn0rNOzeA/9sGYX63tWM1wX5O+L5BfOxyTi8A6e412 9fQOYG6CVIHE5afdOYETennSeOrphKQVzW5LzR6OHTF6aJisryxsYSV5fliGvjFSjUtk rNMlRcWnlWdNhzMqqbWc5ZDSGDyXBW0PefNjbE+HUwRPiUi2BPsezkw2uYfTr16rID4o BBa18Spno+gmXKsXkIwS3xQgmlzgvUYGnSOch4oy9PriSoQ/5u5z2vCpExtmb8JwuUAj Ti8vjf34SzjtgE0RWxDNSHk4EXSMnS5WDeF1WzSpJxPVAzrrDXlgBhm311BOeN4qcyeo TAwA== X-Gm-Message-State: AOAM533ljo6WZk3dpYPLJU4tXLpWg6VaMymkVtrjNfRdhYATDflFJiO8 1OhTDOYQiezuN4jugvUAe0gfNSCq9h0= X-Google-Smtp-Source: ABdhPJx0wIzImavcgz6WNC2kw85/dKnC2rstlM3M6u9aU2Db3EM8upzVhqbzxz+TW2P56OtHe3NI5Q== X-Received: by 2002:a17:902:e54f:b029:12b:55c9:3b48 with SMTP id n15-20020a170902e54fb029012b55c93b48mr13020127plf.45.1627271540148; Sun, 25 Jul 2021 20:52:20 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:19 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v1 41/55] KVM: PPC: Book3S HV P9: Demand fault TM facility registers Date: Mon, 26 Jul 2021 13:50:22 +1000 Message-Id: <20210726035036.739609-42-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use HFSCR facility disabling to implement demand faulting for TM, with a hysteresis counter similar to the load_fp etc counters in context switching that implement the equivalent demand faulting for userspace facilities. This speeds up guest entry/exit by avoiding the register save/restore when a guest is not frequently using them. When a guest does use them often, there will be some additional demand fault overhead, but these are not commonly used facilities. -304 cycles (6681) POWER9 virt-mode NULL hcall with the previous patch Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_host.h | 1 + arch/powerpc/kvm/book3s_hv.c | 26 ++++++++++++++++++++------ arch/powerpc/kvm/book3s_hv_p9_entry.c | 25 +++++++++++++++++-------- 3 files changed, 38 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 1c00c4a565f5..74ee3a5b110e 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -581,6 +581,7 @@ struct kvm_vcpu_arch { ulong ppr; u32 pspb; u8 load_ebb; + u8 load_tm; ulong fscr; ulong shadow_fscr; ulong ebbhr; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index dd8199a423cf..5b2114c00c43 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1451,6 +1451,16 @@ static int kvmppc_ebb_unavailable(struct kvm_vcpu *vcpu) return RESUME_GUEST; } +static int kvmppc_tm_unavailable(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.hfscr_permitted & HFSCR_TM)) + return EMULATE_FAIL; + + vcpu->arch.hfscr |= HFSCR_TM; + + return RESUME_GUEST; +} + static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { @@ -1747,6 +1757,8 @@ XXX benchmark guest exits r = kvmppc_pmu_unavailable(vcpu); if (cause == FSCR_EBB_LG) r = kvmppc_ebb_unavailable(vcpu); + if (cause == FSCR_TM_LG) + r = kvmppc_tm_unavailable(vcpu); } if (r == EMULATE_FAIL) { kvmppc_core_queue_program(vcpu, SRR1_PROGILL); @@ -2763,9 +2775,9 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) vcpu->arch.hfscr_permitted = vcpu->arch.hfscr; /* - * PM, EBB is demand-faulted so start with it clear. + * PM, EBB, TM are demand-faulted so start with it clear. 
*/ - vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB); + vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB | HFSCR_TM); kvmppc_mmu_book3s_hv_init(vcpu); @@ -3835,8 +3847,9 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns msr |= MSR_VEC; if (cpu_has_feature(CPU_FTR_VSX)) msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) msr |= MSR_TM; msr = msr_check_and_set(msr); @@ -4552,8 +4565,9 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) msr |= MSR_VEC; if (cpu_has_feature(CPU_FTR_VSX)) msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) msr |= MSR_TM; msr = msr_check_and_set(msr); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index f68a3d107d04..db5eb83e26d1 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -295,10 +295,11 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu, { bool ret = false; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) { unsigned long guest_msr = vcpu->arch.shregs.msr; - if (MSR_TM_ACTIVE(guest_msr)) { + if (MSR_TM_ACTIVE(guest_msr) || local_paca->kvm_hstate.fake_suspend) { kvmppc_restore_tm_hv(vcpu, guest_msr, true); ret = true; } else { @@ -330,15 +331,22 @@ void store_vcpu_state(struct kvm_vcpu *vcpu) #endif vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) { unsigned long guest_msr = vcpu->arch.shregs.msr; - if (MSR_TM_ACTIVE(guest_msr)) { + if (MSR_TM_ACTIVE(guest_msr) || local_paca->kvm_hstate.fake_suspend) { kvmppc_save_tm_hv(vcpu, guest_msr, true); } else { vcpu->arch.texasr = mfspr(SPRN_TEXASR); vcpu->arch.tfhar = mfspr(SPRN_TFHAR); vcpu->arch.tfiar = mfspr(SPRN_TFIAR); + + if (!vcpu->arch.nested) { + vcpu->arch.load_tm++; /* see load_ebb comment */ + if (!vcpu->arch.load_tm) + vcpu->arch.hfscr &= ~HFSCR_TM; + } } } } @@ -629,8 +637,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc msr |= MSR_VEC; if (cpu_has_feature(CPU_FTR_VSX)) msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) msr |= MSR_TM; msr = msr_check_and_set(msr); /* Save MSR for restore. This is after hard disable, so EE is clear. 
*/ From patchwork Mon Jul 26 03:50:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509748 Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au.
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:22 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 42/55] KVM: PPC: Book3S HV P9: Use Linux SPR save/restore to manage some host SPRs Date: Mon, 26 Jul 2021 13:50:23 +1000 Message-Id: <20210726035036.739609-43-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Linux implements SPR save/restore including storage space for registers in the task struct for process context switching. Make use of this similarly to the way we make use of the context switching fp/vec save restore. This improves code reuse, allows some stack space to be saved, and helps with avoiding VRSAVE updates if they are not required. -61 cycles (6620) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/switch_to.h | 2 + arch/powerpc/kernel/process.c | 6 ++ arch/powerpc/kvm/book3s_hv.c | 21 +----- arch/powerpc/kvm/book3s_hv.h | 3 - arch/powerpc/kvm/book3s_hv_p9_entry.c | 93 +++++++++++++++++++-------- 5 files changed, 74 insertions(+), 51 deletions(-) diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h index 9d1fbd8be1c7..de17c45314bc 100644 --- a/arch/powerpc/include/asm/switch_to.h +++ b/arch/powerpc/include/asm/switch_to.h @@ -112,6 +112,8 @@ static inline void clear_task_ebb(struct task_struct *t) #endif } +void kvmppc_save_current_sprs(void); + extern int set_thread_tidr(struct task_struct *t); #endif /* _ASM_POWERPC_SWITCH_TO_H */ diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 00b55b38a460..d54baa3e20d2 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -1180,6 +1180,12 @@ static inline void save_sprs(struct thread_struct *t) #endif } +void kvmppc_save_current_sprs(void) +{ + save_sprs(¤t->thread); +} +EXPORT_SYMBOL_GPL(kvmppc_save_current_sprs); + static inline void restore_sprs(struct thread_struct *old_thread, struct thread_struct *new_thread) { diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 5b2114c00c43..c0a04ce39e00 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4510,9 +4510,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) struct kvm_run *run = vcpu->run; int r; int srcu_idx; - unsigned long ebb_regs[3] = {}; /* shut up GCC */ - unsigned long user_tar = 0; - unsigned int user_vrsave; struct kvm *kvm; unsigned long msr; @@ -4573,14 +4570,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) save_user_regs_kvm(); - /* Save userspace EBB and other register values */ - if (cpu_has_feature(CPU_FTR_ARCH_207S)) { - ebb_regs[0] = mfspr(SPRN_EBBHR); - ebb_regs[1] = mfspr(SPRN_EBBRR); - ebb_regs[2] = mfspr(SPRN_BESCR); - user_tar = mfspr(SPRN_TAR); - } - user_vrsave = mfspr(SPRN_VRSAVE); + kvmppc_save_current_sprs(); vcpu->arch.waitp = &vcpu->arch.vcore->wait; vcpu->arch.pgdir = kvm->mm->pgd; @@ -4621,15 +4611,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) } } while (is_kvmppc_resume_guest(r)); - /* Restore userspace EBB and other register values */ - if (cpu_has_feature(CPU_FTR_ARCH_207S)) { - mtspr(SPRN_EBBHR, ebb_regs[0]); - mtspr(SPRN_EBBRR, 
ebb_regs[1]); - mtspr(SPRN_BESCR, ebb_regs[2]); - mtspr(SPRN_TAR, user_tar); - } - mtspr(SPRN_VRSAVE, user_vrsave); - vcpu->arch.state = KVMPPC_VCPU_NOTREADY; atomic_dec(&kvm->arch.vcpus_running); diff --git a/arch/powerpc/kvm/book3s_hv.h b/arch/powerpc/kvm/book3s_hv.h index a9065a380547..04884e271862 100644 --- a/arch/powerpc/kvm/book3s_hv.h +++ b/arch/powerpc/kvm/book3s_hv.h @@ -3,11 +3,8 @@ * Privileged (non-hypervisor) host registers to save. */ struct p9_host_os_sprs { - unsigned long dscr; - unsigned long tidr; unsigned long iamr; unsigned long amr; - unsigned long fscr; unsigned int pmc1; unsigned int pmc2; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index db5eb83e26d1..5fca0a09425d 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -223,15 +223,26 @@ EXPORT_SYMBOL_GPL(switch_pmu_to_host); static void load_spr_state(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + /* TAR is very fast */ mtspr(SPRN_TAR, vcpu->arch.tar); +#ifdef CONFIG_ALTIVEC + if (cpu_has_feature(CPU_FTR_ALTIVEC) && + current->thread.vrsave != vcpu->arch.vrsave) + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); +#endif + if (vcpu->arch.hfscr & HFSCR_EBB) { - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); + if (current->thread.ebbhr != vcpu->arch.ebbhr) + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + if (current->thread.ebbrr != vcpu->arch.ebbrr) + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + if (current->thread.bescr != vcpu->arch.bescr) + mtspr(SPRN_BESCR, vcpu->arch.bescr); } - if (!cpu_has_feature(CPU_FTR_ARCH_31)) + if (!cpu_has_feature(CPU_FTR_ARCH_31) && + current->thread.tidr != vcpu->arch.tid) mtspr(SPRN_TIDR, vcpu->arch.tid); if (host_os_sprs->iamr != vcpu->arch.iamr) mtspr(SPRN_IAMR, vcpu->arch.iamr); @@ -239,9 +250,9 @@ static void load_spr_state(struct kvm_vcpu *vcpu, mtspr(SPRN_AMR, vcpu->arch.amr); if (vcpu->arch.uamor != 0) mtspr(SPRN_UAMOR, vcpu->arch.uamor); - if (host_os_sprs->fscr != vcpu->arch.fscr) + if (current->thread.fscr != vcpu->arch.fscr) mtspr(SPRN_FSCR, vcpu->arch.fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) + if (current->thread.dscr != vcpu->arch.dscr) mtspr(SPRN_DSCR, vcpu->arch.dscr); if (vcpu->arch.pspb != 0) mtspr(SPRN_PSPB, vcpu->arch.pspb); @@ -261,20 +272,15 @@ static void store_spr_state(struct kvm_vcpu *vcpu) { vcpu->arch.tar = mfspr(SPRN_TAR); +#ifdef CONFIG_ALTIVEC + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); +#endif + if (vcpu->arch.hfscr & HFSCR_EBB) { vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); vcpu->arch.bescr = mfspr(SPRN_BESCR); - /* - * This is like load_fp in context switching, turn off the - * facility after it wraps the u8 to try avoiding saving - * and restoring the registers each partition switch. 
- */ - if (!vcpu->arch.nested) { - vcpu->arch.load_ebb++; - if (!vcpu->arch.load_ebb) - vcpu->arch.hfscr &= ~HFSCR_EBB; - } } if (!cpu_has_feature(CPU_FTR_ARCH_31)) @@ -315,7 +321,6 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu, #ifdef CONFIG_ALTIVEC load_vr_state(&vcpu->arch.vr); #endif - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); return ret; } @@ -329,7 +334,6 @@ void store_vcpu_state(struct kvm_vcpu *vcpu) #ifdef CONFIG_ALTIVEC store_vr_state(&vcpu->arch.vr); #endif - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); if ((cpu_has_feature(CPU_FTR_TM) || cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && @@ -354,12 +358,8 @@ EXPORT_SYMBOL_GPL(store_vcpu_state); void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - host_os_sprs->tidr = mfspr(SPRN_TIDR); host_os_sprs->iamr = mfspr(SPRN_IAMR); host_os_sprs->amr = mfspr(SPRN_AMR); - host_os_sprs->fscr = mfspr(SPRN_FSCR); - host_os_sprs->dscr = mfspr(SPRN_DSCR); } EXPORT_SYMBOL_GPL(save_p9_host_os_sprs); @@ -367,26 +367,63 @@ EXPORT_SYMBOL_GPL(save_p9_host_os_sprs); void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + /* + * current->thread.xxx registers must all be restored to host + * values before a potential context switch, othrewise the context + * switch itself will overwrite current->thread.xxx with the values + * from the guest SPRs. + */ + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - mtspr(SPRN_TIDR, host_os_sprs->tidr); + if (!cpu_has_feature(CPU_FTR_ARCH_31) && + current->thread.tidr != vcpu->arch.tid) + mtspr(SPRN_TIDR, current->thread.tidr); if (host_os_sprs->iamr != vcpu->arch.iamr) mtspr(SPRN_IAMR, host_os_sprs->iamr); if (vcpu->arch.uamor != 0) mtspr(SPRN_UAMOR, 0); if (host_os_sprs->amr != vcpu->arch.amr) mtspr(SPRN_AMR, host_os_sprs->amr); - if (host_os_sprs->fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, host_os_sprs->fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) - mtspr(SPRN_DSCR, host_os_sprs->dscr); + if (current->thread.fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, current->thread.fscr); + if (current->thread.dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, current->thread.dscr); if (vcpu->arch.pspb != 0) mtspr(SPRN_PSPB, 0); /* Save guest CTRL register, set runlatch to 1 */ if (!(vcpu->arch.ctrl & 1)) mtspr(SPRN_CTRLT, 1); + +#ifdef CONFIG_ALTIVEC + if (cpu_has_feature(CPU_FTR_ALTIVEC) && + vcpu->arch.vrsave != current->thread.vrsave) + mtspr(SPRN_VRSAVE, current->thread.vrsave); +#endif + if (vcpu->arch.hfscr & HFSCR_EBB) { + if (vcpu->arch.bescr != current->thread.bescr) + mtspr(SPRN_BESCR, current->thread.bescr); + if (vcpu->arch.ebbhr != current->thread.ebbhr) + mtspr(SPRN_EBBHR, current->thread.ebbhr); + if (vcpu->arch.ebbrr != current->thread.ebbrr) + mtspr(SPRN_EBBRR, current->thread.ebbrr); + + if (!vcpu->arch.nested) { + /* + * This is like load_fp in context switching, turn off + * the facility after it wraps the u8 to try avoiding + * saving and restoring the registers each partition + * switch. 
+ */ + vcpu->arch.load_ebb++; + if (!vcpu->arch.load_ebb) + vcpu->arch.hfscr &= ~HFSCR_EBB; + } + } + + if (vcpu->arch.tar != current->thread.tar) + mtspr(SPRN_TAR, current->thread.tar); } EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs); From patchwork Mon Jul 26 03:50:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509749 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ZXebVxUP; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bZ6CKbz9snk for ; Mon, 26 Jul 2021 13:52:26 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231205AbhGZDL4 (ORCPT ); Sun, 25 Jul 2021 23:11:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53464 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231616AbhGZDL4 (ORCPT ); Sun, 25 Jul 2021 23:11:56 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0274AC061757 for ; Sun, 25 Jul 2021 20:52:25 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id q17-20020a17090a2e11b02901757deaf2c8so12490830pjd.0 for ; Sun, 25 Jul 2021 20:52:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=rHN3RL3Kn4gtidkizBvHwU218V2wDO26g3I1nj+ZzZU=; b=ZXebVxUPRpZWJRSIzunQL1v9ZNlR/v9unJ6D2We+uCwLXOLRmV1CKxV06gdl0M4oAs f1JV/zx4DgMFkHPILQdNm+LHfL1yFS07RPQpjQ2V0xh3J6AVIP+yifZ79ZXNR0BqdA9M xzX9GZEYf0kiVgzo0RDqP5r3LRLm9AisDRNNINvEK5NFUu4xQXwo8BXlyvNQWCj/TEmi gqtHf3OKSQJ0ggCMXCdLU1RexQYSeOWIOrGOGy7a5R1FYeE3SZytQbbPRe2D1Iqi7E6h Jhz6W6Epo78Bht9iA+3mm9t1851yorPUilxn/0Ti4gcKiSsKG12Vq0SiKpp9bUA9o5lV ZoYA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=rHN3RL3Kn4gtidkizBvHwU218V2wDO26g3I1nj+ZzZU=; b=Lkl+Uz0niX895bBb0Djq1rAxqvy4WYNPyxpYKdhJ/1LGafYRg4/05lurHVGaIV0BKK UnjK3SglEZJ8Hr011bY/OC57tiEPOpFHClGg9A/QtIILXf6mfJXo+V2Spd+ASKpcS9tE lc4NfSaHv5Ee7NFqiNm7JH+wOF7CMZi+BF6SOc7vDZX7QgcefW9EU//AKD9K2cU3gYLh sEMvJBlb7ycwEU/VDSPVYDBh1Fn/HsoYxvR540qvVUnKoK2006hGL9sW5xNU72LHuDLS K+fKRV6GnDtUqX+YElG+2UESa8eg8Lf7FBCbok6sTqJzkjxEGY3DgHpastd1l1cd/tH9 JmfQ== X-Gm-Message-State: AOAM5333WflXsSY+0almCrSc4AHXkMYdnm3QoQok+R7Sm7e7gZrBqCwS oZ4mc6dB5UDH9drq4Gl6jQP9cyVzODQ= X-Google-Smtp-Source: ABdhPJzSv71EF3qtG2jx97SNiTNF/lT49k775lPxxecYyeqng2PbysFg9oEU1fj20g9NC0dCPiSigQ== X-Received: by 2002:a17:90a:ea98:: with SMTP id h24mr23846562pjz.7.1627271544517; Sun, 25 Jul 2021 20:52:24 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:24 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 43/55] KVM: PPC: Book3S HV P9: Comment and fix MMU context switching code Date: Mon, 26 Jul 2021 13:50:24 +1000 Message-Id: <20210726035036.739609-44-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Tighten up partition switching code synchronisation and comments. In particular, hwsync ; isync is required after the last access that is performed in the context of a partition, before the partition is switched away from. -301 cycles (6319) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_64_mmu_radix.c | 4 +++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 40 +++++++++++++++++++------- 2 files changed, 33 insertions(+), 11 deletions(-) diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c index b5905ae4377c..c5508744e14c 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -54,6 +54,8 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid, preempt_disable(); + asm volatile("hwsync" ::: "memory"); + isync(); /* switch the lpid first to avoid running host with unallocated pid */ old_lpid = mfspr(SPRN_LPID); if (old_lpid != lpid) @@ -70,6 +72,8 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid, else ret = copy_to_user_nofault((void __user *)to, from, n); + asm volatile("hwsync" ::: "memory"); + isync(); /* switch the pid first to avoid running host with unallocated pid */ if (quadrant == 1 && pid != old_pid) mtspr(SPRN_PID, old_pid); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 5fca0a09425d..0aad2bf29d6e 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -521,17 +521,19 @@ static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u6 lpid = nested ? nested->shadow_lpid : kvm->arch.lpid; /* - * All the isync()s are overkill but trivially follow the ISA - * requirements. Some can likely be replaced with justification - * comment for why they are not needed. + * Prior memory accesses to host PID Q3 must be completed before we + * start switching, and stores must be drained to avoid not-my-LPAR + * logic (see switch_mmu_to_host). */ + asm volatile("hwsync" ::: "memory"); isync(); mtspr(SPRN_LPID, lpid); - isync(); mtspr(SPRN_LPCR, lpcr); - isync(); mtspr(SPRN_PID, vcpu->arch.pid); - isync(); + /* + * isync not required here because we are HRFID'ing to guest before + * any guest context access, which is context synchronising. + */ } static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr) @@ -541,25 +543,41 @@ static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpid = kvm->arch.lpid; + /* + * See switch_mmu_to_guest_radix. ptesync should not be required here + * even if the host is in HPT mode because speculative accesses would + * not cause RC updates (we are in real mode). 
+ */ + asm volatile("hwsync" ::: "memory"); + isync(); mtspr(SPRN_LPID, lpid); mtspr(SPRN_LPCR, lpcr); mtspr(SPRN_PID, vcpu->arch.pid); for (i = 0; i < vcpu->arch.slb_max; i++) mtslb(vcpu->arch.slb[i].orige, vcpu->arch.slb[i].origv); - - isync(); + /* + * isync not required here, see switch_mmu_to_guest_radix. + */ } static void switch_mmu_to_host(struct kvm *kvm, u32 pid) { + /* + * The guest has exited, so guest MMU context is no longer being + * non-speculatively accessed, but a hwsync is needed before the + * mtLPIDR / mtPIDR switch, in order to ensure all stores are drained, + * so the not-my-LPAR tlbie logic does not overlook them. + */ + asm volatile("hwsync" ::: "memory"); isync(); mtspr(SPRN_PID, pid); - isync(); mtspr(SPRN_LPID, kvm->arch.host_lpid); - isync(); mtspr(SPRN_LPCR, kvm->arch.host_lpcr); - isync(); + /* + * isync is not required after the switch, because mtmsrd with L=0 + * is performed after this switch, which is context synchronising. + */ if (!radix_enabled()) slb_restore_bolted_realmode(); From patchwork Mon Jul 26 03:50:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509751 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=jY1YdaiW; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bf1hcjz9tk6 for ; Mon, 26 Jul 2021 13:52:30 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231616AbhGZDL7 (ORCPT ); Sun, 25 Jul 2021 23:11:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53474 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231219AbhGZDL6 (ORCPT ); Sun, 25 Jul 2021 23:11:58 -0400 Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com [IPv6:2607:f8b0:4864:20::633]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1FA14C061757 for ; Sun, 25 Jul 2021 20:52:27 -0700 (PDT) Received: by mail-pl1-x633.google.com with SMTP id e21so5528375pla.5 for ; Sun, 25 Jul 2021 20:52:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HLk6HQ2z18NVVIxNvv7ggfI6nteda2G9xIwBhEx3DRA=; b=jY1YdaiW8IgRfdLESK4f4q/4PvLd8YOSNM0z4I5+GnB93BrdspXXkM38MaNV8dmbv1 43HOOjeCqympcBUjz0ZjAmh5Q26NrTeRX86qRWBDaSLoMV5ua14a1rTawJ59XyWFS7Xf OoQmJfjekfKWvwkN9cLC9LVD8LnFdlxftawKU11q/0xPFsSfa74tAiM7bZTcsPOs6KZQ G1nZlfDIU0AkPbEdD4G/aZd0n0SIGb8UkdvXfO2ECSu49dRNsLa8rzWlsBuuyzc7gWsw S8hvwkvJ2sCOu3zZZxLSJh39I5PyJUUHBcGIk31mU+IeyFD+yuzxaURZ0E4z4KbhceU+ p92Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=HLk6HQ2z18NVVIxNvv7ggfI6nteda2G9xIwBhEx3DRA=; b=rbYGS/0upOz7a2TiVagzF3ecxK51cBzr4wFIHnwheKHRXNLJxkXbgBE0Yr2BBPS/xC JZKY8SuibH0rHXA8kovYbJAPmBrDcFCmUSfGQ6QD3DlDHR3D9gcB+NoMMU2r6fFxefcn 
aeKayq3LUaDoX7pk1UAc9F2uuWVm1LCka3KAKkvcSCxH2wXhqzc02yJeRSEc7tpMjoLX DnOmF7I4VFGvn/Aw8dk9yiTip5p0nE9jSWAiGpL4l06YF2XQcjBZe5CEGVX4Ikt8NiUW nffXu4OszqRi7FajHbTUgC0MNFCEgmFf9YRSxw+m4WB4/Mb34bj13VbZLcgUrV5zsNBO vnvg== X-Gm-Message-State: AOAM533OkZbuhK7++zWEyf7PR7gACL2AIB5iDTkrDHnEz8yTtoxCI6tO 5OvE898cnquQWUXTnBpyHAepjyZk4Gw= X-Google-Smtp-Source: ABdhPJy5Iq0w/nkquumaKPmXNbJJpEX0ODTDzd4tC7tGfKeCS5HCEBavwGpVltJ+qGXCpsT58KKsbQ== X-Received: by 2002:a63:1f5c:: with SMTP id q28mr16263015pgm.114.1627271546679; Sun, 25 Jul 2021 20:52:26 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:26 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 44/55] KVM: PPC: Book3S HV P9: Test dawr_enabled() before saving host DAWR SPRs Date: Mon, 26 Jul 2021 13:50:25 +1000 Message-Id: <20210726035036.739609-45-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Some of the DAWR SPR access is already predicated on dawr_enabled(), apply this to the remainder of the accesses. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 34 ++++++++++++++++----------- 1 file changed, 20 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 0aad2bf29d6e..976687c3709a 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -656,13 +656,16 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); - host_dawr0 = mfspr(SPRN_DAWR0); - host_dawrx0 = mfspr(SPRN_DAWRX0); host_psscr = mfspr(SPRN_PSSCR); host_pidr = mfspr(SPRN_PID); - if (cpu_has_feature(CPU_FTR_DAWR1)) { - host_dawr1 = mfspr(SPRN_DAWR1); - host_dawrx1 = mfspr(SPRN_DAWRX1); + + if (dawr_enabled()) { + host_dawr0 = mfspr(SPRN_DAWR0); + host_dawrx0 = mfspr(SPRN_DAWRX0); + if (cpu_has_feature(CPU_FTR_DAWR1)) { + host_dawr1 = mfspr(SPRN_DAWR1); + host_dawrx1 = mfspr(SPRN_DAWRX1); + } } local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); @@ -996,15 +999,18 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_HFSCR, host_hfscr); if (vcpu->arch.ciabr != host_ciabr) mtspr(SPRN_CIABR, host_ciabr); - if (vcpu->arch.dawr0 != host_dawr0) - mtspr(SPRN_DAWR0, host_dawr0); - if (vcpu->arch.dawrx0 != host_dawrx0) - mtspr(SPRN_DAWRX0, host_dawrx0); - if (cpu_has_feature(CPU_FTR_DAWR1)) { - if (vcpu->arch.dawr1 != host_dawr1) - mtspr(SPRN_DAWR1, host_dawr1); - if (vcpu->arch.dawrx1 != host_dawrx1) - mtspr(SPRN_DAWRX1, host_dawrx1); + + if (dawr_enabled()) { + if (vcpu->arch.dawr0 != host_dawr0) + mtspr(SPRN_DAWR0, host_dawr0); + if (vcpu->arch.dawrx0 != host_dawrx0) + mtspr(SPRN_DAWRX0, host_dawrx0); + if (cpu_has_feature(CPU_FTR_DAWR1)) { + if (vcpu->arch.dawr1 != host_dawr1) + mtspr(SPRN_DAWR1, host_dawr1); + if (vcpu->arch.dawrx1 != host_dawrx1) + mtspr(SPRN_DAWRX1, host_dawrx1); + } } if (vc->dpdes) From patchwork Mon Jul 26 03:50:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Nicholas Piggin X-Patchwork-Id: 1509752 Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au.
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:28 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 45/55] KVM: PPC: Book3S HV P9: Don't restore PSSCR if not needed Date: Mon, 26 Jul 2021 13:50:26 +1000 Message-Id: <20210726035036.739609-46-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This also moves the PSSCR update in nested entry to avoid a SPR scoreboard stall. -45 cycles (6276) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 7 +++++-- arch/powerpc/kvm/book3s_hv_p9_entry.c | 26 +++++++++++++++++++------- 2 files changed, 24 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index c0a04ce39e00..a37ab798eb7c 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3856,7 +3856,9 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) msr = mfmsr(); /* TM restore can update msr */ - mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + if (vcpu->arch.psscr != host_psscr) + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + kvmhv_save_hv_regs(vcpu, &hvregs); hvregs.lpcr = lpcr; vcpu->arch.regs.msr = vcpu->arch.shregs.msr; @@ -3897,7 +3899,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns vcpu->arch.shregs.dar = mfspr(SPRN_DAR); vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); - mtspr(SPRN_PSSCR_PR, host_psscr); store_vcpu_state(vcpu); @@ -3910,6 +3911,8 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns timer_rearm_host_dec(*tb); restore_p9_host_os_sprs(vcpu, &host_os_sprs); + if (vcpu->arch.psscr != host_psscr) + mtspr(SPRN_PSSCR_PR, host_psscr); return trap; } diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 976687c3709a..52690af66ca9 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -639,6 +639,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_dawr0; unsigned long host_dawrx0; unsigned long host_psscr; + unsigned long host_hpsscr; unsigned long host_pidr; unsigned long host_dawr1; unsigned long host_dawrx1; @@ -656,7 +657,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); - host_psscr = mfspr(SPRN_PSSCR); + host_psscr = mfspr(SPRN_PSSCR_PR); + if (cpu_has_feature(CPU_FTRS_POWER9_DD2_2)) + host_hpsscr = mfspr(SPRN_PSSCR); host_pidr = mfspr(SPRN_PID); if (dawr_enabled()) { @@ -740,8 +743,14 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (vcpu->arch.ciabr != host_ciabr) mtspr(SPRN_CIABR, vcpu->arch.ciabr); - mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | - (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + + if (cpu_has_feature(CPU_FTRS_POWER9_DD2_2)) { + mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | + (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + } else { + if 
(vcpu->arch.psscr != host_psscr) + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + } mtspr(SPRN_HFSCR, vcpu->arch.hfscr); @@ -947,7 +956,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ic = mfspr(SPRN_IC); vcpu->arch.pid = mfspr(SPRN_PID); - vcpu->arch.psscr = mfspr(SPRN_PSSCR) & PSSCR_GUEST_VIS; + vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); vcpu->arch.shregs.sprg0 = mfspr(SPRN_SPRG0); vcpu->arch.shregs.sprg1 = mfspr(SPRN_SPRG1); @@ -993,9 +1002,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr); mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr); - /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ - mtspr(SPRN_PSSCR, host_psscr | - (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + if (cpu_has_feature(CPU_FTRS_POWER9_DD2_2)) { + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ + mtspr(SPRN_PSSCR, host_hpsscr | + (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + } + mtspr(SPRN_HFSCR, host_hfscr); if (vcpu->arch.ciabr != host_ciabr) mtspr(SPRN_CIABR, host_ciabr); From patchwork Mon Jul 26 03:50:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509753 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=EMPYL54E; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bj5BNgz9snk for ; Mon, 26 Jul 2021 13:52:33 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231723AbhGZDMD (ORCPT ); Sun, 25 Jul 2021 23:12:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53492 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231219AbhGZDMC (ORCPT ); Sun, 25 Jul 2021 23:12:02 -0400 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 91A7CC061757 for ; Sun, 25 Jul 2021 20:52:31 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id n10so10142241plf.4 for ; Sun, 25 Jul 2021 20:52:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=lXIv8+mtVkMgsv+PEpjVCTZ/uwGBbTyFBO8PdcIpTk0=; b=EMPYL54EwTFULzHCsxaN9dj8RhCUGi3pxeswB+2CG7oKsXJ+7ZZn7y5YXq3ufooioL lGc/lgHe9jEaR4BmzjL9dgtpTe3S8YX/+LjVGy3onWxV47BYuBQ0Yaz9Q+b+Od60QBwA axWvrEHWh4R4q1w5tMi8nLSX5hyVUnkhbpSLNhWa4AU1lg/OZfNm6f+NgoctHmKjkv9J tfXyZ7NffhixUJqTWo4uDMtUdkrqYUJSxerikMnzVuqknbNTteKvYYJ02TB6WLqsr45l cCcjyF96u/7PTfeVfsT0VA3xRUoHchhx6dBTUivhCRvsmxTT9g3RthfzuP4wgDJO2Xny +opA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=lXIv8+mtVkMgsv+PEpjVCTZ/uwGBbTyFBO8PdcIpTk0=; 
b=m2aiqcE2o+ptsTBgh6b5StmOvYXS0XWMsd4mH7fHPWHIQJhQ96z7T1MmcmuMVhqf8N 8eZgTnToGkfxd4ToZc3Qimy9PrgNzT2/fHRMU6MhjIlb2CLgQ8ph9JNc5q01PK6dzEdt ctXiMDOfg43oVBWCbxiH0h851b7+fGc60wZbUsKU/QXkPtOtdMrbz/wi8HkbgwuqFpVA kKWRf01DJZIm4KztLUo0mnnCCJ+ld8YJJAlfn6xoe70wvvm1rCrRzD/6HvdOGSWpo4ey QzIvC9KP5Yv343OcuGoecdOuopaPMOXEj68aEwoY4m5GyHMInR/Cao+ntv80NX1mOhJR 9IJw== X-Gm-Message-State: AOAM530cJscu0/UsHFqQ3+xcjHEDulMD8PUcyMtFOApkuMbfu5Vj5d05 hTgTI3hcqFFQr/wVNdHhNWdfqpFFIJU= X-Google-Smtp-Source: ABdhPJwqtCO47R6gTVGnL6uw3vwhx0CNGShT1EDzg4B1ve2x0F307g9FK9O8aY1/rNRYsxXpHeYiFw== X-Received: by 2002:aa7:874c:0:b029:39a:56d1:6d42 with SMTP id g12-20020aa7874c0000b029039a56d16d42mr2015069pfo.58.1627271551083; Sun, 25 Jul 2021 20:52:31 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:30 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 46/55] KVM: PPC: Book3S HV P9: Avoid tlbsync sequence on radix guest exit Date: Mon, 26 Jul 2021 13:50:27 +1000 Message-Id: <20210726035036.739609-47-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use the existing TLB flushing logic to IPI the previous CPU and run the necessary barriers before running a guest vCPU on a new physical CPU, to do the necessary radix GTSE barriers for handling the case of an interrupted guest tlbie sequence. This results in more IPIs than the TLB flush logic requires, but it's a significant win for common case scheduling when the vCPU remains on the same physical CPU. -522 cycles (5754) POWER9 virt-mode NULL hcall Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 31 +++++++++++++++++++++++---- arch/powerpc/kvm/book3s_hv_p9_entry.c | 9 -------- 2 files changed, 27 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index a37ab798eb7c..3e5c6b745394 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3005,6 +3005,25 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu) smp_call_function_single(i, do_nothing, NULL, 1); } +static void do_migrate_away_vcpu(void *arg) +{ + struct kvm_vcpu *vcpu = arg; + struct kvm *kvm = vcpu->kvm; + + /* + * If the guest has GTSE, it may execute tlbie, so do a eieio; tlbsync; + * ptesync sequence on the old CPU before migrating to a new one, in + * case we interrupted the guest between a tlbie ; eieio ; + * tlbsync; ptesync sequence. + * + * Otherwise, ptesync is sufficient. + */ + if (kvm->arch.lpcr & LPCR_GTSE) + asm volatile("eieio; tlbsync; ptesync"); + else + asm volatile("ptesync"); +} + static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu) { struct kvm_nested_guest *nested = vcpu->arch.nested; @@ -3032,10 +3051,14 @@ static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu) * so we use a single bit in .need_tlb_flush for all 4 threads. 
*/ if (prev_cpu != pcpu) { - if (prev_cpu >= 0 && - cpu_first_tlb_thread_sibling(prev_cpu) != - cpu_first_tlb_thread_sibling(pcpu)) - radix_flush_cpu(kvm, prev_cpu, vcpu); + if (prev_cpu >= 0) { + if (cpu_first_tlb_thread_sibling(prev_cpu) != + cpu_first_tlb_thread_sibling(pcpu)) + radix_flush_cpu(kvm, prev_cpu, vcpu); + + smp_call_function_single(prev_cpu, + do_migrate_away_vcpu, vcpu, 1); + } if (nested) nested->prev_cpu[vcpu->arch.nested_vcpu_id] = pcpu; else diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 52690af66ca9..1bb81be09d4f 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -1039,15 +1039,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; - if (kvm_is_radix(kvm)) { - /* - * Since this is radix, do a eieio; tlbsync; ptesync sequence - * in case we interrupted the guest between a tlbie and a - * ptesync. - */ - asm volatile("eieio; tlbsync; ptesync"); - } - /* * cp_abort is required if the processor supports local copy-paste * to clear the copy buffer that was under control of the guest. From patchwork Mon Jul 26 03:50:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509754 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=BRSoxDop; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bk6jSjz9tkB for ; Mon, 26 Jul 2021 13:52:34 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231725AbhGZDME (ORCPT ); Sun, 25 Jul 2021 23:12:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53506 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231219AbhGZDME (ORCPT ); Sun, 25 Jul 2021 23:12:04 -0400 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com [IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D61B5C061757 for ; Sun, 25 Jul 2021 20:52:33 -0700 (PDT) Received: by mail-pl1-x62e.google.com with SMTP id c11so9968388plg.11 for ; Sun, 25 Jul 2021 20:52:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=uO2q8xALjPGGI79F4bk9pgeAW2H5oBKaXR5DgIXBdNY=; b=BRSoxDopPSvAk2THcmNSXnA1oZeWbmRQD4ntwoyt3nDzlBuPO7FMzbiLuayIFWNVCm ChPhBGyfKEX75npSq5K5Qi++FILtw3ri+602fK5u/bEzdHgxAYz5YjcAozn395bbJz0o yCQ7+P3kwUKkQsMEF/cOX7nFFe3zBognZ3WOZBdkKMr7Ug0W/ISdrKQewnZu/WQY46sF WVpoO0VEySnSeD4XBtxp4q/hUz5s+GfXP/InX4WLupEucB4dzyEl4RcKPLjGblR/laQd PWElyCybjbOTkS33XeCwsTYVRlpd8nuiloINVMEka/uWftlKveDSWtDxB+Sj9lcDhl6i N2Iw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=uO2q8xALjPGGI79F4bk9pgeAW2H5oBKaXR5DgIXBdNY=; b=mxCgCC11G6DZzCiFYeP9oc1gYVnyaZ7gnhqcGrSyHglsKAEF0n8+Wm+jjMETrR17FI WppjuiDp+cfCvbCKrUV4nSiqIyJKJ+QoC1bkLv0eRmhHfC+EwQSzPyl/rLZnUKufwoCi eRd9bT77FYnabe0AMSYOKPuOEJ1N0nV377WygrPF110LoD8jsGxpeT/X8jeGqRruBRtn IpmljdY7QPbNSrZyG068udkxjnkNAtQHVOBtV16CC9XqSUmvlJlJgyeDXu5ptZLrfaTE yzmYBjaVXXsEfcn3Hbjom0YHYBHKSyKYj4rRu1TU1J1n7cpvmprTpp5xgrelXKFkmYbu xVUA== X-Gm-Message-State: AOAM5313cMy5aamfc55Onf5PKlp3vusgE8kk7d/RMR8nVum6yNDJzUyT OA6PmQGnM9j/CCCSGwm0/VjBfGgEd24= X-Google-Smtp-Source: ABdhPJxPrvxDmXyBJyR9M09fENlyIpZ+o/+PW2SxX4P4b7L7K5IQ+P028QCajZoW+tkGd2ZhAWRGWw== X-Received: by 2002:a63:1944:: with SMTP id 4mr16115546pgz.306.1627271553272; Sun, 25 Jul 2021 20:52:33 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:33 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 47/55] KVM: PPC: Book3S HV Nested: Avoid extra mftb() in nested entry Date: Mon, 26 Jul 2021 13:50:28 +1000 Message-Id: <20210726035036.739609-48-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org mftb() is expensive and one can be avoided on nested guest dispatch. If the time checking code distinguishes between the L0 timer and the nested HV timer, then both can be tested in the same place with the same mftb() value. This also nicely illustrates the relationship between the L0 and nested HV timers. 
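As a rough illustration of the check described above, a single timebase read can serve both deadlines. This is a sketch only, with names taken from the hunk below; the helper wrapper itself is an assumption and not part of the patch:

	static int check_timers(u64 tb, u64 next_timer, u64 *time_limit)
	{
		if (tb >= next_timer)			/* L0 (host) decrementer has already expired */
			return BOOK3S_INTERRUPT_HV_DECREMENTER;
		if (next_timer < *time_limit)
			*time_limit = next_timer;	/* L0 timer fires before the nested limit */
		else if (tb >= *time_limit)		/* nested HV decrementer has already expired */
			return BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER;
		return 0;				/* keep running the guest until *time_limit */
	}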
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_asm.h | 1 + arch/powerpc/kvm/book3s_hv.c | 12 ++++++++++++ arch/powerpc/kvm/book3s_hv_nested.c | 5 ----- 3 files changed, 13 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_asm.h b/arch/powerpc/include/asm/kvm_asm.h index fbbf3cec92e9..d68d71987d5c 100644 --- a/arch/powerpc/include/asm/kvm_asm.h +++ b/arch/powerpc/include/asm/kvm_asm.h @@ -79,6 +79,7 @@ #define BOOK3S_INTERRUPT_FP_UNAVAIL 0x800 #define BOOK3S_INTERRUPT_DECREMENTER 0x900 #define BOOK3S_INTERRUPT_HV_DECREMENTER 0x980 +#define BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER 0x1980 #define BOOK3S_INTERRUPT_DOORBELL 0xa00 #define BOOK3S_INTERRUPT_SYSCALL 0xc00 #define BOOK3S_INTERRUPT_TRACE 0xd00 diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 3e5c6b745394..b95e0c5e5557 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1491,6 +1491,10 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, run->ready_for_interrupt_injection = 1; switch (vcpu->arch.trap) { /* We're good on these - the host merely wanted to get our attention */ + case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER: + WARN_ON_ONCE(1); /* Should never happen */ + vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; + fallthrough; case BOOK3S_INTERRUPT_HV_DECREMENTER: vcpu->stat.dec_exits++; r = RESUME_GUEST; @@ -1821,6 +1825,12 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu) vcpu->stat.ext_intr_exits++; r = RESUME_GUEST; break; + /* These need to go to the nested HV */ + case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER: + vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; + vcpu->stat.dec_exits++; + r = RESUME_HOST; + break; /* SR/HMI/PMI are HV interrupts that host has handled. Resume guest.*/ case BOOK3S_INTERRUPT_HMI: case BOOK3S_INTERRUPT_PERFMON: @@ -3955,6 +3965,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, return BOOK3S_INTERRUPT_HV_DECREMENTER; if (next_timer < time_limit) time_limit = next_timer; + else if (*tb >= time_limit) /* nested time limit */ + return BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER; vcpu->arch.ceded = 0; diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index fad7bc8736ea..322064564260 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -397,11 +397,6 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) vcpu->arch.ret = RESUME_GUEST; vcpu->arch.trap = 0; do { - if (mftb() >= hdec_exp) { - vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; - r = RESUME_HOST; - break; - } r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr); } while (is_kvmppc_resume_guest(r)); From patchwork Mon Jul 26 03:50:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509755 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=oFLv9udR; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bn6J36z9t2G for ; Mon, 26 Jul 2021 13:52:37 +1000 
(AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231219AbhGZDMH (ORCPT ); Sun, 25 Jul 2021 23:12:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231738AbhGZDMH (ORCPT ); Sun, 25 Jul 2021 23:12:07 -0400 Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0E3D9C061760 for ; Sun, 25 Jul 2021 20:52:36 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id k4-20020a17090a5144b02901731c776526so17814448pjm.4 for ; Sun, 25 Jul 2021 20:52:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=OQbFsG0YODgf6XN8qLAIvcN1cpU3dQ4AeKIuy4NhO2s=; b=oFLv9udRAW+vE0ofkHGlqRAueSPG7jr6qR1GtUvbQPxUK7iqKJTWHThDPLHhv7k6F/ 9VtzXY7qGawtOJGbtRqKLXpNStCprrwtDGD6glG/aMcCtccQjnkR4ugFsY8oE/QvoGg0 Zig+309a9ehdF6Q4/+99wilKb6L2HyCSbA5IhfbAothDyUwcwhBlj5UuPbzobhMzfMJz vaG+0oj+Xv7jUcP8NHZ05beGOt7BxKTPJxiDirCWVLCPZeHXmNwRTShQMYURVt/Fovc/ BZCM6L5C1DyYWl8pA1t6pEOb5S40e1QcOEx4BK4jP6Hm3TywEz1WkPe0uwI5ROwJNDKa 7zHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=OQbFsG0YODgf6XN8qLAIvcN1cpU3dQ4AeKIuy4NhO2s=; b=gH8OEUtOwFTFCX5kP4kjWIv1laOAO99Q9GfOJlXTZ6MrhSbPgANsTd5IT81c6W3YhH TltiZLbixxer6Ja0cUkfXN6SeSlIdnT5pAtL0clSfaB9/plaMhdv81RRmGWFUPYKkWxf vIDbH1OgWX4wbwtyC5E9XgX+eAcrPaf7zE2i7GH3IsWDiZgiAZwwTc2bdsEhjVrLOjAg dX/Byz7Davx3vFt84S8y1PPkuCPvCmwaswFk5uJzxULnOOJSIC3xDATIj/VKFokzpTi6 FVccqXtOrLFR+I9RvG0P7QVEEAqc8GuZ1Wu/t0+6VfgIvW5IQJyolABpCf5C7Q85PYx8 5F5w== X-Gm-Message-State: AOAM5334Z/NiJPromy/NIkOSOTISxiQE2AHeVhKLVvIR+WxMHIh0wkXB tqBdms0ZZDizSLSqmR7Ko1c/cm0KoS8= X-Google-Smtp-Source: ABdhPJxuBNSLlG5dO2HvigbWX4J+IijKMb0zvB+TKgOLz+hRP+mAAu5aUaEHksUTihEB1NkWOSeYJA== X-Received: by 2002:aa7:8602:0:b029:32d:3e9b:27de with SMTP id p2-20020aa786020000b029032d3e9b27demr15802815pfn.39.1627271555508; Sun, 25 Jul 2021 20:52:35 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:35 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 48/55] KVM: PPC: Book3S HV P9: Improve mfmsr performance on entry Date: Mon, 26 Jul 2021 13:50:29 +1000 Message-Id: <20210726035036.739609-49-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rearrange the MSR saving on entry so it does not follow the mtmsrd to disable interrupts, avoiding a possible RAW scoreboard stall. 
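Sketched out, the entry path now samples the MSR early and performs a single mtmsrd afterwards, so no mfmsr has to wait on an in-flight MSR write. The fragment below is condensed from the hunks that follow; the surrounding declarations are assumed:

	msr = mfmsr();				/* sample MSR well before any mtmsrd */
	save_p9_host_os_sprs(&host_os_sprs);	/* host SPR saves overlap the MSR read */
	msr = kvmppc_msr_hard_disable_set_facilities(vcpu, msr);
	if (lazy_irq_pending())
		return 0;			/* an interrupt arrived while hard-disabling */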
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 2 + arch/powerpc/kvm/book3s_hv.c | 18 ++----- arch/powerpc/kvm/book3s_hv_p9_entry.c | 66 +++++++++++++++--------- 3 files changed, 47 insertions(+), 39 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index 52e2b7a352c7..4b0753e03731 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -154,6 +154,8 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu) return radix; } +unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr); + int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb); #define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */ diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index b95e0c5e5557..ee4e38cf5df4 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3858,6 +3858,8 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns s64 dec; int trap; + msr = mfmsr(); + save_p9_host_os_sprs(&host_os_sprs); /* @@ -3868,24 +3870,10 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns */ host_psscr = mfspr(SPRN_PSSCR_PR); - hard_irq_disable(); + kvmppc_msr_hard_disable_set_facilities(vcpu, msr); if (lazy_irq_pending()) return 0; - /* MSR bits may have been cleared by context switch */ - msr = 0; - if (IS_ENABLED(CONFIG_PPC_FPU)) - msr |= MSR_FP; - if (cpu_has_feature(CPU_FTR_ALTIVEC)) - msr |= MSR_VEC; - if (cpu_has_feature(CPU_FTR_VSX)) - msr |= MSR_VSX; - if ((cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && - (vcpu->arch.hfscr & HFSCR_TM)) - msr |= MSR_TM; - msr = msr_check_and_set(msr); - if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) msr = mfmsr(); /* TM restore can update msr */ diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 1bb81be09d4f..1287dac918a0 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -622,6 +622,44 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) } } +unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr) +{ + unsigned long msr_needed = 0; + + msr &= ~MSR_EE; + + /* MSR bits may have been cleared by context switch so must recheck */ + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr_needed |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr_needed |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr_needed |= MSR_VSX; + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) + msr_needed |= MSR_TM; + + /* + * This could be combined with MSR[RI] clearing, but that expands + * the unrecoverable window. It would be better to cover unrecoverable + * with KVM bad interrupt handling rather than use MSR[RI] at all. + * + * Much more difficult and less worthwhile to combine with IR/DR + * disable. 
+ */ + if ((msr & msr_needed) != msr_needed) { + msr |= msr_needed; + __mtmsrd(msr, 0); + } else { + __hard_irq_disable(); + } + local_paca->irq_happened |= PACA_IRQ_HARD_DIS; + + return msr; +} +EXPORT_SYMBOL_GPL(kvmppc_msr_hard_disable_set_facilities); + int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { struct p9_host_os_sprs host_os_sprs; @@ -655,6 +693,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; + /* Save MSR for restore, with EE clear. */ + msr = mfmsr() & ~MSR_EE; + host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); host_psscr = mfspr(SPRN_PSSCR_PR); @@ -676,35 +717,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc save_p9_host_os_sprs(&host_os_sprs); - /* - * This could be combined with MSR[RI] clearing, but that expands - * the unrecoverable window. It would be better to cover unrecoverable - * with KVM bad interrupt handling rather than use MSR[RI] at all. - * - * Much more difficult and less worthwhile to combine with IR/DR - * disable. - */ - hard_irq_disable(); + msr = kvmppc_msr_hard_disable_set_facilities(vcpu, msr); if (lazy_irq_pending()) { trap = 0; goto out; } - /* MSR bits may have been cleared by context switch */ - msr = 0; - if (IS_ENABLED(CONFIG_PPC_FPU)) - msr |= MSR_FP; - if (cpu_has_feature(CPU_FTR_ALTIVEC)) - msr |= MSR_VEC; - if (cpu_has_feature(CPU_FTR_VSX)) - msr |= MSR_VSX; - if ((cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && - (vcpu->arch.hfscr & HFSCR_TM)) - msr |= MSR_TM; - msr = msr_check_and_set(msr); - /* Save MSR for restore. This is after hard disable, so EE is clear. */ - if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) msr = mfmsr(); /* MSR may have been updated */ From patchwork Mon Jul 26 03:50:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509756 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=TNpHgJIr; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5br4yGfz9tD5 for ; Mon, 26 Jul 2021 13:52:40 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231739AbhGZDMK (ORCPT ); Sun, 25 Jul 2021 23:12:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53526 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231738AbhGZDMK (ORCPT ); Sun, 25 Jul 2021 23:12:10 -0400 Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com [IPv6:2607:f8b0:4864:20::102b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3C18CC061757 for ; Sun, 25 Jul 2021 20:52:38 -0700 (PDT) Received: by mail-pj1-x102b.google.com with SMTP id k4-20020a17090a5144b02901731c776526so17814538pjm.4 for ; Sun, 25 Jul 2021 20:52:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references 
:mime-version:content-transfer-encoding; bh=Ss4SKT1mXpLqh5sHj/o40xs+IZbBpzSCdWo+XRcuK+I=; b=TNpHgJIrduC7LKWTgROEqE0OvQNLCPmPxYxafWgzN9+45UdApS0cAvqJDNcA/HyyLD 2ifIGY7B6lJYPM5k4LxIHgxhofJ7AwIqVwSLJalvx/vY0qkSJfs8zqUTW0glll+/Aw11 EeVU5XMdHgJAo/EgUSicIoq+ZVBorCMBwt2bNgaROtjzdaVTPA38cIj9KSjRM0THouJy CKulW7uyuFe93LSgr+qVvE5xzEWa4Pzuwqkcpu/N29nCy5BwpqlcrZVYh0tWIBCb/Iq3 aH7fKVh7XSTExSVXwADaVACxYg/6CVvMO+pqrviWqDddGj9qfNJt649oXGy7y9gGodbr Mjlg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Ss4SKT1mXpLqh5sHj/o40xs+IZbBpzSCdWo+XRcuK+I=; b=l6NJlCr8A6m/2OHn2Jj0QHctCel3NrL9h0jp5ROt54FXGToKBDBjRWgP100L+1jTlp nGf6NeKtunisxXPPP+giQeWe4Ys5u3acoLk+aB15oxrKxMNyZQBlIoha3e6GQrkRYDfX bTVY3p3F61xh8Oki9SLNuRJwiokMDiPlKj9D7X+gmzF9IuHGhR3QBtql6HNWlXwLEoDO 6ThZefxoh6XuiZ5Pi0dd8MR+DqqjRlR5fnJ0Ox0dOq2codlh3XxAdphXV4u/TOFUy9Q8 830NCh45LajAbB1BokiO4vmbkCOLQ3iSoeCpZLN3PdAJOpwH3tYv5hmjyEtjxHrcE877 oAvQ== X-Gm-Message-State: AOAM5308dMkntrttlVCIvkLetrPTkeaNXQS8Uw/bJzadqZy9+KGjotMZ D7CjgH7uI59uCyh9qrmMTgiRJt8PaPE= X-Google-Smtp-Source: ABdhPJz2PWY2wqBl1nbx6nNLUFWaqLoEMHAWftB5D29WCaNQQxV/uu7OTwoucCw4z7znMCz8lH+3PA== X-Received: by 2002:aa7:93cd:0:b029:328:9d89:a790 with SMTP id y13-20020aa793cd0000b02903289d89a790mr15681898pff.71.1627271557733; Sun, 25 Jul 2021 20:52:37 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:37 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 49/55] KVM: PPC: Book3S HV P9: Optimise hash guest SLB saving Date: Mon, 26 Jul 2021 13:50:30 +1000 Message-Id: <20210726035036.739609-50-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org slbmfee/slbmfev instructions are very expensive, moreso than a regular mfspr instruction, so minimising them significantly improves hash guest exit performance. The slbmfev is only required if slbmfee found a valid SLB entry. 
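The shape of the optimisation is simply "read the cheap ESID side for every index, and only issue the expensive VSID read when the valid bit is set". The following is a standalone sketch of that loop structure, not kernel code: the SLB_V bit, the slb arrays and the read counters are stand-ins for illustration only.

#include <stdio.h>
#include <stdint.h>

#define SLB_V   (1ULL << 27)    /* stand-in for SLB_ESID_V */
#define SLB_NR  32

static uint64_t slb_esid[SLB_NR];
static uint64_t slb_vsid[SLB_NR];
static int cheap_reads, expensive_reads;

static uint64_t read_esid(int i) { cheap_reads++;     return slb_esid[i]; }
static uint64_t read_vsid(int i) { expensive_reads++; return slb_vsid[i]; }

int main(void)
{
    /* Pretend only two SLB entries are valid, as is typical at guest exit. */
    slb_esid[0] = SLB_V | 0x100;  slb_vsid[0] = 0xaaa;
    slb_esid[5] = SLB_V | 0x200;  slb_vsid[5] = 0xbbb;

    int nr = 0;
    for (int i = 0; i < SLB_NR; i++) {
        uint64_t esid = read_esid(i);       /* cheap read, every index */

        if (esid & SLB_V) {
            uint64_t vsid = read_vsid(i);   /* expensive read, valid entries only */
            printf("save slb[%d]: esid=%#llx vsid=%#llx\n", nr++,
                   (unsigned long long)(esid | i), (unsigned long long)vsid);
        }
    }
    printf("%d cheap reads, %d expensive reads avoided\n",
           cheap_reads, SLB_NR - expensive_reads);
    return 0;
}

With only a handful of valid entries present at exit time, almost all of the expensive per-index reads disappear, which is where the claimed improvement comes from.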
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 22 ++++++++++++++++++---- 1 file changed, 18 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 1287dac918a0..338873f90c72 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -477,10 +477,22 @@ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator #define accumulate_time(vcpu, next) do {} while (0) #endif -static inline void mfslb(unsigned int idx, u64 *slbee, u64 *slbev) +static inline u64 mfslbv(unsigned int idx) { - asm volatile("slbmfev %0,%1" : "=r" (*slbev) : "r" (idx)); - asm volatile("slbmfee %0,%1" : "=r" (*slbee) : "r" (idx)); + u64 slbev; + + asm volatile("slbmfev %0,%1" : "=r" (slbev) : "r" (idx)); + + return slbev; +} + +static inline u64 mfslbe(unsigned int idx) +{ + u64 slbee; + + asm volatile("slbmfee %0,%1" : "=r" (slbee) : "r" (idx)); + + return slbee; } static inline void mtslb(u64 slbee, u64 slbev) @@ -610,8 +622,10 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) */ for (i = 0; i < vcpu->arch.slb_nr; i++) { u64 slbee, slbev; - mfslb(i, &slbee, &slbev); + + slbee = mfslbe(i); if (slbee & SLB_ESID_V) { + slbev = mfslbv(i); vcpu->arch.slb[nr].orige = slbee | i; vcpu->arch.slb[nr].origv = slbev; nr++; From patchwork Mon Jul 26 03:50:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509757 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=q0zvUm3T; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bv225Zz9t6h for ; Mon, 26 Jul 2021 13:52:43 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231747AbhGZDMN (ORCPT ); Sun, 25 Jul 2021 23:12:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53534 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231738AbhGZDML (ORCPT ); Sun, 25 Jul 2021 23:12:11 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 759A4C061757 for ; Sun, 25 Jul 2021 20:52:40 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id o44-20020a17090a0a2fb0290176ca3e5a2fso5069958pjo.1 for ; Sun, 25 Jul 2021 20:52:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=uA8TbX3pYs2LbKbx2AqpBXWdMPxg/1XwQyLtHZ8k4pk=; b=q0zvUm3TdOgPeQLfr9sl7pSpHI3OiRrMO74Sa2ndO/muk12RQUeSUujFdcIE80IV7P F4gtTODUo9IguULkuthSrvAqvksQeHi9a2ud4K1wl9wPv5e31dnvzLQznN0gbEBWpZ4y 8AZouMK2qFnGJkyFIeh/6kl3o8Z280aQ5Xx/burOJgp09wXcF/l4LnnVvK/dbXa6kB2B yMyxrW3DQjrEv6eybQeFtlsL83USLO/9JJRVpcH68YkMUm3HCSGT5phXnxq8t9ApUgRO /yJSOpFbSBYNQjTv2IwpcT20cz7LwuvTjrwSKf0r1NApVreyV0AYUk7F5nPiDnAR5y0q I5Hg== X-Google-DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=uA8TbX3pYs2LbKbx2AqpBXWdMPxg/1XwQyLtHZ8k4pk=; b=WKntqklP1Sg5bmX+i+iAnRyFOi7GhnAKDs1r1Bz1+KRJyWVRInYqN4+gjeZt65glAM ttwpqiRhaRN1IzY1N3OlXIfPFNEgm84vpU6aDVsxPrGjU+etrCCSeAEkkZYx6+s0Msm3 LQo7Nhmj07S8HqeMjq/67ME6GZPwSk9h9i5WfgQdgOeOD4gJTPuDg3MoMM9XXOcgft4w UORiTlLFgB1XFv7q9kSUCeuq+2XuYQg9yJnOcffAOsgli2oy73J11n8BlHqx8Q0Rqv6u S/cbKuoz+a7DuTbq/nZneVNfQ3ndWQpKbEnVKIe+QXhFVodUJzzNEu+ge/Tlzs0EBp5N sy1Q== X-Gm-Message-State: AOAM533Img/sXcBX3sJ0vHdc/28E/dBSOyqcf7LjZXMOLVZI9vc0T5JY JJlpT/xqzRGFtEngQs7Gy4Q4Rn+WY/4= X-Google-Smtp-Source: ABdhPJyDzzba2kzswPB+YgoKplhqURknNv8gSWMDKpLSXONGI2J1wXWOnb44s6R5WiFC76hLE8jz6Q== X-Received: by 2002:a63:f712:: with SMTP id x18mr16448939pgh.389.1627271559976; Sun, 25 Jul 2021 20:52:39 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:39 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 50/55] KVM: PPC: Book3S HV P9: Add unlikely annotation for !mmu_ready Date: Mon, 26 Jul 2021 13:50:31 +1000 Message-Id: <20210726035036.739609-51-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The mmu will almost always be ready. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index ee4e38cf5df4..2bd000e2c269 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4376,7 +4376,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vc->runner = vcpu; /* See if the MMU is ready to go */ - if (!kvm->arch.mmu_ready) { + if (unlikely(!kvm->arch.mmu_ready)) { r = kvmhv_setup_mmu(vcpu); if (r) { run->exit_reason = KVM_EXIT_FAIL_ENTRY; From patchwork Mon Jul 26 03:50:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509758 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=dGnLADup; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bw3hnYz9tT8 for ; Mon, 26 Jul 2021 13:52:44 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231749AbhGZDMO (ORCPT ); Sun, 25 Jul 2021 23:12:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53544 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231738AbhGZDMO (ORCPT ); Sun, 25 Jul 2021 23:12:14 -0400 Received: from mail-pj1-x1032.google.com 
(mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A42D9C061757 for ; Sun, 25 Jul 2021 20:52:42 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id j1so11102004pjv.3 for ; Sun, 25 Jul 2021 20:52:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=3Od0aBmiRV7nfGJrUiIs+liiazjhz4hDN8E8M4lUNSM=; b=dGnLADup8PXhp2alX1zwmgnoMeROJhet7wzxDOBYK9Wqrp9siBgyNrnayez9DVnqvA RqM267IWKQjI2WRgpbFc5Hy1acxtprxWFsTLxlR310gwLCMBUw81WaMQt5bATflqCb/V hgjcp35AVGcfqYOxsQnUpHADABCVRCkIPpFO+bXwveW1qS7cAyKM99vJ+1M4dJfr0FvD 5XoCrZmeahmwyITZOHgN6tKdH7uTSPRFIzwPVmvYNqFe9Z5qQ6Kb/M4WGO6KpLdsAawe +Htg08f85C/Ceuarq2Vn+Fbx/96twDLQxn/5cXUah2zOVYg+CZorZadaiYVPKuiE2rP5 bIug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=3Od0aBmiRV7nfGJrUiIs+liiazjhz4hDN8E8M4lUNSM=; b=GE9jYdLB5g7hwhaG4nNSfx/pyczVMbTy+Hf9VSjo87zHy/talb7umQJafA29PapwOD hKsPlJbM9QuqE6EJwED+j5WGeWJIUKrI6wwYJWHGU5mX2E+Hs1eSHsTJW/VGwYX1AL9s PLNU3xHKSpm5nV0qjjkiIyawja/Bi3YKDxT/q4lwMtxjPhj9h7TrpRmOL4El8Wxmrxd2 ZEFu3p8X8PgRHX8zdWa0es2qc9VzcfHVyjELEv5w6qovV3mOAtls1gk5IbKYbNNL+nAA ZvA30lRkKnY0NTu5gRPaN18EJ3dNIvsnF94RwHEAoPjpdlY9pVijqgnhc19g5O51VyRX Z+Gw== X-Gm-Message-State: AOAM533YzXpp3OyhuZLS4dC4h+FZekJ+4VeSG4914j45fBjXrHrVvA1v 6m7DScNj9Z839ruNEexanwShg94gwsk= X-Google-Smtp-Source: ABdhPJx8uSuk+e8Tl7n9R3pE4gRvBsOOzry52L9kpOAZzitiY675t4Edy5Ljuh0bb+zMDA/7gDzhag== X-Received: by 2002:a63:fc02:: with SMTP id j2mr16507162pgi.235.1627271562183; Sun, 25 Jul 2021 20:52:42 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:42 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 51/55] KVM: PPC: Book3S HV P9: Avoid cpu_in_guest atomics on entry and exit Date: Mon, 26 Jul 2021 13:50:32 +1000 Message-Id: <20210726035036.739609-52-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org cpu_in_guest is set to determine if a CPU needs to be IPI'ed to exit the guest and notice the need_tlb_flush bit. This can be implemented as a global per-CPU pointer to the currently running guest instead of per-guest cpumasks, saving 2 atomics per entry/exit. P7/8 doesn't require cpu_in_guest, nor does a nested HV (only the L0 does), so move it to the P9 HV path. 
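The data-structure change can be illustrated outside the kernel: the old scheme pays two atomic read-modify-writes on a shared per-guest mask per entry/exit round trip, while the new scheme is a plain store into a per-CPU slot that the flushing side compares against the guest it is targeting. Below is a minimal sketch; NR_CPUS, struct guest and the function names are invented for illustration, and the real code additionally relies on the hwsync issued when switching to guest MMU mode to order the store against the need_tlb_flush test, which the model omits.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

struct guest {
    /* old scheme: per-guest mask, needs an atomic RMW on entry and on exit */
    atomic_ulong cpu_in_guest_mask;
};

/* new scheme: one slot per CPU, written with a plain store by its owner */
static struct guest *cpu_in_guest[NR_CPUS];

static void old_entry(struct guest *g, int cpu)
{
    atomic_fetch_or(&g->cpu_in_guest_mask, 1UL << cpu);     /* atomic #1 */
}

static void old_exit(struct guest *g, int cpu)
{
    atomic_fetch_and(&g->cpu_in_guest_mask, ~(1UL << cpu)); /* atomic #2 */
}

static void new_entry(struct guest *g, int cpu) { cpu_in_guest[cpu] = g; }
static void new_exit(int cpu)                   { cpu_in_guest[cpu] = NULL; }

/* Flush side: does 'cpu' need an IPI to notice a pending TLB flush for g? */
static bool new_needs_ipi(struct guest *g, int cpu)
{
    return cpu_in_guest[cpu] == g;
}

int main(void)
{
    struct guest g = { 0 };

    old_entry(&g, 3); old_exit(&g, 3);   /* two atomics per round trip */

    new_entry(&g, 3);                    /* plain store */
    printf("cpu3 needs IPI: %d\n", new_needs_ipi(&g, 3));
    new_exit(3);
    printf("cpu3 needs IPI: %d\n", new_needs_ipi(&g, 3));
    return 0;
}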
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 1 - arch/powerpc/include/asm/kvm_host.h | 1 - arch/powerpc/kvm/book3s_hv.c | 38 +++++++++++++----------- 3 files changed, 21 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index 4b0753e03731..793aa2868c3f 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -44,7 +44,6 @@ struct kvm_nested_guest { struct mutex tlb_lock; /* serialize page faults and tlbies */ struct kvm_nested_guest *next; cpumask_t need_tlb_flush; - cpumask_t cpu_in_guest; short prev_cpu[NR_CPUS]; u8 radix; /* is this nested guest radix */ }; diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 74ee3a5b110e..650e1c0d118c 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -288,7 +288,6 @@ struct kvm_arch { u32 online_vcores; atomic_t hpte_mod_interest; cpumask_t need_tlb_flush; - cpumask_t cpu_in_guest; u8 radix; u8 fwnmi_enabled; u8 secure_guest; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 2bd000e2c269..6f29fa7d77cc 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -2989,30 +2989,33 @@ static void kvmppc_release_hwthread(int cpu) tpaca->kvm_hstate.kvm_split_mode = NULL; } +static DEFINE_PER_CPU(struct kvm *, cpu_in_guest); + static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu) { struct kvm_nested_guest *nested = vcpu->arch.nested; - cpumask_t *cpu_in_guest; int i; cpu = cpu_first_tlb_thread_sibling(cpu); - if (nested) { + if (nested) cpumask_set_cpu(cpu, &nested->need_tlb_flush); - cpu_in_guest = &nested->cpu_in_guest; - } else { + else cpumask_set_cpu(cpu, &kvm->arch.need_tlb_flush); - cpu_in_guest = &kvm->arch.cpu_in_guest; - } /* - * Make sure setting of bit in need_tlb_flush precedes - * testing of cpu_in_guest bits. The matching barrier on - * the other side is the first smp_mb() in kvmppc_run_core(). + * Make sure setting of bit in need_tlb_flush precedes testing of + * cpu_in_guest. The matching barrier on the other side is hwsync + * when switching to guest MMU mode, which happens between + * cpu_in_guest being set to the guest kvm, and need_tlb_flush bit + * being tested. 
*/ smp_mb(); for (i = cpu; i <= cpu_last_tlb_thread_sibling(cpu); - i += cpu_tlb_thread_sibling_step()) - if (cpumask_test_cpu(i, cpu_in_guest)) + i += cpu_tlb_thread_sibling_step()) { + struct kvm *running = *per_cpu_ptr(&cpu_in_guest, i); + + if (running == kvm) smp_call_function_single(i, do_nothing, NULL, 1); + } } static void do_migrate_away_vcpu(void *arg) @@ -3080,7 +3083,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) { int cpu; struct paca_struct *tpaca; - struct kvm *kvm = vc->kvm; cpu = vc->pcpu; if (vcpu) { @@ -3091,7 +3093,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) cpu += vcpu->arch.ptid; vcpu->cpu = vc->pcpu; vcpu->arch.thread_cpu = cpu; - cpumask_set_cpu(cpu, &kvm->arch.cpu_in_guest); } tpaca = paca_ptrs[cpu]; tpaca->kvm_hstate.kvm_vcpu = vcpu; @@ -3809,7 +3810,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) kvmppc_release_hwthread(pcpu + i); if (sip && sip->napped[i]) kvmppc_ipi_thread(pcpu + i); - cpumask_clear_cpu(pcpu + i, &vc->kvm->arch.cpu_in_guest); } spin_unlock(&vc->lock); @@ -3977,8 +3977,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } } else { + struct kvm *kvm = vcpu->kvm; + kvmppc_xive_push_vcpu(vcpu); + + __this_cpu_write(cpu_in_guest, kvm); trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb); + __this_cpu_write(cpu_in_guest, NULL); + if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && !(vcpu->arch.shregs.msr & MSR_PR)) { unsigned long req = kvmppc_get_gpr(vcpu, 3); @@ -4003,7 +4009,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } kvmppc_xive_pull_vcpu(vcpu); - if (kvm_is_radix(vcpu->kvm)) + if (kvm_is_radix(kvm)) vcpu->arch.slb_max = 0; } @@ -4468,8 +4474,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, powerpc_local_irq_pmu_restore(flags); - cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest); - preempt_enable(); /* From patchwork Mon Jul 26 03:50:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509759 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=fYWRJi9Q; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5bz2p49z9t6h for ; Mon, 26 Jul 2021 13:52:47 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231754AbhGZDMR (ORCPT ); Sun, 25 Jul 2021 23:12:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53556 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231738AbhGZDMQ (ORCPT ); Sun, 25 Jul 2021 23:12:16 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 018DEC061760 for ; Sun, 25 Jul 2021 20:52:45 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id ch6so2118701pjb.5 for ; Sun, 25 Jul 2021 20:52:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=iUws3UTIX1TvCQYo2LYjoB9sUWO54hgibFdVZfokms4=; b=fYWRJi9Qmh8c7vxOZ2Mj0IA9wn7JYkndfcdC/tjT9rAPlpL6LepCTX8YBvyC52DyZf 3YB/i+T6HF3yfMG6udn04kDM90XRB0lu3HGzl4fk0lr42wvKDBTCi6pYqtqN132Epdcv x4DykmcWnI/Ja6m0fQHNpzX+0vWlvPUVAU0Jjtv+mLpEq8MkHSG2+QEZjbfjJ33XKLXo u5kH5hrUXuXiDyNZ1z3O0APfNyu8nS9adxrk07t3OyW0sB9vFjMXkmHLaoMJoStt4xwD WtVVAOEPypmxuL4zLDrl7JUJSzrPAVWfBBi3AF2OUQAGx2LkFfLNoJdZu4DeIR2l52t7 s/Jw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iUws3UTIX1TvCQYo2LYjoB9sUWO54hgibFdVZfokms4=; b=D8jjWU5p3GA2i+ps3P3emt15gadEEH9Q0N31rl3Sgj06sMKt8Qn6Rvl8NWpHeSTmEe eQLuKH1X1VmOLmXVTYLOvZDiJwpQRbvxRe+ldz3X/5Zbv1DASf19t2MICuuZ6wXsGdua YOTNq1Md8FO3sKOh6XFkTH+9jcWdb43QcUqYe6VwOjxEb2deZqnB3YuP77vyvbSGPfuD xufnYdqJ7ADiZvquxbWlm73n1AcCgS2DntchZjkJD6ffrpBxN8x7X5KWUZng2Kc23yLu 68cCquB0nhLf+Z+NA/xt9DN2bOFna+OQn57xlP/Emy/miaplDCJOfz3z49o4r/8a0aEL 5vvg== X-Gm-Message-State: AOAM53174wlBFd0Lj28aKRCeJIS7ShjMSLXvnZx9IdG0r4ti0KmlI+Iw Kvcrxp07b0wcz2qzAriDkk05QtbY+PI= X-Google-Smtp-Source: ABdhPJw/wTtyRvCoMNn5B6X6n/f2xubB+bOFBGnWOyLXRjHnaW4xpkEWJCf3Uhss5JITW177qQYxMg== X-Received: by 2002:a65:62da:: with SMTP id m26mr16257272pgv.370.1627271564384; Sun, 25 Jul 2021 20:52:44 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:44 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 52/55] KVM: PPC: Book3S HV P9: Remove most of the vcore logic Date: Mon, 26 Jul 2021 13:50:33 +1000 Message-Id: <20210726035036.739609-53-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The P9 path always uses one vcpu per vcore, so none of the the vcore, locks, stolen time, blocking logic, shared waitq, etc., is required. Remove most of it. 
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 147 ++++++++++++++++++++--------------- 1 file changed, 85 insertions(+), 62 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 6f29fa7d77cc..f83ae33e875c 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -281,6 +281,8 @@ static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + spin_lock_irqsave(&vc->stoltb_lock, flags); vc->preempt_tb = tb; spin_unlock_irqrestore(&vc->stoltb_lock, flags); @@ -290,6 +292,8 @@ static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + spin_lock_irqsave(&vc->stoltb_lock, flags); if (vc->preempt_tb != TB_NIL) { vc->stolen_tb += tb - vc->preempt_tb; @@ -302,7 +306,12 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; - u64 now = mftb(); + u64 now; + + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return; + + now = mftb(); /* * We can test vc->runner without taking the vcore lock, @@ -326,7 +335,12 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; - u64 now = mftb(); + u64 now; + + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return; + + now = mftb(); if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING) kvmppc_core_start_stolen(vc, now); @@ -678,6 +692,8 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now) u64 p; unsigned long flags; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + spin_lock_irqsave(&vc->stoltb_lock, flags); p = vc->stolen_tb; if (vc->vcore_state != VCORE_INACTIVE && @@ -700,13 +716,19 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, dt = vcpu->arch.dtl_ptr; vpa = vcpu->arch.vpa.pinned_addr; now = tb; - core_stolen = vcore_stolen_time(vc, now); - stolen = core_stolen - vcpu->arch.stolen_logged; - vcpu->arch.stolen_logged = core_stolen; - spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); - stolen += vcpu->arch.busy_stolen; - vcpu->arch.busy_stolen = 0; - spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); + + if (cpu_has_feature(CPU_FTR_ARCH_300)) { + stolen = 0; + } else { + core_stolen = vcore_stolen_time(vc, now); + stolen = core_stolen - vcpu->arch.stolen_logged; + vcpu->arch.stolen_logged = core_stolen; + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); + stolen += vcpu->arch.busy_stolen; + vcpu->arch.busy_stolen = 0; + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); + } + if (!dt || !vpa) return; memset(dt, 0, sizeof(struct dtl_entry)); @@ -903,13 +925,14 @@ static int kvm_arch_vcpu_yield_to(struct kvm_vcpu *target) * mode handler is not called but no other threads are in the * source vcore. 
*/ - - spin_lock(&vcore->lock); - if (target->arch.state == KVMPPC_VCPU_RUNNABLE && - vcore->vcore_state != VCORE_INACTIVE && - vcore->runner) - target = vcore->runner; - spin_unlock(&vcore->lock); + if (!cpu_has_feature(CPU_FTR_ARCH_300)) { + spin_lock(&vcore->lock); + if (target->arch.state == KVMPPC_VCPU_RUNNABLE && + vcore->vcore_state != VCORE_INACTIVE && + vcore->runner) + target = vcore->runner; + spin_unlock(&vcore->lock); + } return kvm_vcpu_yield_to(target); } @@ -3105,13 +3128,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) kvmppc_ipi_thread(cpu); } -/* Old path does this in asm */ -static void kvmppc_stop_thread(struct kvm_vcpu *vcpu) -{ - vcpu->cpu = -1; - vcpu->arch.thread_cpu = -1; -} - static void kvmppc_wait_for_nap(int n_threads) { int cpu = smp_processor_id(); @@ -3200,6 +3216,8 @@ static void kvmppc_vcore_preempt(struct kvmppc_vcore *vc) { struct preempted_vcore_list *lp = this_cpu_ptr(&preempted_vcores); + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + vc->vcore_state = VCORE_PREEMPT; vc->pcpu = smp_processor_id(); if (vc->num_threads < threads_per_vcore(vc->kvm)) { @@ -3216,6 +3234,8 @@ static void kvmppc_vcore_end_preempt(struct kvmppc_vcore *vc) { struct preempted_vcore_list *lp; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + kvmppc_core_end_stolen(vc, mftb()); if (!list_empty(&vc->preempt_list)) { lp = &per_cpu(preempted_vcores, vc->pcpu); @@ -3944,7 +3964,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { - struct kvmppc_vcore *vc = vcpu->arch.vcore; u64 next_timer; int trap; @@ -3960,9 +3979,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_subcore_enter_guest(); - vc->entry_exit_map = 1; - vc->in_guest = 1; - vcpu_vpa_increment_dispatch(vcpu); if (kvmhv_on_pseries()) { @@ -4015,9 +4031,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - vc->entry_exit_map = 0x101; - vc->in_guest = 0; - kvmppc_subcore_exit_guest(); return trap; @@ -4083,6 +4096,13 @@ static bool kvmppc_vcpu_woken(struct kvm_vcpu *vcpu) return false; } +static bool kvmppc_vcpu_check_block(struct kvm_vcpu *vcpu) +{ + if (!vcpu->arch.ceded || kvmppc_vcpu_woken(vcpu)) + return true; + return false; +} + /* * Check to see if any of the runnable vcpus on the vcore have pending * exceptions or are no longer ceded @@ -4093,7 +4113,7 @@ static int kvmppc_vcore_check_block(struct kvmppc_vcore *vc) int i; for_each_runnable_thread(i, vcpu, vc) { - if (!vcpu->arch.ceded || kvmppc_vcpu_woken(vcpu)) + if (kvmppc_vcpu_check_block(vcpu)) return 1; } @@ -4110,6 +4130,8 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) int do_sleep = 1; u64 block_ns; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + /* Poll for pending exceptions and ceded state */ cur = start_poll = ktime_get(); if (vc->halt_poll_ns) { @@ -4375,11 +4397,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.ceded = 0; vcpu->arch.run_task = current; vcpu->arch.state = KVMPPC_VCPU_RUNNABLE; - vcpu->arch.busy_preempt = TB_NIL; vcpu->arch.last_inst = KVM_INST_FETCH_FAILED; - vc->runnable_threads[0] = vcpu; - vc->n_runnable = 1; - vc->runner = vcpu; /* See if the MMU is ready to go */ if (unlikely(!kvm->arch.mmu_ready)) { @@ -4397,11 +4415,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_update_vpas(vcpu); - 
init_vcore_to_run(vc); - preempt_disable(); pcpu = smp_processor_id(); - vc->pcpu = pcpu; if (kvm_is_radix(kvm)) kvmppc_prepare_radix_vcpu(vcpu, pcpu); @@ -4430,21 +4445,23 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, goto out; } - tb = mftb(); + if (vcpu->arch.timer_running) { + hrtimer_try_to_cancel(&vcpu->arch.dec_timer); + vcpu->arch.timer_running = 0; + } - vcpu->arch.stolen_logged = vcore_stolen_time(vc, tb); - vc->preempt_tb = TB_NIL; + tb = mftb(); - kvmppc_clear_host_core(pcpu); + vcpu->cpu = pcpu; + vcpu->arch.thread_cpu = pcpu; + local_paca->kvm_hstate.kvm_vcpu = vcpu; + local_paca->kvm_hstate.ptid = 0; + local_paca->kvm_hstate.fake_suspend = 0; - local_paca->kvm_hstate.napping = 0; - local_paca->kvm_hstate.kvm_split_mode = NULL; - kvmppc_start_thread(vcpu, vc); + vc->pcpu = pcpu; // for kvmppc_create_dtl_entry kvmppc_create_dtl_entry(vcpu, vc, tb); - trace_kvm_guest_enter(vcpu); - vc->vcore_state = VCORE_RUNNING; - trace_kvmppc_run_core(vc, 0); + trace_kvm_guest_enter(vcpu); guest_enter_irqoff(); @@ -4466,11 +4483,10 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, set_irq_happened(trap); - kvmppc_set_host_core(pcpu); - guest_exit_irqoff(); - kvmppc_stop_thread(vcpu); + vcpu->cpu = -1; + vcpu->arch.thread_cpu = -1; powerpc_local_irq_pmu_restore(flags); @@ -4497,28 +4513,31 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, } vcpu->arch.ret = r; - if (is_kvmppc_resume_guest(r) && vcpu->arch.ceded && - !kvmppc_vcpu_woken(vcpu)) { + if (is_kvmppc_resume_guest(r) && !kvmppc_vcpu_check_block(vcpu)) { kvmppc_set_timer(vcpu); - while (vcpu->arch.ceded && !kvmppc_vcpu_woken(vcpu)) { + + prepare_to_rcuwait(&vcpu->wait); + for (;;) { + set_current_state(TASK_INTERRUPTIBLE); if (signal_pending(current)) { vcpu->stat.signal_exits++; run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; break; } - spin_lock(&vc->lock); - kvmppc_vcore_blocked(vc); - spin_unlock(&vc->lock); + + if (kvmppc_vcpu_check_block(vcpu)) + break; + + trace_kvmppc_vcore_blocked(vc, 0); + schedule(); + trace_kvmppc_vcore_blocked(vc, 1); } + finish_rcuwait(&vcpu->wait); } vcpu->arch.ceded = 0; - vc->vcore_state = VCORE_INACTIVE; - trace_kvmppc_run_core(vc, 1); - done: - kvmppc_remove_runnable(vc, vcpu, tb); trace_kvmppc_run_vcpu_exit(vcpu); return vcpu->arch.ret; @@ -4602,7 +4621,8 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) kvmppc_save_current_sprs(); - vcpu->arch.waitp = &vcpu->arch.vcore->wait; + if (!cpu_has_feature(CPU_FTR_ARCH_300)) + vcpu->arch.waitp = &vcpu->arch.vcore->wait; vcpu->arch.pgdir = kvm->mm->pgd; vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; @@ -5064,6 +5084,9 @@ void kvmppc_alloc_host_rm_ops(void) int cpu, core; int size; + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return; + /* Not the first time here ? 
*/ if (kvmppc_host_rm_ops_hv != NULL) return; From patchwork Mon Jul 26 03:50:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509760 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=SZhCqBve; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5c13kZgz9tjw for ; Mon, 26 Jul 2021 13:52:49 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231738AbhGZDMT (ORCPT ); Sun, 25 Jul 2021 23:12:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53566 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231601AbhGZDMS (ORCPT ); Sun, 25 Jul 2021 23:12:18 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 29D9EC061757 for ; Sun, 25 Jul 2021 20:52:47 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id ds11-20020a17090b08cbb0290172f971883bso17855806pjb.1 for ; Sun, 25 Jul 2021 20:52:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=6yYGFwGtBMz3pGfvtqXbW3QlyXdSr1TV6daCQwMp9Lw=; b=SZhCqBvetmL1PmioiHO/N5X374alSsgY6mxVs3IioJLuPWNFZFVdGZo4a/FYu0jDya Lwqx6VtLSeAYvT3MBT5t/woStLlo4l3j0HbLfazQjkzv7yvZH753U0PhOouDAfZFdvvK VOaZqhXvnY/wxCZKQOuEv6cPFU9Ubz702+wEfiNbtDOTINa0LB8CU6Q7O9s0HT9xg8uw QIDWzAM68pl+VIFWVs7pANtwphiev4VXEY6DCQOwE3mf0AC70OYsQXaGC3gCZU+DhDTk wxrFYR+2a2EgTl6d0xxXkGAn93EWQ3SOnq485y+n1lWOLu6LgPD3dx0lBg1dZohRTLPS R6zw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=6yYGFwGtBMz3pGfvtqXbW3QlyXdSr1TV6daCQwMp9Lw=; b=BQS+RE+xKKnFu99A6U99NsnVVl6TmhfYrfgcKgpVTzZJ2JS569yCgSuYPYadOr7O17 YeicL0BGnZsQtJgaqN+/akBpZI9hb1mx38Kbtrv2HTdVllfcsg/hMC/Zk3Bi+Dngo5PF +3TnZjmoWeUNZEHUE7JCwO+X6ke6y8/reCY0+mLyzJ1DgqUJ0t33LbFcIY400D8qoe1F GgEkAh0L46OrfFdP2b66RTek9iE7QNFSJPuKXAtENMOZ8lbIV492b1RRLHfL10+hNbkk AYmMwEdcNVtVSuAqwoqKBQ+nEZ0hXlVOcAv+UPR2BxOIn9dGyz6ypVHaM0nXrxmIAhWk qyaA== X-Gm-Message-State: AOAM533n7eyDZsOS5UDYMtdNa83rdsJjG3aboddBNny/TDM2K1j3umqi t9jWk6iV14MeiGACwDL3Hb1LStMaiEQ= X-Google-Smtp-Source: ABdhPJzAOT5HxVR12j8Q6CgfMrAjIkX/L3bzZd7tfFwDSWOjCX6MXd1ZvsZYeST1AHwYwZw3P9ZKPw== X-Received: by 2002:a63:1f24:: with SMTP id f36mr16266493pgf.151.1627271566669; Sun, 25 Jul 2021 20:52:46 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. 
[220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:46 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 53/55] KVM: PPC: Book3S HV P9: Tidy kvmppc_create_dtl_entry Date: Mon, 26 Jul 2021 13:50:34 +1000 Message-Id: <20210726035036.739609-54-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This goes further to removing vcores from the P9 path. Also avoid the memset in favour of explicitly initialising all fields. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 61 +++++++++++++++++++++--------------- 1 file changed, 35 insertions(+), 26 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f83ae33e875c..f233ff1c18e1 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -703,41 +703,30 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now) return p; } -static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, - struct kvmppc_vcore *vc, u64 tb) +static void __kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, + unsigned int pcpu, u64 now, + unsigned long stolen) { struct dtl_entry *dt; struct lppaca *vpa; - unsigned long stolen; - unsigned long core_stolen; - u64 now; - unsigned long flags; dt = vcpu->arch.dtl_ptr; vpa = vcpu->arch.vpa.pinned_addr; - now = tb; - - if (cpu_has_feature(CPU_FTR_ARCH_300)) { - stolen = 0; - } else { - core_stolen = vcore_stolen_time(vc, now); - stolen = core_stolen - vcpu->arch.stolen_logged; - vcpu->arch.stolen_logged = core_stolen; - spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); - stolen += vcpu->arch.busy_stolen; - vcpu->arch.busy_stolen = 0; - spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); - } if (!dt || !vpa) return; - memset(dt, 0, sizeof(struct dtl_entry)); + dt->dispatch_reason = 7; - dt->processor_id = cpu_to_be16(vc->pcpu + vcpu->arch.ptid); - dt->timebase = cpu_to_be64(now + vc->tb_offset); + dt->preempt_reason = 0; + dt->processor_id = cpu_to_be16(pcpu + vcpu->arch.ptid); dt->enqueue_to_dispatch_time = cpu_to_be32(stolen); + dt->ready_to_enqueue_time = 0; + dt->waiting_to_ready_time = 0; + dt->timebase = cpu_to_be64(now); + dt->fault_addr = 0; dt->srr0 = cpu_to_be64(kvmppc_get_pc(vcpu)); dt->srr1 = cpu_to_be64(vcpu->arch.shregs.msr); + ++dt; if (dt == vcpu->arch.dtl.pinned_end) dt = vcpu->arch.dtl.pinned_addr; @@ -748,6 +737,27 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, vcpu->arch.dtl.dirty = true; } +static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, + struct kvmppc_vcore *vc) +{ + unsigned long stolen; + unsigned long core_stolen; + u64 now; + unsigned long flags; + + now = mftb(); + + core_stolen = vcore_stolen_time(vc, now); + stolen = core_stolen - vcpu->arch.stolen_logged; + vcpu->arch.stolen_logged = core_stolen; + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); + stolen += vcpu->arch.busy_stolen; + vcpu->arch.busy_stolen = 0; + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); + + __kvmppc_create_dtl_entry(vcpu, vc->pcpu, now + vc->tb_offset, stolen); +} + /* See if there is a doorbell interrupt pending for a vcpu */ static bool kvmppc_doorbell_pending(struct kvm_vcpu *vcpu) { @@ -3730,7 +3740,7 @@ static 
noinline void kvmppc_run_core(struct kvmppc_vcore *vc) pvc->pcpu = pcpu + thr; for_each_runnable_thread(i, vcpu, pvc) { kvmppc_start_thread(vcpu, pvc); - kvmppc_create_dtl_entry(vcpu, pvc, mftb()); + kvmppc_create_dtl_entry(vcpu, pvc); trace_kvm_guest_enter(vcpu); if (!vcpu->arch.ptid) thr0_done = true; @@ -4281,7 +4291,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) if ((vc->vcore_state == VCORE_PIGGYBACK || vc->vcore_state == VCORE_RUNNING) && !VCORE_IS_EXITING(vc)) { - kvmppc_create_dtl_entry(vcpu, vc, mftb()); + kvmppc_create_dtl_entry(vcpu, vc); kvmppc_start_thread(vcpu, vc); trace_kvm_guest_enter(vcpu); } else if (vc->vcore_state == VCORE_SLEEPING) { @@ -4458,8 +4468,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, local_paca->kvm_hstate.ptid = 0; local_paca->kvm_hstate.fake_suspend = 0; - vc->pcpu = pcpu; // for kvmppc_create_dtl_entry - kvmppc_create_dtl_entry(vcpu, vc, tb); + __kvmppc_create_dtl_entry(vcpu, pcpu, tb + vc->tb_offset, 0); trace_kvm_guest_enter(vcpu); From patchwork Mon Jul 26 03:50:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509761 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=rYqGzF2C; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5c30j1yz9tjw for ; Mon, 26 Jul 2021 13:52:51 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231759AbhGZDMV (ORCPT ); Sun, 25 Jul 2021 23:12:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53574 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231601AbhGZDMU (ORCPT ); Sun, 25 Jul 2021 23:12:20 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6188EC061757 for ; Sun, 25 Jul 2021 20:52:49 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id j1so11102246pjv.3 for ; Sun, 25 Jul 2021 20:52:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=4TD+VXbAQkLjU1Tv9nfQp6WJQncSQ+vS8R2ucbT1q4Q=; b=rYqGzF2CXDcmuW2mWG/451LNfWwHC2Bh0vYkWw/oWGlPjqdmNWNI2JdYR4Ix/++TlK OUE/LJym/P3Gi9GZv90usVyZp7pSm6uCql39qE1G32gudX+8vkdFZ7paM3dI1SXNq2jW 6WuULww0raOMIEYWjOscKfKwPPSAtzFdSNYIXGJb4pw9WQLTRwigqa+o312fjmypphLk SFV16Dgdc7b38+EF3T9SLJ40VyOeqoLAWrj1Pd9G9zqYTqtpCR/kAE11IQQpt8wqhqVZ VTdf6dMap6Ijb2TAFfGJWuLmLRsN+5T7hUBZz6ZLPyebvuHeqBT3R5C+iPe9ZBlJ1kRW ktBQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=4TD+VXbAQkLjU1Tv9nfQp6WJQncSQ+vS8R2ucbT1q4Q=; b=azAHsBI/LyAmV8Q6+KYgZkCPB4hr4CePEXJJOyLVzue3DTRNxyyXl9ges5TVzMMuB6 zh4Cm5gEi2X+uK5j3hW3j8dMEzylj9XUdhYkh7kTUFsJqFzZEIFFZNGqfdgDZw0JmbPV 
XPDB3kF6KowUDGU3HBmYd5ul3bmGOZHxrAD+K/BZAMg5iLGKpkevU5yGKmem1EYDIyEY /tFR121ha8dmM0EI+/VhCCtZPATLJzwPjxbXv2CB6bcLFgSD2efo+fpR/8wM0Q4o65ot WGqu7/Kzts+dESdCJwC+6mxoFf47wjtQ6hrj8lHOFxviZKatsa+xSsXY7b0fM+WpXlBv mTwQ== X-Gm-Message-State: AOAM533/r2GnVE+ulMf3clAc6VDwaju9yUeeNqKhHL+CgZpavJXZlJNH ljegz4c8VRnFUA5wpP3JwUO/18vh0no= X-Google-Smtp-Source: ABdhPJyED48ffEzufLC/jgr6X71ytepHIJcTvR+1kUU3ioulHKiztmxFXFUUA4RAdq7/870wICyq3w== X-Received: by 2002:aa7:88d3:0:b029:32b:75d0:fa92 with SMTP id k19-20020aa788d30000b029032b75d0fa92mr15948571pff.23.1627271568854; Sun, 25 Jul 2021 20:52:48 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:48 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 54/55] KVM: PPC: Book3S HV P9: Stop using vc->dpdes Date: Mon, 26 Jul 2021 13:50:35 +1000 Message-Id: <20210726035036.739609-55-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The P9 path uses vc->dpdes only for msgsndp / SMT emulation. This adds an ordering requirement between vcpu->doorbell_request and vc->dpdes for no real benefit. Use vcpu->doorbell_request directly. XXX: verify msgsndp / DPDES emulation works properly. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 18 ++++++++++-------- arch/powerpc/kvm/book3s_hv_builtin.c | 2 ++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 14 ++++++++++---- 3 files changed, 22 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f233ff1c18e1..b727b2cfad98 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -766,6 +766,8 @@ static bool kvmppc_doorbell_pending(struct kvm_vcpu *vcpu) if (vcpu->arch.doorbell_request) return true; + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return false; /* * Ensure that the read of vcore->dpdes comes after the read * of vcpu->doorbell_request. This barrier matches the @@ -2166,8 +2168,10 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, * either vcore->dpdes or doorbell_request. * On POWER8, doorbell_request is 0. 
*/ - *val = get_reg_val(id, vcpu->arch.vcore->dpdes | - vcpu->arch.doorbell_request); + if (cpu_has_feature(CPU_FTR_ARCH_300)) + *val = get_reg_val(id, vcpu->arch.doorbell_request); + else + *val = get_reg_val(id, vcpu->arch.vcore->dpdes); break; case KVM_REG_PPC_VTB: *val = get_reg_val(id, vcpu->arch.vcore->vtb); @@ -2404,7 +2408,10 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, vcpu->arch.pspb = set_reg_val(id, *val); break; case KVM_REG_PPC_DPDES: - vcpu->arch.vcore->dpdes = set_reg_val(id, *val); + if (cpu_has_feature(CPU_FTR_ARCH_300)) + vcpu->arch.doorbell_request = set_reg_val(id, *val) & 1; + else + vcpu->arch.vcore->dpdes = set_reg_val(id, *val); break; case KVM_REG_PPC_VTB: vcpu->arch.vcore->vtb = set_reg_val(id, *val); @@ -4440,11 +4447,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, if (!nested) { kvmppc_core_prepare_to_enter(vcpu); - if (vcpu->arch.doorbell_request) { - vc->dpdes = 1; - smp_wmb(); - vcpu->arch.doorbell_request = 0; - } if (test_bit(BOOK3S_IRQPRIO_EXTERNAL, &vcpu->arch.pending_exceptions)) lpcr |= LPCR_MER; diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c index a10bf93054ca..3ed90149ed2e 100644 --- a/arch/powerpc/kvm/book3s_hv_builtin.c +++ b/arch/powerpc/kvm/book3s_hv_builtin.c @@ -660,6 +660,8 @@ void kvmppc_guest_entry_inject_int(struct kvm_vcpu *vcpu) int ext; unsigned long lpcr; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + /* Insert EXTERNAL bit into LPCR at the MER bit position */ ext = (vcpu->arch.pending_exceptions >> BOOK3S_IRQPRIO_EXTERNAL) & 1; lpcr = mfspr(SPRN_LPCR); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 338873f90c72..032ca6dfd83c 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -695,6 +695,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_pidr; unsigned long host_dawr1; unsigned long host_dawrx1; + unsigned long dpdes; hdec = time_limit - *tb; if (hdec < 0) @@ -757,8 +758,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (vc->pcr) mtspr(SPRN_PCR, vc->pcr | PCR_MASK); - if (vc->dpdes) - mtspr(SPRN_DPDES, vc->dpdes); + if (vcpu->arch.doorbell_request) { + vcpu->arch.doorbell_request = 0; + mtspr(SPRN_DPDES, 1); + } if (dawr_enabled()) { if (vcpu->arch.dawr0 != host_dawr0) @@ -995,7 +998,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); - vc->dpdes = mfspr(SPRN_DPDES); + dpdes = mfspr(SPRN_DPDES); + if (dpdes) + vcpu->arch.doorbell_request = 1; + vc->vtb = mfspr(SPRN_VTB); dec = mfspr(SPRN_DEC); @@ -1057,7 +1063,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc } } - if (vc->dpdes) + if (dpdes) mtspr(SPRN_DPDES, 0); if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); From patchwork Mon Jul 26 03:50:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1509762 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass 
(2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=T4Qgip1K; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GY5c42ylsz9tk6 for ; Mon, 26 Jul 2021 13:52:52 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231766AbhGZDMW (ORCPT ); Sun, 25 Jul 2021 23:12:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53584 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231601AbhGZDMW (ORCPT ); Sun, 25 Jul 2021 23:12:22 -0400 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 85219C061757 for ; Sun, 25 Jul 2021 20:52:51 -0700 (PDT) Received: by mail-pj1-x1034.google.com with SMTP id m1so11127036pjv.2 for ; Sun, 25 Jul 2021 20:52:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=g7tzAmHt9KFCtjyh0DlOhYXuEQ09+ToK2o7s2d30wi4=; b=T4Qgip1KmTTvnYJO1orXDsbOKQ2HY5MjVPhZr3mE8waj56Ds3xJLVFSGfRv7Szdn1y QEzBi1loDZwMIy49XVPf8h/N1fXKHBhgcHelSVF0/AyF2h+2AOjgeWJ4muenI4CbdSqC PvUrTx7RF19wifBUACpAviUfXSzAwe/ZNK0ysFv9S9SpSfVVngyJDrbiZIvfJ3+zvHLo fDBxmKlxQxE14P5J5dsSNY/5p7Mk+7zxjCqagaSpqQWdgGUy3oQ/E3sG9o6w6nrXZiQr rwaARkvRIC259z375gg2N3HbUA2XXq6FvPckc9qiQ+HENWTGj78+lM+7wXhJb2GLk05M aqWA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=g7tzAmHt9KFCtjyh0DlOhYXuEQ09+ToK2o7s2d30wi4=; b=j0p+Otxvdcy0O7I/uWmoRG94zhQjLjlBT6sVE8qtNSCQ04APlb4NqIQ591DiIqeiZA /AWPH/2o9pov+YrGU4fXiyil6skRHxsT9G+W5qP9NTlSOBkkSIXFbDKwrQny8aetiLGu mGuJQeNxtKrar3b7NYmuU8MNOagJzsxvuLG8rPx+0RE9MKlMKtmyel3aw51YIv9lH92J hhEQe4mxmTnAG9iqqJWdSUkKosicXnhcsLYYTuQqmkN/8ysMqPS/5lTX3BK/z5UxGPni KvqdSRzSA+1PICzcB91T9jzcUNdzto9i40j3J1MXYs8x7RfcpT+Qlv8lTXJ6HGbmXfU3 cT2Q== X-Gm-Message-State: AOAM530il2dBQ5i5CtVijuXU77ADQ4O85ESMvMSOjaU/WAoQtKsgx3NC 3wj73PNDsvlOpT2iLlUVYa/ta15DjBM= X-Google-Smtp-Source: ABdhPJzwIM7O9OUMjHW7iRnCSulR7E5dYl4iqmmFRRUkU6vIcCXmbFuSo8vKOF91kEH2jrCedJc7DQ== X-Received: by 2002:a17:90a:f40f:: with SMTP id ch15mr3517977pjb.32.1627271571042; Sun, 25 Jul 2021 20:52:51 -0700 (PDT) Received: from bobo.ibm.com (220-244-190-123.tpgi.com.au. [220.244.190.123]) by smtp.gmail.com with ESMTPSA id p33sm41140341pfw.40.2021.07.25.20.52.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jul 2021 20:52:50 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v1 55/55] KVM: PPC: Book3S HV P9: Remove subcore HMI handling Date: Mon, 26 Jul 2021 13:50:36 +1000 Message-Id: <20210726035036.739609-56-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com> References: <20210726035036.739609-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org On POWER9 and newer, rather than the complex HMI synchronisation and subcore state, have each thread un-apply the guest TB offset before calling into the early HMI handler. 
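The offset is moved with a TBU40-style update, as the hunk in book3s_hv_p9_entry.c later in this patch does: a TBU40 write only replaces the upper 40 bits of the timebase while the low 24 bits keep ticking, so if those running low bits are behind the target's low bits the result would lag the target and the upper bits must be bumped by one unit (0x1000000). A standalone model of that adjustment follows; it is not kernel code, and the tb variable, the mttbu40() helper and the example offset are illustrative only.

#include <stdio.h>
#include <stdint.h>

static uint64_t tb;   /* stand-in for the running timebase (mftb) */

static void mttbu40(uint64_t val)
{
    /* a TBU40 write replaces the top 40 bits; the low 24 bits keep their value */
    tb = (val & ~0xffffffULL) | (tb & 0xffffffULL);
}

static void set_tb(uint64_t new_tb)
{
    mttbu40(new_tb);
    if ((tb & 0xffffff) < (new_tb & 0xffffff)) {
        /* low bits lag the target: bump the upper bits by one unit */
        new_tb += 0x1000000;
        mttbu40(new_tb);
    }
}

int main(void)
{
    uint64_t offset = 0x10000123456ULL;  /* illustrative guest timebase offset */

    tb = 0x123456789abcdeULL;            /* pretend current (guest-adjusted) timebase */

    set_tb(tb - offset);                 /* un-apply before the handler: no bump needed */
    printf("host view  : %#llx\n", (unsigned long long)tb);

    set_tb(tb + offset);                 /* re-apply afterwards: exercises the bump path */
    printf("guest view : %#llx\n", (unsigned long long)tb);
    return 0;
}

Note that the fix-up can only move the result forwards relative to the target value, never backwards.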
This allows the subcore state to be avoided, including subcore enter / exit guest, which includes an expensive divide that shows up slightly in profiles. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 12 +++++----- arch/powerpc/kvm/book3s_hv_hmi.c | 7 +++++- arch/powerpc/kvm/book3s_hv_p9_entry.c | 32 ++++++++++++++++++++++++++- arch/powerpc/kvm/book3s_hv_ras.c | 4 ++++ 4 files changed, 46 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index b727b2cfad98..3f62ada1a669 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3994,8 +3994,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.ceded = 0; - kvmppc_subcore_enter_guest(); - vcpu_vpa_increment_dispatch(vcpu); if (kvmhv_on_pseries()) { @@ -4048,8 +4046,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - kvmppc_subcore_exit_guest(); - return trap; } @@ -6031,9 +6027,11 @@ static int kvmppc_book3s_init_hv(void) if (r) return r; - r = kvm_init_subcore_bitmap(); - if (r) - return r; + if (!cpu_has_feature(CPU_FTR_ARCH_300)) { + r = kvm_init_subcore_bitmap(); + if (r) + return r; + } /* * We need a way of accessing the XICS interrupt controller, diff --git a/arch/powerpc/kvm/book3s_hv_hmi.c b/arch/powerpc/kvm/book3s_hv_hmi.c index 9af660476314..1ec50c69678b 100644 --- a/arch/powerpc/kvm/book3s_hv_hmi.c +++ b/arch/powerpc/kvm/book3s_hv_hmi.c @@ -20,10 +20,15 @@ void wait_for_subcore_guest_exit(void) /* * NULL bitmap pointer indicates that KVM module hasn't - * been loaded yet and hence no guests are running. + * been loaded yet and hence no guests are running, or running + * on POWER9 or newer CPU. + * * If no KVM is in use, no need to co-ordinate among threads * as all of them will always be in host and no one is going * to modify TB other than the opal hmi handler. + * + * POWER9 and newer don't need this synchronisation. + * * Hence, just return from here. */ if (!local_paca->sibling_subcore_state) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 032ca6dfd83c..d23e1ef2e3a7 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -3,6 +3,7 @@ #include #include #include +#include #include #include #include @@ -927,7 +928,36 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc kvmppc_realmode_machine_check(vcpu); } else if (unlikely(trap == BOOK3S_INTERRUPT_HMI)) { - kvmppc_realmode_hmi_handler(); + /* + * Unapply and clear the offset first. That way, if the TB + * was fine then no harm done, if it is corrupted then the + * HMI resync will bring it back to host mode. This way, we + * don't need to actualy know whether not OPAL resynced the + * timebase. Although it would be cleaner if we could rely + * on that, early POWER9 OPAL did not support the + * OPAL_HANDLE_HMI2 call. 
+ */ + if (vc->tb_offset_applied) { + u64 new_tb = mftb() - vc->tb_offset_applied; + mtspr(SPRN_TBU40, new_tb); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + vc->tb_offset_applied = 0; + } + + hmi_exception_realmode(NULL); + + if (vc->tb_offset) { + u64 new_tb = mftb() + vc->tb_offset; + mtspr(SPRN_TBU40, new_tb); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + vc->tb_offset_applied = vc->tb_offset; + } } else if (trap == BOOK3S_INTERRUPT_H_EMUL_ASSIST) { vcpu->arch.emul_inst = mfspr(SPRN_HEIR); diff --git a/arch/powerpc/kvm/book3s_hv_ras.c b/arch/powerpc/kvm/book3s_hv_ras.c index d4bca93b79f6..a49ee9bdab67 100644 --- a/arch/powerpc/kvm/book3s_hv_ras.c +++ b/arch/powerpc/kvm/book3s_hv_ras.c @@ -136,6 +136,10 @@ void kvmppc_realmode_machine_check(struct kvm_vcpu *vcpu) vcpu->arch.mce_evt = mce_evt; } +/* + * This subcore HMI handling is all only for pre-POWER9 CPUs. + */ + /* Check if dynamic split is in force and return subcore size accordingly. */ static inline int kvmppc_cur_subcore_size(void) {