From patchwork Mon Jul 26 03:50:29 2021
X-Patchwork-Submitter: Nicholas Piggin <npiggin@gmail.com>
X-Patchwork-Id: 1509755
From: Nicholas Piggin <npiggin@gmail.com>
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin <npiggin@gmail.com>, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v1 48/55] KVM: PPC: Book3S HV P9: Improve mfmsr performance on entry
Date: Mon, 26 Jul 2021 13:50:29 +1000
Message-Id: <20210726035036.739609-49-npiggin@gmail.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20210726035036.739609-1-npiggin@gmail.com>
References: <20210726035036.739609-1-npiggin@gmail.com>
X-Mailing-List: kvm-ppc@vger.kernel.org

Rearrange the MSR saving on entry so it does not follow the mtmsrd to
disable interrupts, avoiding a possible RAW scoreboard stall.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/kvm_book3s_64.h |  2 +
 arch/powerpc/kvm/book3s_hv.c             | 18 ++-----
 arch/powerpc/kvm/book3s_hv_p9_entry.c    | 66 +++++++++++++++---------
 3 files changed, 47 insertions(+), 39 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 52e2b7a352c7..4b0753e03731 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -154,6 +154,8 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu)
 	return radix;
 }
 
+unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr);
+
 int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb);
 
 #define KVM_DEFAULT_HPT_ORDER	24	/* 16MB HPT by default */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index b95e0c5e5557..ee4e38cf5df4 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3858,6 +3858,8 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
 	s64 dec;
 	int trap;
 
+	msr = mfmsr();
+
 	save_p9_host_os_sprs(&host_os_sprs);
 
 	/*
@@ -3868,24 +3870,10 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
 	 */
 	host_psscr = mfspr(SPRN_PSSCR_PR);
 
-	hard_irq_disable();
+	kvmppc_msr_hard_disable_set_facilities(vcpu, msr);
 	if (lazy_irq_pending())
 		return 0;
 
-	/* MSR bits may have been cleared by context switch */
-	msr = 0;
-	if (IS_ENABLED(CONFIG_PPC_FPU))
-		msr |= MSR_FP;
-	if (cpu_has_feature(CPU_FTR_ALTIVEC))
-		msr |= MSR_VEC;
-	if (cpu_has_feature(CPU_FTR_VSX))
-		msr |= MSR_VSX;
-	if ((cpu_has_feature(CPU_FTR_TM) ||
-	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
-			(vcpu->arch.hfscr & HFSCR_TM))
-		msr |= MSR_TM;
-	msr = msr_check_and_set(msr);
-
 	if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
 		msr = mfmsr(); /* TM restore can update msr */
 
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 1bb81be09d4f..1287dac918a0 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -622,6 +622,44 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu)
 	}
 }
 
+unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr)
+{
+	unsigned long msr_needed = 0;
+
+	msr &= ~MSR_EE;
+
+	/* MSR bits may have been cleared by context switch so must recheck */
+	if (IS_ENABLED(CONFIG_PPC_FPU))
+		msr_needed |= MSR_FP;
+	if (cpu_has_feature(CPU_FTR_ALTIVEC))
+		msr_needed |= MSR_VEC;
+	if (cpu_has_feature(CPU_FTR_VSX))
+		msr_needed |= MSR_VSX;
+	if ((cpu_has_feature(CPU_FTR_TM) ||
+	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
+			(vcpu->arch.hfscr & HFSCR_TM))
+		msr_needed |= MSR_TM;
+
+	/*
+	 * This could be combined with MSR[RI] clearing, but that expands
+	 * the unrecoverable window. It would be better to cover unrecoverable
+	 * with KVM bad interrupt handling rather than use MSR[RI] at all.
+	 *
+	 * Much more difficult and less worthwhile to combine with IR/DR
+	 * disable.
+	 */
+	if ((msr & msr_needed) != msr_needed) {
+		msr |= msr_needed;
+		__mtmsrd(msr, 0);
+	} else {
+		__hard_irq_disable();
+	}
+	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+
+	return msr;
+}
+EXPORT_SYMBOL_GPL(kvmppc_msr_hard_disable_set_facilities);
+
 int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb)
 {
 	struct p9_host_os_sprs host_os_sprs;
@@ -655,6 +693,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
 
 	vcpu->arch.ceded = 0;
 
+	/* Save MSR for restore, with EE clear. */
+	msr = mfmsr() & ~MSR_EE;
+
 	host_hfscr = mfspr(SPRN_HFSCR);
 	host_ciabr = mfspr(SPRN_CIABR);
 	host_psscr = mfspr(SPRN_PSSCR_PR);
@@ -676,35 +717,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
 
 	save_p9_host_os_sprs(&host_os_sprs);
 
-	/*
-	 * This could be combined with MSR[RI] clearing, but that expands
-	 * the unrecoverable window. It would be better to cover unrecoverable
-	 * with KVM bad interrupt handling rather than use MSR[RI] at all.
-	 *
-	 * Much more difficult and less worthwhile to combine with IR/DR
-	 * disable.
-	 */
-	hard_irq_disable();
+	msr = kvmppc_msr_hard_disable_set_facilities(vcpu, msr);
 	if (lazy_irq_pending()) {
 		trap = 0;
 		goto out;
 	}
 
-	/* MSR bits may have been cleared by context switch */
-	msr = 0;
-	if (IS_ENABLED(CONFIG_PPC_FPU))
-		msr |= MSR_FP;
-	if (cpu_has_feature(CPU_FTR_ALTIVEC))
-		msr |= MSR_VEC;
-	if (cpu_has_feature(CPU_FTR_VSX))
-		msr |= MSR_VSX;
-	if ((cpu_has_feature(CPU_FTR_TM) ||
-	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
-			(vcpu->arch.hfscr & HFSCR_TM))
-		msr |= MSR_TM;
-	msr = msr_check_and_set(msr);
-	/* Save MSR for restore. This is after hard disable, so EE is clear. */
-
 	if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
 		msr = mfmsr(); /* MSR may have been updated */
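
Note for readers who want to poke at the facility-bit decision outside the kernel:
the sketch below is a minimal, self-contained C rendering of what the new
kvmppc_msr_hard_disable_set_facilities() helper decides, i.e. do one full MSR
write only when facility bits are missing, otherwise take the cheaper EE-only
hard disable. The MSR bit values and the set_facilities() wrapper are
illustrative stand-ins, not the kernel's definitions, and the real
__mtmsrd()/__hard_irq_disable() calls are left as comments.

	/*
	 * Userspace sketch of the facility-bit decision; bit values are
	 * illustrative only.
	 */
	#include <stdio.h>

	#define MSR_EE	0x8000UL	/* illustrative bit positions */
	#define MSR_FP	0x2000UL
	#define MSR_VEC	0x0800UL
	#define MSR_VSX	0x0080UL

	static unsigned long set_facilities(unsigned long msr, unsigned long msr_needed)
	{
		msr &= ~MSR_EE;		/* interrupts will be hard-disabled */

		if ((msr & msr_needed) != msr_needed) {
			/* Facility bits missing: one MSR write sets them and clears EE. */
			msr |= msr_needed;
			/* __mtmsrd(msr, 0); */
		} else {
			/* Nothing missing: the cheaper EE-only disable is enough. */
			/* __hard_irq_disable(); */
		}
		return msr;		/* image the caller keeps for host restore */
	}

	int main(void)
	{
		unsigned long host_msr = MSR_EE | MSR_FP;	/* FP live, VEC/VSX not */
		unsigned long needed = MSR_FP | MSR_VEC | MSR_VSX;

		printf("new MSR image: %#lx\n", set_facilities(host_msr, needed));
		return 0;
	}

Either path hands back an MSR image with EE clear, which is why the callers in
the patch can save the host MSR with mfmsr() up front, before the mtmsrd, and
avoid re-reading it afterwards.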