Patchwork [4/5] KVM: PPC: Book3S PR: Invalidate SLB entries properly

Submitter Paul Mackerras
Date June 22, 2013, 7:15 a.m.
Message ID <20130622071524.GE2791@iris.ozlabs.ibm.com>
Permalink /patch/253357/
State New

Comments

Paul Mackerras - June 22, 2013, 7:15 a.m.
At present, if the guest creates a valid SLB (segment lookaside buffer)
entry with the slbmte instruction, then invalidates it with the slbie
instruction, then reads the entry with the slbmfee/slbmfev instructions,
the result of the slbmfee will have the valid bit set, even though the
entry is not actually considered valid by the host.  This is confusing,
if not worse.  This fixes it by zeroing out the orige and origv fields
of the SLB entry structure when the entry is invalidated.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/kvm/book3s_64_mmu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
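The guest-visible sequence the commit message describes can be modeled in a small, self-contained C sketch. This is illustrative only, not kernel code: the struct and field names loosely mirror the kernel's `struct kvmppc_slb`, and the `SLB_ESID_V` bit position is taken from the powerpc headers but should be treated as an assumption here.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical miniature of a guest SLB entry; field names mirror the
 * kernel's struct kvmppc_slb, but this is not kernel code. */
#define SLB_ESID_V (1ULL << 27)   /* valid bit in the ESID word (assumed) */

struct slb_entry {
    bool     valid;   /* host-side validity flag */
    uint64_t orige;   /* ESID word as written by the guest via slbmte */
    uint64_t origv;   /* VSID word as written by the guest via slbmte */
};

/* slbmte: install an entry with the valid bit set */
static void slbmte(struct slb_entry *e, uint64_t esid, uint64_t vsid)
{
    e->valid = true;
    e->orige = esid | SLB_ESID_V;
    e->origv = vsid;
}

/* slbie with the fix applied: clear the saved words along with 'valid',
 * so a later slbmfee/slbmfev reads back all zeroes.  Without the two
 * zeroing lines, slbmfee would still return the stale V bit. */
static void slbie(struct slb_entry *e)
{
    e->valid = false;
    e->orige = 0;
    e->origv = 0;
}

/* slbmfee: read back the ESID word of an entry */
static uint64_t slbmfee(const struct slb_entry *e)
{
    return e->orige;
}
```

With only `e->valid = false` in `slbie()`, the `slbmfee()` read would still report `SLB_ESID_V`, which is exactly the mismatch the patch removes.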
Alexander Graf - June 22, 2013, 5:48 p.m.
On 22.06.2013, at 09:15, Paul Mackerras wrote:

> At present, if the guest creates a valid SLB (segment lookaside buffer)
> entry with the slbmte instruction, then invalidates it with the slbie
> instruction, then reads the entry with the slbmfee/slbmfev instructions,
> the result of the slbmfee will have the valid bit set, even though the
> entry is not actually considered valid by the host.  This is confusing,
> if not worse.  This fixes it by zeroing out the orige and origv fields
> of the SLB entry structure when the entry is invalidated.
> 
> Signed-off-by: Paul Mackerras <paulus@samba.org>

Could you please change this to only remove the V bit from orige? I've found it very useful for debugging to see old, invalidated entries in the SLB when dumping it. The spec declares anything but the toggle of the V bit as undefined.


Alex

> ---
> arch/powerpc/kvm/book3s_64_mmu.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
> index 2e93bb5..7519124 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu.c
> @@ -376,6 +376,8 @@ static void kvmppc_mmu_book3s_64_slbie(struct kvm_vcpu *vcpu, u64 ea)
> 	dprintk("KVM MMU: slbie(0x%llx, 0x%llx)\n", ea, slbe->esid);
> 
> 	slbe->valid = false;
> +	slbe->orige = 0;
> +	slbe->origv = 0;
> 
> 	kvmppc_mmu_map_segment(vcpu, ea);
> }
> @@ -386,8 +388,11 @@ static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
> 
> 	dprintk("KVM MMU: slbia()\n");
> 
> -	for (i = 1; i < vcpu->arch.slb_nr; i++)
> +	for (i = 1; i < vcpu->arch.slb_nr; i++) {
> 		vcpu->arch.slb[i].valid = false;
> +		vcpu->arch.slb[i].orige = 0;
> +		vcpu->arch.slb[i].origv = 0;
> +	}
> 
> 	if (vcpu->arch.shared->msr & MSR_IR) {
> 		kvmppc_mmu_flush_segments(vcpu);
> -- 
> 1.8.3.1
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

Paul Mackerras - June 22, 2013, 11:30 p.m.
On Sat, Jun 22, 2013 at 07:48:05PM +0200, Alexander Graf wrote:
> 
> On 22.06.2013, at 09:15, Paul Mackerras wrote:
> 
> > At present, if the guest creates a valid SLB (segment lookaside buffer)
> > entry with the slbmte instruction, then invalidates it with the slbie
> > instruction, then reads the entry with the slbmfee/slbmfev instructions,
> > the result of the slbmfee will have the valid bit set, even though the
> > entry is not actually considered valid by the host.  This is confusing,
> > if not worse.  This fixes it by zeroing out the orige and origv fields
> > of the SLB entry structure when the entry is invalidated.
> > 
> > Signed-off-by: Paul Mackerras <paulus@samba.org>
> 
> Could you please change this to only remove the V bit from orige? I've found it very useful for debugging to see old, invalidated entries in the SLB when dumping it. The spec declares anything but the toggle of the V bit as undefined.

I did it like this since the architecture (since version 2.03)
specifies that slbmfee and slbmfev both return all zeroes for invalid
entries.  I'm not sure what you mean by your last sentence there.

Paul.
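Paul's reading of the architecture (slbmfee/slbmfev return all zeroes for invalid entries) admits two implementation strategies, sketched below in self-contained, illustrative C. Names are hypothetical, not the kernel's; the point is that zeroing at invalidate time lets the existing unconditional read path stay correct.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative SLB entry; not kernel code. */
struct slb_entry {
    bool     valid;
    uint64_t orige;
    uint64_t origv;
};

/* Strategy 1: check validity on the read side. */
static uint64_t slbmfee_checked(const struct slb_entry *e)
{
    return e->valid ? e->orige : 0;
}

/* Strategy 2 (what the applied patch does): zero the stored words when
 * the entry is invalidated ... */
static void slbie_zeroing(struct slb_entry *e)
{
    e->valid = false;
    e->orige = 0;
    e->origv = 0;
}

/* ... so an unconditional read already satisfies the ISA requirement. */
static uint64_t slbmfee_plain(const struct slb_entry *e)
{
    return e->orige;   /* zero after slbie_zeroing, per ISA v2.03+ */
}
```

The trade-off Alex raises is that strategy 2 discards the stale entry contents, which were handy when dumping the SLB for debugging; strategy 1 would keep them but adds a branch to every read.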
Alexander Graf - June 22, 2013, 11:38 p.m.
On 23.06.2013, at 01:30, Paul Mackerras wrote:

> On Sat, Jun 22, 2013 at 07:48:05PM +0200, Alexander Graf wrote:
>> 
>> On 22.06.2013, at 09:15, Paul Mackerras wrote:
>> 
>>> At present, if the guest creates a valid SLB (segment lookaside buffer)
>>> entry with the slbmte instruction, then invalidates it with the slbie
>>> instruction, then reads the entry with the slbmfee/slbmfev instructions,
>>> the result of the slbmfee will have the valid bit set, even though the
>>> entry is not actually considered valid by the host.  This is confusing,
>>> if not worse.  This fixes it by zeroing out the orige and origv fields
>>> of the SLB entry structure when the entry is invalidated.
>>> 
>>> Signed-off-by: Paul Mackerras <paulus@samba.org>
>> 
>> Could you please change this to only remove the V bit from orige? I've found it very useful for debugging to see old, invalidated entries in the SLB when dumping it. The spec declares anything but the toggle of the V bit as undefined.
> 
> I did it like this since the architecture (since version 2.03)
> specifies that slbmfee and slbmfev both return all zeroes for invalid
> entries.  I'm not sure what you mean by your last sentence there.

Oh, really? I based all of the work back then on 2.01, so maybe that change passed by me unnoticed. But you're right, it's certainly explicitly mentioned in 2.06. Guess this patch is perfectly valid then :).


Thanks, applied to kvm-ppc-queue.


Alex


Patch

diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 2e93bb5..7519124 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -376,6 +376,8 @@  static void kvmppc_mmu_book3s_64_slbie(struct kvm_vcpu *vcpu, u64 ea)
 	dprintk("KVM MMU: slbie(0x%llx, 0x%llx)\n", ea, slbe->esid);
 
 	slbe->valid = false;
+	slbe->orige = 0;
+	slbe->origv = 0;
 
 	kvmppc_mmu_map_segment(vcpu, ea);
 }
@@ -386,8 +388,11 @@  static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
 
 	dprintk("KVM MMU: slbia()\n");
 
-	for (i = 1; i < vcpu->arch.slb_nr; i++)
+	for (i = 1; i < vcpu->arch.slb_nr; i++) {
 		vcpu->arch.slb[i].valid = false;
+		vcpu->arch.slb[i].orige = 0;
+		vcpu->arch.slb[i].origv = 0;
+	}
 
 	if (vcpu->arch.shared->msr & MSR_IR) {
 		kvmppc_mmu_flush_segments(vcpu);
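The slbia hunk above can be sketched as a standalone loop. Note that iteration starts at 1: the ISA (to my reading) defines slbia as leaving SLB entry 0 intact, which is why the kernel loop skips it too. This is an illustrative model, not the kernel function.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative SLB entry; not kernel code. */
struct slb_entry {
    bool     valid;
    uint64_t orige;
    uint64_t origv;
};

/* Sketch of the fixed slbia loop: entry 0 is preserved, and every
 * invalidated entry has its saved words cleared so later
 * slbmfee/slbmfev reads return zero, matching the slbie fix. */
static void slbia(struct slb_entry *slb, unsigned int slb_nr)
{
    for (unsigned int i = 1; i < slb_nr; i++) {
        slb[i].valid = false;
        slb[i].orige = 0;
        slb[i].origv = 0;
    }
}
```

The braces added in the diff matter here: with three statements in the loop body, omitting them would invalidate the `valid` flag but leave `orige`/`origv` untouched after the first iteration's worth of statements.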