Message ID | 150693362232.15210.2878817650741484831.stgit@bahia (mailing list archive)
---|---
State | Not Applicable
Delegated to: | Paul Mackerras
Series | [v2] KVM: PPC: Book3S PR: only install valid SLBs during KVM_SET_SREGS
On Mon, Oct 02, 2017 at 10:40:22AM +0200, Greg Kurz wrote:
> Userland passes an array of 64 SLB descriptors to KVM_SET_SREGS,
> some of which are valid (i.e., SLB_ESID_V is set) and the rest are
> likely all-zeroes (with QEMU at least).
>
> Each of them is then passed to kvmppc_mmu_book3s_64_slbmte(), which
> expects to find the SLB index in the 3 lower bits of its rb argument.
> When passed zeroed arguments, it happily overwrites the 0th SLB entry
> with zeroes. This is exactly what happens while doing live migration
> with QEMU when the destination pushes the incoming SLB descriptors to
> KVM PR. When reloading the SLBs at the next synchronization, QEMU first
> clears its SLB array and only restores valid ones, but the 0th one is
> now gone and we cannot access the corresponding memory anymore:
>
> (qemu) x/x $pc
> c0000000000b742c: Cannot access memory
>
> To avoid this, let's filter out non-valid SLB entries. While here, we
> also force a full SLB flush before installing new entries.
>
> Signed-off-by: Greg Kurz <groug@kaod.org>

Seems sensible to me.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
> v2: - flush SLB before installing new entries
> ---
>  arch/powerpc/kvm/book3s_pr.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index 3beb4ff469d1..7cce08d610ae 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -1327,9 +1327,15 @@ static int kvm_arch_vcpu_ioctl_set_sregs_pr(struct kvm_vcpu *vcpu,
>
>  	vcpu3s->sdr1 = sregs->u.s.sdr1;
>  	if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
> +		/* Flush all SLB entries */
> +		vcpu->arch.mmu.slbmte(vcpu, 0, 0);
> +		vcpu->arch.mmu.slbia(vcpu);
> +
>  		for (i = 0; i < 64; i++) {
> -			vcpu->arch.mmu.slbmte(vcpu, sregs->u.s.ppc64.slb[i].slbv,
> -					      sregs->u.s.ppc64.slb[i].slbe);
> +			u64 rb = sregs->u.s.ppc64.slb[i].slbe;
> +			u64 rs = sregs->u.s.ppc64.slb[i].slbv;
> +			if (rb & SLB_ESID_V)
> +				vcpu->arch.mmu.slbmte(vcpu, rs, rb);
>  		}
>  	} else {
>  		for (i = 0; i < 16; i++) {
Ping?

On Mon, 02 Oct 2017 10:40:22 +0200
Greg Kurz <groug@kaod.org> wrote:

> Userland passes an array of 64 SLB descriptors to KVM_SET_SREGS,
> some of which are valid (i.e., SLB_ESID_V is set) and the rest are
> likely all-zeroes (with QEMU at least).
>
> Each of them is then passed to kvmppc_mmu_book3s_64_slbmte(), which
> expects to find the SLB index in the 3 lower bits of its rb argument.
> When passed zeroed arguments, it happily overwrites the 0th SLB entry
> with zeroes. This is exactly what happens while doing live migration
> with QEMU when the destination pushes the incoming SLB descriptors to
> KVM PR. When reloading the SLBs at the next synchronization, QEMU first
> clears its SLB array and only restores valid ones, but the 0th one is
> now gone and we cannot access the corresponding memory anymore:
>
> (qemu) x/x $pc
> c0000000000b742c: Cannot access memory
>
> To avoid this, let's filter out non-valid SLB entries. While here, we
> also force a full SLB flush before installing new entries.
>
> Signed-off-by: Greg Kurz <groug@kaod.org>
> ---
> v2: - flush SLB before installing new entries
> ---
>  arch/powerpc/kvm/book3s_pr.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index 3beb4ff469d1..7cce08d610ae 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -1327,9 +1327,15 @@ static int kvm_arch_vcpu_ioctl_set_sregs_pr(struct kvm_vcpu *vcpu,
>
>  	vcpu3s->sdr1 = sregs->u.s.sdr1;
>  	if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
> +		/* Flush all SLB entries */
> +		vcpu->arch.mmu.slbmte(vcpu, 0, 0);
> +		vcpu->arch.mmu.slbia(vcpu);
> +
>  		for (i = 0; i < 64; i++) {
> -			vcpu->arch.mmu.slbmte(vcpu, sregs->u.s.ppc64.slb[i].slbv,
> -					      sregs->u.s.ppc64.slb[i].slbe);
> +			u64 rb = sregs->u.s.ppc64.slb[i].slbe;
> +			u64 rs = sregs->u.s.ppc64.slb[i].slbv;
> +			if (rb & SLB_ESID_V)
> +				vcpu->arch.mmu.slbmte(vcpu, rs, rb);
>  		}
>  	} else {
>  		for (i = 0; i < 16; i++) {
On Mon, Oct 02, 2017 at 10:40:22AM +0200, Greg Kurz wrote:
> Userland passes an array of 64 SLB descriptors to KVM_SET_SREGS,
> some of which are valid (i.e., SLB_ESID_V is set) and the rest are
> likely all-zeroes (with QEMU at least).
>
> Each of them is then passed to kvmppc_mmu_book3s_64_slbmte(), which
> expects to find the SLB index in the 3 lower bits of its rb argument.
> When passed zeroed arguments, it happily overwrites the 0th SLB entry
> with zeroes. This is exactly what happens while doing live migration
> with QEMU when the destination pushes the incoming SLB descriptors to
> KVM PR. When reloading the SLBs at the next synchronization, QEMU first
> clears its SLB array and only restores valid ones, but the 0th one is
> now gone and we cannot access the corresponding memory anymore:
>
> (qemu) x/x $pc
> c0000000000b742c: Cannot access memory
>
> To avoid this, let's filter out non-valid SLB entries. While here, we
> also force a full SLB flush before installing new entries.

With this, a 32-bit powermac config with PR KVM enabled fails to build:

  CC [M]  arch/powerpc/kvm/book3s_pr.o
/home/paulus/kernel/kvm/arch/powerpc/kvm/book3s_pr.c: In function ‘kvm_arch_vcpu_ioctl_set_sregs_pr’:
/home/paulus/kernel/kvm/arch/powerpc/kvm/book3s_pr.c:1337:13: error: ‘SLB_ESID_V’ undeclared (first use in this function)
    if (rb & SLB_ESID_V)
             ^
/home/paulus/kernel/kvm/arch/powerpc/kvm/book3s_pr.c:1337:13: note: each undeclared identifier is reported only once for each function it appears in
/home/paulus/kernel/kvm/scripts/Makefile.build:313: recipe for target 'arch/powerpc/kvm/book3s_pr.o' failed
make[3]: *** [arch/powerpc/kvm/book3s_pr.o] Error 1

Paul.
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 3beb4ff469d1..7cce08d610ae 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1327,9 +1327,15 @@ static int kvm_arch_vcpu_ioctl_set_sregs_pr(struct kvm_vcpu *vcpu,

 	vcpu3s->sdr1 = sregs->u.s.sdr1;
 	if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
+		/* Flush all SLB entries */
+		vcpu->arch.mmu.slbmte(vcpu, 0, 0);
+		vcpu->arch.mmu.slbia(vcpu);
+
 		for (i = 0; i < 64; i++) {
-			vcpu->arch.mmu.slbmte(vcpu, sregs->u.s.ppc64.slb[i].slbv,
-					      sregs->u.s.ppc64.slb[i].slbe);
+			u64 rb = sregs->u.s.ppc64.slb[i].slbe;
+			u64 rs = sregs->u.s.ppc64.slb[i].slbv;
+			if (rb & SLB_ESID_V)
+				vcpu->arch.mmu.slbmte(vcpu, rs, rb);
 		}
 	} else {
 		for (i = 0; i < 16; i++) {
Userland passes an array of 64 SLB descriptors to KVM_SET_SREGS, some of
which are valid (i.e., SLB_ESID_V is set) and the rest are likely
all-zeroes (with QEMU at least).

Each of them is then passed to kvmppc_mmu_book3s_64_slbmte(), which
expects to find the SLB index in the 3 lower bits of its rb argument.
When passed zeroed arguments, it happily overwrites the 0th SLB entry
with zeroes. This is exactly what happens while doing live migration
with QEMU when the destination pushes the incoming SLB descriptors to
KVM PR. When reloading the SLBs at the next synchronization, QEMU first
clears its SLB array and only restores valid ones, but the 0th one is
now gone and we cannot access the corresponding memory anymore:

(qemu) x/x $pc
c0000000000b742c: Cannot access memory

To avoid this, let's filter out non-valid SLB entries. While here, we
also force a full SLB flush before installing new entries.

Signed-off-by: Greg Kurz <groug@kaod.org>
---
v2: - flush SLB before installing new entries
---
 arch/powerpc/kvm/book3s_pr.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)