
target-ppc: Update slb array with correct index values.

Message ID 1376245011-20008-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com
State New, archived

Commit Message

Aneesh Kumar K.V Aug. 11, 2013, 6:16 p.m. UTC
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Without this, a value of rb=0 and rs=0 results in us replacing the 0th index

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 target-ppc/kvm.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

Comments

Aneesh Kumar K.V Aug. 19, 2013, 7:25 a.m. UTC | #1
Alexander Graf <agraf@suse.de> writes:

> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index
>> 
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>
> Wrong mailing list again ;).

Will post the series again with updated commit message to the qemu list.

>
>> ---
>> target-ppc/kvm.c | 14 ++++++++++++--
>> 1 file changed, 12 insertions(+), 2 deletions(-)
>> 
>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>> index 30a870e..5d4e613 100644
>> --- a/target-ppc/kvm.c
>> +++ b/target-ppc/kvm.c
>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>         /* Sync SLB */
>> #ifdef TARGET_PPC64
>>         for (i = 0; i < 64; i++) {
>> -            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>> -                               sregs.u.s.ppc64.slb[i].slbv);
>> +            target_ulong rb  = sregs.u.s.ppc64.slb[i].slbe;
>> +            /*
>> +             * KVM_GET_SREGS doesn't return SLB entries with the slot
>> +             * information matching the index, so don't depend on the slot
>> +             * information in the returned value.
>
> This is the generating code in book3s_pr.c:
>
>         if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>                 for (i = 0; i < 64; i++) {
>                         sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>                         sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>                 }
>
> Where exactly did you see broken slbe entries?
>

I noticed this while adding support for guest memory dumping via the qemu gdb
server. The array we get back looks like this:

slbe0 slbv0
slbe1 slbv1
0000  00000
0000  00000

With an array like that, when we hit the third value we replace the 0th
entry, that is [slbe0 slbv0]. That resulted in the qemu gdb server failing
to translate the address.
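
For reference, slot selection in ppc_store_slb() works roughly like this (a
simplified sketch, assuming the slot index is taken from the low bits of rb
just as for slbmte; this is not the exact mmu-hash64 code), which is why
every all-zero entry lands on slot 0 again:

    int ppc_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
    {
        int slot = rb & 0xfff;               /* slot index comes from the low bits of rb */
        ppc_slb_t *slb = &env->slb[slot];    /* so rb == 0 always selects slot 0 */

        slb->esid = rb & (SLB_ESID_ESID | SLB_ESID_V);
        slb->vsid = rs;
        return 0;
    }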


-aneesh

Alexander Graf Aug. 19, 2013, 8:21 a.m. UTC | #2
On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:

> Alexander Graf <agraf@suse.de> writes:
> 
>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>> 
>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>> 
>>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index
>>> 
>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> Wrong mailing list again ;).
> 
> Will post the series again with updated commit message to the qemu list.
> 
>> 
>>> ---
>>> target-ppc/kvm.c | 14 ++++++++++++--
>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>> 
>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>> index 30a870e..5d4e613 100644
>>> --- a/target-ppc/kvm.c
>>> +++ b/target-ppc/kvm.c
>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>>        /* Sync SLB */
>>> #ifdef TARGET_PPC64
>>>        for (i = 0; i < 64; i++) {
>>> -            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>> -                               sregs.u.s.ppc64.slb[i].slbv);
>>> +            target_ulong rb  = sregs.u.s.ppc64.slb[i].slbe;
>>> +            /*
>>> +             * KVM_GET_SREGS doesn't return SLB entries with the slot
>>> +             * information matching the index, so don't depend on the slot
>>> +             * information in the returned value.
>> 
>> This is the generating code in book3s_pr.c:
>> 
>>        if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>>                for (i = 0; i < 64; i++) {
>>                        sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>>                        sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>>                }
>> 
>> Where exactly did you see broken slbe entries?
>> 
> 
> I noticed this when adding support for guest memory dumping via qemu gdb
> server. Now the array we get would look like below
> 
> slbe0 slbv0
> slbe1 slbv1
> 0000  00000
> 0000  00000

Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.

Are you seeing this with PR KVM or HV KVM?


Alex

> Once we get an array like that when we hit the third value we will
> replace the 0th entry, that is [slbe0 slbv0]. That resulted in failed
> translation of the address by qemu gdb server. 
> 
> 
> -aneesh
> 

Aneesh Kumar K.V Aug. 20, 2013, 1:57 p.m. UTC | #3
Alexander Graf <agraf@suse.de> writes:

> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>
>> Alexander Graf <agraf@suse.de> writes:
>> 
>>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>>> 
>>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>>> 
>>>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index
>>>> 
>>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>>> 
>>> Wrong mailing list again ;).
>> 
>> Will post the series again with updated commit message to the qemu list.
>> 
>>> 
>>>> ---
>>>> target-ppc/kvm.c | 14 ++++++++++++--
>>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>>> 
>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>> index 30a870e..5d4e613 100644
>>>> --- a/target-ppc/kvm.c
>>>> +++ b/target-ppc/kvm.c
>>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>>>        /* Sync SLB */
>>>> #ifdef TARGET_PPC64
>>>>        for (i = 0; i < 64; i++) {
>>>> -            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>>> -                               sregs.u.s.ppc64.slb[i].slbv);
>>>> +            target_ulong rb  = sregs.u.s.ppc64.slb[i].slbe;
>>>> +            /*
>>>> +             * KVM_GET_SREGS doesn't return SLB entries with the slot
>>>> +             * information matching the index, so don't depend on the slot
>>>> +             * information in the returned value.
>>> 
>>> This is the generating code in book3s_pr.c:
>>> 
>>>        if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>>>                for (i = 0; i < 64; i++) {
>>>                        sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>>>                        sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>>>                }
>>> 
>>> Where exactly did you see broken slbe entries?
>>> 
>> 
>> I noticed this when adding support for guest memory dumping via qemu gdb
>> server. Now the array we get would look like below
>> 
>> slbe0 slbv0
>> slbe1 slbv1
>> 0000  00000
>> 0000  00000
>
> Ok, so that's where the problem lies. Why are the entries 0 here?
> Either we try to fetch more entries than we should, we populate
> entries incorrectly or the kernel simply returns invalid SLB entry
> values for invalid entries.


The ioctl zeroes out sregs and fills in only slb_max entries, so everything
above slb_max is zero-filled. We also don't pass slb_max to user space, so
userspace has to look at all 64 entries.
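
For reference, the HV-side fill looks roughly like the following (a sketch of
kvm_arch_vcpu_ioctl_get_sregs_hv() in book3s_hv.c; the function and field
names here are assumptions, not quoted from a particular tree):

    static int kvm_arch_vcpu_ioctl_get_sregs_hv(struct kvm_vcpu *vcpu,
                                                struct kvm_sregs *sregs)
    {
        int i;

        memset(sregs, 0, sizeof(struct kvm_sregs));    /* entries >= slb_max stay zero */
        sregs->pvr = vcpu->arch.pvr;
        for (i = 0; i < vcpu->arch.slb_max; i++) {
            /* the slot index is already folded into the low bits of orige */
            sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige;
            sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
        }
        return 0;
    }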


>
> Are you seeing this with PR KVM or HV KVM?
>

HV KVM

-aneesh

Paul Mackerras Aug. 21, 2013, 5:11 a.m. UTC | #4
On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
> 
> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
> > I noticed this when adding support for guest memory dumping via qemu gdb
> > server. Now the array we get would look like below
> > 
> > slbe0 slbv0
> > slbe1 slbv1
> > 0000  00000
> > 0000  00000
> 
> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
> 
> Are you seeing this with PR KVM or HV KVM?

I suspect this is to do with the fact that PR and HV KVM use the
vcpu->arch.slb[] array differently.  PR stores SLB entry n in
vcpu->arch.slb[n], whereas HV packs the valid entries down in the
low-numbered entries and puts the index in the bottom bits of the esid
field (this is so they can be loaded efficiently with the slbmte
instruction on guest entry).

Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
(valid or not) and puts an index value in the bottom bits of the esid,
whereas on HV it just copies out the valid entries (which already have
the index in the esid field).

So, the question is, what is the ABI here?  It sounds a bit like qemu
is ignoring the index value in the esid field.  Either qemu needs to
take notice of the index in the esid field or we need to change the HV
versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
sregs->u.s.ppc64.slb[n] like PR does.

Paul.

Alexander Graf Aug. 21, 2013, 7:37 a.m. UTC | #5
On 21.08.2013, at 06:11, Paul Mackerras wrote:

> On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
>> 
>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>> I noticed this when adding support for guest memory dumping via qemu gdb
>>> server. Now the array we get would look like below
>>> 
>>> slbe0 slbv0
>>> slbe1 slbv1
>>> 0000  00000
>>> 0000  00000
>> 
>> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
>> 
>> Are you seeing this with PR KVM or HV KVM?
> 
> I suspect this is to do with the fact that PR and HV KVM use the
> vcpu->arch.slb[] array differently.  PR stores SLB entry n in
> vcpu->arch.slb[n], whereas HV packs the valid entries down in the
> low-numbered entries and puts the index in the bottom bits of the esid
> field (this is so they can be loaded efficiently with the slbmte
> instruction on guest entry).
> 
> Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
> (valid or not) and puts an index value in the bottom bits of the esid,
> whereas on HV it just copies out the valid entries (which already have
> the index in the esid field).
> 
> So, the question is, what is the ABI here?  It sounds a bit like qemu
> is ignoring the index value in the esid field.  Either qemu needs to
> take notice of the index in the esid field or we need to change the HV
> versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
> sregs->u.s.ppc64.slb[n] like PR does.

It's the opposite today - QEMU does honor the index value on sregs get. Aneesh's patch wants to change it to ignore it instead. For sregs set we copy our internal copy of the slb linearly into the array, so we don't pack it there.

Can we safely assume on HV KVM that esid == 0 vsid == 0 is the end of the list? If so, we can just add a break statement in the get loop and call it a day. The rest should work just fine.
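
Concretely, that would be something along these lines in the
kvm_arch_get_registers() loop (just a sketch of the idea; it is only correct
if an all-zero pair really does mark the end of the list):

        for (i = 0; i < 64; i++) {
            if (!sregs.u.s.ppc64.slb[i].slbe && !sregs.u.s.ppc64.slb[i].slbv) {
                break;    /* HV packs valid entries first, so all-zero means end of list */
            }
            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
                               sregs.u.s.ppc64.slb[i].slbv);
        }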


Alex

Alexander Graf Aug. 21, 2013, 7:42 a.m. UTC | #6
On 21.08.2013, at 08:37, Alexander Graf wrote:

> 
> On 21.08.2013, at 06:11, Paul Mackerras wrote:
> 
>> On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
>>> 
>>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>>> I noticed this when adding support for guest memory dumping via qemu gdb
>>>> server. Now the array we get would look like below
>>>> 
>>>> slbe0 slbv0
>>>> slbe1 slbv1
>>>> 0000  00000
>>>> 0000  00000
>>> 
>>> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
>>> 
>>> Are you seeing this with PR KVM or HV KVM?
>> 
>> I suspect this is to do with the fact that PR and HV KVM use the
>> vcpu->arch.slb[] array differently.  PR stores SLB entry n in
>> vcpu->arch.slb[n], whereas HV packs the valid entries down in the
>> low-numbered entries and puts the index in the bottom bits of the esid
>> field (this is so they can be loaded efficiently with the slbmte
>> instruction on guest entry).
>> 
>> Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
>> (valid or not) and puts an index value in the bottom bits of the esid,
>> whereas on HV it just copies out the valid entries (which already have
>> the index in the esid field).
>> 
>> So, the question is, what is the ABI here?  

Oh, and to answer your question here: The original intent was to copy the SLB array 1:1 from kvm to user space. But that's moot by now, since we already have kvm versions out there that return your packed format :).


Alex

>> It sounds a bit like qemu
>> is ignoring the index value in the esid field.  Either qemu needs to
>> take notice of the index in the esid field or we need to change the HV
>> versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
>> sregs->u.s.ppc64.slb[n] like PR does.
> 
> It's the opposite today - QEMU does honor the index value on sregs get. Aneesh's patch wants to change it to ignore it instead. For sregs set we copy our internal copy of the slb linearly into the array, so we don't pack it there.
> 
> Can we safely assume on HV KVM that esid == 0 vsid == 0 is the end of the list? If so, we can just add a break statement in the get loop and call it a day. The rest should work just fine.
> 
> 
> Alex
> 

Paul Mackerras Aug. 21, 2013, 9:25 a.m. UTC | #7
On Wed, Aug 21, 2013 at 08:37:47AM +0100, Alexander Graf wrote:
> 
> On 21.08.2013, at 06:11, Paul Mackerras wrote:
> 
> > On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
> >> 
> >> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
> >>> I noticed this when adding support for guest memory dumping via qemu gdb
> >>> server. Now the array we get would look like below
> >>> 
> >>> slbe0 slbv0
> >>> slbe1 slbv1
> >>> 0000  00000
> >>> 0000  00000
> >> 
> >> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
> >> 
> >> Are you seeing this with PR KVM or HV KVM?
> > 
> > I suspect this is to do with the fact that PR and HV KVM use the
> > vcpu->arch.slb[] array differently.  PR stores SLB entry n in
> > vcpu->arch.slb[n], whereas HV packs the valid entries down in the
> > low-numbered entries and puts the index in the bottom bits of the esid
> > field (this is so they can be loaded efficiently with the slbmte
> > instruction on guest entry).
> > 
> > Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
> > (valid or not) and puts an index value in the bottom bits of the esid,
> > whereas on HV it just copies out the valid entries (which already have
> > the index in the esid field).
> > 
> > So, the question is, what is the ABI here?  It sounds a bit like qemu
> > is ignoring the index value in the esid field.  Either qemu needs to
> > take notice of the index in the esid field or we need to change the HV
> > versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
> > sregs->u.s.ppc64.slb[n] like PR does.
> 
> It's the opposite today - QEMU does honor the index value on sregs get. Aneesh's patch wants to change it to ignore it instead. For sregs set we copy our internal copy of the slb linearly into the array, so we don't pack it there.
> 
> Can we safely assume on HV KVM that esid == 0 vsid == 0 is the end of the list? If so, we can just add a break statement in the get loop and call it a day. The rest should work just fine.

On HV KVM yes, that would be the end of the list, but PR KVM could
give you entry 0 containing esid==0 and vsid==0 followed by valid
entries.  Perhaps the best approach is to ignore any entries with
SLB_ESID_V clear.
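
In the qemu get loop that would look roughly like this (a sketch; it assumes
SLB_ESID_V is available on the qemu side and that skipping an entry is
equivalent to leaving the cached slot untouched):

        for (i = 0; i < 64; i++) {
            if (!(sregs.u.s.ppc64.slb[i].slbe & SLB_ESID_V)) {
                continue;    /* invalid entry: handles both the PR and HV layouts */
            }
            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
                               sregs.u.s.ppc64.slb[i].slbv);
        }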

Paul.
Alexander Graf Aug. 21, 2013, 10:03 a.m. UTC | #8
Am 21.08.2013 um 10:25 schrieb Paul Mackerras <paulus@samba.org>:

> On Wed, Aug 21, 2013 at 08:37:47AM +0100, Alexander Graf wrote:
>> 
>> On 21.08.2013, at 06:11, Paul Mackerras wrote:
>> 
>>> On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
>>>> 
>>>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>>>> I noticed this when adding support for guest memory dumping via qemu gdb
>>>>> server. Now the array we get would look like below
>>>>> 
>>>>> slbe0 slbv0
>>>>> slbe1 slbv1
>>>>> 0000  00000
>>>>> 0000  00000
>>>> 
>>>> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
>>>> 
>>>> Are you seeing this with PR KVM or HV KVM?
>>> 
>>> I suspect this is to do with the fact that PR and HV KVM use the
>>> vcpu->arch.slb[] array differently.  PR stores SLB entry n in
>>> vcpu->arch.slb[n], whereas HV packs the valid entries down in the
>>> low-numbered entries and puts the index in the bottom bits of the esid
>>> field (this is so they can be loaded efficiently with the slbmte
>>> instruction on guest entry).
>>> 
>>> Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
>>> (valid or not) and puts an index value in the bottom bits of the esid,
>>> whereas on HV it just copies out the valid entries (which already have
>>> the index in the esid field).
>>> 
>>> So, the question is, what is the ABI here?  It sounds a bit like qemu
>>> is ignoring the index value in the esid field.  Either qemu needs to
>>> take notice of the index in the esid field or we need to change the HV
>>> versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
>>> sregs->u.s.ppc64.slb[n] like PR does.
>> 
>> It's the opposite today - QEMU does honor the index value on sregs get. Aneesh's patch wants to change it to ignore it instead. For sregs set we copy our internal copy of the slb linearly into the array, so we don't pack it there.
>> 
>> Can we safely assume on HV KVM that esid == 0 vsid == 0 is the end of the list? If so, we can just add a break statement in the get loop and call it a day. The rest should work just fine.
> 
> On HV KVM yes, that would be the end of the list, but PR KVM could
> give you entry 0 containing esid==0 and vsid==0 followed by valid
> entries.  Perhaps the best approach is to ignore any entries with
> SLB_ESID_V clear.

That means we wouldn't clear entries the kernel no longer reports because they are now V=0 but were V=1 before. With the current code that is probably already broken.

So yes, clearing all cached entries first (to make sure we have no stale ones) and then looping through all of them, only adding entries with V=1, should fix everything for PR as well as HV.
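
Something along these lines, then (only a sketch; it assumes env->slb is the
cached array and that zeroing it is a safe way to invalidate the cache):

        /* drop all cached entries first so nothing stale survives */
        memset(env->slb, 0, sizeof(env->slb));

        for (i = 0; i < 64; i++) {
            target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;

            if (!(rb & SLB_ESID_V)) {
                continue;    /* only valid entries are written back */
            }
            /* on HV, rb still carries the slot index in its low bits */
            ppc_store_slb(env, rb, sregs.u.s.ppc64.slb[i].slbv);
        }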


Alex

> 
> Paul.
Aneesh Kumar K.V Aug. 21, 2013, 3:59 p.m. UTC | #9
Alexander Graf <agraf@suse.de> writes:


....

>> 
>> On HV KVM yes, that would be the end of the list, but PR KVM could
>> give you entry 0 containing esid==0 and vsid==0 followed by valid
>> entries.  Perhaps the best approach is to ignore any entries with
>> SLB_ESID_V clear.
>
> That means we don't clear entries we don't receive from the kernel because they're V=0 but which were V=1 before. Which with the current code is probably already broken.
>
> So yes, clear all cached entries first (to make sure we have no stale
> ones), then loop through all and only add entries with V=1 should fix
> everything for PR as well as HV.

This is more or less what the patch is doing. The kernel already does a
memset of all the slb entries. The only difference is that we don't depend
on the slb index in the return value; instead we just use the array index
as the slb index. Do we really need to make sure the slb index remains the
same?

-aneesh

Alexander Graf Aug. 21, 2013, 11:32 p.m. UTC | #10
Am 21.08.2013 um 16:59 schrieb "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>:

> Alexander Graf <agraf@suse.de> writes:
> 
> 
> ....
> 
>>> 
>>> On HV KVM yes, that would be the end of the list, but PR KVM could
>>> give you entry 0 containing esid==0 and vsid==0 followed by valid
>>> entries.  Perhaps the best approach is to ignore any entries with
>>> SLB_ESID_V clear.
>> 
>> That means we don't clear entries we don't receive from the kernel because they're V=0 but which were V=1 before. Which with the current code is probably already broken.
>> 
>> So yes, clear all cached entries first (to make sure we have no stale
>> ones), then loop through all and only add entries with V=1 should fix
>> everything for PR as well as HV.
> 
> This is more or less what the patch is doing. The kernel already
> does memset of all the slb entries. The only difference is we don't
> depend on the slb index in the return value. Instead we just use the
> array index as the slb index. Do we really need to make sure the slb
> index remain same ?

Yes, otherwise get/set changes the SLB numbering, which the guest doesn't expect.

Alex

> 
> -aneesh
> 

Patch

diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 30a870e..5d4e613 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1034,8 +1034,18 @@  int kvm_arch_get_registers(CPUState *cs)
         /* Sync SLB */
 #ifdef TARGET_PPC64
         for (i = 0; i < 64; i++) {
-            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
-                               sregs.u.s.ppc64.slb[i].slbv);
+            target_ulong rb  = sregs.u.s.ppc64.slb[i].slbe;
+            /*
+             * KVM_GET_SREGS doesn't return SLB entries with the slot
+             * information matching the index, so don't depend on the slot
+             * information in the returned value.
+             */
+            rb &= ~0xfff;
+            /*
+             * use the array index as the slot
+             */
+            rb |= i;
+            ppc_store_slb(env, rb, sregs.u.s.ppc64.slb[i].slbv);
         }
 #endif