
target-ppc: Update slb array with correct index values.

Message ID DD377AA4-0063-40DC-8D46-8273662861E1@suse.de
State New, archived

Commit Message

Alexander Graf Aug. 21, 2013, 7:02 a.m. UTC
On 20.08.2013, at 14:57, Aneesh Kumar K.V wrote:

> Alexander Graf <agraf@suse.de> writes:
> 
>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>> 
>>> Alexander Graf <agraf@suse.de> writes:
>>> 
>>>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>>>> 
>>>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>>>> 
>>>>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index.
>>>>> 
>>>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>>>> 
>>>> Wrong mailing list again ;).
>>> 
>>> Will post the series again with updated commit message to the qemu list.
>>> 
>>>> 
>>>>> ---
>>>>> target-ppc/kvm.c | 14 ++++++++++++--
>>>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>>>> 
>>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>>> index 30a870e..5d4e613 100644
>>>>> --- a/target-ppc/kvm.c
>>>>> +++ b/target-ppc/kvm.c
>>>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>>>>       /* Sync SLB */
>>>>> #ifdef TARGET_PPC64
>>>>>       for (i = 0; i < 64; i++) {
>>>>> -            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>>>> -                               sregs.u.s.ppc64.slb[i].slbv);
>>>>> +            target_ulong rb  = sregs.u.s.ppc64.slb[i].slbe;
>>>>> +            /*
>>>>> +             * KVM_GET_SREGS doesn't return SLB entries with the slot
>>>>> +             * information matching the index, so don't depend on the
>>>>> +             * slot information in the returned value.
>>>> 
>>>> This is the generating code in book3s_pr.c:
>>>> 
>>>>       if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>>>>               for (i = 0; i < 64; i++) {
>>>>                       sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>>>>                       sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>>>>               }
>>>> 
>>>> Where exactly did you see broken slbe entries?
>>>> 
>>> 
>>> I noticed this when adding support for guest memory dumping via the
>>> qemu gdb server. The array we get back looks like this:
>>> 
>>> slbe0 slbv0
>>> slbe1 slbv1
>>> 0000  00000
>>> 0000  00000
>> 
>> Ok, so that's where the problem lies. Why are the entries 0 here?
>> Either we try to fetch more entries than we should, we populate
>> entries incorrectly or the kernel simply returns invalid SLB entry
>> values for invalid entries.
> 
> 
> The ioctl zeroes out sregs and fills only slb_max entries, so we find
> zero-filled entries above slb_max. We also don't pass slb_max to user
> space, so userspace has to look at all 64 entries.

We do pass slb_max; it's just called differently and calculated implicitly :). How about something like this:


> 
> 
>> 
>> Are you seeing this with PR KVM or HV KVM?
>> 
> 
> HV KVM

If the above didn't help, could you please dig out how HV KVM assembles its SLB information and just paste it here? :)


Alex

--
To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Aneesh Kumar K.V Aug. 21, 2013, 9:07 a.m. UTC | #1
Alexander Graf <agraf@suse.de> writes:

> On 20.08.2013, at 14:57, Aneesh Kumar K.V wrote:
>
>> Alexander Graf <agraf@suse.de> writes:
>> 
>>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>> 
>>>> Alexander Graf <agraf@suse.de> writes:
>>>> 
>>>>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>>>>> 
>>>>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>>>>> 
>>>>>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index.
>>>>>> 
>>>>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>>>>> 
>>>>> Wrong mailing list again ;).
>>>> 
>>>> Will post the series again with updated commit message to the qemu list.
>>>> 
>>>>> 
>>>>>> ---
>>>>>> target-ppc/kvm.c | 14 ++++++++++++--
>>>>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>>>>> 
>>>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>>>> index 30a870e..5d4e613 100644
>>>>>> --- a/target-ppc/kvm.c
>>>>>> +++ b/target-ppc/kvm.c
>>>>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>>>>>       /* Sync SLB */
>>>>>> #ifdef TARGET_PPC64
>>>>>>       for (i = 0; i < 64; i++) {
>>>>>> -            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>>>>> -                               sregs.u.s.ppc64.slb[i].slbv);
>>>>>> +            target_ulong rb  = sregs.u.s.ppc64.slb[i].slbe;
>>>>>> +            /*
>>>>>> +             * KVM_GET_SREGS doesn't return SLB entries with the slot
>>>>>> +             * information matching the index, so don't depend on the
>>>>>> +             * slot information in the returned value.
>>>>> 
>>>>> This is the generating code in book3s_pr.c:
>>>>> 
>>>>>       if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>>>>>               for (i = 0; i < 64; i++) {
>>>>>                       sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>>>>>                       sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>>>>>               }
>>>>> 
>>>>> Where exactly did you see broken slbe entries?
>>>>> 
>>>> 
>>>> I noticed this when adding support for guest memory dumping via the
>>>> qemu gdb server. The array we get back looks like this:
>>>> 
>>>> slbe0 slbv0
>>>> slbe1 slbv1
>>>> 0000  00000
>>>> 0000  00000
>>> 
>>> Ok, so that's where the problem lies. Why are the entries 0 here?
>>> Either we try to fetch more entries than we should, we populate
>>> entries incorrectly or the kernel simply returns invalid SLB entry
>>> values for invalid entries.
>> 
>> 
>> The ioctl zeroes out sregs and fills only slb_max entries, so we find
>> zero-filled entries above slb_max. We also don't pass slb_max to user
>> space, so userspace has to look at all 64 entries.
>
> We do pass slb_max; it's just called differently and calculated implicitly :). How about something like this:
>
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 30a870e..29a2ec3 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -818,6 +818,8 @@ int kvm_arch_put_registers(CPUState *cs, int level)
>
>          /* Sync SLB */
>  #ifdef TARGET_PPC64
> +        /* We need to loop through all entries to give them potentially
> +           valid values */
>          for (i = 0; i < 64; i++) {
>              sregs.u.s.ppc64.slb[i].slbe = env->slb[i].esid;
>              sregs.u.s.ppc64.slb[i].slbv = env->slb[i].vsid;
> @@ -1033,7 +1035,7 @@ int kvm_arch_get_registers(CPUState *cs)
>
>          /* Sync SLB */
>  #ifdef TARGET_PPC64
> -        for (i = 0; i < 64; i++) {
> +        for (i = 0; i < env->slb_nr; i++) {
>              ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>                                 sregs.u.s.ppc64.slb[i].slbv);
>          }
>

But we don't sync slb_max (the highest valid slb index); env->slb_nr is
slb_nr (the total number of slb slots). We also don't sync env->slb_nr
every time we do kvm_arch_get_registers. The problem is that the kernel
first memsets sregs to 0 and then fills only slb_max entries. slbe and
slbv entries with value 0 are then treated as updates to the 0th index,
so we overwrite the 0th entry.

-aneesh


Patch

diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 30a870e..29a2ec3 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -818,6 +818,8 @@  int kvm_arch_put_registers(CPUState *cs, int level)

         /* Sync SLB */
 #ifdef TARGET_PPC64
+        /* We need to loop through all entries to give them potentially
+           valid values */
         for (i = 0; i < 64; i++) {
             sregs.u.s.ppc64.slb[i].slbe = env->slb[i].esid;
             sregs.u.s.ppc64.slb[i].slbv = env->slb[i].vsid;
@@ -1033,7 +1035,7 @@  int kvm_arch_get_registers(CPUState *cs)

         /* Sync SLB */
 #ifdef TARGET_PPC64
-        for (i = 0; i < 64; i++) {
+        for (i = 0; i < env->slb_nr; i++) {
             ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
                                sregs.u.s.ppc64.slb[i].slbv);
         }