[v2,3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

Message ID 1381241531-27940-4-git-send-email-clg@fr.ibm.com
State New, archived

Commit Message

Cédric Le Goater Oct. 8, 2013, 2:12 p.m. UTC
MMIO emulation reads the last instruction executed by the guest
and then emulates. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

This patch stores the last instruction in the endian order of the
host, byte-swapping it if needed. The common code which fetches
'last_inst' uses a helper routine kvmppc_need_byteswap(), and the
exit paths for the Book3S PR and HV guests use their own version
in assembly.

Finally, the meaning of the 'is_bigendian' argument of the
routines kvmppc_handle_load() and kvmppc_handle_store() is
slightly changed to indicate a possible byte-swap (reverse)
operation. It is used in conjunction with kvmppc_is_bigendian()
to determine whether the instruction being emulated should be
byte-swapped.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

 arch/powerpc/include/asm/kvm_book3s.h   |    9 ++++++++-
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++++-----
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |   11 +++++++++++
 arch/powerpc/kvm/book3s_segment.S       |   10 ++++++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 ++++++++++++----
 6 files changed, 46 insertions(+), 11 deletions(-)

Comments

Alexander Graf Oct. 8, 2013, 2:25 p.m. UTC | #1
On 08.10.2013, at 16:12, Cédric Le Goater <clg@fr.ibm.com> wrote:

> [...]
>
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 4ee6c66..f043e62 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
> 			      u32 *inst)
> {
> -	return kvmppc_ld32(vcpu, eaddr, inst, false);
> +	int ret;
> +
> +	ret = kvmppc_ld32(vcpu, eaddr, inst, false);
> +
> +	if (kvmppc_need_byteswap(vcpu))
> +		*inst = swab32(*inst);

This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to need a byteswap, regardless of whether it's an instruction or not.
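
As a rough sketch, pulling the swap down into kvmppc_ld32() might look
like this (illustrative only, reusing the kvmppc_ld() call and the
helpers already shown in this series):

	static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
				      u32 *ptr, bool data)
	{
		int ret;

		ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);

		/* swap every 32-bit access, instruction or data alike */
		if (kvmppc_need_byteswap(vcpu))
			*ptr = swab32(*ptr);

		return ret;
	}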


Alex

Cédric Le Goater Oct. 8, 2013, 3:07 p.m. UTC | #2
On 10/08/2013 04:25 PM, Alexander Graf wrote:
> 
> On 08.10.2013, at 16:12, Cédric Le Goater <clg@fr.ibm.com> wrote:
> 
>> [...]
>>
>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>> index 4ee6c66..f043e62 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>> @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
>> static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
>> 			      u32 *inst)
>> {
>> -	return kvmppc_ld32(vcpu, eaddr, inst, false);
>> +	int ret;
>> +
>> +	ret = kvmppc_ld32(vcpu, eaddr, inst, false);
>> +
>> +	if (kvmppc_need_byteswap(vcpu))
>> +		*inst = swab32(*inst);
> 
> This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to need a 
> byteswap, regardless of whether it's an instruction or not.

Yes. The byteswap logic is not related to instructions or data. I will move it
into kvmppc_ld32().

Thanks Alex,

C.

Paul Mackerras Oct. 8, 2013, 11:31 p.m. UTC | #3
On Tue, Oct 08, 2013 at 04:25:31PM +0200, Alexander Graf wrote:
> 
> On 08.10.2013, at 16:12, Cédric Le Goater <clg@fr.ibm.com> wrote:
> 
> > [...]
> >
> > diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> > index 4ee6c66..f043e62 100644
> > --- a/arch/powerpc/include/asm/kvm_book3s.h
> > +++ b/arch/powerpc/include/asm/kvm_book3s.h
> > @@ -289,7 +289,14 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
> > static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
> > 			      u32 *inst)
> > {
> > -	return kvmppc_ld32(vcpu, eaddr, inst, false);
> > +	int ret;
> > +
> > +	ret = kvmppc_ld32(vcpu, eaddr, inst, false);
> > +
> > +	if (kvmppc_need_byteswap(vcpu))
> > +		*inst = swab32(*inst);
> 
> This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to need a byteswap, regardless of whether it's an instruction or not.

True, until we get to POWER8 with its split little-endian support,
where instructions and data can have different endianness...

Paul.
Alexander Graf Oct. 8, 2013, 11:46 p.m. UTC | #4
Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:

> On Tue, Oct 08, 2013 at 04:25:31PM +0200, Alexander Graf wrote:
>> 
>> On 08.10.2013, at 16:12, Cédric Le Goater <clg@fr.ibm.com> wrote:
>> 
>>> [...]
>> 
>> This logic wants to live in kvmppc_ld32(), no? Every 32bit access is going to need a byteswap, regardless of whether it's an instruction or not.
> 
> True, until we get to POWER8 with its split little-endian support,
> where instructions and data can have different endianness...

How exactly does that work?

Alex

Paul Mackerras Oct. 9, 2013, 5:59 a.m. UTC | #5
On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
> 
> 
> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:
> 
> > True, until we get to POWER8 with its split little-endian support,
> > where instructions and data can have different endianness...
> 
> How exactly does that work?

They added an extra MSR bit called SLE which enables the split-endian
mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
LE bit controls instruction endianness, and data endianness depends on
LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
LE=0 you get little-endian data and big-endian instructions, and vice
versa with SLE=1 and LE=1.

There is also a user accessible "mtsle" instruction that sets the
value of the SLE bit.  This enables programs to flip their data
endianness back and forth quickly, so it's usable for short
instruction sequences, without the need to generate instructions of
the opposite endianness.
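
In C terms the rule might be sketched like this (an illustrative sketch
only; MSR_SLE is not yet defined in the kernel headers at this point):

	/* sketch: effective guest endianness under split-endian mode */
	bool inst_le = !!(msr & MSR_LE);		/* instructions: LE alone */
	bool data_le = inst_le ^ !!(msr & MSR_SLE);	/* data: LE ^ SLE */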

Paul.
Alexander Graf Oct. 9, 2013, 8:29 a.m. UTC | #6
Am 09.10.2013 um 07:59 schrieb Paul Mackerras <paulus@samba.org>:

> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>> 
>> 
>> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@samba.org>:
>> 
>>> True, until we get to POWER8 with its split little-endian support,
>>> where instructions and data can have different endianness...
>> 
>> How exactly does that work?
> 
> They added an extra MSR bit called SLE which enables the split-endian
> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
> LE bit controls instruction endianness, and data endianness depends on
> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
> LE=0 you get little-endian data and big-endian instructions, and vice
> versa with SLE=1 and LE=1.

So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?


Alex

Cédric Le Goater Oct. 9, 2013, 8:42 a.m. UTC | #7
On 10/09/2013 10:29 AM, Alexander Graf wrote:
> 
> [...]
> 
> So ld32 should only honor LE and get_last_inst only looks at SLE and 
> swaps even the vcpu cached version if it's set, no?
 
Here is the table (from the PowerISA) illustrating the endian modes for all
combinations:

	SLE	LE	Data	Instruction

	0	0	Big	Big
	0	1	Little	Little
	1	0	Little	Big
	1	1	Big	Little


My understanding is that when reading instructions, we should test MSR[LE] 
and for data, test MSR[SLE] ^ MSR[LE].

This has to be done in conjunction with the host endian order to determine 
if we should byte-swap or not, but we can assume the host is big endian
for the moment and fix the byte-swapping later.
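
As a rough sketch of those two tests (illustrative only, assuming a
big-endian host and an MSR_SLE mask which this series still has to add):

	/* sketch: which guest accesses need a byte-swap on a BE host */
	bool swap_inst = !!(msr & MSR_LE);
	bool swap_data = !!(msr & MSR_LE) ^ !!(msr & MSR_SLE);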

C.



Paul Mackerras Oct. 10, 2013, 10:16 a.m. UTC | #8
On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
> 
> 
> [...]
> 
> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?

Not exactly; instruction endianness depends only on MSR[LE], so
get_last_inst should not look at MSR[SLE].  I would think the vcpu
cached version should be host endian always.
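
A sketch of what that convention buys the common code (illustrative
only; get_op() is the usual opcode accessor from asm/disassemble.h):

	/* with last_inst normalized to host endianness at fetch time,
	 * the emulation code can decode it without a per-guest swap */
	u32 inst = kvmppc_get_last_inst(vcpu);
	int op = get_op(inst);	/* primary opcode, no byteswap needed */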

Paul.
Alexander Graf Nov. 4, 2013, 11:44 a.m. UTC | #9
On 10.10.2013, at 12:16, Paul Mackerras <paulus@samba.org> wrote:

> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
>> [...]
>> 
>> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?
> 
> Not exactly; instruction endianness depends only on MSR[LE], so
> get_last_inst should not look at MSR[SLE].  I would think the vcpu
> cached version should be host endian always.

I agree. It makes the code flow easier.


Alex

Cédric Le Goater Nov. 5, 2013, 12:28 p.m. UTC | #10
On 11/04/2013 12:44 PM, Alexander Graf wrote:
> 
> On 10.10.2013, at 12:16, Paul Mackerras <paulus@samba.org> wrote:
> 
>> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
>>> [...]
>>> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?
>>
>> Not exactly; instruction endianness depends only on MSR[LE], so
>> get_last_inst should not look at MSR[SLE].  I would think the vcpu
>> cached version should be host endian always.
> 
> I agree. It makes the code flow easier.


To take into account the host endian order to determine if we should
byteswap, we could modify kvmppc_need_byteswap() as follows:


+/*
+ * Compare endian order of host and guest to determine whether we need
+ * to byteswap or not
+ */
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 {
-       return vcpu->arch.shared->msr & MSR_LE;
+       return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
+               ((vcpu->arch.shared->msr & (MSR_LE)) >> MSR_LE_LG);
 }



and I think MSR[SLE] could be handled this way :


 static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
@@ -284,10 +289,19 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
                              u32 *ptr, bool data)
 {
        int ret;
+       bool byteswap;
 
        ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
 
-       if (kvmppc_need_byteswap(vcpu))
+       byteswap = kvmppc_need_byteswap(vcpu);
+
+       /* if we are loading data from a guest which is in Split
+        * Little Endian mode, the byte order is reversed 
+        */
+       if (data && (vcpu->arch.shared->msr & MSR_SLE))
+               byteswap = !byteswap;
+
+       if (byteswap)
                *ptr = swab32(*ptr);
 
        return ret;


How does that look?

This is not tested and the MSR_SLE definition is missing. I will fix that in v5.
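
For reference, the missing definition would presumably mirror the
existing MSR bit masks in asm/reg.h; IBM bit 5 of a 64-bit MSR is bit 58
counting from the least significant bit (a guess at the form, untested):

	#define MSR_SLE_LG	58	/* Split Little Endian */
	#define MSR_SLE		__MASK(MSR_SLE_LG)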

Thanks,

C.

Alexander Graf Nov. 5, 2013, 1:01 p.m. UTC | #11
On 05.11.2013, at 13:28, Cedric Le Goater <clg@fr.ibm.com> wrote:

> On 11/04/2013 12:44 PM, Alexander Graf wrote:
>> 
>> [...]
>>
>> I agree. It makes the code flow easier.
> 
> 
> To take into account the host endian order to determine if we should 
> byteswap, we could modify kvmppc_need_byteswap() as follows:
> 
> 
> +/*
> + * Compare endian order of host and guest to determine whether we need
> + * to byteswap or not
> + */
> static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> {
> -       return vcpu->arch.shared->msr & MSR_LE;
> +       return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^

mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.

> +               ((vcpu->arch.shared->msr & (MSR_LE)) >> MSR_LE_LG);
> }
> 
> 
> 
> and I think MSR[SLE] could be handled this way :
> 
> 
> static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
> @@ -284,10 +289,19 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
>                              u32 *ptr, bool data)
> {
>        int ret;
> +       bool byteswap;
> 
>        ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
> 
> -       if (kvmppc_need_byteswap(vcpu))
> +       byteswap = kvmppc_need_byteswap(vcpu);
> +
> +       /* if we are loading data from a guest which is in Split
> +        * Little Endian mode, the byte order is reversed 

Only for data. Instructions are still non-reverse. You express this well in the code, but not in the comment.

> +        */
> +       if (data && (vcpu->arch.shared->msr & MSR_SLE))
> +               byteswap = !byteswap;
> +
> +       if (byteswap)
>                *ptr = swab32(*ptr);
> 
>        return ret;
> 
> 
> How does that look ? 
> 
> This is not tested and the MSR_SLE definition is missing. I will fix that in v5.

Alrighty :)


Alex

Cédric Le Goater Nov. 5, 2013, 5:22 p.m. UTC | #12
MMIO emulation reads the last instruction executed by the guest
and then emulates. If the guest is running in Little Endian mode,
the instruction needs to be byte-swapped before being emulated.

The first patches add simple helper routines to load instructions from
the guest. They prepare the ground for the byte-swapping of instructions
when reading memory from Little Endian guests.

The last patches enable the MMIO support by byte-swapping the last 
instruction if the guest is Little Endian and add support for little
endian host and Split Little Endian mode.

This patchset is based on Alex Graf's kvm-ppc-queue branch. It has been
tested with Anton's patchset for Big Endian and Little Endian HV guests
and Big Endian PR guests.

The kvm-ppc-queue branch I am using might be a bit outdated. The HEAD
is at:

   0c58eb4 KVM: PPC: E500: Add userspace debug stub support


Changes in v5:
 
 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
 - added support for little endian host
 - added support for Split Little Endian (SLE)
 
Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

Changes in v3:

 - moved kvmppc_need_byteswap() in kvmppc_ld32. It previously was in
   kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)

Thanks,

C.	

Cédric Le Goater (6):
  KVM: PPC: Book3S: add helper routine to load guest instructions
  KVM: PPC: Book3S: add helper routines to detect endian
  KVM: PPC: Book3S: MMIO emulation support for little endian guests
  KVM: PPC: Book3S: modify kvmppc_need_byteswap() for little endian host
  powerpc: add Split Little Endian bit to MSR
  KVM: PPC: Book3S: modify byte loading when guest uses Split Little Endian

 arch/powerpc/include/asm/kvm_book3s.h   |   43 +++++++++++++++++++++++++++++--
 arch/powerpc/include/asm/kvm_ppc.h      |   10 +++----
 arch/powerpc/include/asm/reg.h          |    3 +++
 arch/powerpc/kernel/process.c           |    1 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c     |    2 +-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |    9 +++++++
 arch/powerpc/kvm/book3s_pr.c            |    2 +-
 arch/powerpc/kvm/book3s_segment.S       |    9 +++++++
 arch/powerpc/kvm/emulate.c              |    1 -
 arch/powerpc/kvm/powerpc.c              |   16 +++++++++---
 10 files changed, 82 insertions(+), 14 deletions(-)
Paul Mackerras Nov. 6, 2013, 5:55 a.m. UTC | #13
On Tue, Nov 05, 2013 at 02:01:14PM +0100, Alexander Graf wrote:
> 
> On 05.11.2013, at 13:28, Cedric Le Goater <clg@fr.ibm.com> wrote:
> 
> > +/*
> > + * Compare endian order of host and guest to determine whether we need
> > + * to byteswap or not
> > + */
> > static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> > {
> > -       return vcpu->arch.shared->msr & MSR_LE;
> > +       return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
> 
> mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.

Or (MSR_KERNEL & MSR_LE).

> > +       /* if we are loading data from a guest which is in Split
> > +        * Little Endian mode, the byte order is reversed 
> 
> Only for data. Instructions are still non-reverse. You express this well in the code, but not in the comment.

Well, his comment does say "if we are loading data", but I agree it's
slightly ambiguous (the guest's instructions are our data).

Paul.
Cédric Le Goater Nov. 8, 2013, 2:29 p.m. UTC | #14
On 11/06/2013 06:55 AM, Paul Mackerras wrote:
> On Tue, Nov 05, 2013 at 02:01:14PM +0100, Alexander Graf wrote:
>>
>> On 05.11.2013, at 13:28, Cedric Le Goater <clg@fr.ibm.com> wrote:
>>
>>> +/*
>>> + * Compare endian order of host and guest to determine whether we need
>>> + * to byteswap or not
>>> + */
>>> static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
>>> {
>>> -       return vcpu->arch.shared->msr & MSR_LE;
>>> +       return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
>>
>> mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.
> 
> Or (MSR_KERNEL & MSR_LE).

Yes, that is better. I will resend the patch with an update. That was
quite laborious for a single-line patch ...
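
For reference, with that suggestion the test might end up looking like
this (a sketch, untested):

	/*
	 * Compare endian order of host and guest to determine whether we
	 * need to byteswap or not. MSR_KERNEL encodes the host endian
	 * order at build time, which avoids the slow mfmsr().
	 */
	static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
	{
		return (MSR_KERNEL & MSR_LE) !=
			(vcpu->arch.shared->msr & MSR_LE);
	}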

Thanks,

C.


Patch

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 4ee6c66..f043e62 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -289,7 +289,14 @@  static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 static inline int kvmppc_ld_inst(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *inst)
 {
-	return kvmppc_ld32(vcpu, eaddr, inst, false);
+	int ret;
+
+	ret = kvmppc_ld32(vcpu, eaddr, inst, false);
+
+	if (kvmppc_need_byteswap(vcpu))
+		*inst = swab32(*inst);
+
+	return ret;
 }
 
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b15554a..3769a13 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -53,13 +53,13 @@  extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                              unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      unsigned int rt, unsigned int bytes,
+			      int not_reverse);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       unsigned int rt, unsigned int bytes,
+			       int not_reverse);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes, int not_reverse);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 77f1baa..ff7da8b 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1404,10 +1404,21 @@  fast_interrupt_c_return:
 	lwz	r8, 0(r10)
 	mtmsrd	r3
 
+	ld	r0, VCPU_MSR(r9)
+
+	andi.	r10, r0, MSR_LE
+
 	/* Store the result */
 	stw	r8, VCPU_LAST_INST(r9)
 
+	beq	after_inst_store
+
+	/* Swap and store the result */
+	addi	r11, r9, VCPU_LAST_INST
+	stwbrx	r8, 0, r11
+
 	/* Unset guest mode. */
+after_inst_store:
 	li	r0, KVM_GUEST_MODE_HOST_HV
 	stb	r0, HSTATE_IN_GUEST(r13)
 	b	guest_exit_cont
diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 1abe478..677ef7a 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -287,8 +287,18 @@  ld_last_inst:
 	sync
 
 #endif
+	ld	r8, SVCPU_SHADOW_SRR1(r13)
+
+	andi.	r10, r8, MSR_LE
+
 	stw	r0, SVCPU_LAST_INST(r13)
 
+	beq	no_ld_last_inst
+
+	/* swap and store the result */
+	addi	r11, r13, SVCPU_LAST_INST
+	stwbrx	r0, 0, r11
+
 no_ld_last_inst:
 
 	/* Unset guest mode */
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 751cd45..5e38004 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@  static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 07c0106..6950f2b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -625,9 +625,13 @@  static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -662,21 +666,25 @@  int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes, int not_reverse)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, not_reverse);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int not_reverse)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian = not_reverse;
+
+	if (!kvmppc_is_bigendian(vcpu))
+		is_bigendian = !not_reverse;
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,