Patchwork [v7] KVM: PPC: Book3S: MMIO emulation support for little endian guests

Submitter Cédric Le Goater
Date Jan. 9, 2014, 10:02 a.m.
Message ID <1389261756-2767-1-git-send-email-clg@fr.ibm.com>
Permalink /patch/308537/
State New

Comments

Cédric Le Goater - Jan. 9, 2014, 10:02 a.m.
MMIO emulation reads the last instruction executed by the guest
and then emulates it. If the guest is running in Little Endian
order, or more generally in an endian order different from the
host's, the instruction needs to be byte-swapped before being
emulated.

This patch adds a helper routine which tests the endian order of
the host and the guest in order to decide whether a byteswap is
needed or not. It is then used to byteswap the last instruction
of the guest in the endian order of the host before MMIO emulation
is performed.

Finally, kvmppc_handle_load() and kvmppc_handle_store() are modified
to reverse the endianness of the MMIO if required.
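For readers following along, the check described above can be modeled in a few lines of standalone C. This is only a sketch of the idea, not the kernel code: `need_byteswap()` and `fixup_last_inst()` are our stand-ins for `kvmppc_need_byteswap()` (which really compares `vcpu->arch.shared->msr & MSR_LE` against `MSR_KERNEL & MSR_LE`) and `kvmppc_byteswap_last_inst()`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for kvmppc_need_byteswap(): a swap is needed exactly when
 * the guest and host endian orders disagree. */
static bool need_byteswap(bool guest_is_le, bool host_is_le)
{
	return guest_is_le != host_is_le;
}

/* Open-coded equivalent of the kernel's swab32(): reverse the four
 * bytes of a 32-bit word. */
static uint32_t my_swab32(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) |
	       ((x & 0x0000ff00u) << 8)  |
	       ((x & 0x00ff0000u) >> 8)  |
	       ((x & 0xff000000u) >> 24);
}

/* Stand-in for kvmppc_byteswap_last_inst(): swap only on a mismatch. */
static uint32_t fixup_last_inst(uint32_t inst, bool guest_is_le,
				bool host_is_le)
{
	return need_byteswap(guest_is_le, host_is_le) ? my_swab32(inst)
						      : inst;
}
```

On a matching guest/host pair the instruction passes through untouched; only a mismatch triggers the swap.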

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---

How's that? As the changes were small, I kept them in one patch,
but I can split it if necessary.

This patch was tested for Big Endian and Little Endian HV guests
and Big Endian PR guests on 3.13 (plus an h_set_mode hack).

Cheers,

C.

Changes in v7:

 - replaced is_bigendian by is_default_endian (Alexander Graf)

Changes in v6:

 - removed asm changes (Alexander Graf)
 - byteswap last_inst when used in kvmppc_get_last_inst()
 - postponed Split Little Endian support

Changes in v5:

 - changed register usage slightly (paulus@samba.org)
 - added #ifdef CONFIG_PPC64 in book3s_segment.S (paulus@samba.org)
 - added support for little endian host
 - added support for Split Little Endian (SLE)

Changes in v4:

 - got rid of useless helper routine kvmppc_ld_inst(). (Alexander Graf)

Changes in v3:

 - moved kvmppc_need_byteswap() into kvmppc_ld32(). It previously was in
   kvmppc_ld_inst(). (Alexander Graf)

Changes in v2:

 - replaced rldicl. by andi. to test the MSR_LE bit in the guest
   exit paths. (Paul Mackerras)

 - moved the byte swapping logic to kvmppc_handle_load() and 
   kvmppc_handle_store() by changing the is_bigendian parameter
   meaning. (Paul Mackerras)
	
 arch/powerpc/include/asm/kvm_book3s.h |   15 +++++++++++++--
 arch/powerpc/include/asm/kvm_ppc.h    |    7 ++++---
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |    2 +-
 arch/powerpc/kvm/emulate.c            |    1 -
 arch/powerpc/kvm/powerpc.c            |   28 ++++++++++++++++++++++++----
 5 files changed, 42 insertions(+), 11 deletions(-)
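The v7 rename changes the parameter's meaning: callers now state whether the access uses the guest's default endianness, and the handlers recover the effective `is_bigendian` from the guest/host mismatch. A small standalone sketch of that mapping, mirroring the logic the patch adds to `kvmppc_handle_load()`/`kvmppc_handle_store()` (the function name here is ours, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Recover the effective big-endian flag from is_default_endian, as in
 * the patch: when guest and host endianness differ (need_byteswap),
 * the guest's default is little endian; otherwise it is big endian. */
static bool effective_bigendian(bool is_default_endian, bool need_byteswap)
{
	if (need_byteswap)
		return !is_default_endian;	/* default is "little endian" */
	return is_default_endian;		/* default is "big endian" */
}
```

So a default-endian access on a mismatched pair is handled as little endian, while an explicitly reversed access (e.g. a byte-reversed load/store instruction) flips back to big endian.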
Alexander Graf - Jan. 9, 2014, 10:17 a.m.
On 09.01.2014, at 11:02, Cédric Le Goater <clg@fr.ibm.com> wrote:

> MMIO emulation reads the last instruction executed by the guest
> and then emulates. If the guest is running in Little Endian order,
> or more generally in a different endian order of the host, the
> instruction needs to be byte-swapped before being emulated.
> 
> This patch adds a helper routine which tests the endian order of
> the host and the guest in order to decide whether a byteswap is
> needed or not. It is then used to byteswap the last instruction
> of the guest in the endian order of the host before MMIO emulation
> is performed.
> 
> Finally, kvmppc_handle_load() of kvmppc_handle_store() are modified
> to reverse the endianness of the MMIO if required.
> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> 
> [...]
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index bc23b1ba7980..00499f5f16bc 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -271,6 +271,17 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
> 	return vcpu->arch.pc;
> }
> 
> +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> +{
> +	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
> +}
> +
> +static inline u32 kvmppc_byteswap_last_inst(struct kvm_vcpu *vcpu)
> +{
> +	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
> +		vcpu->arch.last_inst;
> +}
> +
> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
> {
> 	ulong pc = kvmppc_get_pc(vcpu);
> @@ -280,7 +291,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
> 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
> 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
> 
> -	return vcpu->arch.last_inst;

I would prefer if you just explicitly put the contents of kvmppc_byteswap_last_inst() here.

> +	return kvmppc_byteswap_last_inst(vcpu);
> }
> 
> /*
> @@ -297,7 +308,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)

... and instead converge the two functions into one. In fact, let me quickly hack up a patch for that.
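The converged helper Alex is suggesting might look roughly like the following. This is a hypothetical sketch, not his actual follow-up patch: `struct fake_vcpu` is a minimal stand-in for the relevant `vcpu->arch` fields, and the on-demand fetch (`kvmppc_ld()`) is stubbed out since there is no guest memory in this model.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define KVM_INST_FETCH_FAILED	0xffffffffu

/* Minimal stand-in for the vcpu state the helper touches. */
struct fake_vcpu {
	uint32_t last_inst;
	bool need_byteswap;	/* models kvmppc_need_byteswap(vcpu) */
};

static uint32_t my_swab32(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) | ((x & 0x0000ff00u) << 8) |
	       ((x & 0x00ff0000u) >> 8)  | ((x & 0xff000000u) >> 24);
}

/* Converged kvmppc_get_last_inst(): byteswap inline instead of
 * calling out to a separate kvmppc_byteswap_last_inst(). */
static uint32_t get_last_inst(struct fake_vcpu *vcpu)
{
	/* In the kernel this path retries via kvmppc_ld(); here we
	 * just report the failure. */
	if (vcpu->last_inst == KVM_INST_FETCH_FAILED)
		return KVM_INST_FETCH_FAILED;

	return vcpu->need_byteswap ? my_swab32(vcpu->last_inst)
				   : vcpu->last_inst;
}
```

The point of the merge is that every consumer of the last instruction then gets the byteswap for free, with no second helper to forget to call.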

> 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
> 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
> 
> -	return vcpu->arch.last_inst;
> +	return kvmppc_byteswap_last_inst(vcpu);
> }
> 
> static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
> index c8317fbf92c4..629277df4798 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -54,12 +54,13 @@ extern void kvmppc_handler_highmem(void);
> extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
> extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>                               unsigned int rt, unsigned int bytes,
> -                              int is_bigendian);
> +			      int is_default_endian);
> extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
>                                unsigned int rt, unsigned int bytes,
> -                               int is_bigendian);
> +			       int is_default_endian);
> extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> -                               u64 val, unsigned int bytes, int is_bigendian);
> +			       u64 val, unsigned int bytes,
> +			       int is_default_endian);
> 
> extern int kvmppc_emulate_instruction(struct kvm_run *run,
>                                       struct kvm_vcpu *vcpu);
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index 79e992d8c823..ff10fba29878 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -562,7 +562,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
> 	 * we just return and retry the instruction.
> 	 */
> 
> -	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
> +	if (instruction_is_store(kvmppc_byteswap_last_inst(vcpu)) != !!is_store)

This can safely be kvmppc_get_last_inst() at this point, because we're definitely not hitting the KVM_INST_FETCH_FAILED code path anymore.


Alex

--
To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Cédric Le Goater - Jan. 9, 2014, 10:33 a.m.
On 01/09/2014 11:17 AM, Alexander Graf wrote:
> 
> On 09.01.2014, at 11:02, Cédric Le Goater <clg@fr.ibm.com> wrote:
> 
>> MMIO emulation reads the last instruction executed by the guest
>> and then emulates. If the guest is running in Little Endian order,
>> or more generally in a different endian order of the host, the
>> instruction needs to be byte-swapped before being emulated.
>>
>> This patch adds a helper routine which tests the endian order of
>> the host and the guest in order to decide whether a byteswap is
>> needed or not. It is then used to byteswap the last instruction
>> of the guest in the endian order of the host before MMIO emulation
>> is performed.
>>
>> Finally, kvmppc_handle_load() of kvmppc_handle_store() are modified
>> to reverse the endianness of the MMIO if required.
>>
>> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
>> ---
>>
>> [...]
>>
>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>> index bc23b1ba7980..00499f5f16bc 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>> @@ -271,6 +271,17 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
>> 	return vcpu->arch.pc;
>> }
>>
>> +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
>> +{
>> +	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
>> +}
>> +
>> +static inline u32 kvmppc_byteswap_last_inst(struct kvm_vcpu *vcpu)
>> +{
>> +	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
>> +		vcpu->arch.last_inst;
>> +}
>> +
>> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
>> {
>> 	ulong pc = kvmppc_get_pc(vcpu);
>> @@ -280,7 +291,7 @@ static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
>> 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
>> 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
>>
>> -	return vcpu->arch.last_inst;
> 
> I would prefer if you just explicitly put the contents of kvmppc_byteswap_last_inst() here.
> 
>> +	return kvmppc_byteswap_last_inst(vcpu);
>> }
>>
>> /*
>> @@ -297,7 +308,7 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
> 
> ... and instead converge the two functions into one. In fact, let me quickly hack up a patch for that.

OK. I am taking the patch you just sent and will work on a v8. 

>> 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
>> 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
>>
>> -	return vcpu->arch.last_inst;
>> +	return kvmppc_byteswap_last_inst(vcpu);
>> }
>>
>> static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
>> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
>> index c8317fbf92c4..629277df4798 100644
>> --- a/arch/powerpc/include/asm/kvm_ppc.h
>> +++ b/arch/powerpc/include/asm/kvm_ppc.h
>> @@ -54,12 +54,13 @@ extern void kvmppc_handler_highmem(void);
>> extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
>> extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>>                               unsigned int rt, unsigned int bytes,
>> -                              int is_bigendian);
>> +			      int is_default_endian);
>> extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
>>                                unsigned int rt, unsigned int bytes,
>> -                               int is_bigendian);
>> +			       int is_default_endian);
>> extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> -                               u64 val, unsigned int bytes, int is_bigendian);
>> +			       u64 val, unsigned int bytes,
>> +			       int is_default_endian);
>>
>> extern int kvmppc_emulate_instruction(struct kvm_run *run,
>>                                       struct kvm_vcpu *vcpu);
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> index 79e992d8c823..ff10fba29878 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> @@ -562,7 +562,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> 	 * we just return and retry the instruction.
>> 	 */
>>
>> -	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
>> +	if (instruction_is_store(kvmppc_byteswap_last_inst(vcpu)) != !!is_store)
> 
> This can safely be kvmppc_get_last_inst() at this point, because we're definitely not hitting the KVM_INST_FETCH_FAILED code path anymore.

OK.

Thanks,

C. 


Patch

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index bc23b1ba7980..00499f5f16bc 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -271,6 +271,17 @@  static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 	return vcpu->arch.pc;
 }
 
+static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
+{
+	return (vcpu->arch.shared->msr & MSR_LE) != (MSR_KERNEL & MSR_LE);
+}
+
+static inline u32 kvmppc_byteswap_last_inst(struct kvm_vcpu *vcpu)
+{
+	return kvmppc_need_byteswap(vcpu) ? swab32(vcpu->arch.last_inst) :
+		vcpu->arch.last_inst;
+}
+
 static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 {
 	ulong pc = kvmppc_get_pc(vcpu);
@@ -280,7 +291,7 @@  static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_byteswap_last_inst(vcpu);
 }
 
 /*
@@ -297,7 +308,7 @@  static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
 		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
 
-	return vcpu->arch.last_inst;
+	return kvmppc_byteswap_last_inst(vcpu);
 }
 
 static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index c8317fbf92c4..629277df4798 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -54,12 +54,13 @@  extern void kvmppc_handler_highmem(void);
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
                               unsigned int rt, unsigned int bytes,
-                              int is_bigendian);
+			      int is_default_endian);
 extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
                                unsigned int rt, unsigned int bytes,
-                               int is_bigendian);
+			       int is_default_endian);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                               u64 val, unsigned int bytes, int is_bigendian);
+			       u64 val, unsigned int bytes,
+			       int is_default_endian);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
                                       struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 79e992d8c823..ff10fba29878 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -562,7 +562,7 @@  static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * we just return and retry the instruction.
 	 */
 
-	if (instruction_is_store(vcpu->arch.last_inst) != !!is_store)
+	if (instruction_is_store(kvmppc_byteswap_last_inst(vcpu)) != !!is_store)
 		return RESUME_GUEST;
 
 	/*
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2f9a0873b44f..c2b887be2c29 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -219,7 +219,6 @@  static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
  * lmw
  * stmw
  *
- * XXX is_bigendian should depend on MMU mapping or MSR[LE]
  */
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 9ae97686e9f4..053c92fb55d9 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -673,9 +673,19 @@  static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 }
 
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int rt, unsigned int bytes, int is_bigendian)
+		       unsigned int rt, unsigned int bytes,
+		       int is_default_endian)
 {
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
@@ -711,21 +721,31 @@  EXPORT_SYMBOL_GPL(kvmppc_handle_load);
 
 /* Same as above, but sign extends */
 int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        unsigned int rt, unsigned int bytes, int is_bigendian)
+			unsigned int rt, unsigned int bytes,
+			int is_default_endian)
 {
 	int r;
 
 	vcpu->arch.mmio_sign_extend = 1;
-	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_bigendian);
+	r = kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian);
 
 	return r;
 }
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                        u64 val, unsigned int bytes, int is_bigendian)
+			u64 val, unsigned int bytes, int is_default_endian)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
+	int is_bigendian;
+
+	if (kvmppc_need_byteswap(vcpu)) {
+		/* Default endianness is "little endian". */
+		is_bigendian = !is_default_endian;
+	} else {
+		/* Default endianness is "big endian". */
+		is_bigendian = is_default_endian;
+	}
 
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,