
[v5,6/6] KVM: PPC: Book3S: modify byte loading when guest uses Split Little Endian

Message ID 1383672128-26795-7-git-send-email-clg@fr.ibm.com
State New, archived

Commit Message

Cédric Le Goater Nov. 5, 2013, 5:22 p.m. UTC
Instruction and data storage accesses are performed with opposite
byte orders when the Split Little Endian (SLE) mode is in use. This
patch modifies the kvmppc_ld32() routine to reverse the byteswap for
data accesses when the guest is in SLE mode.

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |   14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

Comments

Alexander Graf Jan. 2, 2014, 8:26 p.m. UTC | #1
On 05.11.2013, at 18:22, Cédric Le Goater <clg@fr.ibm.com> wrote:

> Instruction and data storage accesses are performed with opposite
> byte orders when the Split Little Endian (SLE) mode is in use. This
> patch modifies the kvmppc_ld32() routine to reverse the byteswap for
> data accesses when the guest is in SLE mode.

SLE can also happen with MMIO. This needs a more global approach I'm afraid.


Alex

> 
> Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
> ---
> arch/powerpc/include/asm/kvm_book3s.h |   14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 6974aa0..eac8808 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -288,10 +288,22 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
> 			      u32 *ptr, bool data)
> {
> 	int ret;
> +	bool byteswap;
> 
> 	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
> 
> -	if (kvmppc_need_byteswap(vcpu))
> +	byteswap = kvmppc_need_byteswap(vcpu);
> +
> +	/* When in Split Little Endian (SLE) mode, instruction and
> +	 * data storage accesses are performed with opposite byte
> +	 * orders. If the guest is using this mode, we need to
> +	 * reverse the byteswap for data accesses only. Instruction
> +	 * accesses are left unchanged.
> +	 */
> +	if (data && (vcpu->arch.shared->msr & MSR_SLE))
> +		byteswap = !byteswap;
> +
> +	if (byteswap)
> 		*ptr = swab32(*ptr);
> 
> 	return ret;
> -- 
> 1.7.10.4
> 


Patch

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 6974aa0..eac8808 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -288,10 +288,22 @@  static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
 			      u32 *ptr, bool data)
 {
 	int ret;
+	bool byteswap;
 
 	ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
 
-	if (kvmppc_need_byteswap(vcpu))
+	byteswap = kvmppc_need_byteswap(vcpu);
+
+	/* When in Split Little Endian (SLE) mode, instruction and
+	 * data storage accesses are performed with opposite byte
+	 * orders. If the guest is using this mode, we need to
+	 * reverse the byteswap for data accesses only. Instruction
+	 * accesses are left unchanged.
+	 */
+	if (data && (vcpu->arch.shared->msr & MSR_SLE))
+		byteswap = !byteswap;
+
+	if (byteswap)
 		*ptr = swab32(*ptr);
 
 	return ret;