Patchwork KVM: PPC: Book3S HV: Don't drop low-order page address bits

Submitter Paul Mackerras
Date Dec. 16, 2013, 2:31 a.m.
Message ID <20131216023146.GA12402@drongo>
Permalink /patch/301435/
State New

Comments

Paul Mackerras - Dec. 16, 2013, 2:31 a.m.
Commit caaa4c804fae ("KVM: PPC: Book3S HV: Fix physical address
calculations") unfortunately resulted in some low-order address bits
getting dropped in the case where the guest is creating a 4k HPTE
and the host page size is 64k.  By getting the low-order bits from
hva rather than gpa we miss out on bits 12 - 15 in this case, since
hva is at page granularity.  This puts the missing bits back in.

Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paul Mackerras <paulus@samba.org>
---
Alex, please apply this to your for-3.13 branch.

Thanks,
Paul.

 arch/powerpc/kvm/book3s_hv_rm_mmu.c | 1 +
 1 file changed, 1 insertion(+)
Alexander Graf - Dec. 18, 2013, 10:31 a.m.
On 16.12.2013, at 03:31, Paul Mackerras <paulus@samba.org> wrote:

> Commit caaa4c804fae ("KVM: PPC: Book3S HV: Fix physical address
> calculations") unfortunately resulted in some low-order address bits
> getting dropped in the case where the guest is creating a 4k HPTE
> and the host page size is 64k.  By getting the low-order bits from
> hva rather than gpa we miss out on bits 12 - 15 in this case, since
> hva is at page granularity.  This puts the missing bits back in.
> 
> Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Signed-off-by: Paul Mackerras <paulus@samba.org>

Thanks, applied to for-3.13.


Alex


Patch

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 1931aa3..8689e2e 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -240,6 +240,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 			is_io = hpte_cache_bits(pte_val(pte));
 			pa = pte_pfn(pte) << PAGE_SHIFT;
 			pa |= hva & (pte_size - 1);
+			pa |= gpa & ~PAGE_MASK;
 		}
 	}