KVM: PPC: Book3S HV: Fix physical address calculation

Submitter Paul Mackerras
Date Nov. 6, 2013, 10:11 p.m.
Message ID <20131106221138.GA27956@iris.ozlabs.ibm.com>
Permalink /patch/289035/
State New

Comments

Paul Mackerras - Nov. 6, 2013, 10:11 p.m.
This fixes a bug in kvmppc_do_h_enter() where the physical address
for a page can be calculated incorrectly if transparent huge pages
(THP) are active.  Until THP came along, it was true that if we
encountered a large (16M) page in kvmppc_do_h_enter(), then the
associated memslot must be 16M aligned for both its guest physical
address and the userspace address, and the physical address
calculations in kvmppc_do_h_enter() assumed that.  With THP, that
is no longer true.

In the case where we are using MMU notifiers and the page size that
we get from the Linux page tables is larger than the page being mapped
by the guest, we need to fill in some low-order bits of the physical
address.  Without THP, these bits would be the same in the guest
physical address (gpa) and the host virtual address (hva).  With THP,
they can be different, and we need to use the bits from hva rather
than gpa.

In the case where we are not using MMU notifiers, the host physical
address we get from the memslot->arch.slot_phys[] array already
includes the low-order bits down to the PAGE_SIZE level, even if
we are using large pages.  Thus we can simplify the calculation in
this case to just add in the remaining bits in the case where
PAGE_SIZE is 64k and the guest is mapping a 4k page.

Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Paul Mackerras <paulus@samba.org>
Tested-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_hv_rm_mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
Paul Mackerras - Nov. 7, 2013, 11:07 a.m.
On Thu, Nov 07, 2013 at 09:11:38AM +1100, Paul Mackerras wrote:
> This fixes a bug in kvmppc_do_h_enter() where the physical address
> for a page can be calculated incorrectly if transparent huge pages
> (THP) are active.  Until THP came along, it was true that if we
> encountered a large (16M) page in kvmppc_do_h_enter(), then the
> associated memslot must be 16M aligned for both its guest physical
> address and the userspace address, and the physical address
> calculations in kvmppc_do_h_enter() assumed that.  With THP, that
> is no longer true.

BTW, it looks like kvmppc_book3s_hv_page_fault() has a similar bug.
I'll do a v2 of the patch to fix both, since it's essentially the
same problem in both places.

Paul.

Patch

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 9c51544..fddbf98 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -225,6 +225,7 @@  long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 		is_io = pa & (HPTE_R_I | HPTE_R_W);
 		pte_size = PAGE_SIZE << (pa & KVMPPC_PAGE_ORDER_MASK);
 		pa &= PAGE_MASK;
+		pa |= gpa & ~PAGE_MASK;
 	} else {
 		/* Translate to host virtual address */
 		hva = __gfn_to_hva_memslot(memslot, gfn);
@@ -238,13 +239,12 @@  long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 				ptel = hpte_make_readonly(ptel);
 			is_io = hpte_cache_bits(pte_val(pte));
 			pa = pte_pfn(pte) << PAGE_SHIFT;
+			pa |= hva & (pte_size - 1);
 		}
 	}
 
 	if (pte_size < psize)
 		return H_PARAMETER;
-	if (pa && pte_size > psize)
-		pa |= gpa & (pte_size - 1);
 
 	ptel &= ~(HPTE_R_PP0 - psize);
 	ptel |= pa;