[v1,1/2] KVM: PPC: Book3S HV: Add support for H_RPT_INVALIDATE (nested case only)

Message ID 20201019112642.53016-2-bharata@linux.ibm.com
State Superseded
Series [v1,1/2] KVM: PPC: Book3S HV: Add support for H_RPT_INVALIDATE (nested case only)

Commit Message

Bharata B Rao Oct. 19, 2020, 11:26 a.m. UTC
Implements H_RPT_INVALIDATE hcall and supports only nested case
currently.

A KVM capability KVM_CAP_RPT_INVALIDATE is added to indicate the
support for this hcall.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 Documentation/virt/kvm/api.rst                | 17 ++++
 .../include/asm/book3s/64/tlbflush-radix.h    | 18 ++++
 arch/powerpc/include/asm/kvm_book3s.h         |  3 +
 arch/powerpc/kvm/book3s_hv.c                  | 32 +++++++
 arch/powerpc/kvm/book3s_hv_nested.c           | 94 +++++++++++++++++++
 arch/powerpc/kvm/powerpc.c                    |  3 +
 arch/powerpc/mm/book3s64/radix_tlb.c          |  4 -
 include/uapi/linux/kvm.h                      |  1 +
 8 files changed, 168 insertions(+), 4 deletions(-)

Comments

Paul Mackerras Dec. 9, 2020, 4:15 a.m. UTC | #1
On Mon, Oct 19, 2020 at 04:56:41PM +0530, Bharata B Rao wrote:
> Implements H_RPT_INVALIDATE hcall and supports only nested case
> currently.
> 
> A KVM capability KVM_CAP_RPT_INVALIDATE is added to indicate the
> support for this hcall.

I have a couple of questions about this patch:

1. Is this something that is useful today, or is it something that may
become useful in the future depending on future product plans?  In
other words, what advantage is there to forcing L2 guests to use this
hcall instead of doing tlbie themselves?

2. Why does it need to be added to the default-enabled hcall list?

There is a concern that if this is enabled by default we could get the
situation where a guest using it gets migrated to a host that doesn't
support it, which would be bad.  That is the reason that all new
things like this are disabled by default and only enabled by userspace
(i.e. QEMU) in situations where we can enforce that it is available on
all hosts to which the VM might be migrated.

Thanks,
Paul.
Bharata B Rao Dec. 10, 2020, 4:24 a.m. UTC | #2
On Wed, Dec 09, 2020 at 03:15:42PM +1100, Paul Mackerras wrote:
> On Mon, Oct 19, 2020 at 04:56:41PM +0530, Bharata B Rao wrote:
> > Implements H_RPT_INVALIDATE hcall and supports only nested case
> > currently.
> > 
> > A KVM capability KVM_CAP_RPT_INVALIDATE is added to indicate the
> > support for this hcall.
> 
> I have a couple of questions about this patch:
> 
> 1. Is this something that is useful today, or is it something that may
> become useful in the future depending on future product plans? In
> other words, what advantage is there to forcing L2 guests to use this
> hcall instead of doing tlbie themselves?

H_RPT_INVALIDATE will replace the use of the existing H_TLB_INVALIDATE
for nested partition-scoped invalidations. Implementations that want to
off-load invalidations to the host (when GTSE=0) would then need to
deal with only one hcall (H_RPT_INVALIDATE).
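For reference, a minimal sketch (not part of the patch) of the single
call an L1 hypervisor would then need: the wrapper name below is made
up, but the H_RPTI_* flags and plpar_hcall_norets() come from the
existing powerpc headers, and the argument order matches what the patch
reads out of r4-r9.

#include <asm/hvcall.h>

/*
 * Sketch only: ask the host to flush the partition-scoped TLB entries
 * of nested guest "lpid" for the effective-address range start..end.
 */
static long l1_flush_l2_range(unsigned long lpid, unsigned long start,
			      unsigned long end)
{
	unsigned long type = H_RPTI_TYPE_NESTED | H_RPTI_TYPE_TLB;
	unsigned long pg_sizes = H_RPTI_PAGE_64K | H_RPTI_PAGE_2M |
				 H_RPTI_PAGE_1G;

	return plpar_hcall_norets(H_RPT_INVALIDATE, lpid,
				  H_RPTI_TARGET_CMMU, type, pg_sizes,
				  start, end);
}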

> 
> 2. Why does it need to be added to the default-enabled hcall list?
> 
> There is a concern that if this is enabled by default we could get the
> situation where a guest using it gets migrated to a host that doesn't
> support it, which would be bad.  That is the reason that all new
> things like this are disabled by default and only enabled by userspace
> (i.e. QEMU) in situations where we can enforce that it is available on
> all hosts to which the VM might be migrated.

As you suggested privately, I am thinking of falling back to
H_TLB_INVALIDATE in the case where this new hcall fails due to not
being present. That should address the migration case that you mention
above. With that, is leaving the new hcall enabled by default okay?

Regards,
Bharata.
David Gibson Dec. 11, 2020, 1:16 a.m. UTC | #3
On Thu, Dec 10, 2020 at 09:54:18AM +0530, Bharata B Rao wrote:
> On Wed, Dec 09, 2020 at 03:15:42PM +1100, Paul Mackerras wrote:
> > On Mon, Oct 19, 2020 at 04:56:41PM +0530, Bharata B Rao wrote:
> > > Implements H_RPT_INVALIDATE hcall and supports only nested case
> > > currently.
> > > 
> > > A KVM capability KVM_CAP_RPT_INVALIDATE is added to indicate the
> > > support for this hcall.
> > 
> > I have a couple of questions about this patch:
> > 
> > 1. Is this something that is useful today, or is it something that may
> > become useful in the future depending on future product plans? In
> > other words, what advantage is there to forcing L2 guests to use this
> > hcall instead of doing tlbie themselves?
> 
> H_RPT_INVALIDATE will replace the use of the existing H_TLB_INVALIDATE
> for nested partition-scoped invalidations. Implementations that want to
> off-load invalidations to the host (when GTSE=0) would then need to
> deal with only one hcall (H_RPT_INVALIDATE).
> 
> > 
> > 2. Why does it need to be added to the default-enabled hcall list?
> > 
> > There is a concern that if this is enabled by default we could get the
> > situation where a guest using it gets migrated to a host that doesn't
> > support it, which would be bad.  That is the reason that all new
> > things like this are disabled by default and only enabled by userspace
> > (i.e. QEMU) in situations where we can enforce that it is available on
> > all hosts to which the VM might be migrated.
> 
> As you suggested privately, I am thinking of falling back to
> H_TLB_INVALIDATE in the case where this new hcall fails due to not
> being present. That should address the migration case that you mention
> above. With that, is leaving the new hcall enabled by default okay?

No.  Assuming that guests will have some fallback is not how the qemu
migration compatibility model works.  If we specify an old machine
type, we need to provide the same environment that the older host
would have.
Paul Mackerras Dec. 11, 2020, 5:27 a.m. UTC | #4
On Fri, Dec 11, 2020 at 12:16:39PM +1100, David Gibson wrote:
> On Thu, Dec 10, 2020 at 09:54:18AM +0530, Bharata B Rao wrote:
> > On Wed, Dec 09, 2020 at 03:15:42PM +1100, Paul Mackerras wrote:
> > > On Mon, Oct 19, 2020 at 04:56:41PM +0530, Bharata B Rao wrote:
> > > > Implements H_RPT_INVALIDATE hcall and supports only nested case
> > > > currently.
> > > > 
> > > > A KVM capability KVM_CAP_RPT_INVALIDATE is added to indicate the
> > > > support for this hcall.
> > > 
> > > I have a couple of questions about this patch:
> > > 
> > > 1. Is this something that is useful today, or is it something that may
> > > become useful in the future depending on future product plans? In
> > > other words, what advantage is there to forcing L2 guests to use this
> > > hcall instead of doing tlbie themselves?
> > 
> > H_RPT_INVALIDATE will replace the use of the existing H_TLB_INVALIDATE
> > for nested partition-scoped invalidations. Implementations that want to
> > off-load invalidations to the host (when GTSE=0) would then need to
> > deal with only one hcall (H_RPT_INVALIDATE).
> > 
> > > 
> > > 2. Why does it need to be added to the default-enabled hcall list?
> > > 
> > > There is a concern that if this is enabled by default we could get the
> > > situation where a guest using it gets migrated to a host that doesn't
> > > support it, which would be bad.  That is the reason that all new
> > > things like this are disabled by default and only enabled by userspace
> > > (i.e. QEMU) in situations where we can enforce that it is available on
> > > all hosts to which the VM might be migrated.
> > 
> > As you suggested privately, I am thinking of falling back to
> > H_TLB_INVALIDATE in the case where this new hcall fails due to not
> > being present. That should address the migration case that you mention
> > above. With that, is leaving the new hcall enabled by default okay?
> 
> No.  Assuming that guests will have some fallback is not how the qemu
> migration compatibility model works.  If we specify an old machine
> type, we need to provide the same environment that the older host
> would have.

I misunderstood what this patchset is about when I first looked at
it.  H_RPT_INVALIDATE has two separate functions; one is to do
process-scoped invalidations for a guest when LPCR[GTSE] = 0 (i.e.,
when the guest is not permitted to do tlbie itself), and the other is
to do partition-scoped invalidations that an L1 hypervisor needs to do
on behalf of an L2 guest.  The second function is a replacement and
standardization of the existing H_TLB_INVALIDATE which was introduced
with the nested virtualization code (using a hypercall number from the
platform-specific range).

This patchset only implements the second function, not the first.  The
first function remains unimplemented in KVM at present.
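
To make that split concrete, here is a condensed sketch of the check
this patch adds in kvmppc_h_rpt_invalidate() (the helper name is mine;
the flags, return code and kvmhv_h_rpti_nested() are from the patch):
the H_RPTI_TYPE_NESTED bit of the type argument selects the
partition-scoped leg, and the process-scoped form is simply answered
with H_P3.

#include <linux/kvm_host.h>
#include <asm/hvcall.h>
#include <asm/kvm_book3s.h>

static long h_rpt_invalidate_dispatch(struct kvm_vcpu *vcpu,
				      unsigned long id, unsigned long type,
				      unsigned long pg_sizes,
				      unsigned long start, unsigned long end)
{
	if (type & H_RPTI_TYPE_NESTED)
		/* Partition-scoped: id is an L2 LPID; implemented here. */
		return kvmhv_h_rpti_nested(vcpu, id,
					   type & ~H_RPTI_TYPE_NESTED,
					   pg_sizes, start, end);

	/* Process-scoped (LPCR[GTSE] = 0): id is a PID; not implemented. */
	return H_P3;
}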

Given that QEMU will need changes for a guest to be able to exploit
H_RPT_INVALIDATE (at a minimum, adding a device tree property), it
doesn't seem onerous for QEMU to have to enable the hcall with
KVM_CAP_PPC_ENABLE_HCALL.  I think that the control on whether the
hcall is handled in KVM along with the control on nested hypervisor
function provides adequate control for QEMU without needing a writable
capability.  The read-only capability to say whether the hcall exists
does seem useful.
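
For illustration, a userspace-side sketch (mine, not part of this
patch) of that flow: check the read-only capability, then enable the
hcall per-VM through KVM_CAP_PPC_ENABLE_HCALL, whose args[0] is the
hcall number and args[1] the enable flag. The opcode value below is
the one asm/hvcall.h assigns to H_RPT_INVALIDATE.

#include <linux/kvm.h>
#include <sys/ioctl.h>

#define H_RPT_INVALIDATE	0x448	/* hcall opcode; see asm/hvcall.h */

static int enable_rpt_invalidate(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_PPC_ENABLE_HCALL,
		.args = { H_RPT_INVALIDATE, 1 },
	};

	/* Read-only capability: can this kernel handle the hcall at all? */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_RPT_INVALIDATE) <= 0)
		return -1;

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}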

Given all that, I'm veering towards taking Bharata's patchset pretty
much as-is, minus the addition of H_RPT_INVALIDATE to the
default-enabled set.

Paul.
Bharata B Rao Dec. 11, 2020, 10:33 a.m. UTC | #5
On Mon, Oct 19, 2020 at 04:56:41PM +0530, Bharata B Rao wrote:
> Implements H_RPT_INVALIDATE hcall and supports only nested case
> currently.
> 
> A KVM capability KVM_CAP_RPT_INVALIDATE is added to indicate the
> support for this hcall.

As Paul mentioned in the thread, this hcall does both process-scoped
invalidations and partition-scoped invalidations for an L2 guest.
I am adding the KVM_CAP_RPT_INVALIDATE capability with only the
partition-scoped invalidations (the nested case) implemented in the
hcall, since we don't see the need for KVM to implement the
process-scoped invalidation function: KVM may never run with
LPCR[GTSE]=0.

I am wondering if enabling the capability with only a partial
implementation of the hcall is the correct thing to do. In the future,
if we ever want process-scoped invalidation support in this hcall, we
may not be able to differentiate the availability of the two functions
cleanly from QEMU.

So does it make sense to implement the process-scoped invalidation
function now as well, even if it is not going to be used in KVM?

Regards,
Bharata.
David Gibson Dec. 14, 2020, 6:05 a.m. UTC | #6
On Fri, Dec 11, 2020 at 04:27:44PM +1100, Paul Mackerras wrote:
> On Fri, Dec 11, 2020 at 12:16:39PM +1100, David Gibson wrote:
> > On Thu, Dec 10, 2020 at 09:54:18AM +0530, Bharata B Rao wrote:
> > > On Wed, Dec 09, 2020 at 03:15:42PM +1100, Paul Mackerras wrote:
> > > > On Mon, Oct 19, 2020 at 04:56:41PM +0530, Bharata B Rao wrote:
> > > > > Implements H_RPT_INVALIDATE hcall and supports only nested case
> > > > > currently.
> > > > > 
> > > > > A KVM capability KVM_CAP_RPT_INVALIDATE is added to indicate the
> > > > > support for this hcall.
> > > > 
> > > > I have a couple of questions about this patch:
> > > > 
> > > > 1. Is this something that is useful today, or is it something that may
> > > > become useful in the future depending on future product plans? In
> > > > other words, what advantage is there to forcing L2 guests to use this
> > > > hcall instead of doing tlbie themselves?
> > > 
> > > H_RPT_INVALIDATE will replace the use of the existing H_TLB_INVALIDATE
> > > for nested partition-scoped invalidations. Implementations that want to
> > > off-load invalidations to the host (when GTSE=0) would then need to
> > > deal with only one hcall (H_RPT_INVALIDATE).
> > > 
> > > > 
> > > > 2. Why does it need to be added to the default-enabled hcall list?
> > > > 
> > > > There is a concern that if this is enabled by default we could get the
> > > > situation where a guest using it gets migrated to a host that doesn't
> > > > support it, which would be bad.  That is the reason that all new
> > > > things like this are disabled by default and only enabled by userspace
> > > > (i.e. QEMU) in situations where we can enforce that it is available on
> > > > all hosts to which the VM might be migrated.
> > > 
> > > As you suggested privately, I am thinking of falling back to
> > > H_TLB_INVALIDATE in the case where this new hcall fails due to not
> > > being present. That should address the migration case that you mention
> > > above. With that, is leaving the new hcall enabled by default okay?
> > 
> > No.  Assuming that guests will have some fallback is not how the qemu
> > migration compatibility model works.  If we specify an old machine
> > type, we need to provide the same environment that the older host
> > would have.
> 
> I misunderstood what this patchset is about when I first looked at
> it.  H_RPT_INVALIDATE has two separate functions; one is to do
> process-scoped invalidations for a guest when LPCR[GTSE] = 0 (i.e.,
> when the guest is not permitted to do tlbie itself), and the other is
> to do partition-scoped invalidations that an L1 hypervisor needs to do
> on behalf of an L2 guest.  The second function is a replacement and
> standardization of the existing H_TLB_INVALIDATE which was introduced
> with the nested virtualization code (using a hypercall number from the
> platform-specific range).
> 
> This patchset only implements the second function, not the first.  The
> first function remains unimplemented in KVM at present.
> 
> Given that QEMU will need changes for a guest to be able to exploit
> H_RPT_INVALIDATE (at a minimum, adding a device tree property), it
> doesn't seem onerous for QEMU to have to enable the hcall with
> KVM_CAP_PPC_ENABLE_HCALL.  I think that the control on whether the
> hcall is handled in KVM along with the control on nested hypervisor
> function provides adequate control for QEMU without needing a writable
> capability.  The read-only capability to say whether the hcall exists
> does seem useful.
> 
> Given all that, I'm veering towards taking Bharata's patchset pretty
> much as-is, minus the addition of H_RPT_INVALIDATE to the
> default-enabled set.

Yes, that's fine.  It was only the suggestion that it be on the
default-enabled set that I was objecting to.
David Gibson Dec. 14, 2020, 6:05 a.m. UTC | #7
On Fri, Dec 11, 2020 at 04:03:36PM +0530, Bharata B Rao wrote:
> On Mon, Oct 19, 2020 at 04:56:41PM +0530, Bharata B Rao wrote:
> > Implements H_RPT_INVALIDATE hcall and supports only nested case
> > currently.
> > 
> > A KVM capability KVM_CAP_RPT_INVALIDATE is added to indicate the
> > support for this hcall.
> 
> As Paul mentioned in the thread, this hcall does both process-scoped
> invalidations and partition-scoped invalidations for an L2 guest.
> I am adding the KVM_CAP_RPT_INVALIDATE capability with only the
> partition-scoped invalidations (the nested case) implemented in the
> hcall, since we don't see the need for KVM to implement the
> process-scoped invalidation function: KVM may never run with
> LPCR[GTSE]=0.
> 
> I am wondering if enabling the capability with only a partial
> implementation of the hcall is the correct thing to do. In the future,
> if we ever want process-scoped invalidation support in this hcall, we
> may not be able to differentiate the availability of the two functions
> cleanly from QEMU.

Yeah, it's not ideal.

> So does it make sense to implement the process-scoped invalidation
> function now as well, even if it is not going to be used in KVM?

That might be a good idea, if it's not excessively difficult.

Patch

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 1f26d83e6b168..67e98a56271ae 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5852,6 +5852,23 @@  controlled by the kvm module parameter halt_poll_ns. This capability allows
 the maximum halt time to specified on a per-VM basis, effectively overriding
 the module parameter for the target VM.
 
+7.21 KVM_CAP_RPT_INVALIDATE
+---------------------------
+
+:Capability: KVM_CAP_RPT_INVALIDATE
+:Architectures: ppc
+:Type: vm
+
+This capability indicates that the kernel is capable of handling the
+H_RPT_INVALIDATE hcall.
+
+In order to enable the use of H_RPT_INVALIDATE in the guest,
+user space might have to advertise it for the guest. For example,
+an IBM pSeries (sPAPR) guest starts using it if "hcall-rpt-invalidate"
+is present in the "ibm,hypertas-functions" device-tree property.
+
+This capability is always enabled.
+
 8. Other capabilities.
 ======================
 
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
index 94439e0cefc9c..aace7e9b2397d 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
@@ -4,6 +4,10 @@ 
 
 #include <asm/hvcall.h>
 
+#define RIC_FLUSH_TLB 0
+#define RIC_FLUSH_PWC 1
+#define RIC_FLUSH_ALL 2
+
 struct vm_area_struct;
 struct mm_struct;
 struct mmu_gather;
@@ -21,6 +25,20 @@  static inline u64 psize_to_rpti_pgsize(unsigned long psize)
 	return H_RPTI_PAGE_ALL;
 }
 
+static inline int rpti_pgsize_to_psize(unsigned long page_size)
+{
+	if (page_size == H_RPTI_PAGE_4K)
+		return MMU_PAGE_4K;
+	if (page_size == H_RPTI_PAGE_64K)
+		return MMU_PAGE_64K;
+	if (page_size == H_RPTI_PAGE_2M)
+		return MMU_PAGE_2M;
+	if (page_size == H_RPTI_PAGE_1G)
+		return MMU_PAGE_1G;
+	else
+		return MMU_PAGE_64K; /* Default */
+}
+
 static inline int mmu_get_ap(int psize)
 {
 	return mmu_psize_defs[psize].ap;
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index d32ec9ae73bd4..0f1c5fa6e8ce3 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -298,6 +298,9 @@  void kvmhv_set_ptbl_entry(unsigned int lpid, u64 dw0, u64 dw1);
 void kvmhv_release_all_nested(struct kvm *kvm);
 long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu);
 long kvmhv_do_nested_tlbie(struct kvm_vcpu *vcpu);
+long kvmhv_h_rpti_nested(struct kvm_vcpu *vcpu, unsigned long lpid,
+			 unsigned long type, unsigned long pg_sizes,
+			 unsigned long start, unsigned long end);
 int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu,
 			  u64 time_limit, unsigned long lpcr);
 void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr);
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 3bd3118c76330..6cbd37af91ebf 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -904,6 +904,28 @@  static int kvmppc_get_yield_count(struct kvm_vcpu *vcpu)
 	return yield_count;
 }
 
+static long kvmppc_h_rpt_invalidate(struct kvm_vcpu *vcpu,
+				    unsigned long pid, unsigned long target,
+				    unsigned long type, unsigned long pg_sizes,
+				    unsigned long start, unsigned long end)
+{
+	if (end < start)
+		return H_P5;
+
+	if (!(type & H_RPTI_TYPE_NESTED))
+		return H_P3;
+
+	if (!nesting_enabled(vcpu->kvm))
+		return H_FUNCTION;
+
+	/* Support only cores as target */
+	if (target != H_RPTI_TARGET_CMMU)
+		return H_P2;
+
+	return kvmhv_h_rpti_nested(vcpu, pid, (type & ~H_RPTI_TYPE_NESTED),
+				   pg_sizes, start, end);
+}
+
 int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 {
 	unsigned long req = kvmppc_get_gpr(vcpu, 3);
@@ -1112,6 +1134,14 @@  int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 		 */
 		ret = kvmppc_h_svm_init_abort(vcpu->kvm);
 		break;
+	case H_RPT_INVALIDATE:
+		ret = kvmppc_h_rpt_invalidate(vcpu, kvmppc_get_gpr(vcpu, 4),
+					      kvmppc_get_gpr(vcpu, 5),
+					      kvmppc_get_gpr(vcpu, 6),
+					      kvmppc_get_gpr(vcpu, 7),
+					      kvmppc_get_gpr(vcpu, 8),
+					      kvmppc_get_gpr(vcpu, 9));
+		break;
 
 	default:
 		return RESUME_HOST;
@@ -1158,6 +1188,7 @@  static int kvmppc_hcall_impl_hv(unsigned long cmd)
 	case H_XIRR_X:
 #endif
 	case H_PAGE_INIT:
+	case H_RPT_INVALIDATE:
 		return 1;
 	}
 
@@ -5334,6 +5365,7 @@  static unsigned int default_hcall_list[] = {
 	H_XIRR,
 	H_XIRR_X,
 #endif
+	H_RPT_INVALIDATE,
 	0
 };
 
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 6822d23a2da4d..3ec0231628b42 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -1149,6 +1149,100 @@  long kvmhv_do_nested_tlbie(struct kvm_vcpu *vcpu)
 	return H_SUCCESS;
 }
 
+static long do_tlb_invalidate_nested_tlb(struct kvm_vcpu *vcpu,
+					 unsigned long lpid,
+					 unsigned long page_size,
+					 unsigned long ap,
+					 unsigned long start,
+					 unsigned long end)
+{
+	unsigned long addr = start;
+	int ret;
+
+	do {
+		ret = kvmhv_emulate_tlbie_tlb_addr(vcpu, lpid, ap,
+						   get_epn(addr));
+		if (ret)
+			return ret;
+		addr += page_size;
+	} while (addr < end);
+
+	return ret;
+}
+
+static long do_tlb_invalidate_nested_all(struct kvm_vcpu *vcpu,
+					 unsigned long lpid)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_nested_guest *gp;
+
+	gp = kvmhv_get_nested(kvm, lpid, false);
+	if (gp) {
+		kvmhv_emulate_tlbie_lpid(vcpu, gp, RIC_FLUSH_ALL);
+		kvmhv_put_nested(gp);
+	}
+	return H_SUCCESS;
+}
+
+long kvmhv_h_rpti_nested(struct kvm_vcpu *vcpu, unsigned long lpid,
+			 unsigned long type, unsigned long pg_sizes,
+			 unsigned long start, unsigned long end)
+{
+	struct kvm_nested_guest *gp;
+	long ret;
+	unsigned long psize, ap;
+
+	/*
+	 * If L2 lpid isn't valid, we need to return H_PARAMETER.
+	 * Nested KVM issues an L2 lpid flush call when creating
+	 * partition table entries for L2. This happens even before
+	 * the corresponding shadow lpid is created in HV. Until
+	 * this is fixed, ignore such flush requests.
+	 */
+	gp = kvmhv_find_nested(vcpu->kvm, lpid);
+	if (!gp)
+		return H_SUCCESS;
+
+	if ((type & H_RPTI_TYPE_NESTED_ALL) == H_RPTI_TYPE_NESTED_ALL)
+		return do_tlb_invalidate_nested_all(vcpu, lpid);
+
+	if ((type & H_RPTI_TYPE_TLB) == H_RPTI_TYPE_TLB) {
+		if (pg_sizes & H_RPTI_PAGE_64K) {
+			psize = rpti_pgsize_to_psize(pg_sizes & H_RPTI_PAGE_64K);
+			ap = mmu_get_ap(psize);
+
+			ret = do_tlb_invalidate_nested_tlb(vcpu, lpid,
+							   (1UL << 16),
+							   ap, start, end);
+			if (ret)
+				return H_P4;
+		}
+
+		if (pg_sizes & H_RPTI_PAGE_2M) {
+			psize = rpti_pgsize_to_psize(pg_sizes & H_RPTI_PAGE_2M);
+			ap = mmu_get_ap(psize);
+
+			ret = do_tlb_invalidate_nested_tlb(vcpu, lpid,
+							   (1UL << 21),
+							   ap, start, end);
+			if (ret)
+				return H_P4;
+		}
+
+		if (pg_sizes & H_RPTI_PAGE_1G) {
+			psize = rpti_pgsize_to_psize(pg_sizes & H_RPTI_PAGE_1G);
+			ap = mmu_get_ap(psize);
+
+			ret = do_tlb_invalidate_nested_tlb(vcpu, lpid,
+							   (1UL << 30),
+							   ap, start, end);
+			if (ret)
+				return H_P4;
+		}
+	}
+	return H_SUCCESS;
+}
+
 /* Used to convert a nested guest real address to a L1 guest real address */
 static int kvmhv_translate_addr_nested(struct kvm_vcpu *vcpu,
 				       struct kvm_nested_guest *gp,
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 13999123b7358..8ab4a71c40b4a 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -678,6 +678,9 @@  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = hv_enabled && kvmppc_hv_ops->enable_svm &&
 			!kvmppc_hv_ops->enable_svm(NULL);
 		break;
+	case KVM_CAP_RPT_INVALIDATE:
+		r = 1;
+		break;
 #endif
 	default:
 		r = 0;
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index b487b489d4b68..3a2b12d1d49b8 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -18,10 +18,6 @@ 
 #include <asm/cputhreads.h>
 #include <asm/plpar_wrappers.h>
 
-#define RIC_FLUSH_TLB 0
-#define RIC_FLUSH_PWC 1
-#define RIC_FLUSH_ALL 2
-
 /*
  * tlbiel instruction for radix, set invalidation
  * i.e., r=1 and is=01 or is=10 or is=11
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 7d8eced6f459b..109ede74735d4 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1037,6 +1037,7 @@  struct kvm_ppc_resize_hpt {
 #define KVM_CAP_SMALLER_MAXPHYADDR 185
 #define KVM_CAP_S390_DIAG318 186
 #define KVM_CAP_STEAL_TIME 187
+#define KVM_CAP_RPT_INVALIDATE 188
 
 #ifdef KVM_CAP_IRQ_ROUTING