
[PATCHv4] kvmppc: Implement H_LOGICAL_CI_{LOAD,STORE} in KVM

Message ID: 1429576911-6076-1-git-send-email-david@gibson.dropbear.id.au
State: New, archived

Commit Message

David Gibson April 21, 2015, 12:41 a.m. UTC
On POWER, storage caching is usually configured via the MMU: attributes
such as cache-inhibited are stored in the TLB and the hashed page table.

This makes correctly performing cache-inhibited IO accesses awkward when
the MMU is turned off (real mode).  Some CPU models provide special
registers to control the cache attributes of real mode loads and stores,
but this is not at all consistent.  This is a particular problem for
SLOF, the firmware used on KVM guests, which runs entirely in real mode
but needs to do IO to load the kernel.

To simplify this, qemu implements two special hypercalls, H_LOGICAL_CI_LOAD
and H_LOGICAL_CI_STORE, which simulate a cache-inhibited load or store to
a logical address (aka guest physical address).  SLOF uses these for IO.
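
For reference, the calling convention these hcalls use, as implemented by
the handlers below, looks roughly like this from the guest side.  This is
a sketch only: the helper and its inline asm are illustrative, not SLOF's
actual code (H_LOGICAL_CI_STORE is analogous, with the value in r6).

static long h_logical_ci_load_sketch(unsigned long size, unsigned long addr,
				     unsigned long *val)
{
	register unsigned long r3 asm("r3") = 0x3c;	/* H_LOGICAL_CI_LOAD (PAPR) */
	register unsigned long r4 asm("r4") = size;	/* 1, 2, 4 or 8 bytes */
	register unsigned long r5 asm("r5") = addr;	/* logical (guest physical) address */

	asm volatile("sc 1"		/* hypervisor call */
		     : "+r" (r3), "+r" (r4), "+r" (r5)
		     :
		     : "r0", "r6", "r7", "r8", "r9", "r10", "r11", "r12",
		       "cr0", "cr1", "cr5", "cr6", "cr7", "xer", "ctr", "memory");
	*val = r4;			/* loaded value is returned in r4 */
	return r3;			/* H_SUCCESS, or a hypervisor error code */
}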

However, because these are implemented within qemu, not the host kernel,
they bypass any IO devices emulated within KVM itself.  The simplest way
to see this problem is to attempt to boot a KVM guest from a virtio-blk
device with iothread / dataplane enabled.  The iothread code relies on an
in-kernel implementation of the virtio queue notification, which is not
triggered by the IO hcalls, and so the guest will stall in SLOF, unable
to load the guest OS.

This patch addresses the problem by providing in-kernel implementations
of the two hypercalls, which correctly scan the KVM IO bus.  Any access
to an address not handled by the KVM IO bus will cause a VM exit, hitting
the qemu implementation as before.

Note that a userspace change is also required in order to enable these
new hcall implementations with KVM_CAP_PPC_ENABLE_HCALL.
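
For concreteness, the userspace side amounts to something like the sketch
below (assumptions: a VM fd on which the KVM_ENABLE_CAP vm ioctl is
available, and hcall numbers taken from PAPR since they are not exported
in the uapi headers; this is not qemu's actual code):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* PAPR hcall numbers; userspace carries its own definitions. */
#define H_LOGICAL_CI_LOAD	0x3c
#define H_LOGICAL_CI_STORE	0x40

/* Ask KVM to handle one sPAPR hcall in the kernel (vm ioctl). */
static int enable_in_kernel_hcall(int vmfd, unsigned long hcall)
{
	struct kvm_enable_cap cap = {
		.cap  = KVM_CAP_PPC_ENABLE_HCALL,
		.args = { hcall, 1 },	/* args[0] = hcall nr, args[1] = 1 to enable */
	};

	return ioctl(vmfd, KVM_ENABLE_CAP, &cap);
}

/* e.g.  enable_in_kernel_hcall(vmfd, H_LOGICAL_CI_LOAD);
 *       enable_in_kernel_hcall(vmfd, H_LOGICAL_CI_STORE);  */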

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/include/asm/kvm_book3s.h |  3 ++
 arch/powerpc/kvm/book3s.c             | 76 +++++++++++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv.c          | 12 ++++++
 arch/powerpc/kvm/book3s_pr_papr.c     | 28 +++++++++++++
 4 files changed, 119 insertions(+)

Changes in v4:
 * Rebase onto 4.0+, correct for changed signature of kvm_io_bus_{read,write}

Alex, I saw from some build system notifications that you seemed to
hit some troubles compiling the last version of this patch. This
should fix it - hope it's not too late to get into 4.1.

Comments

Thomas Huth April 21, 2015, 6:37 a.m. UTC | #1
On Tue, 21 Apr 2015 10:41:51 +1000,
David Gibson <david@gibson.dropbear.id.au> wrote:

...
> diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> index cfbcdc6..453a8a4 100644
> --- a/arch/powerpc/kvm/book3s.c
> +++ b/arch/powerpc/kvm/book3s.c
> @@ -821,6 +821,82 @@ void kvmppc_core_destroy_vm(struct kvm *kvm)
>  #endif
>  }
>  
> +int kvmppc_h_logical_ci_load(struct kvm_vcpu *vcpu)
> +{
> +	unsigned long size = kvmppc_get_gpr(vcpu, 4);
> +	unsigned long addr = kvmppc_get_gpr(vcpu, 5);
> +	u64 buf;
> +	int ret;
> +
> +	if (!is_power_of_2(size) || (size > sizeof(buf)))
> +		return H_TOO_HARD;
> +
> +	ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, size, &buf);
> +	if (ret != 0)
> +		return H_TOO_HARD;
> +
> +	switch (size) {
> +	case 1:
> +		kvmppc_set_gpr(vcpu, 4, *(u8 *)&buf);
> +		break;
> +

Most of the code in book3s.c seems not to use an empty line after a
"break;", so may I suggest removing these empty lines here, too, to
keep the coding style a little bit more consistent?

> +	case 2:
> +		kvmppc_set_gpr(vcpu, 4, be16_to_cpu(*(__be16 *)&buf));
> +		break;
> +
> +	case 4:
> +		kvmppc_set_gpr(vcpu, 4, be32_to_cpu(*(__be32 *)&buf));
> +		break;
> +
> +	case 8:
> +		kvmppc_set_gpr(vcpu, 4, be64_to_cpu(*(__be64 *)&buf));
> +		break;
> +
> +	default:
> +		BUG();

If I got the code right, a malicious guest could easily trigger this
BUG() statement, couldn't it? ... so a BUG() is maybe not the right
thing to do here. Would it be appropriate to return an error value to
the guest instead?

> +	}
> +
> +	return H_SUCCESS;
> +}
> +EXPORT_SYMBOL_GPL(kvmppc_h_logical_ci_load);

 Thomas
David Gibson April 21, 2015, 6:51 a.m. UTC | #2
On Tue, Apr 21, 2015 at 08:37:02AM +0200, Thomas Huth wrote:
> On Tue, 21 Apr 2015 10:41:51 +1000,
> David Gibson <david@gibson.dropbear.id.au> wrote:
> 
> ...
> > diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> > index cfbcdc6..453a8a4 100644
> > --- a/arch/powerpc/kvm/book3s.c
> > +++ b/arch/powerpc/kvm/book3s.c
> > @@ -821,6 +821,82 @@ void kvmppc_core_destroy_vm(struct kvm *kvm)
> >  #endif
> >  }
> >  
> > +int kvmppc_h_logical_ci_load(struct kvm_vcpu *vcpu)
> > +{
> > +	unsigned long size = kvmppc_get_gpr(vcpu, 4);
> > +	unsigned long addr = kvmppc_get_gpr(vcpu, 5);
> > +	u64 buf;
> > +	int ret;
> > +
> > +	if (!is_power_of_2(size) || (size > sizeof(buf)))
> > +		return H_TOO_HARD;
> > +
> > +	ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, size, &buf);
> > +	if (ret != 0)
> > +		return H_TOO_HARD;
> > +
> > +	switch (size) {
> > +	case 1:
> > +		kvmppc_set_gpr(vcpu, 4, *(u8 *)&buf);
> > +		break;
> > +
> 
> Most of the code in book3s.c seems not to use an empty line after a
> "break;", so may I suggest removing these empty lines here, too, to
> keep the coding style a little bit more consistent?

I don't think it's worth respinning just for that.

> > +	case 2:
> > +		kvmppc_set_gpr(vcpu, 4, be16_to_cpu(*(__be16 *)&buf));
> > +		break;
> > +
> > +	case 4:
> > +		kvmppc_set_gpr(vcpu, 4, be32_to_cpu(*(__be32 *)&buf));
> > +		break;
> > +
> > +	case 8:
> > +		kvmppc_set_gpr(vcpu, 4, be64_to_cpu(*(__be64 *)&buf));
> > +		break;
> > +
> > +	default:
> > +		BUG();
> 
> If I got the code right, a malicious guest could easily trigger this
> BUG() statement, couldn't it? ... so a BUG() is maybe not the right
> thing to do here. Would it be appropriate to return an error value to
> the guest instead?

Actually no - the test at the top of the function for
is_power_of_2(size) etc. catches this safely before we get here.  The
BUG() is just paranoia.
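
As a standalone illustration of that guard (plain userspace C, not kernel
code; sizeof(uint64_t) stands in for sizeof(buf) in the handler), only
sizes 1, 2, 4 and 8 get past it:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Same test the kernel's is_power_of_2() performs. */
static bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	for (unsigned long size = 0; size <= 16; size++)
		if (is_power_of_2(size) && size <= sizeof(uint64_t))
			printf("size %lu reaches the switch\n", size);
	return 0;	/* prints 1, 2, 4 and 8 only */
}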


> 
> > +	}
> > +
> > +	return H_SUCCESS;
> > +}
> > +EXPORT_SYMBOL_GPL(kvmppc_h_logical_ci_load);
> 
>  Thomas
>
Thomas Huth April 21, 2015, 7:36 a.m. UTC | #3
On Tue, 21 Apr 2015 16:51:21 +1000,
David Gibson <david@gibson.dropbear.id.au> wrote:

> On Tue, Apr 21, 2015 at 08:37:02AM +0200, Thomas Huth wrote:
> > On Tue, 21 Apr 2015 10:41:51 +1000,
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> > 
> > ...
> > > diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> > > index cfbcdc6..453a8a4 100644
> > > --- a/arch/powerpc/kvm/book3s.c
> > > +++ b/arch/powerpc/kvm/book3s.c
> > > @@ -821,6 +821,82 @@ void kvmppc_core_destroy_vm(struct kvm *kvm)
> > >  #endif
> > >  }
> > >  
> > > +int kvmppc_h_logical_ci_load(struct kvm_vcpu *vcpu)
> > > +{
> > > +	unsigned long size = kvmppc_get_gpr(vcpu, 4);
> > > +	unsigned long addr = kvmppc_get_gpr(vcpu, 5);
> > > +	u64 buf;
> > > +	int ret;
> > > +
> > > +	if (!is_power_of_2(size) || (size > sizeof(buf)))
> > > +		return H_TOO_HARD;
> > > +
> > > +	ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, size, &buf);
> > > +	if (ret != 0)
> > > +		return H_TOO_HARD;
> > > +
> > > +	switch (size) {
> > > +	case 1:
> > > +		kvmppc_set_gpr(vcpu, 4, *(u8 *)&buf);
> > > +		break;
> > > +
> > 
> > Most of the code in book3s.c seems not to use an empty line after a
> > "break;", so may I suggest removing these empty lines here, too, to
> > keep the coding style a little bit more consistent?
> 
> I don't think it's worth respinning just for that.
> 
> > > +	case 2:
> > > +		kvmppc_set_gpr(vcpu, 4, be16_to_cpu(*(__be16 *)&buf));
> > > +		break;
> > > +
> > > +	case 4:
> > > +		kvmppc_set_gpr(vcpu, 4, be32_to_cpu(*(__be32 *)&buf));
> > > +		break;
> > > +
> > > +	case 8:
> > > +		kvmppc_set_gpr(vcpu, 4, be64_to_cpu(*(__be64 *)&buf));
> > > +		break;
> > > +
> > > +	default:
> > > +		BUG();
> > 
> > If I got the code right, a malicious guest could easily trigger this
> > BUG() statement, couldn't it? ... so a BUG() is maybe not the right
> > thing to do here. Would it be appropriate to return an error value to
> > the guest instead?
> 
> Actually no - the test at the top of the function for
> is_power_of_2(size) etc. catches this safely before we get here.  The
> BUG() is just paranoia.

Ah, missed that, you're right, so the code should be fine!

 Thomas
Alexander Graf April 21, 2015, 10:41 a.m. UTC | #4
On 04/21/2015 02:41 AM, David Gibson wrote:
> ...
>
> Alex, I saw from some build system notifications that you seemed to
> hit some troubles compiling the last version of this patch. This
> should fix it - hope it's not too late to get into 4.1.

Oh, I already fixed it up in my tree, no worries.


Alex


Patch

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9930904..b91e74a 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -288,6 +288,9 @@  static inline bool kvmppc_supports_magic_page(struct kvm_vcpu *vcpu)
 	return !is_kvmppc_hv_enabled(vcpu->kvm);
 }
 
+extern int kvmppc_h_logical_ci_load(struct kvm_vcpu *vcpu);
+extern int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu);
+
 /* Magic register values loaded into r3 and r4 before the 'sc' assembly
  * instruction for the OSI hypercalls */
 #define OSI_SC_MAGIC_R3			0x113724FA
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index cfbcdc6..453a8a4 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -821,6 +821,82 @@  void kvmppc_core_destroy_vm(struct kvm *kvm)
 #endif
 }
 
+int kvmppc_h_logical_ci_load(struct kvm_vcpu *vcpu)
+{
+	unsigned long size = kvmppc_get_gpr(vcpu, 4);
+	unsigned long addr = kvmppc_get_gpr(vcpu, 5);
+	u64 buf;
+	int ret;
+
+	if (!is_power_of_2(size) || (size > sizeof(buf)))
+		return H_TOO_HARD;
+
+	ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, size, &buf);
+	if (ret != 0)
+		return H_TOO_HARD;
+
+	switch (size) {
+	case 1:
+		kvmppc_set_gpr(vcpu, 4, *(u8 *)&buf);
+		break;
+
+	case 2:
+		kvmppc_set_gpr(vcpu, 4, be16_to_cpu(*(__be16 *)&buf));
+		break;
+
+	case 4:
+		kvmppc_set_gpr(vcpu, 4, be32_to_cpu(*(__be32 *)&buf));
+		break;
+
+	case 8:
+		kvmppc_set_gpr(vcpu, 4, be64_to_cpu(*(__be64 *)&buf));
+		break;
+
+	default:
+		BUG();
+	}
+
+	return H_SUCCESS;
+}
+EXPORT_SYMBOL_GPL(kvmppc_h_logical_ci_load);
+
+int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu)
+{
+	unsigned long size = kvmppc_get_gpr(vcpu, 4);
+	unsigned long addr = kvmppc_get_gpr(vcpu, 5);
+	unsigned long val = kvmppc_get_gpr(vcpu, 6);
+	u64 buf;
+	int ret;
+
+	switch (size) {
+	case 1:
+		*(u8 *)&buf = val;
+		break;
+
+	case 2:
+		*(__be16 *)&buf = cpu_to_be16(val);
+		break;
+
+	case 4:
+		*(__be32 *)&buf = cpu_to_be32(val);
+		break;
+
+	case 8:
+		*(__be64 *)&buf = cpu_to_be64(val);
+		break;
+
+	default:
+		return H_TOO_HARD;
+	}
+
+	ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, addr, size, &buf);
+	if (ret != 0)
+		return H_TOO_HARD;
+
+	return H_SUCCESS;
+}
+EXPORT_SYMBOL_GPL(kvmppc_h_logical_ci_store);
+
 int kvmppc_core_check_processor_compat(void)
 {
 	/*
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index de74756..b9c11a3 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -706,6 +706,16 @@  int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 
 		/* Send the error out to userspace via KVM_RUN */
 		return rc;
+	case H_LOGICAL_CI_LOAD:
+		ret = kvmppc_h_logical_ci_load(vcpu);
+		if (ret == H_TOO_HARD)
+			return RESUME_HOST;
+		break;
+	case H_LOGICAL_CI_STORE:
+		ret = kvmppc_h_logical_ci_store(vcpu);
+		if (ret == H_TOO_HARD)
+			return RESUME_HOST;
+		break;
 	case H_SET_MODE:
 		ret = kvmppc_h_set_mode(vcpu, kvmppc_get_gpr(vcpu, 4),
 					kvmppc_get_gpr(vcpu, 5),
@@ -740,6 +750,8 @@  static int kvmppc_hcall_impl_hv(unsigned long cmd)
 	case H_CONFER:
 	case H_REGISTER_VPA:
 	case H_SET_MODE:
+	case H_LOGICAL_CI_LOAD:
+	case H_LOGICAL_CI_STORE:
 #ifdef CONFIG_KVM_XICS
 	case H_XIRR:
 	case H_CPPR:
diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c
index ce3c893..f2c75a1 100644
--- a/arch/powerpc/kvm/book3s_pr_papr.c
+++ b/arch/powerpc/kvm/book3s_pr_papr.c
@@ -258,6 +258,28 @@  static int kvmppc_h_pr_put_tce(struct kvm_vcpu *vcpu)
 	return EMULATE_DONE;
 }
 
+static int kvmppc_h_pr_logical_ci_load(struct kvm_vcpu *vcpu)
+{
+	long rc;
+
+	rc = kvmppc_h_logical_ci_load(vcpu);
+	if (rc == H_TOO_HARD)
+		return EMULATE_FAIL;
+	kvmppc_set_gpr(vcpu, 3, rc);
+	return EMULATE_DONE;
+}
+
+static int kvmppc_h_pr_logical_ci_store(struct kvm_vcpu *vcpu)
+{
+	long rc;
+
+	rc = kvmppc_h_logical_ci_store(vcpu);
+	if (rc == H_TOO_HARD)
+		return EMULATE_FAIL;
+	kvmppc_set_gpr(vcpu, 3, rc);
+	return EMULATE_DONE;
+}
+
 static int kvmppc_h_pr_xics_hcall(struct kvm_vcpu *vcpu, u32 cmd)
 {
 	long rc = kvmppc_xics_hcall(vcpu, cmd);
@@ -290,6 +312,10 @@  int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd)
 		clear_bit(KVM_REQ_UNHALT, &vcpu->requests);
 		vcpu->stat.halt_wakeup++;
 		return EMULATE_DONE;
+	case H_LOGICAL_CI_LOAD:
+		return kvmppc_h_pr_logical_ci_load(vcpu);
+	case H_LOGICAL_CI_STORE:
+		return kvmppc_h_pr_logical_ci_store(vcpu);
 	case H_XIRR:
 	case H_CPPR:
 	case H_EOI:
@@ -323,6 +349,8 @@  int kvmppc_hcall_impl_pr(unsigned long cmd)
 	case H_BULK_REMOVE:
 	case H_PUT_TCE:
 	case H_CEDE:
+	case H_LOGICAL_CI_LOAD:
+	case H_LOGICAL_CI_STORE:
 #ifdef CONFIG_KVM_XICS
 	case H_XIRR:
 	case H_CPPR: