
[kernel,v2,1/3] KVM: PPC: Use correct page shift in H_STUFF_TCE

Message ID 20180514100029.32910-2-aik@ozlabs.ru
State Accepted
Series KVM: PPC: Allow backing bigger guest IOMMU pages with smaller physical

Commit Message

Alexey Kardashevskiy May 14, 2018, 10 a.m. UTC
The other TCE handlers use the page shift from the guest-visible TCE table
(described by kvmppc_spapr_tce_iommu_table), so let's make the H_STUFF_TCE
handlers do the same thing.

This should cause no behavioral change now, but soon we will allow
iommu_table::it_page_shift to differ from the emulated table page size,
so this will play a role.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/kvm/book3s_64_vio.c    | 2 +-
 arch/powerpc/kvm/book3s_64_vio_hv.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
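
For context, the whole fix is about which page shift converts an IO bus
address (ioba) into a TCE entry index. Below is a minimal, self-contained
sketch, not the kernel code itself: the structures are simplified stand-ins
for kvmppc_spapr_tce_table and iommu_table, and the numbers are invented,
but it shows how the two shifts produce different entry indices once the
guest table page size differs from the physical IOMMU page size.

/*
 * Simplified illustration (hypothetical structures, not the real kernel
 * definitions): turning an ioba into a TCE entry index.
 */
#include <stdio.h>

struct guest_tce_table {            /* stand-in for kvmppc_spapr_tce_table */
	unsigned int page_shift;    /* guest-visible IOMMU page size */
};

struct host_iommu_table {           /* stand-in for iommu_table */
	unsigned int it_page_shift; /* physical IOMMU page size */
};

int main(void)
{
	struct guest_tce_table stt = { .page_shift = 16 };      /* 64K guest pages */
	struct host_iommu_table tbl = { .it_page_shift = 12 };  /* 4K host pages */
	unsigned long ioba = 0x30000;                           /* bus address */

	/* Old computation: indexes as if the guest table used host-sized pages. */
	unsigned long old_entry = ioba >> tbl.it_page_shift;    /* 48 */
	/* New computation: indexes the guest-visible table correctly. */
	unsigned long new_entry = ioba >> stt.page_shift;       /* 3 */

	printf("it_page_shift -> entry %lu, page_shift -> entry %lu\n",
	       old_entry, new_entry);
	return 0;
}

Compiled and run, the old computation would pick entry 48 of the guest
table for ioba 0x30000, while the guest (using 64K IOMMU pages) actually
addressed entry 3; with equal shifts, as is the case before this series,
both computations agree.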

Comments

Balbir Singh May 15, 2018, 5:05 a.m. UTC | #1
On Mon, May 14, 2018 at 8:00 PM, Alexey Kardashevskiy <aik@ozlabs.ru> wrote:
> The other TCE handlers use the page shift from the guest-visible TCE
> table (described by kvmppc_spapr_tce_iommu_table), so let's make the
> H_STUFF_TCE handlers do the same thing.
>
> This should cause no behavioral change now, but soon we will allow
> iommu_table::it_page_shift to differ from the emulated table page
> size, so this will play a role.
>
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  arch/powerpc/kvm/book3s_64_vio.c    | 2 +-
>  arch/powerpc/kvm/book3s_64_vio_hv.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
> index 4dffa61..041e54d 100644
> --- a/arch/powerpc/kvm/book3s_64_vio.c
> +++ b/arch/powerpc/kvm/book3s_64_vio.c
> @@ -615,7 +615,7 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
>                 return H_PARAMETER;
>
>         list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
> -               unsigned long entry = ioba >> stit->tbl->it_page_shift;
> +               unsigned long entry = ioba >> stt->page_shift;
>
>                 for (i = 0; i < npages; ++i) {
>                         ret = kvmppc_tce_iommu_unmap(vcpu->kvm,
> diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
> index 6651f73..e220fab 100644
> --- a/arch/powerpc/kvm/book3s_64_vio_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
> @@ -526,7 +526,7 @@ long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu,
>                 return H_PARAMETER;
>
>         list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
> -               unsigned long entry = ioba >> stit->tbl->it_page_shift;
> +               unsigned long entry = ioba >> stt->page_shift;
>
>                 for (i = 0; i < npages; ++i) {
>                         ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm,

Acked-by: Balbir Singh <bsingharora@gmail.com>

Balbir

Patch

diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
index 4dffa61..041e54d 100644
--- a/arch/powerpc/kvm/book3s_64_vio.c
+++ b/arch/powerpc/kvm/book3s_64_vio.c
@@ -615,7 +615,7 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
 		return H_PARAMETER;
 
 	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
-		unsigned long entry = ioba >> stit->tbl->it_page_shift;
+		unsigned long entry = ioba >> stt->page_shift;
 
 		for (i = 0; i < npages; ++i) {
 			ret = kvmppc_tce_iommu_unmap(vcpu->kvm,
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
index 6651f73..e220fab 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -526,7 +526,7 @@ long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu,
 		return H_PARAMETER;
 
 	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
-		unsigned long entry = ioba >> stit->tbl->it_page_shift;
+		unsigned long entry = ioba >> stt->page_shift;
 
 		for (i = 0; i < npages; ++i) {
 			ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm,