[kernel,1/2] KVM: PPC: Use correct page shift in H_STUFF_TCE

Message ID 20180502040723.20545-2-aik@ozlabs.ru (mailing list archive)
State Not Applicable
Series KVM: PPC: Allow backing bigger guest IOMMU pages with smaller physical pages

Commit Message

Alexey Kardashevskiy May 2, 2018, 4:07 a.m. UTC
The other TCE handlers use the page shift from the guest-visible TCE table
(described by struct kvmppc_spapr_tce_table), so let's make the H_STUFF_TCE
handlers do the same thing.

This should cause no behavioral change now, but soon we will allow
iommu_table::it_page_shift to differ from the emulated table's page size,
and then this change will matter.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 arch/powerpc/kvm/book3s_64_vio.c    | 2 +-
 arch/powerpc/kvm/book3s_64_vio_hv.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
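
For readers following the upcoming series: the guest-visible table (stt, struct kvmppc_spapr_tce_table) and the hardware IOMMU table (stit->tbl, struct iommu_table) each carry their own page shift, and the ioba argument of H_STUFF_TCE has to be converted into an index into the guest-visible table. Below is a minimal, hypothetical user-space sketch of that conversion; it is not kernel code, and all variable names are illustrative only.

#include <stdio.h>

int main(void)
{
	/* Illustrative values only, not taken from the patch */
	unsigned long ioba = 0x30000;        /* I/O bus address passed to H_STUFF_TCE */
	unsigned int guest_page_shift = 16;  /* 64K guest IOMMU pages, i.e. stt->page_shift */
	unsigned int hw_page_shift = 12;     /* 4K hardware IOMMU pages, i.e. it_page_shift */

	/* Index into the guest-visible TCE table: one slot per guest IOMMU page */
	unsigned long entry = ioba >> guest_page_shift;

	/* Once the shifts may differ, one guest entry covers several hardware TCEs */
	unsigned long subpages = 1UL << (guest_page_shift - hw_page_shift);

	printf("guest entry %lu maps hardware entries %lu..%lu\n",
	       entry, entry * subpages, entry * subpages + subpages - 1);
	return 0;
}

While the two shifts are equal, ioba >> stt->page_shift and ioba >> stit->tbl->it_page_shift produce the same index, which is why this patch changes no behavior today; once a 64K guest page may be backed by 4K hardware pages, only the guest-visible shift yields the correct entry.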

Comments

David Gibson May 2, 2018, 5:41 a.m. UTC | #1
On Wed, May 02, 2018 at 02:07:22PM +1000, Alexey Kardashevskiy wrote:
> The other TCE handlers use the page shift from the guest-visible TCE table
> (described by struct kvmppc_spapr_tce_table), so let's make the H_STUFF_TCE
> handlers do the same thing.
> 
> This should cause no behavioral change now, but soon we will allow
> iommu_table::it_page_shift to differ from the emulated table's page size,
> and then this change will matter.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  arch/powerpc/kvm/book3s_64_vio.c    | 2 +-
>  arch/powerpc/kvm/book3s_64_vio_hv.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
> index 4dffa61..041e54d 100644
> --- a/arch/powerpc/kvm/book3s_64_vio.c
> +++ b/arch/powerpc/kvm/book3s_64_vio.c
> @@ -615,7 +615,7 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
>  		return H_PARAMETER;
>  
>  	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
> -		unsigned long entry = ioba >> stit->tbl->it_page_shift;
> +		unsigned long entry = ioba >> stt->page_shift;
>  
>  		for (i = 0; i < npages; ++i) {
>  			ret = kvmppc_tce_iommu_unmap(vcpu->kvm,
> diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
> index 6651f73..e220fab 100644
> --- a/arch/powerpc/kvm/book3s_64_vio_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
> @@ -526,7 +526,7 @@ long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu,
>  		return H_PARAMETER;
>  
>  	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
> -		unsigned long entry = ioba >> stit->tbl->it_page_shift;
> +		unsigned long entry = ioba >> stt->page_shift;
>  
>  		for (i = 0; i < npages; ++i) {
>  			ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm,

Patch

diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
index 4dffa61..041e54d 100644
--- a/arch/powerpc/kvm/book3s_64_vio.c
+++ b/arch/powerpc/kvm/book3s_64_vio.c
@@ -615,7 +615,7 @@  long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
 		return H_PARAMETER;
 
 	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
-		unsigned long entry = ioba >> stit->tbl->it_page_shift;
+		unsigned long entry = ioba >> stt->page_shift;
 
 		for (i = 0; i < npages; ++i) {
 			ret = kvmppc_tce_iommu_unmap(vcpu->kvm,
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
index 6651f73..e220fab 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -526,7 +526,7 @@  long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu,
 		return H_PARAMETER;
 
 	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
-		unsigned long entry = ioba >> stit->tbl->it_page_shift;
+		unsigned long entry = ioba >> stt->page_shift;
 
 		for (i = 0; i < npages; ++i) {
 			ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm,