
[2/2] drm/tegra: Acquire a reference to the IOVA cache

Message ID 20180423065745.26102-2-thierry.reding@gmail.com
State Deferred
Series [1/2] drm/tegra: Fix order of teardown in IOMMU case

Commit Message

Thierry Reding April 23, 2018, 6:57 a.m. UTC
From: Thierry Reding <treding@nvidia.com>

The IOVA API uses a memory cache to allocate IOVA nodes from. To make
sure that this cache is available, obtain a reference to it and release
the reference when the cache is no longer needed.

On 64-bit ARM this is hidden by the fact that the DMA mapping API gets
that reference and never releases it. On 32-bit ARM, however, the DMA
mapping API doesn't do that, so allocation of IOVA nodes fails.

Signed-off-by: Thierry Reding <treding@nvidia.com>
---
 drivers/gpu/drm/tegra/drm.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
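
For context, the IOVA cache API pattern this commit relies on looks roughly
like the minimal sketch below (illustrative only; the function name and
parameters are made up here and are not taken from the driver):

#include <linux/errno.h>
#include <linux/iova.h>

/* Illustrative sketch of the IOVA cache reference pattern. */
static int example_iova_usage(struct iova_domain *iovad, unsigned long limit_pfn)
{
	struct iova *iova;
	int err;

	/* Make sure the kmem_cache backing struct iova allocations exists. */
	err = iova_cache_get();
	if (err < 0)
		return err;

	/*
	 * Without the reference above the cache may not exist and this
	 * allocation fails, which is what happens on 32-bit ARM.
	 */
	iova = alloc_iova(iovad, 1, limit_pfn, true);
	if (!iova) {
		iova_cache_put();
		return -ENOMEM;
	}

	__free_iova(iovad, iova);

	/* Drop the reference once no more IOVA nodes will be allocated. */
	iova_cache_put();
	return 0;
}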

Comments

Dmitry Osipenko April 23, 2018, 8:34 a.m. UTC | #1
On 23.04.2018 09:57, Thierry Reding wrote:
> From: Thierry Reding <treding@nvidia.com>
> 
> The IOVA API uses a memory cache to allocate IOVA nodes from. To make
> sure that this cache is available, obtain a reference to it and release
> the reference when the cache is no longer needed.
> 
> On 64-bit ARM this is hidden by the fact that the DMA mapping API gets
> that reference and never releases it. On 32-bit ARM, however, the DMA
> mapping API doesn't do that, so allocation of IOVA nodes fails.
> 
> Signed-off-by: Thierry Reding <treding@nvidia.com>
> ---

Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>

CONFIG_TEGRA_IOMMU_SMMU is enabled in the default kernel configs, and hence DRM
should be failing to probe on t124 since v4.11. What about adding a stable tag
for v4.11+ here to unbreak stable kernels as well?
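
For example, something like:

	Cc: stable@vger.kernel.org # v4.11+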

>  drivers/gpu/drm/tegra/drm.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
> index 4a696fa274a3..0540b0741df6 100644
> --- a/drivers/gpu/drm/tegra/drm.c
> +++ b/drivers/gpu/drm/tegra/drm.c
> @@ -115,6 +115,10 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
>  			goto free;
>  		}
>  
> +		err = iova_cache_get();
> +		if (err < 0)
> +			goto domain;
> +

Host1x also uses alloc_iova(), though this allocation is actually invoked by the
DRM driver when requesting a channel. It is fine right now because the DRM driver
is the only host1x user, but what about adding iova_cache_get() to the host1x
driver as well, for consistency?
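
Something along these lines, perhaps (purely a hypothetical sketch of the idea,
not actual host1x code; the function names and parameters are made up):

#include <linux/iova.h>

/* Hypothetical sketch only -- not the actual host1x patch. */
static int host1x_iova_init_sketch(struct iova_domain *iovad,
				   unsigned long granule,
				   unsigned long start_pfn)
{
	int err;

	/* Pin the IOVA kmem_cache before host1x allocates any IOVA nodes. */
	err = iova_cache_get();
	if (err < 0)
		return err;

	init_iova_domain(iovad, granule, start_pfn);
	return 0;
}

static void host1x_iova_exit_sketch(struct iova_domain *iovad)
{
	put_iova_domain(iovad);

	/* Balance the reference taken in host1x_iova_init_sketch(). */
	iova_cache_put();
}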

>  		geometry = &tegra->domain->geometry;
>  		gem_start = geometry->aperture_start;
>  		gem_end = geometry->aperture_end - CARVEOUT_SZ;
> @@ -205,11 +209,12 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
>  	tegra_drm_fb_free(drm);
>  config:
>  	drm_mode_config_cleanup(drm);
> -
> +domain:
>  	if (tegra->domain) {
>  		mutex_destroy(&tegra->mm_lock);
>  		drm_mm_takedown(&tegra->mm);
>  		put_iova_domain(&tegra->carveout.domain);
> +		iova_cache_put();
>  		iommu_domain_free(tegra->domain);
>  	}
>  free:
> @@ -236,6 +241,7 @@ static void tegra_drm_unload(struct drm_device *drm)
>  		mutex_destroy(&tegra->mm_lock);
>  		drm_mm_takedown(&tegra->mm);
>  		put_iova_domain(&tegra->carveout.domain);
> +		iova_cache_put();
>  		iommu_domain_free(tegra->domain);
>  	}
>  
> 

Dmitry Osipenko April 23, 2018, 8:41 a.m. UTC | #2
On 23.04.2018 11:34, Dmitry Osipenko wrote:
> On 23.04.2018 09:57, Thierry Reding wrote:
>> From: Thierry Reding <treding@nvidia.com>
>>
>> The IOVA API uses a memory cache to allocate IOVA nodes from. To make
>> sure that this cache is available, obtain a reference to it and release
>> the reference when the cache is no longer needed.
>>
>> On 64-bit ARM this is hidden by the fact that the DMA mapping API gets
>> that reference and never releases it. On 32-bit ARM, however, the DMA
>> mapping API doesn't do that, so allocation of IOVA nodes fails.
>>
>> Signed-off-by: Thierry Reding <treding@nvidia.com>
>> ---
> 
> Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
> Tested-by: Dmitry Osipenko <digetx@gmail.com>
> 
> CONFIG_TEGRA_IOMMU_SMMU is enabled in the default kernel configs, and hence DRM
> should be failing to probe on t124 since v4.11. What about adding a stable tag
> for v4.11+ here to unbreak stable kernels as well?

The IOMMU node for host1x was added to the t124 DT in kernel v4.14, so s/4.11/4.14/.
Dmitry Osipenko April 23, 2018, 8:43 a.m. UTC | #3
On 23.04.2018 11:41, Dmitry Osipenko wrote:
> On 23.04.2018 11:34, Dmitry Osipenko wrote:
>> On 23.04.2018 09:57, Thierry Reding wrote:
>>> From: Thierry Reding <treding@nvidia.com>
>>>
>>> The IOVA API uses a memory cache to allocate IOVA nodes from. To make
>>> sure that this cache is available, obtain a reference to it and release
>>> the reference when the cache is no longer needed.
>>>
>>> On 64-bit ARM this is hidden by the fact that the DMA mapping API gets
>>> that reference and never releases it. On 32-bit ARM, however, the DMA
>>> mapping API doesn't do that, so allocation of IOVA nodes fails.
>>>
>>> Signed-off-by: Thierry Reding <treding@nvidia.com>
>>> ---
>>
>> Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
>> Tested-by: Dmitry Osipenko <digetx@gmail.com>
>>
>> CONFIG_TEGRA_IOMMU_SMMU is enabled in the default kernel configs, and hence DRM
>> should be failing to probe on t124 since v4.11. What about adding a stable tag
>> for v4.11+ here to unbreak stable kernels as well?
> 
> The IOMMU node for host1x was added to the t124 DT in kernel v4.14, so s/4.11/4.14/.

On the other hand, nothing stops anyone from using a newer DT with an older kernel.
Thierry Reding May 14, 2018, 9:02 a.m. UTC | #4
On Mon, Apr 23, 2018 at 11:43:16AM +0300, Dmitry Osipenko wrote:
> On 23.04.2018 11:41, Dmitry Osipenko wrote:
> > On 23.04.2018 11:34, Dmitry Osipenko wrote:
> >> On 23.04.2018 09:57, Thierry Reding wrote:
> >>> From: Thierry Reding <treding@nvidia.com>
> >>>
> >>> The IOVA API uses a memory cache to allocate IOVA nodes from. To make
> >>> sure that this cache is available, obtain a reference to it and release
> >>> the reference when the cache is no longer needed.
> >>>
> >>> On 64-bit ARM this is hidden by the fact that the DMA mapping API gets
> >>> that reference and never releases it. On 32-bit ARM, however, the DMA
> >>> mapping API doesn't do that, so allocation of IOVA nodes fails.
> >>>
> >>> Signed-off-by: Thierry Reding <treding@nvidia.com>
> >>> ---
> >>
> >> Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
> >> Tested-by: Dmitry Osipenko <digetx@gmail.com>
> >>
> >> CONFIG_TEGRA_IOMMU_SMMU is enabled in the default kernel configs, and hence DRM
> >> should be failing to probe on t124 since v4.11. What about adding a stable tag
> >> for v4.11+ here to unbreak stable kernels as well?
> > 
> > The IOMMU node for host1x was added to the t124 DT in kernel v4.14, so s/4.11/4.14/.
> 
> On the other hand, nothing stops anyone from using a newer DT with an older kernel.

I've applied this and added:

	Fixes: ad92601521ea ("drm/tegra: Add Tegra DRM allocation API")

since that's the commit that introduced the IOVA API usage. It seems
like we also need a fix in drivers/gpu/host1x to grab a reference to
the IOVA cache, because the host1x driver also makes use of it. It
looks as if this patch currently papers over that bug, and there's
very little chance that anyone will use the host1x driver without the
Tegra DRM driver. However, it's probably best to still fix it to avoid
future exposure.

I'll go type that patch up now.

Thierry
Dmitry Osipenko May 17, 2018, 10:53 a.m. UTC | #5
On 23.04.2018 09:57, Thierry Reding wrote:
> From: Thierry Reding <treding@nvidia.com>
> 
> The IOVA API uses a memory cache to allocate IOVA nodes from. To make
> sure that this cache is available, obtain a reference to it and release
> the reference when the cache is no longer needed.
> 
> On 64-bit ARM this is hidden by the fact that the DMA mapping API gets
> that reference and never releases it. On 32-bit ARM, however, the DMA
> mapping API doesn't do that, so allocation of IOVA nodes fails.
> 
> Signed-off-by: Thierry Reding <treding@nvidia.com>
> ---
>  drivers/gpu/drm/tegra/drm.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
> index 4a696fa274a3..0540b0741df6 100644
> --- a/drivers/gpu/drm/tegra/drm.c
> +++ b/drivers/gpu/drm/tegra/drm.c
> @@ -115,6 +115,10 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
>  			goto free;
>  		}
>  
> +		err = iova_cache_get();
> +		if (err < 0)
> +			goto domain;
> +
>  		geometry = &tegra->domain->geometry;
>  		gem_start = geometry->aperture_start;
>  		gem_end = geometry->aperture_end - CARVEOUT_SZ;
> @@ -205,11 +209,12 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
>  	tegra_drm_fb_free(drm);
>  config:
>  	drm_mode_config_cleanup(drm);
> -
> +domain:
>  	if (tegra->domain) {
>  		mutex_destroy(&tegra->mm_lock);
>  		drm_mm_takedown(&tegra->mm);
>  		put_iova_domain(&tegra->carveout.domain);
> +		iova_cache_put();

I've spotted that this ^ is incorrect: it will put the IOVA cache without
having gotten it if iova_cache_get() failed.

>  		iommu_domain_free(tegra->domain);
>  	}
>  free:
> @@ -236,6 +241,7 @@ static void tegra_drm_unload(struct drm_device *drm)
>  		mutex_destroy(&tegra->mm_lock);
>  		drm_mm_takedown(&tegra->mm);
>  		put_iova_domain(&tegra->carveout.domain);
> +		iova_cache_put();
>  		iommu_domain_free(tegra->domain);
>  	}
>  
> 

Thierry, please update the patch:

diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
index 4a59ac4d7793..26ce98479fb6 100644
--- a/drivers/gpu/drm/tegra/drm.c
+++ b/drivers/gpu/drm/tegra/drm.c
@@ -98,6 +98,10 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
 			goto free;
 		}

+		err = iova_cache_get();
+		if (err < 0)
+			goto domain;
+
 		geometry = &tegra->domain->geometry;
 		gem_start = geometry->aperture_start;
 		gem_end = geometry->aperture_end - CARVEOUT_SZ;
@@ -194,8 +198,11 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
 		mutex_destroy(&tegra->mm_lock);
 		drm_mm_takedown(&tegra->mm);
 		put_iova_domain(&tegra->carveout.domain);
-		iommu_domain_free(tegra->domain);
+		iova_cache_put();
 	}
+domain:
+	if (tegra->domain)
+		iommu_domain_free(tegra->domain);
 free:
 	kfree(tegra);
 	return err;
@@ -220,6 +227,7 @@ static void tegra_drm_unload(struct drm_device *drm)
 		mutex_destroy(&tegra->mm_lock);
 		drm_mm_takedown(&tegra->mm);
 		put_iova_domain(&tegra->carveout.domain);
+		iova_cache_put();
 		iommu_domain_free(tegra->domain);
 	}

Thierry Reding May 17, 2018, 12:09 p.m. UTC | #6
On Thu, May 17, 2018 at 01:53:23PM +0300, Dmitry Osipenko wrote:
> On 23.04.2018 09:57, Thierry Reding wrote:
> > From: Thierry Reding <treding@nvidia.com>
> > 
> > The IOVA API uses a memory cache to allocate IOVA nodes from. To make
> > sure that this cache is available, obtain a reference to it and release
> > the reference when the cache is no longer needed.
> > 
> > On 64-bit ARM this is hidden by the fact that the DMA mapping API gets
> > that reference and never releases it. On 32-bit ARM, however, the DMA
> > mapping API doesn't do that, so allocation of IOVA nodes fails.
> > 
> > Signed-off-by: Thierry Reding <treding@nvidia.com>
> > ---
> >  drivers/gpu/drm/tegra/drm.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
> > index 4a696fa274a3..0540b0741df6 100644
> > --- a/drivers/gpu/drm/tegra/drm.c
> > +++ b/drivers/gpu/drm/tegra/drm.c
> > @@ -115,6 +115,10 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
> >  			goto free;
> >  		}
> >  
> > +		err = iova_cache_get();
> > +		if (err < 0)
> > +			goto domain;
> > +
> >  		geometry = &tegra->domain->geometry;
> >  		gem_start = geometry->aperture_start;
> >  		gem_end = geometry->aperture_end - CARVEOUT_SZ;
> > @@ -205,11 +209,12 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
> >  	tegra_drm_fb_free(drm);
> >  config:
> >  	drm_mode_config_cleanup(drm);
> > -
> > +domain:
> >  	if (tegra->domain) {
> >  		mutex_destroy(&tegra->mm_lock);
> >  		drm_mm_takedown(&tegra->mm);
> >  		put_iova_domain(&tegra->carveout.domain);
> > +		iova_cache_put();
> 
> I've spotted that this ^ is incorrect. This will put the iova_cache without
> getting it if iova_cache_get() failed.

Good catch, updated the patch with your suggestion.

Thanks,
Thierry

Patch

diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
index 4a696fa274a3..0540b0741df6 100644
--- a/drivers/gpu/drm/tegra/drm.c
+++ b/drivers/gpu/drm/tegra/drm.c
@@ -115,6 +115,10 @@  static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
 			goto free;
 		}
 
+		err = iova_cache_get();
+		if (err < 0)
+			goto domain;
+
 		geometry = &tegra->domain->geometry;
 		gem_start = geometry->aperture_start;
 		gem_end = geometry->aperture_end - CARVEOUT_SZ;
@@ -205,11 +209,12 @@  static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
 	tegra_drm_fb_free(drm);
 config:
 	drm_mode_config_cleanup(drm);
-
+domain:
 	if (tegra->domain) {
 		mutex_destroy(&tegra->mm_lock);
 		drm_mm_takedown(&tegra->mm);
 		put_iova_domain(&tegra->carveout.domain);
+		iova_cache_put();
 		iommu_domain_free(tegra->domain);
 	}
 free:
@@ -236,6 +241,7 @@  static void tegra_drm_unload(struct drm_device *drm)
 		mutex_destroy(&tegra->mm_lock);
 		drm_mm_takedown(&tegra->mm);
 		put_iova_domain(&tegra->carveout.domain);
+		iova_cache_put();
 		iommu_domain_free(tegra->domain);
 	}