diff mbox series

[v8,1/2] drm/gem: Properly annotate WW context on drm_gem_lock_reservations() error

Message ID 20220701090240.1896131-2-dmitry.osipenko@collabora.com
State Not Applicable
Series DRM GEM fixes

Commit Message

Dmitry Osipenko July 1, 2022, 9:02 a.m. UTC
Use ww_acquire_fini() in the error code paths. Otherwise lockdep
thinks that the lock is still held when the lock's memory is freed after a
drm_gem_lock_reservations() error. The ww_acquire_context needs to be
annotated as "released", which fixes the noisy "WARNING: held lock freed!"
splat seen with the VirtIO-GPU driver when CONFIG_DEBUG_MUTEXES=y and lockdep are enabled.

Cc: stable@vger.kernel.org
Fixes: 7edc3e3b975b5 ("drm: Add helpers for locking an array of BO reservations.")
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Christian König July 5, 2022, 11:33 a.m. UTC | #1
Am 01.07.22 um 11:02 schrieb Dmitry Osipenko:
> Use ww_acquire_fini() in the error code paths. Otherwise lockdep
> thinks that lock is held when lock's memory is freed after the
> drm_gem_lock_reservations() error. The ww_acquire_context needs to be
> annotated as "released", which fixes the noisy "WARNING: held lock freed!"
> splat of VirtIO-GPU driver with CONFIG_DEBUG_MUTEXES=y and enabled lockdep.
>
> Cc: stable@vger.kernel.org
> Fixes: 7edc3e3b975b5 ("drm: Add helpers for locking an array of BO reservations.")
> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/gpu/drm/drm_gem.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index eb0c2d041f13..86d670c71286 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1226,7 +1226,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
>   		ret = dma_resv_lock_slow_interruptible(obj->resv,
>   								 acquire_ctx);
>   		if (ret) {
> -			ww_acquire_done(acquire_ctx);
> +			ww_acquire_fini(acquire_ctx);
>   			return ret;
>   		}
>   	}
> @@ -1251,7 +1251,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
>   				goto retry;
>   			}
>   
> -			ww_acquire_done(acquire_ctx);
> +			ww_acquire_fini(acquire_ctx);
>   			return ret;
>   		}
>   	}
Daniel Vetter Aug. 9, 2022, 4:44 p.m. UTC | #2
On Tue, Jul 05, 2022 at 01:33:51PM +0200, Christian König wrote:
> Am 01.07.22 um 11:02 schrieb Dmitry Osipenko:
> > Use ww_acquire_fini() in the error code paths. Otherwise lockdep
> > thinks that lock is held when lock's memory is freed after the
> > drm_gem_lock_reservations() error. The ww_acquire_context needs to be
> > annotated as "released", which fixes the noisy "WARNING: held lock freed!"
> > splat of VirtIO-GPU driver with CONFIG_DEBUG_MUTEXES=y and enabled lockdep.
> > 
> > Cc: stable@vger.kernel.org
> > Fixes: 7edc3e3b975b5 ("drm: Add helpers for locking an array of BO reservations.")
> > Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> 
> Reviewed-by: Christian König <christian.koenig@amd.com>

Also added this r-b tag when merging to drm-misc-next-fixes.
-Daniel

> 
> > ---
> >   drivers/gpu/drm/drm_gem.c | 4 ++--
> >   1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > index eb0c2d041f13..86d670c71286 100644
> > --- a/drivers/gpu/drm/drm_gem.c
> > +++ b/drivers/gpu/drm/drm_gem.c
> > @@ -1226,7 +1226,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
> >   		ret = dma_resv_lock_slow_interruptible(obj->resv,
> >   								 acquire_ctx);
> >   		if (ret) {
> > -			ww_acquire_done(acquire_ctx);
> > +			ww_acquire_fini(acquire_ctx);
> >   			return ret;
> >   		}
> >   	}
> > @@ -1251,7 +1251,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
> >   				goto retry;
> >   			}
> > -			ww_acquire_done(acquire_ctx);
> > +			ww_acquire_fini(acquire_ctx);
> >   			return ret;
> >   		}
> >   	}
>
Christian König Aug. 10, 2022, 6:52 a.m. UTC | #3
Am 09.08.22 um 18:44 schrieb Daniel Vetter:
> On Tue, Jul 05, 2022 at 01:33:51PM +0200, Christian König wrote:
>> Am 01.07.22 um 11:02 schrieb Dmitry Osipenko:
>>> Use ww_acquire_fini() in the error code paths. Otherwise lockdep
>>> thinks that lock is held when lock's memory is freed after the
>>> drm_gem_lock_reservations() error. The ww_acquire_context needs to be
>>> annotated as "released", which fixes the noisy "WARNING: held lock freed!"
>>> splat of VirtIO-GPU driver with CONFIG_DEBUG_MUTEXES=y and enabled lockdep.
>>>
>>> Cc: stable@vger.kernel.org
>>> Fixes: 7edc3e3b975b5 ("drm: Add helpers for locking an array of BO reservations.")
>>> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>> Reviewed-by: Christian König <christian.koenig@amd.com>
> Also added this r-b tag when merging to drm-misc-next-fixes.

IIRC I've already pushed this to drm-misc-fixes with a CC stable tag 
about 2 weeks ago.

Please double check, it probably just hasn't come down the stream again yet.

Christian.

> -Daniel
>
>>> ---
>>>    drivers/gpu/drm/drm_gem.c | 4 ++--
>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>> index eb0c2d041f13..86d670c71286 100644
>>> --- a/drivers/gpu/drm/drm_gem.c
>>> +++ b/drivers/gpu/drm/drm_gem.c
>>> @@ -1226,7 +1226,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
>>>    		ret = dma_resv_lock_slow_interruptible(obj->resv,
>>>    								 acquire_ctx);
>>>    		if (ret) {
>>> -			ww_acquire_done(acquire_ctx);
>>> +			ww_acquire_fini(acquire_ctx);
>>>    			return ret;
>>>    		}
>>>    	}
>>> @@ -1251,7 +1251,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
>>>    				goto retry;
>>>    			}
>>> -			ww_acquire_done(acquire_ctx);
>>> +			ww_acquire_fini(acquire_ctx);
>>>    			return ret;
>>>    		}
>>>    	}
Daniel Vetter Aug. 10, 2022, 8:33 a.m. UTC | #4
On Wed, 10 Aug 2022 at 08:52, Christian König <christian.koenig@amd.com> wrote:
>
> Am 09.08.22 um 18:44 schrieb Daniel Vetter:
> > On Tue, Jul 05, 2022 at 01:33:51PM +0200, Christian König wrote:
> >> Am 01.07.22 um 11:02 schrieb Dmitry Osipenko:
> >>> Use ww_acquire_fini() in the error code paths. Otherwise lockdep
> >>> thinks that lock is held when lock's memory is freed after the
> >>> drm_gem_lock_reservations() error. The ww_acquire_context needs to be
> >>> annotated as "released", which fixes the noisy "WARNING: held lock freed!"
> >>> splat of VirtIO-GPU driver with CONFIG_DEBUG_MUTEXES=y and enabled lockdep.
> >>>
> >>> Cc: stable@vger.kernel.org
> >>> Fixes: 7edc3e3b975b5 ("drm: Add helpers for locking an array of BO reservations.")
> >>> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >> Reviewed-by: Christian König <christian.koenig@amd.com>
> > Also added this r-b tag when merging to drm-misc-next-fixes.
>
> IIRC I've already pushed this to drm-misc-fixes with a CC stable tag
> about 2 weeks ago.
>
> Please double check, it probably just hasn't come down the stream again yet.

Hm, I had a quick check and didn't spot it? There are a few patches from
Dmitry in the last few pulls, and some more stuff pending, but not
these two afaics.
-Daniel

>
> Christian.
>
> > -Daniel
> >
> >>> ---
> >>>    drivers/gpu/drm/drm_gem.c | 4 ++--
> >>>    1 file changed, 2 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> >>> index eb0c2d041f13..86d670c71286 100644
> >>> --- a/drivers/gpu/drm/drm_gem.c
> >>> +++ b/drivers/gpu/drm/drm_gem.c
> >>> @@ -1226,7 +1226,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
> >>>             ret = dma_resv_lock_slow_interruptible(obj->resv,
> >>>                                                              acquire_ctx);
> >>>             if (ret) {
> >>> -                   ww_acquire_done(acquire_ctx);
> >>> +                   ww_acquire_fini(acquire_ctx);
> >>>                     return ret;
> >>>             }
> >>>     }
> >>> @@ -1251,7 +1251,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
> >>>                             goto retry;
> >>>                     }
> >>> -                   ww_acquire_done(acquire_ctx);
> >>> +                   ww_acquire_fini(acquire_ctx);
> >>>                     return ret;
> >>>             }
> >>>     }
>
Christian König Aug. 10, 2022, 9:04 a.m. UTC | #5
Am 10.08.22 um 10:33 schrieb Daniel Vetter:
> On Wed, 10 Aug 2022 at 08:52, Christian König <christian.koenig@amd.com> wrote:
>> Am 09.08.22 um 18:44 schrieb Daniel Vetter:
>>> On Tue, Jul 05, 2022 at 01:33:51PM +0200, Christian König wrote:
>>>> Am 01.07.22 um 11:02 schrieb Dmitry Osipenko:
>>>>> Use ww_acquire_fini() in the error code paths. Otherwise lockdep
>>>>> thinks that lock is held when lock's memory is freed after the
>>>>> drm_gem_lock_reservations() error. The ww_acquire_context needs to be
>>>>> annotated as "released", which fixes the noisy "WARNING: held lock freed!"
>>>>> splat of VirtIO-GPU driver with CONFIG_DEBUG_MUTEXES=y and enabled lockdep.
>>>>>
>>>>> Cc: stable@vger.kernel.org
>>>>> Fixes: 7edc3e3b975b5 ("drm: Add helpers for locking an array of BO reservations.")
>>>>> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>> Also added this r-b tag when merging to drm-misc-next-fixes.
>> IIRC I've already pushed this to drm-misc-fixes with a CC stable tag
>> about 2 weeks ago.
>>
>> Please double check, it probably just hasn't come down the stream again yet.
> Hm quickly check and I didn't spot it? There's a few patches from
> Dmitry in the last few pulls, and some more stuff pending, but not
> these two afaics?

Mhm, it's possible that I meant to push it but got distracted
by the recurring drm-tip build breakages.

Anyway what I wanted to say is that this stuff should probably go to 
drm-misc-fixes with a CC: stable tag :)

Christian.

> -Daniel
>
>> Christian.
>>
>>> -Daniel
>>>
>>>>> ---
>>>>>     drivers/gpu/drm/drm_gem.c | 4 ++--
>>>>>     1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>>>> index eb0c2d041f13..86d670c71286 100644
>>>>> --- a/drivers/gpu/drm/drm_gem.c
>>>>> +++ b/drivers/gpu/drm/drm_gem.c
>>>>> @@ -1226,7 +1226,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
>>>>>              ret = dma_resv_lock_slow_interruptible(obj->resv,
>>>>>                                                               acquire_ctx);
>>>>>              if (ret) {
>>>>> -                   ww_acquire_done(acquire_ctx);
>>>>> +                   ww_acquire_fini(acquire_ctx);
>>>>>                      return ret;
>>>>>              }
>>>>>      }
>>>>> @@ -1251,7 +1251,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
>>>>>                              goto retry;
>>>>>                      }
>>>>> -                   ww_acquire_done(acquire_ctx);
>>>>> +                   ww_acquire_fini(acquire_ctx);
>>>>>                      return ret;
>>>>>              }
>>>>>      }
>

Patch

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index eb0c2d041f13..86d670c71286 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1226,7 +1226,7 @@  drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
 		ret = dma_resv_lock_slow_interruptible(obj->resv,
 								 acquire_ctx);
 		if (ret) {
-			ww_acquire_done(acquire_ctx);
+			ww_acquire_fini(acquire_ctx);
 			return ret;
 		}
 	}
@@ -1251,7 +1251,7 @@  drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
 				goto retry;
 			}
 
-			ww_acquire_done(acquire_ctx);
+			ww_acquire_fini(acquire_ctx);
 			return ret;
 		}
 	}