From patchwork Sun May 21 20:51:07 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 1784216
From: Dmitry Osipenko
Subject: [PATCH v3 1/6] media: videobuf2: Don't assert held reservation lock for dma-buf mmapping
Date: Sun, 21 May 2023 23:51:07 +0300
Message-Id: <20230521205112.150206-2-dmitry.osipenko@collabora.com>
In-Reply-To: <20230521205112.150206-1-dmitry.osipenko@collabora.com>
References: <20230521205112.150206-1-dmitry.osipenko@collabora.com>

Don't assert that the dma-buf reservation lock is held when memory-mapping
an exported buffer. We're going to change the dma-buf mmap() locking policy
such that exporters will have to handle the lock themselves. The previous
locking policy caused a deadlock for DRM drivers in the case of
self-imported dma-bufs once these drivers moved to using the reservation
lock universally. The problem is solved by moving the lock down to the
exporters. This patch prepares videobuf2 for the locking policy update.

Reviewed-by: Emil Velikov
Reviewed-by: Hans Verkuil
Signed-off-by: Dmitry Osipenko
---
 drivers/media/common/videobuf2/videobuf2-dma-contig.c | 3 ---
 drivers/media/common/videobuf2/videobuf2-dma-sg.c     | 3 ---
 drivers/media/common/videobuf2/videobuf2-vmalloc.c    | 3 ---
 3 files changed, 9 deletions(-)

diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index 205d3cac425c..2fa455d4a048 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -11,7 +11,6 @@
  */

 #include <...>
-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -456,8 +455,6 @@ static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct iosys_map *map)
 static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
 				  struct vm_area_struct *vma)
 {
-	dma_resv_assert_held(dbuf->resv);
-
 	return vb2_dc_mmap(dbuf->priv, vma);
 }

diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index 183037fb1273..28f3fdfe23a2 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -10,7 +10,6 @@
  * the Free Software Foundation.
  */

-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -498,8 +497,6 @@ static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf,
 static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
 				      struct vm_area_struct *vma)
 {
-	dma_resv_assert_held(dbuf->resv);
-
 	return vb2_dma_sg_mmap(dbuf->priv, vma);
 }

diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index a6c6d2fcaaa4..7c635e292106 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -10,7 +10,6 @@
  * the Free Software Foundation.
  */

-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -319,8 +318,6 @@ static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf,
 static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
 				       struct vm_area_struct *vma)
 {
-	dma_resv_assert_held(dbuf->resv);
-
 	return vb2_vmalloc_mmap(dbuf->priv, vma);
 }
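The deadlock motivating these preparation patches is easiest to see as a call
chain. A simplified sketch, based on the pre-series behaviour of
drivers/dma-buf/dma-buf.c (the driver-side function at the end is hypothetical):

    /*
     * Old policy: the dma-buf core wrapped the exporter's mmap() callback
     * in the reservation lock.
     */
    dma_buf_mmap_internal()                    /* or dma_buf_mmap() */
        dma_resv_lock(dmabuf->resv, NULL);     /* core takes the resv lock */
        dmabuf->ops->mmap(dmabuf, vma);        /* exporter callback runs locked */
        dma_resv_unlock(dmabuf->resv);

    /*
     * For a self-imported dma-buf, ops->mmap() lands back in the DRM
     * driver's own GEM mmap path. Once that path uses the reservation
     * lock universally, it tries to lock the same resv object the core
     * already holds (a self-imported dma-buf shares the GEM object's resv):
     */
    hypothetical_drm_gem_mmap(obj, vma)
        dma_resv_lock(obj->resv, NULL);        /* deadlock: already held */

Moving the locking down into the exporters lets each exporter decide where,
and whether, the lock is needed on its mmap path.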
From patchwork Sun May 21 20:51:08 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 1784217
From: Dmitry Osipenko
Subject: [PATCH v3 2/6] dma-buf/heaps: Don't assert held reservation lock for dma-buf mmapping
Date: Sun, 21 May 2023 23:51:08 +0300
Message-Id: <20230521205112.150206-3-dmitry.osipenko@collabora.com>
In-Reply-To: <20230521205112.150206-1-dmitry.osipenko@collabora.com>
References: <20230521205112.150206-1-dmitry.osipenko@collabora.com>

Don't assert that the dma-buf reservation lock is held when memory-mapping
an exported buffer. We're going to change the dma-buf mmap() locking policy
such that exporters will have to handle the lock themselves. The previous
locking policy caused a deadlock for DRM drivers in the case of
self-imported dma-bufs once these drivers moved to using the reservation
lock universally. The problem is solved by moving the lock down to the
exporters. This patch prepares dma-buf heaps for the locking policy update.

Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/dma-buf/heaps/cma_heap.c    | 3 ---
 drivers/dma-buf/heaps/system_heap.c | 3 ---
 2 files changed, 6 deletions(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index a7f048048864..ee899f8e6721 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -13,7 +13,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -183,8 +182,6 @@ static int cma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 {
 	struct cma_heap_buffer *buffer = dmabuf->priv;

-	dma_resv_assert_held(dmabuf->resv);
-
 	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
 		return -EINVAL;

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index ee7059399e9c..9076d47ed2ef 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -13,7 +13,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -201,8 +200,6 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	struct sg_page_iter piter;
 	int ret;

-	dma_resv_assert_held(dmabuf->resv);
-
 	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
 		struct page *page = sg_page_iter_page(&piter);
From patchwork Sun May 21 20:51:09 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 1784218
From: Dmitry Osipenko
Subject: [PATCH v3 3/6] udmabuf: Don't assert held reservation lock for dma-buf mmapping
Date: Sun, 21 May 2023 23:51:09 +0300
Message-Id: <20230521205112.150206-4-dmitry.osipenko@collabora.com>
In-Reply-To: <20230521205112.150206-1-dmitry.osipenko@collabora.com>
References: <20230521205112.150206-1-dmitry.osipenko@collabora.com>

Don't assert that the dma-buf reservation lock is held when memory-mapping
an exported buffer. We're going to change the dma-buf mmap() locking policy
such that exporters will have to handle the lock themselves. The previous
locking policy caused a deadlock for DRM drivers in the case of
self-imported dma-bufs once these drivers moved to using the reservation
lock universally. The problem is solved by moving the lock down to the
exporters. This patch prepares udmabuf for the locking policy update.

Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/dma-buf/udmabuf.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 01f2e86f3f7c..06729cd60136 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -52,8 +52,6 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
 {
 	struct udmabuf *ubuf = buf->priv;

-	dma_resv_assert_held(buf->resv);
-
 	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
 		return -EINVAL;
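One detail worth keeping in mind while reading these removals:
dma_resv_assert_held() is a lockdep annotation, not a lock operation, so
deleting it changes no runtime locking behaviour; it only stops debug kernels
from warning once the core no longer holds the lock across mmap(). Simplified
from include/linux/dma-resv.h (the reservation object wraps a ww_mutex):

    /* Compiles away entirely on kernels without CONFIG_LOCKDEP. */
    #define dma_resv_assert_held(obj) lockdep_assert_held(&(obj)->lock.base)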
From patchwork Sun May 21 20:51:10 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 1784219
From: Dmitry Osipenko
Subject: [PATCH v3 4/6] drm: Don't assert held reservation lock for dma-buf mmapping
Date: Sun, 21 May 2023 23:51:10 +0300
Message-Id: <20230521205112.150206-5-dmitry.osipenko@collabora.com>
In-Reply-To: <20230521205112.150206-1-dmitry.osipenko@collabora.com>
References: <20230521205112.150206-1-dmitry.osipenko@collabora.com>

Don't assert that the dma-buf reservation lock is held when memory-mapping
an exported buffer. We're going to change the dma-buf mmap() locking policy
such that exporters will have to handle the lock themselves. The previous
locking policy caused a deadlock for DRM drivers in the case of
self-imported dma-bufs once these drivers moved to using the reservation
lock universally. The problem is solved by moving the lock down to the
exporters. This patch prepares DRM drivers for the locking policy update.

Reviewed-by: Emil Velikov
Acked-by: Christian König
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_prime.c                | 2 --
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 2 --
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c  | 2 --
 drivers/gpu/drm/tegra/gem.c                | 2 --
 4 files changed, 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index d29dafce9bb0..ccb2bfb29f14 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -785,8 +785,6 @@ int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
 	struct drm_gem_object *obj = dma_buf->priv;
 	struct drm_device *dev = obj->dev;

-	dma_resv_assert_held(dma_buf->resv);
-
 	if (!dev->driver->gem_prime_mmap)
 		return -ENOSYS;

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index fd556a076d05..1df74f7aa3dc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -97,8 +97,6 @@ static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	int ret;

-	dma_resv_assert_held(dma_buf->resv);
-
 	if (obj->base.size < vma->vm_end - vma->vm_start)
 		return -EINVAL;

diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
index 3abc47521b2c..8e194dbc9506 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
@@ -66,8 +66,6 @@ static int omap_gem_dmabuf_mmap(struct dma_buf *buffer,
 	struct drm_gem_object *obj = buffer->priv;
 	int ret = 0;

-	dma_resv_assert_held(buffer->resv);
-
 	ret = drm_gem_mmap_obj(obj, omap_gem_mmap_size(obj), vma);
 	if (ret < 0)
 		return ret;

diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index dea38892d6e6..a4023163493d 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -694,8 +694,6 @@ static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
 	struct drm_gem_object *gem = buf->priv;
 	int err;

-	dma_resv_assert_held(buf->resv);
-
 	err = drm_gem_mmap_obj(gem, gem->size, vma);
 	if (err < 0)
 		return err;
From patchwork Sun May 21 20:51:11 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 1784220
From: Dmitry Osipenko
Subject: [PATCH v3 5/6] dma-buf: Change locking policy for mmap()
Date: Sun, 21 May 2023 23:51:11 +0300
Message-Id: <20230521205112.150206-6-dmitry.osipenko@collabora.com>
In-Reply-To: <20230521205112.150206-1-dmitry.osipenko@collabora.com>
References: <20230521205112.150206-1-dmitry.osipenko@collabora.com>

Change the locking policy of the mmap() callback, making exporters
responsible for handling the dma-buf reservation locking. The previous
policy stated that the dma-buf is locked for both importers and exporters
by the dma-buf core, which caused a deadlock for DRM drivers in the case of
self-imported dma-bufs, where the lock also has to be taken on the DRM
exporter's side.

Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/dma-buf/dma-buf.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index aa4ea8530cb3..21916bba77d5 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -131,7 +131,6 @@ static struct file_system_type dma_buf_fs_type = {
 static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 {
 	struct dma_buf *dmabuf;
-	int ret;

 	if (!is_dma_buf_file(file))
 		return -EINVAL;
@@ -147,11 +146,7 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 	    dmabuf->size >> PAGE_SHIFT)
 		return -EINVAL;

-	dma_resv_lock(dmabuf->resv, NULL);
-	ret = dmabuf->ops->mmap(dmabuf, vma);
-	dma_resv_unlock(dmabuf->resv);
-
-	return ret;
+	return dmabuf->ops->mmap(dmabuf, vma);
 }

 static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -850,6 +845,7 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
  *   - &dma_buf_ops.release()
  *   - &dma_buf_ops.begin_cpu_access()
  *   - &dma_buf_ops.end_cpu_access()
+ *   - &dma_buf_ops.mmap()
  *
  * 2. These &dma_buf_ops callbacks are invoked with locked dma-buf
  *    reservation and exporter can't take the lock:
@@ -858,7 +854,6 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
  *   - &dma_buf_ops.unpin()
  *   - &dma_buf_ops.map_dma_buf()
  *   - &dma_buf_ops.unmap_dma_buf()
- *   - &dma_buf_ops.mmap()
  *   - &dma_buf_ops.vmap()
  *   - &dma_buf_ops.vunmap()
  *
@@ -1463,8 +1458,6 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF);
 int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 		 unsigned long pgoff)
 {
-	int ret;
-
 	if (WARN_ON(!dmabuf || !vma))
 		return -EINVAL;

@@ -1485,11 +1478,7 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 	vma_set_file(vma, dmabuf->file);
 	vma->vm_pgoff = pgoff;

-	dma_resv_lock(dmabuf->resv, NULL);
-	ret = dmabuf->ops->mmap(dmabuf, vma);
-	dma_resv_unlock(dmabuf->resv);
-
-	return ret;
+	return dmabuf->ops->mmap(dmabuf, vma);
 }
 EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF);
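Under the new policy, an exporter whose mmap() implementation actually
depends on the reservation lock must take it itself; exporters that never
needed it, such as the ones in patches 1-4, simply run unlocked. A minimal
sketch of the pattern, with hypothetical exporter helpers (drm_gem_shmem in
the next patch does essentially this for its page bookkeeping):

    static int example_exporter_mmap(struct dma_buf *dmabuf,
                                     struct vm_area_struct *vma)
    {
            struct example_buffer *buf = dmabuf->priv;  /* hypothetical */
            int ret;

            /*
             * The core no longer wraps this callback in dma_resv_lock() /
             * dma_resv_unlock(), so protect the exporter's own state
             * (page lists, use counts, ...) here if it needs it.
             */
            dma_resv_lock(dmabuf->resv, NULL);
            ret = example_prepare_pages(buf);           /* hypothetical */
            dma_resv_unlock(dmabuf->resv);
            if (ret)
                    return ret;

            /* The VMA setup itself can run unlocked. */
            return example_map_pages_to_vma(buf, vma);  /* hypothetical */
    }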
From patchwork Sun May 21 20:51:12 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 1784221
From: Dmitry Osipenko
Subject: [PATCH v3 6/6] drm/shmem-helper: Switch to reservation lock
Date: Sun, 21 May 2023 23:51:12 +0300
Message-Id: <20230521205112.150206-7-dmitry.osipenko@collabora.com>
In-Reply-To: <20230521205112.150206-1-dmitry.osipenko@collabora.com>
References: <20230521205112.150206-1-dmitry.osipenko@collabora.com>

Replace all drm-shmem locks with the GEM object's reservation lock. This
makes the locking consistent with the dma-buf locking convention, where
importers are responsible for holding the reservation lock for all
operations performed on dma-bufs, and prevents deadlocks between dma-buf
importers and exporters.

Suggested-by: Daniel Vetter
Acked-by: Thomas Zimmermann
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 208 ++++++++----------
 drivers/gpu/drm/lima/lima_gem.c               |   8 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c       |   7 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   6 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       |  19 +-
 include/drm/drm_gem_shmem_helper.h            |  14 +-
 6 files changed, 114 insertions(+), 148 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4ea6507a77e5..395942ca36fe 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;

-	mutex_init(&shmem->pages_lock);
-	mutex_init(&shmem->vmap_lock);
 	INIT_LIST_HEAD(&shmem->madv_list);

 	if (!private) {
@@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;

-	drm_WARN_ON(obj->dev, shmem->vmap_use_count);
-
 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
 	} else {
+		dma_resv_lock(shmem->base.resv, NULL);
+
+		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
+
 		if (shmem->sgt) {
 			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
 					  DMA_BIDIRECTIONAL, 0);
@@ -154,18 +154,18 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		}
 		if (shmem->pages)
 			drm_gem_shmem_put_pages(shmem);
-	}

-	drm_WARN_ON(obj->dev, shmem->pages_use_count);
+		drm_WARN_ON(obj->dev, shmem->pages_use_count);
+
+		dma_resv_unlock(shmem->base.resv);
+	}

 	drm_gem_object_release(obj);
-	mutex_destroy(&shmem->pages_lock);
-	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);

-static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
@@ -197,35 +197,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 }

 /*
- * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
+ * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
  *
- * This function makes sure that backing pages exists for the shmem GEM object
- * and increases the use count.
- *
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function decreases the use count and puts the backing pages when use drops to zero.
  */
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	int ret;
-
-	drm_WARN_ON(obj->dev, obj->import_attach);
-
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
-	if (ret)
-		return ret;
-	ret = drm_gem_shmem_get_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
-
-	return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_get_pages);

-static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
-{
-	struct drm_gem_object *obj = &shmem->base;
+	dma_resv_assert_held(shmem->base.resv);

 	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		return;
@@ -243,20 +224,25 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 					  shmem->pages_mark_accessed_on_put);
 	shmem->pages = NULL;
 }
+EXPORT_SYMBOL(drm_gem_shmem_put_pages);

-/*
- * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
- * @shmem: shmem GEM object
- *
- * This function decreases the use count and puts the backing pages when use drops to zero.
- */
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	ret = drm_gem_shmem_get_pages(shmem);
+
+	return ret;
+}
+
+static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
 {
-	mutex_lock(&shmem->pages_lock);
-	drm_gem_shmem_put_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_assert_held(shmem->base.resv);
+
+	drm_gem_shmem_put_pages(shmem);
 }
-EXPORT_SYMBOL(drm_gem_shmem_put_pages);

 /**
  * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
@@ -271,10 +257,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
+	int ret;

 	drm_WARN_ON(obj->dev, obj->import_attach);

-	return drm_gem_shmem_get_pages(shmem);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
+	if (ret)
+		return ret;
+	ret = drm_gem_shmem_pin_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);

@@ -291,12 +284,29 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)

 	drm_WARN_ON(obj->dev, obj->import_attach);

-	drm_gem_shmem_put_pages(shmem);
+	dma_resv_lock(shmem->base.resv, NULL);
+	drm_gem_shmem_unpin_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);

-static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
-				     struct iosys_map *map)
+/*
+ * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
+ *
+ * This function makes sure that a contiguous kernel virtual address mapping
+ * exists for the buffer backing the shmem GEM object. It hides the differences
+ * between dma-buf imported and natively allocated objects.
+ *
+ * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
+		       struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret = 0;
@@ -312,6 +322,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		pgprot_t prot = PAGE_KERNEL;

+		dma_resv_assert_held(shmem->base.resv);
+
 		if (shmem->vmap_use_count++ > 0) {
 			iosys_map_set_vaddr(map, shmem->vaddr);
 			return 0;
@@ -346,45 +358,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,

 	return ret;
 }
+EXPORT_SYMBOL(drm_gem_shmem_vmap);

 /*
- * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
- * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
- *       store.
- *
- * This function makes sure that a contiguous kernel virtual address mapping
- * exists for the buffer backing the shmem GEM object. It hides the differences
- * between dma-buf imported and natively allocated objects.
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
- * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ * This function cleans up a kernel virtual address mapping acquired by
+ * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
+ * zero.
  *
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function hides the differences between dma-buf imported and natively
+ * allocated objects.
  */
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
-		       struct iosys_map *map)
-{
-	int ret;
-
-	ret = mutex_lock_interruptible(&shmem->vmap_lock);
-	if (ret)
-		return ret;
-	ret = drm_gem_shmem_vmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
-
-	return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
-
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
-					struct iosys_map *map)
+void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
+			  struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;

 	if (obj->import_attach) {
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
+		dma_resv_assert_held(shmem->base.resv);
+
 		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
 			return;
@@ -397,26 +394,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,

 	shmem->vaddr = NULL;
 }
-
-/*
- * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
- * @shmem: shmem GEM object
- * @map: Kernel virtual address where the SHMEM GEM object was mapped
- *
- * This function cleans up a kernel virtual address mapping acquired by
- * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- * zero.
- *
- * This function hides the differences between dma-buf imported and natively
- * allocated objects.
- */
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
-			  struct iosys_map *map)
-{
-	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
-}
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);

 static int
@@ -447,24 +424,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_assert_held(shmem->base.resv);

 	if (shmem->madv >= 0)
 		shmem->madv = madv;

 	madv = shmem->madv;

-	mutex_unlock(&shmem->pages_lock);
-
 	return (madv >= 0);
 }
 EXPORT_SYMBOL(drm_gem_shmem_madvise);

-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;

+	dma_resv_assert_held(shmem->base.resv);
+
 	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));

 	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
@@ -472,7 +449,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	kfree(shmem->sgt);
 	shmem->sgt = NULL;

-	drm_gem_shmem_put_pages_locked(shmem);
+	drm_gem_shmem_put_pages(shmem);

 	shmem->madv = -1;

@@ -488,17 +465,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping,
 				 0, (loff_t)-1);
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
-
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
-{
-	if (!mutex_trylock(&shmem->pages_lock))
-		return false;
-	drm_gem_shmem_purge_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
-
-	return true;
-}
 EXPORT_SYMBOL(drm_gem_shmem_purge);

 /**
@@ -551,7 +517,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;

-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);

 	if (page_offset >= num_pages ||
 	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
@@ -563,7 +529,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}

-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);

 	return ret;
 }
@@ -575,7 +541,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)

 	drm_WARN_ON(obj->dev, obj->import_attach);

-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);

 	/*
 	 * We should have already pinned the pages when the buffer was first
@@ -585,7 +551,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		shmem->pages_use_count++;

-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);

 	drm_gem_vm_open(vma);
 }
@@ -595,7 +561,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
 	drm_gem_vm_close(vma);
 }

@@ -633,7 +602,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 		return ret;
 	}

+	dma_resv_lock(shmem->base.resv, NULL);
 	ret = drm_gem_shmem_get_pages(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
 	if (ret)
 		return ret;

@@ -699,7 +671,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem)

 	drm_WARN_ON(obj->dev, obj->import_attach);

-	ret = drm_gem_shmem_get_pages_locked(shmem);
+	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret)
 		return ERR_PTR(ret);

@@ -721,7 +693,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem)
 	sg_free_table(sgt);
 	kfree(sgt);
 err_put_pages:
-	drm_gem_shmem_put_pages_locked(shmem);
+	drm_gem_shmem_put_pages(shmem);
 	return ERR_PTR(ret);
 }

@@ -746,11 +718,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 	int ret;
 	struct sg_table *sgt;

-	ret = mutex_lock_interruptible(&shmem->pages_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ERR_PTR(ret);
 	sgt = drm_gem_shmem_get_pages_sgt_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);

 	return sgt;
 }

diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 10252dc11a22..4f9736e5f929 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)

 	new_size = min(new_size, bo->base.base.size);

-	mutex_lock(&bo->base.pages_lock);
+	dma_resv_lock(bo->base.base.resv, NULL);

 	if (bo->base.pages) {
 		pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
 				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
 		if (!pages) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return -ENOMEM;
 		}

@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		struct page *page = shmem_read_mapping_page(mapping, i);

 		if (IS_ERR(page)) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return PTR_ERR(page);
 		}
 		pages[i] = page;
 	}

-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(bo->base.base.resv);

 	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
 					new_size, GFP_KERNEL);

diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index bbada731bbbd..d9dda6acdfac 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -407,6 +407,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,

 	bo = to_panfrost_bo(gem_obj);

+	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
+	if (ret)
+		goto out_put_object;
+
 	mutex_lock(&pfdev->shrinker_lock);
 	mutex_lock(&bo->mappings.lock);
 	if (args->madv == PANFROST_MADV_DONTNEED) {
@@ -444,7 +448,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 out_unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
 	mutex_unlock(&pfdev->shrinker_lock);
-
+	dma_resv_unlock(bo->base.base.resv);
+out_put_object:
 	drm_gem_object_put(gem_obj);
 	return ret;
 }

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index bf0170782f25..6a71a2555f85 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 	if (!mutex_trylock(&bo->mappings.lock))
 		return false;

-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		goto unlock_mappings;

 	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge_locked(&bo->base);
+	drm_gem_shmem_purge(&bo->base);
 	ret = true;

-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);

 unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index e961fa27702c..c0123d09f699 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -443,6 +443,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
 	struct address_space *mapping;
+	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
 	struct page **pages;
@@ -465,15 +466,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	page_offset = addr >> PAGE_SHIFT;
 	page_offset -= bomapping->mmnode.start;

-	mutex_lock(&bo->base.pages_lock);
+	obj = &bo->base.base;
+
+	dma_resv_lock(obj->resv, NULL);

 	if (!bo->base.pages) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
 				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
-			mutex_unlock(&bo->base.pages_lock);
 			ret = -ENOMEM;
-			goto err_bo;
+			goto err_unlock;
 		}

 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
@@ -481,9 +483,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (!pages) {
 			kvfree(bo->sgts);
 			bo->sgts = NULL;
-			mutex_unlock(&bo->base.pages_lock);
 			ret = -ENOMEM;
-			goto err_bo;
+			goto err_unlock;
 		}
 		bo->base.pages = pages;
 		bo->base.pages_use_count = 1;
@@ -491,7 +492,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
 			/* Pages are already mapped, bail out. */
-			mutex_unlock(&bo->base.pages_lock);
 			goto out;
 		}
 	}
@@ -502,15 +502,12 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
-			mutex_unlock(&bo->base.pages_lock);
 			ret = PTR_ERR(pages[i]);
 			pages[i] = NULL;
 			goto err_pages;
 		}
 	}

-	mutex_unlock(&bo->base.pages_lock);
-
 	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
 					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
@@ -529,6 +526,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);

 out:
+	dma_resv_unlock(obj->resv);
+
 	panfrost_gem_mapping_put(bomapping);

 	return 0;

@@ -537,6 +536,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		sg_free_table(sgt);
 err_pages:
 	drm_gem_shmem_put_pages(&bo->base);
+err_unlock:
+	dma_resv_unlock(obj->resv);
 err_bo:
 	panfrost_gem_mapping_put(bomapping);
 	return ret;

diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5994fed5e327..20ddcd799df9 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct drm_gem_object base;

-	/**
-	 * @pages_lock: Protects the page table and use count
-	 */
-	struct mutex pages_lock;
-
 	/**
 	 * @pages: Page table
 	 */
@@ -65,11 +60,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct sg_table *sgt;

-	/**
-	 * @vmap_lock: Protects the vmap address and use count
-	 */
-	struct mutex vmap_lock;
-
 	/**
 	 * @vaddr: Kernel virtual address of the backing memory
 	 */
@@ -109,7 +99,6 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);

-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
@@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
 	       !shmem->base.dma_buf && !shmem->base.import_attach;
 }

-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);

 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
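With the helpers converted, callers of the unlocked drm-shmem entry points
must hold the GEM object's reservation lock themselves, matching the dma-buf
importer convention. A minimal usage sketch under that assumption (the
surrounding driver function is hypothetical):

    #include <drm/drm_gem_shmem_helper.h>
    #include <linux/dma-resv.h>
    #include <linux/iosys-map.h>

    /* Hypothetical driver helper: touch a shmem GEM object's backing
     * memory from the CPU under the reservation lock. */
    static int example_cpu_touch(struct drm_gem_shmem_object *shmem)
    {
            struct iosys_map map;
            int ret;

            ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
            if (ret)
                    return ret;

            /* vmap/vunmap now assert the resv lock instead of taking
             * the old vmap_lock mutex. */
            ret = drm_gem_shmem_vmap(shmem, &map);
            if (!ret) {
                    iosys_map_memset(&map, 0, 0, PAGE_SIZE); /* example access */
                    drm_gem_shmem_vunmap(shmem, &map);
            }

            dma_resv_unlock(shmem->base.resv);
            return ret;
    }

In contexts that must not block, such as the panfrost shrinker above,
dma_resv_trylock() replaces the old mutex_trylock(&shmem->pages_lock) pattern.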