From patchwork Mon Oct 18 15:58:43 2021
X-Patchwork-Submitter: Juerg Haefliger
X-Patchwork-Id: 1542782
From: Juerg Haefliger
To: kernel-team@lists.ubuntu.com
Subject: [SRU][I/raspi][PATCH 05/10] Revert "drm/vc4: kms: Convert to atomic helpers"
Date: Mon, 18 Oct 2021 17:58:43 +0200
Message-Id: <20211018155848.334053-6-juergh@canonical.com>
In-Reply-To: <20211018155848.334053-1-juergh@canonical.com>
References: <20211018155848.334053-1-juergh@canonical.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
List-Id: Kernel team discussions
Errors-To: kernel-team-bounces@lists.ubuntu.com
Sender: "kernel-team"

BugLink: https://bugs.launchpad.net/bugs/1946368

This reverts commit c9ba6cf8858b22fad16ddfe261a90181c4f9a504.

Signed-off-by: Juerg Haefliger
---
 drivers/gpu/drm/vc4/vc4_kms.c | 121 ++++++++++++++++++++++++++++++----
 1 file changed, 108 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
index a3bdd1c34ef5..24dea3acb9de 100644
--- a/drivers/gpu/drm/vc4/vc4_kms.c
+++ b/drivers/gpu/drm/vc4/vc4_kms.c
@@ -335,7 +335,8 @@ static void vc5_hvs_pv_muxing_commit(struct vc4_dev *vc4,
 	}
 }
 
-static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+static void
+vc4_atomic_complete_commit(struct drm_atomic_state *state)
 {
 	struct drm_device *dev = state->dev;
 	struct vc4_dev *vc4 = to_vc4_dev(dev);
@@ -360,6 +361,10 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
 	if (vc4->hvs && vc4->hvs->hvs5)
 		core_req = clk_request_start(hvs->core_clk, 500000000);
 
+	drm_atomic_helper_wait_for_fences(dev, state, false);
+
+	drm_atomic_helper_wait_for_dependencies(state);
+
 	old_hvs_state = vc4_hvs_get_old_global_state(state);
 	if (!old_hvs_state)
 		return;
@@ -413,27 +418,29 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
 
 	drm_atomic_helper_cleanup_planes(dev, state);
 
+	drm_atomic_helper_commit_cleanup_done(state);
+
 	if (vc4->hvs && vc4->hvs->hvs5)
 		clk_request_done(core_req);
+
+	drm_atomic_state_put(state);
+}
+
+static void commit_work(struct work_struct *work)
+{
+	struct drm_atomic_state *state = container_of(work,
+						      struct drm_atomic_state,
+						      commit_work);
+	vc4_atomic_complete_commit(state);
 }
 
 static int vc4_atomic_commit_setup(struct drm_atomic_state *state)
 {
-	struct drm_device *dev = state->dev;
-	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	struct drm_crtc_state *crtc_state;
 	struct vc4_hvs_state *hvs_state;
 	struct drm_crtc *crtc;
 	unsigned int i;
 
-	/* We know for sure we don't want an async update here. Set
-	 * state->legacy_cursor_update to false to prevent
-	 * drm_atomic_helper_setup_commit() from auto-completing
-	 * commit->flip_done.
-	 */
-	if (!vc4->firmware_kms)
-		state->legacy_cursor_update = false;
-
 	hvs_state = vc4_hvs_get_new_global_state(state);
 	if (!hvs_state)
 		return -EINVAL;
@@ -457,6 +464,95 @@ static int vc4_atomic_commit_setup(struct drm_atomic_state *state)
 	return 0;
 }
 
+/**
+ * vc4_atomic_commit - commit validated state object
+ * @dev: DRM device
+ * @state: the driver state object
+ * @nonblock: nonblocking commit
+ *
+ * This function commits a with drm_atomic_helper_check() pre-validated state
+ * object. This can still fail when e.g. the framebuffer reservation fails. For
+ * now this doesn't implement asynchronous commits.
+ *
+ * RETURNS
+ * Zero for success or -errno.
+ */
+static int vc4_atomic_commit(struct drm_device *dev,
+			     struct drm_atomic_state *state,
+			     bool nonblock)
+{
+	int ret;
+
+	if (state->async_update) {
+		ret = drm_atomic_helper_prepare_planes(dev, state);
+		if (ret)
+			return ret;
+
+		drm_atomic_helper_async_commit(dev, state);
+
+		drm_atomic_helper_cleanup_planes(dev, state);
+
+		return 0;
+	}
+
+	/* We know for sure we don't want an async update here. Set
+	 * state->legacy_cursor_update to false to prevent
+	 * drm_atomic_helper_setup_commit() from auto-completing
+	 * commit->flip_done.
+	 */
+	if (!vc4->firmware_kms)
+		state->legacy_cursor_update = false;
+	ret = drm_atomic_helper_setup_commit(state, nonblock);
+	if (ret)
+		return ret;
+
+	INIT_WORK(&state->commit_work, commit_work);
+
+	ret = drm_atomic_helper_prepare_planes(dev, state);
+	if (ret)
+		return ret;
+
+	if (!nonblock) {
+		ret = drm_atomic_helper_wait_for_fences(dev, state, true);
+		if (ret) {
+			drm_atomic_helper_cleanup_planes(dev, state);
+			return ret;
+		}
+	}
+
+	/*
+	 * This is the point of no return - everything below never fails except
+	 * when the hw goes bonghits. Which means we can commit the new state on
+	 * the software side now.
+	 */
+
+	BUG_ON(drm_atomic_helper_swap_state(state, false) < 0);
+
+	/*
+	 * Everything below can be run asynchronously without the need to grab
+	 * any modeset locks at all under one condition: It must be guaranteed
+	 * that the asynchronous work has either been cancelled (if the driver
+	 * supports it, which at least requires that the framebuffers get
+	 * cleaned up with drm_atomic_helper_cleanup_planes()) or completed
+	 * before the new state gets committed on the software side with
+	 * drm_atomic_helper_swap_state().
+	 *
+	 * This scheme allows new atomic state updates to be prepared and
+	 * checked in parallel to the asynchronous completion of the previous
+	 * update. Which is important since compositors need to figure out the
+	 * composition of the next frame right after having submitted the
+	 * current layout.
+	 */
+
+	drm_atomic_state_get(state);
+	if (nonblock)
+		queue_work(system_unbound_wq, &state->commit_work);
+	else
+		vc4_atomic_complete_commit(state);
+
+	return 0;
+}
+
 static struct drm_framebuffer *vc4_fb_create(struct drm_device *dev,
 					     struct drm_file *file_priv,
 					     const struct drm_mode_fb_cmd2 *mode_cmd)
@@ -872,12 +968,11 @@ vc4_atomic_check(struct drm_device *dev, struct drm_atomic_state *state)
 }
 
 static struct drm_mode_config_helper_funcs vc4_mode_config_helpers = {
 	.atomic_commit_setup = vc4_atomic_commit_setup,
-	.atomic_commit_tail = vc4_atomic_commit_tail,
 };
 
 static const struct drm_mode_config_funcs vc4_mode_funcs = {
 	.atomic_check = vc4_atomic_check,
-	.atomic_commit = drm_atomic_helper_commit,
+	.atomic_commit = vc4_atomic_commit,
 	.fb_create = vc4_fb_create,
 };
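
Note: the end of the restored vc4_atomic_commit() above implements a common blocking-vs-nonblocking dispatch pattern: take a reference on the state, then either queue the completion on a workqueue (nonblocking) or run it inline. The sketch below is a standalone, hypothetical illustration of that pattern only; the fake_* types and functions are stand-ins and are not part of the DRM API.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for drm_atomic_state and its helpers. */
struct fake_state {
	int refcount;
	bool completed;
	bool queued;
};

/* Stand-in for vc4_atomic_complete_commit(): finishes the commit and
 * drops the reference (drm_atomic_state_put()). */
static void fake_complete_commit(struct fake_state *state)
{
	state->completed = true;
	state->refcount--;
}

/* Stand-in for queue_work(): in the real driver the worker later calls
 * vc4_atomic_complete_commit() on the queued state. */
static void fake_queue_work(struct fake_state *state)
{
	state->queued = true;
}

/* Mirrors the tail of vc4_atomic_commit(): grab a reference
 * (drm_atomic_state_get()), then either hand the state to a worker or
 * complete it synchronously before returning. */
static int fake_commit(struct fake_state *state, bool nonblock)
{
	state->refcount++;
	if (nonblock)
		fake_queue_work(state);
	else
		fake_complete_commit(state);
	return 0;
}
```

The extra reference is what keeps the state alive across the asynchronous handoff: a blocking commit returns with the reference already dropped, while a nonblocking commit returns holding it until the worker runs.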