From patchwork Fri May 8 20:54:22 2020
X-Patchwork-Submitter: Kelsey Skunberg
X-Patchwork-Id: 1286450
From: Kelsey Skunberg <kelsey.skunberg@canonical.com>
To: kernel-team@lists.ubuntu.com
Subject: [B][PATCH 1/2] UBUNTU: SAUCE: Revert "drm/msm: Use the correct dma_sync calls in msm_gem"
Date: Fri, 8 May 2020 14:54:22 -0600
Message-Id: <20200508205423.30548-2-kelsey.skunberg@canonical.com>
In-Reply-To: <20200508205423.30548-1-kelsey.skunberg@canonical.com>
References: <20200508205423.30548-1-kelsey.skunberg@canonical.com>

BugLink: https://bugs.launchpad.net/bugs/1877657

This reverts commit 3de433c5b38af49a5fc7602721e2ab5d39f1e69c, which is
upstream commit 9f614197c744 ("drm/msm: Use the correct dma_sync calls
harder").

The commit contributes to Certification Test failures and should be
reverted until a fix or an alternative solution for the dma_sync calls
in msm_gem can be applied.

Signed-off-by: Kelsey Skunberg <kelsey.skunberg@canonical.com>
---
 drivers/gpu/drm/msm/msm_gem.c | 47 ++++-------------------------------
 1 file changed, 5 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index ea59eb5eb556..21502afbcddc 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -43,46 +43,6 @@ static bool use_pages(struct drm_gem_object *obj)
 	return !msm_obj->vram_node;
 }
 
-/*
- * Cache sync.. this is a bit over-complicated, to fit dma-mapping
- * API. Really GPU cache is out of scope here (handled on cmdstream)
- * and all we need to do is invalidate newly allocated pages before
- * mapping to CPU as uncached/writecombine.
- *
- * On top of this, we have the added headache, that depending on
- * display generation, the display's iommu may be wired up to either
- * the toplevel drm device (mdss), or to the mdp sub-node, meaning
- * that here we either have dma-direct or iommu ops.
- *
- * Let this be a cautionary tail of abstraction gone wrong.
- */
-
-static void sync_for_device(struct msm_gem_object *msm_obj)
-{
-	struct device *dev = msm_obj->base.dev->dev;
-
-	if (get_dma_ops(dev)) {
-		dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	} else {
-		dma_map_sg(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	}
-}
-
-static void sync_for_cpu(struct msm_gem_object *msm_obj)
-{
-	struct device *dev = msm_obj->base.dev->dev;
-
-	if (get_dma_ops(dev)) {
-		dma_sync_sg_for_cpu(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	} else {
-		dma_unmap_sg(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	}
-}
-
 /* allocate pages from VRAM carveout, used when no IOMMU: */
 static struct page **get_pages_vram(struct drm_gem_object *obj, int npages)
 {
@@ -148,7 +108,8 @@ static struct page **get_pages(struct drm_gem_object *obj)
 		 * because display controller, GPU, etc. are not coherent:
 		 */
 		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-			sync_for_device(msm_obj);
+			dma_sync_sg_for_device(dev->dev, msm_obj->sgt->sgl,
+					msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 	}
 
 	return msm_obj->pages;
@@ -177,7 +138,9 @@ static void put_pages(struct drm_gem_object *obj)
 		 * GPU, etc. are not coherent:
 		 */
 		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-			sync_for_cpu(msm_obj);
+			dma_sync_sg_for_cpu(obj->dev->dev, msm_obj->sgt->sgl,
+					msm_obj->sgt->nents,
+					DMA_BIDIRECTIONAL);
 
 		sg_free_table(msm_obj->sgt);
 		kfree(msm_obj->sgt);
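
For reviewers, the functional difference is easy to miss in the hunk
noise: the reverted commit routed cache maintenance through helpers
that check get_dma_ops() and fall back to dma_map_sg()/dma_unmap_sg()
on dma-direct devices, while this revert restores unconditional
dma_sync_sg_for_device()/dma_sync_sg_for_cpu() calls. A minimal
standalone sketch of the two patterns, not part of the patch (the
sketch_* helper names are hypothetical, for illustration only):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Sketch of the pattern removed by this revert: choose the
 * dma-mapping entry point based on whether the device has IOMMU
 * dma_ops or is dma-direct. (Hypothetical helper, not in the patch.)
 */
static void sketch_sync_for_device(struct device *dev, struct sg_table *sgt)
{
	if (get_dma_ops(dev))
		dma_sync_sg_for_device(dev, sgt->sgl, sgt->nents,
				DMA_BIDIRECTIONAL);
	else
		dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}

/* Sketch of the pattern this revert restores: sync unconditionally,
 * regardless of which dma_ops the device ended up with.
 */
static void sketch_sync_for_device_reverted(struct device *dev,
		struct sg_table *sgt)
{
	dma_sync_sg_for_device(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}

The get_dma_ops() check existed because, as the comment block removed
above explains, the display's iommu may be wired to the top-level drm
device (mdss) or to the mdp sub-node depending on display generation,
so the same code path can see either dma-direct or iommu ops.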