From patchwork Sat May 9 00:28:59 2020
X-Patchwork-Submitter: Kelsey Skunberg
X-Patchwork-Id: 1286508
From: Kelsey Skunberg <kelsey.skunberg@canonical.com>
To: kernel-team@lists.ubuntu.com
Subject: [B][PATCH v3 1/2] Revert "drm/msm: Use the correct dma_sync calls in msm_gem"
Date: Fri, 8 May 2020 18:28:59 -0600
Message-Id: <20200509002900.5921-2-kelsey.skunberg@canonical.com>
In-Reply-To: <20200509002900.5921-1-kelsey.skunberg@canonical.com>
References: <20200509002900.5921-1-kelsey.skunberg@canonical.com>

BugLink: https://bugs.launchpad.net/bugs/1877657

This reverts commit 0519bad6f34f693c9733deb4b5cd208c7881cd66, which is
upstream commit 3de433c5b38a ("drm/msm: Use the correct dma_sync calls
in msm_gem"). The commit contributes to Certification Test failures and
should be reverted until a fix or an alternative solution for the
dma_sync calls in msm_gem can be applied.

Signed-off-by: Kelsey Skunberg <kelsey.skunberg@canonical.com>
---
 drivers/gpu/drm/msm/msm_gem.c | 47 ++++-------------------------------
 1 file changed, 5 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index ea59eb5eb556..21502afbcddc 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -43,46 +43,6 @@ static bool use_pages(struct drm_gem_object *obj)
 	return !msm_obj->vram_node;
 }
 
-/*
- * Cache sync.. this is a bit over-complicated, to fit dma-mapping
- * API. Really GPU cache is out of scope here (handled on cmdstream)
- * and all we need to do is invalidate newly allocated pages before
- * mapping to CPU as uncached/writecombine.
- *
- * On top of this, we have the added headache, that depending on
- * display generation, the display's iommu may be wired up to either
- * the toplevel drm device (mdss), or to the mdp sub-node, meaning
- * that here we either have dma-direct or iommu ops.
- *
- * Let this be a cautionary tail of abstraction gone wrong.
- */
-
-static void sync_for_device(struct msm_gem_object *msm_obj)
-{
-	struct device *dev = msm_obj->base.dev->dev;
-
-	if (get_dma_ops(dev)) {
-		dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	} else {
-		dma_map_sg(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	}
-}
-
-static void sync_for_cpu(struct msm_gem_object *msm_obj)
-{
-	struct device *dev = msm_obj->base.dev->dev;
-
-	if (get_dma_ops(dev)) {
-		dma_sync_sg_for_cpu(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	} else {
-		dma_unmap_sg(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	}
-}
-
 /* allocate pages from VRAM carveout, used when no IOMMU: */
 static struct page **get_pages_vram(struct drm_gem_object *obj, int npages)
 {
@@ -148,7 +108,8 @@ static struct page **get_pages(struct drm_gem_object *obj)
 		 * because display controller, GPU, etc. are not coherent:
 		 */
 		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-			sync_for_device(msm_obj);
+			dma_sync_sg_for_device(dev->dev, msm_obj->sgt->sgl,
+					msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 	}
 
 	return msm_obj->pages;
@@ -177,7 +138,9 @@ static void put_pages(struct drm_gem_object *obj)
 			 * GPU, etc. are not coherent:
 			 */
 			if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-				sync_for_cpu(msm_obj);
+				dma_sync_sg_for_cpu(obj->dev->dev, msm_obj->sgt->sgl,
+						msm_obj->sgt->nents,
+						DMA_BIDIRECTIONAL);
 
 			sg_free_table(msm_obj->sgt);
 			kfree(msm_obj->sgt);
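
The revert takes msm_gem back to calling dma_sync_sg_for_device() /
dma_sync_sg_for_cpu() directly instead of routing through the removed
sync_for_device()/sync_for_cpu() helpers. As a minimal sketch of the
ownership pattern those calls implement (the standalone helper names
below are illustrative, not code from this patch):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hand ownership of the buffer to the device: write CPU caches back
 * so a non-coherent GPU/display controller sees current data. */
static void example_sync_to_device(struct device *dev, struct sg_table *sgt)
{
	dma_sync_sg_for_device(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}

/* Hand ownership back to the CPU before the pages are touched or
 * freed: invalidate stale CPU cache lines. */
static void example_sync_to_cpu(struct device *dev, struct sg_table *sgt)
{
	dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}

In get_pages() the device-direction sync runs once after allocation for
WC/uncached buffers; in put_pages() the CPU-direction sync runs before
the scatterlist is freed, matching the two hunks above.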