From patchwork Thu Feb 25 06:08:42 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexandre Courbot
X-Patchwork-Id: 587933
X-Patchwork-Delegate: gnurou@gmail.com
From: Alexandre Courbot
To: Ben Skeggs
CC: nouveau@lists.freedesktop.org, linux-tegra@vger.kernel.org,
 gnurou@gmail.com, Alexandre Courbot
Subject: [PATCH v2] instmem/gk20a: set DMA mask early
Date: Thu, 25 Feb 2016 15:08:42 +0900
Message-ID: <1456380522-17520-1-git-send-email-acourbot@nvidia.com>
X-Mailer: git-send-email 2.7.1
X-NVConfidentiality: public

The DMA mask is typically set in nouveau_ttm_init(), but this function is
called late during initialization, and GK20A's instmem will already have
called DMA functions by then. An incorrectly set DMA mask can result in the
use of unneeded bounce buffers. Set it early to avoid this.

Signed-off-by: Alexandre Courbot
---
Changes since v1:
- Set mask in tegra.c instead of instmem, as suggested by Ben.

 drm/nouveau/nvkm/engine/device/tegra.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drm/nouveau/nvkm/engine/device/tegra.c b/drm/nouveau/nvkm/engine/device/tegra.c
index 6d89416f0bc1..e1cd665aee2c 100644
--- a/drm/nouveau/nvkm/engine/device/tegra.c
+++ b/drm/nouveau/nvkm/engine/device/tegra.c
@@ -272,6 +272,15 @@ nvkm_device_tegra_new(const struct nvkm_device_tegra_func *func,
 	if (IS_ERR(tdev->clk_pwr))
 		return PTR_ERR(tdev->clk_pwr);
 
+	/**
+	 * The IOMMU bit defines the upper limit of the GPU-addressable space.
+	 * This will be refined in nouveau_ttm_init but we need to do it early
+	 * for instmem to behave properly
+	 */
+	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(tdev->func->iommu_bit));
+	if (ret)
+		return ret;
+
 	nvkm_device_tegra_probe_iommu(tdev);
 
 	ret = nvkm_device_tegra_power_up(tdev);
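
For context, here is a minimal standalone sketch (not part of the patch; the
demo_probe() driver and the 34-bit mask value are assumptions made purely for
illustration) of the pattern the change relies on: widen the device's DMA mask
before the first DMA allocation, so that buffers backed by memory above the
default 32-bit limit are mapped directly instead of going through bounce
buffers. The patch itself uses dma_set_mask() with tdev->func->iommu_bit; the
sketch uses dma_set_mask_and_coherent() so the coherent allocation below also
honors the wider mask.

/* Hypothetical platform driver probe, for illustration only. */
#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

static int demo_probe(struct platform_device *pdev)
{
	dma_addr_t dma_handle;
	void *cpu_addr;
	int ret;

	/*
	 * Widen both the streaming and coherent masks before any DMA API
	 * use; 34 bits is an assumed value standing in for the GPU's
	 * addressable range (tdev->func->iommu_bit in the patch above).
	 */
	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(34));
	if (ret)
		return ret;

	/*
	 * With the wider mask in place, an allocation backed by memory
	 * above 4 GiB can be used directly rather than being bounced.
	 */
	cpu_addr = dma_alloc_coherent(&pdev->dev, SZ_4K, &dma_handle,
				      GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	dma_free_coherent(&pdev->dev, SZ_4K, cpu_addr, dma_handle);
	return 0;
}

In the Nouveau case the same ordering concern applies: the mask derived from
tdev->func->iommu_bit is set in nvkm_device_tegra_new() before instmem makes
its first DMA allocations, and is refined later in nouveau_ttm_init().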