From patchwork Wed Oct 4 01:02:31 2017
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 821097
From: Dmitry Osipenko
To: Thierry Reding, Joerg Roedel, Jonathan Hunter
Cc: linux-tegra@vger.kernel.org, iommu@lists.linux-foundation.org
Subject: [PATCH v2 1/2] iommu/tegra: gart: Optionally check for overwriting of page mappings
Date: Wed, 4 Oct 2017 04:02:31 +0300
Message-Id: <7eea4a30b0f4fdbe14351cf2c6cf537365080d2d.1507078770.git.digetx@gmail.com>

Due to a bug in the IOVA allocator, a page mapping could accidentally be
overwritten. We can catch this case by checking the 'VALID' bit of the GART's
page entry prior to mapping a page. Since this check introduces a noticeable
performance impact, it has to be enabled explicitly by the new
CONFIG_TEGRA_IOMMU_GART_DEBUG option.
Signed-off-by: Dmitry Osipenko
---
 drivers/iommu/Kconfig      |  9 +++++++++
 drivers/iommu/tegra-gart.c | 16 +++++++++++++++-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f3a21343e636..851156a4896d 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -242,6 +242,15 @@ config TEGRA_IOMMU_GART
 	  space through the GART (Graphics Address Relocation Table) hardware
 	  included on Tegra SoCs.
 
+config TEGRA_IOMMU_GART_DEBUG
+	bool "Debug Tegra GART IOMMU"
+	depends on TEGRA_IOMMU_GART
+	help
+	  Properly unmap pages and check whether a page is already mapped
+	  before mapping it, at the expense of performance. This allows
+	  catching double page mappings caused, for example, by a bug in
+	  the IOVA allocator.
+
 config TEGRA_IOMMU_SMMU
 	bool "NVIDIA Tegra SMMU Support"
 	depends on ARCH_TEGRA
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index b62f790ad1ba..bc4cb200fa03 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -271,6 +271,7 @@ static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	struct gart_device *gart = gart_domain->gart;
 	unsigned long flags;
 	unsigned long pfn;
+	unsigned long pte;
 
 	if (!gart_iova_range_valid(gart, iova, bytes))
 		return -EINVAL;
@@ -282,6 +283,14 @@ static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
 		spin_unlock_irqrestore(&gart->pte_lock, flags);
 		return -EINVAL;
 	}
+	if (IS_ENABLED(CONFIG_TEGRA_IOMMU_GART_DEBUG)) {
+		pte = gart_read_pte(gart, iova);
+		if (pte & GART_ENTRY_PHYS_ADDR_VALID) {
+			spin_unlock_irqrestore(&gart->pte_lock, flags);
+			dev_err(gart->dev, "Page entry is used already\n");
+			return -EBUSY;
+		}
+	}
 	gart_set_pte(gart, iova, GART_PTE(pfn));
 	FLUSH_GART_REGS(gart);
 	spin_unlock_irqrestore(&gart->pte_lock, flags);
@@ -295,6 +304,10 @@ static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 	struct gart_device *gart = gart_domain->gart;
 	unsigned long flags;
 
+	/* don't unmap page entries to achieve better performance */
+	if (!IS_ENABLED(CONFIG_TEGRA_IOMMU_GART_DEBUG))
+		return 0;
+
 	if (!gart_iova_range_valid(gart, iova, bytes))
 		return 0;
 
@@ -302,7 +315,8 @@ static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 	gart_set_pte(gart, iova, 0);
 	FLUSH_GART_REGS(gart);
 	spin_unlock_irqrestore(&gart->pte_lock, flags);
-	return 0;
+
+	return bytes;
 }
 
 static phys_addr_t gart_iommu_iova_to_phys(struct iommu_domain *domain,