From patchwork Thu Dec 29 20:45:03 2016
X-Patchwork-Submitter: Nikita Yushchenko
X-Patchwork-Id: 709567
From: Nikita Yushchenko
To: Catalin Marinas, Will Deacon, Arnd Bergmann, linux-arm-kernel@lists.infradead.org, Simon Horman, Bjorn Helgaas, linux-pci@vger.kernel.org, linux-renesas-soc@vger.kernel.org
Cc: artemi.ivanov@cogentembedded.com, linux-kernel@vger.kernel.org, Nikita Yushchenko
Subject: [PATCH 1/2] arm64: dma_mapping: allow PCI host driver to limit DMA mask
Date: Thu, 29 Dec 2016 23:45:03 +0300
Message-Id: <1483044304-2085-1-git-send-email-nikita.yoush@cogentembedded.com>
X-Mailer: git-send-email 2.1.4

It is possible that a PCI device supports 64-bit DMA addressing, and thus its driver sets the device's dma_mask to DMA_BIT_MASK(64), while the PCI host bridge has addressing limitations on inbound transactions. An example of such a setup is an NVMe SSD device connected to the RCAR PCIe controller.

Previously there was an attempt to handle this via a bus notifier: after a driver is attached to the PCI device, the bridge driver gets a notifier callback and resets the dma_mask from there. However, this is racy: the PCI device driver could already have allocated buffers and/or started i/o in its probe routine.
In the NVMe case, i/o is started in workqueue context, and this race gives a "sometimes works, sometimes not" effect.

A proper solution is to make the driver's dma_set_mask() call fail if the host bridge cannot support the mask being set.

This patch makes __swiotlb_dma_supported() check the mask being set for a PCI device against the dma_mask of the struct device corresponding to the PCI host bridge (the one named "pciXXXX:YY"), if that dma_mask is set.

This is the least destructive approach: currently the dma_mask of that device object is not used at all, thus all existing setups will work as before, and modification is required only in the actually affected components: the driver of the particular PCI host bridge, and the dma_map_ops of the particular platform.

Signed-off-by: Nikita Yushchenko
---
 arch/arm64/mm/dma-mapping.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 290a84f..49645277 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <linux/pci.h>

 #include
@@ -347,6 +348,16 @@ static int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,

 static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
+#ifdef CONFIG_PCI
+	if (dev_is_pci(hwdev)) {
+		struct pci_dev *pdev = to_pci_dev(hwdev);
+		struct pci_host_bridge *br = pci_find_host_bridge(pdev->bus);
+
+		if (br->dev.dma_mask && (*br->dev.dma_mask) &&
+		    (mask & (*br->dev.dma_mask)) != mask)
+			return 0;
+	}
+#endif
 	if (swiotlb)
 		return swiotlb_dma_supported(hwdev, mask);
 	return 1;