From patchwork Thu May 27 12:58:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484581 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=Ql518PoK; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSZ21jmSz9sRN for ; Thu, 27 May 2021 22:59:06 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235914AbhE0NAi (ORCPT ); Thu, 27 May 2021 09:00:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55926 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235177AbhE0NAh (ORCPT ); Thu, 27 May 2021 09:00:37 -0400 Received: from mail-pg1-x52b.google.com (mail-pg1-x52b.google.com [IPv6:2607:f8b0:4864:20::52b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6FAF3C061760 for ; Thu, 27 May 2021 05:59:04 -0700 (PDT) Received: by mail-pg1-x52b.google.com with SMTP id l70so3652026pga.1 for ; Thu, 27 May 2021 05:59:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=TTF4vDbD9vQ1KC1jLpuRC6SYDBZGWhbxeMIB/CCcXbM=; b=Ql518PoKOfY5r4efBKvH9DmAfXHTzOTlD7cQCPHv4pwiDJMplS9HGrBounYOm/uHrf /J12Wcg/AjGjzAJHhjxJJIQiFQkUIqzSfolO8gn7+iXITH1Ou9qg/tKyKRsiVDL/Q70X wZNWq4zxBO11T+ZAQUrcXnBAK+2h+MWriKags= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=TTF4vDbD9vQ1KC1jLpuRC6SYDBZGWhbxeMIB/CCcXbM=; b=kmS73KZvUcm5Akhq8jbGHAa5ceaROKaDOKk7RrekA9ol5+GlSNW4ATaCpaf/hUbLNn X2JN0f6w8UPy4iBQnQzqvFABrbJXgKdqMa+qnJ0NC8kySM3qpIYCTjQv/+9Q8QYNbIEL 0I7rhk2OOHMOO/Q4FQrtYNGAdg3z2ufAizHbbRsVZy9Pl3PVuvAI+FzLMSwq9CzJpQoP ZFTkz/k+hw4fZ7uZ44mSKVARQ2LH+mcwLSmPbP9a6Fml1yby+nr7Pd6Bdz8hwDja83H5 VXpCoThK6MmjjYhGV/86caMJMMFPp7WcDSS5TdYo+Q5CC2Uzmm4wkuPA61/B6fg9375G yA6A== X-Gm-Message-State: AOAM533MAvD6IEADk+RlusFF04AAQoKBdxvKBqBVzZEi9OxOfxArOmcO iGDwlrjV0wTf+EkPWox/AHqzGA== X-Google-Smtp-Source: ABdhPJyp2dRY1SU48NJaQCKs+dJX/2XsN+WDKmaiBpTycmpdiWBGW8lmfuldLoK+B+T5ST+S6ZG9/g== X-Received: by 2002:a63:f248:: with SMTP id d8mr3579481pgk.219.1622120343937; Thu, 27 May 2021 05:59:03 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id o17sm1723677pjp.33.2021.05.27.05.58.56 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 05:59:03 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, 
Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 01/15] swiotlb: Refactor swiotlb init functions Date: Thu, 27 May 2021 20:58:31 +0800 Message-Id: <20210527125845.1852284-2-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct initialization to make the code reusable. Note that we now also call set_memory_decrypted in swiotlb_init_with_tbl. Signed-off-by: Claire Chang --- kernel/dma/swiotlb.c | 51 ++++++++++++++++++++++---------------------- 1 file changed, 25 insertions(+), 26 deletions(-) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 8ca7d505d61c..d3232fc19385 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -168,9 +168,30 @@ void __init swiotlb_update_mem_attributes(void) memset(vaddr, 0, bytes); } -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start, + unsigned long nslabs, bool late_alloc) { + void *vaddr = phys_to_virt(start); unsigned long bytes = nslabs << IO_TLB_SHIFT, i; + + mem->nslabs = nslabs; + mem->start = start; + mem->end = mem->start + bytes; + mem->index = 0; + mem->late_alloc = late_alloc; + spin_lock_init(&mem->lock); + for (i = 0; i < mem->nslabs; i++) { + mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i); + mem->slots[i].orig_addr = INVALID_PHYS_ADDR; + mem->slots[i].alloc_size = 0; + } + + set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT); + memset(vaddr, 0, bytes); +} + +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) +{ struct io_tlb_mem *mem; size_t alloc_size; @@ -186,16 +207,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) if (!mem) panic("%s: Failed to allocate %zu bytes align=0x%lx\n", __func__, alloc_size, PAGE_SIZE); - mem->nslabs = nslabs; - mem->start = __pa(tlb); - mem->end = mem->start + bytes; - mem->index = 0; - spin_lock_init(&mem->lock); - for (i = 0; i < mem->nslabs; i++) { - mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i); - mem->slots[i].orig_addr = INVALID_PHYS_ADDR; - mem->slots[i].alloc_size = 0; - } + + swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false); io_tlb_default_mem = mem; if (verbose) @@ -282,7 +295,6 @@ swiotlb_late_init_with_default_size(size_t default_size) int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) { - unsigned long bytes = nslabs << IO_TLB_SHIFT, i; struct io_tlb_mem *mem; if (swiotlb_force == SWIOTLB_NO_FORCE) @@ -297,20 +309,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) if (!mem) return -ENOMEM; - 
mem->nslabs = nslabs; - mem->start = virt_to_phys(tlb); - mem->end = mem->start + bytes; - mem->index = 0; - mem->late_alloc = 1; - spin_lock_init(&mem->lock); - for (i = 0; i < mem->nslabs; i++) { - mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i); - mem->slots[i].orig_addr = INVALID_PHYS_ADDR; - mem->slots[i].alloc_size = 0; - } - - set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT); - memset(tlb, 0, bytes); + swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true); io_tlb_default_mem = mem; swiotlb_print_info(); From patchwork Thu May 27 12:58:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484582 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=H8QKHfGZ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSZJ4lKgz9sXS for ; Thu, 27 May 2021 22:59:20 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235939AbhE0NAu (ORCPT ); Thu, 27 May 2021 09:00:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55980 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235903AbhE0NAs (ORCPT ); Thu, 27 May 2021 09:00:48 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E2DBC061574 for ; Thu, 27 May 2021 05:59:13 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id q16so2296904pls.6 for ; Thu, 27 May 2021 05:59:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=FBa4WmaBqBXHvL8MxFx8fqbv9G1UK9TFNo0t6ZyBRps=; b=H8QKHfGZsWxCHEy5cA1tQcGKb44EP5b2zAEgNghokxkbg+whP74qNT9pCynan1YB4C ODR1VCmnmaAcY81RiH4rrdh2ogtLXiV4eZN6Qr0E0GHPsI2d+SkbVqT8pWRXMc0PJZWW P9APiIrxJsEIWE0eMgqeltDq17QR2UGVMSiSA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=FBa4WmaBqBXHvL8MxFx8fqbv9G1UK9TFNo0t6ZyBRps=; b=Zi/01Ejy6dhiyp6Nd4DepsHnhiJdEO/ckMk4/gt+GST84N4Ly2aFxXSwngAiC+v6BG 6p5D2laVlypHghUI2kUHZLhAoc2XIHcUN/WdO7CiLDgMgVjcJcbVKI+Vv4nTZtAwWuWC t4TAmbRB9u1xm/sW9skoVDt1djc4nvud6olDKQWeTm59Jc1SiPifWIx9WMocCIihQv2C nZW2tr2h0kEhT/h3uT0BU7hYCaytIb4tDNV51+FRJt4b9vmq+quwYjp6ixVLS4IJqy+E XoFzp6vNXyqWxyP1mea1DDwu6CwzVpq4Z6c2dgizI94V5SXXPVC7vIFvtBc36BNAl+kV 7SxA== X-Gm-Message-State: AOAM530MDgKk5QRRsczahEaA+2IHBKc0c75YQzmZOmHcxOJa83KeyEnr umcYo7t4fjLiLdJbAHdCumUrrA== X-Google-Smtp-Source: ABdhPJwy7gpYKip34XTwi+RiwqWb6k5jmIQ4n8vVEEugdkMYxDsCVe/LaGvOwoAOLDbCetdBwhFgLQ== X-Received: by 2002:a17:90b:190b:: with SMTP id mp11mr9025962pjb.77.1622120353025; Thu, 27 May 2021 05:59:13 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id 
r10sm2150201pga.48.2021.05.27.05.59.05 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 05:59:12 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 02/15] swiotlb: Refactor swiotlb_create_debugfs Date: Thu, 27 May 2021 20:58:32 +0800 Message-Id: <20210527125845.1852284-3-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Split the debugfs creation to make the code reusable for supporting different bounce buffer pools, e.g. restricted DMA pool. 
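For illustration only, a minimal sketch of how the split-out helper can be reused for a named pool; it assumes placement in kernel/dma/swiotlb.c (swiotlb_create_debugfs stays static), and patch 04 of this series calls it this way with the reserved-memory node name:

/*
 * Illustrative sketch, not part of this patch. A restricted pool registers
 * its counters under a directory named after its reserved-memory node,
 * nested below the top-level "swiotlb" debugfs directory.
 */
#ifdef CONFIG_DEBUG_FS
static void example_restricted_pool_debugfs(struct io_tlb_mem *mem,
					    const char *rmem_name)
{
	/* Typically ends up as /sys/kernel/debug/swiotlb/<rmem_name>/io_tlb_used */
	swiotlb_create_debugfs(mem, rmem_name);
}
#endif

Keeping the top-level directory in the static debugfs_dir pointer is what lets the default pool and any later pools share one debugfs namespace.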
Signed-off-by: Claire Chang --- kernel/dma/swiotlb.c | 25 +++++++++++++++++++------ 1 file changed, 19 insertions(+), 6 deletions(-) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index d3232fc19385..b849b01a446f 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -64,6 +64,7 @@ enum swiotlb_force swiotlb_force; struct io_tlb_mem *io_tlb_default_mem; +static struct dentry *debugfs_dir; /* * Max segment that we can provide which (if pages are contingous) will @@ -662,18 +663,30 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active); #ifdef CONFIG_DEBUG_FS -static int __init swiotlb_create_debugfs(void) +static void swiotlb_create_debugfs(struct io_tlb_mem *mem, const char *name) { - struct io_tlb_mem *mem = io_tlb_default_mem; - if (!mem) - return 0; - mem->debugfs = debugfs_create_dir("swiotlb", NULL); + return; + + mem->debugfs = debugfs_create_dir(name, debugfs_dir); debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs); debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used); +} + +static int __init swiotlb_create_default_debugfs(void) +{ + struct io_tlb_mem *mem = io_tlb_default_mem; + + if (mem) { + swiotlb_create_debugfs(mem, "swiotlb"); + debugfs_dir = mem->debugfs; + } else { + debugfs_dir = debugfs_create_dir("swiotlb", NULL); + } + return 0; } -late_initcall(swiotlb_create_debugfs); +late_initcall(swiotlb_create_default_debugfs); #endif From patchwork Thu May 27 12:58:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484584 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=E61Ri/F3; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSZQ6nQJz9sXL for ; Thu, 27 May 2021 22:59:26 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235986AbhE0NA5 (ORCPT ); Thu, 27 May 2021 09:00:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56034 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235983AbhE0NA4 (ORCPT ); Thu, 27 May 2021 09:00:56 -0400 Received: from mail-pg1-x534.google.com (mail-pg1-x534.google.com [IPv6:2607:f8b0:4864:20::534]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 06B4EC06138C for ; Thu, 27 May 2021 05:59:22 -0700 (PDT) Received: by mail-pg1-x534.google.com with SMTP id e22so3617612pgv.10 for ; Thu, 27 May 2021 05:59:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=v5uDNYG4BjZ/8joAD3lImi71MufKa1LW825i4nd7Nkw=; b=E61Ri/F39WkjFWJqAXjGFvQPED/h2WAUJXs5JDDaCoUA1omO3EVyPXt2Y2hwAfXcjs MTmmUwzdXmE7AXrGKnYXFwjizScT15arM61lsHXOY8uoU6WtnTGi/ZNZ6/f2x5v6f4yj nZWUxmDPBg+tGToE82wIiDUeuR5LEBQHZ4KOI= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=v5uDNYG4BjZ/8joAD3lImi71MufKa1LW825i4nd7Nkw=; b=SZydy9fqIOI+mDD8G+2Hzu7Wn7TQlsAod6ip+76iTyZ9n8VltrgergYaI8SklKuV4V X1TXxaTmYNi97gFQC1Aia/zzfXO9wQVixXkb2AVO7GDI+q9Qu6JgfA9c1v41941jb9RW cmDFEl93Eao2z8ioBbINBC7bEA47hiGdXkvfqpin76BfFUKaZmh8HzcmHB0/kxvNyplB Cv49YxIAWI2wLMqn3Nx6bD5zt6t0gvKVjAhPlHm8Aar7W6a90KQniwEfom+7kU9j25vp T/YYcSttZdW2XfTGZDis+1OAF6Dk8hQcY2417BTDaWajy0lg6P02yU5UsTMkY6rbunMM V2Vw== X-Gm-Message-State: AOAM532DlJNAYQyEK+UYvtp/Q63hTU0tww8Vy1KFVr0cuMMDgOwgFrbE r+ZwbNmfXabYEK4PYiYKEV/OBA== X-Google-Smtp-Source: ABdhPJxENRpAFf3CkWut6vuXYNlsvVQFU1tdVBb0uZip+87V32FWXRIoYRditOw7sBaKZrr0nh2u9g== X-Received: by 2002:a05:6a00:134b:b029:2bf:2c30:ebbd with SMTP id k11-20020a056a00134bb02902bf2c30ebbdmr3639373pfu.74.1622120361576; Thu, 27 May 2021 05:59:21 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id m14sm2012639pff.17.2021.05.27.05.59.14 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 05:59:21 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 03/15] swiotlb: Add DMA_RESTRICTED_POOL Date: Thu, 27 May 2021 20:58:33 +0800 Message-Id: <20210527125845.1852284-4-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Add a new kconfig symbol, DMA_RESTRICTED_POOL, for restricted DMA pool. Signed-off-by: Claire Chang --- kernel/dma/Kconfig | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig index 77b405508743..3e961dc39634 100644 --- a/kernel/dma/Kconfig +++ b/kernel/dma/Kconfig @@ -80,6 +80,20 @@ config SWIOTLB bool select NEED_DMA_MAP_STATE +config DMA_RESTRICTED_POOL + bool "DMA Restricted Pool" + depends on OF && OF_RESERVED_MEM + select SWIOTLB + help + This enables support for restricted DMA pools which provide a level of + DMA memory protection on systems with limited hardware protection + capabilities, such as those lacking an IOMMU. + + For more information see + + and . + If unsure, say "n". + # # Should be selected if we can mmap non-coherent mappings to userspace. 
# The only thing that is really required is a way to set an uncached bit From patchwork Thu May 27 12:58:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484585 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=CvfRxJ5V; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSZX6nHBz9sWT for ; Thu, 27 May 2021 22:59:32 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236037AbhE0NBE (ORCPT ); Thu, 27 May 2021 09:01:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56076 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236030AbhE0NBD (ORCPT ); Thu, 27 May 2021 09:01:03 -0400 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com [IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 12EA5C061574 for ; Thu, 27 May 2021 05:59:31 -0700 (PDT) Received: by mail-pl1-x62e.google.com with SMTP id b7so2316301plg.0 for ; Thu, 27 May 2021 05:59:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=T4AlVMpaxoBirwNTcqLmDmgFXiPVS0VLyb+9i3p+imI=; b=CvfRxJ5V9gKk6u6YLWcJx5gnvIl9KADGaY68BVyoOTlxsZIYBwmCIUy6cy/EkJ5V39 BIErMCSgOF8GthMo65IN/6rS1CSzvq2iqmoBrtsHAQg9YFDtsGD6qxVIcoystXKVmg4Y 0Pj4mBD1pyn4QIUFHHuIi/lzWBR/yBkH4GJaE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=T4AlVMpaxoBirwNTcqLmDmgFXiPVS0VLyb+9i3p+imI=; b=SRC/Hc6S++9LEpzNfrL+jAXxPrfTtYVX0a3uHUsureSuWqGR8Vc8RW980BFhp/puyh axZTorE4wv+xBoRH3Kq/8AaUH8uAoNBamqOM9BY7hUuWPA79RuSNloJ8dQ9UkPIhBkku hs1FjpNjLWOeCq7eemjJtgwzLJAjJqxy/DdLXiEFL3XhHm/cXa6s9otHbos1WRvs6THJ RDKjRM3satPaWkpdNxrRVIOYqZgd/f4IoxALlPhRLooelaJSkxPMgQfeom65fgqlIyuF 7CBAW427LyLxbJIU2CP86QAZ9I/O8olXJv8BXcUgg73PBiulb4DkgLC4c4uEdfJraVaV /Q7Q== X-Gm-Message-State: AOAM533Y5fUtV/uB7mlHdMStE49uV5bbcAgL6QxHEEvZ1ElQYM2Nz6nE ZJfNxrVxsjrH8j1EpeBJWdcq5w== X-Google-Smtp-Source: ABdhPJzsUXxHElPiQq44JQkAo3Fb/eFl0eY1YUlEp1O6N7E/B92qoxEbXYhTocMtdUka3Uh42209KQ== X-Received: by 2002:a17:902:b181:b029:fc:c069:865c with SMTP id s1-20020a170902b181b02900fcc069865cmr3099177plr.28.1622120370579; Thu, 27 May 2021 05:59:30 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id b1sm2283938pgf.84.2021.05.27.05.59.23 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 05:59:30 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , 
grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 04/15] swiotlb: Add restricted DMA pool initialization Date: Thu, 27 May 2021 20:58:34 +0800 Message-Id: <20210527125845.1852284-5-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Add the initialization function to create restricted DMA pools from matching reserved-memory nodes. Signed-off-by: Claire Chang --- include/linux/device.h | 4 +++ include/linux/swiotlb.h | 3 +- kernel/dma/swiotlb.c | 76 +++++++++++++++++++++++++++++++++++++++++ 3 files changed, 82 insertions(+), 1 deletion(-) diff --git a/include/linux/device.h b/include/linux/device.h index 959cb9d2c9ab..e78e1ce0b1b1 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -416,6 +416,7 @@ struct dev_links_info { * @dma_pools: Dma pools (if dma'ble device). * @dma_mem: Internal for coherent mem override. * @cma_area: Contiguous memory area for dma allocations + * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override. * @archdata: For arch-specific additions. * @of_node: Associated device tree node. * @fwnode: Associated device node supplied by platform firmware. @@ -521,6 +522,9 @@ struct device { #ifdef CONFIG_DMA_CMA struct cma *cma_area; /* contiguous memory area for dma allocations */ +#endif +#ifdef CONFIG_DMA_RESTRICTED_POOL + struct io_tlb_mem *dma_io_tlb_mem; #endif /* arch specific additions */ struct dev_archdata archdata; diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 216854a5e513..03ad6e3b4056 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force; * range check to see if the memory was in fact allocated by this * API. * @nslabs: The number of IO TLB blocks (in groups of 64) between @start and - * @end. This is command line adjustable via setup_io_tlb_npages. + * @end. For default swiotlb, this is command line adjustable via + * setup_io_tlb_npages. * @used: The number of used IO TLB block. * @list: The free list describing the number of free entries available * from each index. 
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index b849b01a446f..d99b403144a8 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -39,6 +39,13 @@ #ifdef CONFIG_DEBUG_FS #include #endif +#ifdef CONFIG_DMA_RESTRICTED_POOL +#include +#include +#include +#include +#include +#endif #include #include @@ -690,3 +697,72 @@ static int __init swiotlb_create_default_debugfs(void) late_initcall(swiotlb_create_default_debugfs); #endif + +#ifdef CONFIG_DMA_RESTRICTED_POOL +static int rmem_swiotlb_device_init(struct reserved_mem *rmem, + struct device *dev) +{ + struct io_tlb_mem *mem = rmem->priv; + unsigned long nslabs = rmem->size >> IO_TLB_SHIFT; + + if (dev->dma_io_tlb_mem) + return 0; + + /* + * Since multiple devices can share the same pool, the private data, + * io_tlb_mem struct, will be initialized by the first device attached + * to it. + */ + if (!mem) { + mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL); + if (!mem) + return -ENOMEM; + + swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false); + + rmem->priv = mem; + + if (IS_ENABLED(CONFIG_DEBUG_FS)) + swiotlb_create_debugfs(mem, rmem->name); + } + + dev->dma_io_tlb_mem = mem; + + return 0; +} + +static void rmem_swiotlb_device_release(struct reserved_mem *rmem, + struct device *dev) +{ + if (dev) + dev->dma_io_tlb_mem = NULL; +} + +static const struct reserved_mem_ops rmem_swiotlb_ops = { + .device_init = rmem_swiotlb_device_init, + .device_release = rmem_swiotlb_device_release, +}; + +static int __init rmem_swiotlb_setup(struct reserved_mem *rmem) +{ + unsigned long node = rmem->fdt_node; + + if (of_get_flat_dt_prop(node, "reusable", NULL) || + of_get_flat_dt_prop(node, "linux,cma-default", NULL) || + of_get_flat_dt_prop(node, "linux,dma-default", NULL) || + of_get_flat_dt_prop(node, "no-map", NULL)) + return -EINVAL; + + if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) { + pr_err("Restricted DMA pool must be accessible within the linear mapping."); + return -EINVAL; + } + + rmem->ops = &rmem_swiotlb_ops; + pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n", + &rmem->base, (unsigned long)rmem->size / SZ_1M); + return 0; +} + +RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup); +#endif /* CONFIG_DMA_RESTRICTED_POOL */ From patchwork Thu May 27 12:58:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484586 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=CWN//86M; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSZl32Rrz9sW7 for ; Thu, 27 May 2021 22:59:43 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236098AbhE0NBO (ORCPT ); Thu, 27 May 2021 09:01:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236066AbhE0NBN (ORCPT ); Thu, 27 May 2021 09:01:13 -0400 Received: from mail-pj1-x102f.google.com 
(mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E5B3C061574 for ; Thu, 27 May 2021 05:59:39 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id gb21-20020a17090b0615b029015d1a863a91so2296401pjb.2 for ; Thu, 27 May 2021 05:59:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=n2YnMeaa21fO8GV3DprpFoE7sFhCpYEOshpAHonnB4Q=; b=CWN//86M7pfZwpqjk6Pz1G7HqyEoQV2evBjGZ2yUSjg4QlRX/avwG27ptvNDpwyHCw +v7KXNRKRgPheqlBZeQIo496WAVDVK7ga9R6f5STmR5c/U2eiCxGBrlIMJe+YfycLoea t6vNAhsA0F2FaNz1gwrhV3oLti0cIUc2nP7Cs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=n2YnMeaa21fO8GV3DprpFoE7sFhCpYEOshpAHonnB4Q=; b=ACMqtFqftktwCDpuaJ6I28lyqVhsyKD/auVm1gL9bT52F5PJgFc4shsiFLhddR/VKU VZuTupbE652am6u0ZLd+3eGJiWrl+GN8G0zVhYM1NDTKf7uzziuf0UOJZw3+Rahhaz7w cB+nkHAW+bUgO4PakxQuKv7cU4+Nlt3qrsGtTXhkjKwD+9ZmoI5WvrMPN260CspQFzN0 /A5XGiBqqRrroFDDuntMEwIGAqkWkHTnyu+aAIVhE3OY30tVJW3CIGxDCdlql1sd++xu CLA2IsgXD2XlLcZJDlClcOxHJk+6dMaOH2TK3W52qDzvUpmJc0nm3z7Sm8GfU5lB3m+z 3SKw== X-Gm-Message-State: AOAM5320WUjsTGia4irXYBacsjcxcRHghbajOyNEG8JWpVZZluHF0TGT RGnXABPtLymvuh5C7ibhtomrZA== X-Google-Smtp-Source: ABdhPJy5Kn+yvqLcXzfzyvGlttPOTv6zxSUWOgHVrqd8CjcCzdgLfUd5wPMKRSdGsOL4+oXiuJ5KrA== X-Received: by 2002:a17:90a:46cc:: with SMTP id x12mr9547674pjg.52.1622120379071; Thu, 27 May 2021 05:59:39 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id o2sm1885997pfu.80.2021.05.27.05.59.31 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 05:59:38 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 05/15] swiotlb: Add a new get_io_tlb_mem getter Date: Thu, 27 May 2021 20:58:35 +0800 Message-Id: <20210527125845.1852284-6-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Add a new getter, get_io_tlb_mem, to help select the io_tlb_mem struct. 
The restricted DMA pool is preferred if available. The reason it was done this way instead of assigning the active pool to dev->dma_io_tlb_mem was because directly using dev->dma_io_tlb_mem might cause memory allocation issues for existing devices. The pool can't support atomic coherent allocation so swiotlb_alloc needs to distinguish it from the default swiotlb pool. Signed-off-by: Claire Chang --- include/linux/swiotlb.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 03ad6e3b4056..b469f04cca26 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -2,6 +2,7 @@ #ifndef __LINUX_SWIOTLB_H #define __LINUX_SWIOTLB_H +#include #include #include #include @@ -102,6 +103,16 @@ struct io_tlb_mem { }; extern struct io_tlb_mem *io_tlb_default_mem; +static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev) +{ +#ifdef CONFIG_DMA_RESTRICTED_POOL + if (dev && dev->dma_io_tlb_mem) + return dev->dma_io_tlb_mem; +#endif /* CONFIG_DMA_RESTRICTED_POOL */ + + return io_tlb_default_mem; +} + static inline bool is_swiotlb_buffer(phys_addr_t paddr) { struct io_tlb_mem *mem = io_tlb_default_mem; From patchwork Thu May 27 12:58:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484588 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=cNn9BMhh; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSZy6wzxz9sXH for ; Thu, 27 May 2021 22:59:53 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236081AbhE0NBW (ORCPT ); Thu, 27 May 2021 09:01:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56180 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236066AbhE0NBV (ORCPT ); Thu, 27 May 2021 09:01:21 -0400 Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com [IPv6:2607:f8b0:4864:20::102c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 442D5C061760 for ; Thu, 27 May 2021 05:59:48 -0700 (PDT) Received: by mail-pj1-x102c.google.com with SMTP id q6so395215pjj.2 for ; Thu, 27 May 2021 05:59:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=SRSZmEUB055aoXyARExKzd2TpwKQHnstrT5lWFQ1tNw=; b=cNn9BMhhKGHjhwPPG2UgABfE69X+p0W4TmaTAB5q8Z4M5InvkWs8ATmOd5U8N6qWxU bxxYuDcvkgM0Xfxs9klHmfv1f/HVtMfBRvpREzONkBnjg/VPWHTAwBepUFiPDAMGuCVL v48zrB6kkCg3HK6W5PGMid66190hqov2+sC6I= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=SRSZmEUB055aoXyARExKzd2TpwKQHnstrT5lWFQ1tNw=; b=SLUzKj1Y1C31qFM+Mg73D4BjKT3ngUq2qMObO/ub2vfrThBeRq3W8J759BLEUf+1xe 285CFLa+VQ0xZc2jhoNrLUpPB18U4TzNEnttY3PjvrHU2+y6Cn/JIetohlMb0pkPNBvY 
DA7ViZt9CZLIRI/3fLxuDlJijg1JxcKvTpgqcV0FdxoPE3+XtOdTk5t90sFKGtaiNeNY pStbkM8kWWNgRe0RzZMr1RJonlTspyfAHavGv75FXbotBWGOfYzPb87XOfx7Q/A/+NI/ ZvdiVzKOHVJMriEcQEH1X0SjW3hWLr1FRXlAadauKpEpkc2N2yjQAcRlmho0ubh7XP1Z tuBg== X-Gm-Message-State: AOAM530xn/SBv+mGaGyfweLCGmG4LuaPb7ISwKXcPBZJkCnaFn3WVPAa 720bFYnXOyIN0T5EnQ2mVtknLQ== X-Google-Smtp-Source: ABdhPJzNxyImuZ9vLBAvulnmvXHqnDmLE3mk3E5uMUl1wrRQ78i9a00FaZ7G/OE27BAd+P8QYyZIUw== X-Received: by 2002:a17:903:22d1:b029:f2:67be:82fb with SMTP id y17-20020a17090322d1b02900f267be82fbmr3220638plg.15.1622120387710; Thu, 27 May 2021 05:59:47 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id o5sm1947925pfp.196.2021.05.27.05.59.40 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 05:59:47 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 06/15] swiotlb: Update is_swiotlb_buffer to add a struct device argument Date: Thu, 27 May 2021 20:58:36 +0800 Message-Id: <20210527125845.1852284-7-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Update is_swiotlb_buffer to add a struct device argument. This will be useful later to allow for restricted DMA pool. 
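As a quick illustration of the call-site change (the helper below is hypothetical, not taken from this patch; the real conversions are in the diff that follows), the device is now threaded through so that a per-device restricted pool, not only the global default pool, can be recognized:

/* Hypothetical DMA unmap helper showing the new signature. */
#include <linux/dma-direct.h>
#include <linux/swiotlb.h>

static void example_unmap(struct device *dev, dma_addr_t dma_addr, size_t size,
			  enum dma_data_direction dir, unsigned long attrs)
{
	phys_addr_t phys = dma_to_phys(dev, dma_addr);

	/* Before this patch: is_swiotlb_buffer(phys) */
	if (unlikely(is_swiotlb_buffer(dev, phys)))
		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
}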
Signed-off-by: Claire Chang --- drivers/iommu/dma-iommu.c | 12 ++++++------ drivers/xen/swiotlb-xen.c | 2 +- include/linux/swiotlb.h | 6 +++--- kernel/dma/direct.c | 6 +++--- kernel/dma/direct.h | 6 +++--- 5 files changed, 16 insertions(+), 16 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 7bcdd1205535..a5df35bfd150 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -504,7 +504,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr, __iommu_dma_unmap(dev, dma_addr, size); - if (unlikely(is_swiotlb_buffer(phys))) + if (unlikely(is_swiotlb_buffer(dev, phys))) swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs); } @@ -575,7 +575,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys, } iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask); - if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys)) + if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys)) swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs); return iova; } @@ -781,7 +781,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev, if (!dev_is_dma_coherent(dev)) arch_sync_dma_for_cpu(phys, size, dir); - if (is_swiotlb_buffer(phys)) + if (is_swiotlb_buffer(dev, phys)) swiotlb_sync_single_for_cpu(dev, phys, size, dir); } @@ -794,7 +794,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev, return; phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle); - if (is_swiotlb_buffer(phys)) + if (is_swiotlb_buffer(dev, phys)) swiotlb_sync_single_for_device(dev, phys, size, dir); if (!dev_is_dma_coherent(dev)) @@ -815,7 +815,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev, if (!dev_is_dma_coherent(dev)) arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir); - if (is_swiotlb_buffer(sg_phys(sg))) + if (is_swiotlb_buffer(dev, sg_phys(sg))) swiotlb_sync_single_for_cpu(dev, sg_phys(sg), sg->length, dir); } @@ -832,7 +832,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev, return; for_each_sg(sgl, sg, nelems, i) { - if (is_swiotlb_buffer(sg_phys(sg))) + if (is_swiotlb_buffer(dev, sg_phys(sg))) swiotlb_sync_single_for_device(dev, sg_phys(sg), sg->length, dir); diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 24d11861ac7d..0c4fb34f11ab 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr) * in our domain. Therefore _only_ check address within our domain. 
*/ if (pfn_valid(PFN_DOWN(paddr))) - return is_swiotlb_buffer(paddr); + return is_swiotlb_buffer(dev, paddr); return 0; } diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index b469f04cca26..2a6cca07540b 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -113,9 +113,9 @@ static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev) return io_tlb_default_mem; } -static inline bool is_swiotlb_buffer(phys_addr_t paddr) +static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) { - struct io_tlb_mem *mem = io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(dev); return mem && paddr >= mem->start && paddr < mem->end; } @@ -127,7 +127,7 @@ bool is_swiotlb_active(void); void __init swiotlb_adjust_size(unsigned long size); #else #define swiotlb_force SWIOTLB_NO_FORCE -static inline bool is_swiotlb_buffer(phys_addr_t paddr) +static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) { return false; } diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index f737e3347059..84c9feb5474a 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev, for_each_sg(sgl, sg, nents, i) { phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg)); - if (unlikely(is_swiotlb_buffer(paddr))) + if (unlikely(is_swiotlb_buffer(dev, paddr))) swiotlb_sync_single_for_device(dev, paddr, sg->length, dir); @@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev, if (!dev_is_dma_coherent(dev)) arch_sync_dma_for_cpu(paddr, sg->length, dir); - if (unlikely(is_swiotlb_buffer(paddr))) + if (unlikely(is_swiotlb_buffer(dev, paddr))) swiotlb_sync_single_for_cpu(dev, paddr, sg->length, dir); @@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev) bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr) { return !dev_is_dma_coherent(dev) || - is_swiotlb_buffer(dma_to_phys(dev, dma_addr)); + is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr)); } /** diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index 50afc05b6f1d..13e9e7158d94 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev, { phys_addr_t paddr = dma_to_phys(dev, addr); - if (unlikely(is_swiotlb_buffer(paddr))) + if (unlikely(is_swiotlb_buffer(dev, paddr))) swiotlb_sync_single_for_device(dev, paddr, size, dir); if (!dev_is_dma_coherent(dev)) @@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev, arch_sync_dma_for_cpu_all(); } - if (unlikely(is_swiotlb_buffer(paddr))) + if (unlikely(is_swiotlb_buffer(dev, paddr))) swiotlb_sync_single_for_cpu(dev, paddr, size, dir); if (dir == DMA_FROM_DEVICE) @@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr, if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) dma_direct_sync_single_for_cpu(dev, addr, size, dir); - if (unlikely(is_swiotlb_buffer(phys))) + if (unlikely(is_swiotlb_buffer(dev, phys))) swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs); } #endif /* _KERNEL_DMA_DIRECT_H */ From patchwork Thu May 27 12:58:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484589 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org 
(client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=cC0gm2fr; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSb217N4z9sWp for ; Thu, 27 May 2021 22:59:58 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235793AbhE0NBa (ORCPT ); Thu, 27 May 2021 09:01:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56230 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236127AbhE0NB3 (ORCPT ); Thu, 27 May 2021 09:01:29 -0400 Received: from mail-pf1-x42a.google.com (mail-pf1-x42a.google.com [IPv6:2607:f8b0:4864:20::42a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B1444C061763 for ; Thu, 27 May 2021 05:59:56 -0700 (PDT) Received: by mail-pf1-x42a.google.com with SMTP id d16so496354pfn.12 for ; Thu, 27 May 2021 05:59:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Mi4fzGLZAevgWkOrJd5AqwgkgacGpqhm6BO8mxVKx0M=; b=cC0gm2frLrJhtkllrartdi/IJ10NPfAYEcwXPEVpQV2EiZmW5wSJj1wWFHn6TpRrf2 HaqrP/OROCg0MWl2jqREsiM0RPu9M17H/istHD5+5fpOYHJ0bBQL7eAhc04shOaWsUE6 dFPx9rNBosC2Q2M9oPdfJQj93f1q/HMozBcKA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Mi4fzGLZAevgWkOrJd5AqwgkgacGpqhm6BO8mxVKx0M=; b=Hyrx1B+0VnZ3cCJyjGqodpTbtBFbnz3kRVBpSJOsqjnQc0SR4/2dySa5iT/sXCGlX9 XC8SDEUvTp+2/sG+G0m6lsKDj/ClDBa2tRMghvUa7WsWKza1tIVTNk4Klwi4TgXuRtM3 hxZyfAF+S13Z4rWlllPTzch6HYmvA+X47mmHGqpLsYKoOpmiPnjD9h2YmVXIEwAuG++j XMynFC/IKrqy8GjUr1QN3jY5TME4b4oPLMRbeYqqcSUcnOrek+wWVvZkf+XF7kMGd/vM E5vUc+QBtnU9GHQJ5yOmRNFnk4emtWw2g8vv03bzBscrEI1yjGoACuRgjGkIGl2SH7h2 67bQ== X-Gm-Message-State: AOAM533Ef0RxC6fk4kN3DFJBH5M8Nj9OSvNhsckywK7LIvd09c9FJXmp pThe1Yf2eSp/Lsiy4zU2eL3GbA== X-Google-Smtp-Source: ABdhPJzPlZfJRITEebp1ZlzGxNW3wGUcc2uIEGsv6usjxbdSMuBRFFThGIAIIadsB2Kn5HYh1wQOKg== X-Received: by 2002:a05:6a00:bcd:b029:2e5:694c:1c96 with SMTP id x13-20020a056a000bcdb02902e5694c1c96mr3610267pfu.53.1622120396288; Thu, 27 May 2021 05:59:56 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id z6sm1943314pgp.89.2021.05.27.05.59.49 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 05:59:55 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . 
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 07/15] swiotlb: Update is_swiotlb_active to add a struct device argument Date: Thu, 27 May 2021 20:58:37 +0800 Message-Id: <20210527125845.1852284-8-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Update is_swiotlb_active to add a struct device argument. This will be useful later to allow for restricted DMA pool. Signed-off-by: Claire Chang --- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +- drivers/gpu/drm/nouveau/nouveau_ttm.c | 2 +- drivers/pci/xen-pcifront.c | 2 +- include/linux/swiotlb.h | 4 ++-- kernel/dma/direct.c | 2 +- kernel/dma/swiotlb.c | 4 ++-- 6 files changed, 8 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index ce6b664b10aa..7d48c433446b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) max_order = MAX_ORDER; #ifdef CONFIG_SWIOTLB - if (is_swiotlb_active()) { + if (is_swiotlb_active(NULL)) { unsigned int max_segment; max_segment = swiotlb_max_segment(); diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c index 65430912ff72..d0e998b9e2e8 100644 --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c @@ -270,7 +270,7 @@ nouveau_ttm_init(struct nouveau_drm *drm) } #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86) - need_swiotlb = is_swiotlb_active(); + need_swiotlb = is_swiotlb_active(NULL); #endif ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev, diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c index b7a8f3a1921f..6d548ce53ce7 100644 --- a/drivers/pci/xen-pcifront.c +++ b/drivers/pci/xen-pcifront.c @@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev) spin_unlock(&pcifront_dev_lock); - if (!err && !is_swiotlb_active()) { + if (!err && !is_swiotlb_active(NULL)) { err = pci_xen_swiotlb_init_late(); if (err) dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n"); diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 2a6cca07540b..c530c976d18b 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -123,7 +123,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) void __init swiotlb_exit(void); unsigned int swiotlb_max_segment(void); size_t swiotlb_max_mapping_size(struct device *dev); -bool is_swiotlb_active(void); +bool is_swiotlb_active(struct device *dev); void __init swiotlb_adjust_size(unsigned long size); #else 
#define swiotlb_force SWIOTLB_NO_FORCE @@ -143,7 +143,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev) return SIZE_MAX; } -static inline bool is_swiotlb_active(void) +static inline bool is_swiotlb_active(struct device *dev) { return false; } diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 84c9feb5474a..7a88c34d0867 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask) size_t dma_direct_max_mapping_size(struct device *dev) { /* If SWIOTLB is active, use its maximum mapping size */ - if (is_swiotlb_active() && + if (is_swiotlb_active(dev) && (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE)) return swiotlb_max_mapping_size(dev); return SIZE_MAX; diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index d99b403144a8..b2b6503ecd88 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -662,9 +662,9 @@ size_t swiotlb_max_mapping_size(struct device *dev) return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE; } -bool is_swiotlb_active(void) +bool is_swiotlb_active(struct device *dev) { - return io_tlb_default_mem != NULL; + return get_io_tlb_mem(dev) != NULL; } EXPORT_SYMBOL_GPL(is_swiotlb_active); From patchwork Thu May 27 12:58:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484590 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=kcN4x88w; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSbJ28qjz9sRK for ; Thu, 27 May 2021 23:00:12 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236115AbhE0NBn (ORCPT ); Thu, 27 May 2021 09:01:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56284 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236158AbhE0NBk (ORCPT ); Thu, 27 May 2021 09:01:40 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E83BAC061760 for ; Thu, 27 May 2021 06:00:05 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id q16so2297963pls.6 for ; Thu, 27 May 2021 06:00:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=B7RrpH6OgXx/lTZyexKL+IWeAI7yB0yfCFBVtG2zWbw=; b=kcN4x88wnIqA2eddyBkI1s+5ccQseApdZLb8/RSr2+Fa520MxJZZ6NkX9r/ML2uBrj kvMXO+zNUanE1yYV9wWE+HheUK89EpCu8v6POAAGHEkAn5qXZCd1I7/7KAUnWJ2R0GFj AiEWOWj6q+T+/qXKxk0sPdE2cDhBD8s6mX/Vs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=B7RrpH6OgXx/lTZyexKL+IWeAI7yB0yfCFBVtG2zWbw=; b=QkOwJgeCsliAKIgmyWZXW9aHs097ywfBoN0e7/Jlj3584BstKAI00mIhaqAvC/NVBj 
yOgmJ1QqSEhTU9zp8yY+R4zqEfIxpfM+1n2db34nl5/6Vr44pCWmsIPolrCPnGwdT9L6 wEleB5/JYOFLs7H5x/EymmGe4yjB5bay1ufhXyh0SlfgMvx4GRnbvb1L9Mxc5IARpcvI jreWiPpUTOeSDjKDKL0VlCZjCWhcTmk2bWgXkj0lyGYDRIROITp4Fy0mWcgM9t4nEmKT Da+VJUEfy4/Vz5IlP/SbmyK5hLhmWJbWIY6WKyQ+I2qM1jM7CDT1XvGg90hRxTgxZO3u wb2w== X-Gm-Message-State: AOAM533Lw6Vrn4m5gFNgA/t58HJZgLK5Jpd/JIFYXlc81A/Vw528BDk7 42ZrYd/HWcuAimQHd5H4unfFsg== X-Google-Smtp-Source: ABdhPJx/S80brWa6yq+WbXDwFafH2VwKxtCuF8GRMQTIN21BHKQJi5x8f1RJPDSguTNDbXCIi1shzA== X-Received: by 2002:a17:90b:33d1:: with SMTP id lk17mr3678617pjb.154.1622120405435; Thu, 27 May 2021 06:00:05 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id m1sm2027143pgd.78.2021.05.27.05.59.58 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:00:04 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 08/15] swiotlb: Bounce data from/to restricted DMA pool if available Date: Thu, 27 May 2021 20:58:38 +0800 Message-Id: <20210527125845.1852284-9-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Regardless of swiotlb setting, the restricted DMA pool is preferred if available. The restricted DMA pools provide a basic level of protection against the DMA overwriting buffer contents at unexpected times. However, to protect against general data leakage and system memory corruption, the system needs to provide a way to lock down the memory access, e.g., MPU. Note that is_dev_swiotlb_force doesn't check if swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation behavior with default swiotlb will be changed by the following patch ("dma-direct: Allocate memory from restricted DMA pool if available").
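For clarity, a condensed sketch of the resulting dma-direct mapping decision, derived from the dma_direct_map_page() hunk in the diff below; the helper name is invented and the dma_capable()/bounce fallback is omitted:

/*
 * A device with a restricted pool always bounces through swiotlb_map(),
 * regardless of the global swiotlb_force setting or addressing limits.
 */
#include <linux/dma-direct.h>
#include <linux/swiotlb.h>

static dma_addr_t example_map(struct device *dev, phys_addr_t phys,
			      size_t size, enum dma_data_direction dir,
			      unsigned long attrs)
{
	if (unlikely(swiotlb_force == SWIOTLB_FORCE) || is_dev_swiotlb_force(dev))
		return swiotlb_map(dev, phys, size, dir, attrs);

	return phys_to_dma(dev, phys);
}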
Signed-off-by: Claire Chang --- include/linux/swiotlb.h | 13 +++++++++++++ kernel/dma/direct.c | 3 ++- kernel/dma/direct.h | 3 ++- kernel/dma/swiotlb.c | 8 ++++---- 4 files changed, 21 insertions(+), 6 deletions(-) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index c530c976d18b..0c5a18d9cf89 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -120,6 +120,15 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) return mem && paddr >= mem->start && paddr < mem->end; } +static inline bool is_dev_swiotlb_force(struct device *dev) +{ +#ifdef CONFIG_DMA_RESTRICTED_POOL + if (dev->dma_io_tlb_mem) + return true; +#endif /* CONFIG_DMA_RESTRICTED_POOL */ + return false; +} + void __init swiotlb_exit(void); unsigned int swiotlb_max_segment(void); size_t swiotlb_max_mapping_size(struct device *dev); @@ -131,6 +140,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) { return false; } +static inline bool is_dev_swiotlb_force(struct device *dev) +{ + return false; +} static inline void swiotlb_exit(void) { } diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 7a88c34d0867..078f7087e466 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev) { /* If SWIOTLB is active, use its maximum mapping size */ if (is_swiotlb_active(dev) && - (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE)) + (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE || + is_dev_swiotlb_force(dev))) return swiotlb_max_mapping_size(dev); return SIZE_MAX; } diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index 13e9e7158d94..f94813674e23 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -87,7 +87,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev, phys_addr_t phys = page_to_phys(page) + offset; dma_addr_t dma_addr = phys_to_dma(dev, phys); - if (unlikely(swiotlb_force == SWIOTLB_FORCE)) + if (unlikely(swiotlb_force == SWIOTLB_FORCE) || + is_dev_swiotlb_force(dev)) return swiotlb_map(dev, phys, size, dir, attrs); if (unlikely(!dma_capable(dev, dma_addr, size, true))) { diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index b2b6503ecd88..fa7f23fffc81 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -347,7 +347,7 @@ void __init swiotlb_exit(void) static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size, enum dma_data_direction dir) { - struct io_tlb_mem *mem = io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(dev); int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT; phys_addr_t orig_addr = mem->slots[index].orig_addr; size_t alloc_size = mem->slots[index].alloc_size; @@ -429,7 +429,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index) static int find_slots(struct device *dev, phys_addr_t orig_addr, size_t alloc_size) { - struct io_tlb_mem *mem = io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(dev); unsigned long boundary_mask = dma_get_seg_boundary(dev); dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(dev, mem->start) & boundary_mask; @@ -506,7 +506,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs) { - struct io_tlb_mem *mem = io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(dev); unsigned int offset = swiotlb_align_offset(dev, orig_addr); unsigned int i; int index; @@ -557,7 
+557,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, size_t mapping_size, enum dma_data_direction dir, unsigned long attrs) { - struct io_tlb_mem *mem = io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(hwdev); unsigned long flags; unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr); int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT; From patchwork Thu May 27 12:58:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484592 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=h56T1h46; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSbh1dWNz9sfG for ; Thu, 27 May 2021 23:00:32 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236099AbhE0NBy (ORCPT ); Thu, 27 May 2021 09:01:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236200AbhE0NBv (ORCPT ); Thu, 27 May 2021 09:01:51 -0400 Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com [IPv6:2607:f8b0:4864:20::429]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 82633C0613CE for ; Thu, 27 May 2021 06:00:14 -0700 (PDT) Received: by mail-pf1-x429.google.com with SMTP id y202so520148pfc.6 for ; Thu, 27 May 2021 06:00:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=w2PiQmD1cuLTolB92EeZQujQ/2l5n9gaMPeHb2aHq3I=; b=h56T1h46NuCHAXLEDOrFtGzPeGkXpqjaVqU2Id3/ScnxpcTFZFB+NgniQLXbdnzuUx QD0APsw9ZjoeJMpeIHj0tuMfdo+wG/iO7GBpb84AqyFHYbVnubqVV+LlaPHYPYnvVgbR tJ7ATJhWs9UvVNPUNH5hquxvKLueKDOrXLp9s= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=w2PiQmD1cuLTolB92EeZQujQ/2l5n9gaMPeHb2aHq3I=; b=LwXEsWcEclnFVBDuqA1F6z6g43Ikgje0mwy2PfzIAwiPy5dRvQ8lWSEnQ4/9kZhHlY JsSMUMNBmbrvnPcpsnFhnE+jEwKGFigjZYzY6m5o4FjhG436fXOertz0Qx1ZGYQH4Xy/ vCd8Cz3EgkHT2zWqdL+3XJf9ADKeho8KbLZpu5x9h9whxEqulhOMeY+JYOSh9r7mmd6m 9iD0n9WjRtAA9RS1Izs4Q3egpyFVrmCOAuExyaJDYt9oIIHMF3+F5mniB9b7uOow3Qdt 9nF0lHti5fKYzOl9U5pxFidG4RJm0oayKWSR6wIJSVfaog95HuoeDovEOGIkYHP1gTo3 Ga+Q== X-Gm-Message-State: AOAM530oOVH/Evw2OzmKm1bXBwxDnnRtJYagbhAWHvHvAmck+v0IfXeH hT1aPg/rggl82zEuCA4aE0FeEw== X-Google-Smtp-Source: ABdhPJwPKvUueO3/tWMtqAGArtGFSFkKVOYPoppJ9Pk4xAfchfGgkSrJtv/ImJCVwl16f+ZRC2hwVw== X-Received: by 2002:a63:5c01:: with SMTP id q1mr3600625pgb.447.1622120414085; Thu, 27 May 2021 06:00:14 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id m84sm1905689pfd.41.2021.05.27.06.00.06 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:00:13 -0700 (PDT) From: Claire Chang To: Rob Herring , 
mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 09/15] swiotlb: Move alloc_size to find_slots Date: Thu, 27 May 2021 20:58:39 +0800 Message-Id: <20210527125845.1852284-10-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Move the maintenance of alloc_size to find_slots for better code reusability later. Signed-off-by: Claire Chang --- kernel/dma/swiotlb.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index fa7f23fffc81..88b3471ac6a8 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -482,8 +482,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr, return -1; found: - for (i = index; i < index + nslots; i++) + for (i = index; i < index + nslots; i++) { mem->slots[i].list = 0; + mem->slots[i].alloc_size = + alloc_size - ((i - index) << IO_TLB_SHIFT); + } for (i = index - 1; io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 && mem->slots[i].list; i--) @@ -538,11 +541,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, * This is needed when we sync the memory. Then we sync the buffer if * needed. 
*/ - for (i = 0; i < nr_slots(alloc_size + offset); i++) { + for (i = 0; i < nr_slots(alloc_size + offset); i++) mem->slots[index + i].orig_addr = slot_addr(orig_addr, i); - mem->slots[index + i].alloc_size = - alloc_size - (i << IO_TLB_SHIFT); - } tlb_addr = slot_addr(mem->start, index) + offset; if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) From patchwork Thu May 27 12:58:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484593 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=FaFXCFWJ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSbh65vBz9sj1 for ; Thu, 27 May 2021 23:00:32 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236238AbhE0NCB (ORCPT ); Thu, 27 May 2021 09:02:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56386 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236225AbhE0NB7 (ORCPT ); Thu, 27 May 2021 09:01:59 -0400 Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com [IPv6:2607:f8b0:4864:20::62c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 37E1EC06138A for ; Thu, 27 May 2021 06:00:24 -0700 (PDT) Received: by mail-pl1-x62c.google.com with SMTP id b7so2317512plg.0 for ; Thu, 27 May 2021 06:00:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=gCZEoBF15DvIVGSc+3j8Uk27721eHnYQC5cQOW6yZk4=; b=FaFXCFWJLCD/gigBOGfY4F270yO0Alvb7EMZVKQJ9L8utcEPvn3XtnKI3BEMvmJtEK gYgjai80HZHDiRLWnq4ZnWFUPWMggGA/Wo+QuTb4wDh5MX+6KsOXKwWCEnZzHAaiuA3T 2/Aqi7L4LhIRGkedSPRMYz9qyuKketwP3p78k= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=gCZEoBF15DvIVGSc+3j8Uk27721eHnYQC5cQOW6yZk4=; b=MjJm4p1Uftf7rWfX3NOLrGN6qWB9i9v5YMHG9ajewP/sRwN9brqCjtWdV+Xe5r4Hso cdRaTvNjRz2I1Ksab2imUhZB5lsFT3ZZn5CoHZWIgaW7IM8zwj4BhxKAtSUV2vDaFINy xANs0BeOwXkYAvC8KLdXqcGAb2tV2uGD+UwizMzLDfuMDGnGLYoWXf6r4RmSAMhdEhZm wHRxL0Gt0bRaVDRZ/waJ0qkKV3AT9dpgPdUnm5or454gEZUNvDUw09uv/AFT6yk3DPaO +PzCFZ1i/BBxUiFzXN+c5MP2/1b/j8J9k0vbskIacJAXKaKpzt31e8P8YiLc8sW0Subq 6xcw== X-Gm-Message-State: AOAM531xSBleQ7OKYV/Xqb+kOihdR8q+EsWyf2F1afX0pL19GRb+ta9k RwRPgW/ACW2EH/P6YPVgkAS24g== X-Google-Smtp-Source: ABdhPJyRdfD5aP84oP+O3UZ9D5kP+7R8Jjf69TCVtMPqoqtl6dZXUjZXcnaxnn4OXnc5bmDzFDOgsA== X-Received: by 2002:a17:90a:74f:: with SMTP id s15mr4091622pje.90.1622120423164; Thu, 27 May 2021 06:00:23 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id 66sm2009117pgj.9.2021.05.27.06.00.15 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:00:22 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, 
Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 10/15] swiotlb: Refactor swiotlb_tbl_unmap_single Date: Thu, 27 May 2021 20:58:40 +0800 Message-Id: <20210527125845.1852284-11-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org> References: <20210527125845.1852284-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Add a new function, release_slots, to make the code reusable for supporting different bounce buffer pools, e.g. restricted DMA pool. Signed-off-by: Claire Chang --- kernel/dma/swiotlb.c | 35 ++++++++++++++++++++--------------- 1 file changed, 20 insertions(+), 15 deletions(-) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 88b3471ac6a8..c4fc2e444e7a 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -550,27 +550,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, return tlb_addr; } -/* - * tlb_addr is the physical address of the bounce buffer to unmap. - */ -void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, - size_t mapping_size, enum dma_data_direction dir, - unsigned long attrs) +static void release_slots(struct device *dev, phys_addr_t tlb_addr) { - struct io_tlb_mem *mem = get_io_tlb_mem(hwdev); + struct io_tlb_mem *mem = get_io_tlb_mem(dev); unsigned long flags; - unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr); + unsigned int offset = swiotlb_align_offset(dev, tlb_addr); int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT; int nslots = nr_slots(mem->slots[index].alloc_size + offset); int count, i; - /* - * First, sync the memory before unmapping the entry - */ - if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && - (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)) - swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE); - /* * Return the buffer to the free list by setting the corresponding * entries to indicate the number of contiguous entries available. @@ -605,6 +593,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, spin_unlock_irqrestore(&mem->lock, flags); } +/* + * tlb_addr is the physical address of the bounce buffer to unmap. 
+ */ +void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr, + size_t mapping_size, enum dma_data_direction dir, + unsigned long attrs) +{ + /* + * First, sync the memory before unmapping the entry + */ + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && + (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)) + swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE); + + release_slots(dev, tlb_addr); +} + void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr, size_t size, enum dma_data_direction dir) { From patchwork Thu May 27 13:03:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484601 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=ftmREmcx; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSgR0kTyz9sWc for ; Thu, 27 May 2021 23:03:47 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236016AbhE0NFS (ORCPT ); Thu, 27 May 2021 09:05:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57166 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235177AbhE0NFR (ORCPT ); Thu, 27 May 2021 09:05:17 -0400 Received: from mail-pf1-x42e.google.com (mail-pf1-x42e.google.com [IPv6:2607:f8b0:4864:20::42e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C28F1C061574 for ; Thu, 27 May 2021 06:03:44 -0700 (PDT) Received: by mail-pf1-x42e.google.com with SMTP id d16so505100pfn.12 for ; Thu, 27 May 2021 06:03:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:mime-version :content-transfer-encoding; bh=m7+uI7rGPgfSKn+aVdUzTy5/dZn0TD9snxYJvtDtWr8=; b=ftmREmcxrJM5ghdWu3uu6RoVOQmn/FmRG0fXdD0sAjbx0bczGkzwvRTurlvaYuIvJW x3lcL0ttraMgqRb6RQUndSyEBiGIY3jtWjTqxgCUrCG/1jwY14bstcfPs5KZ9RhvRj5W T2wJrb10vnObIKglZGitAlnSSqiKkQCQp8AYQ= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version :content-transfer-encoding; bh=m7+uI7rGPgfSKn+aVdUzTy5/dZn0TD9snxYJvtDtWr8=; b=E/R822uymTxYP+Er1u6l3bgHQwb8Fs2mA8VL5mm7Cx9qRp0+cTfwN8boTT5R/ApeVM wqJjfEd+nJorSewVaSHFFp9GXx8KZEai8fWRQhVs3geKrzq0u3QXBtwOjmtVLTthEEgu n6U8ncl+Z9NvWf9buKU4Jtnam58FtSLsZYy85Wk3ldSh8l6oiQgOf2CHA/+s1B2H+Htu txsi9iAHv5je1haKf022aNdwNcLq/enYZA5QZy3wra9ZcC09y3Ew68sN2ek7tGkrxtuy DHeuGfrvGsPUEdBjFtZ3kpGWV5j3qSSZpC+/oHd1b5mqn+AkF5bBq6ycFoKVc4dEc+GE TSfQ== X-Gm-Message-State: AOAM531mBz2vQmBzIBCAE1TW+UIA40NDSQSlvd66A4kR8pAw9TWPc9WH qENztCJP9n362iwT2dcYdQCGzQ== X-Google-Smtp-Source: ABdhPJwFMAyQyClkgtD2WU3AOT0NSiYpDWGkYsiRLX3TSIJ7iJhjuPVvZAr31+YEfks0HMDDaEOArA== X-Received: by 2002:a05:6a00:216a:b029:2df:3461:4ac3 with SMTP id r10-20020a056a00216ab02902df34614ac3mr3590982pff.80.1622120624345; Thu, 27 May 2021 06:03:44 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id 
w15sm2015155pjy.1.2021.05.27.06.03.34 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:03:41 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 11/15] dma-direct: Add a new wrapper __dma_direct_free_pages() Date: Thu, 27 May 2021 21:03:25 +0800 Message-Id: <20210527130329.1853588-1-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Add a new wrapper __dma_direct_free_pages() that will be useful later for swiotlb_free(). Signed-off-by: Claire Chang --- kernel/dma/direct.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 078f7087e466..eb4098323bbc 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -75,6 +75,12 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size) min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit); } +static void __dma_direct_free_pages(struct device *dev, struct page *page, + size_t size) +{ + dma_free_contiguous(dev, page, size); +} + static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size, gfp_t gfp) { @@ -237,7 +243,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, return NULL; } out_free_pages: - dma_free_contiguous(dev, page, size); + __dma_direct_free_pages(dev, page, size); return NULL; } @@ -273,7 +279,7 @@ void dma_direct_free(struct device *dev, size_t size, else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED)) arch_dma_clear_uncached(cpu_addr, size); - dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size); + __dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size); } struct page *dma_direct_alloc_pages(struct device *dev, size_t size, @@ -310,7 +316,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size, *dma_handle = phys_to_dma_direct(dev, page_to_phys(page)); return page; out_free_pages: - dma_free_contiguous(dev, page, size); + __dma_direct_free_pages(dev, page, size); return NULL; } @@ -329,7 +335,7 @@ void dma_direct_free_pages(struct device *dev, size_t size, if (force_dma_unencrypted(dev)) set_memory_encrypted((unsigned long)vaddr, 1 << page_order); - dma_free_contiguous(dev, page, size); + __dma_direct_free_pages(dev, page, size); } #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \ From patchwork 
Thu May 27 13:03:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484602 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=Fc1gvJ3n; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSgg6mrHz9sW7 for ; Thu, 27 May 2021 23:03:59 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236225AbhE0NFa (ORCPT ); Thu, 27 May 2021 09:05:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57222 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236248AbhE0NF2 (ORCPT ); Thu, 27 May 2021 09:05:28 -0400 Received: from mail-pf1-x431.google.com (mail-pf1-x431.google.com [IPv6:2607:f8b0:4864:20::431]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5007EC061761 for ; Thu, 27 May 2021 06:03:54 -0700 (PDT) Received: by mail-pf1-x431.google.com with SMTP id g18so539804pfr.2 for ; Thu, 27 May 2021 06:03:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Hq3dk3sdVo1Aiyz3oEGqQdxJF9bgRO7on7tVtGrwREA=; b=Fc1gvJ3n84xz3JxOO0N55rsBPx0BRwCnJqQGnPap/zK0pI1U/L+kL/kcQzWLQ84Riq 0j/pqlcmWQYQydyVIRDN+PwShThBXOPgJ9Tt5RavIwKiqavfO6y+wHM9/KEWCh+zM9+b bTIS7XzvK0G3Tb5c5LYB9BTAV5sVenH8dr8s8= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Hq3dk3sdVo1Aiyz3oEGqQdxJF9bgRO7on7tVtGrwREA=; b=oQNv+HbUpzUJ3EsrOOjKkVmCYfe/lPsykMdUGG1LW/wVY0omd0pk+Ir1EQfe15PyuG QsM5URqF7Y/5VOIi4JSnARU9DGEaeQLORgMUeMD2BgevzMeSHdXckIFG9H1AnXjgQObN pWQSa5yk654lT6fXQPjP9BOVKsthQQ2JIjtXDf0IbemG81dCXrO//UnWzWV0gW2r5FLj 71lv4nTCgdArRAUIJMI7Utyq8kWkA7lmm8Gm/t8dZ4vu70Q6rMZimUlIR6GByBxmgI7u VsipPPpMcIqeaQWhWQFdzLNsLOASnboomPWrsxFdaptcOB9AluqMBESXOwLuItfIilxU YJRQ== X-Gm-Message-State: AOAM533UNyUJjq7H5fuxxNTQVqJp3dIYI6MyfBkT/b86J/Q/LY0bCS0O OOJZSoMg0GKQr7JZ0y0W31J9/Q== X-Google-Smtp-Source: ABdhPJwek6kyZnVLp6SkoYltLcwOXMIMY+AUL7OjyvyvB2fw/uVSfHyOeaCWcshB7EYERK97204FCg== X-Received: by 2002:aa7:8588:0:b029:28e:dfa1:e31a with SMTP id w8-20020aa785880000b029028edfa1e31amr3587263pfn.77.1622120633858; Thu, 27 May 2021 06:03:53 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id p11sm2096443pjo.19.2021.05.27.06.03.46 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:03:53 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, 
bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 12/15] swiotlb: Add restricted DMA alloc/free support. Date: Thu, 27 May 2021 21:03:26 +0800 Message-Id: <20210527130329.1853588-2-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org> References: <20210527130329.1853588-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Add the functions, swiotlb_{alloc,free} to support the memory allocation from restricted DMA pool. Signed-off-by: Claire Chang --- include/linux/swiotlb.h | 4 ++++ kernel/dma/swiotlb.c | 35 +++++++++++++++++++++++++++++++++-- 2 files changed, 37 insertions(+), 2 deletions(-) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 0c5a18d9cf89..e8cf49bd90c5 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -134,6 +134,10 @@ unsigned int swiotlb_max_segment(void); size_t swiotlb_max_mapping_size(struct device *dev); bool is_swiotlb_active(struct device *dev); void __init swiotlb_adjust_size(unsigned long size); +#ifdef CONFIG_DMA_RESTRICTED_POOL +struct page *swiotlb_alloc(struct device *dev, size_t size); +bool swiotlb_free(struct device *dev, struct page *page, size_t size); +#endif /* CONFIG_DMA_RESTRICTED_POOL */ #else #define swiotlb_force SWIOTLB_NO_FORCE static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index c4fc2e444e7a..648bfdde4b0c 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -457,8 +457,9 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr, index = wrap = wrap_index(mem, ALIGN(mem->index, stride)); do { - if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) != - (orig_addr & iotlb_align_mask)) { + if (orig_addr && + (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) != + (orig_addr & iotlb_align_mask)) { index = wrap_index(mem, index + 1); continue; } @@ -704,6 +705,36 @@ late_initcall(swiotlb_create_default_debugfs); #endif #ifdef CONFIG_DMA_RESTRICTED_POOL +struct page *swiotlb_alloc(struct device *dev, size_t size) +{ + struct io_tlb_mem *mem = dev->dma_io_tlb_mem; + phys_addr_t tlb_addr; + int index; + + if (!mem) + return NULL; + + index = find_slots(dev, 0, size); + if (index == -1) + return NULL; + + tlb_addr = slot_addr(mem->start, index); + + return pfn_to_page(PFN_DOWN(tlb_addr)); +} + +bool swiotlb_free(struct device *dev, struct page *page, size_t size) +{ + phys_addr_t tlb_addr = page_to_phys(page); + + if (!is_swiotlb_buffer(dev, tlb_addr)) + return false; + + release_slots(dev, tlb_addr); + + return true; +} + static int rmem_swiotlb_device_init(struct reserved_mem *rmem, struct device *dev) { From patchwork Thu May 27 13:03:27 2021 Content-Type: 
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484604 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=IyvYtT1A; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSh84Xjxz9sT6 for ; Thu, 27 May 2021 23:04:24 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236297AbhE0NFz (ORCPT ); Thu, 27 May 2021 09:05:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57332 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235896AbhE0NFu (ORCPT ); Thu, 27 May 2021 09:05:50 -0400 Received: from mail-pg1-x52a.google.com (mail-pg1-x52a.google.com [IPv6:2607:f8b0:4864:20::52a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E614C06138D for ; Thu, 27 May 2021 06:04:03 -0700 (PDT) Received: by mail-pg1-x52a.google.com with SMTP id 29so3628557pgu.11 for ; Thu, 27 May 2021 06:04:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=MYsbpvk1elzDffjY4lVqlSBet3lNsQF2QtHREtnNje4=; b=IyvYtT1AB6E4xwxKSFDYkZ5n98xR0l1LZiUkJW0MW5CgH/i5aGIsnRF/JPxxLV994w 6zBxh1X3gHavrt/AsvfReKnILPfzC/l6kkFOeK3VrBIogUw16ui2Ro+3Hw3QPFy2pWGM +/sNzLuWK+E6zh1p0Gxt9RmiN+wGdEuT6cNDQ= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=MYsbpvk1elzDffjY4lVqlSBet3lNsQF2QtHREtnNje4=; b=cRR1ht0jsrneSUhe138Fpzy95UAfdNegMd5r2HMLtWsawcn+gz48SVJVWsVXhxeJfP Qg/flYwLmxPl9Sry4IkQPSndLxN19INsiHruH6G9+n5SG57teW2nSSouxA+rUK7dJAam Q2RComVEizNz8tNf7Ft9kxG8AqyEoC73a2LKIOIuPYBVJprDf26B/ubcDBS6JUb+Uww9 DEiKeTEWoQiEiFPRbChaPlmQkiT69+c8i2wv51fvb/7MpQuaombRom0R5HBTe5zv2PA1 YY4n1XB/iq5uq3LKnyCkfgE9M8s2C6j1yly/P+oxLFf29uXbwe0y7Uj5GgXBCGLdMZ5X QdPA== X-Gm-Message-State: AOAM531x70I0kC8AVvdIqZCyk8cOoBTgtnZZ5optt3RO7IscxyF1B8q9 t+Ks9wtP4acfwCoAt4yZYJBNnQ== X-Google-Smtp-Source: ABdhPJxiOdBGPcWtjBuPQLZxflfz/miVWdNfGLJRIpkM261jkyasjOonUIh30AqgExiX3771SeFH4Q== X-Received: by 2002:a63:4d11:: with SMTP id a17mr3584089pgb.266.1622120642890; Thu, 27 May 2021 06:04:02 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id n1sm1934646pjv.40.2021.05.27.06.03.55 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:04:02 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . 
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 13/15] dma-direct: Allocate memory from restricted DMA pool if available Date: Thu, 27 May 2021 21:03:27 +0800 Message-Id: <20210527130329.1853588-3-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org> References: <20210527130329.1853588-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The restricted DMA pool is preferred if available. The restricted DMA pools provide a basic level of protection against the DMA overwriting buffer contents at unexpected times. However, to protect against general data leakage and system memory corruption, the system needs to provide a way to lock down the memory access, e.g., MPU. Note that since coherent allocation needs remapping, one must set up another device coherent pool by shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic coherent allocation. Signed-off-by: Claire Chang --- kernel/dma/direct.c | 38 +++++++++++++++++++++++++++++--------- 1 file changed, 29 insertions(+), 9 deletions(-) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index eb4098323bbc..0d521f78c7b9 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size) static void __dma_direct_free_pages(struct device *dev, struct page *page, size_t size) { +#ifdef CONFIG_DMA_RESTRICTED_POOL + if (swiotlb_free(dev, page, size)) + return; +#endif dma_free_contiguous(dev, page, size); } @@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size, gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask, &phys_limit); - page = dma_alloc_contiguous(dev, size, gfp); + +#ifdef CONFIG_DMA_RESTRICTED_POOL + page = swiotlb_alloc(dev, size); + if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) { + __dma_direct_free_pages(dev, page, size); + page = NULL; + } +#endif + + if (!page) + page = dma_alloc_contiguous(dev, size, gfp); if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) { dma_free_contiguous(dev, page, size); page = NULL; @@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, gfp |= __GFP_NOWARN; if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && - !force_dma_unencrypted(dev)) { + !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) { page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO); if (!page) return NULL; @@ -161,18 +175,23 @@ void *dma_direct_alloc(struct device *dev, size_t size, } if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) && - !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && - !dev_is_dma_coherent(dev)) + !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) && + !is_dev_swiotlb_force(dev)) return arch_dma_alloc(dev, size, dma_handle, gfp, attrs); /* * 
Remapping or decrypting memory may block. If either is required and * we can't block, allocate the memory from the atomic pools. + * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must + * set up another device coherent pool by shared-dma-pool and use + * dma_alloc_from_dev_coherent instead. */ if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) && !gfpflags_allow_blocking(gfp) && (force_dma_unencrypted(dev) || - (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev)))) + (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && + !dev_is_dma_coherent(dev))) && + !is_dev_swiotlb_force(dev)) return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp); /* we always manually zero the memory once we are done */ @@ -253,15 +272,15 @@ void dma_direct_free(struct device *dev, size_t size, unsigned int page_order = get_order(size); if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && - !force_dma_unencrypted(dev)) { + !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) { /* cpu_addr is a struct page cookie, not a kernel address */ dma_free_contiguous(dev, cpu_addr, size); return; } if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) && - !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && - !dev_is_dma_coherent(dev)) { + !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) && + !is_dev_swiotlb_force(dev)) { arch_dma_free(dev, size, cpu_addr, dma_addr, attrs); return; } @@ -289,7 +308,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size, void *ret; if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) && - force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp)) + force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) && + !is_dev_swiotlb_force(dev)) return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp); page = __dma_direct_alloc_pages(dev, size, gfp); From patchwork Thu May 27 13:03:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484605 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=GAjg2iwD; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrSh91pVsz9sWQ for ; Thu, 27 May 2021 23:04:25 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236255AbhE0NF4 (ORCPT ); Thu, 27 May 2021 09:05:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236261AbhE0NFv (ORCPT ); Thu, 27 May 2021 09:05:51 -0400 Received: from mail-pg1-x52e.google.com (mail-pg1-x52e.google.com [IPv6:2607:f8b0:4864:20::52e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 131FEC061342 for ; Thu, 27 May 2021 06:04:12 -0700 (PDT) Received: by mail-pg1-x52e.google.com with SMTP id r1so3644326pgk.8 for ; Thu, 27 May 2021 06:04:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=t98IQ0DRt9rdYTgtaruQg3/PAS3qE8eaMmCBaIqRh14=; 
From: Claire Chang
Subject: [PATCH v8 14/15] dt-bindings: of: Add restricted DMA pool
Date: Thu, 27 May 2021 21:03:28 +0800
Message-Id: <20210527130329.1853588-4-tientzu@chromium.org>
In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org>
References: <20210527130329.1853588-1-tientzu@chromium.org>
X-Mailing-List: linux-pci@vger.kernel.org

Introduce the new compatible string, restricted-dma-pool, for restricted DMA.
The address and length of the restricted DMA memory region are specified by a
restricted-dma-pool child node of the reserved-memory node.
Signed-off-by: Claire Chang --- .../reserved-memory/reserved-memory.txt | 36 +++++++++++++++++-- 1 file changed, 33 insertions(+), 3 deletions(-) diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt index e8d3096d922c..46804f24df05 100644 --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt @@ -51,6 +51,23 @@ compatible (optional) - standard definition used as a shared pool of DMA buffers for a set of devices. It can be used by an operating system to instantiate the necessary pool management subsystem if necessary. + - restricted-dma-pool: This indicates a region of memory meant to be + used as a pool of restricted DMA buffers for a set of devices. The + memory region would be the only region accessible to those devices. + When using this, the no-map and reusable properties must not be set, + so the operating system can create a virtual mapping that will be used + for synchronization. The main purpose for restricted DMA is to + mitigate the lack of DMA access control on systems without an IOMMU, + which could result in the DMA accessing the system memory at + unexpected times and/or unexpected addresses, possibly leading to data + leakage or corruption. The feature on its own provides a basic level + of protection against the DMA overwriting buffer contents at + unexpected times. However, to protect against general data leakage and + system memory corruption, the system needs to provide way to lock down + the memory access, e.g., MPU. Note that since coherent allocation + needs remapping, one must set up another device coherent pool by + shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic + coherent allocation. - vendor specific string in the form ,[-] no-map (optional) - empty property - Indicates the operating system must not create a virtual mapping @@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding Example ------- -This example defines 3 contiguous regions are defined for Linux kernel: +This example defines 4 contiguous regions for Linux kernel: one default of all device drivers (named linux,cma@72000000 and 64MiB in size), -one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and -one for multimedia processing (named multimedia-memory@77000000, 64MiB). +one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), +one for multimedia processing (named multimedia-memory@77000000, 64MiB), and +one for restricted dma pool (named restricted_dma_reserved@0x50000000, 64MiB). / { #address-cells = <1>; @@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB). compatible = "acme,multimedia-memory"; reg = <0x77000000 0x4000000>; }; + + restricted_dma_reserved: restricted_dma_reserved { + compatible = "restricted-dma-pool"; + reg = <0x50000000 0x4000000>; + }; }; /* ... */ @@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB). memory-region = <&multimedia_reserved>; /* ... */ }; + + pcie_device: pcie_device@0,0 { + reg = <0x83010000 0x0 0x00000000 0x0 0x00100000 + 0x83010000 0x0 0x00100000 0x0 0x00100000>; + memory-region = <&restricted_dma_mem_reserved>; + /* ... 
*/ + }; }; From patchwork Thu May 27 13:03:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claire Chang X-Patchwork-Id: 1484608 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=chromium.org header.i=@chromium.org header.a=rsa-sha256 header.s=google header.b=RU9c38RK; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4FrShL3r3Sz9sXH for ; Thu, 27 May 2021 23:04:34 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236303AbhE0NGF (ORCPT ); Thu, 27 May 2021 09:06:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57362 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236252AbhE0NF4 (ORCPT ); Thu, 27 May 2021 09:05:56 -0400 Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com [IPv6:2607:f8b0:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 969C4C061761 for ; Thu, 27 May 2021 06:04:20 -0700 (PDT) Received: by mail-pg1-x535.google.com with SMTP id i5so3680149pgm.0 for ; Thu, 27 May 2021 06:04:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=g9sAWaw9/Ru404/gNZK/9yCFJ+COY6ShgEN5yl9rius=; b=RU9c38RKOmRl8qLPe9nfaNayIMAo/KhsxRPQb+p1I2/vkq857xJH1yHb2hsJr/oLI5 OzLGwV9+GOm93mot3oVKzRPIFGItH+esVlARoN4qqQuH+rUWPo604KbT7KtsniJEDbEo HQFJYd6CK0U3jnTgrILmcYyxGPhhzyPRwcTRo= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=g9sAWaw9/Ru404/gNZK/9yCFJ+COY6ShgEN5yl9rius=; b=MowfSAHndqxmCR+M/D2p77FFeT3otRc5be57+kmmKLYH/na3X54Wer9zOZKb3PdEkC mU4NXTiMBX+s0HBF6basWGlQYaP0Y+mRizXkGTrdEbpYUaMwSMqafJzRkjVc9FvjztBo yTs68usHQnXWxQZzNNbHjJNQ6eXQszxOIjlYsj7ZJVwOBcoQ4a/CrTHtwEg/3Zljdaue 08mpN3SoN3X2rxzSbnP9/RFr+WVJfAtx1DGmeuvcW2Zi5JAlS0BrvZoBu8pP8zV0MlMq 9KRERU6gij9RMu/ktS7qLLAXju6vrQsMXRrVnBYHpvE/BRLQZF8E6GJseTAhkarcLtHy E3WA== X-Gm-Message-State: AOAM530ANpjCBl6r6zvSS8vsfjQevAI+NKRibfB4WdqyddToVqjUXc0/ WBOHr1riD5X8GMvQpyNfk6tFBg== X-Google-Smtp-Source: ABdhPJw+tNdufRtYdRKBadLokBr1RMenJ8TRt7icb84uhhJq7TwV1n1jRizQR2Rn5o3HccoeyQdcjw== X-Received: by 2002:aa7:9438:0:b029:2df:258e:7f10 with SMTP id y24-20020aa794380000b02902df258e7f10mr3229291pfo.79.1622120660147; Thu, 27 May 2021 06:04:20 -0700 (PDT) Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70]) by smtp.gmail.com with UTF8SMTPSA id o3sm1937619pgh.22.2021.05.27.06.04.12 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Thu, 27 May 2021 06:04:19 -0700 (PDT) From: Claire Chang To: Rob Herring , mpe@ellerman.id.au, Joerg Roedel , Will Deacon , Frank Rowand , Konrad Rzeszutek Wilk , boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig , Marek Szyprowski Cc: benh@kernel.crashing.org, paulus@samba.org, "list@263.net:IOMMU DRIVERS" , sstabellini@kernel.org, Robin Murphy , grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding , 
mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH , Saravana Kannan , "Rafael J . Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, tientzu@chromium.org, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: [PATCH v8 15/15] of: Add plumbing for restricted DMA pool Date: Thu, 27 May 2021 21:03:29 +0800 Message-Id: <20210527130329.1853588-5-tientzu@chromium.org> X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org> References: <20210527130329.1853588-1-tientzu@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org If a device is not behind an IOMMU, we look up the device node and set up the restricted DMA when the restricted-dma-pool is presented. Signed-off-by: Claire Chang --- drivers/of/address.c | 33 +++++++++++++++++++++++++++++++++ drivers/of/device.c | 3 +++ drivers/of/of_private.h | 6 ++++++ 3 files changed, 42 insertions(+) diff --git a/drivers/of/address.c b/drivers/of/address.c index aca94c348bd4..6cc7eaaf7e11 100644 --- a/drivers/of/address.c +++ b/drivers/of/address.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -1112,6 +1113,38 @@ bool of_dma_is_coherent(struct device_node *np) } EXPORT_SYMBOL_GPL(of_dma_is_coherent); +int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np) +{ + struct device_node *node, *of_node = dev->of_node; + int count, i; + + count = of_property_count_elems_of_size(of_node, "memory-region", + sizeof(u32)); + /* + * If dev->of_node doesn't exist or doesn't contain memory-region, try + * the OF node having DMA configuration. + */ + if (count <= 0) { + of_node = np; + count = of_property_count_elems_of_size( + of_node, "memory-region", sizeof(u32)); + } + + for (i = 0; i < count; i++) { + node = of_parse_phandle(of_node, "memory-region", i); + /* + * There might be multiple memory regions, but only one + * restricted-dma-pool region is allowed. 
+ */ + if (of_device_is_compatible(node, "restricted-dma-pool") && + of_device_is_available(node)) + return of_reserved_mem_device_init_by_idx(dev, of_node, + i); + } + + return 0; +} + /** * of_mmio_is_nonposted - Check if device uses non-posted MMIO * @np: device node diff --git a/drivers/of/device.c b/drivers/of/device.c index c5a9473a5fb1..2defdca418ec 100644 --- a/drivers/of/device.c +++ b/drivers/of/device.c @@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np, arch_setup_dma_ops(dev, dma_start, size, iommu, coherent); + if (!iommu) + return of_dma_set_restricted_buffer(dev, np); + return 0; } EXPORT_SYMBOL_GPL(of_dma_configure_id); diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h index d717efbd637d..8fde97565d11 100644 --- a/drivers/of/of_private.h +++ b/drivers/of/of_private.h @@ -163,12 +163,18 @@ struct bus_dma_region; #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA) int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map); +int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np); #else static inline int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map) { return -ENODEV; } +static inline int of_dma_set_restricted_buffer(struct device *dev, + struct device_node *np) +{ + return -ENODEV; +} #endif #endif /* _LINUX_OF_PRIVATE_H */
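For reference, of_dma_set_restricted_buffer() above ends in
of_reserved_mem_device_init_by_idx(), which invokes the reserved-memory
region's device-init callback. That callback, rmem_swiotlb_device_init(), is
only visible truncated in patch 12 of this excerpt; a rough sketch of what it
is assumed to do, reusing names from the series (swiotlb_init_restricted_pool()
is hypothetical shorthand for the pool set-up the real patch performs):

static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
				    struct device *dev)
{
	struct io_tlb_mem *mem = rmem->priv;

	/*
	 * Lazily turn the reserved region (rmem->base, rmem->size) into an
	 * io_tlb_mem pool the first time a device claims it, then cache it
	 * in rmem->priv so that further devices share the same pool.
	 */
	if (!mem) {
		/* Hypothetical helper standing in for the real slot setup. */
		mem = swiotlb_init_restricted_pool(rmem->base, rmem->size);
		if (!mem)
			return -ENOMEM;
		rmem->priv = mem;
	}

	/* Every DMA mapping of this device now bounces through the pool. */
	dev->dma_io_tlb_mem = mem;

	return 0;
}

This is where the device-tree plumbing added here meets the swiotlb changes
earlier in the series: once dev->dma_io_tlb_mem is set, is_dev_swiotlb_force()
returns true, streaming DMA always bounces through the restricted pool, and
dma_direct_alloc() tries the pool first (patch 13) before falling back to the
usual contiguous allocation.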