From patchwork Mon Mar 14 07:31:15 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 1604923
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org
Subject: [PATCH 01/15] dma-direct: use is_swiotlb_active in dma_direct_map_page
Date: Mon, 14 Mar 2022 08:31:15 +0100
Message-Id: <20220314073129.1862284-2-hch@lst.de>
In-Reply-To: <20220314073129.1862284-1-hch@lst.de>
References: <20220314073129.1862284-1-hch@lst.de>
Use the more specific is_swiotlb_active check instead of checking the
global swiotlb_force variable.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Anshuman Khandual 
---
 kernel/dma/direct.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 4632b0f4f72eb..4dc16e08c7e1a 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -91,7 +91,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
-		if (swiotlb_force != SWIOTLB_NO_FORCE)
+		if (is_swiotlb_active(dev))
 			return swiotlb_map(dev, phys, size, dir, attrs);
 
 		dev_WARN_ONCE(dev, 1,

From patchwork Mon Mar 14 07:31:16 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 1604924
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad
Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org
Subject: [PATCH 02/15] swiotlb: make swiotlb_exit a no-op if SWIOTLB_FORCE is set
Date: Mon, 14 Mar 2022 08:31:16 +0100
Message-Id: <20220314073129.1862284-3-hch@lst.de>
In-Reply-To: <20220314073129.1862284-1-hch@lst.de>
References: <20220314073129.1862284-1-hch@lst.de>

If force bouncing is enabled we can't release the buffers.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Anshuman Khandual 
---
 kernel/dma/swiotlb.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 908eac2527cb1..af9d257501a64 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -369,6 +369,9 @@ void __init swiotlb_exit(void)
 	unsigned long tbl_vaddr;
 	size_t tbl_size, slots_size;
 
+	if (swiotlb_force == SWIOTLB_FORCE)
+		return;
+
 	if (!mem->nslabs)
 		return;

From patchwork Mon Mar 14 07:31:17 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 1604922
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org
Subject: [PATCH 03/15] swiotlb: simplify swiotlb_max_segment
Date: Mon, 14 Mar 2022 08:31:17 +0100
Message-Id: <20220314073129.1862284-4-hch@lst.de>
In-Reply-To: <20220314073129.1862284-1-hch@lst.de>
References: <20220314073129.1862284-1-hch@lst.de>

Remove the bogus Xen override that was usually larger than the actual
size and just calculate the value on demand.  Note that
swiotlb_max_segment still doesn't make sense as an interface and should
eventually be removed.
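For reference, here is a minimal stand-alone sketch of the arithmetic the
patch switches to.  This is illustrative user-space C, not the kernel code
below: the local IO_TLB_SHIFT, PAGE_SIZE and rounddown() macros and the
fixed nslabs value are simplified stand-ins assumed to mirror the usual
kernel defaults (2 KiB slabs, 4 KiB pages, 64 MiB default pool), while the
real function reads io_tlb_default_mem.nslabs.

/*
 * Sketch of the new on-demand calculation: the reported maximum segment
 * is just the total SWIOTLB size rounded down to a page multiple, or 0
 * when no bounce buffer exists.
 */
#include <stdio.h>

#define IO_TLB_SHIFT	11			/* each slab is 2 KiB */
#define PAGE_SIZE	4096UL
#define rounddown(x, y)	(((x) / (y)) * (y))

static unsigned long nslabs = 32768;		/* 64 MiB pool -> 32768 slabs */

static unsigned long max_segment(void)
{
	if (!nslabs)				/* no bounce buffer configured */
		return 0;
	return rounddown(nslabs << IO_TLB_SHIFT, PAGE_SIZE);
}

int main(void)
{
	printf("max segment: %lu bytes\n", max_segment());	/* prints 67108864 */
	return 0;
}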
Signed-off-by: Christoph Hellwig Reviewed-by: Anshuman Khandual --- drivers/xen/swiotlb-xen.c | 2 -- include/linux/swiotlb.h | 1 - kernel/dma/swiotlb.c | 20 +++----------------- 3 files changed, 3 insertions(+), 20 deletions(-) diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 47aebd98f52f5..485cd06ed39e7 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -202,7 +202,6 @@ int xen_swiotlb_init(void) rc = swiotlb_late_init_with_tbl(start, nslabs); if (rc) return rc; - swiotlb_set_max_segment(PAGE_SIZE); return 0; error: if (nslabs > 1024 && repeat--) { @@ -254,7 +253,6 @@ void __init xen_swiotlb_init_early(void) if (swiotlb_init_with_tbl(start, nslabs, true)) panic("Cannot allocate SWIOTLB buffer"); - swiotlb_set_max_segment(PAGE_SIZE); } #endif /* CONFIG_X86 */ diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index f6c3638255d54..9fb3a568f0c51 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -164,7 +164,6 @@ static inline void swiotlb_adjust_size(unsigned long size) #endif /* CONFIG_SWIOTLB */ extern void swiotlb_print_info(void); -extern void swiotlb_set_max_segment(unsigned int); #ifdef CONFIG_DMA_RESTRICTED_POOL struct page *swiotlb_alloc(struct device *dev, size_t size); diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index af9d257501a64..b75943c2a0a0e 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -68,12 +68,6 @@ struct io_tlb_mem io_tlb_default_mem; phys_addr_t swiotlb_unencrypted_base; -/* - * Max segment that we can provide which (if pages are contingous) will - * not be bounced (unless SWIOTLB_FORCE is set). - */ -static unsigned int max_segment; - static unsigned long default_nslabs = IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT; static int __init @@ -97,18 +91,12 @@ early_param("swiotlb", setup_io_tlb_npages); unsigned int swiotlb_max_segment(void) { - return io_tlb_default_mem.nslabs ? 
max_segment : 0; + if (!io_tlb_default_mem.nslabs) + return 0; + return rounddown(io_tlb_default_mem.nslabs << IO_TLB_SHIFT, PAGE_SIZE); } EXPORT_SYMBOL_GPL(swiotlb_max_segment); -void swiotlb_set_max_segment(unsigned int val) -{ - if (swiotlb_force == SWIOTLB_FORCE) - max_segment = 1; - else - max_segment = rounddown(val, PAGE_SIZE); -} - unsigned long swiotlb_size_or_default(void) { return default_nslabs << IO_TLB_SHIFT; @@ -258,7 +246,6 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) if (verbose) swiotlb_print_info(); - swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT); return 0; } @@ -359,7 +346,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true); swiotlb_print_info(); - swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT); return 0; } From patchwork Mon Mar 14 07:31:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604925 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=qwDfXCpO; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7XP4Ss0z9sGN for ; Mon, 14 Mar 2022 18:32:05 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236665AbiCNHdK (ORCPT ); Mon, 14 Mar 2022 03:33:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47266 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236701AbiCNHdG (ORCPT ); Mon, 14 Mar 2022 03:33:06 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B822340A25; Mon, 14 Mar 2022 00:31:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=fPWsO1iHHAQagtBcoXAydAb96o6DVHP7RWGjrq+6LwY=; b=qwDfXCpOs1g0/zHivF8J3Pa2kj TSEGk1jWRMtKhEWhGNTpplhyN3HAXMPj2LczMjlRItqHl2EU4O94m52uTs7+XplZ9Kz3G8NQbVgDe 8sYKcoI1GNnHZg+/WDfK7cVDrptgMrK0dKcNymVgO1rg5DAYC7x7npVpM/i4b21NILmU5aJ0/my+6 kyYkMNI5HH4L9WZ2hdaZTVXBgc1A0S2ACtfv5IQXO+1zkiOWQ3JRYYpJ3hCXh6GtWwX4FXtpn++/b 3fgyxOV7bdJ585bpb0fhRoJk4MsacKntn9Q2zC4rxGGLze7TGvuBGq3nLKTRVjU093NpkROGa7e37 qjB7pyoQ==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBB-0044WZ-0Z; Mon, 14 Mar 2022 07:31:45 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, 
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org Subject: [PATCH 04/15] swiotlb: rename swiotlb_late_init_with_default_size Date: Mon, 14 Mar 2022 08:31:18 +0100 Message-Id: <20220314073129.1862284-5-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org swiotlb_late_init_with_default_size is an overly verbose name that doesn't even catch what the function is doing, given that the size is not just a default but the actual requested size. Rename it to swiotlb_init_late. Signed-off-by: Christoph Hellwig Reviewed-by: Anshuman Khandual --- arch/x86/pci/sta2x11-fixup.c | 2 +- include/linux/swiotlb.h | 2 +- kernel/dma/swiotlb.c | 6 ++---- 3 files changed, 4 insertions(+), 6 deletions(-) diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c index 101081ad64b6d..e0c039a75b2db 100644 --- a/arch/x86/pci/sta2x11-fixup.c +++ b/arch/x86/pci/sta2x11-fixup.c @@ -57,7 +57,7 @@ static void sta2x11_new_instance(struct pci_dev *pdev) int size = STA2X11_SWIOTLB_SIZE; /* First instance: register your own swiotlb area */ dev_info(&pdev->dev, "Using SWIOTLB (size %i)\n", size); - if (swiotlb_late_init_with_default_size(size)) + if (swiotlb_init_late(size)) dev_emerg(&pdev->dev, "init swiotlb failed\n"); } list_add(&instance->list, &sta2x11_instance_list); diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 9fb3a568f0c51..b48b26bfa0edb 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -40,7 +40,7 @@ extern void swiotlb_init(int verbose); int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose); unsigned long swiotlb_size_or_default(void); extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs); -extern int swiotlb_late_init_with_default_size(size_t default_size); +int swiotlb_init_late(size_t size); extern void __init swiotlb_update_mem_attributes(void); phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys, diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index b75943c2a0a0e..14e08fa9621c2 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -281,11 +281,9 @@ swiotlb_init(int verbose) * initialize the swiotlb later using the slab allocator if needed. * This should be just like above, but with some error catching. 
*/ -int -swiotlb_late_init_with_default_size(size_t default_size) +int swiotlb_init_late(size_t size) { - unsigned long nslabs = - ALIGN(default_size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE); + unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE); unsigned long bytes; unsigned char *vstart = NULL; unsigned int order; From patchwork Mon Mar 14 07:31:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604926 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=qiuq87bY; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7Xc3PpTz9sGD for ; Mon, 14 Mar 2022 18:32:16 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236685AbiCNHdX (ORCPT ); Mon, 14 Mar 2022 03:33:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47306 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236666AbiCNHdK (ORCPT ); Mon, 14 Mar 2022 03:33:10 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 03F5740A2D; Mon, 14 Mar 2022 00:32:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=PXzvxDuJ7dtq4MEwSJLJWBZwmoZU9P9omOl6CdJuIoM=; b=qiuq87bYuaIlaiP0CNaY36Vw2E eRadO6nXlKLFyymOzH7mAwQgeK4fEKmdRvF6EjtdqK6+MbDIOpkQiH+QdW80NTaMs30eyNTyE8Tmg xFZRbGO0LcYfGQuCTWKgMJnzCl+u0P4/fgJMtF3taDNIcjh1uSBVkVIECGoChzB8GkRiKr8LnKX3u 6GNxIeUiA+DizSoHt6RIqvYjOaVp03itnUL9qBG06/2pqM7shQiyVji3SqadJOIwzwmiBbcIGlXvU +hGhlJ94coaCT4H/vaPwPH1xU9garA4YArn7xfz6IqfQCCp1++XWmvY/xfmXvDkNEPMfCy/gu7s3t M0LxkcQA==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBE-0044YU-6n; Mon, 14 Mar 2022 07:31:48 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org, Stefano Stabellini Subject: [PATCH 05/15] arm/xen: don't check for xen_initial_domain() in xen_create_contiguous_region Date: Mon, 14 Mar 2022 08:31:19 +0100 Message-Id: <20220314073129.1862284-6-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP 
reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org From: Stefano Stabellini It used to be that Linux enabled swiotlb-xen when running a dom0 on ARM. Since f5079a9a2a31 "xen/arm: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped", Linux detects whether to enable or disable swiotlb-xen based on the new feature flags: XENFEAT_direct_mapped and XENFEAT_not_direct_mapped. However, there is still a leftover xen_initial_domain() check in xen_create_contiguous_region. Remove the check as xen_create_contiguous_region is only called by swiotlb-xen during initialization. If xen_create_contiguous_region is called, we know Linux is running 1:1 mapped so there is no need for additional checks. Also update the in-code comment. Signed-off-by: Stefano Stabellini Signed-off-by: Christoph Hellwig --- arch/arm/xen/mm.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c index a7e54a087b802..28c2070602535 100644 --- a/arch/arm/xen/mm.c +++ b/arch/arm/xen/mm.c @@ -122,10 +122,7 @@ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order, unsigned int address_bits, dma_addr_t *dma_handle) { - if (!xen_initial_domain()) - return -EINVAL; - - /* we assume that dom0 is mapped 1:1 for now */ + /* the domain is 1:1 mapped to use swiotlb-xen */ *dma_handle = pstart; return 0; } From patchwork Mon Mar 14 07:31:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604927 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=UaUDSuTf; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7Xn3XB7z9sGD for ; Mon, 14 Mar 2022 18:32:25 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236693AbiCNHda (ORCPT ); Mon, 14 Mar 2022 03:33:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47602 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235438AbiCNHdX (ORCPT ); Mon, 14 Mar 2022 03:33:23 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9264E40E47; Mon, 14 Mar 2022 00:32:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=fPQX1tW6oEG+I0H/rxZDFlkVrHm+7UqYHxDjUft+pbI=; 
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org, Thomas Bogendoerfer
Subject: [PATCH 06/15] MIPS/octeon: use swiotlb_init instead of open coding it
Date: Mon, 14 Mar 2022 08:31:20 +0100
Message-Id: <20220314073129.1862284-7-hch@lst.de>
In-Reply-To: <20220314073129.1862284-1-hch@lst.de>
References: <20220314073129.1862284-1-hch@lst.de>

Use the generic swiotlb initialization helper instead of open coding it.
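The shape of the conversion, condensed from the diff that follows, is shown
in the sketch below.  The wrapper name is made up for illustration and the
Octeon-specific logic that computes swiotlbsize is omitted; only the two
generic calls come from the patch itself.

/*
 * Instead of allocating a low-memory buffer with memblock_alloc_low()
 * and handing it to swiotlb_init_with_tbl() (panicking on failure), the
 * platform now just states the size it wants and lets the core swiotlb
 * code own the allocation and error handling.
 */
static void __init octeon_swiotlb_setup_sketch(size_t swiotlbsize)
{
	swiotlb_adjust_size(swiotlbsize);	/* record the requested pool size */
	swiotlb_init(1);			/* generic allocation and setup (verbose) */
}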
Signed-off-by: Christoph Hellwig Acked-by: Thomas Bogendoerfer --- arch/mips/cavium-octeon/dma-octeon.c | 15 ++------------- arch/mips/pci/pci-octeon.c | 2 +- 2 files changed, 3 insertions(+), 14 deletions(-) diff --git a/arch/mips/cavium-octeon/dma-octeon.c b/arch/mips/cavium-octeon/dma-octeon.c index df70308db0e69..fb7547e217263 100644 --- a/arch/mips/cavium-octeon/dma-octeon.c +++ b/arch/mips/cavium-octeon/dma-octeon.c @@ -186,15 +186,12 @@ phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr) return daddr; } -char *octeon_swiotlb; - void __init plat_swiotlb_setup(void) { phys_addr_t start, end; phys_addr_t max_addr; phys_addr_t addr_size; size_t swiotlbsize; - unsigned long swiotlb_nslabs; u64 i; max_addr = 0; @@ -236,15 +233,7 @@ void __init plat_swiotlb_setup(void) if (OCTEON_IS_OCTEON2() && max_addr >= 0x100000000ul) swiotlbsize = 64 * (1<<20); #endif - swiotlb_nslabs = swiotlbsize >> IO_TLB_SHIFT; - swiotlb_nslabs = ALIGN(swiotlb_nslabs, IO_TLB_SEGSIZE); - swiotlbsize = swiotlb_nslabs << IO_TLB_SHIFT; - - octeon_swiotlb = memblock_alloc_low(swiotlbsize, PAGE_SIZE); - if (!octeon_swiotlb) - panic("%s: Failed to allocate %zu bytes align=%lx\n", - __func__, swiotlbsize, PAGE_SIZE); - if (swiotlb_init_with_tbl(octeon_swiotlb, swiotlb_nslabs, 1) == -ENOMEM) - panic("Cannot allocate SWIOTLB buffer"); + swiotlb_adjust_size(swiotlbsize); + swiotlb_init(1); } diff --git a/arch/mips/pci/pci-octeon.c b/arch/mips/pci/pci-octeon.c index fc29b85cfa926..e457a18cbdc59 100644 --- a/arch/mips/pci/pci-octeon.c +++ b/arch/mips/pci/pci-octeon.c @@ -664,7 +664,7 @@ static int __init octeon_pci_setup(void) /* BAR1 movable regions contiguous to cover the swiotlb */ octeon_bar1_pci_phys = - virt_to_phys(octeon_swiotlb) & ~((1ull << 22) - 1); + io_tlb_default_mem.start & ~((1ull << 22) - 1); for (index = 0; index < 32; index++) { union cvmx_pci_bar1_indexx bar1_index; From patchwork Mon Mar 14 07:31:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604928 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=ztzmVDCi; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7Xz3z5rz9sGD for ; Mon, 14 Mar 2022 18:32:35 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236706AbiCNHdl (ORCPT ); Mon, 14 Mar 2022 03:33:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47610 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232025AbiCNHda (ORCPT ); Mon, 14 Mar 2022 03:33:30 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB45A40E41; Mon, 14 Mar 2022 00:32:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender 
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org
Subject: [PATCH 07/15] x86: remove the IOMMU table infrastructure
Date: Mon, 14 Mar 2022 08:31:21 +0100
Message-Id: <20220314073129.1862284-8-hch@lst.de>
In-Reply-To: <20220314073129.1862284-1-hch@lst.de>
References: <20220314073129.1862284-1-hch@lst.de>

The IOMMU table tries to separate the different IOMMUs into different
backends, but actually requires various cross calls.  Rewrite the code
to do the generic swiotlb/swiotlb-xen setup directly in pci-dma.c and
then just call into the IOMMU drivers.
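Because the diff is large, the core of the change is easiest to see in the
new pci_iommu_alloc(): the sorted .iommu_table walk is replaced by a fixed
call sequence.  The snippet below is reflowed from the pci-dma.c hunk later
in this patch, shown here only for orientation.

void __init pci_iommu_alloc(void)
{
	if (xen_pv_domain()) {
		pci_xen_swiotlb_init();
		return;
	}
	pci_swiotlb_detect();
	gart_iommu_hole_init();
	amd_iommu_detect();
	detect_intel_iommu();
	if (x86_swiotlb_enable)
		swiotlb_init(0);
}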
Signed-off-by: Christoph Hellwig --- arch/ia64/include/asm/iommu_table.h | 7 -- arch/x86/include/asm/dma-mapping.h | 1 - arch/x86/include/asm/gart.h | 5 +- arch/x86/include/asm/iommu.h | 6 ++ arch/x86/include/asm/iommu_table.h | 102 ----------------------- arch/x86/include/asm/swiotlb.h | 30 ------- arch/x86/include/asm/xen/swiotlb-xen.h | 2 - arch/x86/kernel/Makefile | 2 - arch/x86/kernel/amd_gart_64.c | 5 +- arch/x86/kernel/aperture_64.c | 14 ++-- arch/x86/kernel/pci-dma.c | 107 ++++++++++++++++++++----- arch/x86/kernel/pci-iommu_table.c | 77 ------------------ arch/x86/kernel/pci-swiotlb.c | 77 ------------------ arch/x86/kernel/tboot.c | 1 - arch/x86/kernel/vmlinux.lds.S | 12 --- arch/x86/xen/Makefile | 2 - arch/x86/xen/pci-swiotlb-xen.c | 96 ---------------------- drivers/iommu/amd/init.c | 6 -- drivers/iommu/amd/iommu.c | 5 +- drivers/iommu/intel/dmar.c | 6 +- include/linux/dmar.h | 6 +- 21 files changed, 110 insertions(+), 459 deletions(-) delete mode 100644 arch/ia64/include/asm/iommu_table.h delete mode 100644 arch/x86/include/asm/iommu_table.h delete mode 100644 arch/x86/include/asm/swiotlb.h delete mode 100644 arch/x86/kernel/pci-iommu_table.c delete mode 100644 arch/x86/kernel/pci-swiotlb.c delete mode 100644 arch/x86/xen/pci-swiotlb-xen.c diff --git a/arch/ia64/include/asm/iommu_table.h b/arch/ia64/include/asm/iommu_table.h deleted file mode 100644 index cc96116ac276a..0000000000000 --- a/arch/ia64/include/asm/iommu_table.h +++ /dev/null @@ -1,7 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_IA64_IOMMU_TABLE_H -#define _ASM_IA64_IOMMU_TABLE_H - -#define IOMMU_INIT_POST(_detect) - -#endif /* _ASM_IA64_IOMMU_TABLE_H */ diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h index bb1654fe0ce74..256fd8115223d 100644 --- a/arch/x86/include/asm/dma-mapping.h +++ b/arch/x86/include/asm/dma-mapping.h @@ -9,7 +9,6 @@ #include #include -#include extern int iommu_merge; extern int panic_on_overflow; diff --git a/arch/x86/include/asm/gart.h b/arch/x86/include/asm/gart.h index 3185565743459..5af8088a10df6 100644 --- a/arch/x86/include/asm/gart.h +++ b/arch/x86/include/asm/gart.h @@ -38,7 +38,7 @@ extern int gart_iommu_aperture_disabled; extern void early_gart_iommu_check(void); extern int gart_iommu_init(void); extern void __init gart_parse_options(char *); -extern int gart_iommu_hole_init(void); +void gart_iommu_hole_init(void); #else #define gart_iommu_aperture 0 @@ -51,9 +51,8 @@ static inline void early_gart_iommu_check(void) static inline void gart_parse_options(char *options) { } -static inline int gart_iommu_hole_init(void) +static inline void gart_iommu_hole_init(void) { - return -ENODEV; } #endif diff --git a/arch/x86/include/asm/iommu.h b/arch/x86/include/asm/iommu.h index bf1ed2ddc74bd..dba89ed40d38d 100644 --- a/arch/x86/include/asm/iommu.h +++ b/arch/x86/include/asm/iommu.h @@ -9,6 +9,12 @@ extern int force_iommu, no_iommu; extern int iommu_detected; +#ifdef CONFIG_SWIOTLB +extern bool x86_swiotlb_enable; +#else +#define x86_swiotlb_enable false +#endif + /* 10 seconds */ #define DMAR_OPERATION_TIMEOUT ((cycles_t) tsc_khz*10*1000) diff --git a/arch/x86/include/asm/iommu_table.h b/arch/x86/include/asm/iommu_table.h deleted file mode 100644 index 1fb3fd1a83c25..0000000000000 --- a/arch/x86/include/asm/iommu_table.h +++ /dev/null @@ -1,102 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_X86_IOMMU_TABLE_H -#define _ASM_X86_IOMMU_TABLE_H - -#include - -/* - * History lesson: - * The execution chain of IOMMUs in 2.6.36 looks 
as so: - * - * [xen-swiotlb] - * | - * +----[swiotlb *]--+ - * / | \ - * / | \ - * [GART] [Calgary] [Intel VT-d] - * / - * / - * [AMD-Vi] - * - * *: if SWIOTLB detected 'iommu=soft'/'swiotlb=force' it would skip - * over the rest of IOMMUs and unconditionally initialize the SWIOTLB. - * Also it would surreptitiously initialize set the swiotlb=1 if there were - * more than 4GB and if the user did not pass in 'iommu=off'. The swiotlb - * flag would be turned off by all IOMMUs except the Calgary one. - * - * The IOMMU_INIT* macros allow a similar tree (or more complex if desired) - * to be built by defining who we depend on. - * - * And all that needs to be done is to use one of the macros in the IOMMU - * and the pci-dma.c will take care of the rest. - */ - -struct iommu_table_entry { - initcall_t detect; - initcall_t depend; - void (*early_init)(void); /* No memory allocate available. */ - void (*late_init)(void); /* Yes, can allocate memory. */ -#define IOMMU_FINISH_IF_DETECTED (1<<0) -#define IOMMU_DETECTED (1<<1) - int flags; -}; -/* - * Macro fills out an entry in the .iommu_table that is equivalent - * to the fields that 'struct iommu_table_entry' has. The entries - * that are put in the .iommu_table section are not put in any order - * hence during boot-time we will have to resort them based on - * dependency. */ - - -#define __IOMMU_INIT(_detect, _depend, _early_init, _late_init, _finish)\ - static const struct iommu_table_entry \ - __iommu_entry_##_detect __used \ - __attribute__ ((unused, __section__(".iommu_table"), \ - aligned((sizeof(void *))))) \ - = {_detect, _depend, _early_init, _late_init, \ - _finish ? IOMMU_FINISH_IF_DETECTED : 0} -/* - * The simplest IOMMU definition. Provide the detection routine - * and it will be run after the SWIOTLB and the other IOMMUs - * that utilize this macro. If the IOMMU is detected (ie, the - * detect routine returns a positive value), the other IOMMUs - * are also checked. You can use IOMMU_INIT_POST_FINISH if you prefer - * to stop detecting the other IOMMUs after yours has been detected. - */ -#define IOMMU_INIT_POST(_detect) \ - __IOMMU_INIT(_detect, pci_swiotlb_detect_4gb, NULL, NULL, 0) - -#define IOMMU_INIT_POST_FINISH(detect) \ - __IOMMU_INIT(_detect, pci_swiotlb_detect_4gb, NULL, NULL, 1) - -/* - * A more sophisticated version of IOMMU_INIT. This variant requires: - * a). A detection routine function. - * b). The name of the detection routine we depend on to get called - * before us. - * c). The init routine which gets called if the detection routine - * returns a positive value from the pci_iommu_alloc. This means - * no presence of a memory allocator. - * d). Similar to the 'init', except that this gets called from pci_iommu_init - * where we do have a memory allocator. - * - * The standard IOMMU_INIT differs from the IOMMU_INIT_FINISH variant - * in that the former will continue detecting other IOMMUs in the call - * list after the detection routine returns a positive number, while the - * latter will stop the execution chain upon first successful detection. - * Both variants will still call the 'init' and 'late_init' functions if - * they are set. 
- */ -#define IOMMU_INIT_FINISH(_detect, _depend, _init, _late_init) \ - __IOMMU_INIT(_detect, _depend, _init, _late_init, 1) - -#define IOMMU_INIT(_detect, _depend, _init, _late_init) \ - __IOMMU_INIT(_detect, _depend, _init, _late_init, 0) - -void sort_iommu_table(struct iommu_table_entry *start, - struct iommu_table_entry *finish); - -void check_iommu_entries(struct iommu_table_entry *start, - struct iommu_table_entry *finish); - -#endif /* _ASM_X86_IOMMU_TABLE_H */ diff --git a/arch/x86/include/asm/swiotlb.h b/arch/x86/include/asm/swiotlb.h deleted file mode 100644 index ff6c92eff035a..0000000000000 --- a/arch/x86/include/asm/swiotlb.h +++ /dev/null @@ -1,30 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_X86_SWIOTLB_H -#define _ASM_X86_SWIOTLB_H - -#include - -#ifdef CONFIG_SWIOTLB -extern int swiotlb; -extern int __init pci_swiotlb_detect_override(void); -extern int __init pci_swiotlb_detect_4gb(void); -extern void __init pci_swiotlb_init(void); -extern void __init pci_swiotlb_late_init(void); -#else -#define swiotlb 0 -static inline int pci_swiotlb_detect_override(void) -{ - return 0; -} -static inline int pci_swiotlb_detect_4gb(void) -{ - return 0; -} -static inline void pci_swiotlb_init(void) -{ -} -static inline void pci_swiotlb_late_init(void) -{ -} -#endif -#endif /* _ASM_X86_SWIOTLB_H */ diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h index 66b4ddde77430..e5a90b42e4dde 100644 --- a/arch/x86/include/asm/xen/swiotlb-xen.h +++ b/arch/x86/include/asm/xen/swiotlb-xen.h @@ -3,10 +3,8 @@ #define _ASM_X86_SWIOTLB_XEN_H #ifdef CONFIG_SWIOTLB_XEN -extern int __init pci_xen_swiotlb_detect(void); extern int pci_xen_swiotlb_init_late(void); #else -#define pci_xen_swiotlb_detect NULL static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; } #endif diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index 6aef9ee28a394..2851d4f0aa0d2 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -71,7 +71,6 @@ obj-y += bootflag.o e820.o obj-y += pci-dma.o quirks.o topology.o kdebugfs.o obj-y += alternative.o i8253.o hw_breakpoint.o obj-y += tsc.o tsc_msr.o io_delay.o rtc.o -obj-y += pci-iommu_table.o obj-y += resource.o obj-y += irqflags.o obj-y += static_call.o @@ -136,7 +135,6 @@ obj-$(CONFIG_PCSPKR_PLATFORM) += pcspeaker.o obj-$(CONFIG_X86_CHECK_BIOS_CORRUPTION) += check.o -obj-$(CONFIG_SWIOTLB) += pci-swiotlb.o obj-$(CONFIG_OF) += devicetree.o obj-$(CONFIG_UPROBES) += uprobes.o diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c index ed837383de5c8..194d54eed5376 100644 --- a/arch/x86/kernel/amd_gart_64.c +++ b/arch/x86/kernel/amd_gart_64.c @@ -38,11 +38,9 @@ #include #include #include -#include #include #include #include -#include static unsigned long iommu_bus_base; /* GART remapping area (physical) */ static unsigned long iommu_size; /* size of remapping area bytes */ @@ -808,7 +806,7 @@ int __init gart_iommu_init(void) flush_gart(); dma_ops = &gart_dma_ops; x86_platform.iommu_shutdown = gart_iommu_shutdown; - swiotlb = 0; + x86_swiotlb_enable = false; return 0; } @@ -842,4 +840,3 @@ void __init gart_parse_options(char *p) } } } -IOMMU_INIT_POST(gart_iommu_hole_init); diff --git a/arch/x86/kernel/aperture_64.c b/arch/x86/kernel/aperture_64.c index af3ba08b684b5..7a5630d904b23 100644 --- a/arch/x86/kernel/aperture_64.c +++ b/arch/x86/kernel/aperture_64.c @@ -392,7 +392,7 @@ void __init early_gart_iommu_check(void) static int __initdata printed_gart_size_msg; -int __init 
gart_iommu_hole_init(void) +void __init gart_iommu_hole_init(void) { u32 agp_aper_base = 0, agp_aper_order = 0; u32 aper_size, aper_alloc = 0, aper_order = 0, last_aper_order = 0; @@ -401,11 +401,11 @@ int __init gart_iommu_hole_init(void) int i, node; if (!amd_gart_present()) - return -ENODEV; + return; if (gart_iommu_aperture_disabled || !fix_aperture || !early_pci_allowed()) - return -ENODEV; + return; pr_info("Checking aperture...\n"); @@ -491,10 +491,8 @@ int __init gart_iommu_hole_init(void) * and fixed up the northbridge */ exclude_from_core(last_aper_base, last_aper_order); - - return 1; } - return 0; + return; } if (!fallback_aper_force) { @@ -527,7 +525,7 @@ int __init gart_iommu_hole_init(void) panic("Not enough memory for aperture"); } } else { - return 0; + return; } /* @@ -561,6 +559,4 @@ int __init gart_iommu_hole_init(void) } set_up_gart_resume(aper_order, aper_alloc); - - return 1; } diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c index de234e7a8962e..df96926421be0 100644 --- a/arch/x86/kernel/pci-dma.c +++ b/arch/x86/kernel/pci-dma.c @@ -7,13 +7,16 @@ #include #include #include +#include #include #include #include #include #include -#include + +#include +#include static bool disable_dac_quirk __read_mostly; @@ -34,24 +37,83 @@ int no_iommu __read_mostly; /* Set this to 1 if there is a HW IOMMU in the system */ int iommu_detected __read_mostly = 0; -extern struct iommu_table_entry __iommu_table[], __iommu_table_end[]; +#ifdef CONFIG_SWIOTLB +bool x86_swiotlb_enable; + +static void __init pci_swiotlb_detect(void) +{ + /* don't initialize swiotlb if iommu=off (no_iommu=1) */ + if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN) + x86_swiotlb_enable = true; + + /* + * Set swiotlb to 1 so that bounce buffers are allocated and used for + * devices that can't support DMA to encrypted memory. + */ + if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) + x86_swiotlb_enable = true; + + if (swiotlb_force == SWIOTLB_FORCE) + x86_swiotlb_enable = true; +} +#else +static inline void __init pci_swiotlb_detect(void) +{ +} +#endif /* CONFIG_SWIOTLB */ + +#ifdef CONFIG_SWIOTLB_XEN +static bool xen_swiotlb; + +static void __init pci_xen_swiotlb_init(void) +{ + if (!xen_initial_domain() && !x86_swiotlb_enable && + swiotlb_force != SWIOTLB_FORCE) + return; + x86_swiotlb_enable = true; + xen_swiotlb = true; + xen_swiotlb_init_early(); + dma_ops = &xen_swiotlb_dma_ops; + if (IS_ENABLED(CONFIG_PCI)) + pci_request_acs(); +} + +int pci_xen_swiotlb_init_late(void) +{ + int rc; + + if (xen_swiotlb) + return 0; + + rc = xen_swiotlb_init(); + if (rc) + return rc; + + /* XXX: this switches the dma ops under live devices! 
*/ + dma_ops = &xen_swiotlb_dma_ops; + if (IS_ENABLED(CONFIG_PCI)) + pci_request_acs(); + return 0; +} +EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late); +#else +static inline void __init pci_xen_swiotlb_init(void) +{ +} +#endif /* CONFIG_SWIOTLB_XEN */ void __init pci_iommu_alloc(void) { - struct iommu_table_entry *p; - - sort_iommu_table(__iommu_table, __iommu_table_end); - check_iommu_entries(__iommu_table, __iommu_table_end); - - for (p = __iommu_table; p < __iommu_table_end; p++) { - if (p && p->detect && p->detect() > 0) { - p->flags |= IOMMU_DETECTED; - if (p->early_init) - p->early_init(); - if (p->flags & IOMMU_FINISH_IF_DETECTED) - break; - } + if (xen_pv_domain()) { + pci_xen_swiotlb_init(); + return; } + pci_swiotlb_detect(); + gart_iommu_hole_init(); + amd_iommu_detect(); + detect_intel_iommu(); + if (x86_swiotlb_enable) + swiotlb_init(0); } /* @@ -102,7 +164,7 @@ static __init int iommu_setup(char *p) } #ifdef CONFIG_SWIOTLB if (!strncmp(p, "soft", 4)) - swiotlb = 1; + x86_swiotlb_enable = true; #endif if (!strncmp(p, "pt", 2)) iommu_set_default_passthrough(true); @@ -121,14 +183,17 @@ early_param("iommu", iommu_setup); static int __init pci_iommu_init(void) { - struct iommu_table_entry *p; - x86_init.iommu.iommu_init(); - for (p = __iommu_table; p < __iommu_table_end; p++) { - if (p && (p->flags & IOMMU_DETECTED) && p->late_init) - p->late_init(); +#ifdef CONFIG_SWIOTLB + /* An IOMMU turned us off. */ + if (x86_swiotlb_enable) { + pr_info("PCI-DMA: Using software bounce buffering for IO (SWIOTLB)\n"); + swiotlb_print_info(); + } else { + swiotlb_exit(); } +#endif return 0; } diff --git a/arch/x86/kernel/pci-iommu_table.c b/arch/x86/kernel/pci-iommu_table.c deleted file mode 100644 index 42e92ec62973b..0000000000000 --- a/arch/x86/kernel/pci-iommu_table.c +++ /dev/null @@ -1,77 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include -#include -#include -#include - -static struct iommu_table_entry * __init -find_dependents_of(struct iommu_table_entry *start, - struct iommu_table_entry *finish, - struct iommu_table_entry *q) -{ - struct iommu_table_entry *p; - - if (!q) - return NULL; - - for (p = start; p < finish; p++) - if (p->detect == q->depend) - return p; - - return NULL; -} - - -void __init sort_iommu_table(struct iommu_table_entry *start, - struct iommu_table_entry *finish) { - - struct iommu_table_entry *p, *q, tmp; - - for (p = start; p < finish; p++) { -again: - q = find_dependents_of(start, finish, p); - /* We are bit sneaky here. We use the memory address to figure - * out if the node we depend on is past our point, if so, swap. - */ - if (q > p) { - tmp = *p; - memmove(p, q, sizeof(*p)); - *q = tmp; - goto again; - } - } - -} - -#ifdef DEBUG -void __init check_iommu_entries(struct iommu_table_entry *start, - struct iommu_table_entry *finish) -{ - struct iommu_table_entry *p, *q, *x; - - /* Simple cyclic dependency checker. */ - for (p = start; p < finish; p++) { - q = find_dependents_of(start, finish, p); - x = find_dependents_of(start, finish, q); - if (p == x) { - printk(KERN_ERR "CYCLIC DEPENDENCY FOUND! %pS depends on %pS and vice-versa. BREAKING IT.\n", - p->detect, q->detect); - /* Heavy handed way..*/ - x->depend = NULL; - } - } - - for (p = start; p < finish; p++) { - q = find_dependents_of(p, finish, p); - if (q && q > p) { - printk(KERN_ERR "EXECUTION ORDER INVALID! 
%pS should be called before %pS!\n", - p->detect, q->detect); - } - } -} -#else -void __init check_iommu_entries(struct iommu_table_entry *start, - struct iommu_table_entry *finish) -{ -} -#endif diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c deleted file mode 100644 index 814ab46a0dada..0000000000000 --- a/arch/x86/kernel/pci-swiotlb.c +++ /dev/null @@ -1,77 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 - -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include - -int swiotlb __read_mostly; - -/* - * pci_swiotlb_detect_override - set swiotlb to 1 if necessary - * - * This returns non-zero if we are forced to use swiotlb (by the boot - * option). - */ -int __init pci_swiotlb_detect_override(void) -{ - if (swiotlb_force == SWIOTLB_FORCE) - swiotlb = 1; - - return swiotlb; -} -IOMMU_INIT_FINISH(pci_swiotlb_detect_override, - pci_xen_swiotlb_detect, - pci_swiotlb_init, - pci_swiotlb_late_init); - -/* - * If 4GB or more detected (and iommu=off not set) or if SME is active - * then set swiotlb to 1 and return 1. - */ -int __init pci_swiotlb_detect_4gb(void) -{ - /* don't initialize swiotlb if iommu=off (no_iommu=1) */ - if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN) - swiotlb = 1; - - /* - * Set swiotlb to 1 so that bounce buffers are allocated and used for - * devices that can't support DMA to encrypted memory. - */ - if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) - swiotlb = 1; - - return swiotlb; -} -IOMMU_INIT(pci_swiotlb_detect_4gb, - pci_swiotlb_detect_override, - pci_swiotlb_init, - pci_swiotlb_late_init); - -void __init pci_swiotlb_init(void) -{ - if (swiotlb) - swiotlb_init(0); -} - -void __init pci_swiotlb_late_init(void) -{ - /* An IOMMU turned us off. */ - if (!swiotlb) - swiotlb_exit(); - else { - printk(KERN_INFO "PCI-DMA: " - "Using software bounce buffering for IO (SWIOTLB)\n"); - swiotlb_print_info(); - } -} diff --git a/arch/x86/kernel/tboot.c b/arch/x86/kernel/tboot.c index f9af561c3cd4f..0c1154a1c4032 100644 --- a/arch/x86/kernel/tboot.c +++ b/arch/x86/kernel/tboot.c @@ -24,7 +24,6 @@ #include #include #include -#include #include #include #include diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 27f830345b6f0..bbe910c15b293 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -306,18 +306,6 @@ SECTIONS *(.altinstr_replacement) } - /* - * struct iommu_table_entry entries are injected in this section. - * It is an array of IOMMUs which during run time gets sorted depending - * on its dependency order. After rootfs_initcall is complete - * this section can be safely removed. - */ - .iommu_table : AT(ADDR(.iommu_table) - LOAD_OFFSET) { - __iommu_table = .; - *(.iommu_table) - __iommu_table_end = .; - } - . 
= ALIGN(8); .apicdrivers : AT(ADDR(.apicdrivers) - LOAD_OFFSET) { __apicdrivers = .; diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile index 4953260e281c3..3c5b52fbe4a7f 100644 --- a/arch/x86/xen/Makefile +++ b/arch/x86/xen/Makefile @@ -47,6 +47,4 @@ obj-$(CONFIG_XEN_DEBUG_FS) += debugfs.o obj-$(CONFIG_XEN_PV_DOM0) += vga.o -obj-$(CONFIG_SWIOTLB_XEN) += pci-swiotlb-xen.o - obj-$(CONFIG_XEN_EFI) += efi.o diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c deleted file mode 100644 index 46df59aeaa06a..0000000000000 --- a/arch/x86/xen/pci-swiotlb-xen.c +++ /dev/null @@ -1,96 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 - -/* Glue code to lib/swiotlb-xen.c */ - -#include -#include -#include - -#include -#include -#include - - -#include -#ifdef CONFIG_X86_64 -#include -#include -#endif -#include - -static int xen_swiotlb __read_mostly; - -/* - * pci_xen_swiotlb_detect - set xen_swiotlb to 1 if necessary - * - * This returns non-zero if we are forced to use xen_swiotlb (by the boot - * option). - */ -int __init pci_xen_swiotlb_detect(void) -{ - - if (!xen_pv_domain()) - return 0; - - /* If running as PV guest, either iommu=soft, or swiotlb=force will - * activate this IOMMU. If running as PV privileged, activate it - * irregardless. - */ - if (xen_initial_domain() || swiotlb || swiotlb_force == SWIOTLB_FORCE) - xen_swiotlb = 1; - - /* If we are running under Xen, we MUST disable the native SWIOTLB. - * Don't worry about swiotlb_force flag activating the native, as - * the 'swiotlb' flag is the only one turning it on. */ - swiotlb = 0; - -#ifdef CONFIG_X86_64 - /* pci_swiotlb_detect_4gb turns on native SWIOTLB if no_iommu == 0 - * (so no iommu=X command line over-writes). - * Considering that PV guests do not want the *native SWIOTLB* but - * only Xen SWIOTLB it is not useful to us so set no_iommu=1 here. - */ - if (max_pfn > MAX_DMA32_PFN) - no_iommu = 1; -#endif - return xen_swiotlb; -} - -static void __init pci_xen_swiotlb_init(void) -{ - if (xen_swiotlb) { - xen_swiotlb_init_early(); - dma_ops = &xen_swiotlb_dma_ops; - -#ifdef CONFIG_PCI - /* Make sure ACS will be enabled */ - pci_request_acs(); -#endif - } -} - -int pci_xen_swiotlb_init_late(void) -{ - int rc; - - if (xen_swiotlb) - return 0; - - rc = xen_swiotlb_init(); - if (rc) - return rc; - - dma_ops = &xen_swiotlb_dma_ops; -#ifdef CONFIG_PCI - /* Make sure ACS will be enabled */ - pci_request_acs(); -#endif - - return 0; -} -EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late); - -IOMMU_INIT_FINISH(pci_xen_swiotlb_detect, - NULL, - pci_xen_swiotlb_init, - NULL); diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c index dc338acf33385..0000bce14d059 100644 --- a/drivers/iommu/amd/init.c +++ b/drivers/iommu/amd/init.c @@ -26,7 +26,6 @@ #include #include #include -#include #include #include #include @@ -3235,11 +3234,6 @@ __setup("ivrs_ioapic", parse_ivrs_ioapic); __setup("ivrs_hpet", parse_ivrs_hpet); __setup("ivrs_acpihid", parse_ivrs_acpihid); -IOMMU_INIT_FINISH(amd_iommu_detect, - gart_iommu_hole_init, - NULL, - NULL); - bool amd_iommu_v2_supported(void) { return amd_iommu_v2_present; diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c index 461f1844ed1fb..541a7b8315a12 100644 --- a/drivers/iommu/amd/iommu.c +++ b/drivers/iommu/amd/iommu.c @@ -1834,7 +1834,10 @@ void amd_iommu_domain_update(struct protection_domain *domain) static void __init amd_iommu_init_dma_ops(void) { - swiotlb = (iommu_default_passthrough() || sme_me_mask) ? 
1 : 0; + if (iommu_default_passthrough() || sme_me_mask) + x86_swiotlb_enable = true; + else + x86_swiotlb_enable = false; } int __init amd_iommu_init_api(void) diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c index 915bff76fe965..29bee4b210c5b 100644 --- a/drivers/iommu/intel/dmar.c +++ b/drivers/iommu/intel/dmar.c @@ -30,7 +30,6 @@ #include #include #include -#include #include #include "../irq_remapping.h" @@ -913,7 +912,7 @@ dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg) return 0; } -int __init detect_intel_iommu(void) +void __init detect_intel_iommu(void) { int ret; struct dmar_res_callback validate_drhd_cb = { @@ -946,8 +945,6 @@ int __init detect_intel_iommu(void) dmar_tbl = NULL; } up_write(&dmar_global_lock); - - return ret ? ret : 1; } static void unmap_iommu(struct intel_iommu *iommu) @@ -2165,7 +2162,6 @@ static int __init dmar_free_unused_resources(void) } late_initcall(dmar_free_unused_resources); -IOMMU_INIT_POST(detect_intel_iommu); /* * DMAR Hotplug Support diff --git a/include/linux/dmar.h b/include/linux/dmar.h index 45e903d847335..cbd714a198a0a 100644 --- a/include/linux/dmar.h +++ b/include/linux/dmar.h @@ -121,7 +121,7 @@ extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info, u16 segment, struct dmar_dev_scope *devices, int count); /* Intel IOMMU detection */ -extern int detect_intel_iommu(void); +void detect_intel_iommu(void); extern int enable_drhd_fault_handling(void); extern int dmar_device_add(acpi_handle handle); extern int dmar_device_remove(acpi_handle handle); @@ -197,6 +197,10 @@ static inline bool dmar_platform_optin(void) return false; } +static inline void detect_intel_iommu(void) +{ +} + #endif /* CONFIG_DMAR_TABLE */ struct irte { From patchwork Mon Mar 14 07:31:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604929 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=Grkv61Sp; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7Y05BN2z9sGD for ; Mon, 14 Mar 2022 18:32:36 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236725AbiCNHdm (ORCPT ); Mon, 14 Mar 2022 03:33:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47742 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236675AbiCNHdd (ORCPT ); Mon, 14 Mar 2022 03:33:33 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 69DC640E62; Mon, 14 Mar 2022 00:32:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=G+BH4X2UfRw8OQMrfsMNZasg39oL/O6RTmQprXaltS0=; b=Grkv61SpDZXDtEl+P0acp2tTgz 
OLt1K0n4GmPr0GQxsUa8/K8MjT3fytjZ3rMBnPHO2kpSrwAz0QoAkPN/ghuhz21oW1mV/v64yJByU 2OV008sibw7ZWNht6kzeRC++MtgAaleBtX4VbTiTlboGVSvOamqdjEHWbgTpAwaTcwbE/jDMQFqpI asso+bSa91CnpkigH56+ORMUcaFOjOcTjY+IeFeok3J3cRYAK3Okh52ITJFOrcYMNTOisQuUDaIeP BaZ0miOCwVVihjS9gxDyrn/S/D3V7gXL5BDCq6FpCcdb9+LOEAc1r+dumDEfY4xYA4ex0Cqigpbpr IkwBVJtQ==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBO-0044gC-7e; Mon, 14 Mar 2022 07:31:58 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org Subject: [PATCH 08/15] x86: centralize setting SWIOTLB_FORCE when guest memory encryption is enabled Date: Mon, 14 Mar 2022 08:31:22 +0100 Message-Id: <20220314073129.1862284-9-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Move enabling SWIOTLB_FORCE for guest memory encryption into common code. Signed-off-by: Christoph Hellwig --- arch/x86/kernel/cpu/mshyperv.c | 8 -------- arch/x86/kernel/pci-dma.c | 8 ++++++++ arch/x86/mm/mem_encrypt_amd.c | 3 --- 3 files changed, 8 insertions(+), 11 deletions(-) diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index 5a99f993e6392..568274917f1cd 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -336,14 +336,6 @@ static void __init ms_hyperv_init_platform(void) swiotlb_unencrypted_base = ms_hyperv.shared_gpa_boundary; #endif } - -#ifdef CONFIG_SWIOTLB - /* - * Enable swiotlb force mode in Isolation VM to - * use swiotlb bounce buffer for dma transaction. - */ - swiotlb_force = SWIOTLB_FORCE; -#endif } if (hv_max_functions_eax >= HYPERV_CPUID_NESTED_FEATURES) { diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c index df96926421be0..04140e20ef1a3 100644 --- a/arch/x86/kernel/pci-dma.c +++ b/arch/x86/kernel/pci-dma.c @@ -53,6 +53,14 @@ static void __init pci_swiotlb_detect(void) if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) x86_swiotlb_enable = true; + /* + * Guest with guest memory encryption currently perform all DMA through + * bounce buffers as the hypervisor can't access arbitrary VM memory + * that is not explicitly shared with it. 
+ */ + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) + swiotlb_force = SWIOTLB_FORCE; + if (swiotlb_force == SWIOTLB_FORCE) x86_swiotlb_enable = true; } diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c index 2b2d018ea3450..a72942d569cf9 100644 --- a/arch/x86/mm/mem_encrypt_amd.c +++ b/arch/x86/mm/mem_encrypt_amd.c @@ -191,9 +191,6 @@ void __init sme_early_init(void) /* Update the protection map with memory encryption mask */ for (i = 0; i < ARRAY_SIZE(protection_map); i++) protection_map[i] = pgprot_encrypted(protection_map[i]); - - if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) - swiotlb_force = SWIOTLB_FORCE; } void __init sev_setup_arch(void) From patchwork Mon Mar 14 07:31:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604931 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=xEjxMMsg; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7Y41wZhz9sGD for ; Mon, 14 Mar 2022 18:32:40 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236728AbiCNHdq (ORCPT ); Mon, 14 Mar 2022 03:33:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47714 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236672AbiCNHdl (ORCPT ); Mon, 14 Mar 2022 03:33:41 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A398E40A3C; Mon, 14 Mar 2022 00:32:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=j/gMo6kiMd15MBMR1iDhtcdbnucrzViFl0x/evBe/TY=; b=xEjxMMsgbQ0jKRvTK2JjhaXyUW drB+OCc3Pz1H1uptRid31fhWIE5niv1KrX8vnTFCQuiZbAK+DzpHs+QwGXf3gCehjg6z0G5Tflm0D g6UeoMbTAxL+SlAoczJI4sdOcSLdGR/wmGxfeAj7W3bBMK6fSKezuouF1XzDxMg3tFWSiW+sjDmPD VqFbTXPJKAaWfxmVUrbE0a5a9e1/bmQzsY3pcSm5obrhdRwd5sjI516ufZsueZySDVuEzXrm2IVvp q8AhDTF7lCMrSWp4UYk7vhqY48EbEugzz/Yel5JzwrU2N9Ev/atyyDefwiU2UIAvxCri15b0aXh70 eiAp67sA==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBR-0044iI-FL; Mon, 14 Mar 2022 07:32:02 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org Subject: [PATCH 09/15] 
swiotlb: make the swiotlb_init interface more useful Date: Mon, 14 Mar 2022 08:31:23 +0100 Message-Id: <20220314073129.1862284-10-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Pass a bool to pass if swiotlb needs to be enabled based on the addressing needs and replace the verbose argument with a set of flags, including one to force enable bounce buffering. Note that this patch removes the possibility to force xen-swiotlb use using swiotlb=force on the command line on x86 (arm and arm64 never supported that), but this interface will be restored shortly. Signed-off-by: Christoph Hellwig --- arch/arm/mm/init.c | 6 +---- arch/arm64/mm/init.c | 6 +---- arch/ia64/mm/init.c | 4 +-- arch/mips/cavium-octeon/dma-octeon.c | 2 +- arch/mips/loongson64/dma.c | 2 +- arch/mips/sibyte/common/dma.c | 2 +- arch/powerpc/mm/mem.c | 3 ++- arch/powerpc/platforms/pseries/setup.c | 3 --- arch/riscv/mm/init.c | 8 +----- arch/s390/mm/init.c | 3 +-- arch/x86/kernel/pci-dma.c | 15 ++++++----- drivers/xen/swiotlb-xen.c | 4 +-- include/linux/swiotlb.h | 15 ++++++----- include/trace/events/swiotlb.h | 29 ++++++++------------- kernel/dma/swiotlb.c | 35 ++++++++++++++------------ 15 files changed, 55 insertions(+), 82 deletions(-) diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index 6d0cb0f7bc54b..73f30d278b565 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -312,11 +312,7 @@ static void __init free_highpages(void) void __init mem_init(void) { #ifdef CONFIG_ARM_LPAE - if (swiotlb_force == SWIOTLB_FORCE || - max_pfn > arm_dma_pfn_limit) - swiotlb_init(1); - else - swiotlb_force = SWIOTLB_NO_FORCE; + swiotlb_init(max_pfn > arm_dma_pfn_limit, SWIOTLB_VERBOSE); #endif set_max_mapnr(pfn_to_page(max_pfn) - mem_map); diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index db63cc885771a..52102adda3d28 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -373,11 +373,7 @@ void __init bootmem_init(void) */ void __init mem_init(void) { - if (swiotlb_force == SWIOTLB_FORCE || - max_pfn > PFN_DOWN(arm64_dma_phys_limit)) - swiotlb_init(1); - else if (!xen_swiotlb_detect()) - swiotlb_force = SWIOTLB_NO_FORCE; + swiotlb_init(max_pfn > PFN_DOWN(arm64_dma_phys_limit), SWIOTLB_VERBOSE); /* this will put all unused low memory onto the freelists */ memblock_free_all(); diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c index 5d165607bf354..3c3e15b22608f 100644 --- a/arch/ia64/mm/init.c +++ b/arch/ia64/mm/init.c @@ -437,9 +437,7 @@ mem_init (void) if (iommu_detected) break; #endif -#ifdef CONFIG_SWIOTLB - swiotlb_init(1); -#endif + swiotlb_init(true, SWIOTLB_VERBOSE); } while (0); #ifdef CONFIG_FLATMEM diff --git a/arch/mips/cavium-octeon/dma-octeon.c b/arch/mips/cavium-octeon/dma-octeon.c index fb7547e217263..9fbba6a8fa4c5 100644 --- a/arch/mips/cavium-octeon/dma-octeon.c +++ b/arch/mips/cavium-octeon/dma-octeon.c @@ -235,5 +235,5 @@ void __init plat_swiotlb_setup(void) #endif 
swiotlb_adjust_size(swiotlbsize); - swiotlb_init(1); + swiotlb_init(true, SWIOTLB_VERBOSE); } diff --git a/arch/mips/loongson64/dma.c b/arch/mips/loongson64/dma.c index 364f2f27c8723..8220a1bc0db64 100644 --- a/arch/mips/loongson64/dma.c +++ b/arch/mips/loongson64/dma.c @@ -24,5 +24,5 @@ phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr) void __init plat_swiotlb_setup(void) { - swiotlb_init(1); + swiotlb_init(true, SWIOTLB_VERBOSE); } diff --git a/arch/mips/sibyte/common/dma.c b/arch/mips/sibyte/common/dma.c index eb47a94f3583e..c5c2c782aff68 100644 --- a/arch/mips/sibyte/common/dma.c +++ b/arch/mips/sibyte/common/dma.c @@ -10,5 +10,5 @@ void __init plat_swiotlb_setup(void) { - swiotlb_init(1); + swiotlb_init(true, SWIOTLB_VERBOSE); } diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 8e301cd8925b2..e1519e2edc656 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -17,6 +17,7 @@ #include #include +#include #include #include #include @@ -251,7 +252,7 @@ void __init mem_init(void) if (is_secure_guest()) svm_swiotlb_init(); else - swiotlb_init(0); + swiotlb_init(ppc_swiotlb_enable, 0); #endif high_memory = (void *) __va(max_low_pfn * PAGE_SIZE); diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c index 83a04d967a59f..45d637ab58261 100644 --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -838,9 +838,6 @@ static void __init pSeries_setup_arch(void) } ppc_md.pcibios_root_bridge_prepare = pseries_root_bridge_prepare; - - if (swiotlb_force == SWIOTLB_FORCE) - ppc_swiotlb_enable = 1; } static void pseries_panic(char *str) diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index cf4d018b7d668..e2c97b4ea2b48 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -118,13 +118,7 @@ void __init mem_init(void) BUG_ON(!mem_map); #endif /* CONFIG_FLATMEM */ -#ifdef CONFIG_SWIOTLB - if (swiotlb_force == SWIOTLB_FORCE || - max_pfn > PFN_DOWN(dma32_phys_limit)) - swiotlb_init(1); - else - swiotlb_force = SWIOTLB_NO_FORCE; -#endif + swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE); high_memory = (void *)(__va(PFN_PHYS(max_low_pfn))); memblock_free_all(); diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 86ffd0d51fd59..6fb6bf64326f9 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -185,8 +185,7 @@ static void pv_init(void) return; /* make sure bounce buffers are shared */ - swiotlb_force = SWIOTLB_FORCE; - swiotlb_init(1); + swiotlb_init(true, SWIOTLB_FORCE | SWIOTLB_VERBOSE); swiotlb_update_mem_attributes(); } diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c index 04140e20ef1a3..a705a199bf8a3 100644 --- a/arch/x86/kernel/pci-dma.c +++ b/arch/x86/kernel/pci-dma.c @@ -39,6 +39,7 @@ int iommu_detected __read_mostly = 0; #ifdef CONFIG_SWIOTLB bool x86_swiotlb_enable; +static unsigned int x86_swiotlb_flags; static void __init pci_swiotlb_detect(void) { @@ -58,16 +59,16 @@ static void __init pci_swiotlb_detect(void) * bounce buffers as the hypervisor can't access arbitrary VM memory * that is not explicitly shared with it. 
*/ - if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) - swiotlb_force = SWIOTLB_FORCE; - - if (swiotlb_force == SWIOTLB_FORCE) + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) { x86_swiotlb_enable = true; + x86_swiotlb_flags |= SWIOTLB_FORCE; + } } #else static inline void __init pci_swiotlb_detect(void) { } +#define x86_swiotlb_flags 0 #endif /* CONFIG_SWIOTLB */ #ifdef CONFIG_SWIOTLB_XEN @@ -75,8 +76,7 @@ static bool xen_swiotlb; static void __init pci_xen_swiotlb_init(void) { - if (!xen_initial_domain() && !x86_swiotlb_enable && - swiotlb_force != SWIOTLB_FORCE) + if (!xen_initial_domain() && !x86_swiotlb_enable) return; x86_swiotlb_enable = true; xen_swiotlb = true; @@ -120,8 +120,7 @@ void __init pci_iommu_alloc(void) gart_iommu_hole_init(); amd_iommu_detect(); detect_intel_iommu(); - if (x86_swiotlb_enable) - swiotlb_init(0); + swiotlb_init(x86_swiotlb_enable, x86_swiotlb_flags); } /* diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 485cd06ed39e7..c2da3eb4826e8 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -251,7 +251,7 @@ void __init xen_swiotlb_init_early(void) panic("%s (rc:%d)", xen_swiotlb_error(XEN_SWIOTLB_EFIXUP), rc); } - if (swiotlb_init_with_tbl(start, nslabs, true)) + if (swiotlb_init_with_tbl(start, nslabs, SWIOTLB_VERBOSE)) panic("Cannot allocate SWIOTLB buffer"); } #endif /* CONFIG_X86 */ @@ -376,7 +376,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page, /* * Oh well, have to allocate and map a bounce buffer. */ - trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force); + trace_swiotlb_bounced(dev, dev_addr, size); map = swiotlb_tbl_map_single(dev, phys, size, size, 0, dir, attrs); if (map == (phys_addr_t)DMA_MAPPING_ERROR) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index b48b26bfa0edb..ae0407173e845 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -13,11 +13,8 @@ struct device; struct page; struct scatterlist; -enum swiotlb_force { - SWIOTLB_NORMAL, /* Default - depending on HW DMA mask etc. 
*/ - SWIOTLB_FORCE, /* swiotlb=force */ - SWIOTLB_NO_FORCE, /* swiotlb=noforce */ -}; +#define SWIOTLB_VERBOSE (1 << 0) /* verbose initialization */ +#define SWIOTLB_FORCE (1 << 1) /* force bounce buffering */ /* * Maximum allowable number of contiguous slabs to map, @@ -36,8 +33,7 @@ enum swiotlb_force { /* default to 64MB */ #define IO_TLB_DEFAULT_SIZE (64UL<<20) -extern void swiotlb_init(int verbose); -int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose); +int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, unsigned int flags); unsigned long swiotlb_size_or_default(void); extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs); int swiotlb_init_late(size_t size); @@ -126,13 +122,16 @@ static inline bool is_swiotlb_force_bounce(struct device *dev) return mem && mem->force_bounce; } +void swiotlb_init(bool addressing_limited, unsigned int flags); void __init swiotlb_exit(void); unsigned int swiotlb_max_segment(void); size_t swiotlb_max_mapping_size(struct device *dev); bool is_swiotlb_active(struct device *dev); void __init swiotlb_adjust_size(unsigned long size); #else -#define swiotlb_force SWIOTLB_NO_FORCE +static inline void swiotlb_init(bool addressing_limited, unsigned int flags) +{ +} static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) { return false; diff --git a/include/trace/events/swiotlb.h b/include/trace/events/swiotlb.h index 705be43b71ab0..da05c9ebd224a 100644 --- a/include/trace/events/swiotlb.h +++ b/include/trace/events/swiotlb.h @@ -8,20 +8,15 @@ #include TRACE_EVENT(swiotlb_bounced, - - TP_PROTO(struct device *dev, - dma_addr_t dev_addr, - size_t size, - enum swiotlb_force swiotlb_force), - - TP_ARGS(dev, dev_addr, size, swiotlb_force), + TP_PROTO(struct device *dev, dma_addr_t dev_addr, size_t size), + TP_ARGS(dev, dev_addr, size), TP_STRUCT__entry( - __string( dev_name, dev_name(dev) ) - __field( u64, dma_mask ) - __field( dma_addr_t, dev_addr ) - __field( size_t, size ) - __field( enum swiotlb_force, swiotlb_force ) + __string(dev_name, dev_name(dev)) + __field(u64, dma_mask) + __field(dma_addr_t, dev_addr) + __field(size_t, size) + __field(bool, force) ), TP_fast_assign( @@ -29,19 +24,15 @@ TRACE_EVENT(swiotlb_bounced, __entry->dma_mask = (dev->dma_mask ? *dev->dma_mask : 0); __entry->dev_addr = dev_addr; __entry->size = size; - __entry->swiotlb_force = swiotlb_force; + __entry->force = is_swiotlb_force_bounce(dev); ), - TP_printk("dev_name: %s dma_mask=%llx dev_addr=%llx " - "size=%zu %s", + TP_printk("dev_name: %s dma_mask=%llx dev_addr=%llx size=%zu %s", __get_str(dev_name), __entry->dma_mask, (unsigned long long)__entry->dev_addr, __entry->size, - __print_symbolic(__entry->swiotlb_force, - { SWIOTLB_NORMAL, "NORMAL" }, - { SWIOTLB_FORCE, "FORCE" }, - { SWIOTLB_NO_FORCE, "NO_FORCE" })) + __entry->force ? 
"FORCE" : "NORMAL") ); #endif /* _TRACE_SWIOTLB_H */ diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 14e08fa9621c2..5ac6e128d4279 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -62,7 +62,8 @@ #define INVALID_PHYS_ADDR (~(phys_addr_t)0) -enum swiotlb_force swiotlb_force; +static bool swiotlb_force_bounce; +static bool swiotlb_force_disable; struct io_tlb_mem io_tlb_default_mem; @@ -81,9 +82,9 @@ setup_io_tlb_npages(char *str) if (*str == ',') ++str; if (!strcmp(str, "force")) - swiotlb_force = SWIOTLB_FORCE; + swiotlb_force_bounce = true; else if (!strcmp(str, "noforce")) - swiotlb_force = SWIOTLB_NO_FORCE; + swiotlb_force_disable = true; return 0; } @@ -202,7 +203,7 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start, mem->index = 0; mem->late_alloc = late_alloc; - if (swiotlb_force == SWIOTLB_FORCE) + if (swiotlb_force_bounce) mem->force_bounce = true; spin_lock_init(&mem->lock); @@ -224,12 +225,13 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start, return; } -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, + unsigned int flags) { struct io_tlb_mem *mem = &io_tlb_default_mem; size_t alloc_size; - if (swiotlb_force == SWIOTLB_NO_FORCE) + if (swiotlb_force_disable) return 0; /* protect against double initialization */ @@ -243,8 +245,9 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) __func__, alloc_size, PAGE_SIZE); swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false); + mem->force_bounce = flags & SWIOTLB_FORCE; - if (verbose) + if (flags & SWIOTLB_VERBOSE) swiotlb_print_info(); return 0; } @@ -253,20 +256,21 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) * Statically reserve bounce buffer space and initialize bounce buffer data * structures for the software IO TLB used to implement the DMA API. 
*/ -void __init -swiotlb_init(int verbose) +void __init swiotlb_init(bool addressing_limit, unsigned int flags) { size_t bytes = PAGE_ALIGN(default_nslabs << IO_TLB_SHIFT); void *tlb; - if (swiotlb_force == SWIOTLB_NO_FORCE) + if (!addressing_limit && !swiotlb_force_bounce) + return; + if (swiotlb_force_disable) return; /* Get IO TLB memory from the low pages */ tlb = memblock_alloc_low(bytes, PAGE_SIZE); if (!tlb) goto fail; - if (swiotlb_init_with_tbl(tlb, default_nslabs, verbose)) + if (swiotlb_init_with_tbl(tlb, default_nslabs, flags)) goto fail_free_mem; return; @@ -289,7 +293,7 @@ int swiotlb_init_late(size_t size) unsigned int order; int rc = 0; - if (swiotlb_force == SWIOTLB_NO_FORCE) + if (swiotlb_force_disable) return 0; /* @@ -328,7 +332,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long bytes = nslabs << IO_TLB_SHIFT; - if (swiotlb_force == SWIOTLB_NO_FORCE) + if (swiotlb_force_disable) return 0; /* protect against double initialization */ @@ -353,7 +357,7 @@ void __init swiotlb_exit(void) unsigned long tbl_vaddr; size_t tbl_size, slots_size; - if (swiotlb_force == SWIOTLB_FORCE) + if (swiotlb_force_bounce) return; if (!mem->nslabs) @@ -699,8 +703,7 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size, phys_addr_t swiotlb_addr; dma_addr_t dma_addr; - trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size, - swiotlb_force); + trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size); swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, 0, dir, attrs); From patchwork Mon Mar 14 07:31:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604930 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=tK68Ijrd; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7Y23hqYz9sGD for ; Mon, 14 Mar 2022 18:32:38 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236699AbiCNHdp (ORCPT ); Mon, 14 Mar 2022 03:33:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47614 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236714AbiCNHdm (ORCPT ); Mon, 14 Mar 2022 03:33:42 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A167040E73; Mon, 14 Mar 2022 00:32:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=IbJQxaVGsLOnuLApC1W6u9Iku9THkdOh49hhwsQBBRk=; b=tK68Ijrd1/cIsna5xphlbFdXSm bHrkhGEpmsnXwro/DU5nR3bLwfDyCWnvMPwb7vBV8J1Fv4qBsXkvt3HSuzNOknQP/Bl7Nemjpf3xv DYgNvl3HJaonwptS1EPU66gRvbaA2crWzzg9wm+EvcWdYs3AbMAnn0vqYHhoGrHc/gmf9j7/3JSq+ 
+no0w2XcZ2J2w98/AXCIYn2UAbl4yhmkACP0pBjdJ81us1qFAwMlX8raVh4T05PKlU7HqHOMVUXtL LaxHbPOytLGhwb6TecxqYTd0uCP66Zkqa13jXXli492+7N+edkiDapIRinfy6WFxzQ/xynCz4My08 ScXHujgg==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBU-0044l6-Pr; Mon, 14 Mar 2022 07:32:05 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org Subject: [PATCH 10/15] swiotlb: add a SWIOTLB_ANY flag to lift the low memory restriction Date: Mon, 14 Mar 2022 08:31:24 +0100 Message-Id: <20220314073129.1862284-11-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Power SVM wants to allocate a swiotlb buffer that is not restricted to low memory for the trusted hypervisor scheme. Consolidate the support for this into the swiotlb_init interface by adding a new flag. Signed-off-by: Christoph Hellwig --- arch/powerpc/include/asm/svm.h | 4 ---- arch/powerpc/include/asm/swiotlb.h | 1 + arch/powerpc/kernel/dma-swiotlb.c | 1 + arch/powerpc/mm/mem.c | 5 +---- arch/powerpc/platforms/pseries/svm.c | 26 +------------------------- include/linux/swiotlb.h | 1 + kernel/dma/swiotlb.c | 11 +++++++++-- 7 files changed, 14 insertions(+), 35 deletions(-) diff --git a/arch/powerpc/include/asm/svm.h b/arch/powerpc/include/asm/svm.h index 7546402d796af..85580b30aba48 100644 --- a/arch/powerpc/include/asm/svm.h +++ b/arch/powerpc/include/asm/svm.h @@ -15,8 +15,6 @@ static inline bool is_secure_guest(void) return mfmsr() & MSR_S; } -void __init svm_swiotlb_init(void); - void dtl_cache_ctor(void *addr); #define get_dtl_cache_ctor() (is_secure_guest() ? 
dtl_cache_ctor : NULL) @@ -27,8 +25,6 @@ static inline bool is_secure_guest(void) return false; } -static inline void svm_swiotlb_init(void) {} - #define get_dtl_cache_ctor() NULL #endif /* CONFIG_PPC_SVM */ diff --git a/arch/powerpc/include/asm/swiotlb.h b/arch/powerpc/include/asm/swiotlb.h index 3c1a1cd161286..4203b5e0a88ed 100644 --- a/arch/powerpc/include/asm/swiotlb.h +++ b/arch/powerpc/include/asm/swiotlb.h @@ -9,6 +9,7 @@ #include extern unsigned int ppc_swiotlb_enable; +extern unsigned int ppc_swiotlb_flags; #ifdef CONFIG_SWIOTLB void swiotlb_detect_4g(void); diff --git a/arch/powerpc/kernel/dma-swiotlb.c b/arch/powerpc/kernel/dma-swiotlb.c index fc7816126a401..ba256c37bcc0f 100644 --- a/arch/powerpc/kernel/dma-swiotlb.c +++ b/arch/powerpc/kernel/dma-swiotlb.c @@ -10,6 +10,7 @@ #include unsigned int ppc_swiotlb_enable; +unsigned int ppc_swiotlb_flags; void __init swiotlb_detect_4g(void) { diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index e1519e2edc656..a4d65418c30a9 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -249,10 +249,7 @@ void __init mem_init(void) * back to to-down. */ memblock_set_bottom_up(true); - if (is_secure_guest()) - svm_swiotlb_init(); - else - swiotlb_init(ppc_swiotlb_enable, 0); + swiotlb_init(ppc_swiotlb_enable, ppc_swiotlb_flags); #endif high_memory = (void *) __va(max_low_pfn * PAGE_SIZE); diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c index c5228f4969eb2..3b4045d508ec8 100644 --- a/arch/powerpc/platforms/pseries/svm.c +++ b/arch/powerpc/platforms/pseries/svm.c @@ -28,7 +28,7 @@ static int __init init_svm(void) * need to use the SWIOTLB buffer for DMA even if dma_capable() says * otherwise. */ - swiotlb_force = SWIOTLB_FORCE; + ppc_swiotlb_flags |= SWIOTLB_ANY | SWIOTLB_FORCE; /* Share the SWIOTLB buffer with the host. */ swiotlb_update_mem_attributes(); @@ -37,30 +37,6 @@ static int __init init_svm(void) } machine_early_initcall(pseries, init_svm); -/* - * Initialize SWIOTLB. Essentially the same as swiotlb_init(), except that it - * can allocate the buffer anywhere in memory. Since the hypervisor doesn't have - * any addressing limitation, we don't need to allocate it in low addresses. 
- */ -void __init svm_swiotlb_init(void) -{ - unsigned char *vstart; - unsigned long bytes, io_tlb_nslabs; - - io_tlb_nslabs = (swiotlb_size_or_default() >> IO_TLB_SHIFT); - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); - - bytes = io_tlb_nslabs << IO_TLB_SHIFT; - - vstart = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE); - if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false)) - return; - - - memblock_free(vstart, PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT)); - panic("SVM: Cannot allocate SWIOTLB buffer"); -} - int set_memory_encrypted(unsigned long addr, int numpages) { if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT)) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index ae0407173e845..eabdd89987027 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -15,6 +15,7 @@ struct scatterlist; #define SWIOTLB_VERBOSE (1 << 0) /* verbose initialization */ #define SWIOTLB_FORCE (1 << 1) /* force bounce buffering */ +#define SWIOTLB_ANY (1 << 2) /* allow any memory for the buffer */ /* * Maximum allowable number of contiguous slabs to map, diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 5ac6e128d4279..2ad12562c94fe 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -266,8 +266,15 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags) if (swiotlb_force_disable) return; - /* Get IO TLB memory from the low pages */ - tlb = memblock_alloc_low(bytes, PAGE_SIZE); + /* + * By default allocate the bounce buffer memory from low memory, but + * allow to pick a location everywhere for hypervisors with guest + * memory encryption. + */ + if (flags & SWIOTLB_ANY) + tlb = memblock_alloc(bytes, PAGE_SIZE); + else + tlb = memblock_alloc_low(bytes, PAGE_SIZE); if (!tlb) goto fail; if (swiotlb_init_with_tbl(tlb, default_nslabs, flags)) From patchwork Mon Mar 14 07:31:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604933 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=T6Y3N/uU; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7YB4ymnz9sGD for ; Mon, 14 Mar 2022 18:32:46 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236672AbiCNHdx (ORCPT ); Mon, 14 Mar 2022 03:33:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47836 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236718AbiCNHdm (ORCPT ); Mon, 14 Mar 2022 03:33:42 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1244C40A39; Mon, 14 Mar 2022 00:32:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; 
bh=z/er+v+IsLn/3DGDcWTENkiYhlg652Y98Hj6qBpRkNM=; b=T6Y3N/uUE3h/14YStboRzoVFc/ 9GfPPPiQd9PwYJqMRTOm0smey062UGJjZdeQZ9x0qWOemaH54g1ClNwdlxvpRhwgYnXiW5KIAxr/K 0ViIWjUq2r0uqQwCJOnDObCWM2sgMkrRm+yHUkWtw73Q3liFGhUVwItXaqZymyO6/dwPOeEuZGgic vt7rZvwQsgvigoSxECM40cJSIH9vvMz5ucLySpLwPPc8PaMJsrw6tdkHiuxSJZFXbg98u8YA2zmsH 4lidVtIkikm+amhl35GxiJRKhzpKdfqaC0KZ0xADQbJRGXRScG3/pvHuAePMU0dudcrXIJ0h7PCMq JLagJkvA==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBX-0044nI-NE; Mon, 14 Mar 2022 07:32:08 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org Subject: [PATCH 11/15] swiotlb: pass a gfp_mask argument to swiotlb_init_late Date: Mon, 14 Mar 2022 08:31:25 +0100 Message-Id: <20220314073129.1862284-12-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Let the caller chose a zone to allocate from. This will be used later on by the xen-swiotlb initialization on arm. 
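As a usage illustration (not part of this patch): a late caller with a 32-bit addressing limit now picks the zone itself, roughly like the hypothetical helper below; the sta2x11 fixup in the hunk that follows passes GFP_DMA for the same reason.

#include <linux/gfp.h>
#include <linux/swiotlb.h>

/*
 * Hypothetical late bounce-buffer setup for a platform whose devices can
 * only reach the low 4 GB.  swiotlb_init_late() and its new gfp_t argument
 * come from this patch; the helper itself is only an illustration.
 */
static int __init example_swiotlb_late_setup(void)
{
	size_t size = 64UL << 20;	/* 64 MB, the swiotlb default size */

	/* Pick the zone that matches the addressing limit. */
	gfp_t gfp = IS_ENABLED(CONFIG_ZONE_DMA32) ? GFP_DMA32 : GFP_DMA;

	return swiotlb_init_late(size, gfp);
}
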
Signed-off-by: Christoph Hellwig Reviewed-by: Anshuman Khandual --- arch/x86/pci/sta2x11-fixup.c | 2 +- include/linux/swiotlb.h | 2 +- kernel/dma/swiotlb.c | 7 ++----- 3 files changed, 4 insertions(+), 7 deletions(-) diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c index e0c039a75b2db..c7e6faf59a861 100644 --- a/arch/x86/pci/sta2x11-fixup.c +++ b/arch/x86/pci/sta2x11-fixup.c @@ -57,7 +57,7 @@ static void sta2x11_new_instance(struct pci_dev *pdev) int size = STA2X11_SWIOTLB_SIZE; /* First instance: register your own swiotlb area */ dev_info(&pdev->dev, "Using SWIOTLB (size %i)\n", size); - if (swiotlb_init_late(size)) + if (swiotlb_init_late(size, GFP_DMA)) dev_emerg(&pdev->dev, "init swiotlb failed\n"); } list_add(&instance->list, &sta2x11_instance_list); diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index eabdd89987027..ee655f2e4d28b 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -37,7 +37,7 @@ struct scatterlist; int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, unsigned int flags); unsigned long swiotlb_size_or_default(void); extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs); -int swiotlb_init_late(size_t size); +int swiotlb_init_late(size_t size, gfp_t gfp_mask); extern void __init swiotlb_update_mem_attributes(void); phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys, diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 2ad12562c94fe..79641c446d284 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -292,7 +292,7 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags) * initialize the swiotlb later using the slab allocator if needed. * This should be just like above, but with some error catching. 
*/ -int swiotlb_init_late(size_t size) +int swiotlb_init_late(size_t size, gfp_t gfp_mask) { unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE); unsigned long bytes; @@ -303,15 +303,12 @@ int swiotlb_init_late(size_t size) if (swiotlb_force_disable) return 0; - /* - * Get IO TLB memory from the low pages - */ order = get_order(nslabs << IO_TLB_SHIFT); nslabs = SLABS_PER_PAGE << order; bytes = nslabs << IO_TLB_SHIFT; while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) { - vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN, + vstart = (void *)__get_free_pages(gfp_mask | __GFP_NOWARN, order); if (vstart) break; From patchwork Mon Mar 14 07:31:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604934 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=ENNItxcd; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7YC4vpmz9sGD for ; Mon, 14 Mar 2022 18:32:47 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236702AbiCNHdx (ORCPT ); Mon, 14 Mar 2022 03:33:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47838 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236719AbiCNHdm (ORCPT ); Mon, 14 Mar 2022 03:33:42 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5722040E7C; Mon, 14 Mar 2022 00:32:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=tCQPVGkfrukt4Dhgd/m8K7XGtVrl0r8X20GxVdNHixk=; b=ENNItxcdsAagWAfjgj/8+y982H gh50MENeg/ax/OS7Fov6li0O5SQnNLVr0BqfVzuuzju/FcYqJlRTQJPJL3BrzIDsJO3ks0LS+ILtS CkJEYoIwpoL6ptQvzrS2W+46fwBq3U2+fJ6IoE5tCVcnuI0z+6fYNN+VOkwK8P6ZsBEnNHweD8G76 JXrWKYiQ++5QYu2v027qSoahDtSHnSCymKyy1r1TMGpw067rLd/NUVSZmMK515LT0f9U5pLn9XuEd SUNEsRc/aLRyqWwRUpskuuDHXD0b4X4VXD3Y3IYvp1A+0MPW28KuwT+adBEIHIzoAtbobOA8f0sNd 7/N4x57g==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBa-0044pG-D4; Mon, 14 Mar 2022 07:32:11 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org Subject: [PATCH 12/15] swiotlb: provide swiotlb_init variants that 
remap the buffer Date: Mon, 14 Mar 2022 08:31:26 +0100 Message-Id: <20220314073129.1862284-13-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org To shared more code between swiotlb and xen-swiotlb, offer a swiotlb_init_remap interface and add a remap callback to swiotlb_init_late that will allow Xen to remap the buffer the buffer without duplicating much of the logic. Signed-off-by: Christoph Hellwig Signed-off-by: Christoph Hellwig --- arch/x86/pci/sta2x11-fixup.c | 2 +- include/linux/swiotlb.h | 5 ++++- kernel/dma/swiotlb.c | 38 +++++++++++++++++++++++++++++++++--- 3 files changed, 40 insertions(+), 5 deletions(-) diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c index c7e6faf59a861..7368afc039987 100644 --- a/arch/x86/pci/sta2x11-fixup.c +++ b/arch/x86/pci/sta2x11-fixup.c @@ -57,7 +57,7 @@ static void sta2x11_new_instance(struct pci_dev *pdev) int size = STA2X11_SWIOTLB_SIZE; /* First instance: register your own swiotlb area */ dev_info(&pdev->dev, "Using SWIOTLB (size %i)\n", size); - if (swiotlb_init_late(size, GFP_DMA)) + if (swiotlb_init_late(size, GFP_DMA, NULL)) dev_emerg(&pdev->dev, "init swiotlb failed\n"); } list_add(&instance->list, &sta2x11_instance_list); diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index ee655f2e4d28b..7b50c82f84ce9 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -36,8 +36,11 @@ struct scatterlist; int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, unsigned int flags); unsigned long swiotlb_size_or_default(void); +void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags, + int (*remap)(void *tlb, unsigned long nslabs)); +int swiotlb_init_late(size_t size, gfp_t gfp_mask, + int (*remap)(void *tlb, unsigned long nslabs)); extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs); -int swiotlb_init_late(size_t size, gfp_t gfp_mask); extern void __init swiotlb_update_mem_attributes(void); phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys, diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 79641c446d284..88ea7b9bce6e9 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -256,9 +256,11 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, * Statically reserve bounce buffer space and initialize bounce buffer data * structures for the software IO TLB used to implement the DMA API. 
*/ -void __init swiotlb_init(bool addressing_limit, unsigned int flags) +void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags, + int (*remap)(void *tlb, unsigned long nslabs)) { - size_t bytes = PAGE_ALIGN(default_nslabs << IO_TLB_SHIFT); + unsigned long nslabs = default_nslabs; + size_t bytes; void *tlb; if (!addressing_limit && !swiotlb_force_bounce) @@ -271,12 +273,24 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags) * allow to pick a location everywhere for hypervisors with guest * memory encryption. */ +retry: + bytes = PAGE_ALIGN(default_nslabs << IO_TLB_SHIFT); if (flags & SWIOTLB_ANY) tlb = memblock_alloc(bytes, PAGE_SIZE); else tlb = memblock_alloc_low(bytes, PAGE_SIZE); if (!tlb) goto fail; + if (remap && remap(tlb, nslabs) < 0) { + memblock_free(tlb, PAGE_ALIGN(bytes)); + + /* Min is 2MB */ + if (nslabs <= 1024) + panic("%s: Failed to remap %zu bytes\n", + __func__, bytes); + nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE)); + goto retry; + } if (swiotlb_init_with_tbl(tlb, default_nslabs, flags)) goto fail_free_mem; return; @@ -287,12 +301,18 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags) pr_warn("Cannot allocate buffer"); } +void __init swiotlb_init(bool addressing_limit, unsigned int flags) +{ + return swiotlb_init_remap(addressing_limit, flags, NULL); +} + /* * Systems with larger DMA zones (those that don't support ISA) can * initialize the swiotlb later using the slab allocator if needed. * This should be just like above, but with some error catching. */ -int swiotlb_init_late(size_t size, gfp_t gfp_mask) +int swiotlb_init_late(size_t size, gfp_t gfp_mask, + int (*remap)(void *tlb, unsigned long nslabs)) { unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE); unsigned long bytes; @@ -303,6 +323,7 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask) if (swiotlb_force_disable) return 0; +retry: order = get_order(nslabs << IO_TLB_SHIFT); nslabs = SLABS_PER_PAGE << order; bytes = nslabs << IO_TLB_SHIFT; @@ -317,6 +338,17 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask) if (!vstart) return -ENOMEM; + if (remap) + rc = remap(vstart, nslabs); + if (rc) { + free_pages((unsigned long)vstart, order); + + /* Min is 2MB */ + if (nslabs <= 1024) + return rc; + nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE)); + goto retry; + } if (order != get_order(bytes)) { pr_warn("only able to allocate %ld MB\n", From patchwork Mon Mar 14 07:31:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604935 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=KGIzp2JL; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7YF6hzLz9sGD for ; Mon, 14 Mar 2022 18:32:49 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236714AbiCNHdz (ORCPT ); Mon, 14 Mar 2022 03:33:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47610 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236722AbiCNHdm (ORCPT ); Mon, 14 Mar 2022 03:33:42 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DD73940E7E; Mon, 14 Mar 2022 00:32:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=1FAMfRRw2aEYg6ugW17iKWYRmxGcWkT9OJlmJuUFHKk=; b=KGIzp2JLMsNz/9/O53m5dQCL/i 9BIvOegjPn8f4pkgr2+hRsdTiUliDURRbQCNFxlTB2u18aoIHj/u4p9P3yh7cnSwXDZ98ZmeE/LwQ VFFc7Kbhx+orXeTs7dfJKtCujBIryxvTOrmg3tHPjhJwb19T9fzrx+e+AqbpQ5q6G7p2BOLrBeHJQ 9LvmnG6yYRQx0G0DIAr/9X5lQknf2sP+SEYD9Ev2Y38Hc6Ny6PG0zj/k8R334QV87OMF3fuyGaw0U QmxV/6ZhNQ8teCnAONpK494UOjRP0mAO1NIbxgUgHTLV6tBzl1LnrTBHEe9f0nmyeNvpEDFAhvbFA WTzyb3Mg==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBd-0044s6-8e; Mon, 14 Mar 2022 07:32:13 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org Subject: [PATCH 13/15] swiotlb: merge swiotlb-xen initialization into swiotlb Date: Mon, 14 Mar 2022 08:31:27 +0100 Message-Id: <20220314073129.1862284-14-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Reuse the generic swiotlb initialization for xen-swiotlb. For ARM/ARM64 this works trivially, while for x86 xen_swiotlb_fixup needs to be passed as the remap argument to swiotlb_init_remap/swiotlb_init_late. 
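Condensed from the hunks below, the contract of the remap callback looks like this (a sketch with an illustrative callback, not the literal patch; the real callback here is xen_swiotlb_fixup):

#include <linux/swiotlb.h>

/*
 * Illustrative remap callback: it is handed the freshly allocated buffer
 * and may exchange or re-map it (Xen swaps it for machine-contiguous
 * memory below 4GB).  A negative return makes the swiotlb core free the
 * buffer, shrink the slab count (down to the 2MB minimum) and retry.
 */
static int __init example_remap(void *tlb, unsigned long nslabs)
{
	return 0;	/* 0 means the buffer is accepted as-is */
}

static void __init example_arch_swiotlb_init(void)
{
	/* early boot: addressing limited, default flags */
	swiotlb_init_remap(true, SWIOTLB_VERBOSE, example_remap);
}
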
Signed-off-by: Christoph Hellwig Reviewed-by: Stefano Stabellini --- arch/arm/xen/mm.c | 21 +++--- arch/x86/include/asm/xen/page.h | 5 -- arch/x86/kernel/pci-dma.c | 19 +++-- drivers/xen/swiotlb-xen.c | 128 +------------------------------- include/xen/arm/page.h | 1 - include/xen/swiotlb-xen.h | 8 +- 6 files changed, 27 insertions(+), 155 deletions(-) diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c index 28c2070602535..ff05a7899cb86 100644 --- a/arch/arm/xen/mm.c +++ b/arch/arm/xen/mm.c @@ -23,22 +23,20 @@ #include #include -unsigned long xen_get_swiotlb_free_pages(unsigned int order) +static gfp_t xen_swiotlb_gfp(void) { phys_addr_t base; - gfp_t flags = __GFP_NOWARN|__GFP_KSWAPD_RECLAIM; u64 i; for_each_mem_range(i, &base, NULL) { if (base < (phys_addr_t)0xffffffff) { if (IS_ENABLED(CONFIG_ZONE_DMA32)) - flags |= __GFP_DMA32; - else - flags |= __GFP_DMA; - break; + return __GFP_DMA32; + return __GFP_DMA; } } - return __get_free_pages(flags, order); + + return GFP_KERNEL; } static bool hypercall_cflush = false; @@ -140,10 +138,13 @@ static int __init xen_mm_init(void) if (!xen_swiotlb_detect()) return 0; - rc = xen_swiotlb_init(); /* we can work with the default swiotlb */ - if (rc < 0 && rc != -EEXIST) - return rc; + if (!io_tlb_default_mem.nslabs) { + rc = swiotlb_init_late(swiotlb_size_or_default(), + xen_swiotlb_gfp(), NULL); + if (rc < 0) + return rc; + } cflush.op = 0; cflush.a.dev_bus_addr = 0; diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h index e989bc2269f54..1fc67df500145 100644 --- a/arch/x86/include/asm/xen/page.h +++ b/arch/x86/include/asm/xen/page.h @@ -357,9 +357,4 @@ static inline bool xen_arch_need_swiotlb(struct device *dev, return false; } -static inline unsigned long xen_get_swiotlb_free_pages(unsigned int order) -{ - return __get_free_pages(__GFP_NOWARN, order); -} - #endif /* _ASM_X86_XEN_PAGE_H */ diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c index a705a199bf8a3..dbb7b83fc3e48 100644 --- a/arch/x86/kernel/pci-dma.c +++ b/arch/x86/kernel/pci-dma.c @@ -72,15 +72,12 @@ static inline void __init pci_swiotlb_detect(void) #endif /* CONFIG_SWIOTLB */ #ifdef CONFIG_SWIOTLB_XEN -static bool xen_swiotlb; - static void __init pci_xen_swiotlb_init(void) { if (!xen_initial_domain() && !x86_swiotlb_enable) return; x86_swiotlb_enable = true; - xen_swiotlb = true; - xen_swiotlb_init_early(); + swiotlb_init_remap(true, x86_swiotlb_flags, xen_swiotlb_fixup); dma_ops = &xen_swiotlb_dma_ops; if (IS_ENABLED(CONFIG_PCI)) pci_request_acs(); @@ -88,14 +85,16 @@ static void __init pci_xen_swiotlb_init(void) int pci_xen_swiotlb_init_late(void) { - int rc; - - if (xen_swiotlb) + if (dma_ops == &xen_swiotlb_dma_ops) return 0; - rc = xen_swiotlb_init(); - if (rc) - return rc; + /* we can work with the default swiotlb */ + if (!io_tlb_default_mem.nslabs) { + int rc = swiotlb_init_late(swiotlb_size_or_default(), + GFP_KERNEL, xen_swiotlb_fixup); + if (rc < 0) + return rc; + } /* XXX: this switches the dma ops under live devices! 
*/ dma_ops = &xen_swiotlb_dma_ops; diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index c2da3eb4826e8..df8085b50df10 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -104,7 +104,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr) return 0; } -static int xen_swiotlb_fixup(void *buf, unsigned long nslabs) +int xen_swiotlb_fixup(void *buf, unsigned long nslabs) { int rc; unsigned int order = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT); @@ -130,132 +130,6 @@ static int xen_swiotlb_fixup(void *buf, unsigned long nslabs) return 0; } -enum xen_swiotlb_err { - XEN_SWIOTLB_UNKNOWN = 0, - XEN_SWIOTLB_ENOMEM, - XEN_SWIOTLB_EFIXUP -}; - -static const char *xen_swiotlb_error(enum xen_swiotlb_err err) -{ - switch (err) { - case XEN_SWIOTLB_ENOMEM: - return "Cannot allocate Xen-SWIOTLB buffer\n"; - case XEN_SWIOTLB_EFIXUP: - return "Failed to get contiguous memory for DMA from Xen!\n"\ - "You either: don't have the permissions, do not have"\ - " enough free memory under 4GB, or the hypervisor memory"\ - " is too fragmented!"; - default: - break; - } - return ""; -} - -int xen_swiotlb_init(void) -{ - enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN; - unsigned long bytes = swiotlb_size_or_default(); - unsigned long nslabs = bytes >> IO_TLB_SHIFT; - unsigned int order, repeat = 3; - int rc = -ENOMEM; - char *start; - - if (io_tlb_default_mem.nslabs) { - pr_warn("swiotlb buffer already initialized\n"); - return -EEXIST; - } - -retry: - m_ret = XEN_SWIOTLB_ENOMEM; - order = get_order(bytes); - - /* - * Get IO TLB memory from any location. - */ -#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT)) -#define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT) - while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) { - start = (void *)xen_get_swiotlb_free_pages(order); - if (start) - break; - order--; - } - if (!start) - goto exit; - if (order != get_order(bytes)) { - pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n", - (PAGE_SIZE << order) >> 20); - nslabs = SLABS_PER_PAGE << order; - bytes = nslabs << IO_TLB_SHIFT; - } - - /* - * And replace that memory with pages under 4GB. - */ - rc = xen_swiotlb_fixup(start, nslabs); - if (rc) { - free_pages((unsigned long)start, order); - m_ret = XEN_SWIOTLB_EFIXUP; - goto error; - } - rc = swiotlb_late_init_with_tbl(start, nslabs); - if (rc) - return rc; - return 0; -error: - if (nslabs > 1024 && repeat--) { - /* Min is 2MB */ - nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE)); - bytes = nslabs << IO_TLB_SHIFT; - pr_info("Lowering to %luMB\n", bytes >> 20); - goto retry; - } -exit: - pr_err("%s (rc:%d)\n", xen_swiotlb_error(m_ret), rc); - return rc; -} - -#ifdef CONFIG_X86 -void __init xen_swiotlb_init_early(void) -{ - unsigned long bytes = swiotlb_size_or_default(); - unsigned long nslabs = bytes >> IO_TLB_SHIFT; - unsigned int repeat = 3; - char *start; - int rc; - -retry: - /* - * Get IO TLB memory from any location. - */ - start = memblock_alloc(PAGE_ALIGN(bytes), - IO_TLB_SEGSIZE << IO_TLB_SHIFT); - if (!start) - panic("%s: Failed to allocate %lu bytes\n", - __func__, PAGE_ALIGN(bytes)); - - /* - * And replace that memory with pages under 4GB. 
- */ - rc = xen_swiotlb_fixup(start, nslabs); - if (rc) { - memblock_free(start, PAGE_ALIGN(bytes)); - if (nslabs > 1024 && repeat--) { - /* Min is 2MB */ - nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE)); - bytes = nslabs << IO_TLB_SHIFT; - pr_info("Lowering to %luMB\n", bytes >> 20); - goto retry; - } - panic("%s (rc:%d)", xen_swiotlb_error(XEN_SWIOTLB_EFIXUP), rc); - } - - if (swiotlb_init_with_tbl(start, nslabs, SWIOTLB_VERBOSE)) - panic("Cannot allocate SWIOTLB buffer"); -} -#endif /* CONFIG_X86 */ - static void * xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t flags, diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h index ac1b654705631..7e199c6656b90 100644 --- a/include/xen/arm/page.h +++ b/include/xen/arm/page.h @@ -115,6 +115,5 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn) bool xen_arch_need_swiotlb(struct device *dev, phys_addr_t phys, dma_addr_t dev_addr); -unsigned long xen_get_swiotlb_free_pages(unsigned int order); #endif /* _ASM_ARM_XEN_PAGE_H */ diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h index b3e647f86e3e2..590ceb923f0c8 100644 --- a/include/xen/swiotlb-xen.h +++ b/include/xen/swiotlb-xen.h @@ -10,8 +10,12 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle, void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle, size_t size, enum dma_data_direction dir); -int xen_swiotlb_init(void); -void __init xen_swiotlb_init_early(void); +#ifdef CONFIG_SWIOTLB_XEN +int xen_swiotlb_fixup(void *buf, unsigned long nslabs); +#else +#define xen_swiotlb_fixup NULL +#endif + extern const struct dma_map_ops xen_swiotlb_dma_ops; #endif /* __LINUX_SWIOTLB_XEN_H */ From patchwork Mon Mar 14 07:31:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604936 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=Nd0IXyZY; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7YV2KV2z9sGD for ; Mon, 14 Mar 2022 18:33:02 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236750AbiCNHeH (ORCPT ); Mon, 14 Mar 2022 03:34:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47604 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235153AbiCNHdm (ORCPT ); Mon, 14 Mar 2022 03:33:42 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 480D640E7F; Mon, 14 Mar 2022 00:32:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=dDhiwgQcAwGiyM5bV6HjqpPSPw1ZYuR5pzBsTUAIu7w=; b=Nd0IXyZYwxrXgIDJ9Qm0/nnu/F 
Mt9TAfkB75N48Ae3Up+Y0uwyHDdKoDRxWDLLJLP3GGrVVF3u7f3+tRpuvub8FjFBNd4aw/6KW0vUV VPbPrxqDRNkIFNT8McAKXUWPxG6Z+heTadwCb9udXXC9KJL2FmOQRZ31FOYGmkipgixZstyzqL2GA UFyG9Yk/FMzwJigMzLreiBJg1ov91gQNkSLz1gD2xxpL7EUvQPuBBFK/whVl0jfD3c8tmg7g/mcFB 79yNstFQ/NxJPPZSVTTKVF8QucE1PLouj0dsYSJktNDH0IbHUYXjYZJo9FuyvrefY6r8vr1wsZT/F ZCtU1P8A==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBg-0044uH-2E; Mon, 14 Mar 2022 07:32:16 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org Subject: [PATCH 14/15] swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl Date: Mon, 14 Mar 2022 08:31:28 +0100 Message-Id: <20220314073129.1862284-15-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org No users left. 
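For illustration only (not part of this patch): the users went away because every remaining setup path now goes through swiotlb_init_remap() at boot time or swiotlb_init_late() afterwards, optionally passing a remap callback that lets the caller exchange the freshly allocated buffer for memory the devices can actually reach, which is what xen_swiotlb_fixup() does in the Xen conversion earlier in this series. A condensed, simplified sketch of the resulting x86/Xen flow follows; error handling and the pci_request_acs() call are omitted, the sketch_* names are made up for the example, and x86_swiotlb_flags / dma_ops are the x86-internal globals seen in the diffs above:

    #include <linux/init.h>
    #include <linux/gfp.h>
    #include <linux/swiotlb.h>
    #include <xen/swiotlb-xen.h>

    /* early path: buffer comes from memblock, then is remapped below 4GB */
    static void __init sketch_pci_xen_swiotlb_init(void)
    {
            swiotlb_init_remap(true, x86_swiotlb_flags, xen_swiotlb_fixup);
            dma_ops = &xen_swiotlb_dma_ops;
    }

    /*
     * late path (e.g. a PCI frontend loaded after boot): reuse the default
     * buffer if one already exists, otherwise create one now.
     */
    static int sketch_pci_xen_swiotlb_init_late(void)
    {
            if (dma_ops == &xen_swiotlb_dma_ops)
                    return 0;
            if (!io_tlb_default_mem.nslabs) {
                    int rc = swiotlb_init_late(swiotlb_size_or_default(),
                                               GFP_KERNEL, xen_swiotlb_fixup);
                    if (rc < 0)
                            return rc;
            }
            dma_ops = &xen_swiotlb_dma_ops;
            return 0;
    }

The design point is that the remap hook replaces the old pattern of allocating a buffer in the caller and handing it to the *_with_tbl helpers: the core now owns allocation, retry and shrinking, and callers only describe how to post-process the buffer.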
Signed-off-by: Christoph Hellwig --- include/linux/swiotlb.h | 2 - kernel/dma/swiotlb.c | 85 +++++++++++++++-------------------------- 2 files changed, 30 insertions(+), 57 deletions(-) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 7b50c82f84ce9..7ed35dd3de6e7 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -34,13 +34,11 @@ struct scatterlist; /* default to 64MB */ #define IO_TLB_DEFAULT_SIZE (64UL<<20) -int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, unsigned int flags); unsigned long swiotlb_size_or_default(void); void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags, int (*remap)(void *tlb, unsigned long nslabs)); int swiotlb_init_late(size_t size, gfp_t gfp_mask, int (*remap)(void *tlb, unsigned long nslabs)); -extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs); extern void __init swiotlb_update_mem_attributes(void); phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys, diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 88ea7b9bce6e9..d04bacdb0905b 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -225,33 +225,6 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start, return; } -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, - unsigned int flags) -{ - struct io_tlb_mem *mem = &io_tlb_default_mem; - size_t alloc_size; - - if (swiotlb_force_disable) - return 0; - - /* protect against double initialization */ - if (WARN_ON_ONCE(mem->nslabs)) - return -ENOMEM; - - alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs)); - mem->slots = memblock_alloc(alloc_size, PAGE_SIZE); - if (!mem->slots) - panic("%s: Failed to allocate %zu bytes align=0x%lx\n", - __func__, alloc_size, PAGE_SIZE); - - swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false); - mem->force_bounce = flags & SWIOTLB_FORCE; - - if (flags & SWIOTLB_VERBOSE) - swiotlb_print_info(); - return 0; -} - /* * Statically reserve bounce buffer space and initialize bounce buffer data * structures for the software IO TLB used to implement the DMA API. 
@@ -259,7 +232,9 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags, int (*remap)(void *tlb, unsigned long nslabs)) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long nslabs = default_nslabs; + size_t alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs)); size_t bytes; void *tlb; @@ -280,7 +255,8 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags, else tlb = memblock_alloc_low(bytes, PAGE_SIZE); if (!tlb) - goto fail; + panic("%s: failed to allocate tlb structure\n", __func__); + if (remap && remap(tlb, nslabs) < 0) { memblock_free(tlb, PAGE_ALIGN(bytes)); @@ -291,14 +267,17 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags, nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE)); goto retry; } - if (swiotlb_init_with_tbl(tlb, default_nslabs, flags)) - goto fail_free_mem; - return; -fail_free_mem: - memblock_free(tlb, bytes); -fail: - pr_warn("Cannot allocate buffer"); + mem->slots = memblock_alloc(alloc_size, PAGE_SIZE); + if (!mem->slots) + panic("%s: Failed to allocate %zu bytes align=0x%lx\n", + __func__, alloc_size, PAGE_SIZE); + + swiotlb_init_io_tlb_mem(mem, __pa(tlb), default_nslabs, false); + mem->force_bounce = flags & SWIOTLB_FORCE; + + if (flags & SWIOTLB_VERBOSE) + swiotlb_print_info(); } void __init swiotlb_init(bool addressing_limit, unsigned int flags) @@ -314,6 +293,7 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags) int swiotlb_init_late(size_t size, gfp_t gfp_mask, int (*remap)(void *tlb, unsigned long nslabs)) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE); unsigned long bytes; unsigned char *vstart = NULL; @@ -355,33 +335,28 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask, (PAGE_SIZE << order) >> 20); nslabs = SLABS_PER_PAGE << order; } - rc = swiotlb_late_init_with_tbl(vstart, nslabs); - if (rc) - free_pages((unsigned long)vstart, order); - - return rc; -} - -int -swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) -{ - struct io_tlb_mem *mem = &io_tlb_default_mem; - unsigned long bytes = nslabs << IO_TLB_SHIFT; - if (swiotlb_force_disable) - return 0; + if (remap) + rc = remap(vstart, nslabs); + if (rc) { + free_pages((unsigned long)vstart, order); - /* protect against double initialization */ - if (WARN_ON_ONCE(mem->nslabs)) - return -ENOMEM; + /* Min is 2MB */ + if (nslabs <= 1024) + return rc; + nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE)); + goto retry; + } mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, get_order(array_size(sizeof(*mem->slots), nslabs))); - if (!mem->slots) + if (!mem->slots) { + free_pages((unsigned long)vstart, order); return -ENOMEM; + } - set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT); - swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true); + set_memory_decrypted((unsigned long)vstart, bytes >> PAGE_SHIFT); + swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, true); swiotlb_print_info(); return 0; From patchwork Mon Mar 14 07:31:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 1604937 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=infradead.org header.i=@infradead.org 
header.a=rsa-sha256 header.s=bombadil.20210309 header.b=vIJ2oE33; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-pci-owner@vger.kernel.org; receiver=) Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4KH7YV5SNLz9sGN for ; Mon, 14 Mar 2022 18:33:02 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236753AbiCNHeI (ORCPT ); Mon, 14 Mar 2022 03:34:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47816 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236676AbiCNHdn (ORCPT ); Mon, 14 Mar 2022 03:33:43 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DCE7140A38; Mon, 14 Mar 2022 00:32:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=zCMW1Q+QCPXYuvBuEGOZUQKFwE9OwdLFtYhltvQ29K4=; b=vIJ2oE33+/DMOwDU/JixqH774f 4HcyxfW75FjmmRylwNxtYUZ/EQtjckE6WUtJCk9FMmZ9UiNrJUQNReUkRYB9gehR9d8FPU1StVV25 mDZAU61IRRH6aevchvran8PrU1UUMctkO1s07C1dw7LyB44+FDPejkmKQZWLyMHg5oTrChOy5x3wo MfT4/jWOnm16vvedpWLshlHmUjvCe5TTKasktRFAsL18gH/ZI5fWAS0KZoTBRUiVEpg29wVYcYU1M 9OpeHj+MXwuceM9FZp/nCmHAVa7rGsk1e3aB6Qq0bvLlVR46azzHlVR3oWpheK6+ceqEjGSc2s8WB kPKhfqNA==; Received: from [46.140.54.162] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1nTfBi-0044wm-QZ; Mon, 14 Mar 2022 07:32:19 +0000 From: Christoph Hellwig To: iommu@lists.linux-foundation.org Cc: x86@kernel.org, Anshuman Khandual , Tom Lendacky , Konrad Rzeszutek Wilk , Stefano Stabellini , Boris Ostrovsky , Juergen Gross , Joerg Roedel , David Woodhouse , Lu Baolu , Robin Murphy , linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net, linux-pci@vger.kernel.org Subject: [PATCH 15/15] x86: remove cruft from <asm/dma-mapping.h> Date: Mon, 14 Mar 2022 08:31:29 +0100 Message-Id: <20220314073129.1862284-16-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220314073129.1862284-1-hch@lst.de> References: <20220314073129.1862284-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html X-Spam-Status: No, score=-4.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_EF,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org <asm/dma-mapping.h> gets pulled in by all drivers using the DMA API. Remove x86 internal variables and unnecessary includes from it.
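For illustration only (not part of this patch): the two variables move next to the other x86 IOMMU knobs, so the handful of x86-internal users now pick them up from <asm/iommu.h> instead of having every DMA API consumer drag them in via <asm/dma-mapping.h>. A minimal, hypothetical user (the sketch_* name is made up for the example):

    #include <linux/printk.h>
    #include <asm/iommu.h>  /* now declares iommu_merge and panic_on_overflow */

    static void sketch_report_dma_knobs(void)
    {
            pr_info("iommu_merge=%d panic_on_overflow=%d\n",
                    iommu_merge, panic_on_overflow);
    }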
Signed-off-by: Christoph Hellwig --- arch/x86/include/asm/dma-mapping.h | 11 ----------- arch/x86/include/asm/iommu.h | 2 ++ 2 files changed, 2 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h index 256fd8115223d..1c66708e30623 100644 --- a/arch/x86/include/asm/dma-mapping.h +++ b/arch/x86/include/asm/dma-mapping.h @@ -2,17 +2,6 @@ #ifndef _ASM_X86_DMA_MAPPING_H #define _ASM_X86_DMA_MAPPING_H -/* - * IOMMU interface. See Documentation/core-api/dma-api-howto.rst and - * Documentation/core-api/dma-api.rst for documentation. - */ - -#include -#include - -extern int iommu_merge; -extern int panic_on_overflow; - extern const struct dma_map_ops *dma_ops; static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus) diff --git a/arch/x86/include/asm/iommu.h b/arch/x86/include/asm/iommu.h index dba89ed40d38d..0bef44d30a278 100644 --- a/arch/x86/include/asm/iommu.h +++ b/arch/x86/include/asm/iommu.h @@ -8,6 +8,8 @@ extern int force_iommu, no_iommu; extern int iommu_detected; +extern int iommu_merge; +extern int panic_on_overflow; #ifdef CONFIG_SWIOTLB extern bool x86_swiotlb_enable;