From patchwork Wed Mar 30 17:50:32 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1611270
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH v2][impish/linux-azure] UBUNTU: SAUCE: azure: Swiotlb: Add swiotlb_alloc_from_low_pages switch
Date: Wed, 30 Mar 2022 11:50:32 -0600
Message-Id: <20220330175032.24767-1-tim.gardner@canonical.com>
X-Mailer: git-send-email 2.35.1

From: Tianyu Lan

v2 - added proper subject

BugLink: https://bugs.launchpad.net/bugs/1967166

Hyper-V Isolation VMs and AMD SEV VMs use a swiotlb bounce buffer to share
memory with the hypervisor. Currently the swiotlb bounce buffer is allocated
only from the range 0 to ARCH_LOW_ADDRESS_LIMIT, which defaults to
0xffffffffUL. Isolation VMs and AMD SEV VMs need up to a 1G bounce buffer,
so this allocation can fail when there is not enough free memory below the
4G boundary, even though the devices involved may also use memory above 4G
for DMA. Add a swiotlb_alloc_from_low_pages switch that a platform may set
to false when it is not necessary to limit the bounce buffer to the 0-4G
range.
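For context, a platform is expected to flip this switch early in boot,
before swiotlb_init() allocates the bounce buffer. A minimal sketch of such
a caller follows; the function name and call site are hypothetical and not
part of this patch, and it assumes hv_is_isolation_supported() is available
in this tree (it comes from the Hyper-V updates this patch depends on):

	/*
	 * Hypothetical example only: early platform setup code opting out of
	 * the low-memory restriction. This must run before swiotlb_init(),
	 * i.e. before the bounce buffer is carved out of memblock.
	 */
	static void __init hyperv_setup_swiotlb_high(void)
	{
		/* An isolation VM's bounce buffer need not sit below 4G. */
		if (hv_is_isolation_supported())
			swiotlb_set_alloc_from_low_pages(false);
	}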
Signed-off-by: Tianyu Lan
Signed-off-by: Tim Gardner
---

This patch depends on a previous pull request:

[Pull Request] [impish/linux-azure] Azure: Update Hyperv to 5.17

---
 include/linux/swiotlb.h |  1 +
 kernel/dma/swiotlb.c    | 17 +++++++++++++++--
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 2356da25c3b9..037356d57abf 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -38,6 +38,7 @@ enum swiotlb_force {
 extern void swiotlb_init(int verbose);
 int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
 unsigned long swiotlb_size_or_default(void);
+void swiotlb_set_alloc_from_low_pages(bool low);
 extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
 extern int swiotlb_late_init_with_default_size(size_t default_size);
 extern void __init swiotlb_update_mem_attributes(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c57e78071143..e2c1395740de 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -67,6 +67,7 @@ enum swiotlb_force swiotlb_force;
 struct io_tlb_mem *io_tlb_default_mem;
 
 phys_addr_t swiotlb_unencrypted_base;
+static bool swiotlb_alloc_from_low_pages = true;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -109,6 +110,11 @@ void swiotlb_set_max_segment(unsigned int val)
 	max_segment = rounddown(val, PAGE_SIZE);
 }
 
+void swiotlb_set_alloc_from_low_pages(bool low)
+{
+	swiotlb_alloc_from_low_pages = low;
+}
+
 unsigned long swiotlb_size_or_default(void)
 {
 	return default_nslabs << IO_TLB_SHIFT;
@@ -253,8 +259,15 @@ swiotlb_init(int verbose)
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return;
 
-	/* Get IO TLB memory from the low pages */
-	tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+	/*
+	 * Get IO TLB memory from the low pages if swiotlb_alloc_from_low_pages
+	 * is set.
+	 */
+	if (swiotlb_alloc_from_low_pages)
+		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+	else
+		tlb = memblock_alloc(bytes, PAGE_SIZE);
+
 	if (!tlb)
 		goto fail;
 	if (swiotlb_init_with_tbl(tlb, default_nslabs, verbose))
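For reference, the behavioral difference between the two allocation calls in
swiotlb_init() is just the upper address bound that the memblock wrappers
pass down. In kernels of this vintage, include/linux/memblock.h defines the
two helpers approximately as follows (shown here for illustration; check the
tree for the exact definitions):

	/* Approximate definitions from include/linux/memblock.h: */
	static inline void *memblock_alloc(phys_addr_t size, phys_addr_t align)
	{
		return memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
					      MEMBLOCK_ALLOC_ACCESSIBLE,
					      NUMA_NO_NODE);
	}

	static inline void *memblock_alloc_low(phys_addr_t size, phys_addr_t align)
	{
		return memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
					      ARCH_LOW_ADDRESS_LIMIT,
					      NUMA_NO_NODE);
	}

So with swiotlb_alloc_from_low_pages set to false, the bounce buffer may be
placed anywhere in accessible memory rather than only below
ARCH_LOW_ADDRESS_LIMIT.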