From patchwork Tue Aug 8 23:13:23 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 1819090
Date: Tue, 8 Aug 2023 23:13:23 +0000
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Mime-Version: 1.0
References: <20230808231330.3855936-1-rananta@google.com>
X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog
Message-ID: <20230808231330.3855936-8-rananta@google.com>
Subject: [PATCH v8 07/14] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu,
 Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis,
 Raghavendra Rao Ananta, David Matlack, Fuad Tabba,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, Catalin Marinas, Gavin Shan, Shaoqin Huang

Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the upcoming
patches, the KVM code reuses this core algorithm with ipas2e1is for
range-based TLB invalidations based on the IPA. Hence, extract the core
flush functionality of __flush_tlb_range() into its own macro that
accepts an 'op' argument to pass any TLBI operation, such that other
callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/tlbflush.h | 121 +++++++++++++++++-------------
 1 file changed, 68 insertions(+), 53 deletions(-)
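
For illustration (not part of this patch): a sketch of how an IPA-based
caller, such as the KVM code mentioned above, might drive the new macro.
The function name, the tlb_level of 0 (no level hint), and the barrier
placement are assumptions for the example only:

/*
 * Hypothetical caller. 'ipas2e1is' is passed without the 'r' prefix;
 * the macro's r##op token-pasting forms 'ripas2e1is' when it emits a
 * range encoding. IPA operations take no ASID (0) and have no _user
 * variant (tlbi_user = false).
 */
static void example_flush_ipa_range(phys_addr_t start, unsigned long pages)
{
	dsb(ishst);			/* order page-table updates first */
	__flush_tlb_range_op(ipas2e1is, start, pages, PAGE_SIZE,
			     0, 0, false);
	dsb(ish);			/* wait for invalidation to complete */
}
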
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..b9475a852d5be 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,74 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS	PTRS_PER_PTE
 
+/*
+ * __flush_tlb_range_op - Perform TLBI operation upon a range
+ *
+ * @op:	TLBI instruction that operates on a range (has 'r' prefix)
+ * @start:	The start address of the range
+ * @pages:	Range as the number of pages from 'start'
+ * @stride:	Flush granularity
+ * @asid:	The ASID of the task (0 for IPA instructions)
+ * @tlb_level:	Translation Table level hint, if known
+ * @tlbi_user:	If 'true', call an additional __tlbi_user()
+ *		(typically for user ASIDs). 'false' for IPA instructions
+ *
+ * When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,			\
+				asid, tlb_level, tlbi_user)		\
+do {									\
+	int num = 0;							\
+	int scale = 0;							\
+	unsigned long addr;						\
+									\
+	while (pages > 0) {						\
+		if (!system_supports_tlb_range() ||			\
+		    pages % 2 == 1) {					\
+			addr = __TLBI_VADDR(start, asid);		\
+			__tlbi_level(op, addr, tlb_level);		\
+			if (tlbi_user)					\
+				__tlbi_user_level(op, addr, tlb_level);	\
+			start += stride;				\
+			pages -= stride >> PAGE_SHIFT;			\
+			continue;					\
+		}							\
+									\
+		num = __TLBI_RANGE_NUM(pages, scale);			\
+		if (num >= 0) {						\
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,	\
+						  num, tlb_level);	\
+			__tlbi(r##op, addr);				\
+			if (tlbi_user)					\
+				__tlbi_user(r##op, addr);		\
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+			pages -= __TLBI_RANGE_PAGES(num, scale);	\
+		}							\
+		scale++;						\
+	}								\
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
-	int num = 0;
-	int scale = 0;
-	unsigned long asid, addr, pages;
+	unsigned long asid, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -307,56 +367,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	asid = ASID(vma->vm_mm);
 
-	/*
-	 * When the CPU does not support TLB range operations, flush the TLB
-	 * entries one by one at the granularity of 'stride'. If the TLB
-	 * range ops are supported, then:
-	 *
-	 * 1. If 'pages' is odd, flush the first page through non-range
-	 *    operations;
-	 *
-	 * 2. For remaining pages: the minimum range granularity is decided
-	 *    by 'scale', so multiple range TLBI operations may be required.
-	 *    Start from scale = 0, flush the corresponding number of pages
-	 *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-	 *    until no pages left.
-	 *
-	 * Note that certain ranges can be represented by either num = 31 and
-	 * scale or num = 0 and scale + 1. The loop below favours the latter
-	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-	 */
-	while (pages > 0) {
-		if (!system_supports_tlb_range() ||
-		    pages % 2 == 1) {
-			addr = __TLBI_VADDR(start, asid);
-			if (last_level) {
-				__tlbi_level(vale1is, addr, tlb_level);
-				__tlbi_user_level(vale1is, addr, tlb_level);
-			} else {
-				__tlbi_level(vae1is, addr, tlb_level);
-				__tlbi_user_level(vae1is, addr, tlb_level);
-			}
-			start += stride;
-			pages -= stride >> PAGE_SHIFT;
-			continue;
-		}
-
-		num = __TLBI_RANGE_NUM(pages, scale);
-		if (num >= 0) {
-			addr = __TLBI_VADDR_RANGE(start, asid, scale,
-						  num, tlb_level);
-			if (last_level) {
-				__tlbi(rvale1is, addr);
-				__tlbi_user(rvale1is, addr);
-			} else {
-				__tlbi(rvae1is, addr);
-				__tlbi_user(rvae1is, addr);
-			}
-			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-			pages -= __TLBI_RANGE_PAGES(num, scale);
-		}
-		scale++;
-	}
+	if (last_level)
+		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+	else
+		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
 	dsb(ish);
 }
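
To make the scale/num arithmetic above concrete, here is a small
standalone sketch (not part of the patch) that mirrors the loop's
bookkeeping for a CPU with range-TLBI support. The two helper macros are
adapted from the kernel's definitions (with a signedness cast for this
userspace demo), and the stride is assumed to be a single page. The
64-page case shows the note in the comment at work: the loop picks
num = 0 with scale = 1 rather than the equivalent num = 31 with
scale = 0, because __TLBI_RANGE_NUM() caps num at 30.

#include <stdio.h>

/* Adapted from arch/arm64/include/asm/tlbflush.h for illustration. */
#define TLBI_RANGE_MASK			0x1fUL
#define __TLBI_RANGE_NUM(pages, scale) \
	((long)(((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)
#define __TLBI_RANGE_PAGES(num, scale) \
	((unsigned long)((num) + 1) << (5 * (scale) + 1))

/* Walk the same decomposition as __flush_tlb_range_op(), stride = 1 page. */
static void simulate(unsigned long pages)
{
	int scale = 0;
	long num;

	printf("%lu pages:\n", pages);
	while (pages > 0) {
		if (pages % 2 == 1) {	/* odd residue: one non-range TLBI */
			printf("  single-page op\n");
			pages -= 1;
			continue;
		}
		num = __TLBI_RANGE_NUM(pages, scale);
		if (num >= 0) {		/* flushes (num+1)*2^(5*scale+1) pages */
			printf("  range op: scale=%d num=%ld -> %lu pages\n",
			       scale, num, __TLBI_RANGE_PAGES(num, scale));
			pages -= __TLBI_RANGE_PAGES(num, scale);
		}
		scale++;
	}
}

int main(void)
{
	simulate(7);	/* one single-page op, then scale=0/num=2 for 6 pages */
	simulate(64);	/* one range op: scale=1/num=0, not scale=0/num=31 */
	return 0;
}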