From patchwork Tue Aug 25 14:57:42 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1351121
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin,
    Christoph Hellwig, Zefan Li, Jonathan Cameron, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v7 01/12] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings
Date: Wed, 26 Aug 2020 00:57:42 +1000
Message-Id: <20200825145753.529284-2-npiggin@gmail.com>
In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com>
References: <20200825145753.529284-1-npiggin@gmail.com>

vmalloc_to_page returns NULL for addresses mapped by larger pages[*].
Whether or not a vmap is huge depends on architecture details, alignments,
boot options, etc., which the caller cannot be expected to know. Therefore
HUGE_VMAP is a regression for vmalloc_to_page.

This change teaches vmalloc_to_page about larger pages, and returns the
struct page that corresponds to the offset within the large page. This
makes the API agnostic to mapping implementation details.

[*] As explained by commit 029c54b095995 ("mm/vmalloc.c: huge-vmap:
    fail gracefully on unexpected huge vmap mappings")

Signed-off-by: Nicholas Piggin
---
 mm/vmalloc.c | 41 ++++++++++++++++++++++++++---------------
 1 file changed, 26 insertions(+), 15 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b482d240f9a2..4e9b21adc73d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -36,7 +36,7 @@
 #include
 #include
 #include
-
+#include
 #include
 #include
 #include
@@ -343,7 +343,9 @@ int is_vmalloc_or_module_addr(const void *x)
 }
 
 /*
- * Walk a vmap address to the struct page it maps.
+ * Walk a vmap address to the struct page it maps. Huge vmap mappings will
+ * return the tail page that corresponds to the base page address, which
+ * matches small vmap mappings.
  */
 struct page *vmalloc_to_page(const void *vmalloc_addr)
 {
@@ -363,25 +365,33 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 	if (pgd_none(*pgd))
 		return NULL;
+	if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+		return NULL; /* XXX: no allowance for huge pgd */
+	if (WARN_ON_ONCE(pgd_bad(*pgd)))
+		return NULL;
+
 	p4d = p4d_offset(pgd, addr);
 	if (p4d_none(*p4d))
 		return NULL;
-	pud = pud_offset(p4d, addr);
+	if (p4d_leaf(*p4d))
+		return p4d_page(*p4d) + ((addr & ~P4D_MASK) >> PAGE_SHIFT);
+	if (WARN_ON_ONCE(p4d_bad(*p4d)))
+		return NULL;
 
-	/*
-	 * Don't dereference bad PUD or PMD (below) entries. This will also
-	 * identify huge mappings, which we may encounter on architectures
-	 * that define CONFIG_HAVE_ARCH_HUGE_VMAP=y. Such regions will be
-	 * identified as vmalloc addresses by is_vmalloc_addr(), but are
-	 * not [unambiguously] associated with a struct page, so there is
-	 * no correct value to return for them.
-	 */
-	WARN_ON_ONCE(pud_bad(*pud));
-	if (pud_none(*pud) || pud_bad(*pud))
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return NULL;
+	if (pud_leaf(*pud))
+		return pud_page(*pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+	if (WARN_ON_ONCE(pud_bad(*pud)))
 		return NULL;
+
 	pmd = pmd_offset(pud, addr);
-	WARN_ON_ONCE(pmd_bad(*pmd));
-	if (pmd_none(*pmd) || pmd_bad(*pmd))
+	if (pmd_none(*pmd))
+		return NULL;
+	if (pmd_leaf(*pmd))
+		return pmd_page(*pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+	if (WARN_ON_ONCE(pmd_bad(*pmd)))
 		return NULL;
 
 	ptep = pte_offset_map(pmd, addr);
@@ -389,6 +399,7 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 	if (pte_present(pte))
 		page = pte_page(pte);
 	pte_unmap(ptep);
+
 	return page;
 }
 EXPORT_SYMBOL(vmalloc_to_page);
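The offset arithmetic used in the new leaf cases is simply the index of the base page within the huge mapping. Below is a minimal, self-contained sketch of that calculation, assuming an illustrative 4K base page under a 2M PMD-level mapping; the EX_* constants and subpage_index() are stand-ins for explanation, not kernel interfaces.

#include <stdio.h>
#include <stdint.h>

/* Illustrative constants: 4K base pages under a 2M (PMD-level) mapping. */
#define EX_PAGE_SHIFT	12
#define EX_PMD_SHIFT	21
#define EX_PMD_MASK	(~((1UL << EX_PMD_SHIFT) - 1))

/*
 * Index of the base page that addr falls in, relative to the start of the
 * huge mapping - the same "(addr & ~PMD_MASK) >> PAGE_SHIFT" offset that
 * vmalloc_to_page() adds to pmd_page() in the hunk above.
 */
static unsigned long subpage_index(uintptr_t addr)
{
	return (addr & ~EX_PMD_MASK) >> EX_PAGE_SHIFT;
}

int main(void)
{
	uintptr_t addr = 0x100235000UL;	/* arbitrary example address */

	printf("page %lu of %lu within the huge mapping\n",
	       subpage_index(addr), 1UL << (EX_PMD_SHIFT - EX_PAGE_SHIFT));
	return 0;
}

The same formula applies at the PUD and P4D levels with the corresponding mask and shift.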
From patchwork Tue Aug 25 14:57:43 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1351123

From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin,
    Christoph Hellwig, Zefan Li, Jonathan Cameron, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v7 02/12] mm: apply_to_pte_range warn and fail if a large pte is encountered
Date: Wed, 26 Aug 2020 00:57:43 +1000
Message-Id: <20200825145753.529284-3-npiggin@gmail.com>
In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com>
References: <20200825145753.529284-1-npiggin@gmail.com>

apply_to_pte_range might mistake a large pte for bad, or treat it as a
page table, resulting in a crash or corruption. Add a test to warn and
return an error if large entries are found.
Signed-off-by: Nicholas Piggin
---
 mm/memory.c | 60 +++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 44 insertions(+), 16 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 602f4283122f..995b2e790b79 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2262,13 +2262,20 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
 	}
 	do {
 		next = pmd_addr_end(addr, end);
-		if (create || !pmd_none_or_clear_bad(pmd)) {
-			err = apply_to_pte_range(mm, pmd, addr, next, fn, data,
-						 create);
-			if (err)
-				break;
+		if (pmd_none(*pmd) && !create)
+			continue;
+		if (WARN_ON_ONCE(pmd_leaf(*pmd)))
+			return -EINVAL;
+		if (!pmd_none(*pmd) && WARN_ON_ONCE(pmd_bad(*pmd))) {
+			if (!create)
+				continue;
+			pmd_clear_bad(pmd);
 		}
+		err = apply_to_pte_range(mm, pmd, addr, next, fn, data, create);
+		if (err)
+			break;
 	} while (pmd++, addr = next, addr != end);
+
 	return err;
 }
 
@@ -2289,13 +2296,20 @@ static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
 	}
 	do {
 		next = pud_addr_end(addr, end);
-		if (create || !pud_none_or_clear_bad(pud)) {
-			err = apply_to_pmd_range(mm, pud, addr, next, fn, data,
-						 create);
-			if (err)
-				break;
+		if (pud_none(*pud) && !create)
+			continue;
+		if (WARN_ON_ONCE(pud_leaf(*pud)))
+			return -EINVAL;
+		if (!pud_none(*pud) && WARN_ON_ONCE(pud_bad(*pud))) {
+			if (!create)
+				continue;
+			pud_clear_bad(pud);
 		}
+		err = apply_to_pmd_range(mm, pud, addr, next, fn, data, create);
+		if (err)
+			break;
 	} while (pud++, addr = next, addr != end);
+
 	return err;
 }
 
@@ -2316,13 +2330,20 @@ static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 	}
 	do {
 		next = p4d_addr_end(addr, end);
-		if (create || !p4d_none_or_clear_bad(p4d)) {
-			err = apply_to_pud_range(mm, p4d, addr, next, fn, data,
-						 create);
-			if (err)
-				break;
+		if (p4d_none(*p4d) && !create)
+			continue;
+		if (WARN_ON_ONCE(p4d_leaf(*p4d)))
+			return -EINVAL;
+		if (!p4d_none(*p4d) && WARN_ON_ONCE(p4d_bad(*p4d))) {
+			if (!create)
+				continue;
+			p4d_clear_bad(p4d);
 		}
+		err = apply_to_pud_range(mm, p4d, addr, next, fn, data, create);
+		if (err)
+			break;
 	} while (p4d++, addr = next, addr != end);
+
 	return err;
 }
 
@@ -2341,8 +2362,15 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 	pgd = pgd_offset(mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		if (!create && pgd_none_or_clear_bad(pgd))
+		if (pgd_none(*pgd) && !create)
 			continue;
+		if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+			return -EINVAL;
+		if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) {
+			if (!create)
+				continue;
+			pgd_clear_bad(pgd);
+		}
 		err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, create);
 		if (err)
 			break;
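One visible consequence of the patch is that a caller whose range crosses a huge vmap mapping now gets a one-time warning plus -EINVAL rather than having the leaf entry walked as a page table. A hedged sketch of how a hypothetical caller might handle that follows; example_apply() and its message are invented for illustration, while apply_to_page_range() itself is the existing API.

#include <linux/mm.h>
#include <linux/printk.h>

/* Hypothetical caller, illustrative only. */
static int example_apply(struct mm_struct *mm, unsigned long addr,
			 unsigned long size, pte_fn_t fn, void *data)
{
	int err = apply_to_page_range(mm, addr, size, fn, data);

	/*
	 * After this patch, a leaf (huge) entry anywhere in the range is
	 * reported as -EINVAL (with a one-time warning) instead of being
	 * misinterpreted as a page table or a bad entry.
	 */
	if (err == -EINVAL)
		pr_warn("range %#lx+%#lx overlaps a huge mapping\n", addr, size);

	return err;
}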
From patchwork Tue Aug 25 14:57:44 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1351124
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin,
    Christoph Hellwig, Zefan Li, Jonathan Cameron, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v7 03/12] mm/vmalloc: rename vmap_*_range vmap_pages_*_range
Date: Wed, 26 Aug 2020 00:57:44 +1000
Message-Id: <20200825145753.529284-4-npiggin@gmail.com>
In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com>
References: <20200825145753.529284-1-npiggin@gmail.com>

The vmalloc mapper operates on a struct page * array rather than a
linear physical address; rename it to make this distinction clear.

Signed-off-by: Nicholas Piggin
---
 mm/vmalloc.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4e9b21adc73d..45cd80ec7eeb 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -189,7 +189,7 @@ void unmap_kernel_range_noflush(unsigned long start, unsigned long size)
 		arch_sync_kernel_mappings(start, end);
 }
 
-static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
+static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -217,7 +217,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
 	return 0;
 }
 
-static int vmap_pmd_range(pud_t *pud, unsigned long addr,
+static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -229,13 +229,13 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pmd_addr_end(addr, end);
-		if (vmap_pte_range(pmd, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pte_range(pmd, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (pmd++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
+static int vmap_pages_pud_range(p4d_t *p4d, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -247,13 +247,13 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pud_addr_end(addr, end);
-		if (vmap_pmd_range(pud, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pmd_range(pud, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (pud++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
+static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -265,7 +265,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
 		return 0;
 	do {
 		next = p4d_addr_end(addr, end);
-		if (vmap_pud_range(p4d, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pud_range(p4d, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (p4d++, addr = next, addr != end);
 	return 0;
@@ -306,7 +306,7 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size,
 		next = pgd_addr_end(addr, end);
 		if (pgd_bad(*pgd))
 			mask |= PGTBL_PGD_MODIFIED;
-		err = vmap_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
+		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
 		if (err)
 			return err;
 	} while (pgd++, addr = next, addr != end);
From patchwork Tue Aug 25 14:57:45 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1351125

From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin,
    Christoph Hellwig, Zefan Li, Jonathan Cameron, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v7 04/12] mm/ioremap: rename ioremap_*_range to vmap_*_range
Date: Wed, 26 Aug 2020 00:57:45 +1000
Message-Id: <20200825145753.529284-5-npiggin@gmail.com>
In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com>
References: <20200825145753.529284-1-npiggin@gmail.com>

This will be used as a generic kernel virtual mapping function, so
re-name it in preparation.

Signed-off-by: Nicholas Piggin
---
 mm/ioremap.c | 64 +++++++++++++++++++++++++++-------------------------
 1 file changed, 33 insertions(+), 31 deletions(-)

diff --git a/mm/ioremap.c b/mm/ioremap.c
index 5fa1ab41d152..3f4d36f9745a 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -61,9 +61,9 @@ static inline int ioremap_pud_enabled(void) { return 0; }
 static inline int ioremap_pmd_enabled(void) { return 0; }
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
-static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			pgtbl_mod_mask *mask)
 {
 	pte_t *pte;
 	u64 pfn;
@@ -81,9 +81,8 @@ static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
 	return 0;
 }
 
-static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
-				unsigned long end, phys_addr_t phys_addr,
-				pgprot_t prot)
+static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot)
 {
 	if (!ioremap_pmd_enabled())
 		return 0;
@@ -103,9 +102,9 @@ static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
 	return pmd_set_huge(pmd, phys_addr, prot);
 }
 
-static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			pgtbl_mod_mask *mask)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -116,20 +115,19 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
 	do {
 		next = pmd_addr_end(addr, end);
-		if (ioremap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
 			*mask |= PGTBL_PMD_MODIFIED;
 			continue;
 		}
 
-		if (ioremap_pte_range(pmd, addr, next, phys_addr, prot, mask))
+		if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
 			return -ENOMEM;
 	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
-				unsigned long end, phys_addr_t phys_addr,
-				pgprot_t prot)
+static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot)
 {
 	if (!ioremap_pud_enabled())
 		return 0;
@@ -149,9 +147,9 @@ static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
 	return pud_set_huge(pud, phys_addr, prot);
 }
 
-static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			pgtbl_mod_mask *mask)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -162,20 +160,19 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
 	do {
 		next = pud_addr_end(addr, end);
-		if (ioremap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
 			*mask |= PGTBL_PUD_MODIFIED;
 			continue;
 		}
 
-		if (ioremap_pmd_range(pud, addr, next, phys_addr, prot, mask))
+		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, mask))
 			return -ENOMEM;
 	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
-				unsigned long end, phys_addr_t phys_addr,
-				pgprot_t prot)
+static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot)
 {
 	if (!ioremap_p4d_enabled())
 		return 0;
@@ -195,9 +192,9 @@ static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
 	return p4d_set_huge(p4d, phys_addr, prot);
 }
 
-static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			pgtbl_mod_mask *mask)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -208,19 +205,19 @@ static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
 	do {
 		next = p4d_addr_end(addr, end);
-		if (ioremap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
 			*mask |= PGTBL_P4D_MODIFIED;
 			continue;
 		}
 
-		if (ioremap_pud_range(p4d, addr, next, phys_addr, prot, mask))
+		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, mask))
 			return -ENOMEM;
 	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-int ioremap_page_range(unsigned long addr,
-		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+static int vmap_range(unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot)
 {
 	pgd_t *pgd;
 	unsigned long start;
@@ -235,8 +232,7 @@ int ioremap_page_range(unsigned long addr,
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		err = ioremap_p4d_range(pgd, addr, next, phys_addr, prot,
-					&mask);
+		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, &mask);
 		if (err)
 			break;
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
@@ -249,6 +245,12 @@ int ioremap_page_range(unsigned long addr,
 	return err;
 }
 
+int ioremap_page_range(unsigned long addr,
+		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+{
+	return vmap_range(addr, end, phys_addr, prot);
+}
+
 #ifdef CONFIG_GENERIC_IOREMAP
 void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long prot)
 {
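Taken together with the previous patch, the renames leave two clearly distinguished mapper families: one fed by a struct page * array and one fed by a contiguous physical range. As a rough orientation, the two call shapes at this point in the series look as follows (prototypes only; vmap_range() itself is still static to mm/ioremap.c here):

/* struct page * array -> kernel virtual range (vmalloc/vmap path) */
int map_kernel_range_noflush(unsigned long addr, unsigned long size,
			     pgprot_t prot, struct page **pages);

/* contiguous physical range -> kernel virtual range (ioremap path) */
int ioremap_page_range(unsigned long addr, unsigned long end,
		       phys_addr_t phys_addr, pgprot_t prot);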
From patchwork Tue Aug 25 14:57:46 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1351126

From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: linux-arch@vger.kernel.org, "H. Peter Anvin", Will Deacon, Catalin Marinas,
    x86@kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin,
    Christoph Hellwig, Zefan Li, Borislav Petkov, Jonathan Cameron,
    Thomas Gleixner, linuxppc-dev@lists.ozlabs.org, Ingo Molnar,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH v7 05/12] mm: HUGE_VMAP arch support cleanup
Date: Wed, 26 Aug 2020 00:57:46 +1000
Message-Id: <20200825145753.529284-6-npiggin@gmail.com>
In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com>
References: <20200825145753.529284-1-npiggin@gmail.com>

This changes the awkward approach where architectures provide init
functions to determine which levels they can provide large mappings for,
to one where the arch is queried for each call. This removes code and
indirection, and allows constant-folding of dead code for unsupported
levels.

This also adds a prot argument to the arch query. It is currently unused,
but could help some architectures (e.g., some powerpc processors can't
map uncacheable memory with large pages).

Cc: linuxppc-dev@lists.ozlabs.org
Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-kernel@lists.infradead.org
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x86@kernel.org
Cc: "H. Peter Anvin"
Signed-off-by: Nicholas Piggin
---
Ack or objection from arch maintainers if this goes via the -mm tree?
 arch/arm64/include/asm/vmalloc.h         |  8 +++
 arch/arm64/mm/mmu.c                      | 10 +--
 arch/powerpc/include/asm/vmalloc.h       |  8 +++
 arch/powerpc/mm/book3s64/radix_pgtable.c |  8 +--
 arch/x86/include/asm/vmalloc.h           |  7 ++
 arch/x86/mm/ioremap.c                    | 10 +--
 include/linux/io.h                       |  9 ---
 include/linux/vmalloc.h                  |  6 ++
 init/main.c                              |  1 -
 mm/ioremap.c                             | 88 +++++++++---------------
 10 files changed, 77 insertions(+), 78 deletions(-)

diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 2ca708ab9b20..597b40405319 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -1,4 +1,12 @@
 #ifndef _ASM_ARM64_VMALLOC_H
 #define _ASM_ARM64_VMALLOC_H
 
+#include
+
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#endif
+
 #endif /* _ASM_ARM64_VMALLOC_H */

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 75df62fea1b6..9df7e0058c78 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1304,12 +1304,12 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
 	return dt_virt;
 }
 
-int __init arch_ioremap_p4d_supported(void)
+bool arch_vmap_p4d_supported(pgprot_t prot)
 {
-	return 0;
+	return false;
 }
 
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot);
 {
 	/*
 	 * Only 4k granule supports level 1 block mappings.
@@ -1319,9 +1319,9 @@ int __init arch_ioremap_pud_supported(void)
 		!IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }
 
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
 {
-	/* See arch_ioremap_pud_supported() */
+	/* See arch_vmap_pud_supported() */
 	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }

diff --git a/arch/powerpc/include/asm/vmalloc.h b/arch/powerpc/include/asm/vmalloc.h
index b992dfaaa161..105abb73f075 100644
--- a/arch/powerpc/include/asm/vmalloc.h
+++ b/arch/powerpc/include/asm/vmalloc.h
@@ -1,4 +1,12 @@
 #ifndef _ASM_POWERPC_VMALLOC_H
 #define _ASM_POWERPC_VMALLOC_H
 
+#include
+
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#endif
+
 #endif /* _ASM_POWERPC_VMALLOC_H */

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 28c784976bed..eca83a50bf2e 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1134,13 +1134,13 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
 	set_pte_at(mm, addr, ptep, pte);
 }
 
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
 {
 	/* HPT does not cope with large pages in the vmalloc area */
 	return radix_enabled();
 }
 
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
 {
 	return radix_enabled();
 }
@@ -1234,7 +1234,7 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 	return 1;
 }
 
-int __init arch_ioremap_p4d_supported(void)
+bool arch_vmap_p4d_supported(pgprot_t prot)
 {
-	return 0;
+	return false;
 }

diff --git a/arch/x86/include/asm/vmalloc.h b/arch/x86/include/asm/vmalloc.h
index 29837740b520..094ea2b565f3 100644
--- a/arch/x86/include/asm/vmalloc.h
+++ b/arch/x86/include/asm/vmalloc.h
@@ -1,6 +1,13 @@
 #ifndef _ASM_X86_VMALLOC_H
 #define _ASM_X86_VMALLOC_H
 
+#include
 #include
 
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#endif
+
 #endif /* _ASM_X86_VMALLOC_H */

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 84d85dbd1dad..159bfca757b9 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -481,21 +481,21 @@ void iounmap(volatile void __iomem *addr)
 }
 EXPORT_SYMBOL(iounmap);
 
-int __init arch_ioremap_p4d_supported(void)
+bool arch_vmap_p4d_supported(pgprot_t prot)
 {
-	return 0;
+	return false;
 }
 
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
 {
 #ifdef CONFIG_X86_64
 	return boot_cpu_has(X86_FEATURE_GBPAGES);
 #else
-	return 0;
+	return false;
 #endif
 }
 
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
 {
 	return boot_cpu_has(X86_FEATURE_PSE);
 }

diff --git a/include/linux/io.h b/include/linux/io.h
index 8394c56babc2..f1effd4d7a3c 100644
--- a/include/linux/io.h
+++ b/include/linux/io.h
@@ -31,15 +31,6 @@ static inline int ioremap_page_range(unsigned long addr, unsigned long end,
 }
 #endif
 
-#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-void __init ioremap_huge_init(void);
-int arch_ioremap_p4d_supported(void);
-int arch_ioremap_pud_supported(void);
-int arch_ioremap_pmd_supported(void);
-#else
-static inline void ioremap_huge_init(void) { }
-#endif
-
 /*
  * Managed iomap interface
  */

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 0221f852a7e1..3f6bba4cc9bc 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -84,6 +84,12 @@ struct vmap_area {
 	};
 };
 
+#ifndef CONFIG_HAVE_ARCH_HUGE_VMAP
+static inline bool arch_vmap_p4d_supported(pgprot_t prot) { return false; }
+static inline bool arch_vmap_pud_supported(pgprot_t prot) { return false; }
+static inline bool arch_vmap_pmd_supported(pgprot_t prot) { return false; }
+#endif
+
 /*
  * Highlevel APIs for driver use
  */

diff --git a/init/main.c b/init/main.c
index ae78fb68d231..1c89aa127b8f 100644
--- a/init/main.c
+++ b/init/main.c
@@ -820,7 +820,6 @@ static void __init mm_init(void)
 	pgtable_init();
 	debug_objects_mem_init();
 	vmalloc_init();
-	ioremap_huge_init();
 	/* Should be run before the first non-init thread is created */
 	init_espfix_bsp();
 	/* Should be run after espfix64 is set up. */
diff --git a/mm/ioremap.c b/mm/ioremap.c
index 3f4d36f9745a..c67f91164401 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -16,49 +16,16 @@
 #include "pgalloc-track.h"
 
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-static int __read_mostly ioremap_p4d_capable;
-static int __read_mostly ioremap_pud_capable;
-static int __read_mostly ioremap_pmd_capable;
-static int __read_mostly ioremap_huge_disabled;
+static bool __ro_after_init iomap_max_page_shift = PAGE_SHIFT;
 
 static int __init set_nohugeiomap(char *str)
 {
-	ioremap_huge_disabled = 1;
+	iomap_max_page_shift = P4D_SHIFT;
 	return 0;
 }
 early_param("nohugeiomap", set_nohugeiomap);
-
-void __init ioremap_huge_init(void)
-{
-	if (!ioremap_huge_disabled) {
-		if (arch_ioremap_p4d_supported())
-			ioremap_p4d_capable = 1;
-		if (arch_ioremap_pud_supported())
-			ioremap_pud_capable = 1;
-		if (arch_ioremap_pmd_supported())
-			ioremap_pmd_capable = 1;
-	}
-}
-
-static inline int ioremap_p4d_enabled(void)
-{
-	return ioremap_p4d_capable;
-}
-
-static inline int ioremap_pud_enabled(void)
-{
-	return ioremap_pud_capable;
-}
-
-static inline int ioremap_pmd_enabled(void)
-{
-	return ioremap_pmd_capable;
-}
-
-#else /* !CONFIG_HAVE_ARCH_HUGE_VMAP */
-static inline int ioremap_p4d_enabled(void) { return 0; }
-static inline int ioremap_pud_enabled(void) { return 0; }
-static inline int ioremap_pmd_enabled(void) { return 0; }
+#else /* CONFIG_HAVE_ARCH_HUGE_VMAP */
+static const bool iomap_max_page_shift = PAGE_SHIFT;
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
 static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
@@ -82,9 +49,13 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 }
 
 static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot)
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
 {
-	if (!ioremap_pmd_enabled())
+	if (max_page_shift < PMD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pmd_supported(prot))
 		return 0;
 
 	if ((end - addr) != PMD_SIZE)
@@ -104,7 +75,7 @@ static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
 
 static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
-			pgtbl_mod_mask *mask)
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -115,7 +86,7 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 	do {
 		next = pmd_addr_end(addr, end);
-		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
 			*mask |= PGTBL_PMD_MODIFIED;
 			continue;
 		}
@@ -127,9 +98,13 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 }
 
 static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot)
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
 {
-	if (!ioremap_pud_enabled())
+	if (max_page_shift < PUD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pud_supported(prot))
 		return 0;
 
 	if ((end - addr) != PUD_SIZE)
@@ -149,7 +124,7 @@ static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
 
 static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
-			pgtbl_mod_mask *mask)
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -160,21 +135,25 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 	do {
 		next = pud_addr_end(addr, end);
-		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
 			*mask |= PGTBL_PUD_MODIFIED;
 			continue;
 		}
 
-		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, mask))
+		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
 			return -ENOMEM;
 	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
 static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot)
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
 {
-	if (!ioremap_p4d_enabled())
+	if (max_page_shift < P4D_SHIFT)
+		return 0;
+
+	if (!arch_vmap_p4d_supported(prot))
 		return 0;
 
 	if ((end - addr) != P4D_SIZE)
@@ -194,7 +173,7 @@ static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
 
 static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
-			pgtbl_mod_mask *mask)
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -205,19 +184,20 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 	do {
 		next = p4d_addr_end(addr, end);
-		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
 			*mask |= PGTBL_P4D_MODIFIED;
 			continue;
 		}
 
-		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, mask))
+		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
 			return -ENOMEM;
 	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
 static int vmap_range(unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot)
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
 {
 	pgd_t *pgd;
 	unsigned long start;
@@ -232,7 +212,7 @@ static int vmap_range(unsigned long addr, unsigned long end,
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, &mask);
+		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
 		if (err)
 			break;
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
@@ -248,7 +228,7 @@ static int vmap_range(unsigned long addr, unsigned long end,
 int ioremap_page_range(unsigned long addr,
 		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
 {
-	return vmap_range(addr, end, phys_addr, prot);
+	return vmap_range(addr, end, phys_addr, prot, iomap_max_page_shift);
 }
 
 #ifdef CONFIG_GENERIC_IOREMAP
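The shape of the new gating logic: a huge mapping at a given level is attempted only if the caller's max_page_shift permits it and the architecture reports the level as usable for the requested protection. A self-contained sketch of that decision follows; the ex_* names are invented stand-ins for the kernel helpers, not the kernel API.

#include <stdbool.h>
#include <stdio.h>

#define EX_PAGE_SHIFT	12
#define EX_PMD_SHIFT	21

/* Stand-in for arch_vmap_pmd_supported(prot); e.g. x86 keys this off PSE. */
static bool ex_arch_pmd_supported(void)
{
	return true;
}

/*
 * Mirrors the first two checks in vmap_try_huge_pmd(); the size and
 * alignment checks are omitted for brevity.
 */
static bool ex_can_try_pmd_mapping(unsigned int max_page_shift)
{
	if (max_page_shift < EX_PMD_SHIFT)
		return false;
	return ex_arch_pmd_supported();
}

int main(void)
{
	printf("PMD shift allowed:  %d\n", ex_can_try_pmd_mapping(EX_PMD_SHIFT));
	printf("PAGE shift allowed: %d\n", ex_can_try_pmd_mapping(EX_PAGE_SHIFT));
	return 0;
}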
From patchwork Tue Aug 25 14:57:47 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1351127
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin,
    Christoph Hellwig, Zefan Li, Jonathan Cameron, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v7 06/12] powerpc: inline huge vmap supported functions
Date: Wed, 26 Aug 2020 00:57:47 +1000
Message-Id: <20200825145753.529284-7-npiggin@gmail.com>
In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com>
References: <20200825145753.529284-1-npiggin@gmail.com>

This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it is no longer referenced.

Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Nicholas Piggin
Acked-by: Michael Ellerman
---
Ack or objection if this goes via the -mm tree?

 arch/powerpc/include/asm/vmalloc.h       | 19 ++++++++++++++++---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 21 ---------------------
 2 files changed, 16 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/include/asm/vmalloc.h b/arch/powerpc/include/asm/vmalloc.h
index 105abb73f075..3f0c153befb0 100644
--- a/arch/powerpc/include/asm/vmalloc.h
+++ b/arch/powerpc/include/asm/vmalloc.h
@@ -1,12 +1,25 @@
 #ifndef _ASM_POWERPC_VMALLOC_H
 #define _ASM_POWERPC_VMALLOC_H
 
+#include
 #include
 
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-bool arch_vmap_p4d_supported(pgprot_t prot);
-bool arch_vmap_pud_supported(pgprot_t prot);
-bool arch_vmap_pmd_supported(pgprot_t prot);
+static inline bool arch_vmap_p4d_supported(pgprot_t prot)
+{
+	return false;
+}
+
+static inline bool arch_vmap_pud_supported(pgprot_t prot)
+{
+	/* HPT does not cope with large pages in the vmalloc area */
+	return radix_enabled();
+}
+
+static inline bool arch_vmap_pmd_supported(pgprot_t prot)
+{
+	return radix_enabled();
+}
 #endif
 
 #endif /* _ASM_POWERPC_VMALLOC_H */

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index eca83a50bf2e..27f5837cf145 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1134,22 +1134,6 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
 	set_pte_at(mm, addr, ptep, pte);
 }
 
-bool arch_vmap_pud_supported(pgprot_t prot)
-{
-	/* HPT does not cope with large pages in the vmalloc area */
-	return radix_enabled();
-}
-
-bool arch_vmap_pmd_supported(pgprot_t prot)
-{
-	return radix_enabled();
-}
-
-int p4d_free_pud_page(p4d_t *p4d, unsigned long addr)
-{
-	return 0;
-}
-
 int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
 {
 	pte_t *ptep = (pte_t *)pud;
@@ -1233,8 +1217,3 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 
 	return 1;
 }
-
-bool arch_vmap_p4d_supported(pgprot_t prot)
-{
-	return false;
-}
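The reason for pushing these helpers into the header as static inlines is constant folding: when a level can never be supported for a given configuration, the compiler sees a compile-time false and drops the entire huge-mapping branch at the call site, which is also why the now-unreferenced p4d_free_pud_page can go. A small illustration of that pattern outside the kernel (names invented):

#include <stdbool.h>
#include <stdio.h>

/*
 * As a static inline returning a constant, the "huge" branch below is
 * provably dead and is eliminated at compile time; an out-of-line
 * definition would force the compiler to keep both paths and the call.
 */
static inline bool ex_level_supported(void)
{
	return false;
}

static long ex_map(long size)
{
	if (ex_level_supported())
		return size;	/* huge path: folded away when unsupported */
	return -1;		/* base-page path */
}

int main(void)
{
	printf("%ld\n", ex_map(1L << 21));
	return 0;
}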
Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) by ozlabs.org (Postfix) with ESMTPS id 4BbXpN33YBz9sTN for ; Wed, 26 Aug 2020 01:24:12 +1000 (AEST) Received: from bobo.ozlabs.ibm.com (61-68-212-105.tpgi.com.au.
[61.68.212.105]) by smtp.gmail.com with ESMTPSA id e29sm15755956pfj.92.2020.08.25.07.58.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Aug 2020 07:58:38 -0700 (PDT) From: Nicholas Piggin To: linux-mm@kvack.org, Andrew Morton Subject: [PATCH v7 07/12] arm64: inline huge vmap supported functions Date: Wed, 26 Aug 2020 00:57:48 +1000 Message-Id: <20200825145753.529284-8-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com> References: <20200825145753.529284-1-npiggin@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-arch@vger.kernel.org, Will Deacon , Catalin Marinas , linux-kernel@vger.kernel.org, Nicholas Piggin , Christoph Hellwig , Zefan Li , Jonathan Cameron , linuxppc-dev@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" This allows unsupported levels to be constant folded away, and so p4d_free_pud_page can be removed because it's no longer linked to. Cc: Catalin Marinas Cc: Will Deacon Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Nicholas Piggin Acked-by: Catalin Marinas --- Ack or objection if this goes via the -mm tree? arch/arm64/include/asm/vmalloc.h | 23 ++++++++++++++++++++--- arch/arm64/mm/mmu.c | 26 -------------------------- 2 files changed, 20 insertions(+), 29 deletions(-) diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h index 597b40405319..fc9a12d6cc1a 100644 --- a/arch/arm64/include/asm/vmalloc.h +++ b/arch/arm64/include/asm/vmalloc.h @@ -4,9 +4,26 @@ #include #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP -bool arch_vmap_p4d_supported(pgprot_t prot); -bool arch_vmap_pud_supported(pgprot_t prot); -bool arch_vmap_pmd_supported(pgprot_t prot); +static inline bool arch_vmap_p4d_supported(pgprot_t prot) +{ + return false; +} + +static inline bool arch_vmap_pud_supported(pgprot_t prot) +{ + /* + * Only 4k granule supports level 1 block mappings. + * SW table walks can't handle removal of intermediate entries. + */ + return IS_ENABLED(CONFIG_ARM64_4K_PAGES) && + !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS); +} + +static inline bool arch_vmap_pmd_supported(pgprot_t prot) +{ + /* See arch_vmap_pud_supported() */ + return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS); +} #endif #endif /* _ASM_ARM64_VMALLOC_H */ diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 9df7e0058c78..07093e148957 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1304,27 +1304,6 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot) return dt_virt; } -bool arch_vmap_p4d_supported(pgprot_t prot) -{ - return false; -} - -bool arch_vmap_pud_supported(pgprot_t prot) -{ - /* - * Only 4k granule supports level 1 block mappings. - * SW table walks can't handle removal of intermediate entries.
- */ - return IS_ENABLED(CONFIG_ARM64_4K_PAGES) && - !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS); -} - -bool arch_vmap_pmd_supported(pgprot_t prot) -{ - /* See arch_vmap_pud_supported() */ - return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS); -} - int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) { pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot)); @@ -1416,11 +1395,6 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr) return 1; } -int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) -{ - return 0; /* Don't attempt a block mapping */ -} - #ifdef CONFIG_MEMORY_HOTPLUG static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size) { From patchwork Tue Aug 25 14:57:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1351130 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BbXtP3HK0z9sSP for ; Wed, 26 Aug 2020 01:27:41 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ld0qeCxb; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BbXtN1TqszDqMf for ; Wed, 26 Aug 2020 01:27:40 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::444; helo=mail-pf1-x444.google.com; envelope-from=npiggin@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ld0qeCxb; dkim-atps=neutral Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BbXF3566tzDqNm for ; Wed, 26 Aug 2020 00:58:47 +1000 (AEST) Received: by mail-pf1-x444.google.com with SMTP id k15so1083511pfc.12 for ; Tue, 25 Aug 2020 07:58:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=aQ1WXKhj5a18kdRmgSAyN5y+pR2QoRFll6u3oKCjOTs=; b=ld0qeCxbyfv+pIE1p9tMPi79FDatikVJ43DF5LbwBuNhVgpCDbF06GfECbrvsMP/rn j+9XBpaxT9i2UKZttKNrNGhxQ5lYz9qbzrdVtFq8EPll0QUhyKnG3TBNWuP5AAMgT0Z0 hcSRJXoE92dTPWqLbPVZu6V1RBvpNfMIMizISs/hwRqKOZ/uRUpmu2owYRuxktcPqJeo oL0zn6YvRj8eQU3/ys+70jRTBX7wIDQgr1OMzdCEnfXProl815qfC1bH2BGFW0+n/qNl zzVaksNuuUNfvcQDGz15exv/e/ZOcP4e+A5tRdVOtJd2sJUjt833+lmaWXG0LE3k5jWt aUVw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=aQ1WXKhj5a18kdRmgSAyN5y+pR2QoRFll6u3oKCjOTs=; b=bba7BJlCOh6Y2NY1trMgZLwj5Rxg/ShCQGqk9XfLHG1FP089tz9pm0Rsx8H79ta9dw CyjJfY/qd55FSZrAWgGANslV9wWOA+B09kRF4DdbS5WoccgivydYCZZtPX1T1qQmM0Nw vVjQi/HNFO33csjfVBEal3JTh/Lxsf3yN+UAP+9wPxbG2KuIXxWSzNxNttvHra9lm0nN em/7g9jvWVoPMRcVCllev3fwQJ0Xr3b1IVrrsf0xEcffk7uNl4YqJ823+y1pUbBoAmmi XxbO6LuxG631tOXO3aCQTGdqh98XFV3Q/l9gbLU/8PrVq75EO55WAHMIIxqGblRvRhzd 3vKQ== X-Gm-Message-State: AOAM531xp6KCwgOxTOK0D6iZw4NyQJ5qcXK1SnRd1zEzF4obncsfVLlm lCwpbTa15NhRgz5AdXReLFE= X-Google-Smtp-Source: ABdhPJxgyt6v8I7ydZGOFbxNzJf2nKN/r4HPzy9rkAa1+avUfozgoPS8wf2IGvii68NG9fSXHus0bg== X-Received: by 2002:a63:7c9:: with SMTP id 192mr6935394pgh.181.1598367524426; Tue, 25 Aug 2020 07:58:44 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (61-68-212-105.tpgi.com.au. [61.68.212.105]) by smtp.gmail.com with ESMTPSA id e29sm15755956pfj.92.2020.08.25.07.58.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Aug 2020 07:58:44 -0700 (PDT) From: Nicholas Piggin To: linux-mm@kvack.org, Andrew Morton Subject: [PATCH v7 08/12] x86: inline huge vmap supported functions Date: Wed, 26 Aug 2020 00:57:49 +1000 Message-Id: <20200825145753.529284-9-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com> References: <20200825145753.529284-1-npiggin@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-arch@vger.kernel.org, "H. Peter Anvin" , x86@kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin , Christoph Hellwig , Zefan Li , Borislav Petkov , Jonathan Cameron , Thomas Gleixner , linuxppc-dev@lists.ozlabs.org, Ingo Molnar Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" This allows unsupported levels to be constant folded away, and so p4d_free_pud_page can be removed because it's no longer linked to. Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: x86@kernel.org Cc: "H. Peter Anvin" Signed-off-by: Nicholas Piggin --- Ack or objection if this goes via the -mm tree? 
arch/x86/include/asm/vmalloc.h | 22 +++++++++++++++++++--- arch/x86/mm/ioremap.c | 19 ------------------- arch/x86/mm/pgtable.c | 13 ------------- 3 files changed, 19 insertions(+), 35 deletions(-) diff --git a/arch/x86/include/asm/vmalloc.h b/arch/x86/include/asm/vmalloc.h index 094ea2b565f3..e714b00fc0ca 100644 --- a/arch/x86/include/asm/vmalloc.h +++ b/arch/x86/include/asm/vmalloc.h @@ -1,13 +1,29 @@ #ifndef _ASM_X86_VMALLOC_H #define _ASM_X86_VMALLOC_H +#include #include #include #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP -bool arch_vmap_p4d_supported(pgprot_t prot); -bool arch_vmap_pud_supported(pgprot_t prot); -bool arch_vmap_pmd_supported(pgprot_t prot); +static inline bool arch_vmap_p4d_supported(pgprot_t prot) +{ + return false; +} + +static inline bool arch_vmap_pud_supported(pgprot_t prot) +{ +#ifdef CONFIG_X86_64 + return boot_cpu_has(X86_FEATURE_GBPAGES); +#else + return false; +#endif +} + +static inline bool arch_vmap_pmd_supported(pgprot_t prot) +{ + return boot_cpu_has(X86_FEATURE_PSE); +} #endif #endif /* _ASM_X86_VMALLOC_H */ diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 159bfca757b9..1465a22a9bfb 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -481,25 +481,6 @@ void iounmap(volatile void __iomem *addr) } EXPORT_SYMBOL(iounmap); -bool arch_vmap_p4d_supported(pgprot_t prot) -{ - return false; -} - -bool arch_vmap_pud_supported(pgprot_t prot) -{ -#ifdef CONFIG_X86_64 - return boot_cpu_has(X86_FEATURE_GBPAGES); -#else - return false; -#endif -} - -bool arch_vmap_pmd_supported(pgprot_t prot) -{ - return boot_cpu_has(X86_FEATURE_PSE); -} - /* * Convert a physical pointer to a virtual kernel pointer for /dev/mem * access diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index dfd82f51ba66..801c418ee97d 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -780,14 +780,6 @@ int pmd_clear_huge(pmd_t *pmd) return 0; } -/* - * Until we support 512GB pages, skip them in the vmap area. - */ -int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) -{ - return 0; -} - #ifdef CONFIG_X86_64 /** * pud_free_pmd_page - Clear pud entry and free pmd page. @@ -859,11 +851,6 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) #else /* !CONFIG_X86_64 */ -int pud_free_pmd_page(pud_t *pud, unsigned long addr) -{ - return pud_none(*pud); -} - /* * Disable free page handling on x86-PAE. This assures that ioremap() * does not update sync'd pmd entries. See vmalloc_sync_one(). 
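The three "inline huge vmap supported functions" patches above (powerpc, arm64, x86) share one idea: once arch_vmap_p4d_supported() and friends are static inlines whose return values the compiler can see, the branches they guard, and the helpers reachable only through those branches, drop out of the object code. A minimal userspace sketch of that effect, with invented names and simplified signatures (no pgprot_t argument), is:

/*
 * Compile-only illustration, not kernel code: when the capability helper is
 * an inline returning a constant, the compiler can prove the huge-page path
 * dead and drop the only reference to p4d_free_pud_page().
 */
#include <stdbool.h>
#include <stdio.h>

#define ARCH_HAS_HUGE_P4D 0	/* stands in for the per-arch capability */

static inline bool arch_vmap_p4d_supported(void)
{
	return ARCH_HAS_HUGE_P4D;	/* compile-time constant */
}

static int p4d_free_pud_page(void)
{
	return 0;	/* reachable only from the huge-P4D path below */
}

static int vmap_try_huge_p4d(void)
{
	if (!arch_vmap_p4d_supported())
		return 0;	/* always taken while the constant is 0 */

	/* dead code below: the call, and with it the sole reference to
	 * p4d_free_pud_page(), is eliminated when optimizing */
	return p4d_free_pud_page();
}

int main(void)
{
	printf("huge p4d attempted: %d\n", vmap_try_huge_p4d());
	return 0;
}

With the previous out-of-line definitions the compiler had to keep the call and therefore the reference; that is why each arch patch can also delete its p4d_free_pud_page() stub.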
From patchwork Tue Aug 25 14:57:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1351132 X-Google-Smtp-Source:
ABdhPJw31A0Uq06DkF6fTEYi8KgTmGz2Cy3Zchx+cG6HwJRAzukHgP51Ao6KdPPtqmJe9zVfFGPIFg== X-Received: by 2002:a17:902:8ecb:: with SMTP id x11mr7922476plo.13.1598367529029; Tue, 25 Aug 2020 07:58:49 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (61-68-212-105.tpgi.com.au. [61.68.212.105]) by smtp.gmail.com with ESMTPSA id e29sm15755956pfj.92.2020.08.25.07.58.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Aug 2020 07:58:48 -0700 (PDT) From: Nicholas Piggin To: linux-mm@kvack.org, Andrew Morton Subject: [PATCH v7 09/12] mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c Date: Wed, 26 Aug 2020 00:57:50 +1000 Message-Id: <20200825145753.529284-10-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com> References: <20200825145753.529284-1-npiggin@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin , Christoph Hellwig , Zefan Li , Jonathan Cameron , linuxppc-dev@lists.ozlabs.org Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" This is a generic kernel virtual memory mapper, not specific to ioremap. Signed-off-by: Nicholas Piggin --- include/linux/vmalloc.h | 3 + mm/ioremap.c | 197 ---------------------------------------- mm/vmalloc.c | 196 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 199 insertions(+), 197 deletions(-) diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 3f6bba4cc9bc..15adb9a14fb6 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -177,6 +177,9 @@ extern struct vm_struct *remove_vm_area(const void *addr); extern struct vm_struct *find_vm_area(const void *addr); #ifdef CONFIG_MMU +int vmap_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift); extern int map_kernel_range_noflush(unsigned long start, unsigned long size, pgprot_t prot, struct page **pages); int map_kernel_range(unsigned long start, unsigned long size, pgprot_t prot, diff --git a/mm/ioremap.c b/mm/ioremap.c index c67f91164401..d1dcc7e744ac 100644 --- a/mm/ioremap.c +++ b/mm/ioremap.c @@ -28,203 +28,6 @@ early_param("nohugeiomap", set_nohugeiomap); static const bool iomap_max_page_shift = PAGE_SHIFT; #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */ -static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - pgtbl_mod_mask *mask) -{ - pte_t *pte; - u64 pfn; - - pfn = phys_addr >> PAGE_SHIFT; - pte = pte_alloc_kernel_track(pmd, addr, mask); - if (!pte) - return -ENOMEM; - do { - BUG_ON(!pte_none(*pte)); - set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot)); - pfn++; - } while (pte++, addr += PAGE_SIZE, addr != end); - *mask |= PGTBL_PTE_MODIFIED; - return 0; -} - -static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift) -{ - if (max_page_shift < PMD_SHIFT) - return 0; - - if (!arch_vmap_pmd_supported(prot)) - return 0; - - if ((end - addr) != PMD_SIZE) - return 0; - - if (!IS_ALIGNED(addr, PMD_SIZE)) - return 0; - - if (!IS_ALIGNED(phys_addr, PMD_SIZE)) - return 0; - - if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr)) - return 0; - - return pmd_set_huge(pmd, phys_addr, prot); -} - 
-static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift, pgtbl_mod_mask *mask) -{ - pmd_t *pmd; - unsigned long next; - - pmd = pmd_alloc_track(&init_mm, pud, addr, mask); - if (!pmd) - return -ENOMEM; - do { - next = pmd_addr_end(addr, end); - - if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) { - *mask |= PGTBL_PMD_MODIFIED; - continue; - } - - if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask)) - return -ENOMEM; - } while (pmd++, phys_addr += (next - addr), addr = next, addr != end); - return 0; -} - -static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift) -{ - if (max_page_shift < PUD_SHIFT) - return 0; - - if (!arch_vmap_pud_supported(prot)) - return 0; - - if ((end - addr) != PUD_SIZE) - return 0; - - if (!IS_ALIGNED(addr, PUD_SIZE)) - return 0; - - if (!IS_ALIGNED(phys_addr, PUD_SIZE)) - return 0; - - if (pud_present(*pud) && !pud_free_pmd_page(pud, addr)) - return 0; - - return pud_set_huge(pud, phys_addr, prot); -} - -static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift, pgtbl_mod_mask *mask) -{ - pud_t *pud; - unsigned long next; - - pud = pud_alloc_track(&init_mm, p4d, addr, mask); - if (!pud) - return -ENOMEM; - do { - next = pud_addr_end(addr, end); - - if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) { - *mask |= PGTBL_PUD_MODIFIED; - continue; - } - - if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask)) - return -ENOMEM; - } while (pud++, phys_addr += (next - addr), addr = next, addr != end); - return 0; -} - -static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift) -{ - if (max_page_shift < P4D_SHIFT) - return 0; - - if (!arch_vmap_p4d_supported(prot)) - return 0; - - if ((end - addr) != P4D_SIZE) - return 0; - - if (!IS_ALIGNED(addr, P4D_SIZE)) - return 0; - - if (!IS_ALIGNED(phys_addr, P4D_SIZE)) - return 0; - - if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr)) - return 0; - - return p4d_set_huge(p4d, phys_addr, prot); -} - -static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift, pgtbl_mod_mask *mask) -{ - p4d_t *p4d; - unsigned long next; - - p4d = p4d_alloc_track(&init_mm, pgd, addr, mask); - if (!p4d) - return -ENOMEM; - do { - next = p4d_addr_end(addr, end); - - if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) { - *mask |= PGTBL_P4D_MODIFIED; - continue; - } - - if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask)) - return -ENOMEM; - } while (p4d++, phys_addr += (next - addr), addr = next, addr != end); - return 0; -} - -static int vmap_range(unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift) -{ - pgd_t *pgd; - unsigned long start; - unsigned long next; - int err; - pgtbl_mod_mask mask = 0; - - might_sleep(); - BUG_ON(addr >= end); - - start = addr; - pgd = pgd_offset_k(addr); - do { - next = pgd_addr_end(addr, end); - err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask); - if (err) - break; - } while (pgd++, phys_addr += (next - addr), addr = next, addr != end); - - flush_cache_vmap(start, end); - 
- if (mask & ARCH_PAGE_TABLE_SYNC_MASK) - arch_sync_kernel_mappings(start, end); - - return err; -} - int ioremap_page_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot) { diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 45cd80ec7eeb..256554d598e6 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -70,6 +70,202 @@ static void free_work(struct work_struct *w) } /*** Page table manipulation functions ***/ +static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + pgtbl_mod_mask *mask) +{ + pte_t *pte; + u64 pfn; + + pfn = phys_addr >> PAGE_SHIFT; + pte = pte_alloc_kernel_track(pmd, addr, mask); + if (!pte) + return -ENOMEM; + do { + BUG_ON(!pte_none(*pte)); + set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot)); + pfn++; + } while (pte++, addr += PAGE_SIZE, addr != end); + *mask |= PGTBL_PTE_MODIFIED; + return 0; +} + +static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + if (max_page_shift < PMD_SHIFT) + return 0; + + if (!arch_vmap_pmd_supported(prot)) + return 0; + + if ((end - addr) != PMD_SIZE) + return 0; + + if (!IS_ALIGNED(addr, PMD_SIZE)) + return 0; + + if (!IS_ALIGNED(phys_addr, PMD_SIZE)) + return 0; + + if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr)) + return 0; + + return pmd_set_huge(pmd, phys_addr, prot); +} + +static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift, pgtbl_mod_mask *mask) +{ + pmd_t *pmd; + unsigned long next; + + pmd = pmd_alloc_track(&init_mm, pud, addr, mask); + if (!pmd) + return -ENOMEM; + do { + next = pmd_addr_end(addr, end); + + if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) { + *mask |= PGTBL_PMD_MODIFIED; + continue; + } + + if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask)) + return -ENOMEM; + } while (pmd++, phys_addr += (next - addr), addr = next, addr != end); + return 0; +} + +static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + if (max_page_shift < PUD_SHIFT) + return 0; + + if (!arch_vmap_pud_supported(prot)) + return 0; + + if ((end - addr) != PUD_SIZE) + return 0; + + if (!IS_ALIGNED(addr, PUD_SIZE)) + return 0; + + if (!IS_ALIGNED(phys_addr, PUD_SIZE)) + return 0; + + if (pud_present(*pud) && !pud_free_pmd_page(pud, addr)) + return 0; + + return pud_set_huge(pud, phys_addr, prot); +} + +static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift, pgtbl_mod_mask *mask) +{ + pud_t *pud; + unsigned long next; + + pud = pud_alloc_track(&init_mm, p4d, addr, mask); + if (!pud) + return -ENOMEM; + do { + next = pud_addr_end(addr, end); + + if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) { + *mask |= PGTBL_PUD_MODIFIED; + continue; + } + + if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask)) + return -ENOMEM; + } while (pud++, phys_addr += (next - addr), addr = next, addr != end); + return 0; +} + +static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + if (max_page_shift < P4D_SHIFT) + return 0; + + if (!arch_vmap_p4d_supported(prot)) + return 0; + + if ((end - addr) != P4D_SIZE) + return 0; + + if 
(!IS_ALIGNED(addr, P4D_SIZE)) + return 0; + + if (!IS_ALIGNED(phys_addr, P4D_SIZE)) + return 0; + + if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr)) + return 0; + + return p4d_set_huge(p4d, phys_addr, prot); +} + +static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift, pgtbl_mod_mask *mask) +{ + p4d_t *p4d; + unsigned long next; + + p4d = p4d_alloc_track(&init_mm, pgd, addr, mask); + if (!p4d) + return -ENOMEM; + do { + next = p4d_addr_end(addr, end); + + if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) { + *mask |= PGTBL_P4D_MODIFIED; + continue; + } + + if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask)) + return -ENOMEM; + } while (p4d++, phys_addr += (next - addr), addr = next, addr != end); + return 0; +} + +int vmap_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + pgd_t *pgd; + unsigned long start; + unsigned long next; + int err; + pgtbl_mod_mask mask = 0; + + might_sleep(); + BUG_ON(addr >= end); + + start = addr; + pgd = pgd_offset_k(addr); + do { + next = pgd_addr_end(addr, end); + err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask); + if (err) + break; + } while (pgd++, phys_addr += (next - addr), addr = next, addr != end); + + flush_cache_vmap(start, end); + + if (mask & ARCH_PAGE_TABLE_SYNC_MASK) + arch_sync_kernel_mappings(start, end); + + return err; +} static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, pgtbl_mod_mask *mask) From patchwork Tue Aug 25 14:57:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1351133 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BbY0F1l1Hz9sSP for ; Wed, 26 Aug 2020 01:32:45 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=jFDbXQGW; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BbY0D6th5zDqY0 for ; Wed, 26 Aug 2020 01:32:44 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::1044; helo=mail-pj1-x1044.google.com; envelope-from=npiggin@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=jFDbXQGW; dkim-atps=neutral Received: from mail-pj1-x1044.google.com (mail-pj1-x1044.google.com [IPv6:2607:f8b0:4864:20::1044]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature 
RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BbXFD4r9wzDqHL for ; Wed, 26 Aug 2020 00:58:56 +1000 (AEST) Received: by mail-pj1-x1044.google.com with SMTP id q93so960516pjq.0 for ; Tue, 25 Aug 2020 07:58:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Ik3l57t0FyOClvLjEe+sb/BhE/y0hhY0gnEqux751Uc=; b=jFDbXQGWGSUzk7/GJ+Z/Evd9q3iiU/BfiaMC4g83Na9QhLkZaoeFiiSi/FDNx+q0/o JzPfzJTgJWm/P9pW+kiyHgiJ3OxnIaNeOA/VQFt0osyMoqtV8dJGZoFOSLHI8ZJp2LQG uXQsCruf53J3N+40TqcYQYhQU5JOrltFKObIJtcT8dIGteWrwQ63Li+XqNWXfO/8TIcX O5Y3C++e90X5Ov1fyAwEZ0Pw1S8c0WlG2BrKjPLNiDRIjxzpQXbwnzWTA+RXKohUt+G4 zwS57mS4cvrbf9wiDvxEB6b1vNXZ5voiRSKhtoS77rvCe8CCQIRYn7SJtz8+EOdXbboI nZzw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Ik3l57t0FyOClvLjEe+sb/BhE/y0hhY0gnEqux751Uc=; b=nK9xNZIkTMeQ+S/jl0kgthryJLrHdcF1c8LGfcm1nWVkuijY3MdCNs6SC5xyy+EnA9 GsS5G2IeejgKG5ulR0xZQSuvzFVNfUL1T2fX+Td2z7FPu7YTSud2jT7J7sdFRX5beXSm aIPbWu/g1wLZAE9ApqyqwNjEnVbQRX57xAQMkeGFEWinN4Lr7CahxaMPT7W2UBKvSELn MTAmZIsyFlrk+XneV03ikx15qoyoaV56sj7e6E4C8SMMSc0Q3S4Z7UpVMKvVn3mY6UNo JC4eExhF/zy140Rq/HiN9KD3YtLo4DLgcl8dMk2L/IzqseEhES5Sn8cZrtk0NLbL0b9q MQ1w== X-Gm-Message-State: AOAM532zeoq9kuCgi7ROxHv+jx7P4rAB5Y0DuCidZxJNO7Vii0o/5onv Rx0mZ//HD+Cd32d3pYhyxmQ= X-Google-Smtp-Source: ABdhPJzHo6/7MAKEoI/XmiBkadcGmQ+3ea7nzB3CCBEUR/YsmN15UifwlOMYYbga1Z5fsGVrMds0VQ== X-Received: by 2002:a17:90a:a101:: with SMTP id s1mr1788302pjp.205.1598367533484; Tue, 25 Aug 2020 07:58:53 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (61-68-212-105.tpgi.com.au. [61.68.212.105]) by smtp.gmail.com with ESMTPSA id e29sm15755956pfj.92.2020.08.25.07.58.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Aug 2020 07:58:53 -0700 (PDT) From: Nicholas Piggin To: linux-mm@kvack.org, Andrew Morton Subject: [PATCH v7 10/12] mm/vmalloc: add vmap_range_noflush variant Date: Wed, 26 Aug 2020 00:57:51 +1000 Message-Id: <20200825145753.529284-11-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com> References: <20200825145753.529284-1-npiggin@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin , Christoph Hellwig , Zefan Li , Jonathan Cameron , linuxppc-dev@lists.ozlabs.org Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" As a side-effect, the order of flush_cache_vmap() and arch_sync_kernel_mappings() calls are switched, but that now matches the other callers in this file. 
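The diff below follows the *_noflush split already used elsewhere in vmalloc.c (for example map_kernel_range_noflush()): the renamed vmap_range_noflush() does only the page-table work, and a thin vmap_range() wrapper adds the single flush_cache_vmap() call over the whole range. A compile-only sketch of the caller's view, with simplified signatures and stubbed bodies; the payoff, mapping many chunks and flushing once, is what the next patch's vmap_pages_range_noflush() relies on:

/* Sketch only: stand-ins for the kernel functions, to show why a caller
 * that maps many chunks wants the _noflush variant plus one final flush. */
static int vmap_range_noflush(unsigned long addr, unsigned long end)
{
	/* install page-table entries for [addr, end); no cache maintenance */
	return 0;
}

static void flush_cache_vmap_stub(unsigned long start, unsigned long end)
{
	/* architecture cache maintenance for the newly mapped range */
}

/* single-range user: what vmap_range() itself becomes in this patch */
static int vmap_range_sketch(unsigned long addr, unsigned long end)
{
	int err = vmap_range_noflush(addr, end);

	flush_cache_vmap_stub(addr, end);
	return err;
}

/* batching user: map chunk by chunk, flush the whole area once */
static int vmap_chunks_sketch(unsigned long addr, unsigned long end,
			      unsigned long chunk)
{
	unsigned long a;

	for (a = addr; a < end; a += chunk) {
		int err = vmap_range_noflush(a, a + chunk);

		if (err)
			return err;
	}
	flush_cache_vmap_stub(addr, end);
	return 0;
}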
Signed-off-by: Nicholas Piggin --- mm/vmalloc.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 256554d598e6..1d6cad16bda3 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -237,7 +237,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, return 0; } -int vmap_range(unsigned long addr, unsigned long end, +static int vmap_range_noflush(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift) { @@ -259,14 +259,24 @@ int vmap_range(unsigned long addr, unsigned long end, break; } while (pgd++, phys_addr += (next - addr), addr = next, addr != end); - flush_cache_vmap(start, end); - if (mask & ARCH_PAGE_TABLE_SYNC_MASK) arch_sync_kernel_mappings(start, end); return err; } +int vmap_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + int err; + + err = vmap_range_noflush(addr, end, phys_addr, prot, max_page_shift); + flush_cache_vmap(addr, end); + + return err; +} + static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, pgtbl_mod_mask *mask) { From patchwork Tue Aug 25 14:57:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1351135 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BbY3p17P3z9sSP for ; Wed, 26 Aug 2020 01:35:50 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=G4/MMiKI; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BbY3l4jphzDqQs for ; Wed, 26 Aug 2020 01:35:47 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::1043; helo=mail-pj1-x1043.google.com; envelope-from=npiggin@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=G4/MMiKI; dkim-atps=neutral Received: from mail-pj1-x1043.google.com (mail-pj1-x1043.google.com [IPv6:2607:f8b0:4864:20::1043]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BbXFK6BX1zDqSy for ; Wed, 26 Aug 2020 00:59:01 +1000 (AEST) Received: by mail-pj1-x1043.google.com with SMTP id kx11so932318pjb.5 for ; Tue, 25 Aug 2020 07:59:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references 
:mime-version:content-transfer-encoding; bh=6KJeQP8NI0Gmq28+Z9mGp8SWZDiLwVnET1WJkFSU9fg=; b=G4/MMiKI3zAB9GZaRHWlNv9lxtyaITj1HOneVEnczu7qEBJuf+V670gslFcluD9DfV dKFhpN/WL4LNfDDfqNgVKdmCkWZtOQD8AR87AfG5JuTiroU9qsz6dQkIQISu8fQcNQpX brfM6EH/XQVEv4CyH06jAc2CUPxwD6+60MbRIOyzUHJi5XVsEEe8gkMYcR1WSQYp9r5u XP+Rkwj4ZIVDgOOaCKn9ONyOoYAOP4aBBQ9Gg8T/6H+kHiTY60o59ZUHO0d0sqG7Rd4P uU1lQsBRXasq3jcdCOP+kdgE/+A2BLKF21czW067Wh8qyrY+uN2Dd8I5PEt6zRFb/3NR QL3w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=6KJeQP8NI0Gmq28+Z9mGp8SWZDiLwVnET1WJkFSU9fg=; b=ARalHxQN8AXkQNKFjzqW5UB6cvkmclna87/U+6GrJWm7hC1ZCSIW5iq5RLEXVvxq1M GqhLhC/DzZbUNy5310jB9fmJsHh3Jna2VgeU7T1vT6jT6Hd6Ex+75/spGxxFKoJe/TIa vZSKSa7GROHi96/AVQrHZN3DTdZdoF9zjlewwDqt8jZuhmStkF3KRVmIQ+Tqp47qgrT1 nXUe/CoqDAJhav9HtfHr4dqOX1uW9ryJT4YEcm/y45ApZkcDwU+/bpUWg8LNruqFl+fU XF9dkAj1yjGn92EgnWvGHshWgh7bjK6BPsoo8ktMa3odZ6WEfU8kuRAxYLyA75Z7zewX l24A== X-Gm-Message-State: AOAM533ok3E6V2ceLwhvHH+lX/4G5RWplfxIEfPo7FHRDw91D8qMkyht W5Bk58wtNDzkQJLHs+Mz9v7u51OMHjU= X-Google-Smtp-Source: ABdhPJwxOdeZP69xC+JAEexbE9+iKYE0HYeXxKfVEDE0GXMN513IqNTakJmuHRjF51c8KGciI3/RNg== X-Received: by 2002:a17:90a:1e65:: with SMTP id w92mr1861914pjw.187.1598367538048; Tue, 25 Aug 2020 07:58:58 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (61-68-212-105.tpgi.com.au. [61.68.212.105]) by smtp.gmail.com with ESMTPSA id e29sm15755956pfj.92.2020.08.25.07.58.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Aug 2020 07:58:57 -0700 (PDT) From: Nicholas Piggin To: linux-mm@kvack.org, Andrew Morton Subject: [PATCH v7 11/12] mm/vmalloc: Hugepage vmalloc mappings Date: Wed, 26 Aug 2020 00:57:52 +1000 Message-Id: <20200825145753.529284-12-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com> References: <20200825145753.529284-1-npiggin@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin , Christoph Hellwig , Zefan Li , Jonathan Cameron , linuxppc-dev@lists.ozlabs.org Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC enables support on architectures that define HAVE_ARCH_HUGE_VMAP and supports PMD sized vmap mappings. vmalloc will attempt to allocate PMD-sized pages if allocating PMD size or larger, and fall back to small pages if that was unsuccessful. Allocations that do not use PAGE_KERNEL prot are not permitted to use huge pages, because not all callers expect this (e.g., module allocations vs strict module rwx). This reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%. This can result in more internal fragmentation and memory overhead for a given allocation, an option nohugevmalloc is added to disable at boot. 
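One observable consequence of the above, illustrated with a hypothetical helper (the function name and the size are invented; find_vm_area() and the new page_order field are from this patch, which performs the same check in alloc_large_system_hash()): a sufficiently large PAGE_KERNEL allocation reports a non-zero page order when it was mapped with PMD-sized pages, and 0 when it fell back to small pages or huge vmalloc is disabled.

/*
 * Sketch of a kernel-context caller checking whether its vmalloc area ended
 * up huge-backed; not part of the patch.
 */
#include <linux/vmalloc.h>
#include <linux/printk.h>

static void *alloc_big_table(unsigned long size)
{
	void *table = vmalloc(size);	/* PAGE_KERNEL, so eligible for huge pages */

	if (table) {
		struct vm_struct *vm = find_vm_area(table);

		/* page_order > 0: mapped with huge (e.g. PMD-sized) pages;
		 * page_order == 0: small pages (fallback, small size, or a
		 * nohugevmalloc boot) */
		pr_info("%lu byte table mapped with order-%u pages\n",
			size, vm->page_order);
	}
	return table;
}

The same information is exported per area through /proc/vmallocinfo, which this patch extends to print "order=%d".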
Signed-off-by: Nicholas Piggin --- arch/Kconfig | 4 + include/linux/vmalloc.h | 1 + mm/page_alloc.c | 5 +- mm/vmalloc.c | 180 ++++++++++++++++++++++++++++++---------- 4 files changed, 145 insertions(+), 45 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index af14a567b493..b2b89d629317 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -616,6 +616,10 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD config HAVE_ARCH_HUGE_VMAP bool +config HAVE_ARCH_HUGE_VMALLOC + depends on HAVE_ARCH_HUGE_VMAP + bool + config ARCH_WANT_HUGE_PMD_SHARE bool diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 15adb9a14fb6..a7449064fe35 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -58,6 +58,7 @@ struct vm_struct { unsigned long size; unsigned long flags; struct page **pages; + unsigned int page_order; unsigned int nr_pages; phys_addr_t phys_addr; const void *caller; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 0e2bab486fea..b6427cc7b838 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -69,6 +69,7 @@ #include #include #include +#include #include #include @@ -8102,6 +8103,7 @@ void *__init alloc_large_system_hash(const char *tablename, void *table = NULL; gfp_t gfp_flags; bool virt; + bool huge; /* allow the kernel cmdline to have a say */ if (!numentries) { @@ -8169,6 +8171,7 @@ void *__init alloc_large_system_hash(const char *tablename, } else if (get_order(size) >= MAX_ORDER || hashdist) { table = __vmalloc(size, gfp_flags); virt = true; + huge = (find_vm_area(table)->page_order > 0); } else { /* * If bucketsize is not a power-of-two, we may free @@ -8185,7 +8188,7 @@ void *__init alloc_large_system_hash(const char *tablename, pr_info("%s hash table entries: %ld (order: %d, %lu bytes, %s)\n", tablename, 1UL << log2qty, ilog2(size) - PAGE_SHIFT, size, - virt ? "vmalloc" : "linear"); + virt ? (huge ? "vmalloc hugepage" : "vmalloc") : "linear"); if (_hash_shift) *_hash_shift = log2qty; diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 1d6cad16bda3..8db53c2d7f72 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -44,6 +44,19 @@ #include "internal.h" #include "pgalloc-track.h" +#ifdef CONFIG_HAVE_ARCH_HUGE_VMALLOC +static bool __ro_after_init vmap_allow_huge = true; + +static int __init set_nohugevmalloc(char *str) +{ + vmap_allow_huge = false; + return 0; +} +early_param("nohugevmalloc", set_nohugevmalloc); +#else /* CONFIG_HAVE_ARCH_HUGE_VMALLOC */ +static const bool vmap_allow_huge = false; +#endif /* CONFIG_HAVE_ARCH_HUGE_VMALLOC */ + bool is_vmalloc_addr(const void *x) { unsigned long addr = (unsigned long)x; @@ -477,31 +490,12 @@ static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr, return 0; } -/** - * map_kernel_range_noflush - map kernel VM area with the specified pages - * @addr: start of the VM area to map - * @size: size of the VM area to map - * @prot: page protection flags to use - * @pages: pages to map - * - * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specify should - * have been allocated using get_vm_area() and its friends. - * - * NOTE: - * This function does NOT do any cache flushing. The caller is responsible for - * calling flush_cache_vmap() on to-be-mapped areas before calling this - * function. - * - * RETURNS: - * 0 on success, -errno on failure. 
- */ -int map_kernel_range_noflush(unsigned long addr, unsigned long size, - pgprot_t prot, struct page **pages) +static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages) { unsigned long start = addr; - unsigned long end = addr + size; - unsigned long next; pgd_t *pgd; + unsigned long next; int err = 0; int nr = 0; pgtbl_mod_mask mask = 0; @@ -523,6 +517,65 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size, return 0; } +static int vmap_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, unsigned int page_shift) +{ + unsigned int i, nr = (end - addr) >> PAGE_SHIFT; + + WARN_ON(page_shift < PAGE_SHIFT); + + if (page_shift == PAGE_SHIFT) + return vmap_small_pages_range_noflush(addr, end, prot, pages); + + for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) { + int err; + + err = vmap_range_noflush(addr, addr + (1UL << page_shift), + __pa(page_address(pages[i])), prot, + page_shift); + if (err) + return err; + + addr += 1UL << page_shift; + } + + return 0; +} + +static int vmap_pages_range(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, unsigned int page_shift) +{ + int err; + + err = vmap_pages_range_noflush(addr, end, prot, pages, page_shift); + flush_cache_vmap(addr, end); + return err; +} + +/** + * map_kernel_range_noflush - map kernel VM area with the specified pages + * @addr: start of the VM area to map + * @size: size of the VM area to map + * @prot: page protection flags to use + * @pages: pages to map + * + * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specify should + * have been allocated using get_vm_area() and its friends. + * + * NOTE: + * This function does NOT do any cache flushing. The caller is responsible for + * calling flush_cache_vmap() on to-be-mapped areas before calling this + * function. + * + * RETURNS: + * 0 on success, -errno on failure. + */ +int map_kernel_range_noflush(unsigned long addr, unsigned long size, + pgprot_t prot, struct page **pages) +{ + return vmap_pages_range_noflush(addr, addr + size, prot, pages, PAGE_SHIFT); +} + int map_kernel_range(unsigned long start, unsigned long size, pgprot_t prot, struct page **pages) { @@ -2400,6 +2453,7 @@ static inline void set_area_direct_map(const struct vm_struct *area, { int i; + /* HUGE_VMALLOC passes small pages to set_direct_map */ for (i = 0; i < area->nr_pages; i++) if (page_address(area->pages[i])) set_direct_map(area->pages[i]); @@ -2433,11 +2487,12 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages) * map. Find the start and end range of the direct mappings to make sure * the vm_unmap_aliases() flush includes the direct map. 
*/ - for (i = 0; i < area->nr_pages; i++) { + for (i = 0; i < area->nr_pages; i += 1U << area->page_order) { unsigned long addr = (unsigned long)page_address(area->pages[i]); if (addr) { + unsigned long page_size = PAGE_SIZE << area->page_order; start = min(addr, start); - end = max(addr + PAGE_SIZE, end); + end = max(addr + page_size, end); flush_dmap = 1; } } @@ -2480,11 +2535,11 @@ static void __vunmap(const void *addr, int deallocate_pages) if (deallocate_pages) { int i; - for (i = 0; i < area->nr_pages; i++) { + for (i = 0; i < area->nr_pages; i += 1U << area->page_order) { struct page *page = area->pages[i]; BUG_ON(!page); - __free_pages(page, 0); + __free_pages(page, area->page_order); } atomic_long_sub(area->nr_pages, &nr_vmalloc_pages); @@ -2623,9 +2678,12 @@ void *vmap(struct page **pages, unsigned int count, EXPORT_SYMBOL(vmap); static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, - pgprot_t prot, int node) + pgprot_t prot, unsigned int page_shift, int node) { struct page **pages; + unsigned long addr = (unsigned long)area->addr; + unsigned long size = get_vm_area_size(area); + unsigned int page_order = page_shift - PAGE_SHIFT; unsigned int nr_pages, array_size, i; const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO; const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN; @@ -2633,7 +2691,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, 0 : __GFP_HIGHMEM; - nr_pages = get_vm_area_size(area) >> PAGE_SHIFT; + nr_pages = size >> PAGE_SHIFT; array_size = (nr_pages * sizeof(struct page *)); /* Please note that the recursion is strictly bounded. */ @@ -2652,29 +2710,29 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, area->pages = pages; area->nr_pages = nr_pages; + area->page_order = page_order; - for (i = 0; i < area->nr_pages; i++) { + for (i = 0; i < area->nr_pages; i += 1U << page_order) { struct page *page; + int p; - if (node == NUMA_NO_NODE) - page = alloc_page(alloc_mask|highmem_mask); - else - page = alloc_pages_node(node, alloc_mask|highmem_mask, 0); - + page = alloc_pages_node(node, alloc_mask|highmem_mask, page_order); if (unlikely(!page)) { /* Successfully allocated i pages, free them in __vunmap() */ area->nr_pages = i; atomic_long_add(area->nr_pages, &nr_vmalloc_pages); goto fail; } - area->pages[i] = page; + + for (p = 0; p < (1U << page_order); p++) + area->pages[i + p] = page + p; + if (gfpflags_allow_blocking(gfp_mask)) cond_resched(); } atomic_long_add(area->nr_pages, &nr_vmalloc_pages); - if (map_kernel_range((unsigned long)area->addr, get_vm_area_size(area), - prot, pages) < 0) + if (vmap_pages_range(addr, addr + size, prot, pages, page_shift) < 0) goto fail; return area->addr; @@ -2682,7 +2740,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, fail: warn_alloc(gfp_mask, NULL, "vmalloc: allocation failure, allocated %ld of %ld bytes", - (area->nr_pages*PAGE_SIZE), area->size); + (area->nr_pages*PAGE_SIZE), size); __vfree(area->addr); return NULL; } @@ -2713,19 +2771,42 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, struct vm_struct *area; void *addr; unsigned long real_size = size; + unsigned long real_align = align; + unsigned int shift = PAGE_SHIFT; - size = PAGE_ALIGN(size); if (!size || (size >> PAGE_SHIFT) > totalram_pages()) goto fail; - area = __get_vm_area_node(real_size, align, VM_ALLOC | VM_UNINITIALIZED | + if (vmap_allow_huge && (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))) { + unsigned long size_per_node; + + /* + 
* Try huge pages. Only try for PAGE_KERNEL allocations, + * others like modules don't yet expect huge pages in + * their allocations due to apply_to_page_range not + * supporting them. + */ + + size_per_node = size; + if (node == NUMA_NO_NODE) + size_per_node /= num_online_nodes(); + if (size_per_node >= PMD_SIZE) { + shift = PMD_SHIFT; + align = max(real_align, 1UL << shift); + size = ALIGN(real_size, 1UL << shift); + } + } + +again: + size = PAGE_ALIGN(size); + area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED | vm_flags, start, end, node, gfp_mask, caller); if (!area) goto fail; - addr = __vmalloc_area_node(area, gfp_mask, prot, node); + addr = __vmalloc_area_node(area, gfp_mask, prot, shift, node); if (!addr) - return NULL; + goto fail; /* * In this function, newly allocated vm_struct has VM_UNINITIALIZED @@ -2739,8 +2820,19 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, return addr; fail: - warn_alloc(gfp_mask, NULL, + if (shift > PAGE_SHIFT) { + free_vm_area(area); + shift = PAGE_SHIFT; + align = real_align; + size = real_size; + goto again; + } + + if (!area) { + /* Warn for area allocation, page allocations already warn */ + warn_alloc(gfp_mask, NULL, "vmalloc: allocation failure: %lu bytes", real_size); + } return NULL; } @@ -3739,7 +3831,7 @@ static int s_show(struct seq_file *m, void *p) seq_printf(m, " %pS", v->caller); if (v->nr_pages) - seq_printf(m, " pages=%d", v->nr_pages); + seq_printf(m, " pages=%d order=%d", v->nr_pages, v->page_order); if (v->phys_addr) seq_printf(m, " phys=%pa", &v->phys_addr); From patchwork Tue Aug 25 14:57:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1351136 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BbY6r1y4gz9sSP for ; Wed, 26 Aug 2020 01:38:28 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=CehPsFgN; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BbY6q648KzDqQs for ; Wed, 26 Aug 2020 01:38:27 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::442; helo=mail-pf1-x442.google.com; envelope-from=npiggin@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=CehPsFgN; dkim-atps=neutral Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate 
requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BbXFV5XPdzDqMr for ; Wed, 26 Aug 2020 00:59:05 +1000 (AEST) Received: by mail-pf1-x442.google.com with SMTP id 17so7551599pfw.9 for ; Tue, 25 Aug 2020 07:59:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Gv+PP/wjBFW/QutbN7MDl66u1IlF9/srIH5l1WMUz8Y=; b=CehPsFgNdfZPYRARvszK0iCwlAZvJIxorhXqUBbsMR1owd1LyaUhv6KkQvkxGhxABE h3c8oVY4EFa6rJUgIrnLjCkRStwzaM9eK3BEMzyYASBFUKLmA+35vORx8gBhNYtW5z1x 3s4yU9gOSNIa2NA3S2uwFMPR+o6NS9XnTAPYFEUJ/a2tvEAtbBjA1Mmp8fT2zjJBaCkA 4TWe+7FbzU6KUDpftLPy16UTk04L6fVCaWwJOTS0ymSpOh99vhufmMr4F4IOV+GRRGFF GHMVyMWmrmX2CZxkpNlbupR524BD2hb59sZczST12PHUj62mzy25W0DZVP/E9GAWUSNh X5zg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Gv+PP/wjBFW/QutbN7MDl66u1IlF9/srIH5l1WMUz8Y=; b=dnVnQZ+i/MJwBQcPOAfTFep4cIQ2CcPpY5fTSzsVtSdk91WAuRYsAEAHSswW2iRgLb WmMzgcUggsX6Mg0FPMCU3RqJ8qB4RqYZLmDOXpXImzLA79+B6Z0rU/y3g21mRp2o5TPQ XwYEqn2OL+cygLBca4yXqRvoS834UT5kshoRYYaIhLJvvLG+RBMck5Q+gi1kVHxPhZGC JUfhRe6jgLY2DFoteXGQoxJ6NgwCIUGXSRmmGSXcV/NKNhZgumBX+SCnCE/XgPscjrEF yOlWCBq0pUbFWxi7pEk8jAgIph58+We58XR6ZyPHJpSsDQuwS25QRK2p6HkBCl0b+xdi gUQA== X-Gm-Message-State: AOAM530N61YvuBahBUzOYMfQRVBMv35pJhphB+sUGkl3BPevdzQBk7GZ cIDBg4zx9CXuiI0eo9uc1P4= X-Google-Smtp-Source: ABdhPJw/xmLmJIGKGVYgH1x/f+iaOst/Nn2dxedzrtxUqC/Tzcj9oxAGZQvDlxbNYOCi6LTdFZvEGg== X-Received: by 2002:a62:6582:: with SMTP id z124mr7599734pfb.250.1598367542459; Tue, 25 Aug 2020 07:59:02 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (61-68-212-105.tpgi.com.au. [61.68.212.105]) by smtp.gmail.com with ESMTPSA id e29sm15755956pfj.92.2020.08.25.07.58.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 Aug 2020 07:59:02 -0700 (PDT) From: Nicholas Piggin To: linux-mm@kvack.org, Andrew Morton Subject: [PATCH v7 12/12] powerpc/64s/radix: Enable huge vmalloc mappings Date: Wed, 26 Aug 2020 00:57:53 +1000 Message-Id: <20200825145753.529284-13-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com> References: <20200825145753.529284-1-npiggin@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Nicholas Piggin , Christoph Hellwig , Zefan Li , Jonathan Cameron , linuxppc-dev@lists.ozlabs.org Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Nicholas Piggin --- Documentation/admin-guide/kernel-parameters.txt | 2 ++ arch/powerpc/Kconfig | 1 + 2 files changed, 3 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index bdc1f33fd3d1..6f0b41289a90 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -3190,6 +3190,8 @@ nohugeiomap [KNL,X86,PPC] Disable kernel huge I/O mappings. + nohugevmalloc [PPC] Disable kernel huge vmalloc mappings. + nosmt [KNL,S390] Disable symmetric multithreading (SMT). Equivalent to smt=1. 
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 1f48bbfb3ce9..9171d25ad7dc 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -175,6 +175,7 @@ config PPC select GENERIC_TIME_VSYSCALL select HAVE_ARCH_AUDITSYSCALL select HAVE_ARCH_HUGE_VMAP if PPC_BOOK3S_64 && PPC_RADIX_MMU + select HAVE_ARCH_HUGE_VMALLOC if HAVE_ARCH_HUGE_VMAP select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_KASAN if PPC32 && PPC_PAGE_SHIFT <= 14 select HAVE_ARCH_KASAN_VMALLOC if PPC32 && PPC_PAGE_SHIFT <= 14
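Taken together with patches 06 and 11, this final patch means a radix (for example POWER9) kernel now decides per allocation whether to attempt PMD-sized vmalloc mappings, while hash-MMU kernels and nohugevmalloc boots keep the old behaviour. A flattened, compile-only sketch of the effective gate; the real checks are spread across __vmalloc_node_range(), the vmap_allow_huge boot flag and arch_vmap_pmd_supported(), and the helper below is invented purely for illustration:

/* Sketch only: the conditions a powerpc vmalloc must meet, after this
 * series, before the allocator tries PMD-sized mappings. */
#include <stdbool.h>

static bool try_huge_vmalloc(unsigned long size_per_node, /* size, split across nodes */
			     bool prot_is_page_kernel,    /* modules etc. stay small */
			     bool vmap_allow_huge,        /* cleared by nohugevmalloc */
			     bool radix_enabled,          /* hash MMU never maps vmalloc huge */
			     unsigned long pmd_size)
{
	return vmap_allow_huge && radix_enabled &&
	       prot_is_page_kernel && size_per_node >= pmd_size;
}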