Patch Detail
Supported operations:
    get:   Show a patch.
    patch: Partially update a patch (only the submitted fields change).
    put:   Update a patch.

Example request:
    GET /api/1.2/patches/2219950/?format=api
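The operations listed above can be sketched as plain request descriptions before the response is examined. This is a minimal sketch, not an official client: the helper name `patch_request` and the token value are hypothetical (write access to a real Patchwork instance requires a token from your user profile), and only the URL shape is taken from this page.

```python
# Sketch: building Patchwork REST API requests for the operations above.
# API_BASE and the patch ID come from this page; TOKEN is a hypothetical
# placeholder -- PATCH/PUT require authentication on a real server.

API_BASE = "http://patchwork.ozlabs.org/api/1.2"
TOKEN = "0123456789abcdef"  # hypothetical placeholder token

def patch_request(patch_id, method="GET", fields=None):
    """Return (method, url, headers, body) describing a patch-detail call."""
    url = f"{API_BASE}/patches/{patch_id}/"
    headers = {"Accept": "application/json"}
    body = None
    if method in ("PATCH", "PUT"):
        # Mutating calls need a token; PATCH carries only the changed fields.
        headers["Authorization"] = f"Token {TOKEN}"
        body = dict(fields or {})
    return method, url, headers, body

# Read the patch shown on this page:
print(patch_request(2219950))
# Mark it accepted (would require maintainer rights on the project):
print(patch_request(2219950, "PATCH", {"state": "accepted"}))
```

The tuple could then be handed to any HTTP library; the point of the sketch is only the URL layout and the header/body split between read and write calls.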
{ "id": 2219950, "url": "http://patchwork.ozlabs.org/api/1.2/patches/2219950/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20260405125240.2558577-26-songmuchun@bytedance.com/", "project": { "id": 2, "url": "http://patchwork.ozlabs.org/api/1.2/projects/2/?format=api", "name": "Linux PPC development", "link_name": "linuxppc-dev", "list_id": "linuxppc-dev.lists.ozlabs.org", "list_email": "linuxppc-dev@lists.ozlabs.org", "web_url": "https://github.com/linuxppc/wiki/wiki", "scm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git", "webscm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/", "list_archive_url": "https://lore.kernel.org/linuxppc-dev/", "list_archive_url_format": "https://lore.kernel.org/linuxppc-dev/{}/", "commit_url_format": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?id={}" }, "msgid": "<20260405125240.2558577-26-songmuchun@bytedance.com>", "list_archive_url": "https://lore.kernel.org/linuxppc-dev/20260405125240.2558577-26-songmuchun@bytedance.com/", "date": "2026-04-05T12:52:16", "name": "[25/49] mm/sparse-vmemmap: support vmemmap-optimizable compound page population", "commit_ref": null, "pull_url": null, "state": "new", "archived": false, "hash": "73f8bd02d9fe617424ed7ec5157bd8d43b3bc4da", "submitter": { "id": 78930, "url": "http://patchwork.ozlabs.org/api/1.2/people/78930/?format=api", "name": "Muchun Song", "email": "songmuchun@bytedance.com" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20260405125240.2558577-26-songmuchun@bytedance.com/mbox/", "series": [ { "id": 498783, "url": "http://patchwork.ozlabs.org/api/1.2/series/498783/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=498783", "date": "2026-04-05T12:51:51", "name": "mm: Generalize vmemmap optimization for DAX and HugeTLB", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/498783/mbox/" } ], 
"comments": "http://patchwork.ozlabs.org/api/patches/2219950/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/2219950/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "\n <linuxppc-dev+bounces-19355-incoming=patchwork.ozlabs.org@lists.ozlabs.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "linuxppc-dev@lists.ozlabs.org" ], "Delivered-To": "patchwork-incoming@legolas.ozlabs.org", "Authentication-Results": [ "legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=bytedance.com header.i=@bytedance.com\n header.a=rsa-sha256 header.s=google header.b=gTcGykho;\n\tdkim-atps=neutral", "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n (client-ip=2404:9400:21b9:f100::1; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-19355-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)", "lists.ozlabs.org;\n arc=none smtp.remote-ip=\"2607:f8b0:4864:20::102c\"", "lists.ozlabs.org;\n dmarc=pass (p=quarantine dis=none) header.from=bytedance.com", "lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=bytedance.com header.i=@bytedance.com\n header.a=rsa-sha256 header.s=google header.b=gTcGykho;\n\tdkim-atps=neutral", "lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=bytedance.com\n (client-ip=2607:f8b0:4864:20::102c; helo=mail-pj1-x102c.google.com;\n envelope-from=songmuchun@bytedance.com; receiver=lists.ozlabs.org)" ], "Received": [ "from lists.ozlabs.org (lists.ozlabs.org\n [IPv6:2404:9400:21b9:f100::1])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fpXX36FNZz1yD3\n\tfor <incoming@patchwork.ozlabs.org>; Sun, 05 Apr 2026 22:56:19 +1000 (AEST)", "from boromir.ozlabs.org (localhost [127.0.0.1])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 4fpXWs5jgtz2yv0;\n\tSun, 05 
Apr 2026 22:56:09 +1000 (AEST)", "from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com\n [IPv6:2607:f8b0:4864:20::102c])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 4fpXWr6q2yz2yks\n\tfor <linuxppc-dev@lists.ozlabs.org>; Sun, 05 Apr 2026 22:56:08 +1000 (AEST)", "by mail-pj1-x102c.google.com with SMTP id\n 98e67ed59e1d1-35da1af3e10so2885473a91.3\n for <linuxppc-dev@lists.ozlabs.org>;\n Sun, 05 Apr 2026 05:56:08 -0700 (PDT)", "from n232-176-004.byted.org ([36.110.163.97])\n by smtp.gmail.com with ESMTPSA id\n 98e67ed59e1d1-35de66b4808sm3748505a91.2.2026.04.05.05.56.01\n (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n Sun, 05 Apr 2026 05:56:06 -0700 (PDT)" ], "ARC-Seal": "i=1; a=rsa-sha256; d=lists.ozlabs.org; s=201707; t=1775393769;\n\tcv=none;\n b=WFzQt4U1BVTnArbRSiUMTgwMxbhgEOeMP8tTs4gObtKapb0MOUB18jf3NmLo5hYk3cdkbha43326l5KPTm9qI9e98dS4v3Miq53d9U3PloAxxZmTmi4YMOos0VwW/HYHiOeax1InsqKgipXeYD1r3iOrww1xZE/5tHg5MQw3QK7PVlsT4CmNT3CVGtEGg2lMhXJWBnsBUmMmzW1CtpdVyNqUf+T1c2fpG3z7pM79fWTsq/9VOIw/JcSXwWai3WZSmU4thm4UCCL9Amc1Tb9xi+wQIoqM/Tt8ASU+dx5OmIPzDawhLfnseINE2mCgUuy3a8t9okJ5qiNjznnFuxAviQ==", "ARC-Message-Signature": "i=1; a=rsa-sha256; d=lists.ozlabs.org; s=201707;\n\tt=1775393769; c=relaxed/relaxed;\n\tbh=yuildpo+hHfNl+izDYgZmo65TSC8uYjcfc21fVKbu/w=;\n\th=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:\n\t MIME-Version;\n b=Rq/gxdLqhYC37s1+WpihqCWtUFsnC6wdDnD7NhgUtIMNgEWDeMahntbaTb5zpU7nOzi5s3CAtGtvrH7Wm7SXTdCET1WTZXhIEsOzJsk1UmFbDgtbKLYgLO4y3Y/R7AXdrfhTEcwq4RxLmIwHfjPBhTA7Hg7XGsJHXdx24hz5VPTgFmXx/37A50NnhEhPA0yNry3mLG2w9hdEDvwOZhSM4+2JiUS5FCbx9ZCJoEqA2rk1rAQFkpVQqtCBEEa+wFmhWn7mtIpsEiVjvoDxO1RawP0xvWgjiH3S/H+3RaL4dvv4r5BQs9RzhasugQL0GHGpMwFvmUY0WF3SHvY7edxuOQ==", "ARC-Authentication-Results": "i=1; lists.ozlabs.org;\n dmarc=pass (p=quarantine 
dis=none) header.from=bytedance.com;\n dkim=pass (2048-bit key;\n unprotected) header.d=bytedance.com header.i=@bytedance.com\n header.a=rsa-sha256 header.s=google header.b=gTcGykho; dkim-atps=neutral;\n spf=pass (client-ip=2607:f8b0:4864:20::102c; helo=mail-pj1-x102c.google.com;\n envelope-from=songmuchun@bytedance.com;\n receiver=lists.ozlabs.org) smtp.mailfrom=bytedance.com", "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=bytedance.com; s=google; t=1775393767; x=1775998567;\n darn=lists.ozlabs.org;\n h=content-transfer-encoding:mime-version:references:in-reply-to\n :message-id:date:subject:cc:to:from:from:to:cc:subject:date\n :message-id:reply-to;\n bh=yuildpo+hHfNl+izDYgZmo65TSC8uYjcfc21fVKbu/w=;\n b=gTcGykhos/hVr65U+kRBhl2UjerRGzGe+KZHw6c1pwBQmBQCW/uGKWK3sBQRwedLCU\n 7Bg+8RuOVx9eN2KdC7sU5kGq55F7Wh6TcZQiDuYBbmQQVnFpaV5UYLdrVozpgFHUyaq9\n 0f5lPAVNHsE2XvdRpNbpKNXMlr00fTyXn48YGmikqm0TVDvcKxXve6VM0wv5JrhI+gjB\n HW6la7YmAPIVdW51kshacrMZyCioEytdBDHNl3cpvU8+JSaGxYRyamsYxZqaZqsI3s5j\n Qh42uoSofONETIOabTyh8qUIS8FRDcwpQIksI+Q65d6nJ2guDk90MnFpF77mN1jgMeTm\n Ic+A==", "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=1e100.net; s=20251104; t=1775393767; x=1775998567;\n h=content-transfer-encoding:mime-version:references:in-reply-to\n :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from\n :to:cc:subject:date:message-id:reply-to;\n bh=yuildpo+hHfNl+izDYgZmo65TSC8uYjcfc21fVKbu/w=;\n b=ne3ZcfJJMWQ8xpPJXHPS9ZHPzzf3AjL0s5cEusQb2i7mO2tT1f0SWEJNpVciarfV1h\n 6MecIxSX+/K1SH2YMkWzRze+liwzZBDY+or8mPmYOLGBpAV/aATSuwpDtQz38W2Q/Bhw\n Ni2s3IkjXSjikE06si/MgOUJySk8mcP9xuLxexvTyZyWNK3LhtBEla4b7j+ly50ddu0K\n b3MNPD2s/xijxwQ1qFMrpCT/cmKg3sz5XEWsUcs8RvVyY758tpvT8gM0QyDwje2Qis4n\n ObABMZZ24QJlWkzi6lTAunyXfjjYcBEfKD00XzZ78fjd20ULQHs2xe4YFB1DcT2v9jDK\n 4JXQ==", "X-Forwarded-Encrypted": "i=1;\n AJvYcCUe3cAAALFr5t5JqC3rDIEnmSZdYtGkUpU/qErrvuzT8W0WQDXSbSwgr0yYjB85Yz9DpsWdWxEtPKuMpSc=@lists.ozlabs.org", "X-Gm-Message-State": 
"AOJu0YykC9Ebr7UinySLcCts8f7lxjY5X+dpsXCi6iOwizxUhmrNgNTZ\n\tzeDZWRB6u0xeaAuBnbCRoca++Giki+rZBaKpTrplG/Em7ARQ4kSFkg8U7WW6OolovK8=", "X-Gm-Gg": "AeBDietxsRWh1kJUXSwKraucwaKx0ceEa1U819AQjwffELb81clBqB9KyMFzP43rGJG\n\tusjguY5PjvHoWu7wx02cG2/+IGSvS+l4cZhOv55JWuZjDBIvF8PK1LWeCQEzr7AaYiyKBdPIsPN\n\t5FK5GI30LxJipA6uk3vlF3+RjNbdHkc0B3NO2R+VUbhf2rEgZG/w1awPCg6Py7O1dUalywKQgNf\n\t2hbK5ml2iTdkys9LGIALIfKA+Bo7pGwyeAerCsQLxmVSQBIxtFgogZaCqWYU3Ar1Ox0iCrDBBv9\n\tTnKUoFdaAmoyV2MSs1wjKxK2EREpKOEPcbiIJywkqgtvM3jr6M523QZThyM2YfyzZMQVhrTje1K\n\tJe8NqQ6Af086uTNFh0Qf9Njhp+sYMt2STVfCkhQZd9vlWCDAqmIIci6JEhiCu8zakqqWUPMulBL\n\tC3HEhii2hgRRThe8HarKrflXEbDI8i2exwEIwKLMD6vHP1nVAGJMsaNA==", "X-Received": "by 2002:a17:90b:4ac7:b0:35b:9720:98d0 with SMTP id\n 98e67ed59e1d1-35de679086dmr9103094a91.5.1775393766817;\n Sun, 05 Apr 2026 05:56:06 -0700 (PDT)", "From": "Muchun Song <songmuchun@bytedance.com>", "To": "Andrew Morton <akpm@linux-foundation.org>,\n\tDavid Hildenbrand <david@kernel.org>,\n\tMuchun Song <muchun.song@linux.dev>,\n\tOscar Salvador <osalvador@suse.de>,\n\tMichael Ellerman <mpe@ellerman.id.au>,\n\tMadhavan Srinivasan <maddy@linux.ibm.com>", "Cc": "Lorenzo Stoakes <ljs@kernel.org>,\n\t\"Liam R . 
Howlett\" <Liam.Howlett@oracle.com>,\n\tVlastimil Babka <vbabka@kernel.org>,\n\tMike Rapoport <rppt@kernel.org>,\n\tSuren Baghdasaryan <surenb@google.com>,\n\tMichal Hocko <mhocko@suse.com>,\n\tNicholas Piggin <npiggin@gmail.com>,\n\tChristophe Leroy <chleroy@kernel.org>,\n\taneesh.kumar@linux.ibm.com,\n\tjoao.m.martins@oracle.com,\n\tlinux-mm@kvack.org,\n\tlinuxppc-dev@lists.ozlabs.org,\n\tlinux-kernel@vger.kernel.org,\n\tMuchun Song <songmuchun@bytedance.com>", "Subject": "[PATCH 25/49] mm/sparse-vmemmap: support vmemmap-optimizable compound\n page population", "Date": "Sun, 5 Apr 2026 20:52:16 +0800", "Message-Id": "<20260405125240.2558577-26-songmuchun@bytedance.com>", "X-Mailer": "git-send-email 2.20.1", "In-Reply-To": "<20260405125240.2558577-1-songmuchun@bytedance.com>", "References": "<20260405125240.2558577-1-songmuchun@bytedance.com>", "X-Mailing-List": "linuxppc-dev@lists.ozlabs.org", "List-Id": "<linuxppc-dev.lists.ozlabs.org>", "List-Help": "<mailto:linuxppc-dev+help@lists.ozlabs.org>", "List-Owner": "<mailto:linuxppc-dev+owner@lists.ozlabs.org>", "List-Post": "<mailto:linuxppc-dev@lists.ozlabs.org>", "List-Archive": "<https://lore.kernel.org/linuxppc-dev/>,\n <https://lists.ozlabs.org/pipermail/linuxppc-dev/>", "List-Subscribe": "<mailto:linuxppc-dev+subscribe@lists.ozlabs.org>,\n <mailto:linuxppc-dev+subscribe-digest@lists.ozlabs.org>,\n <mailto:linuxppc-dev+subscribe-nomail@lists.ozlabs.org>", "List-Unsubscribe": "<mailto:linuxppc-dev+unsubscribe@lists.ozlabs.org>", "Precedence": "list", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "X-Spam-Status": "No, score=-0.2 required=3.0 tests=DKIM_SIGNED,DKIM_VALID,\n\tDKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS\n\tautolearn=disabled version=4.0.1 OzLabs 8", "X-Spam-Checker-Version": "SpamAssassin 4.0.1 (2024-03-25) on lists.ozlabs.org" }, "content": "Previously, vmemmap optimization (HVO) was tightly coupled with HugeTLB\nand relied on 
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP. With the recent\nintroduction of compound page order to struct mem_section, we can now\ngeneralize this optimization to be based on sections rather than being\nHugeTLB-specific.\n\nThis patch refactors the vmemmap population logic to utilize the new\nsection-level order information by updating vmemmap_pte_populate() to\ndynamically allocates or reuses the shared tail page if a section\ncontains optimizable compound pages.\n\nThese changes centralize the HVO logic within the core sparse-vmemmap\ncode, reducing code duplication and paving the way for unifying the vmemmap\noptimization paths for both HugeTLB and DAX.\n\nSigned-off-by: Muchun Song <songmuchun@bytedance.com>\n---\n include/linux/mmzone.h | 8 ++++-\n mm/internal.h | 3 ++\n mm/sparse-vmemmap.c | 66 +++++++++++++++++++++++++-----------------\n mm/sparse.c | 30 +++++++++++++++++--\n 4 files changed, 78 insertions(+), 29 deletions(-)", "diff": "diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h\nindex 620503aa29ba..e4d37492ca63 100644\n--- a/include/linux/mmzone.h\n+++ b/include/linux/mmzone.h\n@@ -1145,7 +1145,7 @@ struct zone {\n \t/* Zone statistics */\n \tatomic_long_t\t\tvm_stat[NR_VM_ZONE_STAT_ITEMS];\n \tatomic_long_t\t\tvm_numa_event[NR_VM_NUMA_EVENT_ITEMS];\n-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP\n+#ifdef CONFIG_SPARSEMEM_VMEMMAP\n \tstruct page *vmemmap_tails[NR_OPTIMIZABLE_FOLIO_SIZES];\n #endif\n } ____cacheline_internodealigned_in_smp;\n@@ -2250,6 +2250,12 @@ static inline unsigned int section_order(const struct mem_section *section)\n }\n #endif\n \n+static inline bool section_vmemmap_optimizable(const struct mem_section *section)\n+{\n+\treturn is_power_of_2(sizeof(struct page)) &&\n+\t section_order(section) >= OPTIMIZABLE_FOLIO_MIN_ORDER;\n+}\n+\n void sparse_init_early_section(int nid, struct page *map, unsigned long pnum,\n \t\t\t unsigned long flags);\n \ndiff --git a/mm/internal.h b/mm/internal.h\nindex 1060d7c07f5b..c0d0f546864c 
100644\n--- a/mm/internal.h\n+++ b/mm/internal.h\n@@ -996,6 +996,9 @@ static inline void __section_mark_present(struct mem_section *ms,\n \n \tms->section_mem_map |= SECTION_MARKED_PRESENT;\n }\n+\n+int section_vmemmap_pages(unsigned long pfn, unsigned long nr_pages,\n+\t\t\t struct vmem_altmap *altmap, struct dev_pagemap *pgmap);\n #else\n static inline void sparse_init(void) {}\n #endif /* CONFIG_SPARSEMEM */\ndiff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c\nindex 2a6c3c82f9f5..6522c36aac20 100644\n--- a/mm/sparse-vmemmap.c\n+++ b/mm/sparse-vmemmap.c\n@@ -144,17 +144,47 @@ void __meminit vmemmap_verify(pte_t *pte, int node,\n \t\t\tstart, end - 1);\n }\n \n+static struct zone __meminit *pfn_to_zone(unsigned long pfn, int nid)\n+{\n+\tpg_data_t *pgdat = NODE_DATA(nid);\n+\n+\tfor (enum zone_type zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++) {\n+\t\tstruct zone *zone = &pgdat->node_zones[zone_type];\n+\n+\t\tif (zone_spans_pfn(zone, pfn))\n+\t\t\treturn zone;\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+static __meminit struct page *vmemmap_get_tail(unsigned int order, struct zone *zone);\n+\n static pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,\n \t\t\t\t\t struct vmem_altmap *altmap,\n \t\t\t\t\t unsigned long ptpfn)\n {\n \tpte_t *pte = pte_offset_kernel(pmd, addr);\n+\n \tif (pte_none(ptep_get(pte))) {\n \t\tpte_t entry;\n-\t\tvoid *p;\n+\n+\t\tif (vmemmap_page_optimizable((struct page *)addr) &&\n+\t\t ptpfn == (unsigned long)-1) {\n+\t\t\tstruct page *page;\n+\t\t\tunsigned long pfn = page_to_pfn((struct page *)addr);\n+\t\t\tconst struct mem_section *ms = __pfn_to_section(pfn);\n+\n+\t\t\tpage = vmemmap_get_tail(section_order(ms),\n+\t\t\t\t\t\tpfn_to_zone(pfn, node));\n+\t\t\tif (!page)\n+\t\t\t\treturn NULL;\n+\t\t\tptpfn = page_to_pfn(page);\n+\t\t}\n \n \t\tif (ptpfn == (unsigned long)-1) {\n-\t\t\tp = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);\n+\t\t\tvoid *p = vmemmap_alloc_block_buf(PAGE_SIZE, node, 
altmap);\n+\n \t\t\tif (!p)\n \t\t\t\treturn NULL;\n \t\t\tptpfn = PHYS_PFN(__pa(p));\n@@ -323,7 +353,6 @@ void vmemmap_wrprotect_hvo(unsigned long addr, unsigned long end,\n \t}\n }\n \n-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP\n static __meminit struct page *vmemmap_get_tail(unsigned int order, struct zone *zone)\n {\n \tstruct page *p, *tail;\n@@ -352,6 +381,7 @@ static __meminit struct page *vmemmap_get_tail(unsigned int order, struct zone *\n \treturn tail;\n }\n \n+#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP\n int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,\n \t\t\t\t unsigned int order, struct zone *zone,\n \t\t\t\t unsigned long headsize)\n@@ -404,6 +434,9 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,\n \t\treturn vmemmap_populate_compound_pages(start, end, node, pgmap);\n \n \tfor (addr = start; addr < end; addr = next) {\n+\t\tunsigned long pfn = page_to_pfn((struct page *)addr);\n+\t\tconst struct mem_section *ms = __pfn_to_section(pfn);\n+\n \t\tnext = pmd_addr_end(addr, end);\n \n \t\tpgd = vmemmap_pgd_populate(addr, node);\n@@ -419,7 +452,7 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,\n \t\t\treturn -ENOMEM;\n \n \t\tpmd = pmd_offset(pud, addr);\n-\t\tif (pmd_none(pmdp_get(pmd))) {\n+\t\tif (pmd_none(pmdp_get(pmd)) && !section_vmemmap_optimizable(ms)) {\n \t\t\tvoid *p;\n \n \t\t\tp = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);\n@@ -437,8 +470,10 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,\n \t\t\t\t */\n \t\t\t\treturn -ENOMEM;\n \t\t\t}\n-\t\t} else if (vmemmap_check_pmd(pmd, node, addr, next))\n+\t\t} else if (vmemmap_check_pmd(pmd, node, addr, next)) {\n+\t\t\tVM_BUG_ON(section_vmemmap_optimizable(ms));\n \t\t\tcontinue;\n+\t\t}\n \t\tif (vmemmap_populate_basepages(addr, next, node, altmap, pgmap))\n \t\t\treturn -ENOMEM;\n \t}\n@@ -705,27 +740,6 @@ static int fill_subsection_map(unsigned long pfn, 
unsigned long nr_pages)\n \treturn rc;\n }\n \n-static int __meminit section_vmemmap_pages(unsigned long pfn, unsigned long nr_pages,\n-\t\t\t\t\t struct vmem_altmap *altmap, struct dev_pagemap *pgmap)\n-{\n-\tunsigned int order = pgmap ? pgmap->vmemmap_shift : 0;\n-\tunsigned long pages_per_compound = 1L << order;\n-\n-\tVM_BUG_ON(!IS_ALIGNED(pfn | nr_pages, min(pages_per_compound, PAGES_PER_SECTION)));\n-\tVM_BUG_ON(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));\n-\n-\tif (!vmemmap_can_optimize(altmap, pgmap))\n-\t\treturn DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);\n-\n-\tif (order < PFN_SECTION_SHIFT)\n-\t\treturn VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;\n-\n-\tif (IS_ALIGNED(pfn, pages_per_compound))\n-\t\treturn VMEMMAP_RESERVE_NR;\n-\n-\treturn 0;\n-}\n-\n /*\n * To deactivate a memory region, there are 3 cases to handle:\n *\ndiff --git a/mm/sparse.c b/mm/sparse.c\nindex cfe4ffd89baf..62659752980e 100644\n--- a/mm/sparse.c\n+++ b/mm/sparse.c\n@@ -345,6 +345,32 @@ static void __init sparse_usage_fini(void)\n \tsparse_usagebuf = sparse_usagebuf_end = NULL;\n }\n \n+int __meminit section_vmemmap_pages(unsigned long pfn, unsigned long nr_pages,\n+\t\t\t\t struct vmem_altmap *altmap, struct dev_pagemap *pgmap)\n+{\n+\tconst struct mem_section *ms = __pfn_to_section(pfn);\n+\tunsigned int order = pgmap ? 
pgmap->vmemmap_shift : section_order(ms);\n+\tunsigned long pages_per_compound = 1L << order;\n+\tunsigned int vmemmap_pages = OPTIMIZED_FOLIO_VMEMMAP_PAGES;\n+\n+\tif (vmemmap_can_optimize(altmap, pgmap))\n+\t\tvmemmap_pages = VMEMMAP_RESERVE_NR;\n+\n+\tVM_BUG_ON(!IS_ALIGNED(pfn | nr_pages, min(pages_per_compound, PAGES_PER_SECTION)));\n+\tVM_BUG_ON(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));\n+\n+\tif (!vmemmap_can_optimize(altmap, pgmap) && !section_vmemmap_optimizable(ms))\n+\t\treturn DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);\n+\n+\tif (order < PFN_SECTION_SHIFT)\n+\t\treturn vmemmap_pages * nr_pages / pages_per_compound;\n+\n+\tif (IS_ALIGNED(pfn, pages_per_compound))\n+\t\treturn vmemmap_pages;\n+\n+\treturn 0;\n+}\n+\n /*\n * Initialize sparse on a specific node. The node spans [pnum_begin, pnum_end)\n * And number of present sections in this node is map_count.\n@@ -376,8 +402,8 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,\n \t\t\t\t\t\t\tnid, NULL, NULL);\n \t\t\tif (!map)\n \t\t\t\tpanic(\"Populate section (%ld) on node[%d] failed\\n\", pnum, nid);\n-\t\t\tmemmap_boot_pages_add(DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),\n-\t\t\t\t\t\t\t PAGE_SIZE));\n+\t\t\tmemmap_boot_pages_add(section_vmemmap_pages(pfn, PAGES_PER_SECTION,\n+\t\t\t\t\t\t\t\t NULL, NULL));\n \t\t\tsparse_init_early_section(nid, map, pnum, 0);\n \t\t}\n \t}\n", "prefixes": [ "25/49" ] }
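The "project" object in the response above carries URL templates with a "{}" placeholder ("list_archive_url_format", "commit_url_format"). A short sketch showing how they expand; the literal values below are copied from the JSON response, and the result reproduces the "list_archive_url" field of the same response:

```python
# URL templates and message ID copied verbatim from the response above.
list_archive_url_format = "https://lore.kernel.org/linuxppc-dev/{}/"
commit_url_format = ("https://git.kernel.org/pub/scm/linux/kernel/git/"
                     "powerpc/linux.git/commit/?id={}")
msgid = "<20260405125240.2558577-26-songmuchun@bytedance.com>"

# Message-IDs are stored with angle brackets; strip them before expansion.
archive_url = list_archive_url_format.format(msgid.strip("<>"))
print(archive_url)
# -> https://lore.kernel.org/linuxppc-dev/20260405125240.2558577-26-songmuchun@bytedance.com/

# "commit_ref" is null while "state" is "new"; once the patch is applied,
# the commit hash would be substituted into commit_url_format the same way:
example_commit_url = commit_url_format.format("deadbeef")  # hypothetical hash
```

This is how Patchwork derives the per-message lore.kernel.org link and the upstream commit link from a single project-level template.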