[{"id":3635568,"web_url":"http://patchwork.ozlabs.org/comment/3635568/","msgid":"<36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com>","date":"2026-01-13T20:04:18","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":76947,"url":"http://patchwork.ozlabs.org/api/people/76947/","name":"Zi Yan","email":"ziy@nvidia.com"},"content":"On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n\n> Currently when creating these device private struct pages, the first\n> step is to use request_free_mem_region() to get a range of physical\n> address space large enough to represent the device's memory. This\n> allocated physical address range is then remapped as device private\n> memory using memremap_pages().\n>\n> Needing allocation of physical address space has some problems:\n>\n>   1) There may be insufficient physical address space to represent the\n>      device memory. KASLR reducing the physical address space and VM\n>      configurations with limited physical address space increase the\n>      likelihood of hitting this, especially as device memory increases. 
This\n>      has been observed to prevent device private from being initialized.\n>\n>   2) Attempting to add the device private pages to the linear map at\n>      addresses beyond the actual physical memory causes issues on\n>      architectures like aarch64 meaning the feature does not work there.\n>\n> Instead of using the physical address space, introduce a device private\n> address space and allocate device regions from there to represent the\n> device private pages.\n>\n> Introduce a new interface memremap_device_private_pagemap() that\n> allocates a requested amount of device private address space and creates\n> the necessary device private pages.\n>\n> To support this new interface, struct dev_pagemap needs some changes:\n>\n>   - Add a new dev_pagemap::nr_pages field as an input parameter.\n>   - Add a new dev_pagemap::pages array to store the device\n>     private pages.\n>\n> When using memremap_device_private_pagemap(), rather than passing in\n> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n> private range that is reserved is returned in dev_pagemap::range.\n>\n> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n> MEMORY_DEVICE_PRIVATE.\n>\n> Represent this device private address space using a new\n> device_private_pgmap_tree maple tree. This tree maps a given device\n> private address to a struct dev_pagemap, where a specific device private\n> page may then be looked up in that dev_pagemap::pages array.\n>\n> Device private address space can be reclaimed and the associated device\n> private pages freed using the corresponding new\n> memunmap_device_private_pagemap() interface.\n>\n> Because the device private pages now live outside the physical address\n> space, they no longer have a normal PFN. This means that page_to_pfn(),\n> et al. 
are no longer meaningful.\n>\n> Introduce helpers:\n>\n>   - device_private_page_to_offset()\n>   - device_private_folio_to_offset()\n>\n> to take a given device private page / folio and return its offset within\n> the device private address space.\n>\n> Update the places where we previously converted a device private page to\n> a PFN to use these new helpers. When we encounter a device private\n> offset, instead of looking up its page within the pagemap use\n> device_private_offset_to_page() instead.\n>\n> Update the existing users:\n>\n>  - lib/test_hmm.c\n>  - ppc ultravisor\n>  - drm/amd/amdkfd\n>  - gpu/drm/xe\n>  - gpu/drm/nouveau\n>\n> to use the new memremap_device_private_pagemap() interface.\n>\n> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>\n> ---\n>\n> NOTE: The updates to the existing drivers have only been compile tested.\n> I'll need some help in testing these drivers.\n>\n> v1:\n> - Include NUMA node parameter for memremap_device_private_pagemap()\n> - Add devm_memremap_device_private_pagemap() and friends\n> - Update existing users of memremap_pages():\n>     - ppc ultravisor\n>     - drm/amd/amdkfd\n>     - gpu/drm/xe\n>     - gpu/drm/nouveau\n> - Update for HMM huge page support\n> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>\n> v2:\n> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n> ---\n>  Documentation/mm/hmm.rst                 |  11 +-\n>  arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>  drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>  drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>  include/linux/hmm.h                      |   3 +\n>  include/linux/leafops.h                  |  16 +-\n>  include/linux/memremap.h                 |  64 +++++++-\n>  include/linux/migrate.h                  |   6 +-\n>  include/linux/mm.h      
                 |   2 +\n>  include/linux/rmap.h                     |   5 +-\n>  include/linux/swapops.h                  |  10 +-\n>  lib/test_hmm.c                           |  69 ++++----\n>  mm/debug.c                               |   9 +-\n>  mm/memremap.c                            | 193 ++++++++++++++++++-----\n>  mm/mm_init.c                             |   8 +-\n>  mm/page_vma_mapped.c                     |  19 ++-\n>  mm/rmap.c                                |  43 +++--\n>  mm/util.c                                |   5 +-\n>  19 files changed, 391 insertions(+), 199 deletions(-)\n>\n<snip>\n\n> diff --git a/include/linux/mm.h b/include/linux/mm.h\n> index e65329e1969f..b36599ab41ba 100644\n> --- a/include/linux/mm.h\n> +++ b/include/linux/mm.h\n> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>   */\n>  static inline unsigned long folio_pfn(const struct folio *folio)\n>  {\n> +\tVM_BUG_ON(folio_is_device_private(folio));\n\nPlease use VM_WARN_ON instead.\n\n> +\n>  \treturn page_to_pfn(&folio->page);\n>  }\n>\n> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n> index 57c63b6a8f65..c1561a92864f 100644\n> --- a/include/linux/rmap.h\n> +++ b/include/linux/rmap.h\n> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>  static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>  {\n>  \tif (folio_is_device_private(folio))\n> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>  \t\t       PVMW_PFN_DEVICE_PRIVATE;\n>\n>  \treturn page_vma_walk_pfn(folio_pfn(folio));\n> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>\n>  static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>  {\n> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n> 
+\n>  \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>  }\n\n<snip>\n\n> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n> index 96c525785d78..141fe5abd33f 100644\n> --- a/mm/page_vma_mapped.c\n> +++ b/mm/page_vma_mapped.c\n> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>  static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>  {\n>  \tunsigned long pfn;\n> +\tbool device_private = false;\n>  \tpte_t ptent = ptep_get(pvmw->pte);\n>\n>  \tif (pvmw->flags & PVMW_MIGRATION) {\n> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>  \t\tif (!softleaf_is_migration(entry))\n>  \t\t\treturn false;\n>\n> +\t\tif (softleaf_is_migration_device_private(entry))\n> +\t\t\tdevice_private = true;\n> +\n>  \t\tpfn = softleaf_to_pfn(entry);\n>  \t} else if (pte_present(ptent)) {\n>  \t\tpfn = pte_pfn(ptent);\n> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>  \t\t\treturn false;\n>\n>  \t\tpfn = softleaf_to_pfn(entry);\n> +\n> +\t\tif (softleaf_is_device_private(entry))\n> +\t\t\tdevice_private = true;\n>  \t}\n>\n> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n> +\t\treturn false;\n> +\n>  \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>  \t\treturn false;\n>  \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>  }\n>\n>  /* Returns true if the two ranges overlap.  Careful to not overflow. 
*/\n> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>  {\n> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n> +\t\treturn false;\n> +\n>  \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>  \t\treturn false;\n>  \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>\n>  \t\t\t\tif (!softleaf_is_migration(entry) ||\n>  \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n>  \t\t\t\t\t       pvmw))\n>  \t\t\t\t\treturn not_found(pvmw);\n>  \t\t\t\treturn true;\n> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>  \t\t\tif (likely(pmd_trans_huge(pmde))) {\n>  \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n>  \t\t\t\t\treturn not_found(pvmw);\n> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>  \t\t\t\t\treturn not_found(pvmw);\n>  \t\t\t\treturn true;\n>  \t\t\t}\n\nIt seems to me that you can add a new flag like “bool is_device_private” to\nindicate whether pfn is a device private index instead of pfn without\nmanipulating pvmw->pfn itself.\n\nBest Regards,\nYan, Zi","headers":{"Return-Path":"\n <linuxppc-dev+bounces-15663-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=M7wu4zi5;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n 
(client-ip=2404:9400:21b9:f100::1; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-15663-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)","lists.ozlabs.org;\n arc=pass smtp.remote-ip=40.93.196.17 arc.chain=microsoft.com","lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com","lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=M7wu4zi5;\n\tdkim-atps=neutral","lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nvidia.com\n (client-ip=40.93.196.17; helo=sa9pr02cu001.outbound.protection.outlook.com;\n envelope-from=ziy@nvidia.com; receiver=lists.ozlabs.org)","dkim=none (message not signed)\n header.d=none;dmarc=none action=none header.from=nvidia.com;"],"Received":["from lists.ozlabs.org (lists.ozlabs.org\n [IPv6:2404:9400:21b9:f100::1])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1 raw public key)\n server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4drKwt3tt8z1xrQ\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 14 Jan 2026 07:05:17 +1100 (AEDT)","from boromir.ozlabs.org (localhost [127.0.0.1])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 4drKwl5dyQz2xWP;\n\tWed, 14 Jan 2026 07:05:11 +1100 (AEDT)","from SA9PR02CU001.outbound.protection.outlook.com\n (mail-southcentralusazon11013017.outbound.protection.outlook.com\n [40.93.196.17])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange secp256r1 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 4drKwk2f0rz2xQB\n\tfor <linuxppc-dev@lists.ozlabs.org>; Wed, 14 Jan 2026 07:05:09 +1100 (AEDT)","from DS7PR12MB9473.namprd12.prod.outlook.com (2603:10b6:8:252::5) by\n 
PH7PR12MB5829.namprd12.prod.outlook.com (2603:10b6:510:1d4::6) with Microsoft\n SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id\n 15.20.9499.7; Tue, 13 Jan 2026 20:04:28 +0000","from DS7PR12MB9473.namprd12.prod.outlook.com\n ([fe80::5189:ecec:d84a:133a]) by DS7PR12MB9473.namprd12.prod.outlook.com\n ([fe80::5189:ecec:d84a:133a%5]) with mapi id 15.20.9499.005; Tue, 13 Jan 2026\n 20:04:28 +0000"],"ARC-Seal":["i=2; a=rsa-sha256; d=lists.ozlabs.org; s=201707; t=1768334711;\n\tcv=pass;\n b=MG1WT+5aVevrmFV72yFKrDlnhMSMBYupPgNzxIQWXL10yfhTahbGgekbSBoWkiAMNLX/ppugkAKmRjczVq9OITE0WPE35UwMToqK7MoV4GZMKkm6erd6zN7kD3edBRepuXYFGrm6FFFfl0ggdToJSSEhLRoTJ/W9oAYr+PRCFDvtfVpE9b10VwgoKVs693JY5mlx5PwxHk2tmI6BUHXfOQGDFZzLA4d7LnpQHCLKwDxNp6wEY27uZpNTQS3GL56uDLMx0YoBMPsbLoGG0Bg6i9p2CsKZ4w8TVA7imRX8XRIYlHij8a/0TfpNeZZywvwOSmKU94mIHcIMfq/DTc48Ng==","i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none;\n b=SlPAKaFnLtwDL2u+sEtbSawRRD76ccZcYB7fXIjr9tcW8JHKxrLJxthydr3wkTr1zSRAMDVvuWhASzvkHvUD3SrDAMlhNXCDzHdYvWo9yb93Ivk0H+eVYO8mfk8ISPt4uSPlm6wYBl6u9sONSlKsWuaR4+2ibygurqgVBv/1Y4HKRYASi44reCD1B9nxbceiFD9Q0mcGb2qVdsjBW7J+2ubwnthHSOlVVYOIuvVLZY11UWJ+hI+H3hnLfVO2S0A/92/MjkvvFwdqWS6IRQDwAtFyTvLVXGjSYRACL3FePfVq0IlDVPETn242CVeIFydCpQoFSLNj4yxhe7G9/Hzs1Q=="],"ARC-Message-Signature":["i=2; a=rsa-sha256; d=lists.ozlabs.org; s=201707;\n\tt=1768334711; c=relaxed/relaxed;\n\tbh=yGOhj3PQ3wDem02v+a5lxa0pQr9j2IipeoIhKvVgP9c=;\n\th=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:\n\t Content-Type:MIME-Version;\n b=E8I27j4CYCkm7PMCynOV2Qt5u9YKGL2c25BI5qpgvQPEac8kAZiYsTmNKXfm2nHh+79jqXVtfmErz+UUAIrcQbmxgVdE8JkrLyCnurY2kcj8ilG13khgx7tTbUwYe07bx6Hkn6ThTguig++samqWrT0Cvff4TrcIyeE48CV/U0ZcF4NN+fMWeun1aTuEQeE0R6ReYsP0WEL/Jlc3eqw/pkSqx1dEtTI/Fs9NziZ5mhe3AoN3JL7Mtc1u7ZE/pMSxFVazCAoB4n/7JdkaGrmz8rgKuEtbmffbRWLJuXA0uvmbeZjcCJF9FEq4G37/0qDk+hIO+scBKqcG6//6rvz5MQ==","i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector10001;\n 
h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=yGOhj3PQ3wDem02v+a5lxa0pQr9j2IipeoIhKvVgP9c=;\n b=vMFfRuPgkkqNuUX+adT6lSJamYdDh0bymbmnYpnCal6USvnFtxuWRgKYueQXA1o7S16NocU8V5YwCAJ7tKzI1X0vrQ94tLEKa6tFNF4LutOLPdIIRYBMu1sh6hwSQw5gUzDBHJGPNkZT6eLd2OutRwGzmzz2oYVjzbqDBbswBXSQAZQzDpqGjaO7vZPaVzF/Ap05w3ikYkVYWKo6+XklPtj0d5QrT7ncCXUVzw0+I+nbhYk7UJNYe0Dq+wbsxOTsLVMwRbI3wMyJdcBaf47kNu+i7IyoIvdW5Mj4DVS13xskxagXcyUUx7D1irp8MtolNsui/3JSrMCaVu4+IGZ5Zw=="],"ARC-Authentication-Results":["i=2; lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com;\n dkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=M7wu4zi5; dkim-atps=neutral;\n spf=pass (client-ip=40.93.196.17;\n helo=sa9pr02cu001.outbound.protection.outlook.com;\n envelope-from=ziy@nvidia.com;\n receiver=lists.ozlabs.org) smtp.mailfrom=nvidia.com","i=1; mx.microsoft.com 1; spf=pass\n smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;\n dkim=pass header.d=nvidia.com; arc=none"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com;\n s=selector2;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=yGOhj3PQ3wDem02v+a5lxa0pQr9j2IipeoIhKvVgP9c=;\n b=M7wu4zi50h0Sw0pD5tl7N5hykFh3Z+kXi6vWnXrNzcI44bFO/EINbbWbEQQukTeoOLodh+6J9FqmZ/cbmtQ+fMhFdahw5hHGi1fTG40P24Yi410Mt7HJf7pa/PlBO2TyrkAiXxpQjGBUElDBqYIRywjiznFbKePm5sr1v61yP/RnOhULwf/c0Gy/i3NNRnLbWDJ59HlJFpz8xerUQj//eApBvkiI22UlYlso2gENG9L7gd5U3KVUKkZbHdVDfxs+PbJw8yzJ/EjCcmplG6TjDhw4ANEMy0JfsWMFTcMGrXL300s8f7eWb51CmT/0mJdJM5sl6+wCh9L3cY0QJ+iEkA==","From":"Zi Yan <ziy@nvidia.com>","To":"Jordan Niethe <jniethe@nvidia.com>","Cc":"linux-mm@kvack.org, balbirs@nvidia.com, matthew.brost@intel.com,\n akpm@linux-foundation.org, linux-kernel@vger.kernel.org,\n dri-devel@lists.freedesktop.org, 
david@redhat.com, apopple@nvidia.com,\n lorenzo.stoakes@oracle.com, lyude@redhat.com, dakr@kernel.org,\n airlied@gmail.com, simona@ffwll.ch, rcampbell@nvidia.com,\n mpenttil@redhat.com, jgg@nvidia.com, willy@infradead.org,\n linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org, jgg@ziepe.ca,\n Felix.Kuehling@amd.com","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","Date":"Tue, 13 Jan 2026 15:04:18 -0500","X-Mailer":"MailMate (2.0r6290)","Message-ID":"<36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com>","In-Reply-To":"<20260107091823.68974-12-jniethe@nvidia.com>","References":"<20260107091823.68974-1-jniethe@nvidia.com>\n <20260107091823.68974-12-jniethe@nvidia.com>","Content-Type":"text/plain; charset=UTF-8","Content-Transfer-Encoding":"quoted-printable","X-ClientProxiedBy":"BYAPR11CA0075.namprd11.prod.outlook.com\n (2603:10b6:a03:f4::16) To DS7PR12MB9473.namprd12.prod.outlook.com\n (2603:10b6:8:252::5)","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","List-Id":"<linuxppc-dev.lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev+help@lists.ozlabs.org>","List-Owner":"<mailto:linuxppc-dev+owner@lists.ozlabs.org>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Archive":"<https://lore.kernel.org/linuxppc-dev/>,\n  <https://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Subscribe":"<mailto:linuxppc-dev+subscribe@lists.ozlabs.org>,\n  <mailto:linuxppc-dev+subscribe-digest@lists.ozlabs.org>,\n  
<mailto:linuxppc-dev+subscribe-nomail@lists.ozlabs.org>","List-Unsubscribe":"<mailto:linuxppc-dev+unsubscribe@lists.ozlabs.org>","Precedence":"list","MIME-Version":"1.0","X-MS-PublicTrafficType":"Email","X-MS-TrafficTypeDiagnostic":"DS7PR12MB9473:EE_|PH7PR12MB5829:EE_","X-MS-Office365-Filtering-Correlation-Id":"ed79eb82-60af-4bb1-38ed-08de52def55d","X-MS-Exchange-SenderADCheck":"1","X-MS-Exchange-AntiSpam-Relay":"0","X-Microsoft-Antispam":"BCL:0;ARA:13230040|376014|7416014|366016|1800799024;","X-Microsoft-Antispam-Message-Info":"=?utf-8?q?M7Mf9U1C5fingDWKQFh2Ge6yDQl8jWQ?=\n\t=?utf-8?q?278FcUEmjD1+I98UTCuwHB4xt4HHoVPIqBorHQXzHMlq/d6fpn2p+VLZVORUSQ2RA?=\n\t=?utf-8?q?LgagcantirPFkvmN27UbJhHgAJk8bY2VhN9jLlmYLZcxb8j7Ml8X6HmabfD28Oo67?=\n\t=?utf-8?q?ZiTy05JznRK66Zwn5RNFe1NhuCm0oUgekeD1N1/Hs6KLoVIw1/fIqB0W9PygVYnTW?=\n\t=?utf-8?q?HzP+xkqLgvmK4MTNl9hLrMlbxz+cgf8k606G3RmX6XMFjyFb3Yl9xoFRffLpvSDaw?=\n\t=?utf-8?q?G8Ti2PsrLvpKazmBEYovWa8rtuFHiptKUtllXVAVtbjcxdM0xP+Sr7QasF7+fDUeR?=\n\t=?utf-8?q?xb0Lvbdc3z7pcipLP1Cm+eOmOzQiUTGoukdaaaJPbeTPc7V02q/q/3oBJK6bC6tkD?=\n\t=?utf-8?q?uKIE/BQBzkpZ5mh7yxquhCcZinKxnj1zuO9mLGn3ZYtBWxkCfRWB4GwIuwS5SMBDV?=\n\t=?utf-8?q?DAoDNhPO5pDG7EKN+y7y9SMQ1MSZkRRQv4honREur9guJv/cPH1aa2eomtNGUYAmM?=\n\t=?utf-8?q?XYRv/1iL3GO2SLs1aLOEK45eC+NhbxjqtkjlvvfZ8nIRJpwVTWDHb1xsl8MFDYZNd?=\n\t=?utf-8?q?95gURjI/35diwjPyYEi06ZYO8Y7P+VycDjnL28RrzONo3Q9jUVjFL7D1Gvumls/lG?=\n\t=?utf-8?q?0D431bm/mYhe80i3rLJ0qDNJuJiVt6EzOFWU2dmIWsL7ND4HNYYgWHrikXBddgSCB?=\n\t=?utf-8?q?M51Pft0cjUXec1fOt2cNaO3J+GSWyav+shf54d62/FIsTkYuortumm9Gom3FOlafc?=\n\t=?utf-8?q?yx+lSVAf2S95HEy914X/QOkm9jR0mYuTuPOJZxzqrwK62b9zPEPkr2FiouSxyLqnb?=\n\t=?utf-8?q?RrknhtagxYzdUtg8wr4f+WGaEPEHuvoo2FH4o/np2Q/6W5E6NxxqCBO3VfujzqcM7?=\n\t=?utf-8?q?ZqAUW5n8VTU9lGSM2i/T2GxGws4YF92Y1kyVsNeIWvItx6wJiyepMJAZLR8cXZalo?=\n\t=?utf-8?q?FRGwBZV5GnUCMCzTsS7TQUMt6/wo9p+JF2wf19gaHFk/bYahyAiYGCPfoUkQYprQn?=\n\t=?utf-8?q?b7eg0yaKxYpHs41ZKIfZi+F3a2Bo0aZFNag/aWJ5+wJDiTEn87z7ilhGRl/jP6+ki?=\n\t=?utf-8?q?NgloIo+kt6Fpw1lSCpvXvIysb5RBOjX
8MuaNQNDrLYwSufTF9Ra5PB+rA3fsrsD1N?=\n\t=?utf-8?q?jYL0bGg78XGcB3rn15OPT+lmtlVY6mF+A5FmnoEuM628ZpklkhB7flc8v+HcxHsDM?=\n\t=?utf-8?q?n8GhPylbZYa00L0zQuFwAxfwkX57Guq76p6OCxGPjF+OvW2Pgyekmhyi+ln/cd18K?=\n\t=?utf-8?q?zjyXZb+qvXd5QdExZPOWxskdHRsDfYBMHcRC5F75plab0Whme03bGHgup+XYOxujx?=\n\t=?utf-8?q?Dfs3I3fDzH3SM372qERfc6TuWo7I68mG8rKYVjIDoQxhCgbTmWVd4I8nEIVkMy88L?=\n\t=?utf-8?q?6TbDCLDGoQm?=","X-Forefront-Antispam-Report":"\n\tCIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR12MB9473.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(376014)(7416014)(366016)(1800799024);DIR:OUT;SFP:1101;","X-MS-Exchange-AntiSpam-MessageData-ChunkCount":"1","X-MS-Exchange-AntiSpam-MessageData-0":"=?utf-8?q?oalPIJvYQE7uKIwHegLbOXC89kev?=\n\t=?utf-8?q?bTUkukO0e73L9lRn4jr0isVG4Dw6JF/4lbRmGttxqlCPRK+HUGoLY7Fmgyvn2cJ49?=\n\t=?utf-8?q?+LoWS+1mNRPKEhdQZJAWP6VEyf76oeSgS0Q5twz32vjSQULErHafyNc1j9osvOOHu?=\n\t=?utf-8?q?cAOhRv4QNC77oH5XIrwK9DGD9WbOnruiPEl/WIBAL69ff56LHFQNacZdsKijzfHm9?=\n\t=?utf-8?q?BVXgf/Fhj9kqV7v7vbE6zEjbBxstqda/t91K70Cguphs3YpY3uj41oo6vS9hyEI/r?=\n\t=?utf-8?q?0rFUYaUTggRW+kkNfWrTva33TmYo9Sm79HTd+EeQgo8ogu5cOs9nPKOKT3ME2n3lI?=\n\t=?utf-8?q?afXg4aDGr16mapWWNVUIUEQe6bomj3miV1d4XED/6Z2psC1O741WfLSeZG2gI9lWd?=\n\t=?utf-8?q?rKAOPf0T3Ndv8SjKUZZWyZ70tbNMoHufDCKGttqtbKG6PjhryA44E56UY8A1Ln4vp?=\n\t=?utf-8?q?M3XjWwv/LQ+uWXtHPFIMffBhcJykLkd+eciJOJsfdnct0f31efa6SSYpb5/sJLo2W?=\n\t=?utf-8?q?TqSFisBNJrQ5BheY6kKjiRE0MgX46X+f3LfabGeA0UuPjgP2K6rcg/pkldEQuhya8?=\n\t=?utf-8?q?uqZC/O0/YskGTSA4eQfyJn1kWJkd97dHgZqtA9vQ8Wxu6WOtRY3VzNMS2bN/2C3dq?=\n\t=?utf-8?q?Q3wLJhZF53BOUVkOwtXSE5yA5BOrVsqoFyaqtCnPtR/Oeeb3a5gSEcMZZmMUd1Z1S?=\n\t=?utf-8?q?LcjbL012lFZoQNFJZttXH17mxGlcyPIidlohoTEFL0EuASjf4FaGu7FJ5fHWgeGWc?=\n\t=?utf-8?q?tURDa438RBhplhvOEy+64Bf+T0YIuS2TU2Om7gbYwJqF7xWnf5qePg9Ix37RhaaZ5?=\n\t=?utf-8?q?1+aLMjsp41NdvwKkVTYi4OW4843MTcNL9ZMkEyMAytBZuXYawov+HybwRBjr5daAF?=\n\t=?utf-8?q?uEgN1XunNyC+WaXAJacm0JjFjAKuGo1why1VSRrrtqPpkdyNmOWi1Zli8Ek7fuOOQ?=\n\t=?utf-8?q?l5MA+KJDLpAbC99Wg3son5hZyF
Ri53NoeNMnuaDHhFxgGEcXX3NvSLT/o2RRa+Qa3?=\n\t=?utf-8?q?cm67Un1palqWBIQB4sj57OiVt1W4jsNuVwb6AOsimncJxgHcQwXCteVQjacKhe01p?=\n\t=?utf-8?q?V/1RN0QqJ1kbnxtVGjA9fNMMapILYrcauWtU3EisXJVQ9JIVFNY4dFsbTmgGkaO5h?=\n\t=?utf-8?q?qwfP6fPaQnm0dd3UlkTNlj6VXes8Kezft+hvoAJg/CcBiwjwGoCwx+8YywSWFVl8A?=\n\t=?utf-8?q?c25cbuSEPjoLjzWw8HVSaQU8bxZcsk8v83OcEHnzUPQtoF4jY1sBFqwh/sNn86ylx?=\n\t=?utf-8?q?DMcU6ptxvhAp3v5pJphlXXR65TCJD6ZoRn4pnl7O78h5RDHiD2Fw3pTCfLRn6datf?=\n\t=?utf-8?q?SH4j2sXt1THatnvpyeERW0Sl8Fgo+xzjm6UiRCdnq/h68iGRIKsUd7we3Brb48NnN?=\n\t=?utf-8?q?ms3LriUT8/uNCuiJqB462/ZlNTD1yaaqQy0g7bAv66Mzhgku+iq0VZWeDdD0Jj/B8?=\n\t=?utf-8?q?oM0SY+cEuS/zSlMO31nTxsVc7zeRpTNOmyqELHg9hcdB2V6uQ6w1xSXLv0tW0FOiq?=\n\t=?utf-8?q?nL1GZb6rJ+bxIjhA54D9Kkouvx0F6ZSYbfs1SmXcBkRdtDm7YE0as+7FAiLCbrwFb?=\n\t=?utf-8?q?7kDjZDDQ+ZEUfhRlwcu+Jbjxn7q4VFgUDBsuOazeyKtVHoWnzRqsq9b2B4OT72YBI?=\n\t=?utf-8?q?rc3T4rggHQhr9wuzWd9dtp9WozeFBPAw=3D=3D?=","X-OriginatorOrg":"Nvidia.com","X-MS-Exchange-CrossTenant-Network-Message-Id":"\n ed79eb82-60af-4bb1-38ed-08de52def55d","X-MS-Exchange-CrossTenant-AuthSource":"DS7PR12MB9473.namprd12.prod.outlook.com","X-MS-Exchange-CrossTenant-AuthAs":"Internal","X-MS-Exchange-CrossTenant-OriginalArrivalTime":"13 Jan 2026 20:04:28.7347\n (UTC)","X-MS-Exchange-CrossTenant-FromEntityHeader":"Hosted","X-MS-Exchange-CrossTenant-Id":"43083d15-7273-40c1-b7db-39efd9ccc17a","X-MS-Exchange-CrossTenant-MailboxType":"HOSTED","X-MS-Exchange-CrossTenant-UserPrincipalName":"\n NzSboNEHvdg/7LYKgk16WjV3R07nDrnk2PYlaGi83SuDabolP+rVMLBSBdeZmp5+","X-MS-Exchange-Transport-CrossTenantHeadersStamped":"PH7PR12MB5829","X-Spam-Status":"No, score=-0.2 required=3.0 tests=ARC_SIGNED,ARC_VALID,\n\tDKIMWL_WL_HIGH,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,\n\tRCVD_IN_DNSWL_NONE,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS,SPF_PASS\n\tautolearn=disabled version=4.0.1 OzLabs 8","X-Spam-Checker-Version":"SpamAssassin 4.0.1 (2024-03-25) on 
lists.ozlabs.org"}},{"id":3639364,"web_url":"http://patchwork.ozlabs.org/comment/3639364/","msgid":"<c9afedc6-f763-410f-b78b-522b98122f06@nvidia.com>","date":"2026-01-20T22:33:07","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":92354,"url":"http://patchwork.ozlabs.org/api/people/92354/","name":"Jordan Niethe","email":"jniethe@nvidia.com"},"content":"On 14/1/26 07:04, Zi Yan wrote:\n> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n> \n>> Currently when creating these device private struct pages, the first\n>> step is to use request_free_mem_region() to get a range of physical\n>> address space large enough to represent the device's memory. This\n>> allocated physical address range is then remapped as device private\n>> memory using memremap_pages().\n>>\n>> Needing allocation of physical address space has some problems:\n>>\n>>    1) There may be insufficient physical address space to represent the\n>>       device memory. KASLR reducing the physical address space and VM\n>>       configurations with limited physical address space increase the\n>>       likelihood of hitting this, especially as device memory increases. 
This\n>>       has been observed to prevent device private from being initialized.\n>>\n>>    2) Attempting to add the device private pages to the linear map at\n>>       addresses beyond the actual physical memory causes issues on\n>>       architectures like aarch64 meaning the feature does not work there.\n>>\n>> Instead of using the physical address space, introduce a device private\n>> address space and allocate device regions from there to represent the\n>> device private pages.\n>>\n>> Introduce a new interface memremap_device_private_pagemap() that\n>> allocates a requested amount of device private address space and creates\n>> the necessary device private pages.\n>>\n>> To support this new interface, struct dev_pagemap needs some changes:\n>>\n>>    - Add a new dev_pagemap::nr_pages field as an input parameter.\n>>    - Add a new dev_pagemap::pages array to store the device\n>>      private pages.\n>>\n>> When using memremap_device_private_pagemap(), rather than passing in\n>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n>> private range that is reserved is returned in dev_pagemap::range.\n>>\n>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n>> MEMORY_DEVICE_PRIVATE.\n>>\n>> Represent this device private address space using a new\n>> device_private_pgmap_tree maple tree. This tree maps a given device\n>> private address to a struct dev_pagemap, where a specific device private\n>> page may then be looked up in that dev_pagemap::pages array.\n>>\n>> Device private address space can be reclaimed and the associated device\n>> private pages freed using the corresponding new\n>> memunmap_device_private_pagemap() interface.\n>>\n>> Because the device private pages now live outside the physical address\n>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n>> et al. 
are no longer meaningful.\n>>\n>> Introduce helpers:\n>>\n>>    - device_private_page_to_offset()\n>>    - device_private_folio_to_offset()\n>>\n>> to take a given device private page / folio and return its offset within\n>> the device private address space.\n>>\n>> Update the places where we previously converted a device private page to\n>> a PFN to use these new helpers. When we encounter a device private\n>> offset, instead of looking up its page within the pagemap use\n>> device_private_offset_to_page() instead.\n>>\n>> Update the existing users:\n>>\n>>   - lib/test_hmm.c\n>>   - ppc ultravisor\n>>   - drm/amd/amdkfd\n>>   - gpu/drm/xe\n>>   - gpu/drm/nouveau\n>>\n>> to use the new memremap_device_private_pagemap() interface.\n>>\n>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>>\n>> ---\n>>\n>> NOTE: The updates to the existing drivers have only been compile tested.\n>> I'll need some help in testing these drivers.\n>>\n>> v1:\n>> - Include NUMA node parameter for memremap_device_private_pagemap()\n>> - Add devm_memremap_device_private_pagemap() and friends\n>> - Update existing users of memremap_pages():\n>>      - ppc ultravisor\n>>      - drm/amd/amdkfd\n>>      - gpu/drm/xe\n>>      - gpu/drm/nouveau\n>> - Update for HMM huge page support\n>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>>\n>> v2:\n>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n>> ---\n>>   Documentation/mm/hmm.rst                 |  11 +-\n>>   arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>>   drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>>   drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>>   include/linux/hmm.h                      |   3 +\n>>   include/linux/leafops.h                  |  16 +-\n>>   include/linux/memremap.h                 |  64 +++++++-\n>>   
include/linux/migrate.h                  |   6 +-\n>>   include/linux/mm.h                       |   2 +\n>>   include/linux/rmap.h                     |   5 +-\n>>   include/linux/swapops.h                  |  10 +-\n>>   lib/test_hmm.c                           |  69 ++++----\n>>   mm/debug.c                               |   9 +-\n>>   mm/memremap.c                            | 193 ++++++++++++++++++-----\n>>   mm/mm_init.c                             |   8 +-\n>>   mm/page_vma_mapped.c                     |  19 ++-\n>>   mm/rmap.c                                |  43 +++--\n>>   mm/util.c                                |   5 +-\n>>   19 files changed, 391 insertions(+), 199 deletions(-)\n>>\n> <snip>\n> \n>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n>> index e65329e1969f..b36599ab41ba 100644\n>> --- a/include/linux/mm.h\n>> +++ b/include/linux/mm.h\n>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>>    */\n>>   static inline unsigned long folio_pfn(const struct folio *folio)\n>>   {\n>> +\tVM_BUG_ON(folio_is_device_private(folio));\n> \n> Please use VM_WARN_ON instead.\n\nack.\n\n> \n>> +\n>>   \treturn page_to_pfn(&folio->page);\n>>   }\n>>\n>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n>> index 57c63b6a8f65..c1561a92864f 100644\n>> --- a/include/linux/rmap.h\n>> +++ b/include/linux/rmap.h\n>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>   static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>   {\n>>   \tif (folio_is_device_private(folio))\n>> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>>   \t\t       PVMW_PFN_DEVICE_PRIVATE;\n>>\n>>   \treturn page_vma_walk_pfn(folio_pfn(folio));\n>> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>\n>>   static inline struct page 
*page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>>   {\n>> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>> +\n>>   \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>   }\n> \n> <snip>\n> \n>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>> index 96c525785d78..141fe5abd33f 100644\n>> --- a/mm/page_vma_mapped.c\n>> +++ b/mm/page_vma_mapped.c\n>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>>   static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>   {\n>>   \tunsigned long pfn;\n>> +\tbool device_private = false;\n>>   \tpte_t ptent = ptep_get(pvmw->pte);\n>>\n>>   \tif (pvmw->flags & PVMW_MIGRATION) {\n>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>   \t\tif (!softleaf_is_migration(entry))\n>>   \t\t\treturn false;\n>>\n>> +\t\tif (softleaf_is_migration_device_private(entry))\n>> +\t\t\tdevice_private = true;\n>> +\n>>   \t\tpfn = softleaf_to_pfn(entry);\n>>   \t} else if (pte_present(ptent)) {\n>>   \t\tpfn = pte_pfn(ptent);\n>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>   \t\t\treturn false;\n>>\n>>   \t\tpfn = softleaf_to_pfn(entry);\n>> +\n>> +\t\tif (softleaf_is_device_private(entry))\n>> +\t\t\tdevice_private = true;\n>>   \t}\n>>\n>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>> +\t\treturn false;\n>> +\n>>   \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>   \t\treturn false;\n>>   \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>   }\n>>\n>>   /* Returns true if the two ranges overlap.  Careful to not overflow. 
*/\n>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>   {\n>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>> +\t\treturn false;\n>> +\n>>   \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>   \t\treturn false;\n>>   \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>\n>>   \t\t\t\tif (!softleaf_is_migration(entry) ||\n>>   \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n>>   \t\t\t\t\t       pvmw))\n>>   \t\t\t\t\treturn not_found(pvmw);\n>>   \t\t\t\treturn true;\n>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>   \t\t\tif (likely(pmd_trans_huge(pmde))) {\n>>   \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n>>   \t\t\t\t\treturn not_found(pvmw);\n>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>>   \t\t\t\t\treturn not_found(pvmw);\n>>   \t\t\t\treturn true;\n>>   \t\t\t}\n> \n> It seems to me that you can add a new flag like “bool is_device_private” to\n> indicate whether pfn is a device private index instead of pfn without\n> manipulating pvmw->pfn itself.\n\nWe could do it like that, however my concern with using a new param was that\nstoring this info separately might make it easier to misuse a device private\nindex as a regular pfn.\n\nIt seemed like it could be easy to overlook both when creating the pvmw and\nthen when accessing the pfn.\n\n\nThanks,\nJordan.\n\n> \n> Best Regards,\n> Yan, Zi","headers":{"Message-ID":"<c9afedc6-f763-410f-b78b-522b98122f06@nvidia.com>","Date":"Wed, 21 Jan 2026 09:33:07 +1100","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","From":"Jordan Niethe <jniethe@nvidia.com>","To":"Zi Yan <ziy@nvidia.com>","Cc":"linux-mm@kvack.org, balbirs@nvidia.com, matthew.brost@intel.com,\n akpm@linux-foundation.org, linux-kernel@vger.kernel.org,\n dri-devel@lists.freedesktop.org, david@redhat.com, apopple@nvidia.com,\n lorenzo.stoakes@oracle.com, lyude@redhat.com, dakr@kernel.org,\n airlied@gmail.com, simona@ffwll.ch, rcampbell@nvidia.com,\n mpenttil@redhat.com, jgg@nvidia.com, willy@infradead.org,\n linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org, jgg@ziepe.ca,\n Felix.Kuehling@amd.com","In-Reply-To":"<36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com>","References":"<20260107091823.68974-1-jniethe@nvidia.com>\n <20260107091823.68974-12-jniethe@nvidia.com>\n <36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com>"}},{"id":3639372,"web_url":"http://patchwork.ozlabs.org/comment/3639372/","msgid":"<6C5F185E-BB12-4B01-8283-F2C956E84AA3@nvidia.com>","date":"2026-01-20T22:53:41","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":76947,"url":"http://patchwork.ozlabs.org/api/people/76947/","name":"Zi Yan","email":"ziy@nvidia.com"},"content":"On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n\n> On 14/1/26 07:04, Zi Yan wrote:\n>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n>>\n>>> Currently when creating these device private struct pages, the first\n>>> step is to use request_free_mem_region() to get a range of physical\n>>> address space large enough to represent the device's memory. This\n>>> allocated physical address range is then remapped as device private\n>>> memory using memremap_pages().\n>>>\n>>> Needing allocation of physical address space has some problems:\n>>>\n>>>    1) There may be insufficient physical address space to represent the\n>>>       device memory. KASLR reducing the physical address space and VM\n>>>       configurations with limited physical address space increase the\n>>>       likelihood of hitting this, especially as device memory increases. 
This\n>>>       has been observed to prevent device private from being initialized.\n>>>\n>>>    2) Attempting to add the device private pages to the linear map at\n>>>       addresses beyond the actual physical memory causes issues on\n>>>       architectures like aarch64, meaning the feature does not work there.\n>>>\n>>> Instead of using the physical address space, introduce a device private\n>>> address space and allocate device regions from there to represent the\n>>> device private pages.\n>>>\n>>> Introduce a new interface memremap_device_private_pagemap() that\n>>> allocates a requested amount of device private address space and creates\n>>> the necessary device private pages.\n>>>\n>>> To support this new interface, struct dev_pagemap needs some changes:\n>>>\n>>>    - Add a new dev_pagemap::nr_pages field as an input parameter.\n>>>    - Add a new dev_pagemap::pages array to store the device\n>>>      private pages.\n>>>\n>>> When using memremap_device_private_pagemap(), rather than passing in\n>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n>>> private range that is reserved is returned in dev_pagemap::range.\n>>>\n>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n>>> MEMORY_DEVICE_PRIVATE.\n>>>\n>>> Represent this device private address space using a new\n>>> device_private_pgmap_tree maple tree. This tree maps a given device\n>>> private address to a struct dev_pagemap, where a specific device private\n>>> page may then be looked up in that dev_pagemap::pages array.\n>>>\n>>> Device private address space can be reclaimed and the associated device\n>>> private pages freed using the corresponding new\n>>> memunmap_device_private_pagemap() interface.\n>>>\n>>> Because the device private pages now live outside the physical address\n>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n>>> et al. 
are no longer meaningful.\n>>>\n>>> Introduce helpers:\n>>>\n>>>    - device_private_page_to_offset()\n>>>    - device_private_folio_to_offset()\n>>>\n>>> to take a given device private page / folio and return its offset within\n>>> the device private address space.\n>>>\n>>> Update the places where we previously converted a device private page to\n>>> a PFN to use these new helpers. When we encounter a device private\n>>> offset, instead of looking up its page within the pagemap, use\n>>> device_private_offset_to_page() instead.\n>>>\n>>> Update the existing users:\n>>>\n>>>   - lib/test_hmm.c\n>>>   - ppc ultravisor\n>>>   - drm/amd/amdkfd\n>>>   - gpu/drm/xe\n>>>   - gpu/drm/nouveau\n>>>\n>>> to use the new memremap_device_private_pagemap() interface.\n>>>\n>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>>>\n>>> ---\n>>>\n>>> NOTE: The updates to the existing drivers have only been compile tested.\n>>> I'll need some help in testing these drivers.\n>>>\n>>> v1:\n>>> - Include NUMA node parameter for memremap_device_private_pagemap()\n>>> - Add devm_memremap_device_private_pagemap() and friends\n>>> - Update existing users of memremap_pages():\n>>>      - ppc ultravisor\n>>>      - drm/amd/amdkfd\n>>>      - gpu/drm/xe\n>>>      - gpu/drm/nouveau\n>>> - Update for HMM huge page support\n>>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>>>\n>>> v2:\n>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n>>> ---\n>>>   Documentation/mm/hmm.rst                 |  11 +-\n>>>   arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>>>   drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>>>   drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>>>   include/linux/hmm.h                      |   3 +\n>>>   include/linux/leafops.h                  |  16 +-\n>>>   
include/linux/memremap.h                 |  64 +++++++-\n>>>   include/linux/migrate.h                  |   6 +-\n>>>   include/linux/mm.h                       |   2 +\n>>>   include/linux/rmap.h                     |   5 +-\n>>>   include/linux/swapops.h                  |  10 +-\n>>>   lib/test_hmm.c                           |  69 ++++----\n>>>   mm/debug.c                               |   9 +-\n>>>   mm/memremap.c                            | 193 ++++++++++++++++++-----\n>>>   mm/mm_init.c                             |   8 +-\n>>>   mm/page_vma_mapped.c                     |  19 ++-\n>>>   mm/rmap.c                                |  43 +++--\n>>>   mm/util.c                                |   5 +-\n>>>   19 files changed, 391 insertions(+), 199 deletions(-)\n>>>\n>> <snip>\n>>\n>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n>>> index e65329e1969f..b36599ab41ba 100644\n>>> --- a/include/linux/mm.h\n>>> +++ b/include/linux/mm.h\n>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>>>    */\n>>>   static inline unsigned long folio_pfn(const struct folio *folio)\n>>>   {\n>>> +\tVM_BUG_ON(folio_is_device_private(folio));\n>>\n>> Please use VM_WARN_ON instead.\n>\n> ack.\n>\n>>\n>>> +\n>>>   \treturn page_to_pfn(&folio->page);\n>>>   }\n>>>\n>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n>>> index 57c63b6a8f65..c1561a92864f 100644\n>>> --- a/include/linux/rmap.h\n>>> +++ b/include/linux/rmap.h\n>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>>   static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>   {\n>>>   \tif (folio_is_device_private(folio))\n>>> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n>>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>>>   \t\t       PVMW_PFN_DEVICE_PRIVATE;\n>>>\n>>>   \treturn page_vma_walk_pfn(folio_pfn(folio));\n>>> @@ -959,6 +959,9 @@ static inline unsigned long 
folio_page_vma_walk_pfn(const struct folio *folio)\n>>>\n>>>   static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>>>   {\n>>> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>> +\n>>>   \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>   }\n>>\n>> <snip>\n>>\n>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>>> index 96c525785d78..141fe5abd33f 100644\n>>> --- a/mm/page_vma_mapped.c\n>>> +++ b/mm/page_vma_mapped.c\n>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>>>   static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>   {\n>>>   \tunsigned long pfn;\n>>> +\tbool device_private = false;\n>>>   \tpte_t ptent = ptep_get(pvmw->pte);\n>>>\n>>>   \tif (pvmw->flags & PVMW_MIGRATION) {\n>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>   \t\tif (!softleaf_is_migration(entry))\n>>>   \t\t\treturn false;\n>>>\n>>> +\t\tif (softleaf_is_migration_device_private(entry))\n>>> +\t\t\tdevice_private = true;\n>>> +\n>>>   \t\tpfn = softleaf_to_pfn(entry);\n>>>   \t} else if (pte_present(ptent)) {\n>>>   \t\tpfn = pte_pfn(ptent);\n>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>   \t\t\treturn false;\n>>>\n>>>   \t\tpfn = softleaf_to_pfn(entry);\n>>> +\n>>> +\t\tif (softleaf_is_device_private(entry))\n>>> +\t\t\tdevice_private = true;\n>>>   \t}\n>>>\n>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>> +\t\treturn false;\n>>> +\n>>>   \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>   \t\treturn false;\n>>>   \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>   }\n>>>\n>>>   /* Returns true if the two ranges 
overlap.  Careful to not overflow. */\n>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>>   {\n>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>> +\t\treturn false;\n>>> +\n>>>   \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>   \t\treturn false;\n>>>   \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>\n>>>   \t\t\t\tif (!softleaf_is_migration(entry) ||\n>>>   \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n>>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n>>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n>>>   \t\t\t\t\t       pvmw))\n>>>   \t\t\t\t\treturn not_found(pvmw);\n>>>   \t\t\t\treturn true;\n>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>   \t\t\tif (likely(pmd_trans_huge(pmde))) {\n>>>   \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n>>>   \t\t\t\t\treturn not_found(pvmw);\n>>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n>>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>>>   \t\t\t\t\treturn not_found(pvmw);\n>>>   \t\t\t\treturn true;\n>>>   \t\t\t}\n>>\n>> It seems to me that you can add a new flag like “bool is_device_private” to\n>> indicate whether pfn is a device private index instead of pfn without\n>> manipulating pvmw->pfn itself.\n>\n> We could do it like that, however my concern with using a new param was that\n> storing this info separately might make it easier to misuse a device private\n> index as a regular pfn.\n>\n> It seemed like it could be easy to overlook both when creating the pvmw and\n> then when accessing the pfn.\n\nThat is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\nreturn the converted pfn instead of pvmw->pfn directly. 
You can add a comment\nto ask people to use the helper function and even mark pvmw->pfn /* do not use\ndirectly */.\n\nIn addition, your patch manipulates pfn by left shifting it by 1. Are you sure\nthere is no weird arch having pfns with bit 63 being 1? Your change could\nbreak it, right?\n\n\nBest Regards,\nYan, Zi","headers":{"From":"Zi Yan <ziy@nvidia.com>","To":"Jordan Niethe <jniethe@nvidia.com>","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","Date":"Tue, 20 Jan 2026 17:53:41 -0500","Message-ID":"<6C5F185E-BB12-4B01-8283-F2C956E84AA3@nvidia.com>","In-Reply-To":"<c9afedc6-f763-410f-b78b-522b98122f06@nvidia.com>","References":"<20260107091823.68974-1-jniethe@nvidia.com>\n <20260107091823.68974-12-jniethe@nvidia.com>\n <36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com>\n <c9afedc6-f763-410f-b78b-522b98122f06@nvidia.com>"}}
zWD1tz7WMqhrIR7IScdb9uiZJC0j?=\n\t=?utf-8?q?wqzzBWuPaRxUadlOXyl1H3F2/Kii9F5tOS+XbZjrjtoaOaF8PDLPrd9ZEScbZeKts?=\n\t=?utf-8?q?edP9VvVjldI+ui79YsUs+Pd5Oshr2FFy7oFXrFnSzyOCks1NnPtTZZrr2y0DFNhxK?=\n\t=?utf-8?q?sFaCYY0bkcwR2gjlUGGjNjDSK9fRYGbuDJ2ZZk6oPH0zr76tZz3H1US2Ehdd5Bgle?=\n\t=?utf-8?q?+PYBXUaualiFiRu0DOV2tNmPgNPZqafk0nzkFvqk117D74AKd8sEDZ2vIv/e08aqk?=\n\t=?utf-8?q?7bN7JJPJ8xkUMLwFcMrd8SXTKGkoaf+0YOq5+DF8TXZJfV63SSKcW/s/3zbH1zcuq?=\n\t=?utf-8?q?eriA2T8oI/o8QNwjbm7xcS9oB54ncdIf9d7rK5nE0R8CF8g/Z0xJ1HofMPSnXgc6Q?=\n\t=?utf-8?q?n1KaVHTrJHu5uiDxViwoxwSFTqV9Oq7bzzt1cxQXhj/XbB0HaGLbsmUAKhsvS+nPC?=\n\t=?utf-8?q?CZ7kdfUnheV9mAFIfYIqBRsTch3hdQ2wdi1NI64CJq3SM5lh/jyQqHqWOkiIPl+Lm?=\n\t=?utf-8?q?Y5X8bu8RZX7P1c1B4yxvPd3IIVKsGqlpzMlye1fuoBM7Yl8KKdeeajrpVL0B1X5/5?=\n\t=?utf-8?q?LuzKRjtk7Eb+lD9uT6y7B5+3Q6q6ENUpUF6J0dBlwPoZOXYsybYsC9fDa/jGY0VHh?=\n\t=?utf-8?q?xC32Fk90eqCRHyG382xLkdLtsZGbohofob7IDjChOK7QCGmrbnQB/gbb9q6lfL2sI?=\n\t=?utf-8?q?UvELH84VeMnJ7qhpvNWfeHtCLMFYj2HJz/hvO+AIg4Zt1pd1CIc8XwWIGmt3M+Aoi?=\n\t=?utf-8?q?Z6pGFxh7JfKEJN72kvWgvnk4j+60f8pC8RPCNmcX9YwSa2baErm3WqLWlKur56Hb2?=\n\t=?utf-8?q?VjNCmexepLQ1kJXcDhBhzNN0p3+Hy5FWSo6jp4TE+IxpqRfwgKiDiUdRfWq+vvcDq?=\n\t=?utf-8?q?GdnULvtwAGlYCAYAwEbvtt372BgaJ4kYY5lbdrQE/Xa3jQ9fJ6LjE=3D?=","X-Forefront-Antispam-Report":"\n\tCIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR12MB9473.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(366016)(1800799024)(7416014)(376014);DIR:OUT;SFP:1101;","X-MS-Exchange-AntiSpam-MessageData-ChunkCount":"1","X-MS-Exchange-AntiSpam-MessageData-0":"=?utf-8?q?IR12TSsGHGakNdPZLKkT0RLwdlHA?=\n\t=?utf-8?q?BWbOn968kL6OEq+FIct8iw31sIZF2xVd0zTUQvU0HjtP5BtSmK0HwMbPZxilqIiuj?=\n\t=?utf-8?q?qYmhEw5R6E3MRRSNmG1S0ugJmdhj1+d9qW1acVoP06spZFITKZRmGE/omUKfy3B+c?=\n\t=?utf-8?q?n9CxhRj0D6NNpYouvZaHgpDTzAdsDVIWBJZhebMUWDhA+hZW+uwjHhFbk2ugo6jc9?=\n\t=?utf-8?q?UWAmVbMWTfMYcyl8tfYx2EdvjxmIf+HqkElA9oV8+b7k0f0PDdCxbZGwARtSsrP5o?=\n\t=?utf-8?q?EGmfEPPDbewNf/MD9LlUs8syzBG+SYhvw5fGcG3q/Qe3bB1vs2BGvxtA5Rb+7N5EC?=\
n\t=?utf-8?q?h9Kvs5p6d6EpydabmlOUWg33ig9lnuxkJPF5XBsUbxnh934Uc0D4u9Pd+iZIOhuTl?=\n\t=?utf-8?q?5Eh3Tt0Wyo+28Om8LWo05yPUFolh6j1xfCTPJYt8hKi9p46euxL75Rq3+4T03q1My?=\n\t=?utf-8?q?9ro6PZB82PGZVyMfshoVNhwVSBCPlpLvV7XW3JhCO3tZIqd6LTOkSHIEz8djO8U+B?=\n\t=?utf-8?q?noa2rQnJHDLTfVle3wNcYpy/pWnp6a29gzja/ASPlHf1slWwfX6+ULY5IkfIe/tlW?=\n\t=?utf-8?q?BiRWvAiyqc7emB8x+LyTOU9yauOeSY/i2OUTqw0mf9s+ygXvo+MTTkl/ANyPgy0MQ?=\n\t=?utf-8?q?t/eTlzs0YEzbS8/SJ6y+/7e8UZRmQJqpL1OkVmGrOs/sX6DuhoGJhKSUC5Y+WnIT4?=\n\t=?utf-8?q?sVFDkjN/OBzJziQw15V6TZ3aF4bRlrC/QLwP8YhrZGSku4p2X1nXRz1y0s5b05x45?=\n\t=?utf-8?q?XDfikK5G5hYinA5f+86ECObBE51gfv1QY3chMz1/JajKK1c+Z7skTtejBecXQuM7L?=\n\t=?utf-8?q?+qCmMr7X0SNIOZ+vracN1AhwjALgXNqj9nhmF1dHTjV4eDrYmM3r8Z91hR+X5oeJ4?=\n\t=?utf-8?q?1XF+wU8PIulJNsK6UO53XiChLMT/zFQL29jnxgX4CE1ZIrGH6uG0votWROrNxo+/t?=\n\t=?utf-8?q?jm7S6BFjpFGOB9dEW41Rf8JroVHfB4zS+YOiWLAMXZIXy84Y5rxdIaioexH8DTrOs?=\n\t=?utf-8?q?b9mnHCvLnruqXy2gTGjgCp28VnY/DpNc1aCBA/SpCDZK3kvsFNQC6PSRDVMidmfcC?=\n\t=?utf-8?q?RIeU7pRm4vo/M35pyWyE92VgmR70xHwwgmREiDVfFLSo1xrJjjDGRFtbZJya6Ecr3?=\n\t=?utf-8?q?hBwroBh18mCfElaf2cogoO1hlzZFM4mWZgL8h/ptsv5twGxU4daA6IKGR+UOKp10d?=\n\t=?utf-8?q?9Hp19ll4GX6oei6b/QooXcjlMh/pMdLnc9ykW8uZ6JvhlCJgwbM5d9JdBWatksKpz?=\n\t=?utf-8?q?rvwxRNv67SE7gs1ieesTl6R8jvPdg9qsSEfz3xg0UMFfFoGc/V88FRswoKh5CX6I7?=\n\t=?utf-8?q?gVBq3uCIu60mwVymH3805pvtd9BjIr4pMdmOIsgsK0qVw6NO9Ms8aWtAFlJFkT+y9?=\n\t=?utf-8?q?wbQd+qyfp39F3s8mxnnmsbCYvDL+65h7Utn8S7c/5xZtXQqHHz9Xe309HIJYmEzmE?=\n\t=?utf-8?q?BCYXBXOeVbMMl5e13mKwpwAmbM+SIZ4F4BUjO85LEazuanMWIks1NqO6R7Y2HX67g?=\n\t=?utf-8?q?qfllshExwGy5FP1vzyYO2WhLkyWRqDSCrx+Z5r2yG3BB87v4G+C4/Hf40jKrkx0xa?=\n\t=?utf-8?q?uEcaBlLVK1guQA4HNoS2xglBaPJF9fRrUujfxoJat2/Na/uez9Cp+j8FlBexu+yIx?=\n\t=?utf-8?q?L15/EOFY33?=","X-OriginatorOrg":"Nvidia.com","X-MS-Exchange-CrossTenant-Network-Message-Id":"\n 
}},{"id":3639375,"web_url":"http://patchwork.ozlabs.org/comment/3639375/","msgid":"<fd4b6553-3e9e-4829-a12f-51d29a5d7571@nvidia.com>","date":"2026-01-20T23:02:40","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":92354,"url":"http://patchwork.ozlabs.org/api/people/92354/","name":"Jordan Niethe","email":"jniethe@nvidia.com"},"content":"Hi,\n\nOn 21/1/26 09:53, Zi Yan wrote:\n> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n> \n>> On 14/1/26 07:04, Zi Yan wrote:\n>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n>>>\n>>>> Currently when creating these device private struct pages, the first\n>>>> step is to use request_free_mem_region() to get a range of physical\n>>>> address space large enough to represent the devices memory. 
This\n>>>> allocated physical address range is then remapped as device private\n>>>> memory using memremap_pages().\n>>>>\n>>>> Needing allocation of physical address space has some problems:\n>>>>\n>>>>     1) There may be insufficient physical address space to represent the\n>>>>        device memory. KASLR reducing the physical address space and VM\n>>>>        configurations with limited physical address space increase the\n>>>>        likelihood of hitting this especially as device memory increases. This\n>>>>        has been observed to prevent device private from being initialized.\n>>>>\n>>>>     2) Attempting to add the device private pages to the linear map at\n>>>>        addresses beyond the actual physical memory causes issues on\n>>>>        architectures like aarch64 meaning the feature does not work there.\n>>>>\n>>>> Instead of using the physical address space, introduce a device private\n>>>> address space and allocate devices regions from there to represent the\n>>>> device private pages.\n>>>>\n>>>> Introduce a new interface memremap_device_private_pagemap() that\n>>>> allocates a requested amount of device private address space and creates\n>>>> the necessary device private pages.\n>>>>\n>>>> To support this new interface, struct dev_pagemap needs some changes:\n>>>>\n>>>>     - Add a new dev_pagemap::nr_pages field as an input parameter.\n>>>>     - Add a new dev_pagemap::pages array to store the device\n>>>>       private pages.\n>>>>\n>>>> When using memremap_device_private_pagemap(), rather then passing in\n>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n>>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n>>>> private range that is reserved is returned in dev_pagemap::range.\n>>>>\n>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n>>>> MEMORY_DEVICE_PRIVATE.\n>>>>\n>>>> Represent this device private address space using a new\n>>>> device_private_pgmap_tree maple 
tree. This tree maps a given device\n>>>> private address to a struct dev_pagemap, where a specific device private\n>>>> page may then be looked up in that dev_pagemap::pages array.\n>>>>\n>>>> Device private address space can be reclaimed and the assoicated device\n>>>> private pages freed using the corresponding new\n>>>> memunmap_device_private_pagemap() interface.\n>>>>\n>>>> Because the device private pages now live outside the physical address\n>>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n>>>> et al. are no longer meaningful.\n>>>>\n>>>> Introduce helpers:\n>>>>\n>>>>     - device_private_page_to_offset()\n>>>>     - device_private_folio_to_offset()\n>>>>\n>>>> to take a given device private page / folio and return its offset within\n>>>> the device private address space.\n>>>>\n>>>> Update the places where we previously converted a device private page to\n>>>> a PFN to use these new helpers. When we encounter a device private\n>>>> offset, instead of looking up its page within the pagemap use\n>>>> device_private_offset_to_page() instead.\n>>>>\n>>>> Update the existing users:\n>>>>\n>>>>    - lib/test_hmm.c\n>>>>    - ppc ultravisor\n>>>>    - drm/amd/amdkfd\n>>>>    - gpu/drm/xe\n>>>>    - gpu/drm/nouveau\n>>>>\n>>>> to use the new memremap_device_private_pagemap() interface.\n>>>>\n>>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n>>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>>>>\n>>>> ---\n>>>>\n>>>> NOTE: The updates to the existing drivers have only been compile tested.\n>>>> I'll need some help in testing these drivers.\n>>>>\n>>>> v1:\n>>>> - Include NUMA node paramater for memremap_device_private_pagemap()\n>>>> - Add devm_memremap_device_private_pagemap() and friends\n>>>> - Update existing users of memremap_pages():\n>>>>       - ppc ultravisor\n>>>>       - drm/amd/amdkfd\n>>>>       - gpu/drm/xe\n>>>>       - gpu/drm/nouveau\n>>>> - Update for HMM huge page support\n>>>> - Guard 
device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>>>>\n>>>> v2:\n>>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n>>>> ---\n>>>>    Documentation/mm/hmm.rst                 |  11 +-\n>>>>    arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>>>>    drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>>>>    drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>>>>    drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>>>>    include/linux/hmm.h                      |   3 +\n>>>>    include/linux/leafops.h                  |  16 +-\n>>>>    include/linux/memremap.h                 |  64 +++++++-\n>>>>    include/linux/migrate.h                  |   6 +-\n>>>>    include/linux/mm.h                       |   2 +\n>>>>    include/linux/rmap.h                     |   5 +-\n>>>>    include/linux/swapops.h                  |  10 +-\n>>>>    lib/test_hmm.c                           |  69 ++++----\n>>>>    mm/debug.c                               |   9 +-\n>>>>    mm/memremap.c                            | 193 ++++++++++++++++++-----\n>>>>    mm/mm_init.c                             |   8 +-\n>>>>    mm/page_vma_mapped.c                     |  19 ++-\n>>>>    mm/rmap.c                                |  43 +++--\n>>>>    mm/util.c                                |   5 +-\n>>>>    19 files changed, 391 insertions(+), 199 deletions(-)\n>>>>\n>>> <snip>\n>>>\n>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n>>>> index e65329e1969f..b36599ab41ba 100644\n>>>> --- a/include/linux/mm.h\n>>>> +++ b/include/linux/mm.h\n>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>>>>     */\n>>>>    static inline unsigned long folio_pfn(const struct folio *folio)\n>>>>    {\n>>>> +\tVM_BUG_ON(folio_is_device_private(folio));\n>>>\n>>> Please use VM_WARN_ON instead.\n>>\n>> ack.\n>>\n>>>\n>>>> +\n>>>>    \treturn page_to_pfn(&folio->page);\n>>>>    }\n>>>>\n>>>> diff 
--git a/include/linux/rmap.h b/include/linux/rmap.h\n>>>> index 57c63b6a8f65..c1561a92864f 100644\n>>>> --- a/include/linux/rmap.h\n>>>> +++ b/include/linux/rmap.h\n>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>>>    static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>    {\n>>>>    \tif (folio_is_device_private(folio))\n>>>> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n>>>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>>>>    \t\t       PVMW_PFN_DEVICE_PRIVATE;\n>>>>\n>>>>    \treturn page_vma_walk_pfn(folio_pfn(folio));\n>>>> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>\n>>>>    static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>>>>    {\n>>>> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>>>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>> +\n>>>>    \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>    }\n>>>\n>>> <snip>\n>>>\n>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>>>> index 96c525785d78..141fe5abd33f 100644\n>>>> --- a/mm/page_vma_mapped.c\n>>>> +++ b/mm/page_vma_mapped.c\n>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>>>>    static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>    {\n>>>>    \tunsigned long pfn;\n>>>> +\tbool device_private = false;\n>>>>    \tpte_t ptent = ptep_get(pvmw->pte);\n>>>>\n>>>>    \tif (pvmw->flags & PVMW_MIGRATION) {\n>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>    \t\tif (!softleaf_is_migration(entry))\n>>>>    \t\t\treturn false;\n>>>>\n>>>> +\t\tif (softleaf_is_migration_device_private(entry))\n>>>> +\t\t\tdevice_private = true;\n>>>> +\n>>>>    \t\tpfn = softleaf_to_pfn(entry);\n>>>>    \t} else if (pte_present(ptent)) {\n>>>>    
\t\tpfn = pte_pfn(ptent);\n>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>    \t\t\treturn false;\n>>>>\n>>>>    \t\tpfn = softleaf_to_pfn(entry);\n>>>> +\n>>>> +\t\tif (softleaf_is_device_private(entry))\n>>>> +\t\t\tdevice_private = true;\n>>>>    \t}\n>>>>\n>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>> +\t\treturn false;\n>>>> +\n>>>>    \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>    \t\treturn false;\n>>>>    \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>    }\n>>>>\n>>>>    /* Returns true if the two ranges overlap.  Careful to not overflow. */\n>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>>>    {\n>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>> +\t\treturn false;\n>>>> +\n>>>>    \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>    \t\treturn false;\n>>>>    \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>\n>>>>    \t\t\t\tif (!softleaf_is_migration(entry) ||\n>>>>    \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n>>>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n>>>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n>>>>    \t\t\t\t\t       pvmw))\n>>>>    \t\t\t\t\treturn not_found(pvmw);\n>>>>    \t\t\t\treturn true;\n>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>    \t\t\tif (likely(pmd_trans_huge(pmde))) {\n>>>>    \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n>>>>    \t\t\t\t\treturn not_found(pvmw);\n>>>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), 
pvmw))\n>>>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>>>>    \t\t\t\t\treturn not_found(pvmw);\n>>>>    \t\t\t\treturn true;\n>>>>    \t\t\t}\n>>>\n>>> It seems to me that you can add a new flag like “bool is_device_private” to\n>>> indicate whether pfn is a device private index instead of pfn without\n>>> manipulating pvmw->pfn itself.\n>>\n>> We could do it like that, however my concern with using a new param was that\n>> storing this info separately might make it easier to misuse a device private\n>> index as a regular pfn.\n>>\n>> It seemed like it could be easy to overlook both when creating the pvmw and\n>> then when accessing the pfn.\n> \n> That is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\n> return the converted pfn instead of pvmw->pfn directly. You can add a comment\n> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n> directly */.\n\nYeah I agree that is a good idea.\n\n> \n> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n> there is no weird arch having pfns with bit 63 being 1? 
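The scheme under discussion — left-shifting the pfn (or device private offset) and recording a "device private" marker in the freed-up low bit, with accessor helpers so callers never touch the raw encoded value — can be sketched in userspace C. All names and the shift value here are illustrative assumptions echoing the patch, not the kernel code itself:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative values only; the real PVMW_PFN_SHIFT and flag layout are
 * whatever the patch defines. */
#define PVMW_PFN_SHIFT          1UL
#define PVMW_PFN_DEVICE_PRIVATE 1UL

/* Pack a pfn (or device private offset) together with a marker bit.
 * Note the caveat raised in the thread: the shift assumes the top bit
 * of the pfn is unused, as with MIGRATE_PFN_SHIFT for migrate pfns. */
static unsigned long page_vma_walk_encode(unsigned long pfn, bool device_private)
{
	unsigned long v = pfn << PVMW_PFN_SHIFT;

	if (device_private)
		v |= PVMW_PFN_DEVICE_PRIVATE;
	return v;
}

/* Accessor helpers in the spirit of the suggestion above: callers use
 * these instead of reading the encoded field directly. */
static unsigned long page_vma_walk_pfn_of(unsigned long encoded)
{
	return encoded >> PVMW_PFN_SHIFT;
}

static bool page_vma_walk_is_device_private(unsigned long encoded)
{
	return (encoded & PVMW_PFN_DEVICE_PRIVATE) != 0;
}
```

Hiding the raw field behind such helpers is what makes it hard to accidentally treat a device private index as a regular pfn, which is the concern both sides of the exchange are circling.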
Your change could\n> break it, right?\n\nCurrently for migrate pfns we left shift by pfns by MIGRATE_PFN_SHIFT (6), so I\nthought doing something similiar here should be safe.\n\nThanks,\nJordan.\n\n> \n> \n> Best Regards,\n> Yan, Zi","headers":{"Return-Path":"\n <linuxppc-dev+bounces-16083-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=ei5PoZf0;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n (client-ip=112.213.38.117; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-16083-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)","lists.ozlabs.org;\n arc=pass smtp.remote-ip=\"2a01:111:f403:c111::5\" arc.chain=microsoft.com","lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com","lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=ei5PoZf0;\n\tdkim-atps=neutral","lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nvidia.com\n (client-ip=2a01:111:f403:c111::5;\n helo=dm1pr04cu001.outbound.protection.outlook.com;\n envelope-from=jniethe@nvidia.com; receiver=lists.ozlabs.org)","dkim=none (message not signed)\n header.d=none;dmarc=none action=none header.from=nvidia.com;"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [112.213.38.117])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4dwjXy6GHQz1xsg\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 21 Jan 2026 10:03:14 +1100 (AEDT)","from 
boromir.ozlabs.org (localhost [127.0.0.1])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 4dwjXy15rZz2xrL;\n\tWed, 21 Jan 2026 10:03:14 +1100 (AEDT)","from DM1PR04CU001.outbound.protection.outlook.com\n (mail-centralusazlp170100005.outbound.protection.outlook.com\n [IPv6:2a01:111:f403:c111::5])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange secp256r1 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 4dwjXw3xtWz2xjK\n\tfor <linuxppc-dev@lists.ozlabs.org>; Wed, 21 Jan 2026 10:03:12 +1100 (AEDT)","from DM4PR12MB9072.namprd12.prod.outlook.com (2603:10b6:8:be::6) by\n MN0PR12MB6366.namprd12.prod.outlook.com (2603:10b6:208:3c1::19) with\n Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.9520.12; Tue, 20 Jan\n 2026 23:02:48 +0000","from DM4PR12MB9072.namprd12.prod.outlook.com\n ([fe80::9e49:782:8e98:1ff1]) by DM4PR12MB9072.namprd12.prod.outlook.com\n ([fe80::9e49:782:8e98:1ff1%5]) with mapi id 15.20.9520.011; Tue, 20 Jan 2026\n 23:02:48 +0000"],"ARC-Seal":["i=2; a=rsa-sha256; d=lists.ozlabs.org; s=201707; t=1768950194;\n\tcv=pass;\n b=ILSnPSJoWWVVFICwJ3ZlTPiwujyzKiKCcCwxFBJS9bcSb2s9qdNhEY4F6uR/7EPLK3UhsDLTbU660JjWNeVE6ycvfNpQS8YHfVKR/S98ePedN/iFpiYdSvGbLqT5+Tpl6GtpVHRb0X8Y1CzvZM8vLbiS4n8zhpHv+cv5hcHDUmd0abuSMbt+B73BfsAqEevYgNzXBr3iB3U3wcicteu+1DxN+WvTYjPZV8C7nkZJNp0NLr+2jUKpeKbyQwgF4ulWfefRZalRCyvvJO/jAO9bT6mdyf70PhOTkcdJwm/5VFDBIT69TxIv2F+m1wNf3bE/Henk3IQvE4TsZ5pCpCUCyA==","i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none;\n 
b=b7V7nDWGu6U1t/b6MtiirGTH056gRfTEA3ouiIBdTReAcwFIZ+geFLW7bBSzbHZXthjELuQueIpE8qDbh84nBaI/VfFHGBia2Ijw4yFwE00sOYjVpYSFSYbhoCdyKgmg47NlyXP6U7Rc5vsSHkgjS+KwJH2WTTU/WOVYRwqgIx5AiafPHLqe5oraO/w2WnKXFbgMNzfaGMNa4u1wUpXSQJh/Kir7I6A8ANRj0IyWGmDqyalgqag1RRAsnmqnMVoD7rxZTSEtCjPFo8OiVVV1suji1A10whiWsv2ZCOzCKqPgyv1u6UoAUcDTbvIxKoGkWNeJzyFH9Mv4fFHyD/Ni9A=="],"ARC-Message-Signature":["i=2; a=rsa-sha256; d=lists.ozlabs.org; s=201707;\n\tt=1768950194; c=relaxed/relaxed;\n\tbh=qfI4ksFsqdaJEwVOlr/HZSFYGEua3NQnn/TaEy/Jueg=;\n\th=Message-ID:Date:Subject:To:Cc:References:From:In-Reply-To:\n\t Content-Type:MIME-Version;\n b=kB8R5cw2+8fcO8N7F6lwJms008LsYEGopvVSsNc3EcNlibR+9rwo4gC1huTw97LZMWLPm0RSF3wLzCE6Paryi9LMz5Ief2l0fAtSGzMnV5FYlN/CKZOkY46CTy63tDpDO7Nlq1gdAdQNxJSM8LLpxx6HrfppBzNpFHA/4oRLxJuE40Gb2+R7vE4EUm9QbP+0DQBasqhjsKqckcztcWABuSaCAMKBIQ0BMNQ22NiStKLPwtexpEkj++sJ08u0+wFurVdxDATWqodBALFfS9VVHre86QevZGULI1+jQK5pW6yf+CZzukdX50fHJqbmbVx06F/scXGR/BK09fbBEbNyeg==","i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector10001;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=qfI4ksFsqdaJEwVOlr/HZSFYGEua3NQnn/TaEy/Jueg=;\n b=wpnnwzFUt8lHNA/Ht8PchZ1WpMy870xQF9AwUqbv/fFUAgbU57Jc1LqXCxUJWCmCJjGPyUihiFu1qwasUW2eNpzV6gAtV2MfV3+tPlPucdqF61elLdvZtGxe/SPKH/02YAin1PhHpJgTF/v/yV52nlsfU36Jpe4rQtQHrOWA6IwR/WephLm32bRMy5xdU3wXyruZk7tWl6ok8akfcF9jxhg/CVsva4fLvltQW8u9sHHOMANVwQEpKRzpDpd748wXhKWl5yamLBjqSMX+UwmHFEZqlzPQRiaD6oq+4OwRhnGRbhkR3L5x6qXfW8btm7MH8se4/IDIWPmx++mNCS7bfg=="],"ARC-Authentication-Results":["i=2; lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com;\n dkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=ei5PoZf0; dkim-atps=neutral;\n spf=pass (client-ip=2a01:111:f403:c111::5;\n helo=dm1pr04cu001.outbound.protection.outlook.com;\n 
envelope-from=jniethe@nvidia.com;\n receiver=lists.ozlabs.org) smtp.mailfrom=nvidia.com","i=1; mx.microsoft.com 1; spf=pass\n smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;\n dkim=pass header.d=nvidia.com; arc=none"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com;\n s=selector2;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=qfI4ksFsqdaJEwVOlr/HZSFYGEua3NQnn/TaEy/Jueg=;\n b=ei5PoZf0RJtURSmJxEg8WsSsC6jaKAa8C9UWe+yczKo8PIIE7vBysQWq+S7ByMILEnEoZWg4K6hhk5j+P3eAhbtscoLwUOJQiYp9IpAm6NKrJAMAINA9diRNwZZ90xQbo1yl7AdhDz2fLaLGX6tLW9Mln3uEnE25eB6208x1EW5LC3K9uP9fhzHuzKyIEOxwQMWYVCqHqozoMIYJCSlccezO7M93jwCHhWIF+FGCsautbnfmPMPMGwVyPG0F/y6iUj84oLBybkiUGUmWCSXLPlMGLD4AVDU1D6GsnK0gPmLMGsVipagx3VLR5bTrQJ2Z0uD/Bwnz3j2zrZAEfOv0nQ==","Message-ID":"<fd4b6553-3e9e-4829-a12f-51d29a5d7571@nvidia.com>","Date":"Wed, 21 Jan 2026 10:02:40 +1100","User-Agent":"Mozilla Thunderbird","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","To":"Zi Yan <ziy@nvidia.com>","Cc":"linux-mm@kvack.org, balbirs@nvidia.com, matthew.brost@intel.com,\n akpm@linux-foundation.org, linux-kernel@vger.kernel.org,\n dri-devel@lists.freedesktop.org, david@redhat.com, apopple@nvidia.com,\n lorenzo.stoakes@oracle.com, lyude@redhat.com, dakr@kernel.org,\n airlied@gmail.com, simona@ffwll.ch, rcampbell@nvidia.com,\n mpenttil@redhat.com, jgg@nvidia.com, willy@infradead.org,\n linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org, jgg@ziepe.ca,\n Felix.Kuehling@amd.com","References":"<20260107091823.68974-1-jniethe@nvidia.com>\n <20260107091823.68974-12-jniethe@nvidia.com>\n <36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com>\n <c9afedc6-f763-410f-b78b-522b98122f06@nvidia.com>\n <6C5F185E-BB12-4B01-8283-F2C956E84AA3@nvidia.com>","Content-Language":"en-US","From":"Jordan Niethe 
<jniethe@nvidia.com>","In-Reply-To":"<6C5F185E-BB12-4B01-8283-F2C956E84AA3@nvidia.com>","Content-Type":"text/plain; charset=UTF-8; format=flowed","Content-Transfer-Encoding":"8bit","X-ClientProxiedBy":"BY5PR16CA0027.namprd16.prod.outlook.com\n (2603:10b6:a03:1a0::40) To DM4PR12MB9072.namprd12.prod.outlook.com\n (2603:10b6:8:be::6)","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","List-Id":"<linuxppc-dev.lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev+help@lists.ozlabs.org>","List-Owner":"<mailto:linuxppc-dev+owner@lists.ozlabs.org>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Archive":"<https://lore.kernel.org/linuxppc-dev/>,\n  <https://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Subscribe":"<mailto:linuxppc-dev+subscribe@lists.ozlabs.org>,\n  <mailto:linuxppc-dev+subscribe-digest@lists.ozlabs.org>,\n  <mailto:linuxppc-dev+subscribe-nomail@lists.ozlabs.org>","List-Unsubscribe":"<mailto:linuxppc-dev+unsubscribe@lists.ozlabs.org>","Precedence":"list","MIME-Version":"1.0","X-MS-PublicTrafficType":"Email","X-MS-TrafficTypeDiagnostic":"DM4PR12MB9072:EE_|MN0PR12MB6366:EE_","X-MS-Office365-Filtering-Correlation-Id":"3244da34-8226-4e5d-5719-08de58780765","X-MS-Exchange-SenderADCheck":"1","X-MS-Exchange-AntiSpam-Relay":"0","X-Microsoft-Antispam":"BCL:0;ARA:13230040|376014|7416014|366016|1800799024;","X-Microsoft-Antispam-Message-Info":"=?utf-8?q?vWF92+O3ei/ckyj9o5pLwYo2sPAnCWw?=\n\t=?utf-8?q?lInRKP9sNylCqLYKVtiTko/dWFwZiy0B2SlIPu6FgB13GR9bAzikbicyzbG/domq1?=\n\t=?utf-8?q?rqvma7wAGwb004uHtb6l2ig7C2WO95VDAHErXx23U8363RvctIG5I7N/JR5iLWf6V?=\n\t=?utf-8?q?jkpSUVB8SLNv+h+Pc8guBpyDfP1XpcSaaTdxDPb/Mk7rN0/i4OhxvSg9Sea2qmwuu?=\n\t=?utf-8?q?ySJqh5TUOgM6z/GthUc+2a0IXEpcZ6xIwrlXMe5bS7QDSyK9BqhyVzZMQwcA2Z5bV?=\n\t=?utf-8?q?JkIcPqyG2LOxWX6+f+TegG9CAW3rEL52tt7mWmSIMgcYluXbZtYiUgQWm1qMPSoac?=\n\t=?utf-8?q?jNjBeEkBdLYmaqyZGDNg8+6UVLRqhtPLd9fjlYZdvxvJ3kHPocWcDx1R4CJShzlq3?=\n\t=?utf-8?q?UrmYR/JQbMmdmv3yMAhQXqHYHr//KDiEDBjwsis53E5JcpZw2B/ZfpVkTu/hZ00zA?=\n\t=?utf-8?q?
?=","X-OriginatorOrg":"Nvidia.com","X-MS-Exchange-CrossTenant-Network-Message-Id":"\n
3244da34-8226-4e5d-5719-08de58780765"}},{"id":3639378,"web_url":"http://patchwork.ozlabs.org/comment/3639378/","msgid":"<16770FCE-A248-4184-ABFC-94C02C0B30F3@nvidia.com>","date":"2026-01-20T23:06:11","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":76947,"url":"http://patchwork.ozlabs.org/api/people/76947/","name":"Zi Yan","email":"ziy@nvidia.com"},"content":"On 20 Jan 2026, at 18:02, Jordan Niethe wrote:\n\n> Hi,\n>\n> On 21/1/26 09:53, Zi Yan wrote:\n>> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n>>\n>>> On 14/1/26 07:04, Zi Yan wrote:\n>>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n>>>>\n>>>>> Currently when creating these device private struct pages, the first\n>>>>> step is to use request_free_mem_region() to get a range of physical\n>>>>> address space large enough to represent the devices memory. 
This\n>>>>> allocated physical address range is then remapped as device private\n>>>>> memory using memremap_pages().\n>>>>>\n>>>>> Needing allocation of physical address space has some problems:\n>>>>>\n>>>>>     1) There may be insufficient physical address space to represent the\n>>>>>        device memory. KASLR reducing the physical address space and VM\n>>>>>        configurations with limited physical address space increase the\n>>>>>        likelihood of hitting this, especially as device memory increases. This\n>>>>>        has been observed to prevent device private from being initialized.\n>>>>>\n>>>>>     2) Attempting to add the device private pages to the linear map at\n>>>>>        addresses beyond the actual physical memory causes issues on\n>>>>>        architectures like aarch64, meaning the feature does not work there.\n>>>>>\n>>>>> Instead of using the physical address space, introduce a device private\n>>>>> address space and allocate device regions from there to represent the\n>>>>> device private pages.\n>>>>>\n>>>>> Introduce a new interface memremap_device_private_pagemap() that\n>>>>> allocates a requested amount of device private address space and creates\n>>>>> the necessary device private pages.\n>>>>>\n>>>>> To support this new interface, struct dev_pagemap needs some changes:\n>>>>>\n>>>>>     - Add a new dev_pagemap::nr_pages field as an input parameter.\n>>>>>     - Add a new dev_pagemap::pages array to store the device\n>>>>>       private pages.\n>>>>>\n>>>>> When using memremap_device_private_pagemap(), rather than passing in\n>>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n>>>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n>>>>> private range that is reserved is returned in dev_pagemap::range.\n>>>>>\n>>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n>>>>> MEMORY_DEVICE_PRIVATE.\n>>>>>\n>>>>> Represent this device private address space using a 
new\n>>>>> device_private_pgmap_tree maple tree. This tree maps a given device\n>>>>> private address to a struct dev_pagemap, where a specific device private\n>>>>> page may then be looked up in that dev_pagemap::pages array.\n>>>>>\n>>>>> Device private address space can be reclaimed and the associated device\n>>>>> private pages freed using the corresponding new\n>>>>> memunmap_device_private_pagemap() interface.\n>>>>>\n>>>>> Because the device private pages now live outside the physical address\n>>>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n>>>>> et al. are no longer meaningful.\n>>>>>\n>>>>> Introduce helpers:\n>>>>>\n>>>>>     - device_private_page_to_offset()\n>>>>>     - device_private_folio_to_offset()\n>>>>>\n>>>>> to take a given device private page / folio and return its offset within\n>>>>> the device private address space.\n>>>>>\n>>>>> Update the places where we previously converted a device private page to\n>>>>> a PFN to use these new helpers. 
When we encounter a device private\n>>>>> offset, instead of looking up its page within the pagemap use\n>>>>> device_private_offset_to_page() instead.\n>>>>>\n>>>>> Update the existing users:\n>>>>>\n>>>>>    - lib/test_hmm.c\n>>>>>    - ppc ultravisor\n>>>>>    - drm/amd/amdkfd\n>>>>>    - gpu/drm/xe\n>>>>>    - gpu/drm/nouveau\n>>>>>\n>>>>> to use the new memremap_device_private_pagemap() interface.\n>>>>>\n>>>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n>>>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>>>>>\n>>>>> ---\n>>>>>\n>>>>> NOTE: The updates to the existing drivers have only been compile tested.\n>>>>> I'll need some help in testing these drivers.\n>>>>>\n>>>>> v1:\n>>>>> - Include NUMA node paramater for memremap_device_private_pagemap()\n>>>>> - Add devm_memremap_device_private_pagemap() and friends\n>>>>> - Update existing users of memremap_pages():\n>>>>>       - ppc ultravisor\n>>>>>       - drm/amd/amdkfd\n>>>>>       - gpu/drm/xe\n>>>>>       - gpu/drm/nouveau\n>>>>> - Update for HMM huge page support\n>>>>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>>>>>\n>>>>> v2:\n>>>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n>>>>> ---\n>>>>>    Documentation/mm/hmm.rst                 |  11 +-\n>>>>>    arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>>>>>    drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>>>>>    drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>>>>>    drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>>>>>    include/linux/hmm.h                      |   3 +\n>>>>>    include/linux/leafops.h                  |  16 +-\n>>>>>    include/linux/memremap.h                 |  64 +++++++-\n>>>>>    include/linux/migrate.h                  |   6 +-\n>>>>>    include/linux/mm.h                       |   2 +\n>>>>>    include/linux/rmap.h                     |   5 +-\n>>>>>    include/linux/swapops.h                  |  10 
+-\n>>>>>    lib/test_hmm.c                           |  69 ++++----\n>>>>>    mm/debug.c                               |   9 +-\n>>>>>    mm/memremap.c                            | 193 ++++++++++++++++++-----\n>>>>>    mm/mm_init.c                             |   8 +-\n>>>>>    mm/page_vma_mapped.c                     |  19 ++-\n>>>>>    mm/rmap.c                                |  43 +++--\n>>>>>    mm/util.c                                |   5 +-\n>>>>>    19 files changed, 391 insertions(+), 199 deletions(-)\n>>>>>\n>>>> <snip>\n>>>>\n>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n>>>>> index e65329e1969f..b36599ab41ba 100644\n>>>>> --- a/include/linux/mm.h\n>>>>> +++ b/include/linux/mm.h\n>>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>>>>>     */\n>>>>>    static inline unsigned long folio_pfn(const struct folio *folio)\n>>>>>    {\n>>>>> +\tVM_BUG_ON(folio_is_device_private(folio));\n>>>>\n>>>> Please use VM_WARN_ON instead.\n>>>\n>>> ack.\n>>>\n>>>>\n>>>>> +\n>>>>>    \treturn page_to_pfn(&folio->page);\n>>>>>    }\n>>>>>\n>>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n>>>>> index 57c63b6a8f65..c1561a92864f 100644\n>>>>> --- a/include/linux/rmap.h\n>>>>> +++ b/include/linux/rmap.h\n>>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>>>>    static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>    {\n>>>>>    \tif (folio_is_device_private(folio))\n>>>>> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n>>>>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>>>>>    \t\t       PVMW_PFN_DEVICE_PRIVATE;\n>>>>>\n>>>>>    \treturn page_vma_walk_pfn(folio_pfn(folio));\n>>>>> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>\n>>>>>    static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>>>>>    {\n>>>>> +\tif 
(pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>>>>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>> +\n>>>>>    \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>    }\n>>>>\n>>>> <snip>\n>>>>\n>>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>>>>> index 96c525785d78..141fe5abd33f 100644\n>>>>> --- a/mm/page_vma_mapped.c\n>>>>> +++ b/mm/page_vma_mapped.c\n>>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>>>>>    static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>    {\n>>>>>    \tunsigned long pfn;\n>>>>> +\tbool device_private = false;\n>>>>>    \tpte_t ptent = ptep_get(pvmw->pte);\n>>>>>\n>>>>>    \tif (pvmw->flags & PVMW_MIGRATION) {\n>>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>    \t\tif (!softleaf_is_migration(entry))\n>>>>>    \t\t\treturn false;\n>>>>>\n>>>>> +\t\tif (softleaf_is_migration_device_private(entry))\n>>>>> +\t\t\tdevice_private = true;\n>>>>> +\n>>>>>    \t\tpfn = softleaf_to_pfn(entry);\n>>>>>    \t} else if (pte_present(ptent)) {\n>>>>>    \t\tpfn = pte_pfn(ptent);\n>>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>    \t\t\treturn false;\n>>>>>\n>>>>>    \t\tpfn = softleaf_to_pfn(entry);\n>>>>> +\n>>>>> +\t\tif (softleaf_is_device_private(entry))\n>>>>> +\t\t\tdevice_private = true;\n>>>>>    \t}\n>>>>>\n>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>> +\t\treturn false;\n>>>>> +\n>>>>>    \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>    \t\treturn false;\n>>>>>    \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>    }\n>>>>>\n>>>>>    /* Returns true if the two ranges overlap.  Careful to not overflow. 
*/\n>>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n>>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>>>>    {\n>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>> +\t\treturn false;\n>>>>> +\n>>>>>    \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>    \t\treturn false;\n>>>>>    \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>\n>>>>>    \t\t\t\tif (!softleaf_is_migration(entry) ||\n>>>>>    \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n>>>>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n>>>>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n>>>>>    \t\t\t\t\t       pvmw))\n>>>>>    \t\t\t\t\treturn not_found(pvmw);\n>>>>>    \t\t\t\treturn true;\n>>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>    \t\t\tif (likely(pmd_trans_huge(pmde))) {\n>>>>>    \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n>>>>>    \t\t\t\t\treturn not_found(pvmw);\n>>>>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n>>>>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>>>>>    \t\t\t\t\treturn not_found(pvmw);\n>>>>>    \t\t\t\treturn true;\n>>>>>    \t\t\t}\n>>>>\n>>>> It seems to me that you can add a new flag like “bool is_device_private” to\n>>>> indicate whether pfn is a device private index instead of pfn without\n>>>> manipulating pvmw->pfn itself.\n>>>\n>>> We could do it like that, however, my concern with using a new param was that\n>>> storing this info separately might make it easier to misuse a device private\n>>> index as a regular pfn.\n>>>\n>>> It seemed like it could be easy to overlook both when creating the pvmw and\n>>> then when accessing the pfn.\n>>\n>> That is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\n>> return the 
converted pfn instead of pvmw->pfn directly. You can add a comment\n>> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n>> directly */.\n>\n> Yeah I agree that is a good idea.\n>\n>>\n>> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n>> there is no weird arch having pfns with bit 63 being 1? Your change could\n>> break it, right?\n>\n> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6), so I\n> thought doing something similar here should be safe.\n\nYeah, but that is limited to archs supporting HMM. page_vma_mapped_walk is used\nby almost every arch, so it has a broader impact.\n\nBest Regards,\nYan, Zi","headers":{"X-Spam-Checker-Version":"SpamAssassin 4.0.1 (2024-03-25) on
lists.ozlabs.org"}},{"id":3639385,"web_url":"http://patchwork.ozlabs.org/comment/3639385/","msgid":"<649cc20c-161c-4343-8263-3712a4f6dccb@nvidia.com>","date":"2026-01-20T23:34:21","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":92354,"url":"http://patchwork.ozlabs.org/api/people/92354/","name":"Jordan Niethe","email":"jniethe@nvidia.com"},"content":"Hi,\n\nOn 21/1/26 10:06, Zi Yan wrote:\n> On 20 Jan 2026, at 18:02, Jordan Niethe wrote:\n> \n>> Hi,\n>>\n>> On 21/1/26 09:53, Zi Yan wrote:\n>>> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n>>>\n>>>> On 14/1/26 07:04, Zi Yan wrote:\n>>>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n>>>>>\n>>>>>> Currently when creating these device private struct pages, the first\n>>>>>> step is to use request_free_mem_region() to get a range of physical\n>>>>>> address space large enough to represent the devices memory. This\n>>>>>> allocated physical address range is then remapped as device private\n>>>>>> memory using memremap_pages().\n>>>>>>\n>>>>>> Needing allocation of physical address space has some problems:\n>>>>>>\n>>>>>>      1) There may be insufficient physical address space to represent the\n>>>>>>         device memory. KASLR reducing the physical address space and VM\n>>>>>>         configurations with limited physical address space increase the\n>>>>>>         likelihood of hitting this especially as device memory increases. 
This\n>>>>>>         has been observed to prevent device private from being initialized.\n>>>>>>\n>>>>>>      2) Attempting to add the device private pages to the linear map at\n>>>>>>         addresses beyond the actual physical memory causes issues on\n>>>>>>         architectures like aarch64 meaning the feature does not work there.\n>>>>>>\n>>>>>> Instead of using the physical address space, introduce a device private\n>>>>>> address space and allocate devices regions from there to represent the\n>>>>>> device private pages.\n>>>>>>\n>>>>>> Introduce a new interface memremap_device_private_pagemap() that\n>>>>>> allocates a requested amount of device private address space and creates\n>>>>>> the necessary device private pages.\n>>>>>>\n>>>>>> To support this new interface, struct dev_pagemap needs some changes:\n>>>>>>\n>>>>>>      - Add a new dev_pagemap::nr_pages field as an input parameter.\n>>>>>>      - Add a new dev_pagemap::pages array to store the device\n>>>>>>        private pages.\n>>>>>>\n>>>>>> When using memremap_device_private_pagemap(), rather then passing in\n>>>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n>>>>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n>>>>>> private range that is reserved is returned in dev_pagemap::range.\n>>>>>>\n>>>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n>>>>>> MEMORY_DEVICE_PRIVATE.\n>>>>>>\n>>>>>> Represent this device private address space using a new\n>>>>>> device_private_pgmap_tree maple tree. 
This tree maps a given device\n>>>>>> private address to a struct dev_pagemap, where a specific device private\n>>>>>> page may then be looked up in that dev_pagemap::pages array.\n>>>>>>\n>>>>>> Device private address space can be reclaimed and the associated device\n>>>>>> private pages freed using the corresponding new\n>>>>>> memunmap_device_private_pagemap() interface.\n>>>>>>\n>>>>>> Because the device private pages now live outside the physical address\n>>>>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n>>>>>> et al. are no longer meaningful.\n>>>>>>\n>>>>>> Introduce helpers:\n>>>>>>\n>>>>>>      - device_private_page_to_offset()\n>>>>>>      - device_private_folio_to_offset()\n>>>>>>\n>>>>>> to take a given device private page / folio and return its offset within\n>>>>>> the device private address space.\n>>>>>>\n>>>>>> Update the places where we previously converted a device private page to\n>>>>>> a PFN to use these new helpers. When we encounter a device private\n>>>>>> offset, instead of looking up its page within the pagemap use\n>>>>>> device_private_offset_to_page() instead.\n>>>>>>\n>>>>>> Update the existing users:\n>>>>>>\n>>>>>>     - lib/test_hmm.c\n>>>>>>     - ppc ultravisor\n>>>>>>     - drm/amd/amdkfd\n>>>>>>     - gpu/drm/xe\n>>>>>>     - gpu/drm/nouveau\n>>>>>>\n>>>>>> to use the new memremap_device_private_pagemap() interface.\n>>>>>>\n>>>>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n>>>>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>>>>>>\n>>>>>> ---\n>>>>>>\n>>>>>> NOTE: The updates to the existing drivers have only been compile tested.\n>>>>>> I'll need some help in testing these drivers.\n>>>>>>\n>>>>>> v1:\n>>>>>> - Include NUMA node parameter for memremap_device_private_pagemap()\n>>>>>> - Add devm_memremap_device_private_pagemap() and friends\n>>>>>> - Update existing users of memremap_pages():\n>>>>>>        - ppc ultravisor\n>>>>>>        - drm/amd/amdkfd\n>>>>>>        - 
gpu/drm/xe\n>>>>>>        - gpu/drm/nouveau\n>>>>>> - Update for HMM huge page support\n>>>>>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>>>>>>\n>>>>>> v2:\n>>>>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n>>>>>> ---\n>>>>>>     Documentation/mm/hmm.rst                 |  11 +-\n>>>>>>     arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>>>>>>     drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>>>>>>     drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>>>>>>     drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>>>>>>     include/linux/hmm.h                      |   3 +\n>>>>>>     include/linux/leafops.h                  |  16 +-\n>>>>>>     include/linux/memremap.h                 |  64 +++++++-\n>>>>>>     include/linux/migrate.h                  |   6 +-\n>>>>>>     include/linux/mm.h                       |   2 +\n>>>>>>     include/linux/rmap.h                     |   5 +-\n>>>>>>     include/linux/swapops.h                  |  10 +-\n>>>>>>     lib/test_hmm.c                           |  69 ++++----\n>>>>>>     mm/debug.c                               |   9 +-\n>>>>>>     mm/memremap.c                            | 193 ++++++++++++++++++-----\n>>>>>>     mm/mm_init.c                             |   8 +-\n>>>>>>     mm/page_vma_mapped.c                     |  19 ++-\n>>>>>>     mm/rmap.c                                |  43 +++--\n>>>>>>     mm/util.c                                |   5 +-\n>>>>>>     19 files changed, 391 insertions(+), 199 deletions(-)\n>>>>>>\n>>>>> <snip>\n>>>>>\n>>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n>>>>>> index e65329e1969f..b36599ab41ba 100644\n>>>>>> --- a/include/linux/mm.h\n>>>>>> +++ b/include/linux/mm.h\n>>>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>>>>>>      */\n>>>>>>     static inline unsigned long folio_pfn(const struct folio *folio)\n>>>>>>     
{\n>>>>>> +\tVM_BUG_ON(folio_is_device_private(folio));\n>>>>>\n>>>>> Please use VM_WARN_ON instead.\n>>>>\n>>>> ack.\n>>>>\n>>>>>\n>>>>>> +\n>>>>>>     \treturn page_to_pfn(&folio->page);\n>>>>>>     }\n>>>>>>\n>>>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n>>>>>> index 57c63b6a8f65..c1561a92864f 100644\n>>>>>> --- a/include/linux/rmap.h\n>>>>>> +++ b/include/linux/rmap.h\n>>>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>>>>>     static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>     {\n>>>>>>     \tif (folio_is_device_private(folio))\n>>>>>> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n>>>>>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>>>>>>     \t\t       PVMW_PFN_DEVICE_PRIVATE;\n>>>>>>\n>>>>>>     \treturn page_vma_walk_pfn(folio_pfn(folio));\n>>>>>> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>\n>>>>>>     static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>>>>>>     {\n>>>>>> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>>>>>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>> +\n>>>>>>     \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>>     }\n>>>>>\n>>>>> <snip>\n>>>>>\n>>>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>>>>>> index 96c525785d78..141fe5abd33f 100644\n>>>>>> --- a/mm/page_vma_mapped.c\n>>>>>> +++ b/mm/page_vma_mapped.c\n>>>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>>>>>>     static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>     {\n>>>>>>     \tunsigned long pfn;\n>>>>>> +\tbool device_private = false;\n>>>>>>     \tpte_t ptent = ptep_get(pvmw->pte);\n>>>>>>\n>>>>>>     \tif (pvmw->flags & PVMW_MIGRATION) {\n>>>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk 
*pvmw, unsigned long pte_nr)\n>>>>>>     \t\tif (!softleaf_is_migration(entry))\n>>>>>>     \t\t\treturn false;\n>>>>>>\n>>>>>> +\t\tif (softleaf_is_migration_device_private(entry))\n>>>>>> +\t\t\tdevice_private = true;\n>>>>>> +\n>>>>>>     \t\tpfn = softleaf_to_pfn(entry);\n>>>>>>     \t} else if (pte_present(ptent)) {\n>>>>>>     \t\tpfn = pte_pfn(ptent);\n>>>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>     \t\t\treturn false;\n>>>>>>\n>>>>>>     \t\tpfn = softleaf_to_pfn(entry);\n>>>>>> +\n>>>>>> +\t\tif (softleaf_is_device_private(entry))\n>>>>>> +\t\t\tdevice_private = true;\n>>>>>>     \t}\n>>>>>>\n>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>> +\t\treturn false;\n>>>>>> +\n>>>>>>     \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>>     \t\treturn false;\n>>>>>>     \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>>>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>     }\n>>>>>>\n>>>>>>     /* Returns true if the two ranges overlap.  Careful to not overflow. 
*/\n>>>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n>>>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>>>>>     {\n>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>> +\t\treturn false;\n>>>>>> +\n>>>>>>     \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>>     \t\treturn false;\n>>>>>>     \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>>>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>>\n>>>>>>     \t\t\t\tif (!softleaf_is_migration(entry) ||\n>>>>>>     \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n>>>>>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n>>>>>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n>>>>>>     \t\t\t\t\t       pvmw))\n>>>>>>     \t\t\t\t\treturn not_found(pvmw);\n>>>>>>     \t\t\t\treturn true;\n>>>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>>     \t\t\tif (likely(pmd_trans_huge(pmde))) {\n>>>>>>     \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n>>>>>>     \t\t\t\t\treturn not_found(pvmw);\n>>>>>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n>>>>>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>>>>>>     \t\t\t\t\treturn not_found(pvmw);\n>>>>>>     \t\t\t\treturn true;\n>>>>>>     \t\t\t}\n>>>>>\n>>>>> It seems to me that you can add a new flag like “bool is_device_private” to\n>>>>> indicate whether pfn is a device private index instead of pfn without\n>>>>> manipulating pvmw->pfn itself.\n>>>>\n>>>> We could do it like that, however my concern with using a new param was that\n>>>> storing this info seperately might make it easier to misuse a device private\n>>>> index as a regular pfn.\n>>>>\n>>>> It seemed like it could be easy to overlook both when creating the pvmw and\n>>>> then when accessing the pfn.\n>>>\n>>> That is why I asked for a helper 
function like page_vma_walk_pfn(pvmw) to\n>>> return the converted pfn instead of pvmw->pfn directly. You can add a comment\n>>> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n>>> directly */.\n>>\n>> Yeah I agree that is a good idea.\n>>\n>>>\n>>> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n>>> there is no weird arch having pfns with bit 63 being 1? Your change could\n>>> break it, right?\n>>\n>> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6), so I\n>> thought doing something similar here should be safe.\n> \n> Yeah, but that limits to archs supporting HMM. page_vma_mapped_walk is used\n> by almost every arch, so it has a broader impact.\n\nThat is a good point.\n\nI see a few options:\n\n- On every arch we can assume SWP_PFN_BITS? I could add a sanity check that we\n   have an extra bit on top of SWP_PFN_BITS within an unsigned long.\n- We could define PVMW_PFN_SHIFT as 0 if !CONFIG_MIGRATION as the flag is not\n   required.\n- Instead of modifying pvmw->pfn we could use pvmw->flags but that has the\n   issue of separating the offset type and offset.\n\nThanks,\nJordan.\n\n\n> \n> Best Regards,\n> Yan, Zi","headers":{"Return-Path":"\n <linuxppc-dev+bounces-16085-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=C/oSPSjH;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n (client-ip=112.213.38.117; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-16085-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)","lists.ozlabs.org;\n arc=pass 
smtp.remote-ip=\"2a01:111:f403:c10d::1\" arc.chain=microsoft.com"],"Message-ID":"<649cc20c-161c-4343-8263-3712a4f6dccb@nvidia.com>","Date":"Wed, 21 Jan 2026 10:34:21 +1100","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","To":"Zi Yan <ziy@nvidia.com>","From":"Jordan Niethe <jniethe@nvidia.com>","In-Reply-To":"<16770FCE-A248-4184-ABFC-94C02C0B30F3@nvidia.com>","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","List-Id":"<linuxppc-dev.lists.ozlabs.org>","X-Spam-Checker-Version":"SpamAssassin 
4.0.1 (2024-03-25) on lists.ozlabs.org"}},{"id":3639442,"web_url":"http://patchwork.ozlabs.org/comment/3639442/","msgid":"<C2A9F124-9EA8-4916-AB86-659BD280390D@nvidia.com>","date":"2026-01-21T02:41:25","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":76947,"url":"http://patchwork.ozlabs.org/api/people/76947/","name":"Zi Yan","email":"ziy@nvidia.com"},"content":"On 20 Jan 2026, at 18:34, Jordan Niethe wrote:\n\n> Hi,\n>\n> On 21/1/26 10:06, Zi Yan wrote:\n>> On 20 Jan 2026, at 18:02, Jordan Niethe wrote:\n>>\n>>> Hi,\n>>>\n>>> On 21/1/26 09:53, Zi Yan wrote:\n>>>> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n>>>>\n>>>>> On 14/1/26 07:04, Zi Yan wrote:\n>>>>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n>>>>>>\n>>>>>>> Currently when creating these device private struct pages, the first\n>>>>>>> step is to use request_free_mem_region() to get a range of physical\n>>>>>>> address space large enough to represent the devices memory. This\n>>>>>>> allocated physical address range is then remapped as device private\n>>>>>>> memory using memremap_pages().\n>>>>>>>\n>>>>>>> Needing allocation of physical address space has some problems:\n>>>>>>>\n>>>>>>>      1) There may be insufficient physical address space to represent the\n>>>>>>>         device memory. KASLR reducing the physical address space and VM\n>>>>>>>         configurations with limited physical address space increase the\n>>>>>>>         likelihood of hitting this especially as device memory increases. 
This\n>>>>>>>         has been observed to prevent device private from being initialized.\n>>>>>>>\n>>>>>>>      2) Attempting to add the device private pages to the linear map at\n>>>>>>>         addresses beyond the actual physical memory causes issues on\n>>>>>>>         architectures like aarch64, meaning the feature does not work there.\n>>>>>>>\n>>>>>>> Instead of using the physical address space, introduce a device private\n>>>>>>> address space and allocate device regions from there to represent the\n>>>>>>> device private pages.\n>>>>>>>\n>>>>>>> Introduce a new interface memremap_device_private_pagemap() that\n>>>>>>> allocates a requested amount of device private address space and creates\n>>>>>>> the necessary device private pages.\n>>>>>>>\n>>>>>>> To support this new interface, struct dev_pagemap needs some changes:\n>>>>>>>\n>>>>>>>      - Add a new dev_pagemap::nr_pages field as an input parameter.\n>>>>>>>      - Add a new dev_pagemap::pages array to store the device\n>>>>>>>        private pages.\n>>>>>>>\n>>>>>>> When using memremap_device_private_pagemap(), rather than passing in\n>>>>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n>>>>>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n>>>>>>> private range that is reserved is returned in dev_pagemap::range.\n>>>>>>>\n>>>>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n>>>>>>> MEMORY_DEVICE_PRIVATE.\n>>>>>>>\n>>>>>>> Represent this device private address space using a new\n>>>>>>> device_private_pgmap_tree maple tree. 
This tree maps a given device\n>>>>>>> private address to a struct dev_pagemap, where a specific device private\n>>>>>>> page may then be looked up in that dev_pagemap::pages array.\n>>>>>>>\n>>>>>>> Device private address space can be reclaimed and the associated device\n>>>>>>> private pages freed using the corresponding new\n>>>>>>> memunmap_device_private_pagemap() interface.\n>>>>>>>\n>>>>>>> Because the device private pages now live outside the physical address\n>>>>>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n>>>>>>> et al. are no longer meaningful.\n>>>>>>>\n>>>>>>> Introduce helpers:\n>>>>>>>\n>>>>>>>      - device_private_page_to_offset()\n>>>>>>>      - device_private_folio_to_offset()\n>>>>>>>\n>>>>>>> to take a given device private page / folio and return its offset within\n>>>>>>> the device private address space.\n>>>>>>>\n>>>>>>> Update the places where we previously converted a device private page to\n>>>>>>> a PFN to use these new helpers. 
When we encounter a device private\n>>>>>>> offset, instead of looking up its page within the pagemap use\n>>>>>>> device_private_offset_to_page() instead.\n>>>>>>>\n>>>>>>> Update the existing users:\n>>>>>>>\n>>>>>>>     - lib/test_hmm.c\n>>>>>>>     - ppc ultravisor\n>>>>>>>     - drm/amd/amdkfd\n>>>>>>>     - gpu/drm/xe\n>>>>>>>     - gpu/drm/nouveau\n>>>>>>>\n>>>>>>> to use the new memremap_device_private_pagemap() interface.\n>>>>>>>\n>>>>>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n>>>>>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>>>>>>>\n>>>>>>> ---\n>>>>>>>\n>>>>>>> NOTE: The updates to the existing drivers have only been compile tested.\n>>>>>>> I'll need some help in testing these drivers.\n>>>>>>>\n>>>>>>> v1:\n>>>>>>> - Include NUMA node paramater for memremap_device_private_pagemap()\n>>>>>>> - Add devm_memremap_device_private_pagemap() and friends\n>>>>>>> - Update existing users of memremap_pages():\n>>>>>>>        - ppc ultravisor\n>>>>>>>        - drm/amd/amdkfd\n>>>>>>>        - gpu/drm/xe\n>>>>>>>        - gpu/drm/nouveau\n>>>>>>> - Update for HMM huge page support\n>>>>>>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>>>>>>>\n>>>>>>> v2:\n>>>>>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n>>>>>>> ---\n>>>>>>>     Documentation/mm/hmm.rst                 |  11 +-\n>>>>>>>     arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>>>>>>>     drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>>>>>>>     drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>>>>>>>     drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>>>>>>>     include/linux/hmm.h                      |   3 +\n>>>>>>>     include/linux/leafops.h                  |  16 +-\n>>>>>>>     include/linux/memremap.h                 |  64 +++++++-\n>>>>>>>     include/linux/migrate.h                  |   6 +-\n>>>>>>>     include/linux/mm.h                       |   2 +\n>>>>>>>   
  include/linux/rmap.h                     |   5 +-\n>>>>>>>     include/linux/swapops.h                  |  10 +-\n>>>>>>>     lib/test_hmm.c                           |  69 ++++----\n>>>>>>>     mm/debug.c                               |   9 +-\n>>>>>>>     mm/memremap.c                            | 193 ++++++++++++++++++-----\n>>>>>>>     mm/mm_init.c                             |   8 +-\n>>>>>>>     mm/page_vma_mapped.c                     |  19 ++-\n>>>>>>>     mm/rmap.c                                |  43 +++--\n>>>>>>>     mm/util.c                                |   5 +-\n>>>>>>>     19 files changed, 391 insertions(+), 199 deletions(-)\n>>>>>>>\n>>>>>> <snip>\n>>>>>>\n>>>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n>>>>>>> index e65329e1969f..b36599ab41ba 100644\n>>>>>>> --- a/include/linux/mm.h\n>>>>>>> +++ b/include/linux/mm.h\n>>>>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>>>>>>>      */\n>>>>>>>     static inline unsigned long folio_pfn(const struct folio *folio)\n>>>>>>>     {\n>>>>>>> +\tVM_BUG_ON(folio_is_device_private(folio));\n>>>>>>\n>>>>>> Please use VM_WARN_ON instead.\n>>>>>\n>>>>> ack.\n>>>>>\n>>>>>>\n>>>>>>> +\n>>>>>>>     \treturn page_to_pfn(&folio->page);\n>>>>>>>     }\n>>>>>>>\n>>>>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n>>>>>>> index 57c63b6a8f65..c1561a92864f 100644\n>>>>>>> --- a/include/linux/rmap.h\n>>>>>>> +++ b/include/linux/rmap.h\n>>>>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>>>>>>     static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>>     {\n>>>>>>>     \tif (folio_is_device_private(folio))\n>>>>>>> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n>>>>>>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>>>>>>>     \t\t       PVMW_PFN_DEVICE_PRIVATE;\n>>>>>>>\n>>>>>>>     \treturn page_vma_walk_pfn(folio_pfn(folio));\n>>>>>>> @@ 
-959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>>\n>>>>>>>     static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>>>>>>>     {\n>>>>>>> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>>>>>>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>>> +\n>>>>>>>     \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>>>     }\n>>>>>>\n>>>>>> <snip>\n>>>>>>\n>>>>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>>>>>>> index 96c525785d78..141fe5abd33f 100644\n>>>>>>> --- a/mm/page_vma_mapped.c\n>>>>>>> +++ b/mm/page_vma_mapped.c\n>>>>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>>>>>>>     static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>     {\n>>>>>>>     \tunsigned long pfn;\n>>>>>>> +\tbool device_private = false;\n>>>>>>>     \tpte_t ptent = ptep_get(pvmw->pte);\n>>>>>>>\n>>>>>>>     \tif (pvmw->flags & PVMW_MIGRATION) {\n>>>>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>     \t\tif (!softleaf_is_migration(entry))\n>>>>>>>     \t\t\treturn false;\n>>>>>>>\n>>>>>>> +\t\tif (softleaf_is_migration_device_private(entry))\n>>>>>>> +\t\t\tdevice_private = true;\n>>>>>>> +\n>>>>>>>     \t\tpfn = softleaf_to_pfn(entry);\n>>>>>>>     \t} else if (pte_present(ptent)) {\n>>>>>>>     \t\tpfn = pte_pfn(ptent);\n>>>>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>     \t\t\treturn false;\n>>>>>>>\n>>>>>>>     \t\tpfn = softleaf_to_pfn(entry);\n>>>>>>> +\n>>>>>>> +\t\tif (softleaf_is_device_private(entry))\n>>>>>>> +\t\t\tdevice_private = true;\n>>>>>>>     \t}\n>>>>>>>\n>>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>>> +\t\treturn false;\n>>>>>>> +\n>>>>>>>     \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> 
PVMW_PFN_SHIFT))\n>>>>>>>     \t\treturn false;\n>>>>>>>     \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>>>>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>     }\n>>>>>>>\n>>>>>>>     /* Returns true if the two ranges overlap.  Careful to not overflow. */\n>>>>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n>>>>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>>>>>>     {\n>>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>>> +\t\treturn false;\n>>>>>>> +\n>>>>>>>     \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>>>     \t\treturn false;\n>>>>>>>     \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>>>>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>>>\n>>>>>>>     \t\t\t\tif (!softleaf_is_migration(entry) ||\n>>>>>>>     \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n>>>>>>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n>>>>>>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n>>>>>>>     \t\t\t\t\t       pvmw))\n>>>>>>>     \t\t\t\t\treturn not_found(pvmw);\n>>>>>>>     \t\t\t\treturn true;\n>>>>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>>>     \t\t\tif (likely(pmd_trans_huge(pmde))) {\n>>>>>>>     \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n>>>>>>>     \t\t\t\t\treturn not_found(pvmw);\n>>>>>>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n>>>>>>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>>>>>>>     \t\t\t\t\treturn not_found(pvmw);\n>>>>>>>     \t\t\t\treturn true;\n>>>>>>>     \t\t\t}\n>>>>>>\n>>>>>> It seems to me that you can add a new flag like “bool is_device_private” to\n>>>>>> indicate whether pfn is a device private index instead of pfn without\n>>>>>> manipulating pvmw->pfn 
itself.\n>>>>>\n>>>>> We could do it like that, however my concern with using a new param was that\n>>>>> storing this info separately might make it easier to misuse a device private\n>>>>> index as a regular pfn.\n>>>>>\n>>>>> It seemed like it could be easy to overlook both when creating the pvmw and\n>>>>> then when accessing the pfn.\n>>>>\n>>>> That is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\n>>>> return the converted pfn instead of pvmw->pfn directly. You can add a comment\n>>>> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n>>>> directly */.\n>>>\n>>> Yeah I agree that is a good idea.\n>>>\n>>>>\n>>>> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n>>>> there is no weird arch having pfns with bit 63 being 1? Your change could\n>>>> break it, right?\n>>>\n>>> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6), so I\n>>> thought doing something similar here should be safe.\n>>\n>> Yeah, but that limits to archs supporting HMM. page_vma_mapped_walk is used\n>> by almost every arch, so it has a broader impact.\n>\n> That is a good point.\n>\n> I see a few options:\n>\n> - On every arch we can assume SWP_PFN_BITS? I could add a sanity check that we\n>   have an extra bit on top of SWP_PFN_BITS within an unsigned long.\n\nYes, but if there is no extra bit, are you going to disable device private\npages?\n\n> - We could define PVMW_PFN_SHIFT as 0 if !CONFIG_MIGRATION as the flag is not\n>   required.\n\nSure, or !CONFIG_DEVICE_MIGRATION\n\n> - Instead of modifying pvmw->pfn we could use pvmw->flags but that has the\n>   issues of separating the offset type and offset.\n\nIt seems that I was not clear on my proposal. Here is the patch on top of\nyour patchset and it compiles.\n\nBasically, pvmw->pfn stores either PFN or device private offset without\nadditional shift. Caller interprets pvmw->pfn based on\npvmw->flags & PVMW_DEVICE_PRIVATE. 
And you can ignore my helper function\nof pvmw->pfn suggestion, since my patch below can use pvmw->pfn directly.\n\nLet me know if my patch works. Thanks.\n\ndiff --git a/include/linux/rmap.h b/include/linux/rmap.h\nindex c1561a92864f..4423f0e886aa 100644\n--- a/include/linux/rmap.h\n+++ b/include/linux/rmap.h\n@@ -921,6 +921,7 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,\n #define PVMW_SYNC\t\t(1 << 0)\n /* Look for migration entries rather than present PTEs */\n #define PVMW_MIGRATION\t\t(1 << 1)\n+#define PVMW_DEVICE_PRIVATE\t(1 << 2)\n\n /* Result flags */\n\n@@ -943,6 +944,13 @@ struct page_vma_mapped_walk {\n #define PVMW_PFN_DEVICE_PRIVATE\t(1UL << 0)\n #define PVMW_PFN_SHIFT\t\t1\n\n+static inline unsigned long page_vma_walk_flags(struct folio *folio, unsigned long flags)\n+{\n+\tif (folio_is_device_private(folio))\n+\t\treturn flags | PVMW_DEVICE_PRIVATE;\n+\treturn flags;\n+}\n+\n static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n {\n \treturn (pfn << PVMW_PFN_SHIFT);\n@@ -951,23 +959,16 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n {\n \tif (folio_is_device_private(folio))\n-\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n-\t\t       PVMW_PFN_DEVICE_PRIVATE;\n-\n-\treturn page_vma_walk_pfn(folio_pfn(folio));\n-}\n-\n-static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n-{\n-\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n-\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n+\t\treturn device_private_folio_to_offset(folio);\n\n-\treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n+\treturn (folio_pfn(folio));\n }\n\n-static inline struct folio *page_vma_walk_pfn_to_folio(unsigned long pvmw_pfn)\n+static inline struct folio *page_vma_walk_pfn_to_folio(struct page_vma_mapped_walk *pvmw)\n {\n-\treturn 
page_folio(page_vma_walk_pfn_to_page(pvmw_pfn));\n+\tif (pvmw->flags & PVMW_DEVICE_PRIVATE)\n+\t\treturn page_folio(device_private_offset_to_page(pvmw->pfn));\n+\treturn pfn_folio(pvmw->pfn);\n }\n\n #define DEFINE_FOLIO_VMA_WALK(name, _folio, _vma, _address, _flags)\t\\\n@@ -977,7 +978,7 @@ static inline struct folio *page_vma_walk_pfn_to_folio(unsigned long pvmw_pfn)\n \t\t.pgoff = folio_pgoff(_folio),\t\t\t\t\\\n \t\t.vma = _vma,\t\t\t\t\t\t\\\n \t\t.address = _address,\t\t\t\t\t\\\n-\t\t.flags = _flags,\t\t\t\t\t\\\n+\t\t.flags = page_vma_walk_flags(_folio, _flags),\t\t\\\n \t}\n\n static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)\ndiff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\nindex 141fe5abd33f..e61a0e49a7c9 100644\n--- a/mm/page_vma_mapped.c\n+++ b/mm/page_vma_mapped.c\n@@ -136,12 +136,12 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n \t\t\tdevice_private = true;\n \t}\n\n-\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n+\tif ((device_private) ^ !!(pvmw->flags & PVMW_DEVICE_PRIVATE))\n \t\treturn false;\n\n-\tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n+\tif ((pfn + pte_nr - 1) < pvmw->pfn)\n \t\treturn false;\n-\tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n+\tif (pfn > (pvmw->pfn + pvmw->nr_pages - 1))\n \t\treturn false;\n \treturn true;\n }\n@@ -149,12 +149,12 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n /* Returns true if the two ranges overlap.  Careful to not overflow. 
*/\n static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n {\n-\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n+\tif ((device_private) ^ !!(pvmw->flags & PVMW_DEVICE_PRIVATE))\n \t\treturn false;\n\n-\tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n+\tif ((pfn + HPAGE_PMD_NR - 1) < pvmw->pfn)\n \t\treturn false;\n-\tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n+\tif (pfn > pvmw->pfn + pvmw->nr_pages - 1)\n \t\treturn false;\n \treturn true;\n }\n@@ -369,7 +369,7 @@ unsigned long page_mapped_in_vma(const struct page *page,\n \t\t.pfn = folio_page_vma_walk_pfn(folio),\n \t\t.nr_pages = 1,\n \t\t.vma = vma,\n-\t\t.flags = PVMW_SYNC,\n+\t\t.flags = page_vma_walk_flags(folio, PVMW_SYNC),\n \t};\n\n \tpvmw.address = vma_address(vma, page_pgoff(folio, page), 1);\ndiff --git a/mm/vmscan.c b/mm/vmscan.c\nindex be5682d345b5..5d81939bf12a 100644\n--- a/mm/vmscan.c\n+++ b/mm/vmscan.c\n@@ -4203,7 +4203,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)\n \tpte_t *pte = pvmw->pte;\n \tunsigned long addr = pvmw->address;\n \tstruct vm_area_struct *vma = pvmw->vma;\n-\tstruct folio *folio = page_vma_walk_pfn_to_folio(pvmw->pfn);\n+\tstruct folio *folio = page_vma_walk_pfn_to_folio(pvmw);\n \tstruct mem_cgroup *memcg = folio_memcg(folio);\n \tstruct pglist_data *pgdat = folio_pgdat(folio);\n \tstruct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);\n\n\n\n\n\nBest Regards,\nYan, Zi","headers":{"Return-Path":"\n <linuxppc-dev+bounces-16087-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=cstmkwo5;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender 
SPF authorized) smtp.mailfrom=lists.ozlabs.org"],"From":"Zi Yan <ziy@nvidia.com>","To":"Jordan Niethe <jniethe@nvidia.com>","Cc":"linux-mm@kvack.org, balbirs@nvidia.com, matthew.brost@intel.com,\n akpm@linux-foundation.org, linux-kernel@vger.kernel.org,\n dri-devel@lists.freedesktop.org, david@redhat.com, apopple@nvidia.com,\n lorenzo.stoakes@oracle.com, lyude@redhat.com, dakr@kernel.org,\n airlied@gmail.com, simona@ffwll.ch, rcampbell@nvidia.com,\n mpenttil@redhat.com, jgg@nvidia.com, willy@infradead.org,\n linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org, jgg@ziepe.ca,\n Felix.Kuehling@amd.com","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","Date":"Tue, 20 Jan 2026 21:41:25 -0500","Message-ID":"<C2A9F124-9EA8-4916-AB86-659BD280390D@nvidia.com>","In-Reply-To":"<649cc20c-161c-4343-8263-3712a4f6dccb@nvidia.com>","References":"<20260107091823.68974-1-jniethe@nvidia.com>\n <20260107091823.68974-12-jniethe@nvidia.com>\n <36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com>\n <c9afedc6-f763-410f-b78b-522b98122f06@nvidia.com>\n <6C5F185E-BB12-4B01-8283-F2C956E84AA3@nvidia.com>\n <fd4b6553-3e9e-4829-a12f-51d29a5d7571@nvidia.com>\n <16770FCE-A248-4184-ABFC-94C02C0B30F3@nvidia.com>\n <649cc20c-161c-4343-8263-3712a4f6dccb@nvidia.com>","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","List-Id":"<linuxppc-dev.lists.ozlabs.org>","X-Spam-Checker-Version":"SpamAssassin 4.0.1 (2024-03-25) on 
lists.ozlabs.org"}},{"id":3639453,"web_url":"http://patchwork.ozlabs.org/comment/3639453/","msgid":"<254bd66c-4c0f-44f4-a4a1-87dc44bc5e30@nvidia.com>","date":"2026-01-21T04:04:29","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":92354,"url":"http://patchwork.ozlabs.org/api/people/92354/","name":"Jordan Niethe","email":"jniethe@nvidia.com"},"content":"On 21/1/26 13:41, Zi Yan wrote:\n> On 20 Jan 2026, at 18:34, Jordan Niethe wrote:\n> \n>> Hi,\n>>\n>> On 21/1/26 10:06, Zi Yan wrote:\n>>> On 20 Jan 2026, at 18:02, Jordan Niethe wrote:\n>>>\n>>>> Hi,\n>>>>\n>>>> On 21/1/26 09:53, Zi Yan wrote:\n>>>>> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n>>>>>\n>>>>>> On 14/1/26 07:04, Zi Yan wrote:\n>>>>>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n>>>>>>>\n>>>>>>>> Currently when creating these device private struct pages, the first\n>>>>>>>> step is to use request_free_mem_region() to get a range of physical\n>>>>>>>> address space large enough to represent the device's memory. This\n>>>>>>>> allocated physical address range is then remapped as device private\n>>>>>>>> memory using memremap_pages().\n>>>>>>>>\n>>>>>>>> Needing allocation of physical address space has some problems:\n>>>>>>>>\n>>>>>>>>       1) There may be insufficient physical address space to represent the\n>>>>>>>>          device memory. KASLR reducing the physical address space and VM\n>>>>>>>>          configurations with limited physical address space increase the\n>>>>>>>>          likelihood of hitting this especially as device memory increases. 
This\n>>>>>>>>          has been observed to prevent device private from being initialized.\n>>>>>>>>\n>>>>>>>>       2) Attempting to add the device private pages to the linear map at\n>>>>>>>>          addresses beyond the actual physical memory causes issues on\n>>>>>>>>          architectures like aarch64 meaning the feature does not work there.\n>>>>>>>>\n>>>>>>>> Instead of using the physical address space, introduce a device private\n>>>>>>>> address space and allocate device regions from there to represent the\n>>>>>>>> device private pages.\n>>>>>>>>\n>>>>>>>> Introduce a new interface memremap_device_private_pagemap() that\n>>>>>>>> allocates a requested amount of device private address space and creates\n>>>>>>>> the necessary device private pages.\n>>>>>>>>\n>>>>>>>> To support this new interface, struct dev_pagemap needs some changes:\n>>>>>>>>\n>>>>>>>>       - Add a new dev_pagemap::nr_pages field as an input parameter.\n>>>>>>>>       - Add a new dev_pagemap::pages array to store the device\n>>>>>>>>         private pages.\n>>>>>>>>\n>>>>>>>> When using memremap_device_private_pagemap(), rather than passing in\n>>>>>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n>>>>>>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n>>>>>>>> private range that is reserved is returned in dev_pagemap::range.\n>>>>>>>>\n>>>>>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n>>>>>>>> MEMORY_DEVICE_PRIVATE.\n>>>>>>>>\n>>>>>>>> Represent this device private address space using a new\n>>>>>>>> device_private_pgmap_tree maple tree. 
This tree maps a given device\n>>>>>>>> private address to a struct dev_pagemap, where a specific device private\n>>>>>>>> page may then be looked up in that dev_pagemap::pages array.\n>>>>>>>>\n>>>>>>>> Device private address space can be reclaimed and the associated device\n>>>>>>>> private pages freed using the corresponding new\n>>>>>>>> memunmap_device_private_pagemap() interface.\n>>>>>>>>\n>>>>>>>> Because the device private pages now live outside the physical address\n>>>>>>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n>>>>>>>> et al. are no longer meaningful.\n>>>>>>>>\n>>>>>>>> Introduce helpers:\n>>>>>>>>\n>>>>>>>>       - device_private_page_to_offset()\n>>>>>>>>       - device_private_folio_to_offset()\n>>>>>>>>\n>>>>>>>> to take a given device private page / folio and return its offset within\n>>>>>>>> the device private address space.\n>>>>>>>>\n>>>>>>>> Update the places where we previously converted a device private page to\n>>>>>>>> a PFN to use these new helpers. 
When we encounter a device private\n>>>>>>>> offset, instead of looking up its page within the pagemap use\n>>>>>>>> device_private_offset_to_page() instead.\n>>>>>>>>\n>>>>>>>> Update the existing users:\n>>>>>>>>\n>>>>>>>>      - lib/test_hmm.c\n>>>>>>>>      - ppc ultravisor\n>>>>>>>>      - drm/amd/amdkfd\n>>>>>>>>      - gpu/drm/xe\n>>>>>>>>      - gpu/drm/nouveau\n>>>>>>>>\n>>>>>>>> to use the new memremap_device_private_pagemap() interface.\n>>>>>>>>\n>>>>>>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n>>>>>>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>>>>>>>>\n>>>>>>>> ---\n>>>>>>>>\n>>>>>>>> NOTE: The updates to the existing drivers have only been compile tested.\n>>>>>>>> I'll need some help in testing these drivers.\n>>>>>>>>\n>>>>>>>> v1:\n>>>>>>>> - Include NUMA node parameter for memremap_device_private_pagemap()\n>>>>>>>> - Add devm_memremap_device_private_pagemap() and friends\n>>>>>>>> - Update existing users of memremap_pages():\n>>>>>>>>         - ppc ultravisor\n>>>>>>>>         - drm/amd/amdkfd\n>>>>>>>>         - gpu/drm/xe\n>>>>>>>>         - gpu/drm/nouveau\n>>>>>>>> - Update for HMM huge page support\n>>>>>>>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>>>>>>>>\n>>>>>>>> v2:\n>>>>>>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n>>>>>>>> ---\n>>>>>>>>      Documentation/mm/hmm.rst                 |  11 +-\n>>>>>>>>      arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>>>>>>>>      drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>>>>>>>>      drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>>>>>>>>      drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>>>>>>>>      include/linux/hmm.h                      |   3 +\n>>>>>>>>      include/linux/leafops.h                  |  16 +-\n>>>>>>>>      include/linux/memremap.h                 |  64 +++++++-\n>>>>>>>>      include/linux/migrate.h                  |   6 +-\n>>>>>>>>  
    include/linux/mm.h                       |   2 +\n>>>>>>>>      include/linux/rmap.h                     |   5 +-\n>>>>>>>>      include/linux/swapops.h                  |  10 +-\n>>>>>>>>      lib/test_hmm.c                           |  69 ++++----\n>>>>>>>>      mm/debug.c                               |   9 +-\n>>>>>>>>      mm/memremap.c                            | 193 ++++++++++++++++++-----\n>>>>>>>>      mm/mm_init.c                             |   8 +-\n>>>>>>>>      mm/page_vma_mapped.c                     |  19 ++-\n>>>>>>>>      mm/rmap.c                                |  43 +++--\n>>>>>>>>      mm/util.c                                |   5 +-\n>>>>>>>>      19 files changed, 391 insertions(+), 199 deletions(-)\n>>>>>>>>\n>>>>>>> <snip>\n>>>>>>>\n>>>>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n>>>>>>>> index e65329e1969f..b36599ab41ba 100644\n>>>>>>>> --- a/include/linux/mm.h\n>>>>>>>> +++ b/include/linux/mm.h\n>>>>>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>>>>>>>>       */\n>>>>>>>>      static inline unsigned long folio_pfn(const struct folio *folio)\n>>>>>>>>      {\n>>>>>>>> +\tVM_BUG_ON(folio_is_device_private(folio));\n>>>>>>>\n>>>>>>> Please use VM_WARN_ON instead.\n>>>>>>\n>>>>>> ack.\n>>>>>>\n>>>>>>>\n>>>>>>>> +\n>>>>>>>>      \treturn page_to_pfn(&folio->page);\n>>>>>>>>      }\n>>>>>>>>\n>>>>>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n>>>>>>>> index 57c63b6a8f65..c1561a92864f 100644\n>>>>>>>> --- a/include/linux/rmap.h\n>>>>>>>> +++ b/include/linux/rmap.h\n>>>>>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>>>>>>>      static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>>>      {\n>>>>>>>>      \tif (folio_is_device_private(folio))\n>>>>>>>> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n>>>>>>>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) 
|\n>>>>>>>>      \t\t       PVMW_PFN_DEVICE_PRIVATE;\n>>>>>>>>\n>>>>>>>>      \treturn page_vma_walk_pfn(folio_pfn(folio));\n>>>>>>>> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>>>\n>>>>>>>>      static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>>>>>>>>      {\n>>>>>>>> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>>>>>>>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>>>> +\n>>>>>>>>      \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>>>>      }\n>>>>>>>\n>>>>>>> <snip>\n>>>>>>>\n>>>>>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>>>>>>>> index 96c525785d78..141fe5abd33f 100644\n>>>>>>>> --- a/mm/page_vma_mapped.c\n>>>>>>>> +++ b/mm/page_vma_mapped.c\n>>>>>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>>>>>>>>      static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>      {\n>>>>>>>>      \tunsigned long pfn;\n>>>>>>>> +\tbool device_private = false;\n>>>>>>>>      \tpte_t ptent = ptep_get(pvmw->pte);\n>>>>>>>>\n>>>>>>>>      \tif (pvmw->flags & PVMW_MIGRATION) {\n>>>>>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>      \t\tif (!softleaf_is_migration(entry))\n>>>>>>>>      \t\t\treturn false;\n>>>>>>>>\n>>>>>>>> +\t\tif (softleaf_is_migration_device_private(entry))\n>>>>>>>> +\t\t\tdevice_private = true;\n>>>>>>>> +\n>>>>>>>>      \t\tpfn = softleaf_to_pfn(entry);\n>>>>>>>>      \t} else if (pte_present(ptent)) {\n>>>>>>>>      \t\tpfn = pte_pfn(ptent);\n>>>>>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>      \t\t\treturn false;\n>>>>>>>>\n>>>>>>>>      \t\tpfn = softleaf_to_pfn(entry);\n>>>>>>>> +\n>>>>>>>> +\t\tif (softleaf_is_device_private(entry))\n>>>>>>>> +\t\t\tdevice_private = true;\n>>>>>>>>      
\t}\n>>>>>>>>\n>>>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>>>> +\t\treturn false;\n>>>>>>>> +\n>>>>>>>>      \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>>>>      \t\treturn false;\n>>>>>>>>      \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>>>>>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>      }\n>>>>>>>>\n>>>>>>>>      /* Returns true if the two ranges overlap.  Careful to not overflow. */\n>>>>>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n>>>>>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>>>>>>>      {\n>>>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>>>> +\t\treturn false;\n>>>>>>>> +\n>>>>>>>>      \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>>>>      \t\treturn false;\n>>>>>>>>      \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>>>>>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>>>>\n>>>>>>>>      \t\t\t\tif (!softleaf_is_migration(entry) ||\n>>>>>>>>      \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n>>>>>>>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n>>>>>>>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n>>>>>>>>      \t\t\t\t\t       pvmw))\n>>>>>>>>      \t\t\t\t\treturn not_found(pvmw);\n>>>>>>>>      \t\t\t\treturn true;\n>>>>>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>>>>      \t\t\tif (likely(pmd_trans_huge(pmde))) {\n>>>>>>>>      \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n>>>>>>>>      \t\t\t\t\treturn not_found(pvmw);\n>>>>>>>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n>>>>>>>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>>>>>>>>      \t\t\t\t\treturn not_found(pvmw);\n>>>>>>>>      
\t\t\t\treturn true;\n>>>>>>>>      \t\t\t}\n>>>>>>>\n>>>>>>> It seems to me that you can add a new flag like “bool is_device_private” to\n>>>>>>> indicate whether pfn is a device private index instead of pfn without\n>>>>>>> manipulating pvmw->pfn itself.\n>>>>>>\n>>>>>> We could do it like that, however my concern with using a new param was that\n>>>>>> storing this info separately might make it easier to misuse a device private\n>>>>>> index as a regular pfn.\n>>>>>>\n>>>>>> It seemed like it could be easy to overlook both when creating the pvmw and\n>>>>>> then when accessing the pfn.\n>>>>>\n>>>>> That is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\n>>>>> return the converted pfn instead of pvmw->pfn directly. You can add a comment\n>>>>> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n>>>>> directly */.\n>>>>\n>>>> Yeah I agree that is a good idea.\n>>>>\n>>>>>\n>>>>> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n>>>>> there is no weird arch having pfns with bit 63 being 1? Your change could\n>>>>> break it, right?\n>>>>\n>>>> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6), so I\n>>>> thought doing something similar here should be safe.\n>>>\n>>> Yeah, but that is limited to archs supporting HMM. page_vma_mapped_walk is used\n>>> by almost every arch, so it has a broader impact.\n>>\n>> That is a good point.\n>>\n>> I see a few options:\n>>\n>> - On every arch we can assume SWP_PFN_BITS? 
I could add a sanity check that we\n>>    have an extra bit on top of SWP_PFN_BITS within an unsigned long.\n> \n> Yes, but if there is no extra bit, are you going to disable device private\n> pages?\n\nIn this case, migrate PFNs would also be broken (due to MIGRATE_PFN_SHIFT) so we'd have to.\n\n> \n>> - We could define PVMW_PFN_SHIFT as 0 if !CONFIG_MIGRATION as the flag is not\n>>    required.\n> \n> Sure, or !CONFIG_DEVICE_MIGRATION\n> \n>> - Instead of modifying pvmw->pfn we could use pvmw->flags but that has the\n>>    issue of separating the offset type and offset.\n> \n> It seems that I was not clear on my proposal. Here is the patch on top of\n> your patchset and it compiles.\n\nOh I'd interpreted “bool is_device_private” as adding a new field to pvmw.\n\n> \n> Basically, pvmw->pfn stores either PFN or device private offset without\n> additional shift. Caller interprets pvmw->pfn based on\n> pvmw->flags & PVMW_DEVICE_PRIVATE. And you can ignore my helper function\n> of pvmw->pfn suggestion, since my patch below can use pvmw->pfn directly.\n\nThanks, looks reasonable. I'll try it.\n\nThanks,\nJordan.\n\n> \n> Let me know if my patch works. 
Thanks.\n> \n> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n> index c1561a92864f..4423f0e886aa 100644\n> --- a/include/linux/rmap.h\n> +++ b/include/linux/rmap.h\n> @@ -921,6 +921,7 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,\n>   #define PVMW_SYNC\t\t(1 << 0)\n>   /* Look for migration entries rather than present PTEs */\n>   #define PVMW_MIGRATION\t\t(1 << 1)\n> +#define PVMW_DEVICE_PRIVATE\t(1 << 2)\n> \n>   /* Result flags */\n> \n> @@ -943,6 +944,13 @@ struct page_vma_mapped_walk {\n>   #define PVMW_PFN_DEVICE_PRIVATE\t(1UL << 0)\n>   #define PVMW_PFN_SHIFT\t\t1\n> \n> +static inline unsigned long page_vma_walk_flags(struct folio *folio, unsigned long flags)\n> +{\n> +\tif (folio_is_device_private(folio))\n> +\t\treturn flags | PVMW_DEVICE_PRIVATE;\n> +\treturn flags;\n> +}\n> +\n>   static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>   {\n>   \treturn (pfn << PVMW_PFN_SHIFT);\n> @@ -951,23 +959,16 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>   static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>   {\n>   \tif (folio_is_device_private(folio))\n> -\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n> -\t\t       PVMW_PFN_DEVICE_PRIVATE;\n> -\n> -\treturn page_vma_walk_pfn(folio_pfn(folio));\n> -}\n> -\n> -static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n> -{\n> -\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n> -\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n> +\t\treturn device_private_folio_to_offset(folio);\n> \n> -\treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n> +\treturn (folio_pfn(folio));\n>   }\n> \n> -static inline struct folio *page_vma_walk_pfn_to_folio(unsigned long pvmw_pfn)\n> +static inline struct folio *page_vma_walk_pfn_to_folio(struct page_vma_mapped_walk *pvmw)\n>   {\n> -\treturn page_folio(page_vma_walk_pfn_to_page(pvmw_pfn));\n> +\tif (pvmw->flags & 
PVMW_DEVICE_PRIVATE)\n> +\t\treturn page_folio(device_private_offset_to_page(pvmw->pfn));\n> +\treturn pfn_folio(pvmw->pfn);\n>   }\n> \n>   #define DEFINE_FOLIO_VMA_WALK(name, _folio, _vma, _address, _flags)\t\\\n> @@ -977,7 +978,7 @@ static inline struct folio *page_vma_walk_pfn_to_folio(unsigned long pvmw_pfn)\n>   \t\t.pgoff = folio_pgoff(_folio),\t\t\t\t\\\n>   \t\t.vma = _vma,\t\t\t\t\t\t\\\n>   \t\t.address = _address,\t\t\t\t\t\\\n> -\t\t.flags = _flags,\t\t\t\t\t\\\n> +\t\t.flags = page_vma_walk_flags(_folio, _flags),\t\t\\\n>   \t}\n> \n>   static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)\n> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n> index 141fe5abd33f..e61a0e49a7c9 100644\n> --- a/mm/page_vma_mapped.c\n> +++ b/mm/page_vma_mapped.c\n> @@ -136,12 +136,12 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>   \t\t\tdevice_private = true;\n>   \t}\n> \n> -\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n> +\tif ((device_private) ^ !!(pvmw->flags & PVMW_DEVICE_PRIVATE))\n>   \t\treturn false;\n> \n> -\tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n> +\tif ((pfn + pte_nr - 1) < pvmw->pfn)\n>   \t\treturn false;\n> -\tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n> +\tif (pfn > (pvmw->pfn + pvmw->nr_pages - 1))\n>   \t\treturn false;\n>   \treturn true;\n>   }\n> @@ -149,12 +149,12 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>   /* Returns true if the two ranges overlap.  Careful to not overflow. 
*/\n>   static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>   {\n> -\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n> +\tif ((device_private) ^ !!(pvmw->flags & PVMW_DEVICE_PRIVATE))\n>   \t\treturn false;\n> \n> -\tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n> +\tif ((pfn + HPAGE_PMD_NR - 1) < pvmw->pfn)\n>   \t\treturn false;\n> -\tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n> +\tif (pfn > pvmw->pfn + pvmw->nr_pages - 1)\n>   \t\treturn false;\n>   \treturn true;\n>   }\n> @@ -369,7 +369,7 @@ unsigned long page_mapped_in_vma(const struct page *page,\n>   \t\t.pfn = folio_page_vma_walk_pfn(folio),\n>   \t\t.nr_pages = 1,\n>   \t\t.vma = vma,\n> -\t\t.flags = PVMW_SYNC,\n> +\t\t.flags = page_vma_walk_flags(folio, PVMW_SYNC),\n>   \t};\n> \n>   \tpvmw.address = vma_address(vma, page_pgoff(folio, page), 1);\n> diff --git a/mm/vmscan.c b/mm/vmscan.c\n> index be5682d345b5..5d81939bf12a 100644\n> --- a/mm/vmscan.c\n> +++ b/mm/vmscan.c\n> @@ -4203,7 +4203,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)\n>   \tpte_t *pte = pvmw->pte;\n>   \tunsigned long addr = pvmw->address;\n>   \tstruct vm_area_struct *vma = pvmw->vma;\n> -\tstruct folio *folio = page_vma_walk_pfn_to_folio(pvmw->pfn);\n> +\tstruct folio *folio = page_vma_walk_pfn_to_folio(pvmw);\n>   \tstruct mem_cgroup *memcg = folio_memcg(folio);\n>   \tstruct pglist_data *pgdat = folio_pgdat(folio);\n>   \tstruct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);\n> \n> \n> \n> \n> \n> Best Regards,\n> Yan, Zi","headers":{"Return-Path":"\n <linuxppc-dev+bounces-16095-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com 
header.a=rsa-sha256\n header.s=selector2 header.b=aKKJMQbc;\n\tdkim-atps=neutral"],
"Message-ID":"<254bd66c-4c0f-44f4-a4a1-87dc44bc5e30@nvidia.com>","Date":"Wed, 21 Jan 2026 15:04:29 +1100","User-Agent":"Mozilla Thunderbird","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","To":"Zi Yan <ziy@nvidia.com>",
"Cc":"linux-mm@kvack.org, balbirs@nvidia.com, matthew.brost@intel.com,\n akpm@linux-foundation.org, linux-kernel@vger.kernel.org,\n dri-devel@lists.freedesktop.org, david@redhat.com, apopple@nvidia.com,\n lorenzo.stoakes@oracle.com, lyude@redhat.com, dakr@kernel.org,\n airlied@gmail.com, simona@ffwll.ch, rcampbell@nvidia.com,\n mpenttil@redhat.com, jgg@nvidia.com, willy@infradead.org,\n linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org, jgg@ziepe.ca,\n Felix.Kuehling@amd.com",
"References":"<20260107091823.68974-1-jniethe@nvidia.com>\n <20260107091823.68974-12-jniethe@nvidia.com>\n <36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com>\n <c9afedc6-f763-410f-b78b-522b98122f06@nvidia.com>\n <6C5F185E-BB12-4B01-8283-F2C956E84AA3@nvidia.com>\n <fd4b6553-3e9e-4829-a12f-51d29a5d7571@nvidia.com>\n <16770FCE-A248-4184-ABFC-94C02C0B30F3@nvidia.com>\n <649cc20c-161c-4343-8263-3712a4f6dccb@nvidia.com>\n <C2A9F124-9EA8-4916-AB86-659BD280390D@nvidia.com>",
"Content-Language":"en-US","From":"Jordan Niethe <jniethe@nvidia.com>","In-Reply-To":"<C2A9F124-9EA8-4916-AB86-659BD280390D@nvidia.com>","Content-Type":"text/plain; charset=UTF-8; format=flowed","Content-Transfer-Encoding":"8bit","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","List-Id":"<linuxppc-dev.lists.ozlabs.org>",
"X-Spam-Status":"No, score=-0.2 required=3.0 tests=ARC_SIGNED,ARC_VALID,\n\tDKIMWL_WL_HIGH,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,\n\tRCVD_IN_DNSWL_NONE,SPF_HELO_PASS,SPF_PASS autolearn=disabled\n\tversion=4.0.1 OzLabs 
8","X-Spam-Checker-Version":"SpamAssassin 4.0.1 (2024-03-25) on lists.ozlabs.org"}},{"id":3640073,"web_url":"http://patchwork.ozlabs.org/comment/3640073/","msgid":"<428a2aa3-d5b6-4a48-8cc3-34b3a0ccb350@nvidia.com>","date":"2026-01-22T06:24:26","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":92354,"url":"http://patchwork.ozlabs.org/api/people/92354/","name":"Jordan Niethe","email":"jniethe@nvidia.com"},"content":"Hi,\n\nOn 21/1/26 15:04, Jordan Niethe wrote:\n> On 21/1/26 13:41, Zi Yan wrote:\n>> On 20 Jan 2026, at 18:34, Jordan Niethe wrote:\n>>\n>>> Hi,\n>>>\n>>> On 21/1/26 10:06, Zi Yan wrote:\n>>>> On 20 Jan 2026, at 18:02, Jordan Niethe wrote:\n>>>>\n>>>>> Hi,\n>>>>>\n>>>>> On 21/1/26 09:53, Zi Yan wrote:\n>>>>>> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n>>>>>>\n>>>>>>> On 14/1/26 07:04, Zi Yan wrote:\n>>>>>>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n>>>>>>>>\n>>>>>>>>> Currently when creating these device private struct pages, the first\n>>>>>>>>> step is to use request_free_mem_region() to get a range of physical\n>>>>>>>>> address space large enough to represent the devices memory. This\n>>>>>>>>> allocated physical address range is then remapped as device private\n>>>>>>>>> memory using memremap_pages().\n>>>>>>>>>\n>>>>>>>>> Needing allocation of physical address space has some problems:\n>>>>>>>>>\n>>>>>>>>>       1) There may be insufficient physical address space to represent the\n>>>>>>>>>          device memory. KASLR reducing the physical address space and VM\n>>>>>>>>>          configurations with limited physical address space increase the\n>>>>>>>>>          likelihood of hitting this especially as device memory increases. 
This\n>>>>>>>>>          has been observed to prevent device private from being initialized.\n>>>>>>>>>\n>>>>>>>>>       2) Attempting to add the device private pages to the linear map at\n>>>>>>>>>          addresses beyond the actual physical memory causes issues on\n>>>>>>>>>          architectures like aarch64, meaning the feature does not work there.\n>>>>>>>>>\n>>>>>>>>> Instead of using the physical address space, introduce a device private\n>>>>>>>>> address space and allocate device regions from there to represent the\n>>>>>>>>> device private pages.\n>>>>>>>>>\n>>>>>>>>> Introduce a new interface memremap_device_private_pagemap() that\n>>>>>>>>> allocates a requested amount of device private address space and creates\n>>>>>>>>> the necessary device private pages.\n>>>>>>>>>\n>>>>>>>>> To support this new interface, struct dev_pagemap needs some changes:\n>>>>>>>>>\n>>>>>>>>>       - Add a new dev_pagemap::nr_pages field as an input parameter.\n>>>>>>>>>       - Add a new dev_pagemap::pages array to store the device\n>>>>>>>>>         private pages.\n>>>>>>>>>\n>>>>>>>>> When using memremap_device_private_pagemap(), rather than passing in\n>>>>>>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n>>>>>>>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n>>>>>>>>> private range that is reserved is returned in dev_pagemap::range.\n>>>>>>>>>\n>>>>>>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n>>>>>>>>> MEMORY_DEVICE_PRIVATE.\n>>>>>>>>>\n>>>>>>>>> Represent this device private address space using a new\n>>>>>>>>> device_private_pgmap_tree maple tree. 
This tree maps a given device\n>>>>>>>>> private address to a struct dev_pagemap, where a specific device private\n>>>>>>>>> page may then be looked up in that dev_pagemap::pages array.\n>>>>>>>>>\n>>>>>>>>> Device private address space can be reclaimed and the associated device\n>>>>>>>>> private pages freed using the corresponding new\n>>>>>>>>> memunmap_device_private_pagemap() interface.\n>>>>>>>>>\n>>>>>>>>> Because the device private pages now live outside the physical address\n>>>>>>>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n>>>>>>>>> et al. are no longer meaningful.\n>>>>>>>>>\n>>>>>>>>> Introduce helpers:\n>>>>>>>>>\n>>>>>>>>>       - device_private_page_to_offset()\n>>>>>>>>>       - device_private_folio_to_offset()\n>>>>>>>>>\n>>>>>>>>> to take a given device private page / folio and return its offset within\n>>>>>>>>> the device private address space.\n>>>>>>>>>\n>>>>>>>>> Update the places where we previously converted a device private page to\n>>>>>>>>> a PFN to use these new helpers. 
When we encounter a device private\n>>>>>>>>> offset, instead of looking up its page within the pagemap use\n>>>>>>>>> device_private_offset_to_page() instead.\n>>>>>>>>>\n>>>>>>>>> Update the existing users:\n>>>>>>>>>\n>>>>>>>>>      - lib/test_hmm.c\n>>>>>>>>>      - ppc ultravisor\n>>>>>>>>>      - drm/amd/amdkfd\n>>>>>>>>>      - gpu/drm/xe\n>>>>>>>>>      - gpu/drm/nouveau\n>>>>>>>>>\n>>>>>>>>> to use the new memremap_device_private_pagemap() interface.\n>>>>>>>>>\n>>>>>>>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n>>>>>>>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>>>>>>>>>\n>>>>>>>>> ---\n>>>>>>>>>\n>>>>>>>>> NOTE: The updates to the existing drivers have only been compile tested.\n>>>>>>>>> I'll need some help in testing these drivers.\n>>>>>>>>>\n>>>>>>>>> v1:\n>>>>>>>>> - Include NUMA node paramater for memremap_device_private_pagemap()\n>>>>>>>>> - Add devm_memremap_device_private_pagemap() and friends\n>>>>>>>>> - Update existing users of memremap_pages():\n>>>>>>>>>         - ppc ultravisor\n>>>>>>>>>         - drm/amd/amdkfd\n>>>>>>>>>         - gpu/drm/xe\n>>>>>>>>>         - gpu/drm/nouveau\n>>>>>>>>> - Update for HMM huge page support\n>>>>>>>>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>>>>>>>>>\n>>>>>>>>> v2:\n>>>>>>>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n>>>>>>>>> ---\n>>>>>>>>>      Documentation/mm/hmm.rst                 |  11 +-\n>>>>>>>>>      arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>>>>>>>>>      drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>>>>>>>>>      drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>>>>>>>>>      drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>>>>>>>>>      include/linux/hmm.h                      |   3 +\n>>>>>>>>>      include/linux/leafops.h                  |  16 +-\n>>>>>>>>>      include/linux/memremap.h                 |  64 +++++++-\n>>>>>>>>>      
include/linux/migrate.h                  |   6 +-\n>>>>>>>>>      include/linux/mm.h                       |   2 +\n>>>>>>>>>      include/linux/rmap.h                     |   5 +-\n>>>>>>>>>      include/linux/swapops.h                  |  10 +-\n>>>>>>>>>      lib/test_hmm.c                           |  69 ++++----\n>>>>>>>>>      mm/debug.c                               |   9 +-\n>>>>>>>>>      mm/memremap.c                            | 193 ++++++++++++++++++-----\n>>>>>>>>>      mm/mm_init.c                             |   8 +-\n>>>>>>>>>      mm/page_vma_mapped.c                     |  19 ++-\n>>>>>>>>>      mm/rmap.c                                |  43 +++--\n>>>>>>>>>      mm/util.c                                |   5 +-\n>>>>>>>>>      19 files changed, 391 insertions(+), 199 deletions(-)\n>>>>>>>>>\n>>>>>>>> <snip>\n>>>>>>>>\n>>>>>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n>>>>>>>>> index e65329e1969f..b36599ab41ba 100644\n>>>>>>>>> --- a/include/linux/mm.h\n>>>>>>>>> +++ b/include/linux/mm.h\n>>>>>>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>>>>>>>>>       */\n>>>>>>>>>      static inline unsigned long folio_pfn(const struct folio *folio)\n>>>>>>>>>      {\n>>>>>>>>> +    VM_BUG_ON(folio_is_device_private(folio));\n>>>>>>>>\n>>>>>>>> Please use VM_WARN_ON instead.\n>>>>>>>\n>>>>>>> ack.\n>>>>>>>\n>>>>>>>>\n>>>>>>>>> +\n>>>>>>>>>          return page_to_pfn(&folio->page);\n>>>>>>>>>      }\n>>>>>>>>>\n>>>>>>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n>>>>>>>>> index 57c63b6a8f65..c1561a92864f 100644\n>>>>>>>>> --- a/include/linux/rmap.h\n>>>>>>>>> +++ b/include/linux/rmap.h\n>>>>>>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>>>>>>>>      static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>>>>      {\n>>>>>>>>>          if (folio_is_device_private(folio))\n>>>>>>>>> -        return 
page_vma_walk_pfn(folio_pfn(folio)) |\n>>>>>>>>> +        return page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>>>>>>>>>                     PVMW_PFN_DEVICE_PRIVATE;\n>>>>>>>>>\n>>>>>>>>>          return page_vma_walk_pfn(folio_pfn(folio));\n>>>>>>>>> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>>>>\n>>>>>>>>>      static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>>>>>>>>>      {\n>>>>>>>>> +    if (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>>>>>>>>> +        return device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>>>>> +\n>>>>>>>>>          return pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>>>>>      }\n>>>>>>>>\n>>>>>>>> <snip>\n>>>>>>>>\n>>>>>>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>>>>>>>>> index 96c525785d78..141fe5abd33f 100644\n>>>>>>>>> --- a/mm/page_vma_mapped.c\n>>>>>>>>> +++ b/mm/page_vma_mapped.c\n>>>>>>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>>>>>>>>>      static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>>      {\n>>>>>>>>>          unsigned long pfn;\n>>>>>>>>> +    bool device_private = false;\n>>>>>>>>>          pte_t ptent = ptep_get(pvmw->pte);\n>>>>>>>>>\n>>>>>>>>>          if (pvmw->flags & PVMW_MIGRATION) {\n>>>>>>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>>              if (!softleaf_is_migration(entry))\n>>>>>>>>>                  return false;\n>>>>>>>>>\n>>>>>>>>> +        if (softleaf_is_migration_device_private(entry))\n>>>>>>>>> +            device_private = true;\n>>>>>>>>> +\n>>>>>>>>>              pfn = softleaf_to_pfn(entry);\n>>>>>>>>>          } else if (pte_present(ptent)) {\n>>>>>>>>>              pfn = pte_pfn(ptent);\n>>>>>>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long 
pte_nr)\n>>>>>>>>>                  return false;\n>>>>>>>>>\n>>>>>>>>>              pfn = softleaf_to_pfn(entry);\n>>>>>>>>> +\n>>>>>>>>> +        if (softleaf_is_device_private(entry))\n>>>>>>>>> +            device_private = true;\n>>>>>>>>>          }\n>>>>>>>>>\n>>>>>>>>> +    if ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>>>>> +        return false;\n>>>>>>>>> +\n>>>>>>>>>          if ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>>>>>              return false;\n>>>>>>>>>          if (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>>>>>>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>>      }\n>>>>>>>>>\n>>>>>>>>>      /* Returns true if the two ranges overlap.  Careful to not overflow. */\n>>>>>>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n>>>>>>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>>>>>>>>      {\n>>>>>>>>> +    if ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>>>>> +        return false;\n>>>>>>>>> +\n>>>>>>>>>          if ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>>>>>              return false;\n>>>>>>>>>          if (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>>>>>>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>>>>>\n>>>>>>>>>                      if (!softleaf_is_migration(entry) ||\n>>>>>>>>>                          !check_pmd(softleaf_to_pfn(entry),\n>>>>>>>>> +                           softleaf_is_device_private(entry) ||\n>>>>>>>>> +                           softleaf_is_migration_device_private(entry),\n>>>>>>>>>                                 pvmw))\n>>>>>>>>>                          return not_found(pvmw);\n>>>>>>>>>                      return true;\n>>>>>>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct 
page_vma_mapped_walk *pvmw)\n>>>>>>>>>                  if (likely(pmd_trans_huge(pmde))) {\n>>>>>>>>>                      if (pvmw->flags & PVMW_MIGRATION)\n>>>>>>>>>                          return not_found(pvmw);\n>>>>>>>>> -                if (!check_pmd(pmd_pfn(pmde), pvmw))\n>>>>>>>>> +                if (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>>>>>>>>>                          return not_found(pvmw);\n>>>>>>>>>                      return true;\n>>>>>>>>>                  }\n>>>>>>>>\n>>>>>>>> It seems to me that you can add a new flag like “bool is_device_private” to\n>>>>>>>> indicate whether pfn is a device private index instead of pfn without\n>>>>>>>> manipulating pvmw->pfn itself.\n>>>>>>>\n>>>>>>> We could do it like that, however my concern with using a new param was that\n>>>>>>> storing this info separately might make it easier to misuse a device private\n>>>>>>> index as a regular pfn.\n>>>>>>>\n>>>>>>> It seemed like it could be easy to overlook both when creating the pvmw and\n>>>>>>> then when accessing the pfn.\n>>>>>>\n>>>>>> That is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\n>>>>>> return the converted pfn instead of pvmw->pfn directly. You can add a comment\n>>>>>> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n>>>>>> directly */.\n>>>>>\n>>>>> Yeah I agree that is a good idea.\n>>>>>\n>>>>>>\n>>>>>> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n>>>>>> there is no weird arch having pfns with bit 63 being 1? Your change could\n>>>>>> break it, right?\n>>>>>\n>>>>> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6), so I\n>>>>> thought doing something similar here should be safe.\n>>>>\n>>>> Yeah, but that limits it to archs supporting HMM. 
page_vma_mapped_walk is used\n>>>> by almost every arch, so it has a broader impact.\n>>>\n>>> That is a good point.\n>>>\n>>> I see a few options:\n>>>\n>>> - On every arch we can assume SWP_PFN_BITS? I could add a sanity check that we\n>>>    have an extra bit on top of SWP_PFN_BITS within an unsigned long.\n>>\n>> Yes, but if there is no extra bit, are you going to disable device private\n>> pages?\n> \n> In this case, migrate PFNs would also be broken (due to MIGRATE_PFN_SHIFT) so we'd have to.\n> \n>>\n>>> - We could define PVMW_PFN_SHIFT as 0 if !CONFIG_MIGRATION as the flag is not\n>>>    required.\n>>\n>> Sure, or !CONFIG_DEVICE_MIGRATION\n>>\n>>> - Instead of modifying pvmw->pfn we could use pvmw->flags but that has the\n>>>    issues of separating the offset type and offset.\n>>\n>> It seems that I was not clear on my proposal. Here is the patch on top of\n>> your patchset and it compiles.\n> \n> Oh I'd interpreted “bool is_device_private” as adding a new field to pvmw.\n> \n>>\n>> Basically, pvmw->pfn stores either PFN or device private offset without\n>> additional shift. Caller interprets pvmw->pfn based on\n>> pvmw->flags & PVMW_DEVICE_PRIVATE. And you can ignore my helper function\n>> of pvmw->pfn suggestion, since my patch below can use pvmw->pfn directly.\n> \n> Thanks, looks reasonable. I'll try it.\n> \n> Thanks,\n> Jordan.\n> \n>>\n>> Let me know if my patch works. 
Thanks.\n\nWe need to be careful now to ensure the PVMW_DEVICE_PRIVATE flag doesn't get\noverwritten:\n\n\n--- a/mm/rmap.c\n+++ b/mm/rmap.c\n@@ -1871,7 +1871,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,\n          * if page table locking is skipped: use TTU_SYNC to wait for that.\n          */\n         if (flags & TTU_SYNC)\n-               pvmw.flags = PVMW_SYNC;\n+               pvmw.flags = page_vma_walk_flags(folio, PVMW_SYNC);\n  \n         /*\n          * For THP, we have to assume the worse case ie pmd for invalidation.\n@@ -2304,7 +2304,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,\n          * if page table locking is skipped: use TTU_SYNC to wait for that.\n          */\n         if (flags & TTU_SYNC)\n-               pvmw.flags = PVMW_SYNC;\n+               pvmw.flags = page_vma_walk_flags(folio, PVMW_SYNC);\n  \n         /*\n          * For THP, we have to assume the worse case ie pmd for invalidation.\n\nOther than that tests ok.\n\nThanks,\nJordan.\n>>\n>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n>> index c1561a92864f..4423f0e886aa 100644\n>> --- a/include/linux/rmap.h\n>> +++ b/include/linux/rmap.h\n>> @@ -921,6 +921,7 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,\n>>   #define PVMW_SYNC        (1 << 0)\n>>   /* Look for migration entries rather than present PTEs */\n>>   #define PVMW_MIGRATION        (1 << 1)\n>> +#define PVMW_DEVICE_PRIVATE    (1 << 2)\n>>\n>>   /* Result flags */\n>>\n>> @@ -943,6 +944,13 @@ struct page_vma_mapped_walk {\n>>   #define PVMW_PFN_DEVICE_PRIVATE    (1UL << 0)\n>>   #define PVMW_PFN_SHIFT        1\n>>\n>> +static inline unsigned long page_vma_walk_flags(struct folio *folio, unsigned long flags)\n>> +{\n>> +    if (folio_is_device_private(folio))\n>> +        return flags | PVMW_DEVICE_PRIVATE;\n>> +    return flags;\n>> +}\n>> +\n>>   static inline unsigned long page_vma_walk_pfn(unsigned 
long pfn)\n>>   {\n>>       return (pfn << PVMW_PFN_SHIFT);\n>> @@ -951,23 +959,16 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>   static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>   {\n>>       if (folio_is_device_private(folio))\n>> -        return page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>> -               PVMW_PFN_DEVICE_PRIVATE;\n>> -\n>> -    return page_vma_walk_pfn(folio_pfn(folio));\n>> -}\n>> -\n>> -static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>> -{\n>> -    if (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>> -        return device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>> +        return device_private_folio_to_offset(folio);\n>>\n>> -    return pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>> +    return (folio_pfn(folio));\n>>   }\n>>\n>> -static inline struct folio *page_vma_walk_pfn_to_folio(unsigned long pvmw_pfn)\n>> +static inline struct folio *page_vma_walk_pfn_to_folio(struct page_vma_mapped_walk *pvmw)\n>>   {\n>> -    return page_folio(page_vma_walk_pfn_to_page(pvmw_pfn));\n>> +    if (pvmw->flags & PVMW_DEVICE_PRIVATE)\n>> +        return page_folio(device_private_offset_to_page(pvmw->pfn));\n>> +    return pfn_folio(pvmw->pfn);\n>>   }\n>>\n>>   #define DEFINE_FOLIO_VMA_WALK(name, _folio, _vma, _address, _flags)    \\\n>> @@ -977,7 +978,7 @@ static inline struct folio *page_vma_walk_pfn_to_folio(unsigned long pvmw_pfn)\n>>           .pgoff = folio_pgoff(_folio),                \\\n>>           .vma = _vma,                        \\\n>>           .address = _address,                    \\\n>> -        .flags = _flags,                    \\\n>> +        .flags = page_vma_walk_flags(_folio, _flags),        \\\n>>       }\n>>\n>>   static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)\n>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>> index 141fe5abd33f..e61a0e49a7c9 100644\n>> --- 
a/mm/page_vma_mapped.c\n>> +++ b/mm/page_vma_mapped.c\n>> @@ -136,12 +136,12 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>               device_private = true;\n>>       }\n>>\n>> -    if ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>> +    if ((device_private) ^ !!(pvmw->flags & PVMW_DEVICE_PRIVATE))\n>>           return false;\n>>\n>> -    if ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>> +    if ((pfn + pte_nr - 1) < pvmw->pfn)\n>>           return false;\n>> -    if (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>> +    if (pfn > (pvmw->pfn + pvmw->nr_pages - 1))\n>>           return false;\n>>       return true;\n>>   }\n>> @@ -149,12 +149,12 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>   /* Returns true if the two ranges overlap.  Careful to not overflow. */\n>>   static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>   {\n>> -    if ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>> +    if ((device_private) ^ !!(pvmw->flags & PVMW_DEVICE_PRIVATE))\n>>           return false;\n>>\n>> -    if ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>> +    if ((pfn + HPAGE_PMD_NR - 1) < pvmw->pfn)\n>>           return false;\n>> -    if (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>> +    if (pfn > pvmw->pfn + pvmw->nr_pages - 1)\n>>           return false;\n>>       return true;\n>>   }\n>> @@ -369,7 +369,7 @@ unsigned long page_mapped_in_vma(const struct page *page,\n>>           .pfn = folio_page_vma_walk_pfn(folio),\n>>           .nr_pages = 1,\n>>           .vma = vma,\n>> -        .flags = PVMW_SYNC,\n>> +        .flags = page_vma_walk_flags(folio, PVMW_SYNC),\n>>       };\n>>\n>>       pvmw.address = vma_address(vma, page_pgoff(folio, page), 1);\n>> diff --git a/mm/vmscan.c b/mm/vmscan.c\n>> index be5682d345b5..5d81939bf12a 100644\n>> --- 
a/mm/vmscan.c\n>> +++ b/mm/vmscan.c\n>> @@ -4203,7 +4203,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)\n>>       pte_t *pte = pvmw->pte;\n>>       unsigned long addr = pvmw->address;\n>>       struct vm_area_struct *vma = pvmw->vma;\n>> -    struct folio *folio = page_vma_walk_pfn_to_folio(pvmw->pfn);\n>> +    struct folio *folio = page_vma_walk_pfn_to_folio(pvmw);\n>>       struct mem_cgroup *memcg = folio_memcg(folio);\n>>       struct pglist_data *pgdat = folio_pgdat(folio);\n>>       struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);\n>>\n>>\n>>\n>>\n>>\n>> Best Regards,\n>> Yan, Zi\n>","headers":{"Message-ID":"<428a2aa3-d5b6-4a48-8cc3-34b3a0ccb350@nvidia.com>","Date":"Thu, 22 Jan 2026 17:24:26 +1100","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","From":"Jordan Niethe <jniethe@nvidia.com>","To":"Zi Yan <ziy@nvidia.com>","In-Reply-To":"<254bd66c-4c0f-44f4-a4a1-87dc44bc5e30@nvidia.com>","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","X-Spam-Checker-Version":"SpamAssassin 
4.0.1 (2024-03-25) on lists.ozlabs.org"}},{"id":3640698,"web_url":"http://patchwork.ozlabs.org/comment/3640698/","msgid":"<sezye7d27h7pioazf4k3wfrdbradxovmdqyyp5slhljkmcnxf5@ckj3ujikhsnj>","date":"2026-01-23T02:02:42","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":81117,"url":"http://patchwork.ozlabs.org/api/people/81117/","name":"Alistair Popple","email":"apopple@nvidia.com"},"content":"On 2026-01-21 at 10:06 +1100, Zi Yan <ziy@nvidia.com> wrote...\n> On 20 Jan 2026, at 18:02, Jordan Niethe wrote:\n> \n> > Hi,\n> >\n> > On 21/1/26 09:53, Zi Yan wrote:\n> >> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n> >>\n> >>> On 14/1/26 07:04, Zi Yan wrote:\n> >>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n> >>>>\n> >>>>> Currently when creating these device private struct pages, the first\n> >>>>> step is to use request_free_mem_region() to get a range of physical\n> >>>>> address space large enough to represent the devices memory. This\n> >>>>> allocated physical address range is then remapped as device private\n> >>>>> memory using memremap_pages().\n> >>>>>\n> >>>>> Needing allocation of physical address space has some problems:\n> >>>>>\n> >>>>>     1) There may be insufficient physical address space to represent the\n> >>>>>        device memory. KASLR reducing the physical address space and VM\n> >>>>>        configurations with limited physical address space increase the\n> >>>>>        likelihood of hitting this especially as device memory increases. 
This\n> >>>>>        has been observed to prevent device private from being initialized.\n> >>>>>\n> >>>>>     2) Attempting to add the device private pages to the linear map at\n> >>>>>        addresses beyond the actual physical memory causes issues on\n> >>>>>        architectures like aarch64 meaning the feature does not work there.\n> >>>>>\n> >>>>> Instead of using the physical address space, introduce a device private\n> >>>>> address space and allocate device regions from there to represent the\n> >>>>> device private pages.\n> >>>>>\n> >>>>> Introduce a new interface memremap_device_private_pagemap() that\n> >>>>> allocates a requested amount of device private address space and creates\n> >>>>> the necessary device private pages.\n> >>>>>\n> >>>>> To support this new interface, struct dev_pagemap needs some changes:\n> >>>>>\n> >>>>>     - Add a new dev_pagemap::nr_pages field as an input parameter.\n> >>>>>     - Add a new dev_pagemap::pages array to store the device\n> >>>>>       private pages.\n> >>>>>\n> >>>>> When using memremap_device_private_pagemap(), rather than passing in\n> >>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n> >>>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n> >>>>> private range that is reserved is returned in dev_pagemap::range.\n> >>>>>\n> >>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n> >>>>> MEMORY_DEVICE_PRIVATE.\n> >>>>>\n> >>>>> Represent this device private address space using a new\n> >>>>> device_private_pgmap_tree maple tree. 
This tree maps a given device\n> >>>>> private address to a struct dev_pagemap, where a specific device private\n> >>>>> page may then be looked up in that dev_pagemap::pages array.\n> >>>>>\n> >>>>> Device private address space can be reclaimed and the assoicated device\n> >>>>> private pages freed using the corresponding new\n> >>>>> memunmap_device_private_pagemap() interface.\n> >>>>>\n> >>>>> Because the device private pages now live outside the physical address\n> >>>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n> >>>>> et al. are no longer meaningful.\n> >>>>>\n> >>>>> Introduce helpers:\n> >>>>>\n> >>>>>     - device_private_page_to_offset()\n> >>>>>     - device_private_folio_to_offset()\n> >>>>>\n> >>>>> to take a given device private page / folio and return its offset within\n> >>>>> the device private address space.\n> >>>>>\n> >>>>> Update the places where we previously converted a device private page to\n> >>>>> a PFN to use these new helpers. 
When we encounter a device private\n> >>>>> offset, instead of looking up its page within the pagemap use\n> >>>>> device_private_offset_to_page() instead.\n> >>>>>\n> >>>>> Update the existing users:\n> >>>>>\n> >>>>>    - lib/test_hmm.c\n> >>>>>    - ppc ultravisor\n> >>>>>    - drm/amd/amdkfd\n> >>>>>    - gpu/drm/xe\n> >>>>>    - gpu/drm/nouveau\n> >>>>>\n> >>>>> to use the new memremap_device_private_pagemap() interface.\n> >>>>>\n> >>>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n> >>>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n> >>>>>\n> >>>>> ---\n> >>>>>\n> >>>>> NOTE: The updates to the existing drivers have only been compile tested.\n> >>>>> I'll need some help in testing these drivers.\n> >>>>>\n> >>>>> v1:\n> >>>>> - Include NUMA node paramater for memremap_device_private_pagemap()\n> >>>>> - Add devm_memremap_device_private_pagemap() and friends\n> >>>>> - Update existing users of memremap_pages():\n> >>>>>       - ppc ultravisor\n> >>>>>       - drm/amd/amdkfd\n> >>>>>       - gpu/drm/xe\n> >>>>>       - gpu/drm/nouveau\n> >>>>> - Update for HMM huge page support\n> >>>>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n> >>>>>\n> >>>>> v2:\n> >>>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n> >>>>> ---\n> >>>>>    Documentation/mm/hmm.rst                 |  11 +-\n> >>>>>    arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n> >>>>>    drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n> >>>>>    drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n> >>>>>    drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n> >>>>>    include/linux/hmm.h                      |   3 +\n> >>>>>    include/linux/leafops.h                  |  16 +-\n> >>>>>    include/linux/memremap.h                 |  64 +++++++-\n> >>>>>    include/linux/migrate.h                  |   6 +-\n> >>>>>    include/linux/mm.h                       |   2 +\n> >>>>>    
include/linux/rmap.h                     |   5 +-\n> >>>>>    include/linux/swapops.h                  |  10 +-\n> >>>>>    lib/test_hmm.c                           |  69 ++++----\n> >>>>>    mm/debug.c                               |   9 +-\n> >>>>>    mm/memremap.c                            | 193 ++++++++++++++++++-----\n> >>>>>    mm/mm_init.c                             |   8 +-\n> >>>>>    mm/page_vma_mapped.c                     |  19 ++-\n> >>>>>    mm/rmap.c                                |  43 +++--\n> >>>>>    mm/util.c                                |   5 +-\n> >>>>>    19 files changed, 391 insertions(+), 199 deletions(-)\n> >>>>>\n> >>>> <snip>\n> >>>>\n> >>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n> >>>>> index e65329e1969f..b36599ab41ba 100644\n> >>>>> --- a/include/linux/mm.h\n> >>>>> +++ b/include/linux/mm.h\n> >>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n> >>>>>     */\n> >>>>>    static inline unsigned long folio_pfn(const struct folio *folio)\n> >>>>>    {\n> >>>>> +\tVM_BUG_ON(folio_is_device_private(folio));\n> >>>>\n> >>>> Please use VM_WARN_ON instead.\n> >>>\n> >>> ack.\n> >>>\n> >>>>\n> >>>>> +\n> >>>>>    \treturn page_to_pfn(&folio->page);\n> >>>>>    }\n> >>>>>\n> >>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n> >>>>> index 57c63b6a8f65..c1561a92864f 100644\n> >>>>> --- a/include/linux/rmap.h\n> >>>>> +++ b/include/linux/rmap.h\n> >>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n> >>>>>    static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n> >>>>>    {\n> >>>>>    \tif (folio_is_device_private(folio))\n> >>>>> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n> >>>>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n> >>>>>    \t\t       PVMW_PFN_DEVICE_PRIVATE;\n> >>>>>\n> >>>>>    \treturn page_vma_walk_pfn(folio_pfn(folio));\n> >>>>> @@ -959,6 +959,9 @@ 
static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n> >>>>>\n> >>>>>    static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n> >>>>>    {\n> >>>>> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n> >>>>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n> >>>>> +\n> >>>>>    \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n> >>>>>    }\n> >>>>\n> >>>> <snip>\n> >>>>\n> >>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n> >>>>> index 96c525785d78..141fe5abd33f 100644\n> >>>>> --- a/mm/page_vma_mapped.c\n> >>>>> +++ b/mm/page_vma_mapped.c\n> >>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n> >>>>>    static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n> >>>>>    {\n> >>>>>    \tunsigned long pfn;\n> >>>>> +\tbool device_private = false;\n> >>>>>    \tpte_t ptent = ptep_get(pvmw->pte);\n> >>>>>\n> >>>>>    \tif (pvmw->flags & PVMW_MIGRATION) {\n> >>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n> >>>>>    \t\tif (!softleaf_is_migration(entry))\n> >>>>>    \t\t\treturn false;\n> >>>>>\n> >>>>> +\t\tif (softleaf_is_migration_device_private(entry))\n> >>>>> +\t\t\tdevice_private = true;\n> >>>>> +\n> >>>>>    \t\tpfn = softleaf_to_pfn(entry);\n> >>>>>    \t} else if (pte_present(ptent)) {\n> >>>>>    \t\tpfn = pte_pfn(ptent);\n> >>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n> >>>>>    \t\t\treturn false;\n> >>>>>\n> >>>>>    \t\tpfn = softleaf_to_pfn(entry);\n> >>>>> +\n> >>>>> +\t\tif (softleaf_is_device_private(entry))\n> >>>>> +\t\t\tdevice_private = true;\n> >>>>>    \t}\n> >>>>>\n> >>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n> >>>>> +\t\treturn false;\n> >>>>> +\n> >>>>>    \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n> >>>>>    \t\treturn 
false;\n> >>>>>    \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n> >>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n> >>>>>    }\n> >>>>>\n> >>>>>    /* Returns true if the two ranges overlap.  Careful to not overflow. */\n> >>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n> >>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n> >>>>>    {\n> >>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n> >>>>> +\t\treturn false;\n> >>>>> +\n> >>>>>    \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n> >>>>>    \t\treturn false;\n> >>>>>    \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n> >>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n> >>>>>\n> >>>>>    \t\t\t\tif (!softleaf_is_migration(entry) ||\n> >>>>>    \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n> >>>>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n> >>>>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n> >>>>>    \t\t\t\t\t       pvmw))\n> >>>>>    \t\t\t\t\treturn not_found(pvmw);\n> >>>>>    \t\t\t\treturn true;\n> >>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n> >>>>>    \t\t\tif (likely(pmd_trans_huge(pmde))) {\n> >>>>>    \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n> >>>>>    \t\t\t\t\treturn not_found(pvmw);\n> >>>>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n> >>>>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n> >>>>>    \t\t\t\t\treturn not_found(pvmw);\n> >>>>>    \t\t\t\treturn true;\n> >>>>>    \t\t\t}\n> >>>>\n> >>>> It seems to me that you can add a new flag like “bool is_device_private” to\n> >>>> indicate whether pfn is a device private index instead of pfn without\n> >>>> manipulating pvmw->pfn itself.\n> >>>\n> >>> We could do it like that, however my concern 
with using a new param was that\n>>> storing this info separately might make it easier to misuse a device private\n>>> index as a regular pfn.\n>>>\n>>> It seemed like it could be easy to overlook both when creating the pvmw and\n>>> then when accessing the pfn.\n>>\n>> That is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\n>> return the converted pfn instead of pvmw->pfn directly. You can add a comment\n>> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n>> directly */.\n>\n> Yeah I agree that is a good idea.\n>\n>>\n>> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n>> there is no weird arch having pfns with bit 63 being 1? Your change could\n>> break it, right?\n>\n> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6), so I\n> thought doing something similar here should be safe.\n> \n> Yeah, but that is limited to archs supporting HMM. page_vma_mapped_walk is used\n> by almost every arch, so it has a broader impact.\n\nWe need to be a bit careful about what we mean when we say \"HMM\" in the kernel.\n\nSpecifically MIGRATE_PFN_SHIFT is used with migrate_vma/migrate_device, which\nis the migration half of \"HMM\" and does depend on CONFIG_DEVICE_MIGRATION or\nreally just CONFIG_ZONE_DEVICE, making it somewhat arch specific.\n\nHowever hmm_range_fault() does something similar - see the definition of\nhmm_pfn_flags - it actually steals the top 11 bits of a pfn for flags, and it is\nnot architecture specific. It only depends on CONFIG_MMU.\n\nNow I'm not saying this implies it actually works on all architectures as I\nagree the page_vma_mapped_walk code is used much more widely. 
Rather I'm just\npointing out if there are issues with some architectures using high PFN bits\nthen we likely have a problem here too :-)\n\n - Alistair\n\n> Best Regards,\n> Yan, Zi","headers":{"Date":"Fri, 23 Jan 2026 13:02:42 +1100","From":"Alistair Popple <apopple@nvidia.com>","To":"Zi Yan <ziy@nvidia.com>","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","Message-ID":"<sezye7d27h7pioazf4k3wfrdbradxovmdqyyp5slhljkmcnxf5@ckj3ujikhsnj>","In-Reply-To":"<16770FCE-A248-4184-ABFC-94C02C0B30F3@nvidia.com>","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org"}},{"id":3640705,"web_url":"http://patchwork.ozlabs.org/comment/3640705/","msgid":"<DBBD65CA-A8F2-40AC-AFA0-FC95CBDB3DF5@nvidia.com>","date":"2026-01-23T03:06:28","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":76947,"url":"http://patchwork.ozlabs.org/api/people/76947/","name":"Zi Yan","email":"ziy@nvidia.com"},"content":"On 22 Jan 2026, at 21:02, Alistair Popple wrote:\n\n> On 2026-01-21 at 10:06 +1100, Zi Yan <ziy@nvidia.com> wrote...\n>> On 20 Jan 2026, at 18:02, Jordan Niethe wrote:\n>>\n>>> Hi,\n>>>\n>>> On 21/1/26 09:53, Zi Yan wrote:\n>>>> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n>>>>\n>>>>> On 14/1/26 07:04, Zi Yan wrote:\n>>>>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n>>>>>>\n>>>>>>> Currently when creating these device private struct pages, the first\n>>>>>>> step is to use request_free_mem_region() to get a range of physical\n>>>>>>> address space large enough to represent the devices memory. 
This\n>>>>>>> allocated physical address range is then remapped as device private\n>>>>>>> memory using memremap_pages().\n>>>>>>>\n>>>>>>> <snip>\n>>>>>>\n>>>>>> It seems to me that you can add a new flag like “bool is_device_private” to\n>>>>>> indicate whether pfn is a device private index instead of pfn without\n>>>>>> manipulating pvmw->pfn itself.\n>>>>>\n>>>>> We could do it like that, however my concern 
with using a new param was that\n>>>>> storing this info separately might make it easier to misuse a device private\n>>>>> index as a regular pfn.\n>>>>>\n>>>>> It seemed like it could be easy to overlook both when creating the pvmw and\n>>>>> then when accessing the pfn.\n>>>>\n>>>> That is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\n>>>> return the converted pfn instead of pvmw->pfn directly. You can add a comment\n>>>> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n>>>> directly */.\n>>>\n>>> Yeah I agree that is a good idea.\n>>>\n>>>>\n>>>> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n>>>> there is no weird arch having pfns with bit 63 being 1? Your change could\n>>>> break it, right?\n>>>\n>>> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6), so I\n>>> thought doing something similar here should be safe.\n>>\n>> Yeah, but that limits to archs supporting HMM. page_vma_mapped_walk is used\n>> by almost every arch, so it has a broader impact.\n>\n> We need to be a bit careful by what we mean when we say \"HMM\" in the kernel.\n>\n> Specifically MIGRATE_PFN_SHIFT is used with migrate_vma/migrate_device, which\n> is the migration half of \"HMM\" which does depend on CONFIG_DEVICE_MIGRATION or\n> really just CONFIG_ZONE_DEVICE making it somewhat arch specific.\n>\n> However hmm_range_fault() does something similar - see the definition of\n> hmm_pfn_flags - it actually steals the top 11 bits of a pfn for flags, and it is\n> not architecture specific. It only depends on CONFIG_MMU.\n\nOh, that is hacky. But are HMM PFNs with any flag exposed to code outside HMM?\nCurrently, device private needs to reserve PFNs for struct page, so I assume\nonly the reserved PFNs are seen by outsiders. 
Otherwise, when outsiders see\na HMM PFN with a flag, pfn_to_page() on such a PFN will read a non-existent\nstruct page, right?\n\nFor this page_vma_mapped_walk code, it is manipulating PFNs used by everyone,\nnot just HMM, and can potentially (might be very rare) alter their values\nafter shifts. And if an HMM PFN with HMM_PFN_VALID is processed by the code,\nthe HMM PFN will lose HMM_PFN_VALID bit. So I guess HMM PFN is not showing\noutside HMM code.\n\n>\n> Now I'm not saying this implies it actually works on all architectures as I\n> agree the page_vma_mapped_walk code is used much more widely. Rather I'm just\n> pointing out if there are issues with some architectures using high PFN bits\n> then we likely have a problem here too :-)\n\n\nBest Regards,\nYan, Zi","headers":{"From":"Zi Yan <ziy@nvidia.com>","To":"Alistair Popple <apopple@nvidia.com>","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","Date":"Thu, 22 Jan 2026 22:06:28 -0500","Message-ID":"<DBBD65CA-A8F2-40AC-AFA0-FC95CBDB3DF5@nvidia.com>","In-Reply-To":"<sezye7d27h7pioazf4k3wfrdbradxovmdqyyp5slhljkmcnxf5@ckj3ujikhsnj>","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","X-Spam-Checker-Version":"SpamAssassin 4.0.1 (2024-03-25) on 
lists.ozlabs.org"}},{"id":3640706,"web_url":"http://patchwork.ozlabs.org/comment/3640706/","msgid":"<0C16A79F-5A7B-4358-9806-7F78E7EA8EE6@nvidia.com>","date":"2026-01-23T03:09:41","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":76947,"url":"http://patchwork.ozlabs.org/api/people/76947/","name":"Zi Yan","email":"ziy@nvidia.com"},"content":"On 22 Jan 2026, at 22:06, Zi Yan wrote:\n\n> On 22 Jan 2026, at 21:02, Alistair Popple wrote:\n>\n>> On 2026-01-21 at 10:06 +1100, Zi Yan <ziy@nvidia.com> wrote...\n>>> On 20 Jan 2026, at 18:02, Jordan Niethe wrote:\n>>>\n>>>> Hi,\n>>>>\n>>>> On 21/1/26 09:53, Zi Yan wrote:\n>>>>> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n>>>>>\n>>>>>> On 14/1/26 07:04, Zi Yan wrote:\n>>>>>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n>>>>>>>\n>>>>>>>> Currently when creating these device private struct pages, the first\n>>>>>>>> step is to use request_free_mem_region() to get a range of physical\n>>>>>>>> address space large enough to represent the devices memory. This\n>>>>>>>> allocated physical address range is then remapped as device private\n>>>>>>>> memory using memremap_pages().\n>>>>>>>>\n>>>>>>>> Needing allocation of physical address space has some problems:\n>>>>>>>>\n>>>>>>>>     1) There may be insufficient physical address space to represent the\n>>>>>>>>        device memory. KASLR reducing the physical address space and VM\n>>>>>>>>        configurations with limited physical address space increase the\n>>>>>>>>        likelihood of hitting this especially as device memory increases. 
This\n>>>>>>>>        has been observed to prevent device private from being initialized.\n>>>>>>>>\n>>>>>>>>     2) Attempting to add the device private pages to the linear map at\n>>>>>>>>        addresses beyond the actual physical memory causes issues on\n>>>>>>>>        architectures like aarch64 meaning the feature does not work there.\n>>>>>>>>\n>>>>>>>> Instead of using the physical address space, introduce a device private\n>>>>>>>> address space and allocate devices regions from there to represent the\n>>>>>>>> device private pages.\n>>>>>>>>\n>>>>>>>> Introduce a new interface memremap_device_private_pagemap() that\n>>>>>>>> allocates a requested amount of device private address space and creates\n>>>>>>>> the necessary device private pages.\n>>>>>>>>\n>>>>>>>> To support this new interface, struct dev_pagemap needs some changes:\n>>>>>>>>\n>>>>>>>>     - Add a new dev_pagemap::nr_pages field as an input parameter.\n>>>>>>>>     - Add a new dev_pagemap::pages array to store the device\n>>>>>>>>       private pages.\n>>>>>>>>\n>>>>>>>> When using memremap_device_private_pagemap(), rather then passing in\n>>>>>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n>>>>>>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n>>>>>>>> private range that is reserved is returned in dev_pagemap::range.\n>>>>>>>>\n>>>>>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n>>>>>>>> MEMORY_DEVICE_PRIVATE.\n>>>>>>>>\n>>>>>>>> Represent this device private address space using a new\n>>>>>>>> device_private_pgmap_tree maple tree. 
This tree maps a given device\n>>>>>>>> private address to a struct dev_pagemap, where a specific device private\n>>>>>>>> page may then be looked up in that dev_pagemap::pages array.\n>>>>>>>>\n>>>>>>>> Device private address space can be reclaimed and the assoicated device\n>>>>>>>> private pages freed using the corresponding new\n>>>>>>>> memunmap_device_private_pagemap() interface.\n>>>>>>>>\n>>>>>>>> Because the device private pages now live outside the physical address\n>>>>>>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n>>>>>>>> et al. are no longer meaningful.\n>>>>>>>>\n>>>>>>>> Introduce helpers:\n>>>>>>>>\n>>>>>>>>     - device_private_page_to_offset()\n>>>>>>>>     - device_private_folio_to_offset()\n>>>>>>>>\n>>>>>>>> to take a given device private page / folio and return its offset within\n>>>>>>>> the device private address space.\n>>>>>>>>\n>>>>>>>> Update the places where we previously converted a device private page to\n>>>>>>>> a PFN to use these new helpers. 
When we encounter a device private\n>>>>>>>> offset, instead of looking up its page within the pagemap use\n>>>>>>>> device_private_offset_to_page() instead.\n>>>>>>>>\n>>>>>>>> Update the existing users:\n>>>>>>>>\n>>>>>>>>    - lib/test_hmm.c\n>>>>>>>>    - ppc ultravisor\n>>>>>>>>    - drm/amd/amdkfd\n>>>>>>>>    - gpu/drm/xe\n>>>>>>>>    - gpu/drm/nouveau\n>>>>>>>>\n>>>>>>>> to use the new memremap_device_private_pagemap() interface.\n>>>>>>>>\n>>>>>>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n>>>>>>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n>>>>>>>>\n>>>>>>>> ---\n>>>>>>>>\n>>>>>>>> NOTE: The updates to the existing drivers have only been compile tested.\n>>>>>>>> I'll need some help in testing these drivers.\n>>>>>>>>\n>>>>>>>> v1:\n>>>>>>>> - Include NUMA node paramater for memremap_device_private_pagemap()\n>>>>>>>> - Add devm_memremap_device_private_pagemap() and friends\n>>>>>>>> - Update existing users of memremap_pages():\n>>>>>>>>       - ppc ultravisor\n>>>>>>>>       - drm/amd/amdkfd\n>>>>>>>>       - gpu/drm/xe\n>>>>>>>>       - gpu/drm/nouveau\n>>>>>>>> - Update for HMM huge page support\n>>>>>>>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n>>>>>>>>\n>>>>>>>> v2:\n>>>>>>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n>>>>>>>> ---\n>>>>>>>>    Documentation/mm/hmm.rst                 |  11 +-\n>>>>>>>>    arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n>>>>>>>>    drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n>>>>>>>>    drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n>>>>>>>>    drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n>>>>>>>>    include/linux/hmm.h                      |   3 +\n>>>>>>>>    include/linux/leafops.h                  |  16 +-\n>>>>>>>>    include/linux/memremap.h                 |  64 +++++++-\n>>>>>>>>    include/linux/migrate.h                  |   6 +-\n>>>>>>>>    include/linux/mm.h                
       |   2 +\n>>>>>>>>    include/linux/rmap.h                     |   5 +-\n>>>>>>>>    include/linux/swapops.h                  |  10 +-\n>>>>>>>>    lib/test_hmm.c                           |  69 ++++----\n>>>>>>>>    mm/debug.c                               |   9 +-\n>>>>>>>>    mm/memremap.c                            | 193 ++++++++++++++++++-----\n>>>>>>>>    mm/mm_init.c                             |   8 +-\n>>>>>>>>    mm/page_vma_mapped.c                     |  19 ++-\n>>>>>>>>    mm/rmap.c                                |  43 +++--\n>>>>>>>>    mm/util.c                                |   5 +-\n>>>>>>>>    19 files changed, 391 insertions(+), 199 deletions(-)\n>>>>>>>>\n>>>>>>> <snip>\n>>>>>>>\n>>>>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n>>>>>>>> index e65329e1969f..b36599ab41ba 100644\n>>>>>>>> --- a/include/linux/mm.h\n>>>>>>>> +++ b/include/linux/mm.h\n>>>>>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n>>>>>>>>     */\n>>>>>>>>    static inline unsigned long folio_pfn(const struct folio *folio)\n>>>>>>>>    {\n>>>>>>>> +\tVM_BUG_ON(folio_is_device_private(folio));\n>>>>>>>\n>>>>>>> Please use VM_WARN_ON instead.\n>>>>>>\n>>>>>> ack.\n>>>>>>\n>>>>>>>\n>>>>>>>> +\n>>>>>>>>    \treturn page_to_pfn(&folio->page);\n>>>>>>>>    }\n>>>>>>>>\n>>>>>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n>>>>>>>> index 57c63b6a8f65..c1561a92864f 100644\n>>>>>>>> --- a/include/linux/rmap.h\n>>>>>>>> +++ b/include/linux/rmap.h\n>>>>>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n>>>>>>>>    static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>>>    {\n>>>>>>>>    \tif (folio_is_device_private(folio))\n>>>>>>>> -\t\treturn page_vma_walk_pfn(folio_pfn(folio)) |\n>>>>>>>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n>>>>>>>>    \t\t       PVMW_PFN_DEVICE_PRIVATE;\n>>>>>>>>\n>>>>>>>>    \treturn 
page_vma_walk_pfn(folio_pfn(folio));\n>>>>>>>> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n>>>>>>>>\n>>>>>>>>    static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n>>>>>>>>    {\n>>>>>>>> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n>>>>>>>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>>>> +\n>>>>>>>>    \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n>>>>>>>>    }\n>>>>>>>\n>>>>>>> <snip>\n>>>>>>>\n>>>>>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n>>>>>>>> index 96c525785d78..141fe5abd33f 100644\n>>>>>>>> --- a/mm/page_vma_mapped.c\n>>>>>>>> +++ b/mm/page_vma_mapped.c\n>>>>>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n>>>>>>>>    static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>    {\n>>>>>>>>    \tunsigned long pfn;\n>>>>>>>> +\tbool device_private = false;\n>>>>>>>>    \tpte_t ptent = ptep_get(pvmw->pte);\n>>>>>>>>\n>>>>>>>>    \tif (pvmw->flags & PVMW_MIGRATION) {\n>>>>>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>    \t\tif (!softleaf_is_migration(entry))\n>>>>>>>>    \t\t\treturn false;\n>>>>>>>>\n>>>>>>>> +\t\tif (softleaf_is_migration_device_private(entry))\n>>>>>>>> +\t\t\tdevice_private = true;\n>>>>>>>> +\n>>>>>>>>    \t\tpfn = softleaf_to_pfn(entry);\n>>>>>>>>    \t} else if (pte_present(ptent)) {\n>>>>>>>>    \t\tpfn = pte_pfn(ptent);\n>>>>>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>    \t\t\treturn false;\n>>>>>>>>\n>>>>>>>>    \t\tpfn = softleaf_to_pfn(entry);\n>>>>>>>> +\n>>>>>>>> +\t\tif (softleaf_is_device_private(entry))\n>>>>>>>> +\t\t\tdevice_private = true;\n>>>>>>>>    \t}\n>>>>>>>>\n>>>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>>>> +\t\treturn 
false;\n>>>>>>>> +\n>>>>>>>>    \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>>>>    \t\treturn false;\n>>>>>>>>    \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n>>>>>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n>>>>>>>>    }\n>>>>>>>>\n>>>>>>>>    /* Returns true if the two ranges overlap.  Careful to not overflow. */\n>>>>>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n>>>>>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n>>>>>>>>    {\n>>>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n>>>>>>>> +\t\treturn false;\n>>>>>>>> +\n>>>>>>>>    \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n>>>>>>>>    \t\treturn false;\n>>>>>>>>    \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n>>>>>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>>>>\n>>>>>>>>    \t\t\t\tif (!softleaf_is_migration(entry) ||\n>>>>>>>>    \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n>>>>>>>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n>>>>>>>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n>>>>>>>>    \t\t\t\t\t       pvmw))\n>>>>>>>>    \t\t\t\t\treturn not_found(pvmw);\n>>>>>>>>    \t\t\t\treturn true;\n>>>>>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n>>>>>>>>    \t\t\tif (likely(pmd_trans_huge(pmde))) {\n>>>>>>>>    \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n>>>>>>>>    \t\t\t\t\treturn not_found(pvmw);\n>>>>>>>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n>>>>>>>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n>>>>>>>>    \t\t\t\t\treturn not_found(pvmw);\n>>>>>>>>    \t\t\t\treturn true;\n>>>>>>>>    \t\t\t}\n>>>>>>>\n>>>>>>> It seems to me that you can add a new flag like “bool is_device_private” to\n>>>>>>> indicate whether 
pfn is a device private index instead of pfn without\n>>>>>>> manipulating pvmw->pfn itself.\n>>>>>>\n>>>>>> We could do it like that, however my concern with using a new param was that\n>>>>>> storing this info separately might make it easier to misuse a device private\n>>>>>> index as a regular pfn.\n>>>>>>\n>>>>>> It seemed like it could be easy to overlook both when creating the pvmw and\n>>>>>> then when accessing the pfn.\n>>>>>\n>>>>> That is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\n>>>>> return the converted pfn instead of pvmw->pfn directly. You can add a comment\n>>>>> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n>>>>> directly */.\n>>>>\n>>>> Yeah I agree that is a good idea.\n>>>>\n>>>>>\n>>>>> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n>>>>> there is no weird arch having pfns with bit 63 being 1? Your change could\n>>>>> break it, right?\n>>>>\n>>>> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6), so I\n>>>> thought doing something similar here should be safe.\n>>>\n>>> Yeah, but that limits to archs supporting HMM. page_vma_mapped_walk is used\n>>> by almost every arch, so it has a broader impact.\n>>\n>> We need to be a bit careful about what we mean when we say \"HMM\" in the kernel.\n>>\n>> Specifically MIGRATE_PFN_SHIFT is used with migrate_vma/migrate_device, which\n>> is the migration half of \"HMM\" which does depend on CONFIG_DEVICE_MIGRATION or\n>> really just CONFIG_ZONE_DEVICE making it somewhat arch specific.\n>>\n>> However hmm_range_fault() does something similar - see the definition of\n>> hmm_pfn_flags - it actually steals the top 11 bits of a pfn for flags, and it is\n>> not architecture specific. It only depends on CONFIG_MMU.\n>\n> Oh, that is hacky. 
But are HMM PFNs with any flag exposed to code outside HMM?\n> Currently, device private needs to reserve PFNs for struct page, so I assume\n> only the reserved PFNs are seen by outsiders. Otherwise, when outsiders see\n> an HMM PFN with a flag, pfn_to_page() on such a PFN will read a non-existent\n> struct page, right?\n>\n> For this page_vma_mapped_walk code, it is manipulating PFNs used by everyone,\n> not just HMM, and can potentially (might be very rare) alter their values\n> after shifts.\n\n\n> And if an HMM PFN with HMM_PFN_VALID is processed by the code,\n> the HMM PFN will lose the HMM_PFN_VALID bit. So I guess the HMM PFN is not visible\n> outside HMM code.\n\nOops, this code is removing the PFN reservation mechanism, so please ignore\nthe above two sentences.\n\n>\n>>\n>> Now I'm not saying this implies it actually works on all architectures as I\n>> agree the page_vma_mapped_walk code is used much more widely. Rather I'm just\n>> pointing out if there are issues with some architectures using high PFN bits\n>> then we likely have a problem here too :-)\n>\n>\n> Best Regards,\n> Yan, Zi\n\n\nBest Regards,\nYan, Zi","headers":{"Return-Path":"\n <linuxppc-dev+bounces-16182-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=Pmk7QDmu;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n (client-ip=112.213.38.117; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-16182-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)","lists.ozlabs.org;\n arc=pass smtp.remote-ip=\"2a01:111:f403:c107::9\" arc.chain=microsoft.com","lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) 
header.from=nvidia.com","lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=Pmk7QDmu;\n\tdkim-atps=neutral","lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nvidia.com\n (client-ip=2a01:111:f403:c107::9;\n helo=ph7pr06cu001.outbound.protection.outlook.com;\n envelope-from=ziy@nvidia.com; receiver=lists.ozlabs.org)","dkim=none (message not signed)\n header.d=none;dmarc=none action=none header.from=nvidia.com;"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [112.213.38.117])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4dy2x218rrz1xsN\n\tfor <incoming@patchwork.ozlabs.org>; Fri, 23 Jan 2026 14:10:13 +1100 (AEDT)","from boromir.ozlabs.org (localhost [127.0.0.1])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 4dy2x13Bqxz2xJ5;\n\tFri, 23 Jan 2026 14:10:13 +1100 (AEDT)","from PH7PR06CU001.outbound.protection.outlook.com\n (mail-westus3azlp170100009.outbound.protection.outlook.com\n [IPv6:2a01:111:f403:c107::9])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange secp256r1 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 4dy2x03tFZz2x9M\n\tfor <linuxppc-dev@lists.ozlabs.org>; Fri, 23 Jan 2026 14:10:12 +1100 (AEDT)","from DS7PR12MB9473.namprd12.prod.outlook.com (2603:10b6:8:252::5) by\n CH3PR12MB7665.namprd12.prod.outlook.com (2603:10b6:610:14a::12) with\n Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.9542.11; Fri, 23 Jan\n 2026 03:09:48 +0000","from DS7PR12MB9473.namprd12.prod.outlook.com\n ([fe80::f01d:73d2:2dda:c7b2]) by DS7PR12MB9473.namprd12.prod.outlook.com\n ([fe80::f01d:73d2:2dda:c7b2%4]) with mapi id 15.20.9542.010; Fri, 23 Jan 
2026\n 03:09:48 +0000"],"ARC-Seal":["i=2; a=rsa-sha256; d=lists.ozlabs.org; s=201707; t=1769137813;\n\tcv=pass;\n b=hJRp2pFgsROyb1ftnHP/J0aL0v3R/wQo/PzZ5cTSrcTECbkrdU2NkjB4pBhyx9jyCBb6uOrgu1ZSO1S5hDG7mfKG8QP//dsvOmOy3MyoUngHRau7/bxuUxzcmGy3SM1WgWh38mteBRkTePHy3H565xUavE2EY5MsVpB3zLRW47uGOzfGeho/BqE2ez2GVwQ+A4WUwyeh8CtyRrQuDNTS066jn040+JJdJiSmqLyhTiUVSyjjMuppG56nGixJ0ItF5Ksy4zwjElQrY5GekDNfR8w7wyyzF0J3/8REu6H3bPdZvh5+N1vTi4LHnaDyd1FK0bGiloInZ3B448mmf0981A==","i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none;\n b=HeOPbkstp28HMzqaMAKmOlPVdnzWhOSF4OvW+v2usS7kelSsAIKu3V9l32l1qXbTuKyn7zLXVD7xQbFKqie+R8sNYd1ukkZN8yc34rVOYn0MiycaPGah5wY18lfh3/AW2m3BhC0X0J+t9L2Hz5J77A2kohHWCs171K7b11YWftb/Qxgz0ZhYrG7AS7vmZvbSgZ/IVjiZUlQ0/cs5uu03uClguutV2onsiRbI77XvgGIutM/PM2X9LIHD1EmaAT91Z7lrIfdO+geVw4b7UYcbHUdOyUZCCHTIdp7/Vk/zOm8JByzu4vqSYDMjGsOXc7QBzEM/WZJKvERLD1yXRGLEow=="],"ARC-Message-Signature":["i=2; a=rsa-sha256; d=lists.ozlabs.org; s=201707;\n\tt=1769137813; c=relaxed/relaxed;\n\tbh=dMRaaueU64RXWPtbDxqf25Qe0llFU9MlwPcIheP9gZE=;\n\th=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:\n\t Content-Type:MIME-Version;\n b=nNRmZJ2ZyjYZjwdfuucJWZ3P69afen8idKlsIgPxqhxXE2oqbBE0j3LOUDIzv10mY4VTQf0bjfTkm2Jm6194qgN+cxyMCVSV4JTNiIKAcpG6XCFZ54eECFlA/j8gmEVFzA/u0OKS8wusHKu+W6Mh7YkRD0F7ZwWOys/b4LSC7Bu1MBNSinObwwcuT6gf2Xyhy5ILh+LCS2TVE12Cb3Y6oTByb6UBQv5k7LdUaPAV57OkAwUNJOBH20LUvFbSrL5GCoEHZL9AQPgkoHjouUBj3sjbg9UbJo4gDgpui3OclV/O84gapwVyJdZMibUV/7nkX+gpH0N+vJfTNZGa6R50aw==","i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector10001;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=dMRaaueU64RXWPtbDxqf25Qe0llFU9MlwPcIheP9gZE=;\n 
b=PB95TuxhSYJNnSenlMovbmWZ5HwwznjlbB1SeNp3Sqr388siAzGJvHQniKh3Se3tF674hXDOInbTHbEQhxYi/QvP70WUtvxhi45mUYbQTkTuGj7YPjE9RGqgRAUcRv177Y9PD+iKvsceLrp6iUTewGCbgT1wSnKK0iGaPsNt92lHWRkHgd7dCqkIR/awjJXJX9KK8Q2FvBeRO5FY7Q0eNtr76nd1oB82N/0ECce6zNo9XpYvuAMgFtteDurY90R72auFwVJWHA+67uqTN0kY+yghraC1rsaplsvbTADHZEMlIbo87rsYL0mjxNUoWbcjqRJiwjNwmDL4wVPlVZwZYQ=="],"ARC-Authentication-Results":["i=2; lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com;\n dkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=Pmk7QDmu; dkim-atps=neutral;\n spf=pass (client-ip=2a01:111:f403:c107::9;\n helo=ph7pr06cu001.outbound.protection.outlook.com;\n envelope-from=ziy@nvidia.com;\n receiver=lists.ozlabs.org) smtp.mailfrom=nvidia.com","i=1; mx.microsoft.com 1; spf=pass\n smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;\n dkim=pass header.d=nvidia.com; arc=none"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com;\n s=selector2;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=dMRaaueU64RXWPtbDxqf25Qe0llFU9MlwPcIheP9gZE=;\n b=Pmk7QDmuKxMP+o7czJfqyJXRt2OGQJirT68uuThzgD84mgXra4dYz8HJe/Gl1nSuK0llrIiOeegSjaCPCf9NgqU9xbiMeiKGG/Nh38JalgklnF62GlR42aZqTE1Gcw3cpjXajPfvOTTLKA36+wodWPPfUWgym/+w2lU21IrV6Kc2bTcT6K2RJrD9Xp4DbkZWCvLvwn/JvJO2ZRnrnw7plJm6f7BZ87km0X+HT+ZYWbROQEXUBLokZWlHzErrAVNYaXrDj4fxsxkQpDGEJy5rd/aeRSCfSXJHmLpHV3hWp5BBJMsiNVk3UK98w4fJunrH1sVE1ocGsHTuHC9TZhyhKQ==","From":"Zi Yan <ziy@nvidia.com>","To":"Alistair Popple <apopple@nvidia.com>","Cc":"Jordan Niethe <jniethe@nvidia.com>, <linux-mm@kvack.org>,\n <balbirs@nvidia.com>, <matthew.brost@intel.com>, <akpm@linux-foundation.org>,\n <linux-kernel@vger.kernel.org>, <dri-devel@lists.freedesktop.org>,\n <david@redhat.com>, <lorenzo.stoakes@oracle.com>, <lyude@redhat.com>,\n <dakr@kernel.org>, <airlied@gmail.com>, <simona@ffwll.ch>,\n <rcampbell@nvidia.com>, 
<mpenttil@redhat.com>, <jgg@nvidia.com>,\n <willy@infradead.org>, <linuxppc-dev@lists.ozlabs.org>,\n <intel-xe@lists.freedesktop.org>, <jgg@ziepe.ca>, <Felix.Kuehling@amd.com>","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","Date":"Thu, 22 Jan 2026 22:09:41 -0500","X-Mailer":"MailMate (2.0r6290)","Message-ID":"<0C16A79F-5A7B-4358-9806-7F78E7EA8EE6@nvidia.com>","In-Reply-To":"<DBBD65CA-A8F2-40AC-AFA0-FC95CBDB3DF5@nvidia.com>","References":"<20260107091823.68974-1-jniethe@nvidia.com>\n <20260107091823.68974-12-jniethe@nvidia.com>\n <36F96303-8FBB-4350-9472-52CC50BAB956@nvidia.com>\n <c9afedc6-f763-410f-b78b-522b98122f06@nvidia.com>\n <6C5F185E-BB12-4B01-8283-F2C956E84AA3@nvidia.com>\n <fd4b6553-3e9e-4829-a12f-51d29a5d7571@nvidia.com>\n <16770FCE-A248-4184-ABFC-94C02C0B30F3@nvidia.com>\n <sezye7d27h7pioazf4k3wfrdbradxovmdqyyp5slhljkmcnxf5@ckj3ujikhsnj>\n <DBBD65CA-A8F2-40AC-AFA0-FC95CBDB3DF5@nvidia.com>","Content-Type":"text/plain; charset=UTF-8","Content-Transfer-Encoding":"quoted-printable","X-ClientProxiedBy":"SJ0PR13CA0048.namprd13.prod.outlook.com\n (2603:10b6:a03:2c2::23) To DS7PR12MB9473.namprd12.prod.outlook.com\n (2603:10b6:8:252::5)","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","List-Id":"<linuxppc-dev.lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev+help@lists.ozlabs.org>","List-Owner":"<mailto:linuxppc-dev+owner@lists.ozlabs.org>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Archive":"<https://lore.kernel.org/linuxppc-dev/>,\n  <https://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Subscribe":"<mailto:linuxppc-dev+subscribe@lists.ozlabs.org>,\n  <mailto:linuxppc-dev+subscribe-digest@lists.ozlabs.org>,\n  
<mailto:linuxppc-dev+subscribe-nomail@lists.ozlabs.org>","List-Unsubscribe":"<mailto:linuxppc-dev+unsubscribe@lists.ozlabs.org>","Precedence":"list","MIME-Version":"1.0","X-MS-PublicTrafficType":"Email","X-MS-TrafficTypeDiagnostic":"DS7PR12MB9473:EE_|CH3PR12MB7665:EE_","X-MS-Office365-Filtering-Correlation-Id":"845717da-657a-4369-39ef-08de5a2cddd9","X-MS-Exchange-SenderADCheck":"1","X-MS-Exchange-AntiSpam-Relay":"0","X-Microsoft-Antispam":"BCL:0;ARA:13230040|376014|7416014|366016|1800799024;","X-Microsoft-Antispam-Message-Info":"=?utf-8?q?uFUfCaksLZLpcvCd+zbgpjPd0BZAYb+?=\n\t=?utf-8?q?JNG8asbQ+zHLEaNrlpLLvY4Ta/4LQZAqRULabu3Bu+WePpQ0KQAK6gCC/V3xCMUuT?=\n\t=?utf-8?q?2357yLEcVLtIUoa8QFqFVh4K5U/Dt1EP2aZSuuCQE07i9o+fpR7Tf2JUHQMyUom5Q?=\n\t=?utf-8?q?De+I9mzpWmzeNq18CGfeZtf8DBDV6p5BH4s8EvWj8Yh4kypQxrYKQ6DX4Eog6jyeE?=\n\t=?utf-8?q?Lpv4YOi7TVA7i7neHtkW6F0GJxfQjQexSvDGwxlG4UBm7eVqFNcKMi0q5VkOsuawk?=\n\t=?utf-8?q?DbMX8mVHhkQHZ9o+GleXS0MO9MHkofLQnIowpQdRE6aMhc1NH2+sDxxHk8Kp+M765?=\n\t=?utf-8?q?tI4qMohZYPXvtJhaSw4oShbZmzoyTwudOk0ioGAdArNIciyEEn3ndhb+w3QsC1/Un?=\n\t=?utf-8?q?oMbcm5bgYtcgVsupDdTIg3tppvjShL0DUSgh0WBGlGCNVNzBLCLdwxPATF4qG17dv?=\n\t=?utf-8?q?Jm7wtVAA8obDrk7s+9gy1vWF15cTzJwoNLc7pAJdrzySyTez/dTC9+HZGT6zbcd8T?=\n\t=?utf-8?q?Wk5zr+NOI22q8+uJD2F9+6IhH1+2rY26Srp0U0PyB2Qp0E9Dp81MEB1dcev5W3JM5?=\n\t=?utf-8?q?/3Wf7xevFZ+t3jXay6DoubRYGgQ33xUS3snyZAZPk2Pevf62VCKDm0Mm867G+knoX?=\n\t=?utf-8?q?ocey/0gJmSFwJMmk2FdmyG/cQJDM6o4cQC022wLhxWK9AtwwjWhA8WI5SvmS4lxJu?=\n\t=?utf-8?q?LTDDjKBRJJuBqFdBVmEdiE+CFWm8nQz+GKrfEhVkIbjfzzZ/sZB6Ub8CIi6aZOzss?=\n\t=?utf-8?q?FgAa5T/qbJx5+yMUGo1Ax0sWcDsYLMAbsdDWVTKar7/JnxgCsgB9t/8v7Z9exlxx1?=\n\t=?utf-8?q?Hmc/ijYwu0yMdYIY8mrNE0nwILDGTQ+kFXgT8UWGrKUNGEgPm1TFDJ98j5DPDKV3t?=\n\t=?utf-8?q?4IBoH1xweOC0dm8ywTVqlnYrhJnpYvNYYNtehl5WBJSAmHbbnzFZaoxEo9ruRs6n7?=\n\t=?utf-8?q?HMcHbPnLFGXmhrvKuPgYK9Vb8LjcnPOU6AoQAoC6BZzsovc7soExMS7/51NYP5UMg?=\n\t=?utf-8?q?IvtbC9DDRNWEg9SOx+K+ewthM3qiYYY5SJAgl3WUady2MhLcnsu2I1kWE2WRAKmlJ?=\n\t=?utf-8?q?WmA5pMXrGNTt18am8/tUWPt4ILe1JDQ
QE37DDrVT2JJm6DnL+AldJZHIBFIp7xbJJ?=\n\t=?utf-8?q?GJFsrcJK+cLMeHd+Rmx8FgmlVYzXuzElAZPL2DVe/zrn1IoiZaxyf8JDCS3xBlJt7?=\n\t=?utf-8?q?0IHV/YuEA7+gVDN1EWosiVAuydqyYB9L/N5v/NQWmW9qT9kwYfp1H86KmUgIuYBka?=\n\t=?utf-8?q?yGN4KGErRDsC2cmelusm6HNkasQoDptIWJQLmW5kbdp6Q2S+5LPPyOiBPZKjwtJal?=\n\t=?utf-8?q?VchWS7rpVbF496rRAZjT8hh4W/OgTkUfFtBQGBtyDRSDYzsDNwOdI6W37ftilY8kD?=\n\t=?utf-8?q?m4/H+es/LqDrgmiRNjEbh6wNSw+RA4z5KTjGv+REog5R25pl1P0N7/FAkCg88iU+n?=\n\t=?utf-8?q?LW2PICLehGhALSoQXgFqikErtt9VTS8xzpIBBtC6z7d1B4+Kkadgc=3D?=","X-Forefront-Antispam-Report":"\n\tCIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR12MB9473.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(376014)(7416014)(366016)(1800799024);DIR:OUT;SFP:1101;","X-MS-Exchange-AntiSpam-MessageData-ChunkCount":"1","X-MS-Exchange-AntiSpam-MessageData-0":"=?utf-8?q?eZ2Hv8krlUArvIGeQ57s0SFNxDEb?=\n\t=?utf-8?q?bX2QM0Bp/Qw3hrWoCeqXjrJV6nSdzvD2EQrudDmbWWn0vi8ojKs1M2eTaOM70ixLn?=\n\t=?utf-8?q?cy4AGehlu6lnyd+wLKFj8he1IZFtJ+WzMuP5j7UaMrs9S7OrrN9CNwB22r9VsHDhv?=\n\t=?utf-8?q?aBJBwzX3Pl6KYFEkGwwDO57Bqt4fgD02F4XaO/U2tSeYHHiOYadEQ1Q0vLxyXawGn?=\n\t=?utf-8?q?qG7Gt47ckSh0qY+1mHe8z9Fjglzac5HIqvp0VWN9ZlGDHguE/yqdDNOn1PMaNknUf?=\n\t=?utf-8?q?RMiFqm9LRgswNbleRVPwzDDyGCpIFKmjK/XPQTgavWi7/jAXRZ73pMAMHPnA3DgqG?=\n\t=?utf-8?q?+b+Zf3rjGJS5yz8Fmig/3yIlmSt4KTfa1WsFskfphHb551mna1DWeM92ZVnjxLbvU?=\n\t=?utf-8?q?vcA1CQ1KcSYltmWLWWfWeKVDQdb0UXPwtvcOCXfAMjj+7bHbCFGc/ZbP+Mx5Vb0vF?=\n\t=?utf-8?q?QGXtqSeErmjuiseVad1K47+HMQXeFZ9VCKsvntRcnmBjrUn9L46AQfqBdLfJrbefC?=\n\t=?utf-8?q?cwT8hZEhfc4O8MniGWzTnNTof6LweFEj9Gkf5dsvyumRJFKhJBSzWKc0VTUn7dpBw?=\n\t=?utf-8?q?y42faSK3Cd0MTEcMKGCgrAAH7MXMztzdsA06ik6nUS32nczzwcpdZq1okOTsPiZx9?=\n\t=?utf-8?q?LIU5TOJ6u6/Thk+hqm0z/YBb2Kx/kEcvU8iXg9BKNdhaTDJddHBQ5+dWZWRDWbJNH?=\n\t=?utf-8?q?wb1mArv15EEgvRgJ9NYen6S1JNmu1tn8a0XyJp8py9xhyvZmah8w60A7Vxy0PCmHb?=\n\t=?utf-8?q?MfnuL9hLqM/leX2jy8i6gb6TnUtDQd5DQCv32UyNu/Z754MHmYfSJHJgOJwSGWlOI?=\n\t=?utf-8?q?TDXCCTdAmcWnlrzVoGsHuDw/D08KM0Hj9o3S8TmKzB1b5JjVeufZQ0Uc4o36e2
lyM?=\n\t=?utf-8?q?ex7hF0D865MMWaZz+ysH384/QTwGha3J3X6wjLbgrK8c/rM/6DzARqDSfPPB4nV3a?=\n\t=?utf-8?q?QTrOLyEnxvqL6Id69DMjw1jOKAA/rZzpaGUDp9faB94VV9gEYB2+ZGvgWJ31mspKG?=\n\t=?utf-8?q?APs/KDqupMP5Io4na8qtR2YYLOnstNOtkxeSy4yvrsCXywZhiuv8XVNLM6z/+wyAO?=\n\t=?utf-8?q?PznTRvrK5zqNjzJxw95yXxXIDNf9wj82q5AZTyfJN5UIvKXcqQj0HRIXM6tLFlH95?=\n\t=?utf-8?q?N/COlb8e+X9HdzZ4In6OO9nDOtPdQnQuNWeL/LPq4xKIjzMwmxWaQt0nzZEQZaqB9?=\n\t=?utf-8?q?XCaIrxd1wMcgJ/svEqg8Lb5DFJivZTfYpDDfrBHVt3bxTGvqQEUOTLJq4J1NOsDUv?=\n\t=?utf-8?q?hcrotgY6Yn84MgIXFr8T3BREXafF7eg47ckbkuJit2LU0K7cHQ40FGgSHCr7PiPAd?=\n\t=?utf-8?q?Gwz9QmvfuOrG/0F4UaOCwYIq8uaoKGPzs3RGcFsD1A6ztAFJ2aeJa/ysJcafNkk6L?=\n\t=?utf-8?q?f7ntonGX2zzJk2UFkN9xs4nQotq24qfcbqBMjgQLwtzw0tE2uXaKU6KvbCs3Dekky?=\n\t=?utf-8?q?T5XAUuUipImNH+j6cxC97QwJxFPBlkx9Wo4+WL1iRYZ6KPixCQ3vQFz0FH/Ny5KeD?=\n\t=?utf-8?q?OF7XZLB3xrobO8dVFiuNMW6OwLt/wYSzUq5SXIgVuLsAzwmeV75FSsjxl8NNScUgX?=\n\t=?utf-8?q?xeUYLg6XYn1MKwZzlb9WBTmioLMjq3QjnoeQIzC7Mz1vVqpH65tshDaXybz//xDpn?=\n\t=?utf-8?q?KM9fWPaz7h?=","X-OriginatorOrg":"Nvidia.com","X-MS-Exchange-CrossTenant-Network-Message-Id":"\n 845717da-657a-4369-39ef-08de5a2cddd9","X-MS-Exchange-CrossTenant-AuthSource":"DS7PR12MB9473.namprd12.prod.outlook.com","X-MS-Exchange-CrossTenant-AuthAs":"Internal","X-MS-Exchange-CrossTenant-OriginalArrivalTime":"23 Jan 2026 03:09:48.1776\n (UTC)","X-MS-Exchange-CrossTenant-FromEntityHeader":"Hosted","X-MS-Exchange-CrossTenant-Id":"43083d15-7273-40c1-b7db-39efd9ccc17a","X-MS-Exchange-CrossTenant-MailboxType":"HOSTED","X-MS-Exchange-CrossTenant-UserPrincipalName":"\n r7MYBcC2NAcFIWh0RY88Mxum+z9zyc6evITVs1eqBTs2oFTbdpIZ/qdV7WVF1+Fx","X-MS-Exchange-Transport-CrossTenantHeadersStamped":"CH3PR12MB7665","X-Spam-Status":"No, score=-0.2 required=3.0 tests=ARC_SIGNED,ARC_VALID,\n\tDKIMWL_WL_HIGH,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,\n\tSPF_HELO_NONE,SPF_PASS autolearn=disabled version=4.0.1 OzLabs 8","X-Spam-Checker-Version":"SpamAssassin 4.0.1 (2024-03-25) on 
lists.ozlabs.org"}},{"id":3640728,"web_url":"http://patchwork.ozlabs.org/comment/3640728/","msgid":"<l5jxxobpj6shwuuthsyxlzfnhs6dx4spvzcqxrycn4chtywniq@e2eaio4nhorq>","date":"2026-01-23T05:38:49","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":81117,"url":"http://patchwork.ozlabs.org/api/people/81117/","name":"Alistair Popple","email":"apopple@nvidia.com"},"content":"On 2026-01-23 at 14:09 +1100, Zi Yan <ziy@nvidia.com> wrote...\n> On 22 Jan 2026, at 22:06, Zi Yan wrote:\n> \n> > On 22 Jan 2026, at 21:02, Alistair Popple wrote:\n> >\n> >> On 2026-01-21 at 10:06 +1100, Zi Yan <ziy@nvidia.com> wrote...\n> >>> On 20 Jan 2026, at 18:02, Jordan Niethe wrote:\n> >>>\n> >>>> Hi,\n> >>>>\n> >>>> On 21/1/26 09:53, Zi Yan wrote:\n> >>>>> On 20 Jan 2026, at 17:33, Jordan Niethe wrote:\n> >>>>>\n> >>>>>> On 14/1/26 07:04, Zi Yan wrote:\n> >>>>>>> On 7 Jan 2026, at 4:18, Jordan Niethe wrote:\n> >>>>>>>\n> >>>>>>>> Currently when creating these device private struct pages, the first\n> >>>>>>>> step is to use request_free_mem_region() to get a range of physical\n> >>>>>>>> address space large enough to represent the devices memory. This\n> >>>>>>>> allocated physical address range is then remapped as device private\n> >>>>>>>> memory using memremap_pages().\n> >>>>>>>>\n> >>>>>>>> Needing allocation of physical address space has some problems:\n> >>>>>>>>\n> >>>>>>>>     1) There may be insufficient physical address space to represent the\n> >>>>>>>>        device memory. KASLR reducing the physical address space and VM\n> >>>>>>>>        configurations with limited physical address space increase the\n> >>>>>>>>        likelihood of hitting this especially as device memory increases. 
This\n> >>>>>>>>        has been observed to prevent device private from being initialized.\n> >>>>>>>>\n> >>>>>>>>     2) Attempting to add the device private pages to the linear map at\n> >>>>>>>>        addresses beyond the actual physical memory causes issues on\n> >>>>>>>>        architectures like aarch64 meaning the feature does not work there.\n> >>>>>>>>\n> >>>>>>>> Instead of using the physical address space, introduce a device private\n> >>>>>>>> address space and allocate device regions from there to represent the\n> >>>>>>>> device private pages.\n> >>>>>>>>\n> >>>>>>>> Introduce a new interface memremap_device_private_pagemap() that\n> >>>>>>>> allocates a requested amount of device private address space and creates\n> >>>>>>>> the necessary device private pages.\n> >>>>>>>>\n> >>>>>>>> To support this new interface, struct dev_pagemap needs some changes:\n> >>>>>>>>\n> >>>>>>>>     - Add a new dev_pagemap::nr_pages field as an input parameter.\n> >>>>>>>>     - Add a new dev_pagemap::pages array to store the device\n> >>>>>>>>       private pages.\n> >>>>>>>>\n> >>>>>>>> When using memremap_device_private_pagemap(), rather than passing in\n> >>>>>>>> dev_pagemap::ranges[dev_pagemap::nr_ranges] of physical address space to\n> >>>>>>>> be remapped, dev_pagemap::nr_ranges will always be 1, and the device\n> >>>>>>>> private range that is reserved is returned in dev_pagemap::range.\n> >>>>>>>>\n> >>>>>>>> Forbid calling memremap_pages() with dev_pagemap::ranges::type =\n> >>>>>>>> MEMORY_DEVICE_PRIVATE.\n> >>>>>>>>\n> >>>>>>>> Represent this device private address space using a new\n> >>>>>>>> device_private_pgmap_tree maple tree. 
This tree maps a given device\n> >>>>>>>> private address to a struct dev_pagemap, where a specific device private\n> >>>>>>>> page may then be looked up in that dev_pagemap::pages array.\n> >>>>>>>>\n> >>>>>>>> Device private address space can be reclaimed and the associated device\n> >>>>>>>> private pages freed using the corresponding new\n> >>>>>>>> memunmap_device_private_pagemap() interface.\n> >>>>>>>>\n> >>>>>>>> Because the device private pages now live outside the physical address\n> >>>>>>>> space, they no longer have a normal PFN. This means that page_to_pfn(),\n> >>>>>>>> et al. are no longer meaningful.\n> >>>>>>>>\n> >>>>>>>> Introduce helpers:\n> >>>>>>>>\n> >>>>>>>>     - device_private_page_to_offset()\n> >>>>>>>>     - device_private_folio_to_offset()\n> >>>>>>>>\n> >>>>>>>> to take a given device private page / folio and return its offset within\n> >>>>>>>> the device private address space.\n> >>>>>>>>\n> >>>>>>>> Update the places where we previously converted a device private page to\n> >>>>>>>> a PFN to use these new helpers. 
When we encounter a device private\n> >>>>>>>> offset, instead of looking up its page within the pagemap use\n> >>>>>>>> device_private_offset_to_page() instead.\n> >>>>>>>>\n> >>>>>>>> Update the existing users:\n> >>>>>>>>\n> >>>>>>>>    - lib/test_hmm.c\n> >>>>>>>>    - ppc ultravisor\n> >>>>>>>>    - drm/amd/amdkfd\n> >>>>>>>>    - gpu/drm/xe\n> >>>>>>>>    - gpu/drm/nouveau\n> >>>>>>>>\n> >>>>>>>> to use the new memremap_device_private_pagemap() interface.\n> >>>>>>>>\n> >>>>>>>> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>\n> >>>>>>>> Signed-off-by: Alistair Popple <apopple@nvidia.com>\n> >>>>>>>>\n> >>>>>>>> ---\n> >>>>>>>>\n> >>>>>>>> NOTE: The updates to the existing drivers have only been compile tested.\n> >>>>>>>> I'll need some help in testing these drivers.\n> >>>>>>>>\n> >>>>>>>> v1:\n> >>>>>>>> - Include NUMA node parameter for memremap_device_private_pagemap()\n> >>>>>>>> - Add devm_memremap_device_private_pagemap() and friends\n> >>>>>>>> - Update existing users of memremap_pages():\n> >>>>>>>>       - ppc ultravisor\n> >>>>>>>>       - drm/amd/amdkfd\n> >>>>>>>>       - gpu/drm/xe\n> >>>>>>>>       - gpu/drm/nouveau\n> >>>>>>>> - Update for HMM huge page support\n> >>>>>>>> - Guard device_private_offset_to_page and friends with CONFIG_ZONE_DEVICE\n> >>>>>>>>\n> >>>>>>>> v2:\n> >>>>>>>> - Make sure last member of struct dev_pagemap remains DECLARE_FLEX_ARRAY(struct range, ranges);\n> >>>>>>>> ---\n> >>>>>>>>    Documentation/mm/hmm.rst                 |  11 +-\n> >>>>>>>>    arch/powerpc/kvm/book3s_hv_uvmem.c       |  41 ++---\n> >>>>>>>>    drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  23 +--\n> >>>>>>>>    drivers/gpu/drm/nouveau/nouveau_dmem.c   |  35 ++--\n> >>>>>>>>    drivers/gpu/drm/xe/xe_svm.c              |  28 +---\n> >>>>>>>>    include/linux/hmm.h                      |   3 +\n> >>>>>>>>    include/linux/leafops.h                  |  16 +-\n> >>>>>>>>    include/linux/memremap.h                 |  64 +++++++-\n> >>>>>>>>    
include/linux/migrate.h                  |   6 +-\n> >>>>>>>>    include/linux/mm.h                       |   2 +\n> >>>>>>>>    include/linux/rmap.h                     |   5 +-\n> >>>>>>>>    include/linux/swapops.h                  |  10 +-\n> >>>>>>>>    lib/test_hmm.c                           |  69 ++++----\n> >>>>>>>>    mm/debug.c                               |   9 +-\n> >>>>>>>>    mm/memremap.c                            | 193 ++++++++++++++++++-----\n> >>>>>>>>    mm/mm_init.c                             |   8 +-\n> >>>>>>>>    mm/page_vma_mapped.c                     |  19 ++-\n> >>>>>>>>    mm/rmap.c                                |  43 +++--\n> >>>>>>>>    mm/util.c                                |   5 +-\n> >>>>>>>>    19 files changed, 391 insertions(+), 199 deletions(-)\n> >>>>>>>>\n> >>>>>>> <snip>\n> >>>>>>>\n> >>>>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h\n> >>>>>>>> index e65329e1969f..b36599ab41ba 100644\n> >>>>>>>> --- a/include/linux/mm.h\n> >>>>>>>> +++ b/include/linux/mm.h\n> >>>>>>>> @@ -2038,6 +2038,8 @@ static inline unsigned long memdesc_section(memdesc_flags_t mdf)\n> >>>>>>>>     */\n> >>>>>>>>    static inline unsigned long folio_pfn(const struct folio *folio)\n> >>>>>>>>    {\n> >>>>>>>> +\tVM_BUG_ON(folio_is_device_private(folio));\n> >>>>>>>\n> >>>>>>> Please use VM_WARN_ON instead.\n> >>>>>>\n> >>>>>> ack.\n> >>>>>>\n> >>>>>>>\n> >>>>>>>> +\n> >>>>>>>>    \treturn page_to_pfn(&folio->page);\n> >>>>>>>>    }\n> >>>>>>>>\n> >>>>>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h\n> >>>>>>>> index 57c63b6a8f65..c1561a92864f 100644\n> >>>>>>>> --- a/include/linux/rmap.h\n> >>>>>>>> +++ b/include/linux/rmap.h\n> >>>>>>>> @@ -951,7 +951,7 @@ static inline unsigned long page_vma_walk_pfn(unsigned long pfn)\n> >>>>>>>>    static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n> >>>>>>>>    {\n> >>>>>>>>    \tif (folio_is_device_private(folio))\n> >>>>>>>> -\t\treturn 
page_vma_walk_pfn(folio_pfn(folio)) |\n> >>>>>>>> +\t\treturn page_vma_walk_pfn(device_private_folio_to_offset(folio)) |\n> >>>>>>>>    \t\t       PVMW_PFN_DEVICE_PRIVATE;\n> >>>>>>>>\n> >>>>>>>>    \treturn page_vma_walk_pfn(folio_pfn(folio));\n> >>>>>>>> @@ -959,6 +959,9 @@ static inline unsigned long folio_page_vma_walk_pfn(const struct folio *folio)\n> >>>>>>>>\n> >>>>>>>>    static inline struct page *page_vma_walk_pfn_to_page(unsigned long pvmw_pfn)\n> >>>>>>>>    {\n> >>>>>>>> +\tif (pvmw_pfn & PVMW_PFN_DEVICE_PRIVATE)\n> >>>>>>>> +\t\treturn device_private_offset_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n> >>>>>>>> +\n> >>>>>>>>    \treturn pfn_to_page(pvmw_pfn >> PVMW_PFN_SHIFT);\n> >>>>>>>>    }\n> >>>>>>>\n> >>>>>>> <snip>\n> >>>>>>>\n> >>>>>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c\n> >>>>>>>> index 96c525785d78..141fe5abd33f 100644\n> >>>>>>>> --- a/mm/page_vma_mapped.c\n> >>>>>>>> +++ b/mm/page_vma_mapped.c\n> >>>>>>>> @@ -107,6 +107,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,\n> >>>>>>>>    static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n> >>>>>>>>    {\n> >>>>>>>>    \tunsigned long pfn;\n> >>>>>>>> +\tbool device_private = false;\n> >>>>>>>>    \tpte_t ptent = ptep_get(pvmw->pte);\n> >>>>>>>>\n> >>>>>>>>    \tif (pvmw->flags & PVMW_MIGRATION) {\n> >>>>>>>> @@ -115,6 +116,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n> >>>>>>>>    \t\tif (!softleaf_is_migration(entry))\n> >>>>>>>>    \t\t\treturn false;\n> >>>>>>>>\n> >>>>>>>> +\t\tif (softleaf_is_migration_device_private(entry))\n> >>>>>>>> +\t\t\tdevice_private = true;\n> >>>>>>>> +\n> >>>>>>>>    \t\tpfn = softleaf_to_pfn(entry);\n> >>>>>>>>    \t} else if (pte_present(ptent)) {\n> >>>>>>>>    \t\tpfn = pte_pfn(ptent);\n> >>>>>>>> @@ -127,8 +131,14 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n> >>>>>>>>    \t\t\treturn false;\n> >>>>>>>>\n> 
>>>>>>>>    \t\tpfn = softleaf_to_pfn(entry);\n> >>>>>>>> +\n> >>>>>>>> +\t\tif (softleaf_is_device_private(entry))\n> >>>>>>>> +\t\t\tdevice_private = true;\n> >>>>>>>>    \t}\n> >>>>>>>>\n> >>>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n> >>>>>>>> +\t\treturn false;\n> >>>>>>>> +\n> >>>>>>>>    \tif ((pfn + pte_nr - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n> >>>>>>>>    \t\treturn false;\n> >>>>>>>>    \tif (pfn > ((pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1))\n> >>>>>>>> @@ -137,8 +147,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)\n> >>>>>>>>    }\n> >>>>>>>>\n> >>>>>>>>    /* Returns true if the two ranges overlap.  Careful to not overflow. */\n> >>>>>>>> -static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)\n> >>>>>>>> +static bool check_pmd(unsigned long pfn, bool device_private, struct page_vma_mapped_walk *pvmw)\n> >>>>>>>>    {\n> >>>>>>>> +\tif ((device_private) ^ !!(pvmw->pfn & PVMW_PFN_DEVICE_PRIVATE))\n> >>>>>>>> +\t\treturn false;\n> >>>>>>>> +\n> >>>>>>>>    \tif ((pfn + HPAGE_PMD_NR - 1) < (pvmw->pfn >> PVMW_PFN_SHIFT))\n> >>>>>>>>    \t\treturn false;\n> >>>>>>>>    \tif (pfn > (pvmw->pfn >> PVMW_PFN_SHIFT) + pvmw->nr_pages - 1)\n> >>>>>>>> @@ -255,6 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n> >>>>>>>>\n> >>>>>>>>    \t\t\t\tif (!softleaf_is_migration(entry) ||\n> >>>>>>>>    \t\t\t\t    !check_pmd(softleaf_to_pfn(entry),\n> >>>>>>>> +\t\t\t\t\t       softleaf_is_device_private(entry) ||\n> >>>>>>>> +\t\t\t\t\t       softleaf_is_migration_device_private(entry),\n> >>>>>>>>    \t\t\t\t\t       pvmw))\n> >>>>>>>>    \t\t\t\t\treturn not_found(pvmw);\n> >>>>>>>>    \t\t\t\treturn true;\n> >>>>>>>> @@ -262,7 +277,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)\n> >>>>>>>>    \t\t\tif (likely(pmd_trans_huge(pmde))) {\n> >>>>>>>>    \t\t\t\tif (pvmw->flags & PVMW_MIGRATION)\n> >>>>>>>>    \t\t\t\t\treturn 
not_found(pvmw);\n> >>>>>>>> -\t\t\t\tif (!check_pmd(pmd_pfn(pmde), pvmw))\n> >>>>>>>> +\t\t\t\tif (!check_pmd(pmd_pfn(pmde), false, pvmw))\n> >>>>>>>>    \t\t\t\t\treturn not_found(pvmw);\n> >>>>>>>>    \t\t\t\treturn true;\n> >>>>>>>>    \t\t\t}\n> >>>>>>>\n> >>>>>>> It seems to me that you can add a new flag like “bool is_device_private” to\n> >>>>>>> indicate whether pfn is a device private index instead of pfn without\n> >>>>>>> manipulating pvmw->pfn itself.\n> >>>>>>\n> >>>>>> We could do it like that, however my concern with using a new param was that\n> >>>>>> storing this info separately might make it easier to misuse a device private\n> >>>>>> index as a regular pfn.\n> >>>>>>\n> >>>>>> It seemed like it could be easy to overlook both when creating the pvmw and\n> >>>>>> then when accessing the pfn.\n> >>>>>\n> >>>>> That is why I asked for a helper function like page_vma_walk_pfn(pvmw) to\n> >>>>> return the converted pfn instead of pvmw->pfn directly. You can add a comment\n> >>>>> to ask people to use the helper function and even mark pvmw->pfn /* do not use\n> >>>>> directly */.\n> >>>>\n> >>>> Yeah I agree that is a good idea.\n> >>>>\n> >>>>>\n> >>>>> In addition, your patch manipulates pfn by left shifting it by 1. Are you sure\n> >>>>> there is no weird arch having pfns with bit 63 being 1? Your change could\n> >>>>> break it, right?\n> >>>>\n> >>>> Currently for migrate pfns we left shift pfns by MIGRATE_PFN_SHIFT (6), so I\n> >>>> thought doing something similar here should be safe.\n> >>>\n> >>> Yeah, but that limits to archs supporting HMM. 
page_vma_mapped_walk is used\n> >>> by almost every arch, so it has a broader impact.\n> >>\n> >> We need to be a bit careful about what we mean when we say \"HMM\" in the kernel.\n> >>\n> >> Specifically MIGRATE_PFN_SHIFT is used with migrate_vma/migrate_device, which\n> >> is the migration half of \"HMM\" which does depend on CONFIG_DEVICE_MIGRATION or\n> >> really just CONFIG_ZONE_DEVICE making it somewhat arch specific.\n> >>\n> >> However hmm_range_fault() does something similar - see the definition of\n> >> hmm_pfn_flags - it actually steals the top 11 bits of a pfn for flags, and it is\n> >> not architecture specific. It only depends on CONFIG_MMU.\n> >\n> > Oh, that is hacky. But are HMM PFNs with any flag exposed to code outside HMM?\n> > Currently, device private needs to reserve PFNs for struct page, so I assume\n> > only the reserved PFNs are seen by outsiders. Otherwise, when outsiders see\n> > an HMM PFN with a flag, pfn_to_page() on such a PFN will read a non-existent\n> > struct page, right?\n\nAny user of hmm_range_fault() would be exposed to an issue - most users of\nhmm_range_fault() use it to grab a PFN (i.e. a physical address) to map into some\nremote page table. So if some important bit in the PFN is dropped,\nthat could potentially result in users mapping the wrong physical address or\npage.\n\nAnd just to be clear, this is completely orthogonal to any DEVICE_PRIVATE\nspecific issue - it existed well before any changes here. Of course it may not\nactually be an issue - do we know if there are any architectures that actually\nuse upper physical address bits? I assume not, because how would they fit in a\npage table entry. But I don't really know.\n\n> > For this page_vma_mapped_walk code, it is manipulating PFNs used by everyone,\n> > not just HMM, and can potentially (might be very rare) alter their values\n> > after shifts.\n> \n> \n> > And if an HMM PFN with HMM_PFN_VALID is processed by the code,\n> > the HMM PFN will lose the HMM_PFN_VALID bit. 
So I guess HMM PFN is not showing\n> > outside HMM code.\n> \n> Oops, this code is removing PFN reservation mechanism, so please ignore\n> the above two sentences.\n> \n> >\n> >>\n> >> Now I'm not saying this implies it actually works on all architectures as I\n> >> agree the page_vma_mapped_walk code is used much more widely. Rather I'm just\n> >> pointing out if there are issues with some architectures using high PFN bits\n> >> then we likely have a problem here too :-)\n> >\n> >\n> > Best Regards,\n> > Yan, Zi\n> \n> \n> Best Regards,\n> Yan, Zi","headers":{"Return-Path":"\n <linuxppc-dev+bounces-16184-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=kqanfFcQ;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n (client-ip=2404:9400:21b9:f100::1; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-16184-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)","lists.ozlabs.org;\n arc=pass smtp.remote-ip=\"2a01:111:f403:c10d::1\" arc.chain=microsoft.com","lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com","lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=kqanfFcQ;\n\tdkim-atps=neutral","lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nvidia.com\n (client-ip=2a01:111:f403:c10d::1;\n helo=sn4pr2101cu001.outbound.protection.outlook.com;\n envelope-from=apopple@nvidia.com; receiver=lists.ozlabs.org)","dkim=none (message not signed)\n header.d=none;dmarc=none action=none header.from=nvidia.com;"],"Received":["from 
lists.ozlabs.org by legolas.ozlabs.org (Postfix); Fri, 23 Jan 2026 16:39:24 +1100 (AEDT)"],"Date":"Fri, 23 Jan 2026 16:38:49 +1100","From":"Alistair Popple <apopple@nvidia.com>","To":"Zi Yan <ziy@nvidia.com>","Cc":"Jordan Niethe <jniethe@nvidia.com>, linux-mm@kvack.org,\n\tbalbirs@nvidia.com, matthew.brost@intel.com, akpm@linux-foundation.org,\n\tlinux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,\n david@redhat.com,\n\tlorenzo.stoakes@oracle.com, lyude@redhat.com, dakr@kernel.org,\n airlied@gmail.com,\n\tsimona@ffwll.ch, rcampbell@nvidia.com, mpenttil@redhat.com, jgg@nvidia.com,\n\twilly@infradead.org, linuxppc-dev@lists.ozlabs.org,\n intel-xe@lists.freedesktop.org,\n\tjgg@ziepe.ca, Felix.Kuehling@amd.com","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","Message-ID":"<l5jxxobpj6shwuuthsyxlzfnhs6dx4spvzcqxrycn4chtywniq@e2eaio4nhorq>","In-Reply-To":"<0C16A79F-5A7B-4358-9806-7F78E7EA8EE6@nvidia.com>","List-Id":"<linuxppc-dev.lists.ozlabs.org>","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","X-Spam-Checker-Version":"SpamAssassin 
4.0.1 (2024-03-25) on lists.ozlabs.org"}},{"id":3640974,"web_url":"http://patchwork.ozlabs.org/comment/3640974/","msgid":"<20260123135050.GV1134360@nvidia.com>","date":"2026-01-23T13:50:50","subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","submitter":{"id":79424,"url":"http://patchwork.ozlabs.org/api/people/79424/","name":"Jason Gunthorpe","email":"jgg@nvidia.com"},"content":"On Fri, Jan 23, 2026 at 04:38:49PM +1100, Alistair Popple wrote:\n> > >> We need to be a bit careful by what we mean when we say \"HMM\" in the kernel.\n> > >>\n> > >> Specifically MIGRATE_PFN_SHIFT is used with migrate_vma/migrate_device, which\n> > >> is the migration half of \"HMM\" which does depend on CONFIG_DEVICE_MIGRATION or\n> > >> really just CONFIG_ZONE_DEVICE making it somewhat arch specific.\n> > >>\n> > >> However hmm_range_fault() does something similar - see the definition of\n> > >> hmm_pfn_flags - it actually steals the top 11 bits of a pfn for flags, and it is\n> > >> not architecture specific. It only depends on CONFIG_MMU.\n> > >\n> > > Oh, that is hacky. But are HMM PFNs with any flag exposed to code outside HMM?\n> > > Currently, device private needs to reserve PFNs for struct page, so I assume\n> > > only the reserved PFNs are seen by outsiders. Otherwise, when outsiders see\n> > > a HMM PFN with a flag, pfn_to_page() on such a PFN will read non exist\n> > > struct page, right?\n> \n> Any user of hmm_range_fault() would be exposed to an issue - most users of\n> hmm_range_fault() use it to grab a PFN (ie. physical address) to map into some\n> remote page table. So potentially if some important bit in the PFN is dropped\n> that could potentially result in users mapping the wrong physical address or\n> page.\n\nTrim the quotes guys..\n\nhmm is arguably returning phys_addr_t >> PAGE_SHIFT. 
This is a lossless\ntranslation because everything is aligned, it isn't hacky.\n\nThe value it returns is not a \"pfn\", it is a hmm structure that has to\nbe decoded to something else using a hmm helper function.\n\nI think we take a number of liberties going between pte, phys_addr_t,\npfn. If there are arches that use a special encoding for the mm PFN\nthen range_fault would need to call converter functions to get to/from\nphys_addr_t.\n\nJason","headers":{"Return-Path":"\n <linuxppc-dev+bounces-16240-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=EvzMft5Y;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n (client-ip=112.213.38.117; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-16240-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)","lists.ozlabs.org;\n arc=pass smtp.remote-ip=\"2a01:111:f403:c007::2\" arc.chain=microsoft.com","lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com","lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=EvzMft5Y;\n\tdkim-atps=neutral","lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nvidia.com\n (client-ip=2a01:111:f403:c007::2;\n helo=mw6pr02cu001.outbound.protection.outlook.com;\n envelope-from=jgg@nvidia.com; receiver=lists.ozlabs.org)","dkim=none (message not signed)\n header.d=none;dmarc=none action=none header.from=nvidia.com;"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [112.213.38.117])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t 
key-exchange x25519) by legolas.ozlabs.org (Postfix); Sat, 24 Jan 2026 00:51:18 +1100 (AEDT)"],"Date":"Fri, 23 Jan 2026 09:50:50 -0400","From":"Jason Gunthorpe <jgg@nvidia.com>","To":"Alistair Popple <apopple@nvidia.com>","Cc":"Zi Yan <ziy@nvidia.com>, Jordan Niethe <jniethe@nvidia.com>,\n\tlinux-mm@kvack.org, balbirs@nvidia.com, matthew.brost@intel.com,\n\takpm@linux-foundation.org, linux-kernel@vger.kernel.org,\n\tdri-devel@lists.freedesktop.org, david@redhat.com,\n\tlorenzo.stoakes@oracle.com, lyude@redhat.com, dakr@kernel.org,\n\tairlied@gmail.com, simona@ffwll.ch, rcampbell@nvidia.com,\n\tmpenttil@redhat.com, willy@infradead.org,\n\tlinuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org,\n\tFelix.Kuehling@amd.com","Subject":"Re: [PATCH v2 11/11] mm: Remove device private pages from the\n physical address space","Message-ID":"<20260123135050.GV1134360@nvidia.com>","In-Reply-To":"<l5jxxobpj6shwuuthsyxlzfnhs6dx4spvzcqxrycn4chtywniq@e2eaio4nhorq>","List-Id":"<linuxppc-dev.lists.ozlabs.org>","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org"}}]