{"id":2180670,"url":"http://patchwork.ozlabs.org/api/1.1/patches/2180670/?format=json","web_url":"http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20260107091823.68974-4-jniethe@nvidia.com/","project":{"id":2,"url":"http://patchwork.ozlabs.org/api/1.1/projects/2/?format=json","name":"Linux PPC development","link_name":"linuxppc-dev","list_id":"linuxppc-dev.lists.ozlabs.org","list_email":"linuxppc-dev@lists.ozlabs.org","web_url":"https://github.com/linuxppc/wiki/wiki","scm_url":"https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git","webscm_url":"https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/"},"msgid":"<20260107091823.68974-4-jniethe@nvidia.com>","date":"2026-01-07T09:18:15","name":"[v2,03/11] mm/migrate_device: Make migrate_device_{pfns,range}() take mpfns","commit_ref":null,"pull_url":null,"state":"handled-elsewhere","archived":false,"hash":"7a7014462bfb1f6f1d9e62a96bebe7c1fddfbac9","submitter":{"id":92354,"url":"http://patchwork.ozlabs.org/api/1.1/people/92354/?format=json","name":"Jordan Niethe","email":"jniethe@nvidia.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20260107091823.68974-4-jniethe@nvidia.com/mbox/","series":[{"id":487451,"url":"http://patchwork.ozlabs.org/api/1.1/series/487451/?format=json","web_url":"http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=487451","date":"2026-01-07T09:18:12","name":"Remove device private pages from physical address space","version":2,"mbox":"http://patchwork.ozlabs.org/series/487451/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2180670/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2180670/checks/","tags":{},"headers":{"Return-Path":"\n 
<linuxppc-dev+bounces-15366-incoming=patchwork.ozlabs.org@lists.ozlabs.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=oDmfdQ8S;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n (client-ip=112.213.38.117; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-15366-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)","lists.ozlabs.org;\n arc=pass smtp.remote-ip=52.101.56.71 arc.chain=microsoft.com","lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com","lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=oDmfdQ8S;\n\tdkim-atps=neutral","lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nvidia.com\n (client-ip=52.101.56.71; helo=bn1pr04cu002.outbound.protection.outlook.com;\n envelope-from=jniethe@nvidia.com; receiver=lists.ozlabs.org)","dkim=none (message not signed)\n header.d=none;dmarc=none action=none header.from=nvidia.com;"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [112.213.38.117])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4dmMtV3bs5z1xpR\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 07 Jan 2026 20:19:30 +1100 (AEDT)","from boromir.ozlabs.org (localhost [127.0.0.1])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 4dmMtG1XJ7z2yR5;\n\tWed, 07 Jan 2026 20:19:18 +1100 (AEDT)","from BN1PR04CU002.outbound.protection.outlook.com\n (mail-eastus2azon11010071.outbound.protection.outlook.com [52.101.56.71])\n\t(using 
TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange secp256r1 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 4dmMtF1vxrz2yN1\n\tfor <linuxppc-dev@lists.ozlabs.org>; Wed, 07 Jan 2026 20:19:17 +1100 (AEDT)","from DM4PR12MB9072.namprd12.prod.outlook.com (2603:10b6:8:be::6) by\n MN2PR12MB4335.namprd12.prod.outlook.com (2603:10b6:208:1d4::13) with\n Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.9499.2; Wed, 7 Jan\n 2026 09:18:43 +0000","from DM4PR12MB9072.namprd12.prod.outlook.com\n ([fe80::9e49:782:8e98:1ff1]) by DM4PR12MB9072.namprd12.prod.outlook.com\n ([fe80::9e49:782:8e98:1ff1%5]) with mapi id 15.20.9499.002; Wed, 7 Jan 2026\n 09:18:43 +0000"],"ARC-Seal":["i=2; a=rsa-sha256; d=lists.ozlabs.org; s=201707; t=1767777558;\n\tcv=pass;\n b=Rvd992CyyhkLtrhfOwHnv6GKuLbXlXsVNvHV8SDfeW9LHEOWSBYOT2EC8KJ1C+/hQWnG2mo3Rz8Qw/V0QnSGQJweCxNw4DHaLHn4YrC0v+Vx7mMi4WYyGLsvSllN+WkCVblpUoppvYFfdylcmNcBA0SDYymZSs06a+lTnFF1BbcOJ08wbAMwQraLg3oiQKf1j8b2JLNc/0UA1OCBxOWX3g/8W5giP1+0V/+f5gQC5G2GTKBhr41VX8fKu0taWP4CapCdULTYHcUM+b5A5Yf7Sz/RYZBIZjDuma0jyb4q5v2JF6Bal6IdjmVNOltqOOm9vazAXOo5rUcsnaJPhcYZrg==","i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none;\n b=wpjgpwG5w1nLihEeKc5WDfjxbOGVkhKKYWNns4DRNaM9Oz5l/31G11QfUCw5AHvluwvKdpyUAebUMhU3edSsrx2fns0z1NAvhh5wXy6120XUIn6lj1zDT+KAjjawUiNSlIUnvoQ6OjHhafKGT2VYN7pOLWYSJcM4IYIXhh9fVgsfS3gNYX9N0bhdR/n0Pcra/GlhHmpUWldUZX4MjyX5gAwcQTEEVmwFA36Zrb85PLgJUUKclz/+4w9EB5inXbILFomFjZu0Uf1qBgW2NN6m1IF/34ujrSUIq5sscM60CQOlvT36TGdlNO3XDpl6jZPB0M5gDFMlbUH+jtWOgZBGyg=="],"ARC-Message-Signature":["i=2; a=rsa-sha256; d=lists.ozlabs.org; s=201707;\n\tt=1767777558; c=relaxed/relaxed;\n\tbh=0FJ0nFMwah6bOPj3aj8uu2307JgLdg6A3pZ+U/MQn4E=;\n\th=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:\n\t Content-Type:MIME-Version;\n 
b=lRHfH5hHNCWF+U6qRQ2dELULqiwVyXem7hjLKrSfyCrhDLzQx4dyu7OL8kt9KT13XLIForfnr1CRHNxpQetbUvPX1hMOXByxf9nBm0fQIO6DqCdIhc3gI0ij2ZRsbOf7K0eHnxNx8a8e3MuiNOu4WjYfec3EEegbjjGBBo8gPaOUNgDynLWQGe6fCYifGO3N2aqqTlW9Q4NRVbvUgITMip1Mx4xsuZAAU/KUM9BoyeEu1vo1xj6QqTxswK8hNHL69RAIbYzVrbxALxp0ZHnuiTyJV7KKarLatOYq8/jXlLMoA91dQc2SFd9LrOcW9fX0gkMse7IjYRMqbM7yCd+Syg==","i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector10001;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=0FJ0nFMwah6bOPj3aj8uu2307JgLdg6A3pZ+U/MQn4E=;\n b=HdV9ExRTISTpnLINPZJC2Hq5D/ZOYhlKeMvU1ClnmhHNYRFJDze5UTNcDRSXyHeFHgjbH2lEGBpF8VxL39B8DJ2EIuqp6LVF2J+ESuJlls54dm2RcCvjqGXaKqjpRLCvFshtHy+MQzruPvSI0e+T6WpZ3f26iMSPOb0N1acyfmurbByBh1RjU8ATEyjcK/ecdGUN+3Ug2drPoohsP9z/gTP4THpsOmW7Y2E9QnRaXrBs0HNj2EdnmSkqSpUzZO/o5gsaHTxg4ez1Sfns9quBPe5+1cobz8VllBJxMudrN4TGJL31CUG/Q7IVsFDTRpHvRY/NssBYY5mgBmFK7eKsnw=="],"ARC-Authentication-Results":["i=2; lists.ozlabs.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com;\n dkim=pass (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=oDmfdQ8S; dkim-atps=neutral;\n spf=pass (client-ip=52.101.56.71;\n helo=bn1pr04cu002.outbound.protection.outlook.com;\n envelope-from=jniethe@nvidia.com;\n receiver=lists.ozlabs.org) smtp.mailfrom=nvidia.com","i=1; mx.microsoft.com 1; spf=pass\n smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;\n dkim=pass header.d=nvidia.com; arc=none"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com;\n s=selector2;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=0FJ0nFMwah6bOPj3aj8uu2307JgLdg6A3pZ+U/MQn4E=;\n 
b=oDmfdQ8SkV+HpZ+YO5zotNHN2skZ+It4WhzIGHo2ilonyJ2VzOaL+Vj4Qfkz7tVFtipzvB8CbhddbYR7+Hd4XLfJzZxzkiU37btp34AidP6FyXGB1MTUiNYQHQ+59WZuxyPrdlvlnMbVO5Wbv9s3TD1rI7x2LycOeYXw1/Ik4e/CAQN84IJVBwXviZuM3KYKVkQQRksMm/3zDN9WTv3utKe65qHsW4AqRg3BVP0uiPAxfwXwEkKqMhRo+KFbliuFOF+S528uYw8kQVRF0pOoXpHeepqZGLBzpbgVK/iWfA9DCp4IpX5tRANTXc/bR1NXeG72FGWzrdsYtHrquUwovA==","From":"Jordan Niethe <jniethe@nvidia.com>","To":"linux-mm@kvack.org","Cc":"balbirs@nvidia.com,\n\tmatthew.brost@intel.com,\n\takpm@linux-foundation.org,\n\tlinux-kernel@vger.kernel.org,\n\tdri-devel@lists.freedesktop.org,\n\tdavid@redhat.com,\n\tziy@nvidia.com,\n\tapopple@nvidia.com,\n\tlorenzo.stoakes@oracle.com,\n\tlyude@redhat.com,\n\tdakr@kernel.org,\n\tairlied@gmail.com,\n\tsimona@ffwll.ch,\n\trcampbell@nvidia.com,\n\tmpenttil@redhat.com,\n\tjgg@nvidia.com,\n\twilly@infradead.org,\n\tlinuxppc-dev@lists.ozlabs.org,\n\tintel-xe@lists.freedesktop.org,\n\tjgg@ziepe.ca,\n\tFelix.Kuehling@amd.com","Subject":"[PATCH v2 03/11] mm/migrate_device: Make\n migrate_device_{pfns,range}() take mpfns","Date":"Wed,  7 Jan 2026 20:18:15 +1100","Message-Id":"<20260107091823.68974-4-jniethe@nvidia.com>","X-Mailer":"git-send-email 2.34.1","In-Reply-To":"<20260107091823.68974-1-jniethe@nvidia.com>","References":"<20260107091823.68974-1-jniethe@nvidia.com>","Content-Transfer-Encoding":"8bit","Content-Type":"text/plain","X-ClientProxiedBy":"BY3PR03CA0028.namprd03.prod.outlook.com\n (2603:10b6:a03:39a::33) To DM4PR12MB9072.namprd12.prod.outlook.com\n (2603:10b6:8:be::6)","X-Mailing-List":"linuxppc-dev@lists.ozlabs.org","List-Id":"<linuxppc-dev.lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev+help@lists.ozlabs.org>","List-Owner":"<mailto:linuxppc-dev+owner@lists.ozlabs.org>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Archive":"<https://lore.kernel.org/linuxppc-dev/>,\n  <https://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Subscribe":"<mailto:linuxppc-dev+subscribe@lists.ozlabs.org>,\n  
<mailto:linuxppc-dev+subscribe-digest@lists.ozlabs.org>,\n  <mailto:linuxppc-dev+subscribe-nomail@lists.ozlabs.org>","List-Unsubscribe":"<mailto:linuxppc-dev+unsubscribe@lists.ozlabs.org>","Precedence":"list","MIME-Version":"1.0","X-MS-PublicTrafficType":"Email","X-MS-TrafficTypeDiagnostic":"DM4PR12MB9072:EE_|MN2PR12MB4335:EE_","X-MS-Office365-Filtering-Correlation-Id":"5c3e9036-632c-47c2-f5e0-08de4dcdc0ae","X-MS-Exchange-SenderADCheck":"1","X-MS-Exchange-AntiSpam-Relay":"0","X-Microsoft-Antispam":"BCL:0;ARA:13230040|7416014|376014|1800799024|366016;","X-Forefront-Antispam-Report":"\n\tCIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM4PR12MB9072.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(7416014)(376014)(1800799024)(366016);DIR:OUT;SFP:1101;","X-OriginatorOrg":"Nvidia.com","X-MS-Exchange-CrossTenant-Network-Message-Id":"\n 5c3e9036-632c-47c2-f5e0-08de4dcdc0ae","X-MS-Exchange-CrossTenant-AuthSource":"DM4PR12MB9072.namprd12.prod.outlook.com","X-MS-Exchange-CrossTenant-AuthAs":"Internal","X-MS-Exchange-CrossTenant-OriginalArrivalTime":"07 Jan 2026 09:18:43.2973\n (UTC)","X-MS-Exchange-CrossTenant-FromEntityHeader":"Hosted","X-MS-Exchange-CrossTenant-Id":"43083d15-7273-40c1-b7db-39efd9ccc17a","X-MS-Exchange-CrossTenant-MailboxType":"HOSTED","X-MS-Exchange-Transport-CrossTenantHeadersStamped":"MN2PR12MB4335","X-Spam-Status":"No, score=-0.2 required=3.0 tests=ARC_SIGNED,ARC_VALID,\n\tDKIMWL_WL_HIGH,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,\n\tRCVD_IN_DNSWL_NONE,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS,SPF_PASS\n\tautolearn=disabled version=4.0.1 OzLabs 8","X-Spam-Checker-Version":"SpamAssassin 4.0.1 (2024-03-25) on lists.ozlabs.org"},"content":"A future change will remove device private pages from the physical\naddress space. 
This will mean that device private pages no longer have a\npfn.\n\nThis causes an issue for migrate_device_{pfns,range}(), which take pfn\nparameters, because whether the device is MEMORY_DEVICE_PRIVATE or\nMEMORY_DEVICE_COHERENT affects how that parameter should be\ninterpreted.\n\nA MIGRATE_PFN flag will be introduced that distinguishes between mpfns\nthat contain a pfn and mpfns that contain an offset into device private\nmemory; we will take advantage of that here.\n\nUpdate migrate_device_{pfns,range}() to take an mpfn instead of a pfn.\n\nUpdate the users of migrate_device_{pfns,range}() to pass in an mpfn.\n\nTo support this change, update\ndpagemap_devmem_ops::populate_devmem_pfn() to return mpfns instead, and\nrename it accordingly.\n\nSigned-off-by: Jordan Niethe <jniethe@nvidia.com>\n---\nv2: New to series\n---\n drivers/gpu/drm/drm_pagemap.c          |  9 +++---\n drivers/gpu/drm/nouveau/nouveau_dmem.c |  5 +--\n drivers/gpu/drm/xe/xe_svm.c            |  9 +++---\n include/drm/drm_pagemap.h              |  8 ++---\n lib/test_hmm.c                         |  2 +-\n mm/migrate_device.c                    | 45 ++++++++++++++------------\n 6 files changed, 41 insertions(+), 37 deletions(-)","diff":"diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c\nindex 5ddf395847ef..e4c73a9ce68b 100644\n--- a/drivers/gpu/drm/drm_pagemap.c\n+++ b/drivers/gpu/drm/drm_pagemap.c\n@@ -337,7 +337,7 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,\n \n \tmmap_assert_locked(mm);\n \n-\tif (!ops->populate_devmem_pfn || !ops->copy_to_devmem ||\n+\tif (!ops->populate_devmem_mpfn || !ops->copy_to_devmem ||\n \t    !ops->copy_to_ram)\n \t\treturn -EOPNOTSUPP;\n \n@@ -390,7 +390,7 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,\n \t\tgoto err_finalize;\n \t}\n \n-\terr = ops->populate_devmem_pfn(devmem_allocation, npages, migrate.dst);\n+\terr = ops->populate_devmem_mpfn(devmem_allocation, npages, 
migrate.dst);\n \tif (err)\n \t\tgoto err_finalize;\n \n@@ -401,10 +401,9 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,\n \t\tgoto err_finalize;\n \n \tfor (i = 0; i < npages; ++i) {\n-\t\tstruct page *page = pfn_to_page(migrate.dst[i]);\n+\t\tstruct page *page = migrate_pfn_to_page(migrate.dst[i]);\n \n \t\tpages[i] = page;\n-\t\tmigrate.dst[i] = migrate_pfn(migrate.dst[i]);\n \t\tdrm_pagemap_get_devmem_page(page, zdd);\n \t}\n \n@@ -575,7 +574,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)\n \tpagemap_addr = buf + (2 * sizeof(*src) * npages);\n \tpages = buf + (2 * sizeof(*src) + sizeof(*pagemap_addr)) * npages;\n \n-\terr = ops->populate_devmem_pfn(devmem_allocation, npages, src);\n+\terr = ops->populate_devmem_mpfn(devmem_allocation, npages, src);\n \tif (err)\n \t\tgoto err_free;\n \ndiff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c\nindex a7edcdca9701..bd3f7102c3f9 100644\n--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c\n+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c\n@@ -483,8 +483,9 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)\n \tdst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);\n \tdma_info = kvcalloc(npages, sizeof(*dma_info), GFP_KERNEL | __GFP_NOFAIL);\n \n-\tmigrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,\n-\t\t\tnpages);\n+\tmigrate_device_range(src_pfns,\n+\t\t\t     migrate_pfn(chunk->pagemap.range.start >> PAGE_SHIFT),\n+\t\t\t     npages);\n \n \tfor (i = 0; i < npages; i++) {\n \t\tif (src_pfns[i] & MIGRATE_PFN_MIGRATE) {\ndiff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c\nindex 55c5a0eb82e1..260676b0d246 100644\n--- a/drivers/gpu/drm/xe/xe_svm.c\n+++ b/drivers/gpu/drm/xe/xe_svm.c\n@@ -5,6 +5,7 @@\n \n #include <drm/drm_drv.h>\n \n+#include <linux/migrate.h>\n #include \"xe_bo.h\"\n #include \"xe_exec_queue_types.h\"\n #include \"xe_gt_stats.h\"\n@@ 
-681,8 +682,8 @@ static struct drm_buddy *vram_to_buddy(struct xe_vram_region *vram)\n \treturn &vram->ttm.mm;\n }\n \n-static int xe_svm_populate_devmem_pfn(struct drm_pagemap_devmem *devmem_allocation,\n-\t\t\t\t      unsigned long npages, unsigned long *pfn)\n+static int xe_svm_populate_devmem_mpfn(struct drm_pagemap_devmem *devmem_allocation,\n+\t\t\t\t       unsigned long npages, unsigned long *pfn)\n {\n \tstruct xe_bo *bo = to_xe_bo(devmem_allocation);\n \tstruct ttm_resource *res = bo->ttm.resource;\n@@ -697,7 +698,7 @@ static int xe_svm_populate_devmem_pfn(struct drm_pagemap_devmem *devmem_allocati\n \t\tint i;\n \n \t\tfor (i = 0; i < drm_buddy_block_size(buddy, block) >> PAGE_SHIFT; ++i)\n-\t\t\tpfn[j++] = block_pfn + i;\n+\t\t\tpfn[j++] = migrate_pfn(block_pfn + i);\n \t}\n \n \treturn 0;\n@@ -705,7 +706,7 @@ static int xe_svm_populate_devmem_pfn(struct drm_pagemap_devmem *devmem_allocati\n \n static const struct drm_pagemap_devmem_ops dpagemap_devmem_ops = {\n \t.devmem_release = xe_svm_devmem_release,\n-\t.populate_devmem_pfn = xe_svm_populate_devmem_pfn,\n+\t.populate_devmem_mpfn = xe_svm_populate_devmem_mpfn,\n \t.copy_to_devmem = xe_svm_copy_to_devmem,\n \t.copy_to_ram = xe_svm_copy_to_ram,\n };\ndiff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h\nindex f6e7e234c089..0d1d083b778a 100644\n--- a/include/drm/drm_pagemap.h\n+++ b/include/drm/drm_pagemap.h\n@@ -157,17 +157,17 @@ struct drm_pagemap_devmem_ops {\n \tvoid (*devmem_release)(struct drm_pagemap_devmem *devmem_allocation);\n \n \t/**\n-\t * @populate_devmem_pfn: Populate device memory PFN (required for migration)\n+\t * @populate_devmem_mpfn: Populate device memory PFN (required for migration)\n \t * @devmem_allocation: device memory allocation\n \t * @npages: Number of pages to populate\n-\t * @pfn: Array of page frame numbers to populate\n+\t * @mpfn: Array of migrate page frame numbers to populate\n \t *\n \t * Populate device memory page frame numbers (PFN).\n \t *\n \t * 
Return: 0 on success, a negative error code on failure.\n \t */\n-\tint (*populate_devmem_pfn)(struct drm_pagemap_devmem *devmem_allocation,\n-\t\t\t\t   unsigned long npages, unsigned long *pfn);\n+\tint (*populate_devmem_mpfn)(struct drm_pagemap_devmem *devmem_allocation,\n+\t\t\t\t    unsigned long npages, unsigned long *pfn);\n \n \t/**\n \t * @copy_to_devmem: Copy to device memory (required for migration)\ndiff --git a/lib/test_hmm.c b/lib/test_hmm.c\nindex 7e5248404d00..a6ff292596f3 100644\n--- a/lib/test_hmm.c\n+++ b/lib/test_hmm.c\n@@ -1389,7 +1389,7 @@ static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)\n \tsrc_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);\n \tdst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);\n \n-\tmigrate_device_range(src_pfns, start_pfn, npages);\n+\tmigrate_device_range(src_pfns, migrate_pfn(start_pfn), npages);\n \tfor (i = 0; i < npages; i++) {\n \t\tstruct page *dpage, *spage;\n \ndiff --git a/mm/migrate_device.c b/mm/migrate_device.c\nindex 1a2067f830da..a2baaa2a81f9 100644\n--- a/mm/migrate_device.c\n+++ b/mm/migrate_device.c\n@@ -1354,11 +1354,11 @@ void migrate_vma_finalize(struct migrate_vma *migrate)\n }\n EXPORT_SYMBOL(migrate_vma_finalize);\n \n-static unsigned long migrate_device_pfn_lock(unsigned long pfn)\n+static unsigned long migrate_device_pfn_lock(unsigned long mpfn)\n {\n \tstruct folio *folio;\n \n-\tfolio = folio_get_nontail_page(pfn_to_page(pfn));\n+\tfolio = folio_get_nontail_page(migrate_pfn_to_page(mpfn));\n \tif (!folio)\n \t\treturn 0;\n \n@@ -1367,13 +1367,14 @@ static unsigned long migrate_device_pfn_lock(unsigned long pfn)\n \t\treturn 0;\n \t}\n \n-\treturn migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;\n+\treturn mpfn | MIGRATE_PFN_MIGRATE;\n }\n \n /**\n  * migrate_device_range() - migrate device private pfns to normal memory.\n- * @src_pfns: array large enough to hold migrating source device private pfns.\n- * @start: starting pfn in the range 
to migrate.\n+ * @src_mpfns: array large enough to hold migrating source device private\n+ * migrate pfns.\n+ * @start: starting migrate pfn in the range to migrate.\n  * @npages: number of pages to migrate.\n  *\n  * migrate_vma_setup() is similar in concept to migrate_vma_setup() except that\n@@ -1389,28 +1390,29 @@ static unsigned long migrate_device_pfn_lock(unsigned long pfn)\n  * allocate destination pages and start copying data from the device to CPU\n  * memory before calling migrate_device_pages().\n  */\n-int migrate_device_range(unsigned long *src_pfns, unsigned long start,\n+int migrate_device_range(unsigned long *src_mpfns, unsigned long start,\n \t\t\tunsigned long npages)\n {\n-\tunsigned long i, j, pfn;\n+\tunsigned long i, j, mpfn;\n \n-\tfor (pfn = start, i = 0; i < npages; pfn++, i++) {\n-\t\tstruct page *page = pfn_to_page(pfn);\n+\tfor (mpfn = start, i = 0; i < npages; i++) {\n+\t\tstruct page *page = migrate_pfn_to_page(mpfn);\n \t\tstruct folio *folio = page_folio(page);\n \t\tunsigned int nr = 1;\n \n-\t\tsrc_pfns[i] = migrate_device_pfn_lock(pfn);\n+\t\tsrc_mpfns[i] = migrate_device_pfn_lock(mpfn);\n \t\tnr = folio_nr_pages(folio);\n \t\tif (nr > 1) {\n-\t\t\tsrc_pfns[i] |= MIGRATE_PFN_COMPOUND;\n+\t\t\tsrc_mpfns[i] |= MIGRATE_PFN_COMPOUND;\n \t\t\tfor (j = 1; j < nr; j++)\n-\t\t\t\tsrc_pfns[i+j] = 0;\n+\t\t\t\tsrc_mpfns[i+j] = 0;\n \t\t\ti += j - 1;\n-\t\t\tpfn += j - 1;\n+\t\t\tmpfn += (j - 1) << MIGRATE_PFN_SHIFT;\n \t\t}\n+\t\tmpfn += 1 << MIGRATE_PFN_SHIFT;\n \t}\n \n-\tmigrate_device_unmap(src_pfns, npages, NULL);\n+\tmigrate_device_unmap(src_mpfns, npages, NULL);\n \n \treturn 0;\n }\n@@ -1418,32 +1420,33 @@ EXPORT_SYMBOL(migrate_device_range);\n \n /**\n  * migrate_device_pfns() - migrate device private pfns to normal memory.\n- * @src_pfns: pre-popluated array of source device private pfns to migrate.\n+ * @src_mpfns: pre-popluated array of source device private migrate pfns to\n+ * migrate.\n  * @npages: number of pages to 
migrate.\n  *\n  * Similar to migrate_device_range() but supports non-contiguous pre-popluated\n  * array of device pages to migrate.\n  */\n-int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)\n+int migrate_device_pfns(unsigned long *src_mpfns, unsigned long npages)\n {\n \tunsigned long i, j;\n \n \tfor (i = 0; i < npages; i++) {\n-\t\tstruct page *page = pfn_to_page(src_pfns[i]);\n+\t\tstruct page *page = migrate_pfn_to_page(src_mpfns[i]);\n \t\tstruct folio *folio = page_folio(page);\n \t\tunsigned int nr = 1;\n \n-\t\tsrc_pfns[i] = migrate_device_pfn_lock(src_pfns[i]);\n+\t\tsrc_mpfns[i] = migrate_device_pfn_lock(src_mpfns[i]);\n \t\tnr = folio_nr_pages(folio);\n \t\tif (nr > 1) {\n-\t\t\tsrc_pfns[i] |= MIGRATE_PFN_COMPOUND;\n+\t\t\tsrc_mpfns[i] |= MIGRATE_PFN_COMPOUND;\n \t\t\tfor (j = 1; j < nr; j++)\n-\t\t\t\tsrc_pfns[i+j] = 0;\n+\t\t\t\tsrc_mpfns[i+j] = 0;\n \t\t\ti += j - 1;\n \t\t}\n \t}\n \n-\tmigrate_device_unmap(src_pfns, npages, NULL);\n+\tmigrate_device_unmap(src_mpfns, npages, NULL);\n \n \treturn 0;\n }\n","prefixes":["v2","03/11"]}