From patchwork Fri Mar 12 08:38:44 2021
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 1451761
From: Alistair Popple
Subject: [PATCH v6 1/8] mm: Remove special swap entry functions
Date: Fri, 12 Mar 2021 19:38:44 +1100
Message-ID: <20210312083851.15981-2-apopple@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210312083851.15981-1-apopple@nvidia.com>
References: <20210312083851.15981-1-apopple@nvidia.com>
X-MS-Exchange-CrossTenant-AuthSource: DM6NAM11FT068.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN1PR12MB2477 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Remove the migration and device private entry_to_page() and entry_to_pfn() inline functions and instead open code them directly. This results in shorter code which is easier to understand. Signed-off-by: Alistair Popple Reviewed-by: Ralph Campbell Reviewed-by: Christoph Hellwig --- v6: * Removed redundant compound_page() call from inside PageLocked() * Fixed a minor build issue for s390 reported by kernel test bot v4: * Added pfn_swap_entry_to_page() * Reinstated check that migration entries point to locked pages * Removed #define swapcache_prepare which isn't needed for CONFIG_SWAP=0 builds --- arch/s390/mm/pgtable.c | 2 +- fs/proc/task_mmu.c | 23 +++++--------- include/linux/swap.h | 4 +-- include/linux/swapops.h | 69 ++++++++++++++--------------------------- mm/hmm.c | 5 ++- mm/huge_memory.c | 4 +-- mm/memcontrol.c | 2 +- mm/memory.c | 10 +++--- mm/migrate.c | 6 ++-- mm/page_vma_mapped.c | 6 ++-- 10 files changed, 50 insertions(+), 81 deletions(-) diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c index 18205f851c24..eec3a9d7176e 100644 --- a/arch/s390/mm/pgtable.c +++ b/arch/s390/mm/pgtable.c @@ -691,7 +691,7 @@ static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry) if (!non_swap_entry(entry)) dec_mm_counter(mm, MM_SWAPENTS); else if (is_migration_entry(entry)) { - struct page *page = migration_entry_to_page(entry); + struct page *page = pfn_swap_entry_to_page(entry); dec_mm_counter(mm, mm_counter(page)); } diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 3cec6fbef725..08ee59d945c0 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -514,10 +514,8 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr, } else { mss->swap_pss += (u64)PAGE_SIZE << PSS_SHIFT; } - } else if (is_migration_entry(swpent)) - page = migration_entry_to_page(swpent); - else if (is_device_private_entry(swpent)) - page = device_private_entry_to_page(swpent); + } else if (is_pfn_swap_entry(swpent)) + page = pfn_swap_entry_to_page(swpent); } else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap && pte_none(*pte))) { page = xa_load(&vma->vm_file->f_mapping->i_pages, @@ -549,7 +547,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr, swp_entry_t entry = pmd_to_swp_entry(*pmd); if (is_migration_entry(entry)) - page = migration_entry_to_page(entry); + page = pfn_swap_entry_to_page(entry); } if (IS_ERR_OR_NULL(page)) return; @@ -691,10 +689,8 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask, } else if (is_swap_pte(*pte)) { swp_entry_t swpent = pte_to_swp_entry(*pte); - if (is_migration_entry(swpent)) - page = migration_entry_to_page(swpent); - else if (is_device_private_entry(swpent)) - page = device_private_entry_to_page(swpent); + if (is_pfn_swap_entry(swpent)) + page = pfn_swap_entry_to_page(swpent); } if (page) { int mapcount = page_mapcount(page); @@ -1383,11 +1379,8 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm, frame = swp_type(entry) | (swp_offset(entry) << MAX_SWAPFILES_SHIFT); flags |= PM_SWAP; - if (is_migration_entry(entry)) - page = migration_entry_to_page(entry); - - if (is_device_private_entry(entry)) - page = device_private_entry_to_page(entry); + if 
(is_pfn_swap_entry(entry)) + page = pfn_swap_entry_to_page(entry); } if (page && !PageAnon(page)) @@ -1444,7 +1437,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end, if (pmd_swp_soft_dirty(pmd)) flags |= PM_SOFT_DIRTY; VM_BUG_ON(!is_pmd_migration_entry(pmd)); - page = migration_entry_to_page(entry); + page = pfn_swap_entry_to_page(entry); } #endif diff --git a/include/linux/swap.h b/include/linux/swap.h index 4cc6ec3bf0ab..516104b9334b 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -523,8 +523,8 @@ static inline void show_swap_cache_info(void) { } -#define free_swap_and_cache(e) ({(is_migration_entry(e) || is_device_private_entry(e));}) -#define swapcache_prepare(e) ({(is_migration_entry(e) || is_device_private_entry(e));}) +/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */ +#define free_swap_and_cache(e) is_pfn_swap_entry(e) static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask) { diff --git a/include/linux/swapops.h b/include/linux/swapops.h index d9b7c9132c2f..139be8235ad2 100644 --- a/include/linux/swapops.h +++ b/include/linux/swapops.h @@ -121,16 +121,6 @@ static inline bool is_write_device_private_entry(swp_entry_t entry) { return unlikely(swp_type(entry) == SWP_DEVICE_WRITE); } - -static inline unsigned long device_private_entry_to_pfn(swp_entry_t entry) -{ - return swp_offset(entry); -} - -static inline struct page *device_private_entry_to_page(swp_entry_t entry) -{ - return pfn_to_page(swp_offset(entry)); -} #else /* CONFIG_DEVICE_PRIVATE */ static inline swp_entry_t make_device_private_entry(struct page *page, bool write) { @@ -150,16 +140,6 @@ static inline bool is_write_device_private_entry(swp_entry_t entry) { return false; } - -static inline unsigned long device_private_entry_to_pfn(swp_entry_t entry) -{ - return 0; -} - -static inline struct page *device_private_entry_to_page(swp_entry_t entry) -{ - return NULL; -} #endif /* CONFIG_DEVICE_PRIVATE */ #ifdef CONFIG_MIGRATION @@ -182,22 +162,6 @@ static inline int is_write_migration_entry(swp_entry_t entry) return unlikely(swp_type(entry) == SWP_MIGRATION_WRITE); } -static inline unsigned long migration_entry_to_pfn(swp_entry_t entry) -{ - return swp_offset(entry); -} - -static inline struct page *migration_entry_to_page(swp_entry_t entry) -{ - struct page *p = pfn_to_page(swp_offset(entry)); - /* - * Any use of migration entries may only occur while the - * corresponding page is locked - */ - BUG_ON(!PageLocked(compound_head(p))); - return p; -} - static inline void make_migration_entry_read(swp_entry_t *entry) { *entry = swp_entry(SWP_MIGRATION_READ, swp_offset(*entry)); @@ -217,16 +181,6 @@ static inline int is_migration_entry(swp_entry_t swp) return 0; } -static inline unsigned long migration_entry_to_pfn(swp_entry_t entry) -{ - return 0; -} - -static inline struct page *migration_entry_to_page(swp_entry_t entry) -{ - return NULL; -} - static inline void make_migration_entry_read(swp_entry_t *entryp) { } static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep, spinlock_t *ptl) { } @@ -241,6 +195,29 @@ static inline int is_write_migration_entry(swp_entry_t entry) #endif +static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry) +{ + struct page *p = pfn_to_page(swp_offset(entry)); + + /* + * Any use of migration entries may only occur while the + * corresponding page is locked + */ + BUG_ON(is_migration_entry(entry) && !PageLocked(p)); + + return p; +} + +/* + * A pfn swap entry is a special type of 
swap entry that always has a pfn stored + * in the swap offset. They are used to represent unaddressable device memory + * and to restrict access to a page undergoing migration. + */ +static inline bool is_pfn_swap_entry(swp_entry_t entry) +{ + return is_migration_entry(entry) || is_device_private_entry(entry); +} + struct page_vma_mapped_walk; #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION diff --git a/mm/hmm.c b/mm/hmm.c index 943cb2ba4442..3b2dda71d0ed 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -214,7 +214,7 @@ static inline bool hmm_is_device_private_entry(struct hmm_range *range, swp_entry_t entry) { return is_device_private_entry(entry) && - device_private_entry_to_page(entry)->pgmap->owner == + pfn_swap_entry_to_page(entry)->pgmap->owner == range->dev_private_owner; } @@ -257,8 +257,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, cpu_flags = HMM_PFN_VALID; if (is_write_device_private_entry(entry)) cpu_flags |= HMM_PFN_WRITE; - *hmm_pfn = device_private_entry_to_pfn(entry) | - cpu_flags; + *hmm_pfn = swp_offset(entry) | cpu_flags; return 0; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 395c75111d33..a4cda8564bcf 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1700,7 +1700,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, VM_BUG_ON(!is_pmd_migration_entry(orig_pmd)); entry = pmd_to_swp_entry(orig_pmd); - page = pfn_to_page(swp_offset(entry)); + page = pfn_swap_entry_to_page(entry); flush_needed = 0; } else WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!"); @@ -2108,7 +2108,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, swp_entry_t entry; entry = pmd_to_swp_entry(old_pmd); - page = pfn_to_page(swp_offset(entry)); + page = pfn_swap_entry_to_page(entry); write = is_write_migration_entry(entry); young = false; soft_dirty = pmd_swp_soft_dirty(old_pmd); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 845eec01ef9d..043840dbe48a 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5523,7 +5523,7 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma, * as special swap entry in the CPU page table. 
*/ if (is_device_private_entry(ent)) { - page = device_private_entry_to_page(ent); + page = pfn_swap_entry_to_page(ent); /* * MEMORY_DEVICE_PRIVATE means ZONE_DEVICE page and which have * a refcount of 1 when free (unlike normal page) diff --git a/mm/memory.c b/mm/memory.c index c8e357627318..1c98e3c1c2de 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -730,7 +730,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, } rss[MM_SWAPENTS]++; } else if (is_migration_entry(entry)) { - page = migration_entry_to_page(entry); + page = pfn_swap_entry_to_page(entry); rss[mm_counter(page)]++; @@ -749,7 +749,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, set_pte_at(src_mm, addr, src_pte, pte); } } else if (is_device_private_entry(entry)) { - page = device_private_entry_to_page(entry); + page = pfn_swap_entry_to_page(entry); /* * Update rss count even for unaddressable pages, as @@ -1286,7 +1286,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, entry = pte_to_swp_entry(ptent); if (is_device_private_entry(entry)) { - struct page *page = device_private_entry_to_page(entry); + struct page *page = pfn_swap_entry_to_page(entry); if (unlikely(details && details->check_mapping)) { /* @@ -1315,7 +1315,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, else if (is_migration_entry(entry)) { struct page *page; - page = migration_entry_to_page(entry); + page = pfn_swap_entry_to_page(entry); rss[mm_counter(page)]--; } if (unlikely(!free_swap_and_cache(entry))) @@ -3282,7 +3282,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) migration_entry_wait(vma->vm_mm, vmf->pmd, vmf->address); } else if (is_device_private_entry(entry)) { - vmf->page = device_private_entry_to_page(entry); + vmf->page = pfn_swap_entry_to_page(entry); ret = vmf->page->pgmap->ops->migrate_to_ram(vmf); } else if (is_hwpoison_entry(entry)) { ret = VM_FAULT_HWPOISON; diff --git a/mm/migrate.c b/mm/migrate.c index 62b81d5257aa..600978d18750 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -321,7 +321,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep, if (!is_migration_entry(entry)) goto out; - page = migration_entry_to_page(entry); + page = pfn_swap_entry_to_page(entry); /* * Once page cache replacement of page migration started, page_count @@ -361,7 +361,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd) ptl = pmd_lock(mm, pmd); if (!is_pmd_migration_entry(*pmd)) goto unlock; - page = migration_entry_to_page(pmd_to_swp_entry(*pmd)); + page = pfn_swap_entry_to_page(pmd_to_swp_entry(*pmd)); if (!get_page_unless_zero(page)) goto unlock; spin_unlock(ptl); @@ -2443,7 +2443,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, if (!is_device_private_entry(entry)) goto next; - page = device_private_entry_to_page(entry); + page = pfn_swap_entry_to_page(entry); if (!(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) || page->pgmap->owner != migrate->pgmap_owner) diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c index 86e3a3688d59..eed988ab2e81 100644 --- a/mm/page_vma_mapped.c +++ b/mm/page_vma_mapped.c @@ -96,7 +96,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw) if (!is_migration_entry(entry)) return false; - pfn = migration_entry_to_pfn(entry); + pfn = swp_offset(entry); } else if (is_swap_pte(*pvmw->pte)) { swp_entry_t entry; @@ -105,7 +105,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw) if (!is_device_private_entry(entry)) return false; - pfn = device_private_entry_to_pfn(entry); + pfn = swp_offset(entry); } 
else {
 		if (!pte_present(*pvmw->pte))
 			return false;
@@ -200,7 +200,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (is_migration_entry(pmd_to_swp_entry(*pvmw->pmd))) {
 		swp_entry_t entry = pmd_to_swp_entry(*pvmw->pmd);
-		if (migration_entry_to_page(entry) != page)
+		if (pfn_swap_entry_to_page(entry) != page)
 			return not_found(pvmw);
 		return true;
 	}
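For readers following the interface change, a minimal sketch (not part of the patch; the wrapper name below is made up for illustration) of how callers use the new helpers in place of the removed migration_entry_to_page()/device_private_entry_to_page():

/* Sketch only: assumes a kernel tree with this patch applied. */
#include <linux/swap.h>
#include <linux/swapops.h>

static struct page *example_swp_entry_to_page(swp_entry_t entry)
{
	/* True for both migration and device private entries. */
	if (is_pfn_swap_entry(entry))
		return pfn_swap_entry_to_page(entry);

	/* Ordinary swap entries do not map to a struct page this way. */
	return NULL;
}

This is the same pattern the patch applies in smaps_pte_entry() and smaps_hugetlb_range() above.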
From patchwork Fri Mar 12 08:38:45 2021
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 1451762
From: Alistair Popple
Subject: [PATCH v6 2/8] mm/swapops: Rework swap entry manipulation code
Date: Fri, 12 Mar 2021 19:38:45 +1100
Message-ID: <20210312083851.15981-3-apopple@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210312083851.15981-1-apopple@nvidia.com>
References: <20210312083851.15981-1-apopple@nvidia.com>
Precedence: bulk
X-Mailing-List: kvm-ppc@vger.kernel.org

Both migration and device private pages use special swap entries that are
manipulated by a range of inline functions. The arguments to these are
somewhat inconsistent, so rework them to remove the flag type arguments and
to make the arguments similar for both read and write entry creation.

Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
 include/linux/swapops.h | 56 ++++++++++++++++++++++-------------------
 mm/debug_vm_pgtable.c   | 12 ++++-----
 mm/hmm.c                |  2 +-
 mm/huge_memory.c        | 26 +++++++++++++------
 mm/hugetlb.c            | 10 +++++---
 mm/memory.c             | 10 +++++---
 mm/migrate.c            | 26 ++++++++++++++-----
 mm/mprotect.c           | 10 +++++---
 mm/rmap.c               | 10 +++++---
 9 files changed, 100 insertions(+), 62 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 139be8235ad2..4dfd807ae52a 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -100,35 +100,35 @@ static inline void *swp_to_radix_entry(swp_entry_t entry)
 }
 
 #if IS_ENABLED(CONFIG_DEVICE_PRIVATE)
-static inline swp_entry_t make_device_private_entry(struct page *page, bool write)
+static inline swp_entry_t make_readable_device_private_entry(pgoff_t offset)
 {
-	return swp_entry(write ?
SWP_DEVICE_WRITE : SWP_DEVICE_READ, - page_to_pfn(page)); + return swp_entry(SWP_DEVICE_READ, offset); } -static inline bool is_device_private_entry(swp_entry_t entry) +static inline swp_entry_t make_writable_device_private_entry(pgoff_t offset) { - int type = swp_type(entry); - return type == SWP_DEVICE_READ || type == SWP_DEVICE_WRITE; + return swp_entry(SWP_DEVICE_WRITE, offset); } -static inline void make_device_private_entry_read(swp_entry_t *entry) +static inline bool is_device_private_entry(swp_entry_t entry) { - *entry = swp_entry(SWP_DEVICE_READ, swp_offset(*entry)); + int type = swp_type(entry); + return type == SWP_DEVICE_READ || type == SWP_DEVICE_WRITE; } -static inline bool is_write_device_private_entry(swp_entry_t entry) +static inline bool is_writable_device_private_entry(swp_entry_t entry) { return unlikely(swp_type(entry) == SWP_DEVICE_WRITE); } #else /* CONFIG_DEVICE_PRIVATE */ -static inline swp_entry_t make_device_private_entry(struct page *page, bool write) +static inline swp_entry_t make_readable_device_private_entry(pgoff_t offset) { return swp_entry(0, 0); } -static inline void make_device_private_entry_read(swp_entry_t *entry) +static inline swp_entry_t make_writable_device_private_entry(pgoff_t offset) { + return swp_entry(0, 0); } static inline bool is_device_private_entry(swp_entry_t entry) @@ -136,35 +136,32 @@ static inline bool is_device_private_entry(swp_entry_t entry) return false; } -static inline bool is_write_device_private_entry(swp_entry_t entry) +static inline bool is_writable_device_private_entry(swp_entry_t entry) { return false; } #endif /* CONFIG_DEVICE_PRIVATE */ #ifdef CONFIG_MIGRATION -static inline swp_entry_t make_migration_entry(struct page *page, int write) -{ - BUG_ON(!PageLocked(compound_head(page))); - - return swp_entry(write ? 
SWP_MIGRATION_WRITE : SWP_MIGRATION_READ, - page_to_pfn(page)); -} - static inline int is_migration_entry(swp_entry_t entry) { return unlikely(swp_type(entry) == SWP_MIGRATION_READ || swp_type(entry) == SWP_MIGRATION_WRITE); } -static inline int is_write_migration_entry(swp_entry_t entry) +static inline int is_writable_migration_entry(swp_entry_t entry) { return unlikely(swp_type(entry) == SWP_MIGRATION_WRITE); } -static inline void make_migration_entry_read(swp_entry_t *entry) +static inline swp_entry_t make_readable_migration_entry(pgoff_t offset) { - *entry = swp_entry(SWP_MIGRATION_READ, swp_offset(*entry)); + return swp_entry(SWP_MIGRATION_READ, offset); +} + +static inline swp_entry_t make_writable_migration_entry(pgoff_t offset) +{ + return swp_entry(SWP_MIGRATION_WRITE, offset); } extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep, @@ -174,21 +171,28 @@ extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, extern void migration_entry_wait_huge(struct vm_area_struct *vma, struct mm_struct *mm, pte_t *pte); #else +static inline swp_entry_t make_readable_migration_entry(pgoff_t offset) +{ + return swp_entry(0, 0); +} + +static inline swp_entry_t make_writable_migration_entry(pgoff_t offset) +{ + return swp_entry(0, 0); +} -#define make_migration_entry(page, write) swp_entry(0, 0) static inline int is_migration_entry(swp_entry_t swp) { return 0; } -static inline void make_migration_entry_read(swp_entry_t *entryp) { } static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep, spinlock_t *ptl) { } static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, unsigned long address) { } static inline void migration_entry_wait_huge(struct vm_area_struct *vma, struct mm_struct *mm, pte_t *pte) { } -static inline int is_write_migration_entry(swp_entry_t entry) +static inline int is_writable_migration_entry(swp_entry_t entry) { return 0; } diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c index a9bd6ce1ba02..3697a80b32f8 100644 --- a/mm/debug_vm_pgtable.c +++ b/mm/debug_vm_pgtable.c @@ -817,17 +817,17 @@ static void __init swap_migration_tests(void) * locked, otherwise it stumbles upon a BUG_ON(). 
*/ __SetPageLocked(page); - swp = make_migration_entry(page, 1); + swp = make_writable_migration_entry(page_to_pfn(page)); WARN_ON(!is_migration_entry(swp)); - WARN_ON(!is_write_migration_entry(swp)); + WARN_ON(!is_writable_migration_entry(swp)); - make_migration_entry_read(&swp); + swp = make_readable_migration_entry(swp_offset(swp)); WARN_ON(!is_migration_entry(swp)); - WARN_ON(is_write_migration_entry(swp)); + WARN_ON(is_writable_migration_entry(swp)); - swp = make_migration_entry(page, 0); + swp = make_readable_migration_entry(page_to_pfn(page)); WARN_ON(!is_migration_entry(swp)); - WARN_ON(is_write_migration_entry(swp)); + WARN_ON(is_writable_migration_entry(swp)); __ClearPageLocked(page); __free_page(page); } diff --git a/mm/hmm.c b/mm/hmm.c index 3b2dda71d0ed..11df3ca30b82 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -255,7 +255,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, */ if (hmm_is_device_private_entry(range, entry)) { cpu_flags = HMM_PFN_VALID; - if (is_write_device_private_entry(entry)) + if (is_writable_device_private_entry(entry)) cpu_flags |= HMM_PFN_WRITE; *hmm_pfn = swp_offset(entry) | cpu_flags; return 0; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index a4cda8564bcf..89af065cea5b 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1051,8 +1051,9 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, swp_entry_t entry = pmd_to_swp_entry(pmd); VM_BUG_ON(!is_pmd_migration_entry(pmd)); - if (is_write_migration_entry(entry)) { - make_migration_entry_read(&entry); + if (is_writable_migration_entry(entry)) { + entry = make_readable_migration_entry( + swp_offset(entry)); pmd = swp_entry_to_pmd(entry); if (pmd_swp_soft_dirty(*src_pmd)) pmd = pmd_swp_mksoft_dirty(pmd); @@ -1825,13 +1826,14 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, swp_entry_t entry = pmd_to_swp_entry(*pmd); VM_BUG_ON(!is_pmd_migration_entry(*pmd)); - if (is_write_migration_entry(entry)) { + if (is_writable_migration_entry(entry)) { pmd_t newpmd; /* * A protection check is difficult so * just be safe and disable write */ - make_migration_entry_read(&entry); + entry = make_readable_migration_entry( + swp_offset(entry)); newpmd = swp_entry_to_pmd(entry); if (pmd_swp_soft_dirty(*pmd)) newpmd = pmd_swp_mksoft_dirty(newpmd); @@ -2109,7 +2111,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, entry = pmd_to_swp_entry(old_pmd); page = pfn_swap_entry_to_page(entry); - write = is_write_migration_entry(entry); + write = is_writable_migration_entry(entry); young = false; soft_dirty = pmd_swp_soft_dirty(old_pmd); uffd_wp = pmd_swp_uffd_wp(old_pmd); @@ -2141,7 +2143,12 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, */ if (freeze || pmd_migration) { swp_entry_t swp_entry; - swp_entry = make_migration_entry(page + i, write); + if (write) + swp_entry = make_writable_migration_entry( + page_to_pfn(page + i)); + else + swp_entry = make_readable_migration_entry( + page_to_pfn(page + i)); entry = swp_entry_to_pte(swp_entry); if (soft_dirty) entry = pte_swp_mksoft_dirty(entry); @@ -2998,7 +3005,10 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw, pmdval = pmdp_invalidate(vma, address, pvmw->pmd); if (pmd_dirty(pmdval)) set_page_dirty(page); - entry = make_migration_entry(page, pmd_write(pmdval)); + if (pmd_write(pmdval)) + entry = make_writable_migration_entry(page_to_pfn(page)); + else + entry = make_readable_migration_entry(page_to_pfn(page)); pmdswp = swp_entry_to_pmd(entry); if 
(pmd_soft_dirty(pmdval)) pmdswp = pmd_swp_mksoft_dirty(pmdswp); @@ -3024,7 +3034,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new) pmde = pmd_mkold(mk_huge_pmd(new, vma->vm_page_prot)); if (pmd_swp_soft_dirty(*pvmw->pmd)) pmde = pmd_mksoft_dirty(pmde); - if (is_write_migration_entry(entry)) + if (is_writable_migration_entry(entry)) pmde = maybe_pmd_mkwrite(pmde, vma); flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 8fb42c6dd74b..59645169839b 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3795,12 +3795,13 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, is_hugetlb_entry_hwpoisoned(entry))) { swp_entry_t swp_entry = pte_to_swp_entry(entry); - if (is_write_migration_entry(swp_entry) && cow) { + if (is_writable_migration_entry(swp_entry) && cow) { /* * COW mappings require pages in both * parent and child to be set to read. */ - make_migration_entry_read(&swp_entry); + swp_entry = make_readable_migration_entry( + swp_offset(swp_entry)); entry = swp_entry_to_pte(swp_entry); set_huge_swap_pte_at(src, addr, src_pte, entry, sz); @@ -4970,10 +4971,11 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma, if (unlikely(is_hugetlb_entry_migration(pte))) { swp_entry_t entry = pte_to_swp_entry(pte); - if (is_write_migration_entry(entry)) { + if (is_writable_migration_entry(entry)) { pte_t newpte; - make_migration_entry_read(&entry); + entry = make_readable_migration_entry( + swp_offset(entry)); newpte = swp_entry_to_pte(entry); set_huge_swap_pte_at(mm, address, ptep, newpte, huge_page_size(h)); diff --git a/mm/memory.c b/mm/memory.c index 1c98e3c1c2de..3a5705cfc891 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -734,13 +734,14 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, rss[mm_counter(page)]++; - if (is_write_migration_entry(entry) && + if (is_writable_migration_entry(entry) && is_cow_mapping(vm_flags)) { /* * COW mappings require pages in both * parent and child to be set to read. */ - make_migration_entry_read(&entry); + entry = make_readable_migration_entry( + swp_offset(entry)); pte = swp_entry_to_pte(entry); if (pte_swp_soft_dirty(*src_pte)) pte = pte_swp_mksoft_dirty(pte); @@ -771,9 +772,10 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, * when a device driver is involved (you cannot easily * save and restore device driver state). 
*/ - if (is_write_device_private_entry(entry) && + if (is_writable_device_private_entry(entry) && is_cow_mapping(vm_flags)) { - make_device_private_entry_read(&entry); + entry = make_readable_device_private_entry( + swp_offset(entry)); pte = swp_entry_to_pte(entry); if (pte_swp_uffd_wp(*src_pte)) pte = pte_swp_mkuffd_wp(pte); diff --git a/mm/migrate.c b/mm/migrate.c index 600978d18750..b752543adb64 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -237,13 +237,18 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, * Recheck VMA as permissions can change since migration started */ entry = pte_to_swp_entry(*pvmw.pte); - if (is_write_migration_entry(entry)) + if (is_writable_migration_entry(entry)) pte = maybe_mkwrite(pte, vma); else if (pte_swp_uffd_wp(*pvmw.pte)) pte = pte_mkuffd_wp(pte); if (unlikely(is_device_private_page(new))) { - entry = make_device_private_entry(new, pte_write(pte)); + if (pte_write(pte)) + entry = make_writable_device_private_entry( + page_to_pfn(new)); + else + entry = make_readable_device_private_entry( + page_to_pfn(new)); pte = swp_entry_to_pte(entry); if (pte_swp_soft_dirty(*pvmw.pte)) pte = pte_swp_mksoft_dirty(pte); @@ -2451,7 +2456,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, mpfn = migrate_pfn(page_to_pfn(page)) | MIGRATE_PFN_MIGRATE; - if (is_write_device_private_entry(entry)) + if (is_writable_device_private_entry(entry)) mpfn |= MIGRATE_PFN_WRITE; } else { if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) @@ -2497,8 +2502,12 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, ptep_get_and_clear(mm, addr, ptep); /* Setup special migration page table entry */ - entry = make_migration_entry(page, mpfn & - MIGRATE_PFN_WRITE); + if (mpfn & MIGRATE_PFN_WRITE) + entry = make_writable_migration_entry( + page_to_pfn(page)); + else + entry = make_readable_migration_entry( + page_to_pfn(page)); swp_pte = swp_entry_to_pte(entry); if (pte_present(pte)) { if (pte_soft_dirty(pte)) @@ -2971,7 +2980,12 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, if (is_device_private_page(page)) { swp_entry_t swp_entry; - swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE); + if (vma->vm_flags & VM_WRITE) + swp_entry = make_writable_device_private_entry( + page_to_pfn(page)); + else + swp_entry = make_readable_device_private_entry( + page_to_pfn(page)); entry = swp_entry_to_pte(swp_entry); } } else { diff --git a/mm/mprotect.c b/mm/mprotect.c index 94188df1ee55..f21b760ec809 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -143,23 +143,25 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, swp_entry_t entry = pte_to_swp_entry(oldpte); pte_t newpte; - if (is_write_migration_entry(entry)) { + if (is_writable_migration_entry(entry)) { /* * A protection check is difficult so * just be safe and disable write */ - make_migration_entry_read(&entry); + entry = make_readable_migration_entry( + swp_offset(entry)); newpte = swp_entry_to_pte(entry); if (pte_swp_soft_dirty(oldpte)) newpte = pte_swp_mksoft_dirty(newpte); if (pte_swp_uffd_wp(oldpte)) newpte = pte_swp_mkuffd_wp(newpte); - } else if (is_write_device_private_entry(entry)) { + } else if (is_writable_device_private_entry(entry)) { /* * We do not preserve soft-dirtiness. See * copy_one_pte() for explanation. 
 */
-			make_device_private_entry_read(&entry);
+			entry = make_readable_device_private_entry(
+							swp_offset(entry));
 			newpte = swp_entry_to_pte(entry);
 			if (pte_swp_uffd_wp(oldpte))
 				newpte = pte_swp_mkuffd_wp(newpte);
diff --git a/mm/rmap.c b/mm/rmap.c
index b0fc27e77d6d..977e70803ed8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1526,7 +1526,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		 * pte. do_swap_page() will wait until the migration
 		 * pte is removed and then restart fault handling.
 		 */
-		entry = make_migration_entry(page, 0);
+		entry = make_readable_migration_entry(page_to_pfn(page));
 		swp_pte = swp_entry_to_pte(entry);
 		/*
@@ -1622,8 +1622,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		 * pte. do_swap_page() will wait until the migration
 		 * pte is removed and then restart fault handling.
 		 */
-		entry = make_migration_entry(subpage,
-					pte_write(pteval));
+		if (pte_write(pteval))
+			entry = make_writable_migration_entry(
+							page_to_pfn(subpage));
+		else
+			entry = make_readable_migration_entry(
+							page_to_pfn(subpage));
 		swp_pte = swp_entry_to_pte(entry);
 		if (pte_soft_dirty(pteval))
 			swp_pte = pte_swp_mksoft_dirty(swp_pte);
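The interface change in this patch, as a minimal sketch (not part of the patch; the wrapper function is hypothetical): read versus write entry creation is now explicit at the call site rather than selected by a boolean flag, and the helpers take a pfn/offset instead of a struct page:

/* Sketch only: assumes a kernel tree with this patch applied. */
#include <linux/mm.h>
#include <linux/swapops.h>

static swp_entry_t example_make_migration_entry(struct page *page, bool writable)
{
	/*
	 * Previously make_migration_entry(page, write) chose between the
	 * read and write entry types based on a flag; the new helpers name
	 * the variant and take the pfn directly.
	 */
	if (writable)
		return make_writable_migration_entry(page_to_pfn(page));

	return make_readable_migration_entry(page_to_pfn(page));
}

The device private helpers follow the same pattern with make_readable_device_private_entry() and make_writable_device_private_entry().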
From patchwork Fri Mar 12 08:38:46 2021
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 1451764
From: Alistair Popple
Subject: [PATCH v6 3/8] mm/rmap: Split try_to_munlock from try_to_unmap
Date: Fri, 12 Mar 2021 19:38:46 +1100
Message-ID: <20210312083851.15981-4-apopple@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210312083851.15981-1-apopple@nvidia.com>
References: <20210312083851.15981-1-apopple@nvidia.com>
Precedence: bulk
X-Mailing-List: kvm-ppc@vger.kernel.org

The behaviour of try_to_unmap_one() is difficult to follow because it
performs different operations based on a fairly large set of flags used in
different combinations. TTU_MUNLOCK is one such flag. However, it is used
exclusively by try_to_munlock(), which specifies no other flags. Therefore,
rather than overload try_to_unmap_one() with unrelated behaviour, split this
out into its own function and remove the flag.

Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
Christoph - I didn't add your Reviewed-by from v3 because removal of the
extra VM_LOCKED check in v4 changed things slightly. Let me know if you're
still ok for me to add it. Thanks.
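In practical terms (a sketch only; the wrapper below is hypothetical and the behaviour described is taken from the diff that follows), munlock no longer routes through try_to_unmap_one() with a special flag:

/* Sketch only: assumes a kernel tree with this patch applied. */
#include <linux/rmap.h>

static void example_munlock_page(struct page *page)
{
	/*
	 * Before this patch try_to_munlock() walked the rmap with
	 * rmap_one = try_to_unmap_one and arg = (void *)TTU_MUNLOCK.
	 * After it, the walk uses the dedicated try_to_munlock_one()
	 * callback and TTU_MUNLOCK is removed from enum ttu_flags.
	 */
	try_to_munlock(page);
}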
v4: * Removed redundant check for VM_LOCKED --- include/linux/rmap.h | 1 - mm/rmap.c | 40 ++++++++++++++++++++++++++++++++-------- 2 files changed, 32 insertions(+), 9 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index def5c62c93b3..e26ac2d71346 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -87,7 +87,6 @@ struct anon_vma_chain { enum ttu_flags { TTU_MIGRATION = 0x1, /* migration mode */ - TTU_MUNLOCK = 0x2, /* munlock mode */ TTU_SPLIT_HUGE_PMD = 0x4, /* split huge PMD if any */ TTU_IGNORE_MLOCK = 0x8, /* ignore mlock */ diff --git a/mm/rmap.c b/mm/rmap.c index 977e70803ed8..d02bade5245b 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1405,10 +1405,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, struct mmu_notifier_range range; enum ttu_flags flags = (enum ttu_flags)(long)arg; - /* munlock has nothing to gain from examining un-locked vmas */ - if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED)) - return true; - if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) && is_zone_device_page(page) && !is_device_private_page(page)) return true; @@ -1469,8 +1465,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, page_vma_mapped_walk_done(&pvmw); break; } - if (flags & TTU_MUNLOCK) - continue; } /* Unexpected PMD-mapped THP? */ @@ -1784,6 +1778,37 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags) return !page_mapcount(page) ? true : false; } +static bool try_to_munlock_one(struct page *page, struct vm_area_struct *vma, + unsigned long address, void *arg) +{ + struct page_vma_mapped_walk pvmw = { + .page = page, + .vma = vma, + .address = address, + }; + + /* munlock has nothing to gain from examining un-locked vmas */ + if (!(vma->vm_flags & VM_LOCKED)) + return true; + + while (page_vma_mapped_walk(&pvmw)) { + /* PTE-mapped THP are never mlocked */ + if (!PageTransCompound(page)) { + /* + * Holding pte lock, we do *not* need + * mmap_lock here + */ + mlock_vma_page(page); + } + page_vma_mapped_walk_done(&pvmw); + + /* found a mlocked page, no point continuing munlock check */ + return false; + } + + return true; +} + /** * try_to_munlock - try to munlock a page * @page: the page to be munlocked @@ -1796,8 +1821,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags) void try_to_munlock(struct page *page) { struct rmap_walk_control rwc = { - .rmap_one = try_to_unmap_one, - .arg = (void *)TTU_MUNLOCK, + .rmap_one = try_to_munlock_one, .done = page_not_mapped, .anon_lock = page_lock_anon_vma_read, From patchwork Fri Mar 12 08:38:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 1451763 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256 header.s=selector2 header.b=aw/GpXC+; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4DxfQN5PJvz9sXh for ; Fri, 12 Mar 2021 19:40:12 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232473AbhCLIjk (ORCPT ); Fri, 12 Mar 2021 03:39:40 
From: Alistair Popple
Subject: [PATCH v6 4/8] mm/rmap: Split migration into its own function
Date: Fri, 12 Mar 2021 19:38:47 +1100
Message-ID: <20210312083851.15981-5-apopple@nvidia.com>
In-Reply-To: <20210312083851.15981-1-apopple@nvidia.com>
References: <20210312083851.15981-1-apopple@nvidia.com>
List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org

Migration is currently implemented as a mode of operation for try_to_unmap_one(), generally specified by passing the TTU_MIGRATION flag or, in the case of splitting a huge anonymous page, TTU_SPLIT_FREEZE. However, it does not have much in common with the rest of the unmap functionality of try_to_unmap_one(), so splitting it into a separate function reduces the complexity of try_to_unmap_one() and makes it more readable. Several simplifications can also be made in try_to_migrate_one() based on the following observations:
- All users of TTU_MIGRATION also set TTU_IGNORE_MLOCK.
- No users of TTU_MIGRATION ever set TTU_IGNORE_HWPOISON.
- No users of TTU_MIGRATION ever set TTU_BATCH_FLUSH. TTU_SPLIT_FREEZE is a special case of migration used when splitting an anonymous page. This is most easily dealt with by calling the correct function from unmap_page() in mm/huge_memory.c - either try_to_migrate() for PageAnon or try_to_unmap(). Signed-off-by: Alistair Popple Reviewed-by: Christoph Hellwig Reviewed-by: Ralph Campbell --- v5: * Added comments about how PMD splitting works for migration vs. unmapping * Tightened up the flag check in try_to_migrate() to be explicit about which TTU_XXX flags are supported. --- include/linux/rmap.h | 4 +- mm/huge_memory.c | 15 +- mm/migrate.c | 9 +- mm/rmap.c | 358 ++++++++++++++++++++++++++++++++----------- 4 files changed, 280 insertions(+), 106 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index e26ac2d71346..6062e0cfca2d 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -86,8 +86,6 @@ struct anon_vma_chain { }; enum ttu_flags { - TTU_MIGRATION = 0x1, /* migration mode */ - TTU_SPLIT_HUGE_PMD = 0x4, /* split huge PMD if any */ TTU_IGNORE_MLOCK = 0x8, /* ignore mlock */ TTU_IGNORE_HWPOISON = 0x20, /* corrupted page is recoverable */ @@ -96,7 +94,6 @@ enum ttu_flags { * do a final flush if necessary */ TTU_RMAP_LOCKED = 0x80, /* do not grab rmap lock: * caller holds it */ - TTU_SPLIT_FREEZE = 0x100, /* freeze pte under splitting thp */ }; #ifdef CONFIG_MMU @@ -193,6 +190,7 @@ static inline void page_dup_rmap(struct page *page, bool compound) int page_referenced(struct page *, int is_locked, struct mem_cgroup *memcg, unsigned long *vm_flags); +bool try_to_migrate(struct page *page, enum ttu_flags flags); bool try_to_unmap(struct page *, enum ttu_flags flags); /* Avoid racy checks */ diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 89af065cea5b..eab004331b97 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2357,16 +2357,21 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma, static void unmap_page(struct page *page) { - enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | - TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD; + enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD; bool unmap_success; VM_BUG_ON_PAGE(!PageHead(page), page); if (PageAnon(page)) - ttu_flags |= TTU_SPLIT_FREEZE; - - unmap_success = try_to_unmap(page, ttu_flags); + unmap_success = try_to_migrate(page, ttu_flags); + else + /* + * Don't install migration entries for file backed pages. This + * helps handle cases when i_size is in the middle of the page + * as there is no need to unmap pages beyond i_size manually. 
+ */ + unmap_success = try_to_unmap(page, ttu_flags | + TTU_IGNORE_MLOCK); VM_BUG_ON_PAGE(!unmap_success, page); } diff --git a/mm/migrate.c b/mm/migrate.c index b752543adb64..cc4612e2a246 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1130,7 +1130,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage, /* Establish migration ptes */ VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma, page); - try_to_unmap(page, TTU_MIGRATION|TTU_IGNORE_MLOCK); + try_to_migrate(page, 0); page_was_mapped = 1; } @@ -1332,7 +1332,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page, if (page_mapped(hpage)) { bool mapping_locked = false; - enum ttu_flags ttu = TTU_MIGRATION|TTU_IGNORE_MLOCK; + enum ttu_flags ttu = 0; if (!PageAnon(hpage)) { /* @@ -1349,7 +1349,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page, ttu |= TTU_RMAP_LOCKED; } - try_to_unmap(hpage, ttu); + try_to_migrate(hpage, ttu); page_was_mapped = 1; if (mapping_locked) @@ -2756,7 +2756,6 @@ static void migrate_vma_prepare(struct migrate_vma *migrate) */ static void migrate_vma_unmap(struct migrate_vma *migrate) { - int flags = TTU_MIGRATION | TTU_IGNORE_MLOCK; const unsigned long npages = migrate->npages; const unsigned long start = migrate->start; unsigned long addr, i, restore = 0; @@ -2768,7 +2767,7 @@ static void migrate_vma_unmap(struct migrate_vma *migrate) continue; if (page_mapped(page)) { - try_to_unmap(page, flags); + try_to_migrate(page, 0); if (page_mapped(page)) goto restore; } diff --git a/mm/rmap.c b/mm/rmap.c index d02bade5245b..b540b44e299a 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1405,14 +1405,8 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, struct mmu_notifier_range range; enum ttu_flags flags = (enum ttu_flags)(long)arg; - if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) && - is_zone_device_page(page) && !is_device_private_page(page)) - return true; - - if (flags & TTU_SPLIT_HUGE_PMD) { - split_huge_pmd_address(vma, address, - flags & TTU_SPLIT_FREEZE, page); - } + if (flags & TTU_SPLIT_HUGE_PMD) + split_huge_pmd_address(vma, address, false, page); /* * For THP, we have to assume the worse case ie pmd for invalidation. @@ -1436,16 +1430,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, mmu_notifier_invalidate_range_start(&range); while (page_vma_mapped_walk(&pvmw)) { -#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION - /* PMD-mapped THP migration entry */ - if (!pvmw.pte && (flags & TTU_MIGRATION)) { - VM_BUG_ON_PAGE(PageHuge(page) || !PageTransCompound(page), page); - - set_pmd_migration_entry(&pvmw, page); - continue; - } -#endif - /* * If the page is mlock()d, we cannot swap it out. * If it's recently referenced (perhaps page_referenced @@ -1507,46 +1491,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, } } - if (IS_ENABLED(CONFIG_MIGRATION) && - (flags & TTU_MIGRATION) && - is_zone_device_page(page)) { - swp_entry_t entry; - pte_t swp_pte; - - pteval = ptep_get_and_clear(mm, pvmw.address, pvmw.pte); - - /* - * Store the pfn of the page in a special migration - * pte. do_swap_page() will wait until the migration - * pte is removed and then restart fault handling. - */ - entry = make_readable_migration_entry(page_to_pfn(page)); - swp_pte = swp_entry_to_pte(entry); - - /* - * pteval maps a zone device page and is therefore - * a swap pte. 
- */ - if (pte_swp_soft_dirty(pteval)) - swp_pte = pte_swp_mksoft_dirty(swp_pte); - if (pte_swp_uffd_wp(pteval)) - swp_pte = pte_swp_mkuffd_wp(swp_pte); - set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte); - /* - * No need to invalidate here it will synchronize on - * against the special swap migration pte. - * - * The assignment to subpage above was computed from a - * swap PTE which results in an invalid pointer. - * Since only PAGE_SIZE pages can currently be - * migrated, just set it to page. This will need to be - * changed when hugepage migrations to device private - * memory are supported. - */ - subpage = page; - goto discard; - } - /* Nuke the page table entry. */ flush_cache_page(vma, address, pte_pfn(*pvmw.pte)); if (should_defer_flush(mm, flags)) { @@ -1599,39 +1543,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, /* We have to invalidate as we cleared the pte */ mmu_notifier_invalidate_range(mm, address, address + PAGE_SIZE); - } else if (IS_ENABLED(CONFIG_MIGRATION) && - (flags & (TTU_MIGRATION|TTU_SPLIT_FREEZE))) { - swp_entry_t entry; - pte_t swp_pte; - - if (arch_unmap_one(mm, vma, address, pteval) < 0) { - set_pte_at(mm, address, pvmw.pte, pteval); - ret = false; - page_vma_mapped_walk_done(&pvmw); - break; - } - - /* - * Store the pfn of the page in a special migration - * pte. do_swap_page() will wait until the migration - * pte is removed and then restart fault handling. - */ - if (pte_write(pteval)) - entry = make_writable_migration_entry( - page_to_pfn(subpage)); - else - entry = make_readable_migration_entry( - page_to_pfn(subpage)); - swp_pte = swp_entry_to_pte(entry); - if (pte_soft_dirty(pteval)) - swp_pte = pte_swp_mksoft_dirty(swp_pte); - if (pte_uffd_wp(pteval)) - swp_pte = pte_swp_mkuffd_wp(swp_pte); - set_pte_at(mm, address, pvmw.pte, swp_pte); - /* - * No need to invalidate here it will synchronize on - * against the special swap migration pte. - */ } else if (PageAnon(page)) { swp_entry_t entry = { .val = page_private(subpage) }; pte_t swp_pte; @@ -1758,6 +1669,268 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags) .anon_lock = page_lock_anon_vma_read, }; + if (flags & TTU_RMAP_LOCKED) + rmap_walk_locked(page, &rwc); + else + rmap_walk(page, &rwc); + + return !page_mapcount(page) ? true : false; +} + +/* + * @arg: enum ttu_flags will be passed to this argument. + * + * If TTU_SPLIT_HUGE_PMD is specified any PMD mappings will be split into PTEs + * containing migration entries. This and TTU_RMAP_LOCKED are the only supported + * flags. + */ +static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma, + unsigned long address, void *arg) +{ + struct mm_struct *mm = vma->vm_mm; + struct page_vma_mapped_walk pvmw = { + .page = page, + .vma = vma, + .address = address, + }; + pte_t pteval; + struct page *subpage; + bool ret = true; + struct mmu_notifier_range range; + enum ttu_flags flags = (enum ttu_flags)(long)arg; + + if (is_zone_device_page(page) && !is_device_private_page(page)) + return true; + + /* + * unmap_page() in mm/huge_memory.c is the only user of migration with + * TTU_SPLIT_HUGE_PMD and it wants to freeze. + */ + if (flags & TTU_SPLIT_HUGE_PMD) + split_huge_pmd_address(vma, address, true, page); + + /* + * For THP, we have to assume the worse case ie pmd for invalidation. + * For hugetlb, it could be much worse if we need to do pud + * invalidation in the case of pmd sharing. 
+ * + * Note that the page can not be free in this function as call of + * try_to_unmap() must hold a reference on the page. + */ + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, + address, + min(vma->vm_end, address + page_size(page))); + if (PageHuge(page)) { + /* + * If sharing is possible, start and end will be adjusted + * accordingly. + */ + adjust_range_if_pmd_sharing_possible(vma, &range.start, + &range.end); + } + mmu_notifier_invalidate_range_start(&range); + + while (page_vma_mapped_walk(&pvmw)) { +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION + /* PMD-mapped THP migration entry */ + if (!pvmw.pte) { + VM_BUG_ON_PAGE(PageHuge(page) || + !PageTransCompound(page), page); + + set_pmd_migration_entry(&pvmw, page); + continue; + } +#endif + + /* Unexpected PMD-mapped THP? */ + VM_BUG_ON_PAGE(!pvmw.pte, page); + + subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte); + address = pvmw.address; + + if (PageHuge(page) && !PageAnon(page)) { + /* + * To call huge_pmd_unshare, i_mmap_rwsem must be + * held in write mode. Caller needs to explicitly + * do this outside rmap routines. + */ + VM_BUG_ON(!(flags & TTU_RMAP_LOCKED)); + if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) { + /* + * huge_pmd_unshare unmapped an entire PMD + * page. There is no way of knowing exactly + * which PMDs may be cached for this mm, so + * we must flush them all. start/end were + * already adjusted above to cover this range. + */ + flush_cache_range(vma, range.start, range.end); + flush_tlb_range(vma, range.start, range.end); + mmu_notifier_invalidate_range(mm, range.start, + range.end); + + /* + * The ref count of the PMD page was dropped + * which is part of the way map counting + * is done for shared PMDs. Return 'true' + * here. When there is no other sharing, + * huge_pmd_unshare returns false and we will + * unmap the actual page and drop map count + * to zero. + */ + page_vma_mapped_walk_done(&pvmw); + break; + } + } + + /* Nuke the page table entry. */ + flush_cache_page(vma, address, pte_pfn(*pvmw.pte)); + pteval = ptep_clear_flush(vma, address, pvmw.pte); + + /* Move the dirty bit to the page. Now the pte is gone. */ + if (pte_dirty(pteval)) + set_page_dirty(page); + + /* Update high watermark before we lower rss */ + update_hiwater_rss(mm); + + if (is_zone_device_page(page)) { + swp_entry_t entry; + pte_t swp_pte; + + /* + * Store the pfn of the page in a special migration + * pte. do_swap_page() will wait until the migration + * pte is removed and then restart fault handling. + */ + entry = make_readable_migration_entry( + page_to_pfn(page)); + swp_pte = swp_entry_to_pte(entry); + + /* + * pteval maps a zone device page and is therefore + * a swap pte. + */ + if (pte_swp_soft_dirty(pteval)) + swp_pte = pte_swp_mksoft_dirty(swp_pte); + if (pte_swp_uffd_wp(pteval)) + swp_pte = pte_swp_mkuffd_wp(swp_pte); + set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte); + /* + * No need to invalidate here it will synchronize on + * against the special swap migration pte. + * + * The assignment to subpage above was computed from a + * swap PTE which results in an invalid pointer. + * Since only PAGE_SIZE pages can currently be + * migrated, just set it to page. This will need to be + * changed when hugepage migrations to device private + * memory are supported. 
+ */ + subpage = page; + } else if (PageHWPoison(page)) { + pteval = swp_entry_to_pte(make_hwpoison_entry(subpage)); + if (PageHuge(page)) { + hugetlb_count_sub(compound_nr(page), mm); + set_huge_swap_pte_at(mm, address, + pvmw.pte, pteval, + vma_mmu_pagesize(vma)); + } else { + dec_mm_counter(mm, mm_counter(page)); + set_pte_at(mm, address, pvmw.pte, pteval); + } + + } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) { + /* + * The guest indicated that the page content is of no + * interest anymore. Simply discard the pte, vmscan + * will take care of the rest. + * A future reference will then fault in a new zero + * page. When userfaultfd is active, we must not drop + * this page though, as its main user (postcopy + * migration) will not expect userfaults on already + * copied pages. + */ + dec_mm_counter(mm, mm_counter(page)); + /* We have to invalidate as we cleared the pte */ + mmu_notifier_invalidate_range(mm, address, + address + PAGE_SIZE); + } else { + swp_entry_t entry; + pte_t swp_pte; + + if (arch_unmap_one(mm, vma, address, pteval) < 0) { + set_pte_at(mm, address, pvmw.pte, pteval); + ret = false; + page_vma_mapped_walk_done(&pvmw); + break; + } + + /* + * Store the pfn of the page in a special migration + * pte. do_swap_page() will wait until the migration + * pte is removed and then restart fault handling. + */ + if (pte_write(pteval)) + entry = make_writable_migration_entry( + page_to_pfn(subpage)); + else + entry = make_readable_migration_entry( + page_to_pfn(subpage)); + + swp_pte = swp_entry_to_pte(entry); + if (pte_soft_dirty(pteval)) + swp_pte = pte_swp_mksoft_dirty(swp_pte); + if (pte_uffd_wp(pteval)) + swp_pte = pte_swp_mkuffd_wp(swp_pte); + set_pte_at(mm, address, pvmw.pte, swp_pte); + /* + * No need to invalidate here it will synchronize on + * against the special swap migration pte. + */ + } + + /* + * No need to call mmu_notifier_invalidate_range() it has be + * done above for all cases requiring it to happen under page + * table lock before mmu_notifier_invalidate_range_end() + * + * See Documentation/vm/mmu_notifier.rst + */ + page_remove_rmap(subpage, PageHuge(page)); + put_page(page); + } + + mmu_notifier_invalidate_range_end(&range); + + return ret; +} + +/** + * try_to_migrate - try to replace all page table mappings with swap entries + * @page: the page to replace page table entries for + * @flags: action and flags + * + * Tries to remove all the page table entries which are mapping this page and + * replace them with special swap entries. Caller must hold the page lock. + * + * If is successful, return true. Otherwise, false. + */ +bool try_to_migrate(struct page *page, enum ttu_flags flags) +{ + struct rmap_walk_control rwc = { + .rmap_one = try_to_migrate_one, + .arg = (void *)flags, + .done = page_not_mapped, + .anon_lock = page_lock_anon_vma_read, + }; + + /* + * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and + * TTU_SPLIT_HUGE_PMD flags. + */ + if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD))) + return false; + /* * During exec, a temporary VMA is setup and later moved. * The VMA is moved under the anon_vma lock but not the @@ -1766,8 +1939,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags) * locking requirements of exec(), migration skips * temporary VMAs until after exec() completes. 
*/ - if ((flags & (TTU_MIGRATION|TTU_SPLIT_FREEZE)) - && !PageKsm(page) && PageAnon(page)) + if (!PageKsm(page) && PageAnon(page)) rwc.invalid_vma = invalid_migration_vma; if (flags & TTU_RMAP_LOCKED)
From patchwork Fri Mar 12 08:38:48 2021
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 1451765
From: Alistair Popple
Subject: [PATCH v6 5/8] mm: Device exclusive memory access
Date: Fri, 12 Mar 2021 19:38:48 +1100
Message-ID: <20210312083851.15981-6-apopple@nvidia.com>
In-Reply-To: <20210312083851.15981-1-apopple@nvidia.com>
References: <20210312083851.15981-1-apopple@nvidia.com>
43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.112.34];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: DM6NAM11FT013.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4747 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Some devices require exclusive write access to shared virtual memory (SVM) ranges to perform atomic operations on that memory. This requires CPU page tables to be updated to deny access whilst atomic operations are occurring. In order to do this introduce a new swap entry type (SWP_DEVICE_EXCLUSIVE). When a SVM range needs to be marked for exclusive access by a device all page table mappings for the particular range are replaced with device exclusive swap entries. This causes any CPU access to the page to result in a fault. Faults are resovled by replacing the faulting entry with the original mapping. This results in MMU notifiers being called which a driver uses to update access permissions such as revoking atomic access. After notifiers have been called the device will no longer have exclusive access to the region. Signed-off-by: Alistair Popple Reviewed-by: Christoph Hellwig --- v6: * Fixed a bisectablity issue due to incorrectly applying the rename of migrate_pgmap_owner to the wrong patches for Nouveau and hmm_test. v5: * Renamed range->migrate_pgmap_owner to range->owner. * Added MMU_NOTIFY_EXCLUSIVE to allow passing of a driver cookie which allows notifiers called as a result of make_device_exclusive_range() to be ignored. * Added a check to try_to_protect_one() to detect if the pages originally returned from get_user_pages() have been unmapped or not. * Removed check_device_exclusive_range() as it is no longer required with the other changes. * Documentation update. v4: * Add function to check that mappings are still valid and exclusive. * s/long/unsigned long/ in make_device_exclusive_entry(). --- Documentation/vm/hmm.rst | 19 ++- drivers/gpu/drm/nouveau/nouveau_svm.c | 2 +- include/linux/mmu_notifier.h | 25 +++- include/linux/rmap.h | 4 + include/linux/swap.h | 4 +- include/linux/swapops.h | 44 +++++- lib/test_hmm.c | 2 +- mm/hmm.c | 5 + mm/memory.c | 107 +++++++++++++- mm/mprotect.c | 8 + mm/page_vma_mapped.c | 9 +- mm/rmap.c | 203 ++++++++++++++++++++++++++ 12 files changed, 419 insertions(+), 13 deletions(-) diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst index 09e28507f5b2..a5fdee82c037 100644 --- a/Documentation/vm/hmm.rst +++ b/Documentation/vm/hmm.rst @@ -332,7 +332,7 @@ between device driver specific code and shared common code: walks to fill in the ``args->src`` array with PFNs to be migrated. The ``invalidate_range_start()`` callback is passed a ``struct mmu_notifier_range`` with the ``event`` field set to - ``MMU_NOTIFY_MIGRATE`` and the ``migrate_pgmap_owner`` field set to + ``MMU_NOTIFY_MIGRATE`` and the ``owner`` field set to the ``args->pgmap_owner`` field passed to migrate_vma_setup(). This is allows the device driver to skip the invalidation callback and only invalidate device private MMU mappings that are actually migrating. @@ -405,6 +405,23 @@ between device driver specific code and shared common code: The lock can now be released. +Exclusive access memory +======================= + +Not all devices support atomic access to system memory. 
To support atomic +operations to a shared virtual memory page such a device needs access to that +page which is exclusive of any userspace access from the CPU. The +``make_device_exclusive_range()`` function can be used to make a memory range +inaccessible from userspace. + +This replaces all mappings for pages in the given range with special swap +entries. Any attempt to access the swap entry results in a fault which is +resovled by replacing the entry with the original mapping. A driver gets +notified that the mapping has been changed by MMU notifiers, after which point +it will no longer have exclusive access to the page. Exclusive access is +guranteed to last until the driver drops the page lock and page reference, at +which point any CPU faults on the page may proceed as described. + Memory cgroup (memcg) and rss accounting ======================================== diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index f18bd53da052..94f841026c3b 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -265,7 +265,7 @@ nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn, * the invalidation is handled as part of the migration process. */ if (update->event == MMU_NOTIFY_MIGRATE && - update->migrate_pgmap_owner == svmm->vmm->cli->drm->dev) + update->owner == svmm->vmm->cli->drm->dev) goto out; if (limit > svmm->unmanaged.start && start < svmm->unmanaged.limit) { diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h index b8200782dede..455e269bf825 100644 --- a/include/linux/mmu_notifier.h +++ b/include/linux/mmu_notifier.h @@ -41,7 +41,12 @@ struct mmu_interval_notifier; * * @MMU_NOTIFY_MIGRATE: used during migrate_vma_collect() invalidate to signal * a device driver to possibly ignore the invalidation if the - * migrate_pgmap_owner field matches the driver's device private pgmap owner. + * owner field matches the driver's device private pgmap owner. + * + * @MMU_NOTIFY_EXCLUSIVE: to signal a device driver that the device will no + * longer have exclusive access to the page. May ignore the invalidation that's + * part of make_device_exclusive_range() if the owner field + * matches the value passed to make_device_exclusive_range(). 
*/ enum mmu_notifier_event { MMU_NOTIFY_UNMAP = 0, @@ -51,6 +56,7 @@ enum mmu_notifier_event { MMU_NOTIFY_SOFT_DIRTY, MMU_NOTIFY_RELEASE, MMU_NOTIFY_MIGRATE, + MMU_NOTIFY_EXCLUSIVE, }; #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0) @@ -269,7 +275,7 @@ struct mmu_notifier_range { unsigned long end; unsigned flags; enum mmu_notifier_event event; - void *migrate_pgmap_owner; + void *owner; }; static inline int mm_has_notifiers(struct mm_struct *mm) @@ -528,7 +534,17 @@ static inline void mmu_notifier_range_init_migrate( { mmu_notifier_range_init(range, MMU_NOTIFY_MIGRATE, flags, vma, mm, start, end); - range->migrate_pgmap_owner = pgmap; + range->owner = pgmap; +} + +static inline void mmu_notifier_range_init_exclusive( + struct mmu_notifier_range *range, unsigned int flags, + struct vm_area_struct *vma, struct mm_struct *mm, + unsigned long start, unsigned long end, void *owner) +{ + mmu_notifier_range_init(range, MMU_NOTIFY_EXCLUSIVE, flags, vma, mm, + start, end); + range->owner = owner; } #define ptep_clear_flush_young_notify(__vma, __address, __ptep) \ @@ -658,6 +674,9 @@ static inline void _mmu_notifier_range_init(struct mmu_notifier_range *range, #define mmu_notifier_range_init_migrate(range, flags, vma, mm, start, end, \ pgmap) \ _mmu_notifier_range_init(range, start, end) +#define mmu_notifier_range_init_exclusive(range, flags, vma, mm, start, end, \ + owner) \ + _mmu_notifier_range_init(range, start, end) static inline bool mmu_notifier_range_blockable(const struct mmu_notifier_range *range) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 6062e0cfca2d..b207c138cbff 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -193,6 +193,10 @@ int page_referenced(struct page *, int is_locked, bool try_to_migrate(struct page *page, enum ttu_flags flags); bool try_to_unmap(struct page *, enum ttu_flags flags); +int make_device_exclusive_range(struct mm_struct *mm, unsigned long start, + unsigned long end, struct page **pages, + void *arg); + /* Avoid racy checks */ #define PVMW_SYNC (1 << 0) /* Look for migarion entries rather than present PTEs */ diff --git a/include/linux/swap.h b/include/linux/swap.h index 516104b9334b..7a3c260146df 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -63,9 +63,11 @@ static inline int current_is_kswapd(void) * to a special SWP_DEVICE_* entry. 
*/ #ifdef CONFIG_DEVICE_PRIVATE -#define SWP_DEVICE_NUM 2 +#define SWP_DEVICE_NUM 4 #define SWP_DEVICE_WRITE (MAX_SWAPFILES+SWP_HWPOISON_NUM+SWP_MIGRATION_NUM) #define SWP_DEVICE_READ (MAX_SWAPFILES+SWP_HWPOISON_NUM+SWP_MIGRATION_NUM+1) +#define SWP_DEVICE_EXCLUSIVE_WRITE (MAX_SWAPFILES+SWP_HWPOISON_NUM+SWP_MIGRATION_NUM+2) +#define SWP_DEVICE_EXCLUSIVE_READ (MAX_SWAPFILES+SWP_HWPOISON_NUM+SWP_MIGRATION_NUM+3) #else #define SWP_DEVICE_NUM 0 #endif diff --git a/include/linux/swapops.h b/include/linux/swapops.h index 4dfd807ae52a..4129bd2ff9d6 100644 --- a/include/linux/swapops.h +++ b/include/linux/swapops.h @@ -120,6 +120,27 @@ static inline bool is_writable_device_private_entry(swp_entry_t entry) { return unlikely(swp_type(entry) == SWP_DEVICE_WRITE); } + +static inline swp_entry_t make_readable_device_exclusive_entry(pgoff_t offset) +{ + return swp_entry(SWP_DEVICE_EXCLUSIVE_READ, offset); +} + +static inline swp_entry_t make_writable_device_exclusive_entry(pgoff_t offset) +{ + return swp_entry(SWP_DEVICE_EXCLUSIVE_WRITE, offset); +} + +static inline bool is_device_exclusive_entry(swp_entry_t entry) +{ + return swp_type(entry) == SWP_DEVICE_EXCLUSIVE_READ || + swp_type(entry) == SWP_DEVICE_EXCLUSIVE_WRITE; +} + +static inline bool is_writable_device_exclusive_entry(swp_entry_t entry) +{ + return unlikely(swp_type(entry) == SWP_DEVICE_EXCLUSIVE_WRITE); +} #else /* CONFIG_DEVICE_PRIVATE */ static inline swp_entry_t make_readable_device_private_entry(pgoff_t offset) { @@ -140,6 +161,26 @@ static inline bool is_writable_device_private_entry(swp_entry_t entry) { return false; } + +static inline swp_entry_t make_readable_device_exclusive_entry(pgoff_t offset) +{ + return swp_entry(0, 0); +} + +static inline swp_entry_t make_writable_device_exclusive_entry(pgoff_t offset) +{ + return swp_entry(0, 0); +} + +static inline bool is_device_exclusive_entry(swp_entry_t entry) +{ + return false; +} + +static inline bool is_writable_device_exclusive_entry(swp_entry_t entry) +{ + return false; +} #endif /* CONFIG_DEVICE_PRIVATE */ #ifdef CONFIG_MIGRATION @@ -219,7 +260,8 @@ static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry) */ static inline bool is_pfn_swap_entry(swp_entry_t entry) { - return is_migration_entry(entry) || is_device_private_entry(entry); + return is_migration_entry(entry) || is_device_private_entry(entry) || + is_device_exclusive_entry(entry); } struct page_vma_mapped_walk; diff --git a/lib/test_hmm.c b/lib/test_hmm.c index 80a78877bd93..5c9f5a020c1d 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -218,7 +218,7 @@ static bool dmirror_interval_invalidate(struct mmu_interval_notifier *mni, * the invalidation is handled as part of the migration process. 
*/ if (range->event == MMU_NOTIFY_MIGRATE && - range->migrate_pgmap_owner == dmirror->mdevice) + range->owner == dmirror->mdevice) return true; if (mmu_notifier_range_blockable(range)) diff --git a/mm/hmm.c b/mm/hmm.c index 11df3ca30b82..fad6be2bf072 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -26,6 +26,8 @@ #include #include +#include "internal.h" + struct hmm_vma_walk { struct hmm_range *range; unsigned long last; @@ -271,6 +273,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, if (!non_swap_entry(entry)) goto fault; + if (is_device_exclusive_entry(entry)) + goto fault; + if (is_migration_entry(entry)) { pte_unmap(ptep); hmm_vma_walk->last = addr; diff --git a/mm/memory.c b/mm/memory.c index 3a5705cfc891..6e634104b9ae 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -781,6 +781,27 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, pte = pte_swp_mkuffd_wp(pte); set_pte_at(src_mm, addr, src_pte, pte); } + } else if (is_device_exclusive_entry(entry)) { + page = pfn_swap_entry_to_page(entry); + + get_page(page); + rss[mm_counter(page)]++; + + if (is_writable_device_exclusive_entry(entry) && + is_cow_mapping(vm_flags)) { + /* + * COW mappings require pages in both + * parent and child to be set to read. + */ + entry = make_readable_device_exclusive_entry( + swp_offset(entry)); + pte = swp_entry_to_pte(entry); + if (pte_swp_soft_dirty(*src_pte)) + pte = pte_swp_mksoft_dirty(pte); + if (pte_swp_uffd_wp(*src_pte)) + pte = pte_swp_mkuffd_wp(pte); + set_pte_at(src_mm, addr, src_pte, pte); + } } set_pte_at(dst_mm, addr, dst_pte, pte); return 0; @@ -1287,7 +1308,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, } entry = pte_to_swp_entry(ptent); - if (is_device_private_entry(entry)) { + if (is_device_private_entry(entry) || + is_device_exclusive_entry(entry)) { struct page *page = pfn_swap_entry_to_page(entry); if (unlikely(details && details->check_mapping)) { @@ -1303,7 +1325,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, pte_clear_not_present_full(mm, addr, pte, tlb->fullmm); rss[mm_counter(page)]--; - page_remove_rmap(page, false); + + if (is_device_private_entry(entry)) + page_remove_rmap(page, false); + put_page(page); continue; } @@ -3256,6 +3281,81 @@ void unmap_mapping_range(struct address_space *mapping, } EXPORT_SYMBOL(unmap_mapping_range); +static void restore_exclusive_pte(struct vm_area_struct *vma, + struct page *page, unsigned long address, + pte_t *ptep) +{ + pte_t pte; + swp_entry_t entry; + + pte = pte_mkold(mk_pte(page, READ_ONCE(vma->vm_page_prot))); + if (pte_swp_soft_dirty(*ptep)) + pte = pte_mksoft_dirty(pte); + + entry = pte_to_swp_entry(*ptep); + if (pte_swp_uffd_wp(*ptep)) + pte = pte_mkuffd_wp(pte); + else if (is_writable_device_exclusive_entry(entry)) + pte = maybe_mkwrite(pte_mkdirty(pte), vma); + + set_pte_at(vma->vm_mm, address, ptep, pte); + + /* + * No need to take a page reference as one was already + * created when the swap entry was made. + */ + if (PageAnon(page)) + page_add_anon_rmap(page, vma, address, false); + else + page_add_file_rmap(page, false); + + if (vma->vm_flags & VM_LOCKED) + mlock_vma_page(page); + + /* + * No need to invalidate - it was non-present before. However + * secondary CPUs may have mappings that need invalidating. 
+ */ + update_mmu_cache(vma, address, ptep); +} + +/* + * Restore a potential device exclusive pte to a working pte entry + */ +static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf) +{ + struct page *page = vmf->page; + struct vm_area_struct *vma = vmf->vma; + struct page_vma_mapped_walk pvmw = { + .page = page, + .vma = vma, + .address = vmf->address, + .flags = PVMW_SYNC, + }; + vm_fault_t ret = 0; + struct mmu_notifier_range range; + + lock_page(page); + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, + vmf->address & PAGE_MASK, + (vmf->address & PAGE_MASK) + PAGE_SIZE); + mmu_notifier_invalidate_range_start(&range); + + while (page_vma_mapped_walk(&pvmw)) { + if (unlikely(!pte_same(*pvmw.pte, vmf->orig_pte))) { + page_vma_mapped_walk_done(&pvmw); + break; + } + + restore_exclusive_pte(vma, page, pvmw.address, pvmw.pte); + } + + unlock_page(page); + + mmu_notifier_invalidate_range_end(&range); + return ret; +} + /* * We enter with non-exclusive mmap_lock (to exclude vma changes, * but allow concurrent faults), and pte mapped but not yet locked. @@ -3283,6 +3383,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) if (is_migration_entry(entry)) { migration_entry_wait(vma->vm_mm, vmf->pmd, vmf->address); + } else if (is_device_exclusive_entry(entry)) { + vmf->page = pfn_swap_entry_to_page(entry); + ret = remove_device_exclusive_entry(vmf); } else if (is_device_private_entry(entry)) { vmf->page = pfn_swap_entry_to_page(entry); ret = vmf->page->pgmap->ops->migrate_to_ram(vmf); diff --git a/mm/mprotect.c b/mm/mprotect.c index f21b760ec809..c6018541ea3d 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -165,6 +165,14 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, newpte = swp_entry_to_pte(entry); if (pte_swp_uffd_wp(oldpte)) newpte = pte_swp_mkuffd_wp(newpte); + } else if (is_writable_device_exclusive_entry(entry)) { + entry = make_readable_device_exclusive_entry( + swp_offset(entry)); + newpte = swp_entry_to_pte(entry); + if (pte_swp_soft_dirty(oldpte)) + newpte = pte_swp_mksoft_dirty(newpte); + if (pte_swp_uffd_wp(oldpte)) + newpte = pte_swp_mkuffd_wp(newpte); } else { newpte = oldpte; } diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c index eed988ab2e81..29842f169219 100644 --- a/mm/page_vma_mapped.c +++ b/mm/page_vma_mapped.c @@ -41,7 +41,8 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw) /* Handle un-addressable ZONE_DEVICE memory */ entry = pte_to_swp_entry(*pvmw->pte); - if (!is_device_private_entry(entry)) + if (!is_device_private_entry(entry) && + !is_device_exclusive_entry(entry)) return false; } else if (!pte_present(*pvmw->pte)) return false; @@ -93,7 +94,8 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw) return false; entry = pte_to_swp_entry(*pvmw->pte); - if (!is_migration_entry(entry)) + if (!is_migration_entry(entry) && + !is_device_exclusive_entry(entry)) return false; pfn = swp_offset(entry); @@ -102,7 +104,8 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw) /* Handle un-addressable ZONE_DEVICE memory */ entry = pte_to_swp_entry(*pvmw->pte); - if (!is_device_private_entry(entry)) + if (!is_device_private_entry(entry) && + !is_device_exclusive_entry(entry)) return false; pfn = swp_offset(entry); diff --git a/mm/rmap.c b/mm/rmap.c index b540b44e299a..8bf7bd0404ce 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -2005,6 +2005,209 @@ void try_to_munlock(struct page *page) rmap_walk(page, &rwc); } +struct ttp_args { + struct mm_struct *mm; + unsigned long address; + void *arg; + 
bool valid; +}; + +static bool try_to_protect_one(struct page *page, struct vm_area_struct *vma, + unsigned long address, void *arg) +{ + struct mm_struct *mm = vma->vm_mm; + struct page_vma_mapped_walk pvmw = { + .page = page, + .vma = vma, + .address = address, + }; + struct ttp_args *ttp = (struct ttp_args *) arg; + pte_t pteval; + struct page *subpage; + bool ret = true; + struct mmu_notifier_range range; + swp_entry_t entry; + pte_t swp_pte; + + mmu_notifier_range_init_exclusive(&range, 0, vma, + vma->vm_mm, address, + min(vma->vm_end, + address + page_size(page)), + ttp->arg); + if (PageHuge(page)) { + /* + * If sharing is possible, start and end will be adjusted + * accordingly. + */ + adjust_range_if_pmd_sharing_possible(vma, &range.start, + &range.end); + } + mmu_notifier_invalidate_range_start(&range); + + while (page_vma_mapped_walk(&pvmw)) { + /* Unexpected PMD-mapped THP? */ + VM_BUG_ON_PAGE(!pvmw.pte, page); + + if (!pte_present(*pvmw.pte)) { + ret = false; + page_vma_mapped_walk_done(&pvmw); + break; + } + + subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte); + address = pvmw.address; + + /* Nuke the page table entry. */ + flush_cache_page(vma, address, pte_pfn(*pvmw.pte)); + pteval = ptep_clear_flush(vma, address, pvmw.pte); + + /* Move the dirty bit to the page. Now the pte is gone. */ + if (pte_dirty(pteval)) + set_page_dirty(page); + + /* Update high watermark before we lower rss */ + update_hiwater_rss(mm); + + if (arch_unmap_one(mm, vma, address, pteval) < 0) { + set_pte_at(mm, address, pvmw.pte, pteval); + ret = false; + page_vma_mapped_walk_done(&pvmw); + break; + } + + /* + * Check that our target page is still mapped at the expected + * address. + */ + if (ttp->mm == mm && ttp->address == address && + pte_write(pteval)) + ttp->valid = true; + + /* + * Store the pfn of the page in a special migration + * pte. do_swap_page() will wait until the migration + * pte is removed and then restart fault handling. + */ + if (pte_write(pteval)) + entry = make_writable_device_exclusive_entry( + page_to_pfn(subpage)); + else + entry = make_readable_device_exclusive_entry( + page_to_pfn(subpage)); + swp_pte = swp_entry_to_pte(entry); + if (pte_soft_dirty(pteval)) + swp_pte = pte_swp_mksoft_dirty(swp_pte); + if (pte_uffd_wp(pteval)) + swp_pte = pte_swp_mkuffd_wp(swp_pte); + + /* Take a reference for the swap entry */ + get_page(page); + set_pte_at(mm, address, pvmw.pte, swp_pte); + + page_remove_rmap(subpage, PageHuge(page)); + put_page(page); + } + + mmu_notifier_invalidate_range_end(&range); + + return ret; +} + +/** + * try_to_protect - try to replace all page table mappings with swap entries + * @page: the page to replace page table entries for + * @flags: action and flags + * @mm: the mm_struct where the page is expected to be mapped + * @address: address where the page is expected to be mapped + * @arg: passed to MMU_NOTIFY_EXCLUSIVE range notifier callbacks + * + * Tries to remove all the page table entries which are mapping this page and + * replace them with special swap entries to grant a device exclusive access to + * the page. Caller must hold the page lock. + * + * Returns false if the page is still mapped, or if it could not be unmapped + * from the expected address. Otherwise returns true (success). 
+ */ +static bool try_to_protect(struct page *page, struct mm_struct *mm, + unsigned long address, void *arg) +{ + struct ttp_args ttp = { + .mm = mm, + .address = address, + .arg = arg, + .valid = false, + }; + struct rmap_walk_control rwc = { + .rmap_one = try_to_protect_one, + .done = page_not_mapped, + .anon_lock = page_lock_anon_vma_read, + .arg = &ttp, + }; + + /* + * During exec, a temporary VMA is setup and later moved. + * The VMA is moved under the anon_vma lock but not the + * page tables leading to a race where migration cannot + * find the migration ptes. Rather than increasing the + * locking requirements of exec(), migration skips + * temporary VMAs until after exec() completes. + */ + if (!PageKsm(page) && PageAnon(page)) + rwc.invalid_vma = invalid_migration_vma; + + rmap_walk(page, &rwc); + + return ttp.valid && (!page_mapcount(page) ? true : false); +} + +/** + * make_device_exclusive_range() - Mark a range for exclusive use by a device + * @mm: mm_struct of assoicated target process + * @start: start of the region to mark for exclusive device access + * @end: end address of region + * @pages: returns the pages which were successfully marked for exclusive access + * @arg: passed to MMU_NOTIFY_EXCLUSIVE range notifier too allow filtering + * + * Returns: number of pages successfully marked for exclusive access + * + * This function finds ptes mapping page(s) to the given address range, locks + * them and replaces mappings with special swap entries preventing userspace CPU + * access. On fault these entries are replaced with the original mapping after + * calling MMU notifiers. + * + * A driver using this to program access from a device must use a mmu notifier + * critical section to hold a device specific lock during programming. Once + * programming is complete it should drop the page lock and reference after + * which point CPU access to the page will revoke the exclusive access. 
+ */ +int make_device_exclusive_range(struct mm_struct *mm, unsigned long start, + unsigned long end, struct page **pages, + void *arg) +{ + unsigned long npages = (end - start) >> PAGE_SHIFT; + unsigned long i; + + npages = get_user_pages_remote(mm, start, npages, + FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD, + pages, NULL, NULL); + for (i = 0; i < npages; i++, start += PAGE_SIZE) { + if (!trylock_page(pages[i])) { + put_page(pages[i]); + pages[i] = NULL; + continue; + } + + if (!try_to_protect(pages[i], mm, start, arg)) { + unlock_page(pages[i]); + put_page(pages[i]); + pages[i] = NULL; + } + } + + return npages; +} +EXPORT_SYMBOL_GPL(make_device_exclusive_range); + void __put_anon_vma(struct anon_vma *anon_vma) { struct anon_vma *root = anon_vma->root; From patchwork Fri Mar 12 08:38:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 1451767 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256 header.s=selector2 header.b=cO8KXCN0; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4DxfQy27RJz9sWf for ; Fri, 12 Mar 2021 19:40:42 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232552AbhCLIkK (ORCPT ); Fri, 12 Mar 2021 03:40:10 -0500 Received: from mail-mw2nam10on2047.outbound.protection.outlook.com ([40.107.94.47]:18401 "EHLO NAM10-MW2-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S232397AbhCLIji (ORCPT ); Fri, 12 Mar 2021 03:39:38 -0500 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=hMk4m0lbGpir5uy3oy8JHl2sUn4v7vlqDqN2i50tKAR+9TyNtuKIVgBoUsEyC+/0x8vRfBKYAMZ5dotobrg8Hz2zwQgGwuVhKKn++fHkJhcK1v2bhu8gt6GA3ZecWmojIZXT8TcksTioit0/DQyqProP5XHP9Sbga4UEHrcnoYTCyF6kyXEdn64b8eWXvdhP28aXffUIllTDyl3Rs5PXz/ppxbQXq7qEbX26C+Vh3Og6c5iAO63Wqrq0aEdlcFJO1SU7gc0FRAz07GZZpoHF+59opTypqKLsxbJkS6EV06A6KacEZZp+L7S2sriu1slOUK37JLtT4NOzwqJ3yi7kmg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=DHg7+L8laAfAZIffcai4Z1Bzj8ixMXAYd33zbn3eV4o=; b=E9bQsbmxDXn/3RezQX5QrYy8DNxAj7luWOrzQsmOeax4UCRwVAlhQfe+3ZImU+134AR/+Kq1Mkbh6pLILu7+l1Ce976B2//n9DFjZ2M9QDc+MkRrp67/2UUe4xYq6Df9l2/qI+EBQ/+S+gXQiqe/lsssYeyjiyvTMgiAgaem7mEzPhzFqt43EP0PicpLCWwt7zyOikBoTaajIUCTrIlWpAGW+DGy1DhrbvAn42ZYw1AnDWkgdvr3AakX65sGzgZygCJkwhIEXOKnJTksbjwyy4BkTrjPI5d6tgT3sbtXBPzM6dCe3nK8OJnVkzPncVlsPgR9svGtXd9v6W14pQq12g== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.34) smtp.rcpttodomain=redhat.com smtp.mailfrom=nvidia.com; dmarc=pass (p=none sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=DHg7+L8laAfAZIffcai4Z1Bzj8ixMXAYd33zbn3eV4o=; 
From: Alistair Popple
Subject: [PATCH v6 6/8] mm: Selftests for exclusive device memory
Date: Fri, 12 Mar 2021 19:38:49 +1100
Message-ID: <20210312083851.15981-7-apopple@nvidia.com>
In-Reply-To: <20210312083851.15981-1-apopple@nvidia.com>
References: <20210312083851.15981-1-apopple@nvidia.com>
CIP:216.228.112.34;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:mail.nvidia.com;PTR:schybrid03.nvidia.com;CAT:NONE;SFS:(4636009)(376002)(396003)(346002)(39860400002)(136003)(46966006)(36840700001)(336012)(36756003)(70206006)(4326008)(8936002)(70586007)(2616005)(2906002)(83380400001)(186003)(82740400003)(478600001)(26005)(82310400003)(356005)(54906003)(426003)(34020700004)(7416002)(16526019)(316002)(47076005)(36906005)(36860700001)(6666004)(107886003)(30864003)(86362001)(110136005)(8676002)(5660300002)(1076003)(7636003);DIR:OUT;SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Mar 2021 08:39:35.9102 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 1c53020b-4be2-49ec-efe7-08d8e5325df0 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[216.228.112.34];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: DM6NAM11FT045.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR12MB2422 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Adds some selftests for exclusive device memory. Signed-off-by: Alistair Popple Acked-by: Jason Gunthorpe Tested-by: Ralph Campbell Reviewed-by: Ralph Campbell --- lib/test_hmm.c | 124 ++++++++++++++ lib/test_hmm_uapi.h | 2 + tools/testing/selftests/vm/hmm-tests.c | 219 +++++++++++++++++++++++++ 3 files changed, 345 insertions(+) diff --git a/lib/test_hmm.c b/lib/test_hmm.c index 5c9f5a020c1d..305a9d9e2b4c 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -25,6 +25,7 @@ #include #include #include +#include #include "test_hmm_uapi.h" @@ -46,6 +47,7 @@ struct dmirror_bounce { unsigned long cpages; }; +#define DPT_XA_TAG_ATOMIC 1UL #define DPT_XA_TAG_WRITE 3UL /* @@ -619,6 +621,54 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args, } } +static int dmirror_check_atomic(struct dmirror *dmirror, unsigned long start, + unsigned long end) +{ + unsigned long pfn; + + for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) { + void *entry; + struct page *page; + + entry = xa_load(&dmirror->pt, pfn); + page = xa_untag_pointer(entry); + if (xa_pointer_tag(entry) == DPT_XA_TAG_ATOMIC) + return -EPERM; + } + + return 0; +} + +static int dmirror_atomic_map(unsigned long start, unsigned long end, + struct page **pages, struct dmirror *dmirror) +{ + unsigned long pfn, mapped = 0; + int i; + + /* Map the migrated pages into the device's page tables. 
*/ + mutex_lock(&dmirror->mutex); + + for (i = 0, pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++, i++) { + void *entry; + + if (!pages[i]) + continue; + + entry = pages[i]; + entry = xa_tag_pointer(entry, DPT_XA_TAG_ATOMIC); + entry = xa_store(&dmirror->pt, pfn, entry, GFP_ATOMIC); + if (xa_is_err(entry)) { + mutex_unlock(&dmirror->mutex); + return xa_err(entry); + } + + mapped++; + } + + mutex_unlock(&dmirror->mutex); + return mapped; +} + static int dmirror_migrate_finalize_and_map(struct migrate_vma *args, struct dmirror *dmirror) { @@ -661,6 +711,71 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args, return 0; } +static int dmirror_exclusive(struct dmirror *dmirror, + struct hmm_dmirror_cmd *cmd) +{ + unsigned long start, end, addr; + unsigned long size = cmd->npages << PAGE_SHIFT; + struct mm_struct *mm = dmirror->notifier.mm; + struct page *pages[64]; + struct dmirror_bounce bounce; + unsigned long next; + int ret; + + start = cmd->addr; + end = start + size; + if (end < start) + return -EINVAL; + + /* Since the mm is for the mirrored process, get a reference first. */ + if (!mmget_not_zero(mm)) + return -EINVAL; + + mmap_read_lock(mm); + for (addr = start; addr < end; addr = next) { + int i, mapped; + + if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT)) + next = end; + else + next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT); + + ret = make_device_exclusive_range(mm, addr, next, pages, NULL); + mapped = dmirror_atomic_map(addr, next, pages, dmirror); + for (i = 0; i < ret; i++) { + if (pages[i]) { + unlock_page(pages[i]); + put_page(pages[i]); + } + } + + if (addr + (mapped << PAGE_SHIFT) < next) { + mmap_read_unlock(mm); + mmput(mm); + return -EBUSY; + } + } + mmap_read_unlock(mm); + mmput(mm); + + /* Return the migrated data for verification. */ + ret = dmirror_bounce_init(&bounce, start, size); + if (ret) + return ret; + mutex_lock(&dmirror->mutex); + ret = dmirror_do_read(dmirror, start, end, &bounce); + mutex_unlock(&dmirror->mutex); + if (ret == 0) { + if (copy_to_user(u64_to_user_ptr(cmd->ptr), bounce.ptr, + bounce.size)) + ret = -EFAULT; + } + + cmd->cpages = bounce.cpages; + dmirror_bounce_fini(&bounce); + return ret; +} + static int dmirror_migrate(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd) { @@ -949,6 +1064,15 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp, ret = dmirror_migrate(dmirror, &cmd); break; + case HMM_DMIRROR_EXCLUSIVE: + ret = dmirror_exclusive(dmirror, &cmd); + break; + + case HMM_DMIRROR_CHECK_EXCLUSIVE: + ret = dmirror_check_atomic(dmirror, cmd.addr, + cmd.addr + (cmd.npages << PAGE_SHIFT)); + break; + case HMM_DMIRROR_SNAPSHOT: ret = dmirror_snapshot(dmirror, &cmd); break; diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h index 670b4ef2a5b6..f14dea5dcd06 100644 --- a/lib/test_hmm_uapi.h +++ b/lib/test_hmm_uapi.h @@ -33,6 +33,8 @@ struct hmm_dmirror_cmd { #define HMM_DMIRROR_WRITE _IOWR('H', 0x01, struct hmm_dmirror_cmd) #define HMM_DMIRROR_MIGRATE _IOWR('H', 0x02, struct hmm_dmirror_cmd) #define HMM_DMIRROR_SNAPSHOT _IOWR('H', 0x03, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_EXCLUSIVE _IOWR('H', 0x04, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_CHECK_EXCLUSIVE _IOWR('H', 0x05, struct hmm_dmirror_cmd) /* * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT. 
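The xa_tag_pointer()/xa_untag_pointer()/xa_pointer_tag() helpers used by dmirror_atomic_map() and dmirror_check_atomic() above pack a small integer tag into the low bits of the stored pointer, so a single XArray slot records both the backing page and whether it is currently mapped for exclusive access. A minimal sketch of that scheme, assuming only the DPT_XA_TAG_* values and the dmirror->pt XArray from lib/test_hmm.c (illustrative only, not part of the patch):

    /* Store a page with a tag, then recover both from the same slot. */
    static bool dmirror_pfn_is_atomic(struct dmirror *dmirror, unsigned long pfn,
                                      struct page *page)
    {
            struct page *stored;
            void *entry;

            /* Fold the tag into the pointer's low bits when storing... */
            entry = xa_tag_pointer(page, DPT_XA_TAG_ATOMIC);
            xa_store(&dmirror->pt, pfn, entry, GFP_KERNEL);

            /* ...and split it back out when loading. */
            entry = xa_load(&dmirror->pt, pfn);
            stored = xa_untag_pointer(entry);       /* the original struct page * */

            return stored == page &&
                   xa_pointer_tag(entry) == DPT_XA_TAG_ATOMIC;
    }

This is why HMM_DMIRROR_CHECK_EXCLUSIVE above only has to walk the XArray and look at the pointer tags to decide whether atomic access has been revoked.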
diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c index 5d1ac691b9f4..5d3c5db9ed3a 100644 --- a/tools/testing/selftests/vm/hmm-tests.c +++ b/tools/testing/selftests/vm/hmm-tests.c @@ -1485,4 +1485,223 @@ TEST_F(hmm2, double_map) hmm_buffer_free(buffer); } +/* + * Basic check of exclusive faulting. + */ +TEST_F(hmm, exclusive) +{ + struct hmm_buffer *buffer; + unsigned long npages; + unsigned long size; + unsigned long i; + int *ptr; + int ret; + + npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift; + ASSERT_NE(npages, 0); + size = npages << self->page_shift; + + buffer = malloc(sizeof(*buffer)); + ASSERT_NE(buffer, NULL); + + buffer->fd = -1; + buffer->size = size; + buffer->mirror = malloc(size); + ASSERT_NE(buffer->mirror, NULL); + + buffer->ptr = mmap(NULL, size, + PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, + buffer->fd, 0); + ASSERT_NE(buffer->ptr, MAP_FAILED); + + /* Initialize buffer in system memory. */ + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ptr[i] = i; + + /* Map memory exclusively for device access. */ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + + /* Check what the device read. */ + for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], i); + + /* Fault pages back to system memory and check them. */ + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i]++, i); + + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], i+1); + + /* Check atomic access revoked */ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_CHECK_EXCLUSIVE, buffer, npages); + ASSERT_EQ(ret, 0); + + hmm_buffer_free(buffer); +} + +TEST_F(hmm, exclusive_shared) +{ + struct hmm_buffer *buffer; + unsigned long npages; + unsigned long size; + int *ptr; + int ret, i; + + npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift; + ASSERT_NE(npages, 0); + size = npages << self->page_shift; + + buffer = malloc(sizeof(*buffer)); + ASSERT_NE(buffer, NULL); + + buffer->fd = -1; + buffer->size = size; + buffer->mirror = malloc(size); + ASSERT_NE(buffer->mirror, NULL); + + buffer->ptr = mmap(NULL, size, + PROT_READ | PROT_WRITE, + MAP_SHARED | MAP_ANONYMOUS, + buffer->fd, 0); + ASSERT_NE(buffer->ptr, MAP_FAILED); + + /* Initialize buffer in system memory. */ + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ptr[i] = i; + + /* Map memory exclusively for device access. */ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + + /* Check what the device read. */ + for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], i); + + /* Fault pages back to system memory and check them. */ + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i]++, i); + + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], i+1); + + /* Check atomic access revoked */ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_CHECK_EXCLUSIVE, buffer, npages); + ASSERT_FALSE(ret); + + /* Map memory exclusively for device access again to check process tear down */ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + + hmm_buffer_free(buffer); +} + +/* + * Same as above but for shared anonymous memory. 
+ */ +TEST_F(hmm, exclusive_mprotect) +{ + struct hmm_buffer *buffer; + unsigned long npages; + unsigned long size; + unsigned long i; + int *ptr; + int ret; + + npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift; + ASSERT_NE(npages, 0); + size = npages << self->page_shift; + + buffer = malloc(sizeof(*buffer)); + ASSERT_NE(buffer, NULL); + + buffer->fd = -1; + buffer->size = size; + buffer->mirror = malloc(size); + ASSERT_NE(buffer->mirror, NULL); + + buffer->ptr = mmap(NULL, size, + PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, + buffer->fd, 0); + ASSERT_NE(buffer->ptr, MAP_FAILED); + + /* Initialize buffer in system memory. */ + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ptr[i] = i; + + /* Map memory exclusively for device access. */ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + + /* Check what the device read. */ + for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], i); + + ret = mprotect(buffer->ptr, size, PROT_READ); + ASSERT_EQ(ret, 0); + + /* Simulate a device writing system memory. */ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_WRITE, buffer, npages); + ASSERT_EQ(ret, -EPERM); + + hmm_buffer_free(buffer); +} + +/* + * Check copy-on-write works. + */ +TEST_F(hmm, exclusive_cow) +{ + struct hmm_buffer *buffer; + unsigned long npages; + unsigned long size; + unsigned long i; + int *ptr; + int ret; + + npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift; + ASSERT_NE(npages, 0); + size = npages << self->page_shift; + + buffer = malloc(sizeof(*buffer)); + ASSERT_NE(buffer, NULL); + + buffer->fd = -1; + buffer->size = size; + buffer->mirror = malloc(size); + ASSERT_NE(buffer->mirror, NULL); + + buffer->ptr = mmap(NULL, size, + PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, + buffer->fd, 0); + ASSERT_NE(buffer->ptr, MAP_FAILED); + + /* Initialize buffer in system memory. */ + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ptr[i] = i; + + /* Map memory exclusively for device access. */ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + + fork(); + + /* Fault pages back to system memory and check them. 
+	 */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i]++, i);
+
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i+1);
+
+	hmm_buffer_free(buffer);
+}
+
 TEST_HARNESS_MAIN
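From userspace the new commands are driven like the existing hmm_dmirror ioctls. A rough sketch of that flow, assuming the /dev/hmm_dmirror0 node created by the test_hmm module and the hmm_dmirror_cmd layout used in lib/test_hmm.c (device path and field types are assumptions; error handling omitted; the selftests above use the hmm_dmirror_cmd() helper instead):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include "test_hmm_uapi.h"   /* struct hmm_dmirror_cmd, HMM_DMIRROR_*; path assumed */

    int exclusive_demo(unsigned long npages, unsigned long page_size)
    {
            unsigned long size = npages * page_size;
            void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            void *mirror = malloc(size);
            struct hmm_dmirror_cmd cmd = { 0 };
            int fd = open("/dev/hmm_dmirror0", O_RDWR);   /* node name assumed */

            memset(buf, 1, size);

            cmd.addr = (uintptr_t)buf;      /* start of the anonymous mapping */
            cmd.ptr = (uintptr_t)mirror;    /* buffer the driver copies data back into */
            cmd.npages = npages;

            /* Replace the CPU PTEs with exclusive entries and read the data back. */
            ioctl(fd, HMM_DMIRROR_EXCLUSIVE, &cmd);

            /* A CPU write now faults, restores the PTEs and revokes device access. */
            memset(buf, 2, size);

            /* Returns 0 only if no page is still tagged for atomic access. */
            return ioctl(fd, HMM_DMIRROR_CHECK_EXCLUSIVE, &cmd);
    }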
From patchwork Fri Mar 12 08:38:50 2021
From: Alistair Popple
Subject: [PATCH v6 7/8] nouveau/svm: Refactor nouveau_range_fault
Date: Fri, 12 Mar 2021 19:38:50 +1100
Message-ID: <20210312083851.15981-8-apopple@nvidia.com>
In-Reply-To: <20210312083851.15981-1-apopple@nvidia.com>
References: <20210312083851.15981-1-apopple@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: kvm-ppc@vger.kernel.org

Call mmu_interval_notifier_insert() as part of nouveau_range_fault(). This
doesn't introduce any functional change but makes it easier for a subsequent
patch to alter the behaviour of nouveau_range_fault() to support GPU atomic
operations.

Signed-off-by: Alistair Popple
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 34 ++++++++++++++++-----------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 94f841026c3b..a195e48c9aee 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -567,18 +567,27 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 	unsigned long hmm_pfns[1];
 	struct hmm_range range = {
 		.notifier = &notifier->notifier,
-		.start = notifier->notifier.interval_tree.start,
-		.end = notifier->notifier.interval_tree.last + 1,
 		.default_flags = hmm_flags,
 		.hmm_pfns = hmm_pfns,
 		.dev_private_owner = drm->dev,
 	};
-	struct mm_struct *mm = notifier->notifier.mm;
+	struct mm_struct *mm = svmm->notifier.mm;
 	int ret;
 
+	ret = mmu_interval_notifier_insert(&notifier->notifier, mm,
+					args->p.addr, args->p.size,
+					&nouveau_svm_mni_ops);
+	if (ret)
+		return ret;
+
+	range.start = notifier->notifier.interval_tree.start;
+	range.end = notifier->notifier.interval_tree.last + 1;
+
 	while (true) {
-		if (time_after(jiffies, timeout))
-			return -EBUSY;
+		if (time_after(jiffies, timeout)) {
+			ret = -EBUSY;
+			goto out;
+		}
 
 		range.notifier_seq = mmu_interval_read_begin(range.notifier);
 		mmap_read_lock(mm);
@@ -587,7 +596,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 		if (ret) {
 			if (ret == -EBUSY)
 				continue;
-			return ret;
+			goto out;
 		}
 
 		mutex_lock(&svmm->mutex);
@@ -606,6 +615,9 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 	svmm->vmm->vmm.object.client->super = false;
 	mutex_unlock(&svmm->mutex);
 
+out:
+	mmu_interval_notifier_remove(&notifier->notifier);
+
 	return ret;
 }
 
@@ -727,14 +739,8 @@ nouveau_svm_fault(struct nvif_notify *notify)
 		}
 
 		notifier.svmm = svmm;
-		ret = mmu_interval_notifier_insert(&notifier.notifier, mm,
-						 args.i.p.addr, args.i.p.size,
-						 &nouveau_svm_mni_ops);
-		if (!ret) {
-			ret = nouveau_range_fault(svmm, svm->drm, &args.i,
-				sizeof(args), hmm_flags, &notifier);
-			mmu_interval_notifier_remove(&notifier.notifier);
-		}
+		ret = nouveau_range_fault(svmm, svm->drm, &args.i,
+					sizeof(args), hmm_flags, &notifier);
 		mmput(mm);
 
 		limit = args.i.p.addr + args.i.p.size;
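With this change nouveau_range_fault() owns the whole notifier lifetime: the insert, the read_begin/read_retry loop, and removal on every exit path. The generic shape of that pattern looks roughly like the sketch below; the mmu_interval_* calls are the real API, while do_the_fault(), driver_lock()/driver_unlock() and program_device_ptes() are stand-ins for the driver-specific pieces:

    static int fault_range(struct mmu_interval_notifier *mni, struct mm_struct *mm,
                           unsigned long addr, unsigned long size,
                           const struct mmu_interval_notifier_ops *ops)
    {
            unsigned long seq;
            int ret;

            /* Register interest in [addr, addr + size) before faulting it. */
            ret = mmu_interval_notifier_insert(mni, mm, addr, size, ops);
            if (ret)
                    return ret;

            while (true) {
                    /* Snapshot the invalidation sequence counter. */
                    seq = mmu_interval_read_begin(mni);

                    ret = do_the_fault(addr, size);     /* e.g. hmm_range_fault() */
                    if (ret)
                            break;

                    driver_lock();          /* same lock the invalidate callback takes */
                    if (mmu_interval_read_retry(mni, seq)) {
                            /* An invalidation raced with the fault; try again. */
                            driver_unlock();
                            continue;
                    }

                    program_device_ptes();
                    driver_unlock();
                    break;
            }

            /* Must be removed on every exit path, which is why the patch
             * funnels the error cases through the new "out:" label. */
            mmu_interval_notifier_remove(mni);
            return ret;
    }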
From patchwork Fri Mar 12 08:38:51 2021
From: Alistair Popple
Subject: [PATCH v6 8/8] nouveau/svm: Implement atomic SVM access
Date: Fri, 12 Mar 2021 19:38:51 +1100
Message-ID: <20210312083851.15981-9-apopple@nvidia.com>
In-Reply-To: <20210312083851.15981-1-apopple@nvidia.com>
References: <20210312083851.15981-1-apopple@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: kvm-ppc@vger.kernel.org

Some NVIDIA GPUs do not support direct atomic access to system memory via
PCIe. Instead this must be emulated by granting the GPU exclusive access to
the memory.
This is achieved by replacing CPU page table entries with special swap entries that fault on userspace access. The driver then grants the GPU permission to update the page undergoing atomic access via the GPU page tables. When CPU access to the page is required a CPU fault is raised which calls into the device driver via MMU notifiers to revoke the atomic access. The original page table entries are then restored allowing CPU access to proceed. Signed-off-by: Alistair Popple --- v4: * Check that page table entries haven't changed before mapping on the device --- drivers/gpu/drm/nouveau/include/nvif/if000c.h | 1 + drivers/gpu/drm/nouveau/nouveau_svm.c | 100 ++++++++++++++++-- drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h | 1 + .../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 6 ++ 4 files changed, 100 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/nouveau/include/nvif/if000c.h b/drivers/gpu/drm/nouveau/include/nvif/if000c.h index d6dd40f21eed..9c7ff56831c5 100644 --- a/drivers/gpu/drm/nouveau/include/nvif/if000c.h +++ b/drivers/gpu/drm/nouveau/include/nvif/if000c.h @@ -77,6 +77,7 @@ struct nvif_vmm_pfnmap_v0 { #define NVIF_VMM_PFNMAP_V0_APER 0x00000000000000f0ULL #define NVIF_VMM_PFNMAP_V0_HOST 0x0000000000000000ULL #define NVIF_VMM_PFNMAP_V0_VRAM 0x0000000000000010ULL +#define NVIF_VMM_PFNMAP_V0_A 0x0000000000000004ULL #define NVIF_VMM_PFNMAP_V0_W 0x0000000000000002ULL #define NVIF_VMM_PFNMAP_V0_V 0x0000000000000001ULL #define NVIF_VMM_PFNMAP_V0_NONE 0x0000000000000000ULL diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index a195e48c9aee..16b07d7589d2 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -35,6 +35,7 @@ #include #include #include +#include struct nouveau_svm { struct nouveau_drm *drm; @@ -421,9 +422,9 @@ nouveau_svm_fault_cmp(const void *a, const void *b) return ret; if ((ret = (s64)fa->addr - fb->addr)) return ret; - /*XXX: atomic? 
*/ - return (fa->access == 0 || fa->access == 3) - - (fb->access == 0 || fb->access == 3); + /* Atomic access (2) has highest priority */ + return (-1*(fa->access == 2) + (fa->access == 0 || fa->access == 3)) - + (-1*(fb->access == 2) + (fb->access == 0 || fb->access == 3)); } static void @@ -487,6 +488,10 @@ static bool nouveau_svm_range_invalidate(struct mmu_interval_notifier *mni, struct svm_notifier *sn = container_of(mni, struct svm_notifier, notifier); + if (range->event == MMU_NOTIFY_EXCLUSIVE && + range->owner == sn->svmm->vmm->cli->drm->dev) + return true; + /* * serializes the update to mni->invalidate_seq done by caller and * prevents invalidation of the PTE from progressing while HW is being @@ -555,6 +560,73 @@ static void nouveau_hmm_convert_pfn(struct nouveau_drm *drm, args->p.phys[0] |= NVIF_VMM_PFNMAP_V0_W; } +static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm, + struct nouveau_drm *drm, + struct nouveau_pfnmap_args *args, u32 size, + struct svm_notifier *notifier) +{ + unsigned long timeout = + jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); + struct mm_struct *mm = svmm->notifier.mm; + struct page *page; + unsigned long start = args->p.addr; + unsigned long notifier_seq; + int ret = 0; + + ret = mmu_interval_notifier_insert(¬ifier->notifier, mm, + args->p.addr, args->p.size, + &nouveau_svm_mni_ops); + if (ret) + return ret; + + while (true) { + if (time_after(jiffies, timeout)) { + ret = -EBUSY; + goto out; + } + + notifier_seq = mmu_interval_read_begin(¬ifier->notifier); + mmap_read_lock(mm); + make_device_exclusive_range(mm, start, start + PAGE_SIZE, + &page, drm->dev); + mmap_read_unlock(mm); + if (!page) { + ret = -EINVAL; + goto out; + } + + mutex_lock(&svmm->mutex); + if (mmu_interval_read_retry(¬ifier->notifier, + notifier_seq)) { + mutex_unlock(&svmm->mutex); + continue; + } + break; + } + + /* Map the page on the GPU. */ + args->p.page = 12; + args->p.size = PAGE_SIZE; + args->p.addr = start; + args->p.phys[0] = page_to_phys(page) | + NVIF_VMM_PFNMAP_V0_V | + NVIF_VMM_PFNMAP_V0_W | + NVIF_VMM_PFNMAP_V0_A | + NVIF_VMM_PFNMAP_V0_HOST; + + svmm->vmm->vmm.object.client->super = true; + ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args, size, NULL); + svmm->vmm->vmm.object.client->super = false; + mutex_unlock(&svmm->mutex); + + unlock_page(page); + put_page(page); + +out: + mmu_interval_notifier_remove(¬ifier->notifier); + return ret; +} + static int nouveau_range_fault(struct nouveau_svmm *svmm, struct nouveau_drm *drm, struct nouveau_pfnmap_args *args, u32 size, @@ -637,7 +709,7 @@ nouveau_svm_fault(struct nvif_notify *notify) unsigned long hmm_flags; u64 inst, start, limit; int fi, fn; - int replay = 0, ret; + int replay = 0, atomic = 0, ret; /* Parse available fault buffer entries into a cache, and update * the GET pointer so HW can reuse the entries. @@ -718,12 +790,14 @@ nouveau_svm_fault(struct nvif_notify *notify) /* * Determine required permissions based on GPU fault * access flags. - * XXX: atomic? */ switch (buffer->fault[fi]->access) { case 0: /* READ. */ hmm_flags = HMM_PFN_REQ_FAULT; break; + case 2: /* ATOMIC. */ + atomic = true; + break; case 3: /* PREFETCH. 
*/ hmm_flags = 0; break; @@ -739,8 +813,14 @@ nouveau_svm_fault(struct nvif_notify *notify) } notifier.svmm = svmm; - ret = nouveau_range_fault(svmm, svm->drm, &args.i, - sizeof(args), hmm_flags, ¬ifier); + if (atomic) + ret = nouveau_atomic_range_fault(svmm, svm->drm, + &args.i, sizeof(args), + ¬ifier); + else + ret = nouveau_range_fault(svmm, svm->drm, &args.i, + sizeof(args), hmm_flags, + ¬ifier); mmput(mm); limit = args.i.p.addr + args.i.p.size; @@ -760,7 +840,11 @@ nouveau_svm_fault(struct nvif_notify *notify) !(args.phys[0] & NVIF_VMM_PFNMAP_V0_V)) || (buffer->fault[fi]->access != 0 /* READ. */ && buffer->fault[fi]->access != 3 /* PREFETCH. */ && - !(args.phys[0] & NVIF_VMM_PFNMAP_V0_W))) + !(args.phys[0] & NVIF_VMM_PFNMAP_V0_W)) || + (buffer->fault[fi]->access != 0 /* READ. */ && + buffer->fault[fi]->access != 1 /* WRITE. */ && + buffer->fault[fi]->access != 3 /* PREFETCH. */ && + !(args.phys[0] & NVIF_VMM_PFNMAP_V0_A))) break; } diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h index a2b179568970..f6188aa9171c 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h @@ -178,6 +178,7 @@ void nvkm_vmm_unmap_region(struct nvkm_vmm *, struct nvkm_vma *); #define NVKM_VMM_PFN_APER 0x00000000000000f0ULL #define NVKM_VMM_PFN_HOST 0x0000000000000000ULL #define NVKM_VMM_PFN_VRAM 0x0000000000000010ULL +#define NVKM_VMM_PFN_A 0x0000000000000004ULL #define NVKM_VMM_PFN_W 0x0000000000000002ULL #define NVKM_VMM_PFN_V 0x0000000000000001ULL #define NVKM_VMM_PFN_NONE 0x0000000000000000ULL diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c index 236db5570771..f02abd9cb4dd 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c @@ -88,6 +88,9 @@ gp100_vmm_pgt_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt, if (!(*map->pfn & NVKM_VMM_PFN_W)) data |= BIT_ULL(6); /* RO. */ + if (!(*map->pfn & NVKM_VMM_PFN_A)) + data |= BIT_ULL(7); /* Atomic disable. */ + if (!(*map->pfn & NVKM_VMM_PFN_VRAM)) { addr = *map->pfn >> NVKM_VMM_PFN_ADDR_SHIFT; addr = dma_map_page(dev, pfn_to_page(addr), 0, @@ -322,6 +325,9 @@ gp100_vmm_pd0_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt, if (!(*map->pfn & NVKM_VMM_PFN_W)) data |= BIT_ULL(6); /* RO. */ + if (!(*map->pfn & NVKM_VMM_PFN_A)) + data |= BIT_ULL(7); /* Atomic disable. */ + if (!(*map->pfn & NVKM_VMM_PFN_VRAM)) { addr = *map->pfn >> NVKM_VMM_PFN_ADDR_SHIFT; addr = dma_map_page(dev, pfn_to_page(addr), 0,