From patchwork Tue Nov 22 06:19:12 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1707646
From: Cengiz Can
To: kernel-team@lists.ubuntu.com
Subject: [SRU Bionic] mm/mremap: hold the rmap lock in write mode when moving page table entries.
Date: Tue, 22 Nov 2022 09:19:12 +0300
Message-Id: <20221122061910.64510-2-cengiz.can@canonical.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20221122061910.64510-1-cengiz.can@canonical.com>
References: <20221122061910.64510-1-cengiz.can@canonical.com>
MIME-Version: 1.0
List-Id: Kernel team discussions
Sender: "kernel-team"

From: "Aneesh Kumar K.V"

commit 97113eb39fa7972722ff490b947d8af023e1f6a2 upstream.

To avoid a race between rmap walk and mremap, mremap does
take_rmap_locks(). The lock was taken to ensure that an rmap walk
doesn't miss a page table entry due to PTE moves via move_page_tables().
The kernel further optimizes this locking: if the newly added vma will
be found after the old vma during an rmap walk, the rmap lock is not
taken. This works because an rmap walk finds the vmas in the same
order, so if the page table entry is not found attached to the older
vma, it will be found attached to the new vma, which is iterated later.

As explained in commit eb66ae030829 ("mremap: properly flush TLB before
releasing the page"), mremap is special in that it doesn't take
ownership of the page. The optimized version for PUD/PMD aligned mremap
also doesn't hold the ptl lock. This can result in stale TLB entries as
shown below.
This patch updates the rmap locking requirement in mremap to handle the
race condition explained below with optimized mremap::

  Optimized PMD move

    CPU 1                           CPU 2                               CPU 3

    mremap(old_addr, new_addr)      page_shrinker/try_to_unmap_one

    mmap_write_lock_killable()

                                    addr = old_addr
                                    lock(pte_ptl)
    lock(pmd_ptl)
    pmd = *old_pmd
    pmd_clear(old_pmd)
    flush_tlb_range(old_addr)

    *new_pmd = pmd
                                                                        *new_addr = 10; and fills
                                                                        TLB with new addr and old
                                                                        pfn
    unlock(pmd_ptl)
                                    ptep_clear_flush()
                                    old pfn is free.
                                                                        Stale TLB entry

An optimized PUD move also suffers from a similar race. Both of the
above race conditions can be fixed by forcing the mremap path to take
the rmap lock.

Link: https://lkml.kernel.org/r/20210616045239.370802-7-aneesh.kumar@linux.ibm.com
Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions")
Fixes: c49dd3401802 ("mm: speedup mremap on 1GB or larger regions")
Link: https://lore.kernel.org/linux-mm/CAHk-=wgXVR04eBNtxQfevontWnP6FDm+oj5vauQXP3S-huwbPw@mail.gmail.com
Signed-off-by: Aneesh Kumar K.V
Acked-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
Cc: Christophe Leroy
Cc: Joel Fernandes
Cc: Kalesh Singh
Cc: Kirill A. Shutemov
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Stephen Rothwell
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
[patch rewritten for backport since the code was refactored since]
Signed-off-by: Jann Horn
Signed-off-by: Greg Kroah-Hartman

CVE-2022-41222

(backported from commit 79e522101cf40735f1936a10312e17f937b8dcad linux-5.4.y)
[cengizcan: adapt context]
Signed-off-by: Cengiz Can
---
 mm/mremap.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/mremap.c b/mm/mremap.c
index 2bdb255cde9a9..473cf0e4c5f13 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -230,12 +230,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		if (extent == HPAGE_PMD_SIZE) {
 			bool moved;
 			/* See comment in move_ptes() */
-			if (need_rmap_locks)
-				take_rmap_locks(vma);
+			take_rmap_locks(vma);
 			moved = move_huge_pmd(vma, old_addr, new_addr,
 					    old_end, old_pmd, new_pmd);
-			if (need_rmap_locks)
-				drop_rmap_locks(vma);
+			drop_rmap_locks(vma);
 			if (moved)
 				continue;
 		}