From patchwork Tue Mar 30 06:07:49 2021
X-Patchwork-Submitter: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 1459890
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: kaleshsingh@google.com, npiggin@gmail.com, joel@joelfernandes.org,
    linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v3 6/9] mm/mremap: Use range flush that does TLB and page walk cache flush
Date: Tue, 30 Mar 2021 11:37:49 +0530
Message-Id: <20210330060752.592769-7-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210330060752.592769-1-aneesh.kumar@linux.ibm.com>
References: <20210330060752.592769-1-aneesh.kumar@linux.ibm.com>

Some architectures have the concept of a page walk cache, which needs to
be flushed when the higher levels of the page tables are updated. A fast
mremap that moves page table pages instead of copying individual pte
entries must flush the page walk cache, because the old translation cache
is no longer valid. Add a new helper, flush_pte_tlb_pwc_range(), which
invalidates both the TLB and the page walk cache for a range whose TLB
entries are mapped with page size PAGE_SIZE.
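
A caller-side sketch, condensed from the move_normal_pmd() hunk further
down (mm, vma, pmd, new_pmd, old_addr and PMD_SIZE are the existing
variables in that function, nothing new):

	/*
	 * Once the page table page has been moved under new_pmd, both the
	 * stale PTE translations and any cached page-walk entries for the
	 * old range must be invalidated, hence also_pwc = true.
	 */
	pmd_populate(mm, new_pmd, (pgtable_t)pmd_page_vaddr(pmd));
	flush_pte_tlb_pwc_range(vma, old_addr, old_addr + PMD_SIZE, true);

On architectures that do not provide their own flush_pte_tlb_pwc_range(),
the generic fallback added to mm/mremap.c degrades to a plain
flush_tlb_range(), so the also_pwc argument is a no-op there.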
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/tlbflush.h | 11 +++++++++++
 mm/mremap.c                                   | 15 +++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h
index efe5336e2b6f..b9022eb9f20e 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h
@@ -80,6 +80,17 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
 	return flush_hugetlb_tlb_pwc_range(vma, start, end, false);
 }
 
+#define flush_pte_tlb_pwc_range flush_tlb_pwc_range
+static inline void flush_pte_tlb_pwc_range(struct vm_area_struct *vma,
+					   unsigned long start, unsigned long end,
+					   bool also_pwc)
+{
+	if (radix_enabled())
+		return radix__flush_tlb_pwc_range_psize(vma->vm_mm, start,
+							end, mmu_virtual_psize, also_pwc);
+	return hash__flush_tlb_range(vma, start, end);
+}
+
 static inline void flush_tlb_range(struct vm_area_struct *vma,
 				   unsigned long start, unsigned long end)
 {
diff --git a/mm/mremap.c b/mm/mremap.c
index 574287f9bb39..0e7b11daafee 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -210,6 +210,17 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
+#ifndef flush_pte_tlb_pwc_range
+#define flush_pte_tlb_pwc_range flush_pte_tlb_pwc_range
+static inline void flush_pte_tlb_pwc_range(struct vm_area_struct *vma,
+					   unsigned long start,
+					   unsigned long end,
+					   bool also_pwc)
+{
+	return flush_tlb_range(vma, start, end);
+}
+#endif
+
 #ifdef CONFIG_HAVE_MOVE_PMD
 static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
@@ -260,7 +271,7 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	VM_BUG_ON(!pmd_none(*new_pmd));
 
 	pmd_populate(mm, new_pmd, (pgtable_t)pmd_page_vaddr(pmd));
-	flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+	flush_pte_tlb_pwc_range(vma, old_addr, old_addr + PMD_SIZE, true);
 	if (new_ptl != old_ptl)
 		spin_unlock(new_ptl);
 	spin_unlock(old_ptl);
@@ -307,7 +318,7 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
 	VM_BUG_ON(!pud_none(*new_pud));
 
 	pud_populate(mm, new_pud, (pmd_t *)pud_page_vaddr(pud));
-	flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
+	flush_pte_tlb_pwc_range(vma, old_addr, old_addr + PUD_SIZE, true);
 	if (new_ptl != old_ptl)
 		spin_unlock(new_ptl);
 	spin_unlock(old_ptl);
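
Purely illustrative, not part of the patch: an architecture that keeps a
page walk cache opts in by publishing its own helper plus the matching
#define before mm/mremap.c is compiled, much as the book3s/64 header above
does. myarch__flush_tlb_pwc_range() below is a hypothetical placeholder,
not a real kernel interface:

	/* In the (hypothetical) architecture's <asm/tlbflush.h>. */
	#define flush_pte_tlb_pwc_range flush_pte_tlb_pwc_range
	static inline void flush_pte_tlb_pwc_range(struct vm_area_struct *vma,
						   unsigned long start,
						   unsigned long end,
						   bool also_pwc)
	{
		/*
		 * Flush TLB entries for the PAGE_SIZE-mapped range and, when
		 * also_pwc is set, the page walk cache as well.
		 */
		myarch__flush_tlb_pwc_range(vma->vm_mm, start, end, also_pwc);
	}

Because the macro is defined, the #ifndef fallback in mm/mremap.c is
compiled out and the architecture's version is used instead.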