From patchwork Thu Mar 12 13:27:35 2020
X-Patchwork-Submitter: Santosh Sivaraj
X-Patchwork-Id: 1253575
From: Santosh Sivaraj
To: , linuxppc-dev
Cc: Sasha Levin, Peter Zijlstra, Will Deacon, Greg KH
Subject: [PATCH v3 1/6] asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather
Date: Thu, 12 Mar 2020 18:57:35 +0530
Message-Id: <20200312132740.225241-2-santosh@fossix.org>
In-Reply-To: <20200312132740.225241-1-santosh@fossix.org>
References: <20200312132740.225241-1-santosh@fossix.org>

From: Peter Zijlstra

commit 22a61c3c4f1379ef8b0ce0d5cb78baf3178950e2 upstream

Some architectures require different TLB invalidation instructions
depending on whether it is only the last-level of page table being
changed, or whether there are also changes to the intermediate
(directory) entries higher up the tree.

Add a new bit to the flags bitfield in struct mmu_gather so that the
architecture code can operate accordingly if it's the intermediate
levels being invalidated.
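[Illustration only, not part of the patch: one way an architecture's tlb_flush() hook could consume the new bit. The my_arch_*() helpers below are hypothetical; only the mmu_gather fields (mm, start, end, fullmm, freed_tables) come from the code touched here.]

    /* Sketch of a hypothetical architecture tlb_flush() using freed_tables. */
    static void my_arch_tlb_flush(struct mmu_gather *tlb)
    {
            if (tlb->fullmm) {
                    my_arch_flush_mm(tlb->mm);              /* assumed helper */
                    return;
            }

            if (tlb->freed_tables)
                    /* directories were freed: also invalidate page-walk caches */
                    my_arch_flush_range_pwc(tlb->mm, tlb->start, tlb->end);
            else
                    /* only last-level entries changed */
                    my_arch_flush_range(tlb->mm, tlb->start, tlb->end);
    }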
Signed-off-by: Peter Zijlstra
Signed-off-by: Will Deacon
Cc: # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: prerequisite for tlbflush backports]
---
 include/asm-generic/tlb.h | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b3353e21f3b3..97306b32d8d2 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -97,12 +97,22 @@ struct mmu_gather {
 #endif
 	unsigned long start;
 	unsigned long end;
-	/* we are in the middle of an operation to clear
-	 * a full mm and can make some optimizations */
-	unsigned int fullmm : 1,
-	/* we have performed an operation which
-	 * requires a complete flush of the tlb */
-		need_flush_all : 1;
+	/*
+	 * we are in the middle of an operation to clear
+	 * a full mm and can make some optimizations
+	 */
+	unsigned int fullmm : 1;
+
+	/*
+	 * we have performed an operation which
+	 * requires a complete flush of the tlb
+	 */
+	unsigned int need_flush_all : 1;
+
+	/*
+	 * we have removed page directories
+	 */
+	unsigned int freed_tables : 1;
 
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch local;
@@ -137,6 +147,7 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 		tlb->start = TASK_SIZE;
 		tlb->end = 0;
 	}
+	tlb->freed_tables = 0;
 }
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -278,6 +289,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pte_free_tlb(tlb, ptep, address) \
 	do { \
 		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
+		tlb->freed_tables = 1; \
 		__pte_free_tlb(tlb, ptep, address); \
 	} while (0)
 #endif
@@ -285,7 +297,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #ifndef pmd_free_tlb
 #define pmd_free_tlb(tlb, pmdp, address) \
 	do { \
-		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
+		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
+		tlb->freed_tables = 1; \
 		__pmd_free_tlb(tlb, pmdp, address); \
 	} while (0)
 #endif
@@ -295,6 +308,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pud_free_tlb(tlb, pudp, address) \
 	do { \
 		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
+		tlb->freed_tables = 1; \
 		__pud_free_tlb(tlb, pudp, address); \
 	} while (0)
 #endif
@@ -304,7 +318,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #ifndef p4d_free_tlb
 #define p4d_free_tlb(tlb, pudp, address) \
 	do { \
-		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
+		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
+		tlb->freed_tables = 1; \
 		__p4d_free_tlb(tlb, pudp, address); \
 	} while (0)
 #endif
From patchwork Thu Mar 12 13:27:36 2020
X-Patchwork-Submitter: Santosh Sivaraj
X-Patchwork-Id: 1253576
From: Santosh Sivaraj
To: , linuxppc-dev
Cc: Sasha Levin, Will Deacon, Greg KH
Subject: [PATCH v3 2/6] asm-generic/tlb: Track which levels of the page tables have been cleared
Date: Thu, 12 Mar 2020 18:57:36 +0530
Message-Id: <20200312132740.225241-3-santosh@fossix.org>
In-Reply-To: <20200312132740.225241-1-santosh@fossix.org>
References: <20200312132740.225241-1-santosh@fossix.org>

From: Will Deacon

commit a6d60245d6d9b1caf66b0d94419988c4836980af upstream

It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather
than iterating through the mapping at a PAGE_SIZE increment. Currently,
however, the level in the page table where the unmap() operation occurs
is not stored in the mmu_gather structure, therefore forcing
architectures to issue additional TLB invalidation operations or to give
up and over-invalidate by e.g. invalidating the entire TLB.

Ideally, we could add an interval rbtree to the mmu_gather structure,
which would allow us to associate the correct mapping granule with the
various sub-mappings within the range being invalidated. However, this
is costly in terms of book-keeping and memory management, so instead we
approximate by keeping track of the page table levels that are cleared
and provide a means to query the smallest granule required for
invalidation.
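[Illustration only, not part of the patch: a sketch of how the new query helpers are meant to be used. my_arch_invalidate_page() and the loop are hypothetical; tlb_get_unmap_shift()/tlb_get_unmap_size() are the helpers this patch adds.]

    /* Sketch: flush the gathered range at the smallest granule recorded. */
    static void my_arch_flush_gather(struct mmu_gather *tlb)
    {
            unsigned long shift = tlb_get_unmap_shift(tlb);   /* PAGE/PMD/PUD/P4D_SHIFT */
            unsigned long stride = tlb_get_unmap_size(tlb);   /* 1UL << shift */
            unsigned long addr;

            for (addr = tlb->start; addr < tlb->end; addr += stride)
                    my_arch_invalidate_page(tlb->mm, addr, shift);  /* assumed helper */
    }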
Signed-off-by: Will Deacon
Cc: # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: prerequisite for upcoming tlbflush backports]
---
 include/asm-generic/tlb.h | 58 +++++++++++++++++++++++++++++++++------
 mm/memory.c               |  4 ++-
 2 files changed, 53 insertions(+), 9 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 97306b32d8d2..f2b9dc9cbaf8 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -114,6 +114,14 @@ struct mmu_gather {
 	 */
 	unsigned int freed_tables : 1;
 
+	/*
+	 * at which levels have we cleared entries?
+	 */
+	unsigned int cleared_ptes : 1;
+	unsigned int cleared_pmds : 1;
+	unsigned int cleared_puds : 1;
+	unsigned int cleared_p4ds : 1;
+
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch local;
 	struct page *__pages[MMU_GATHER_BUNDLE];
@@ -148,6 +156,10 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 		tlb->end = 0;
 	}
 	tlb->freed_tables = 0;
+	tlb->cleared_ptes = 0;
+	tlb->cleared_pmds = 0;
+	tlb->cleared_puds = 0;
+	tlb->cleared_p4ds = 0;
 }
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -197,6 +209,25 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 }
 #endif
 
+static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
+{
+	if (tlb->cleared_ptes)
+		return PAGE_SHIFT;
+	if (tlb->cleared_pmds)
+		return PMD_SHIFT;
+	if (tlb->cleared_puds)
+		return PUD_SHIFT;
+	if (tlb->cleared_p4ds)
+		return P4D_SHIFT;
+
+	return PAGE_SHIFT;
+}
+
+static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
+{
+	return 1UL << tlb_get_unmap_shift(tlb);
+}
+
 /*
  * In the case of tlb vma handling, we can optimise these away in the
  * case where we're doing a full MM flush. When we're doing a munmap,
@@ -230,13 +261,19 @@
 #define tlb_remove_tlb_entry(tlb, ptep, address) \
 	do { \
 		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
+		tlb->cleared_ptes = 1; \
 		__tlb_remove_tlb_entry(tlb, ptep, address); \
 	} while (0)
 
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address) \
-	do { \
-		__tlb_adjust_range(tlb, address, huge_page_size(h)); \
-		__tlb_remove_tlb_entry(tlb, ptep, address); \
+#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address) \
+	do { \
+		unsigned long _sz = huge_page_size(h); \
+		__tlb_adjust_range(tlb, address, _sz); \
+		if (_sz == PMD_SIZE) \
+			tlb->cleared_pmds = 1; \
+		else if (_sz == PUD_SIZE) \
+			tlb->cleared_puds = 1; \
+		__tlb_remove_tlb_entry(tlb, ptep, address); \
 	} while (0)
 
 /**
@@ -250,6 +287,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_pmd_tlb_entry(tlb, pmdp, address) \
 	do { \
 		__tlb_adjust_range(tlb, address, HPAGE_PMD_SIZE); \
+		tlb->cleared_pmds = 1; \
 		__tlb_remove_pmd_tlb_entry(tlb, pmdp, address); \
 	} while (0)
 
@@ -264,6 +302,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_pud_tlb_entry(tlb, pudp, address) \
 	do { \
 		__tlb_adjust_range(tlb, address, HPAGE_PUD_SIZE); \
+		tlb->cleared_puds = 1; \
 		__tlb_remove_pud_tlb_entry(tlb, pudp, address); \
 	} while (0)
 
@@ -289,7 +328,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pte_free_tlb(tlb, ptep, address) \
 	do { \
 		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
-		tlb->freed_tables = 1; \
+		tlb->freed_tables = 1; \
+		tlb->cleared_pmds = 1; \
 		__pte_free_tlb(tlb, ptep, address); \
 	} while (0)
 #endif
@@ -298,7 +338,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pmd_free_tlb(tlb, pmdp, address) \
 	do { \
 		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
-		tlb->freed_tables = 1; \
+		tlb->freed_tables = 1; \
+		tlb->cleared_puds = 1; \
 		__pmd_free_tlb(tlb, pmdp, address); \
 	} while (0)
 #endif
@@ -308,7 +349,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pud_free_tlb(tlb, pudp, address) \
 	do { \
 		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
-		tlb->freed_tables = 1; \
+		tlb->freed_tables = 1; \
+		tlb->cleared_p4ds = 1; \
 		__pud_free_tlb(tlb, pudp, address); \
 	} while (0)
 #endif
@@ -319,7 +361,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define p4d_free_tlb(tlb, pudp, address) \
 	do { \
 		__tlb_adjust_range(tlb, address, PAGE_SIZE); \
-		tlb->freed_tables = 1; \
+		tlb->freed_tables = 1; \
 		__p4d_free_tlb(tlb, pudp, address); \
 	} while (0)
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index bbf0cc4066c8..1832c5ed6ac0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -267,8 +267,10 @@ void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 {
 	struct mmu_gather_batch *batch, *next;
 
-	if (force)
+	if (force) {
+		__tlb_reset_range(tlb);
 		__tlb_adjust_range(tlb, start, end - start);
+	}
 
 	tlb_flush_mmu(tlb);
From patchwork Thu Mar 12 13:27:37 2020
X-Patchwork-Submitter: Santosh Sivaraj
X-Patchwork-Id: 1253581
From: Santosh Sivaraj
To: , linuxppc-dev
Cc: Sasha Levin, Peter Zijlstra, Greg KH
Subject: [PATCH v3 3/6] asm-generic/tlb, arch: Invert CONFIG_HAVE_RCU_TABLE_INVALIDATE
Date: Thu, 12 Mar 2020 18:57:37 +0530
Message-Id: <20200312132740.225241-4-santosh@fossix.org>
In-Reply-To: <20200312132740.225241-1-santosh@fossix.org>
References: <20200312132740.225241-1-santosh@fossix.org>

From: Peter Zijlstra

commit 96bc9567cbe112e9320250f01b9c060c882e8619 upstream.

Make issuing a TLB invalidate for page-table pages the normal case.

The reason is twofold:

 - too many invalidates is safer than too few,
 - most architectures use the linux page-tables natively
   and would thus require this.

Make it an opt-out, instead of an opt-in.

No change in behavior intended.
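[Illustration only, restating the inversion; it mirrors the mm/memory.c hunk below and adds nothing beyond it.]

    /* Post-inversion semantics (sketch of the 4.19 code touched below). */
    static inline void example_tlb_table_invalidate(struct mmu_gather *tlb)
    {
    #ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
            /* default: flush page-walk caches before the tables are freed */
            tlb_flush_mmu_tlbonly(tlb);
    #endif
            /* architectures that select HAVE_RCU_TABLE_NO_INVALIDATE skip the TLBI */
    }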
Signed-off-by: Peter Zijlstra (Intel)
Cc: # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: prerequisite for upcoming tlbflush backports]
---
 arch/Kconfig         | 2 +-
 arch/powerpc/Kconfig | 1 +
 arch/sparc/Kconfig   | 1 +
 arch/x86/Kconfig     | 1 -
 mm/memory.c          | 2 +-
 5 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index a336548487e6..061a12b8140e 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -363,7 +363,7 @@ config HAVE_ARCH_JUMP_LABEL
 config HAVE_RCU_TABLE_FREE
 	bool
 
-config HAVE_RCU_TABLE_INVALIDATE
+config HAVE_RCU_TABLE_NO_INVALIDATE
 	bool
 
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6f475dc5829b..e09cfb109b8c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -216,6 +216,7 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE if SMP
+	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index e6f2a38d2e61..d90d632868aa 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -64,6 +64,7 @@ config SPARC64
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
 	select HAVE_RCU_TABLE_FREE if SMP
+	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_DYNAMIC_FTRACE
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index af35f5caadbe..181d0d522977 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -181,7 +181,6 @@ config X86
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE if PARAVIRT
-	select HAVE_RCU_TABLE_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE if X86_64 && (UNWINDER_FRAME_POINTER || UNWINDER_ORC) && STACK_VALIDATION
 	select HAVE_STACKPROTECTOR if CC_HAS_SANE_STACKPROTECTOR
diff --git a/mm/memory.c b/mm/memory.c
index 1832c5ed6ac0..ba5689610c04 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -327,7 +327,7 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
  */
 static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 {
-#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
+#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
 	/*
	 * Invalidate page-table caches used by hardware walkers. Then we still
	 * need to RCU-sched wait while freeing the pages because software
From patchwork Thu Mar 12 13:27:38 2020
X-Patchwork-Submitter: Santosh Sivaraj
X-Patchwork-Id: 1253582
From: Santosh Sivaraj
To: , linuxppc-dev
Cc: Sasha Levin, "Aneesh Kumar K.V", Greg KH
Subject: [PATCH v3 4/6] powerpc/mmu_gather: enable RCU_TABLE_FREE even for !SMP case
Date: Thu, 12 Mar 2020 18:57:38 +0530
Message-Id: <20200312132740.225241-5-santosh@fossix.org>
In-Reply-To: <20200312132740.225241-1-santosh@fossix.org>
References: <20200312132740.225241-1-santosh@fossix.org>

From: "Aneesh Kumar K.V"

commit 12e4d53f3f04e81f9e83d6fc10edc7314ab9f6b9 upstream.

Patch series "Fixup page directory freeing", v4.

This is a repost of patch series from Peter with the arch specific
changes except ppc64 dropped. ppc64 changes are added here because we
are redoing the patch series on top of ppc64 changes. This makes it easy
to backport these changes. Only the first 2 patches need to be
backported to stable.

The thing is, on anything SMP, freeing page directories should observe
the exact same order as normal page freeing:

 1) unhook page/directory
 2) TLB invalidate
 3) free page/directory

Without this, any concurrent page-table walk could end up with a
Use-after-Free. This is esp. trivial for anything that has software
page-table walkers (HAVE_FAST_GUP / software TLB fill) or the hardware
caches partial page-walks (ie. caches page directories).

Even on UP this might give issues since mmu_gather is preemptible these
days. An interrupt or preempted task accessing user pages might stumble
into the free page if the hardware caches page directories.

This patch series fixes ppc64 and add generic MMU_GATHER changes to
support the conversion of other architectures. I haven't added patches
w.r.t other architecture because they are yet to be acked.

This patch (of 9):

A followup patch is going to make sure we correctly invalidate page walk
cache before we free page table pages. In order to keep things simple
enable RCU_TABLE_FREE even for !SMP so that we don't have to fixup the
!SMP case differently in the followup patch

!SMP case is right now broken for radix translation w.r.t page walk
cache flush. We can get interrupted in between page table free and that
would imply we have page walk cache entries pointing to tables which got
freed already. Michael said "both our platforms that run on Power9 force
SMP on in Kconfig, so the !SMP case is unlikely to be a problem for
anyone in practice, unless they've hacked their kernel to build it
!SMP."
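[Illustration only, not part of the patch: a simplified mirror of the powerpc deferred-free pair visible in the diff below, showing the unhook -> TLB invalidate -> free ordering that RCU_TABLE_FREE enforces. The example_*() names are made up; MAX_PGTABLE_INDEX_SIZE is assumed from the powerpc headers.]

    /* Queue a page-table page instead of freeing it immediately. */
    static inline void example_pgtable_free_tlb(struct mmu_gather *tlb,
                                                void *table, int shift)
    {
            unsigned long pgf = (unsigned long)table;

            pgf |= shift;                           /* stash the size in the low bits */
            tlb_remove_table(tlb, (void *)pgf);     /* freed only after TLBI + grace period */
    }

    /* Called by the generic code once it is safe to free the page. */
    static inline void example_tlb_remove_table(void *_table)
    {
            void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
            unsigned int shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;

            pgtable_free(table, shift);
    }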
Link: http://lkml.kernel.org/r/20200116064531.483522-2-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V
Cc: # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: backported for 4.19 stable]
---
 arch/powerpc/Kconfig                         | 2 +-
 arch/powerpc/include/asm/book3s/32/pgalloc.h | 8 --------
 arch/powerpc/include/asm/book3s/64/pgalloc.h | 2 --
 arch/powerpc/include/asm/nohash/32/pgalloc.h | 8 --------
 arch/powerpc/include/asm/nohash/64/pgalloc.h | 9 +--------
 arch/powerpc/mm/pgtable-book3s64.c           | 7 -------
 6 files changed, 2 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e09cfb109b8c..1a00ce4b0040 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -215,7 +215,7 @@ config PPC
 	select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select HAVE_RCU_TABLE_FREE if SMP
+	select HAVE_RCU_TABLE_FREE
 	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE if PPC64 && CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h
index 82e44b1a00ae..79ba3fbb512e 100644
--- a/arch/powerpc/include/asm/book3s/32/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h
@@ -110,7 +110,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 #define check_pgt_cache()	do { } while (0)
 #define get_hugepd_cache_index(x)	(x)
 
-#ifdef CONFIG_SMP
 static inline void pgtable_free_tlb(struct mmu_gather *tlb,
 				    void *table, int shift)
 {
@@ -127,13 +126,6 @@ static inline void __tlb_remove_table(void *_table)
 	pgtable_free(table, shift);
 }
-#else
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index f9019b579903..1013c0214213 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -47,9 +47,7 @@ extern pmd_t *pmd_fragment_alloc(struct mm_struct *, unsigned long);
 extern void pte_fragment_free(unsigned long *, int);
 extern void pmd_fragment_free(unsigned long *);
 extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
-#ifdef CONFIG_SMP
 extern void __tlb_remove_table(void *_table);
-#endif
 
 static inline pgd_t *radix__pgd_alloc(struct mm_struct *mm)
 {
diff --git a/arch/powerpc/include/asm/nohash/32/pgalloc.h b/arch/powerpc/include/asm/nohash/32/pgalloc.h
index 8825953c225b..96eed46d5684 100644
--- a/arch/powerpc/include/asm/nohash/32/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/32/pgalloc.h
@@ -111,7 +111,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 #define check_pgt_cache()	do { } while (0)
 #define get_hugepd_cache_index(x)	(x)
 
-#ifdef CONFIG_SMP
 static inline void pgtable_free_tlb(struct mmu_gather *tlb,
 				    void *table, int shift)
 {
@@ -128,13 +127,6 @@ static inline void __tlb_remove_table(void *_table)
 	pgtable_free(table, shift);
 }
-#else
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
diff --git a/arch/powerpc/include/asm/nohash/64/pgalloc.h b/arch/powerpc/include/asm/nohash/64/pgalloc.h
index e2d62d033708..e3a0caba65f4 100644
--- a/arch/powerpc/include/asm/nohash/64/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/64/pgalloc.h
@@ -142,7 +142,7 @@ static inline void pgtable_free(void *table, int shift)
 }
 #define get_hugepd_cache_index(x)	(x)
-#ifdef CONFIG_SMP
+
 static inline void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
 {
 	unsigned long pgf = (unsigned long)table;
@@ -160,13 +160,6 @@ static inline void __tlb_remove_table(void *_table)
 	pgtable_free(table, shift);
 }
-#else
-static inline void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif
-
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
 {
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index 297db665d953..5b4e9fd8990c 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -432,7 +432,6 @@ static inline void pgtable_free(void *table, int index)
 	}
 }
 
-#ifdef CONFIG_SMP
 void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int index)
 {
 	unsigned long pgf = (unsigned long)table;
@@ -449,12 +448,6 @@ void __tlb_remove_table(void *_table)
 	return pgtable_free(table, index);
 }
-#else
-void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int index)
-{
-	return pgtable_free(table, index);
-}
-#endif
 
 #ifdef CONFIG_PROC_FS
 atomic_long_t direct_pages_count[MMU_PAGE_COUNT];
From patchwork Thu Mar 12 13:27:39 2020
X-Patchwork-Submitter: Santosh Sivaraj
X-Patchwork-Id: 1253584
From: Santosh Sivaraj
To: , linuxppc-dev
Cc: Sasha Levin, Peter Zijlstra, "Aneesh Kumar K.V", Greg KH
Subject: [PATCH v3 5/6] mm/mmu_gather: invalidate TLB correctly on batch allocation failure and flush
Date: Thu, 12 Mar 2020 18:57:39 +0530
Message-Id: <20200312132740.225241-6-santosh@fossix.org>
In-Reply-To: <20200312132740.225241-1-santosh@fossix.org>
References: <20200312132740.225241-1-santosh@fossix.org>

From: Peter Zijlstra

commit 0ed1325967ab5f7a4549a2641c6ebe115f76e228 upstream.

Architectures for which we have hardware walkers of Linux page table
should flush TLB on mmu gather batch allocation failures and batch
flush. Some architectures like POWER supports multiple translation modes
(hash and radix) and in the case of POWER only radix translation mode
needs the above TLBI. This is because for hash translation mode kernel
wants to avoid this extra flush since there are no hardware walkers of
linux page table. With radix translation, the hardware also walks linux
page table and with that, kernel needs to make sure to TLB invalidate
page walk cache before page table pages are freed.

More details in commit d86564a2f085 ("mm/tlb, x86/mm: Support
invalidating TLB caches for RCU_TABLE_FREE")

The changes to sparc are to make sure we keep the old behavior since we
are now removing HAVE_RCU_TABLE_NO_INVALIDATE. The default value for
tlb_needs_table_invalidate is to always force an invalidate and sparc
can avoid the table invalidate. Hence we define
tlb_needs_table_invalidate to false for sparc architecture.
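[Illustration only, for a hypothetical architecture; it follows the same pattern as the sparc64 and powerpc hunks below. Nothing here is part of the series.]

    /* hypothetical arch/foo/include/asm/tlb.h */
    #ifndef _ASM_FOO_TLB_H
    #define _ASM_FOO_TLB_H

    /*
     * Assumption for this made-up architecture: its TLB is refilled purely
     * by software, so freeing a page-table page never needs a TLBI for
     * hardware walk caches.
     */
    #ifdef CONFIG_HAVE_RCU_TABLE_FREE
    #define tlb_needs_table_invalidate()	(false)
    #endif

    #include <asm-generic/tlb.h>

    #endif /* _ASM_FOO_TLB_H */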
Link: http://lkml.kernel.org/r/20200116064531.483522-3-aneesh.kumar@linux.ibm.com
Fixes: a46cc7a90fd8 ("powerpc/mm/radix: Improve TLB/PWC flushes")
Signed-off-by: Peter Zijlstra (Intel)
Cc: # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: backported to 4.19 stable]
---
 arch/Kconfig                    |  3 ---
 arch/powerpc/Kconfig            |  1 -
 arch/powerpc/include/asm/tlb.h  | 11 +++++++++++
 arch/sparc/Kconfig              |  1 -
 arch/sparc/include/asm/tlb_64.h |  9 +++++++++
 include/asm-generic/tlb.h       | 15 +++++++++++++++
 mm/memory.c                     | 16 ++++++++--------
 7 files changed, 43 insertions(+), 13 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 061a12b8140e..3abbdb0cea44 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -363,9 +363,6 @@ config HAVE_ARCH_JUMP_LABEL
 config HAVE_RCU_TABLE_FREE
 	bool
 
-config HAVE_RCU_TABLE_NO_INVALIDATE
-	bool
-
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1a00ce4b0040..e5bc0cfea2b1 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -216,7 +216,6 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE
-	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
index f0e571b2dc7c..63418275f402 100644
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -30,6 +30,17 @@
 #define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
 
 extern void tlb_flush(struct mmu_gather *tlb);
 
+/*
+ * book3s:
+ * Hash does not use the linux page-tables, so we can avoid
+ * the TLB invalidate for page-table freeing, Radix otoh does use the
+ * page-tables and needs the TLBI.
+ *
+ * nohash:
+ * We still do TLB invalidate in the __pte_free_tlb routine before we
+ * add the page table pages to mmu gather table batch.
+ */
+#define tlb_needs_table_invalidate()	radix_enabled()
 /* Get the generic bits... */
 #include
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index d90d632868aa..e6f2a38d2e61 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -64,7 +64,6 @@ config SPARC64
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
 	select HAVE_RCU_TABLE_FREE if SMP
-	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_DYNAMIC_FTRACE
diff --git a/arch/sparc/include/asm/tlb_64.h b/arch/sparc/include/asm/tlb_64.h
index a2f3fa61ee36..8cb8f3833239 100644
--- a/arch/sparc/include/asm/tlb_64.h
+++ b/arch/sparc/include/asm/tlb_64.h
@@ -28,6 +28,15 @@ void flush_tlb_pending(void);
 #define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
 #define tlb_flush(tlb)	flush_tlb_pending()
 
+/*
+ * SPARC64's hardware TLB fill does not use the Linux page-tables
+ * and therefore we don't need a TLBI when freeing page-table pages.
+ */
+
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#define tlb_needs_table_invalidate()	(false)
+#endif
+
 #include
 
 #endif /* _SPARC64_TLB_H */
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index f2b9dc9cbaf8..19934cdd143e 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -61,8 +61,23 @@ struct mmu_table_batch {
 extern void tlb_table_flush(struct mmu_gather *tlb);
 extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 
+/*
+ * This allows an architecture that does not use the linux page-tables for
+ * hardware to skip the TLBI when freeing page tables.
+ */
+#ifndef tlb_needs_table_invalidate
+#define tlb_needs_table_invalidate() (true)
 #endif
 
+#else
+
+#ifdef tlb_needs_table_invalidate
+#error tlb_needs_table_invalidate() requires HAVE_RCU_TABLE_FREE
+#endif
+
+#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+
+
 /*
  * If we can't allocate a page to make a big batch of page pointers
  * to work on, then just handle a few from the on-stack structure.
diff --git a/mm/memory.c b/mm/memory.c
index ba5689610c04..7daa7ae1b046 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -327,14 +327,14 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
  */
 static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 {
-#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
-	/*
-	 * Invalidate page-table caches used by hardware walkers. Then we still
-	 * need to RCU-sched wait while freeing the pages because software
-	 * walkers can still be in-flight.
-	 */
-	tlb_flush_mmu_tlbonly(tlb);
-#endif
+	if (tlb_needs_table_invalidate()) {
+		/*
+		 * Invalidate page-table caches used by hardware walkers. Then
+		 * we still need to RCU-sched wait while freeing the pages
+		 * because software walkers can still be in-flight.
+		 */
+		tlb_flush_mmu_tlbonly(tlb);
+	}
 }
 
 static void tlb_remove_table_smp_sync(void *arg)
From patchwork Thu Mar 12 13:27:40 2020
X-Patchwork-Submitter: Santosh Sivaraj
X-Patchwork-Id: 1253588
From: Santosh Sivaraj
To: , linuxppc-dev
Cc: Sasha Levin, Peter Zijlstra, "Aneesh Kumar K.V", Greg KH
Subject: [PATCH v3 6/6] asm-generic/tlb: avoid potential double flush
Date: Thu, 12 Mar 2020 18:57:40 +0530
Message-Id: <20200312132740.225241-7-santosh@fossix.org>
In-Reply-To: <20200312132740.225241-1-santosh@fossix.org>
References: <20200312132740.225241-1-santosh@fossix.org>

From: Peter Zijlstra

commit 0758cd8304942292e95a0f750c374533db378b32 upstream.

Aneesh reported that:

	tlb_flush_mmu()
	  tlb_flush_mmu_tlbonly()
	    tlb_flush()			<-- #1
	  tlb_flush_mmu_free()
	    tlb_table_flush()
	      tlb_table_invalidate()
	        tlb_flush_mmu_tlbonly()
	          tlb_flush()		<-- #2

does two TLBIs when tlb->fullmm, because __tlb_reset_range() will not
clear tlb->end in that case.

Observe that any caller to __tlb_adjust_range() also sets at least one
of the tlb->freed_tables || tlb->cleared_p* bits, and those are
unconditionally cleared by __tlb_reset_range().

Change the condition for actually issuing TLBI to having one of those
bits set, as opposed to having tlb->end != 0.
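[Illustration only, summarizing the guard change in the hunk below; nothing new is added.]

    /* Sketch: the flush decision inside tlb_flush_mmu_tlbonly(). */
    static inline bool example_should_flush(struct mmu_gather *tlb)
    {
            /*
             * Before this patch the guard was (tlb->end != 0); tlb->end stays
             * set for a fullmm teardown, so the nested call at #2 flushed
             * again. The bits below are cleared unconditionally by
             * __tlb_reset_range() after flush #1, so checking them turns
             * flush #2 into a no-op.
             */
            return tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
                   tlb->cleared_puds || tlb->cleared_p4ds;
    }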
Link: http://lkml.kernel.org/r/20200116064531.483522-4-aneesh.kumar@linux.ibm.com
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Aneesh Kumar K.V
Reported-by: "Aneesh Kumar K.V"
Cc: # 4.19
Signed-off-by: Santosh Sivaraj
[santosh: backported to 4.19 stable]
---
 include/asm-generic/tlb.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 19934cdd143e..427a70c56ddd 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -179,7 +179,12 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
+	/*
+	 * Anything calling __tlb_adjust_range() also sets at least one of
+	 * these bits.
+	 */
+	if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
+	      tlb->cleared_puds || tlb->cleared_p4ds))
 		return;
 
 	tlb_flush(tlb);