From patchwork Wed Jun 20 13:05:39 2018
X-Patchwork-Submitter: Anshuman Khandual <khandual@linux.vnet.ibm.com>
X-Patchwork-Id: 932232
From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH V3] powerpc/mm: Initialize kernel pagetable memory for PTE fragments
Date: Wed, 20 Jun 2018 18:35:39 +0530
Message-Id: <20180620130539.379-1-khandual@linux.vnet.ibm.com>
Cc: aneesh.kumar@linux.ibm.com

Kernel pagetable pages for PTE fragments never go through the standard
init sequence, which can cause inaccuracies in the utilization statistics
reported through interfaces such as /proc and /sys. The allocated page
also misses out on the pagetable lock and page flag initialization. Fix
this by making sure that all pages allocated for either user process or
kernel PTE fragments go through the same initialization.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
---
Changes in V3:

- Replaced the 'kernel' argument with a direct check on init_mm as per Aneesh

Changes in V2:

- Call the destructor function during free for all cases

 arch/powerpc/include/asm/book3s/64/pgalloc.h | 12 ++++-----
 arch/powerpc/mm/pgtable-book3s64.c           | 37 +++++++++++++---------------
 2 files changed, 23 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index 01ee40f..ccb351c 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -41,9 +41,9 @@ struct vmemmap_backing {
 			pgtable_cache[(shift) - 1];	\
 		})
 
-extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long, int);
+extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long);
 extern pmd_t *pmd_fragment_alloc(struct mm_struct *, unsigned long);
-extern void pte_fragment_free(unsigned long *, int);
+extern void pte_fragment_free(unsigned long *);
 extern void pmd_fragment_free(unsigned long *);
 extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
 #ifdef CONFIG_SMP
@@ -176,23 +176,23 @@ static inline pgtable_t pmd_pgtable(pmd_t pmd)
 static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
 					  unsigned long address)
 {
-	return (pte_t *)pte_fragment_alloc(mm, address, 1);
+	return (pte_t *)pte_fragment_alloc(mm, address);
 }
 
 static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
 				      unsigned long address)
 {
-	return (pgtable_t)pte_fragment_alloc(mm, address, 0);
+	return (pgtable_t)pte_fragment_alloc(mm, address);
 }
 
 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	pte_fragment_free((unsigned long *)pte, 1);
+	pte_fragment_free((unsigned long *)pte);
 }
 
 static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
 {
-	pte_fragment_free((unsigned long *)ptepage, 0);
+	pte_fragment_free((unsigned long *)ptepage);
 }
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index c1f4ca4..b792f8a 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -333,25 +333,23 @@ static pte_t *get_pte_from_cache(struct mm_struct *mm)
 	return (pte_t *)ret;
 }
 
-static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
+static pte_t *__alloc_for_ptecache(struct mm_struct *mm)
 {
+	gfp_t gfp_mask = PGALLOC_GFP;
 	void *ret = NULL;
 	struct page *page;
 
-	if (!kernel) {
-		page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT);
-		if (!page)
-			return NULL;
-		if (!pgtable_page_ctor(page)) {
-			__free_page(page);
-			return NULL;
-		}
-	} else {
-		page = alloc_page(PGALLOC_GFP);
-		if (!page)
-			return NULL;
-	}
+	if (mm != &init_mm)
+		gfp_mask |= __GFP_ACCOUNT;
+	page = alloc_page(gfp_mask);
+	if (!page)
+		return NULL;
+
+	if (!pgtable_page_ctor(page)) {
+		__free_page(page);
+		return NULL;
+	}
 
 	ret = page_address(page);
 	/*
@@ -375,7 +373,7 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
 	return (pte_t *)ret;
 }
 
-pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)
+pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr)
 {
 	pte_t *pte;
 
@@ -383,16 +381,15 @@ pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel
 	if (pte)
 		return pte;
 
-	return __alloc_for_ptecache(mm, kernel);
+	return __alloc_for_ptecache(mm);
 }
 
-void pte_fragment_free(unsigned long *table, int kernel)
+void pte_fragment_free(unsigned long *table)
 {
 	struct page *page = virt_to_page(table);
 
 	if (put_page_testzero(page)) {
-		if (!kernel)
-			pgtable_page_dtor(page);
+		pgtable_page_dtor(page);
 		free_unref_page(page);
 	}
 }
@@ -401,7 +398,7 @@ static inline void pgtable_free(void *table, int index)
 {
 	switch (index) {
 	case PTE_INDEX:
-		pte_fragment_free(table, 0);
+		pte_fragment_free(table);
 		break;
 	case PMD_INDEX:
 		pmd_fragment_free(table);