From patchwork Thu Jun 23 01:11:42 2016
X-Patchwork-Submitter: Nitin Gupta
X-Patchwork-Id: 639467
X-Patchwork-Delegate: davem@davemloft.net
From: Nitin Gupta <nitin.m.gupta@oracle.com>
To: "David S. Miller"
Cc: Nitin Gupta, "David S. Miller", Andrew Morton, Zhang Zhen,
    Dominik Dingel, Martin Schwidefsky, "Kirill A. Shutemov",
    Naoya Horiguchi, Mike Kravetz, Hillf Danton, Michal Hocko,
    Chen Gang, David Woods, Yaowei Bai,
    sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] sparc64: Fix pagetable freeing for hugepage regions
Date: Wed, 22 Jun 2016 18:11:42 -0700
Message-Id: <1466644374-58354-2-git-send-email-nitin.m.gupta@oracle.com>
In-Reply-To: <1466644374-58354-1-git-send-email-nitin.m.gupta@oracle.com>
References: <1466644374-58354-1-git-send-email-nitin.m.gupta@oracle.com>
List-ID: sparclinux@vger.kernel.org

8M hugepages now allocate page tables down to the PMD level only.
So, when freeing page tables for a region backed by 8M hugepages,
make sure we don't try to access the non-existent PTE level.
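For illustration only (not part of the patch): with an 8M hugepage the
mapping lives in the PMD entry itself, so there is no PTE table below it
to free. A minimal, hypothetical userspace model of the two cases the
new walk distinguishes; struct pmd_entry and free_pmd_level() are
stand-ins, only the is-huge vs. points-to-PTE-table split mirrors the
patch:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a PMD entry: either an 8M huge mapping, or a pointer
 * to a PTE table of ordinary 8K mappings. */
struct pmd_entry {
	bool is_huge;
	void *pte_table;	/* meaningful only when !is_huge */
};

static void free_pmd_level(struct pmd_entry *pmd)
{
	if (pmd->is_huge) {
		/* Huge PMD: just clear it; no PTE level exists below. */
		pmd->is_huge = false;
	} else if (pmd->pte_table) {
		/* Normal PMD: free the PTE table below, then clear. */
		free(pmd->pte_table);
		pmd->pte_table = NULL;
	}
}

int main(void)
{
	struct pmd_entry huge = { .is_huge = true, .pte_table = NULL };
	struct pmd_entry normal = { .is_huge = false,
				    .pte_table = malloc(64) };

	free_pmd_level(&huge);		/* must not chase a PTE table */
	free_pmd_level(&normal);	/* frees the PTE table */
	printf("ok\n");
	return 0;
}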
Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com>
---
 arch/sparc/include/asm/hugetlb.h |  8 ----
 arch/sparc/mm/hugetlbpage.c      | 98 ++++++++++++++++++++++++++++++++++++++++
 include/linux/hugetlb.h          |  4 ++
 3 files changed, 102 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/include/asm/hugetlb.h b/arch/sparc/include/asm/hugetlb.h
index 139e711..1a6708c 100644
--- a/arch/sparc/include/asm/hugetlb.h
+++ b/arch/sparc/include/asm/hugetlb.h
@@ -31,14 +31,6 @@ static inline int prepare_hugepage_range(struct file *file,
 	return 0;
 }
 
-static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
-					  unsigned long addr, unsigned long end,
-					  unsigned long floor,
-					  unsigned long ceiling)
-{
-	free_pgd_range(tlb, addr, end, floor, ceiling);
-}
-
 static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
 					 unsigned long addr, pte_t *ptep)
 {
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index cafb5ca..494c390 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -12,6 +12,7 @@
 
 #include <asm/mman.h>
 #include <asm/pgalloc.h>
+#include <asm/tlb.h>
 #include <asm/tlbflush.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
@@ -202,3 +203,100 @@ int pud_huge(pud_t pud)
 {
 	return 0;
 }
+
+static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
+				   unsigned long addr)
+{
+	pgtable_t token = pmd_pgtable(*pmd);
+
+	pmd_clear(pmd);
+	pte_free_tlb(tlb, token, addr);
+	atomic_long_dec(&tlb->mm->nr_ptes);
+}
+
+static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
+				   unsigned long addr, unsigned long end,
+				   unsigned long floor, unsigned long ceiling)
+{
+	pmd_t *pmd;
+	unsigned long next;
+	unsigned long start;
+
+	start = addr;
+	pmd = pmd_offset(pud, addr);
+	do {
+		next = pmd_addr_end(addr, end);
+		if (pmd_none(*pmd))
+			continue;
+		if (is_hugetlb_pmd(*pmd))
+			pmd_clear(pmd);
+		else
+			hugetlb_free_pte_range(tlb, pmd, addr);
+	} while (pmd++, addr = next, addr != end);
+
+	start &= PUD_MASK;
+	if (start < floor)
+		return;
+	if (ceiling) {
+		ceiling &= PUD_MASK;
+		if (!ceiling)
+			return;
+	}
+	if (end - 1 > ceiling - 1)
+		return;
+
+	pmd = pmd_offset(pud, start);
+	pud_clear(pud);
+	pmd_free_tlb(tlb, pmd, start);
+	mm_dec_nr_pmds(tlb->mm);
+}
+
+static void hugetlb_free_pud_range(struct mmu_gather *tlb, pgd_t *pgd,
+				   unsigned long addr, unsigned long end,
+				   unsigned long floor, unsigned long ceiling)
+{
+	pud_t *pud;
+	unsigned long next;
+	unsigned long start;
+
+	start = addr;
+	pud = pud_offset(pgd, addr);
+	do {
+		next = pud_addr_end(addr, end);
+		if (pud_none_or_clear_bad(pud))
+			continue;
+		hugetlb_free_pmd_range(tlb, pud, addr, next, floor,
+				       ceiling);
+	} while (pud++, addr = next, addr != end);
+
+	start &= PGDIR_MASK;
+	if (start < floor)
+		return;
+	if (ceiling) {
+		ceiling &= PGDIR_MASK;
+		if (!ceiling)
+			return;
+	}
+	if (end - 1 > ceiling - 1)
+		return;
+
+	pud = pud_offset(pgd, start);
+	pgd_clear(pgd);
+	pud_free_tlb(tlb, pud, start);
+}
+
+void hugetlb_free_pgd_range(struct mmu_gather *tlb,
+			    unsigned long addr, unsigned long end,
+			    unsigned long floor, unsigned long ceiling)
+{
+	pgd_t *pgd;
+	unsigned long next;
+
+	pgd = pgd_offset(tlb->mm, addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		if (pgd_none_or_clear_bad(pgd))
+			continue;
+		hugetlb_free_pud_range(tlb, pgd, addr, next, floor, ceiling);
+	} while (pgd++, addr = next, addr != end);
+}
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c26d463..4461309 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -120,6 +120,10 @@ int pud_huge(pud_t pmd);
 unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot);
 
+void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
+			    unsigned long end, unsigned long floor,
+			    unsigned long ceiling);
+
 #else /* !CONFIG_HUGETLB_PAGE */
 
 static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
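
A note for reviewers, in case the floor/ceiling checks above look odd:
my reading is that they follow the pattern of free_pmd_range() and
free_pud_range() in mm/memory.c, where ceiling == 0 means "no ceiling"
and the guard relies on unsigned wraparound: in "end - 1 > ceiling - 1",
a zero ceiling wraps to ULONG_MAX, so the free is never blocked. A tiny
hypothetical demo of that idiom (blocked_by_ceiling() is a stand-in):

#include <stdio.h>

/* Mirrors the "if (end - 1 > ceiling - 1) return;" guard: unsigned
 * wraparound makes ceiling == 0 behave as "no ceiling". */
static int blocked_by_ceiling(unsigned long end, unsigned long ceiling)
{
	return end - 1 > ceiling - 1;
}

int main(void)
{
	/* ceiling 0: 0 - 1 wraps to ULONG_MAX, never blocks the free */
	printf("%d\n", blocked_by_ceiling(0x400000UL, 0));		/* 0 */
	/* a real ceiling below end does block the upper-level free */
	printf("%d\n", blocked_by_ceiling(0x400000UL, 0x200000UL));	/* 1 */
	return 0;
}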