From patchwork Thu May 28 11:52:36 2015
X-Patchwork-Submitter: Dominik Dingel
X-Patchwork-Id: 477491
From: Dominik Dingel
To: linux-kernel@vger.kernel.org
Subject: [PATCH 4/5] s390/hugetlb: remove dead code for sw emulated huge pages
Date: Thu, 28 May 2015 13:52:36 +0200
Message-Id: <1432813957-46874-5-git-send-email-dingel@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.3.7
In-Reply-To: <1432813957-46874-1-git-send-email-dingel@linux.vnet.ibm.com>
References: <1432813957-46874-1-git-send-email-dingel@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List
Cc: linux-mips@linux-mips.org, linux-ia64@vger.kernel.org,
    linux-sh@vger.kernel.org, Catalin Marinas,
    Heiko Carstens, Zhang Zhen, Luiz Capitulino, linux-mm@kvack.org,
    Chris Metcalf, Paul Mackerras, Dominik Dingel, "H. Peter Anvin",
    sparclinux@vger.kernel.org, linux-s390@vger.kernel.org,
    Davidlohr Bueso, Russell King, x86@kernel.org, Hugh Dickins,
    linux-metag@vger.kernel.org, Christian Borntraeger, Ingo Molnar,
    "Jason J. Herne", Fenghua Yu, James Hogan, Will Deacon,
    David Rientjes, Paolo Bonzini, Thomas Gleixner, Michael Holzheu,
    Naoya Horiguchi, linux-arm-kernel@lists.infradead.org, Tony Luck,
    Nathan Lynch, Ralf Baechle, Andy Lutomirski, "Aneesh Kumar K.V",
    Martin Schwidefsky, linux390@de.ibm.com, Andrew Morton,
    linuxppc-dev@lists.ozlabs.org, "David S. Miller",
    "Kirill A. Shutemov", Mike Kravetz

We now support hugepages only on hardware with EDAT1 support. So we
remove the prepare/release_hugepage hooks and simplify set_huge_pte_at
and huge_ptep_get.

Acked-by: Martin Schwidefsky
Signed-off-by: Dominik Dingel
---
 arch/s390/include/asm/hugetlb.h |  3 ---
 arch/s390/mm/hugetlbpage.c      | 60 +++--------------------------------------
 2 files changed, 3 insertions(+), 60 deletions(-)

diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
index dfb542a..0130d03 100644
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -37,9 +37,6 @@ static inline int prepare_hugepage_range(struct file *file,
 
 #define arch_clear_hugepage_flags(page)	do { } while (0)
 
-int arch_prepare_hugepage(struct page *page);
-void arch_release_hugepage(struct page *page);
-
 static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 				  pte_t *ptep)
 {
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index fa6e1bc..999616b 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -80,31 +80,16 @@ static inline pte_t __pmd_to_pte(pmd_t pmd)
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 		     pte_t *ptep, pte_t pte)
 {
-	pmd_t pmd;
+	pmd_t pmd = __pte_to_pmd(pte);
 
-	pmd = __pte_to_pmd(pte);
-	if (!MACHINE_HAS_HPAGE) {
-		/* Emulated huge ptes loose the dirty and young bit */
-		pmd_val(pmd) &= ~_SEGMENT_ENTRY_ORIGIN;
-		pmd_val(pmd) |= pte_page(pte)[1].index;
-	} else
-		pmd_val(pmd) |= _SEGMENT_ENTRY_LARGE;
+	pmd_val(pmd) |= _SEGMENT_ENTRY_LARGE;
 	*(pmd_t *) ptep = pmd;
 }
 
 pte_t huge_ptep_get(pte_t *ptep)
 {
-	unsigned long origin;
-	pmd_t pmd;
+	pmd_t pmd = *(pmd_t *) ptep;
 
-	pmd = *(pmd_t *) ptep;
-	if (!MACHINE_HAS_HPAGE && pmd_present(pmd)) {
-		origin = pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN;
-		pmd_val(pmd) &= ~_SEGMENT_ENTRY_ORIGIN;
-		pmd_val(pmd) |= *(unsigned long *) origin;
-		/* Emulated huge ptes are young and dirty by definition */
-		pmd_val(pmd) |= _SEGMENT_ENTRY_YOUNG | _SEGMENT_ENTRY_DIRTY;
-	}
 	return __pmd_to_pte(pmd);
 }
 
@@ -119,45 +104,6 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 	return pte;
 }
 
-int arch_prepare_hugepage(struct page *page)
-{
-	unsigned long addr = page_to_phys(page);
-	pte_t pte;
-	pte_t *ptep;
-	int i;
-
-	if (MACHINE_HAS_HPAGE)
-		return 0;
-
-	ptep = (pte_t *) pte_alloc_one(&init_mm, addr);
-	if (!ptep)
-		return -ENOMEM;
-
-	pte_val(pte) = addr;
-	for (i = 0; i < PTRS_PER_PTE; i++) {
-		set_pte_at(&init_mm, addr + i * PAGE_SIZE, ptep + i, pte);
-		pte_val(pte) += PAGE_SIZE;
-	}
-	page[1].index = (unsigned long) ptep;
-	return 0;
-}
-
-void arch_release_hugepage(struct page *page)
-{
-	pte_t *ptep;
-
-	if (MACHINE_HAS_HPAGE)
-		return;
-
-	ptep = (pte_t *) page[1].index;
-	if (!ptep)
-		return;
-	clear_table((unsigned long *) ptep, _PAGE_INVALID,
-		    PTRS_PER_PTE * sizeof(pte_t));
-	page_table_free(&init_mm, (unsigned long *) ptep);
-	page[1].index = 0;
-}
-
 pte_t *huge_pte_alloc(struct mm_struct *mm,
 			unsigned long addr, unsigned long sz)
 {
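
[Editor's note] For readers skimming the hunks above, this is how the two
simplified helpers read once the patch is applied. It is reconstructed
directly from the diff; the surrounding s390 definitions (__pte_to_pmd,
__pmd_to_pte, _SEGMENT_ENTRY_LARGE) live elsewhere in arch/s390 and are
assumed here, so treat it as a reference sketch rather than a standalone
build unit.

/* set_huge_pte_at() with the software-emulation branch removed:
 * every huge pte is stored as a real large segment entry. */
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
		     pte_t *ptep, pte_t pte)
{
	pmd_t pmd = __pte_to_pmd(pte);

	pmd_val(pmd) |= _SEGMENT_ENTRY_LARGE;
	*(pmd_t *) ptep = pmd;
}

/* huge_ptep_get() no longer has to patch in the young/dirty bits
 * that the emulated (pre-EDAT1) path used to track separately. */
pte_t huge_ptep_get(pte_t *ptep)
{
	pmd_t pmd = *(pmd_t *) ptep;

	return __pmd_to_pte(pmd);
}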