From patchwork Sun May 13 20:22:24 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Miller
X-Patchwork-Id: 158861
X-Patchwork-Delegate: davem@davemloft.net
Return-Path:
X-Original-To: patchwork-incoming@ozlabs.org
Delivered-To: patchwork-incoming@ozlabs.org
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by ozlabs.org (Postfix) with ESMTP id 70208B700B
	for ; Mon, 14 May 2012 06:22:32 +1000 (EST)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752967Ab2EMUW0 (ORCPT );
	Sun, 13 May 2012 16:22:26 -0400
Received: from shards.monkeyblade.net ([198.137.202.13]:49878 "EHLO
	shards.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752693Ab2EMUWZ (ORCPT );
	Sun, 13 May 2012 16:22:25 -0400
Received: from localhost (cpe-66-108-118-54.nyc.res.rr.com [66.108.118.54])
	(authenticated bits=0)
	by shards.monkeyblade.net (8.14.4/8.14.4) with ESMTP id q4DKMO16023662
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=NO)
	for ; Sun, 13 May 2012 13:22:25 -0700
Date: Sun, 13 May 2012 16:22:24 -0400 (EDT)
Message-Id: <20120513.162224.175384916531399412.davem@davemloft.net>
To: sparclinux@vger.kernel.org
Subject: [PATCH] sparc32: Un-btfixup update_mmu_cache().
From: David Miller
X-Mailer: Mew version 6.5 on Emacs 24.0.95 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.6
	(shards.monkeyblade.net [198.137.202.13]);
	Sun, 13 May 2012 13:22:25 -0700 (PDT)
Sender: sparclinux-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: sparclinux@vger.kernel.org

The magic Swift SRMMU code in question has not been enabled for
something on the order of a decade, and it, as well as its comment,
is there in the history in case we ever need it again.

Therefore all implementations are NOPs and we can kill this stuff off.

Signed-off-by: David S. Miller
---
 arch/sparc/include/asm/pgtable_32.h |  4 +---
 arch/sparc/mm/srmmu.c               | 40 -----------------------------------
 2 files changed, 1 insertion(+), 43 deletions(-)

diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index 088db0c..b7c202f 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -356,9 +356,7 @@ void mmu_info(struct seq_file *m);
 #define FAULT_CODE_WRITE    0x2
 #define FAULT_CODE_USER     0x4
 
-BTFIXUPDEF_CALL(void, update_mmu_cache, struct vm_area_struct *, unsigned long, pte_t *)
-
-#define update_mmu_cache(vma,addr,ptep) BTFIXUP_CALL(update_mmu_cache)(vma,addr,ptep)
+#define update_mmu_cache(vma, address, ptep) do { } while (0)
 
 void srmmu_mapiorange(unsigned int bus, unsigned long xpa,
                       unsigned long xva, unsigned int len);
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 6efcb6b..4c4002f 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -531,38 +531,6 @@ extern void tsunami_flush_tlb_range(struct vm_area_struct *vma, unsigned long st
 extern void tsunami_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
 extern void tsunami_setup_blockops(void);
 
-/*
- * Workaround, until we find what's going on with Swift. When low on memory,
- * it sometimes loops in fault/handle_mm_fault incl. flush_tlb_page to find
- * out it is already in page tables/ fault again on the same instruction.
- * I really don't understand it, have checked it and contexts
- * are right, flush_tlb_all is done as well, and it faults again...
- * Strange. -jj
- *
- * The following code is a deadwood that may be necessary when
- * we start to make precise page flushes again. --zaitcev
- */
-static void swift_update_mmu_cache(struct vm_area_struct * vma, unsigned long address, pte_t *ptep)
-{
-#if 0
-	static unsigned long last;
-	unsigned int val;
-	/* unsigned int n; */
-
-	if (address == last) {
-		val = srmmu_hwprobe(address);
-		if (val != 0 && pte_val(*ptep) != val) {
-			printk("swift_update_mmu_cache: "
-			    "addr %lx put %08x probed %08x from %pf\n",
-			    address, pte_val(*ptep), val,
-			    __builtin_return_address(0));
-			srmmu_flush_whole_tlb();
-		}
-	}
-	last = address;
-#endif
-}
-
 /* swift.S */
 extern void swift_flush_cache_all(void);
 extern void swift_flush_cache_mm(struct mm_struct *mm);
@@ -1218,10 +1186,6 @@ void mmu_info(struct seq_file *m)
 		   srmmu_nocache_map.used << SRMMU_NOCACHE_BITMAP_SHIFT);
 }
 
-static void srmmu_update_mmu_cache(struct vm_area_struct * vma, unsigned long address, pte_t pte)
-{
-}
-
 void destroy_context(struct mm_struct *mm)
 {
 
@@ -1522,8 +1486,6 @@ static void __init init_swift(void)
 	BTFIXUPSET_CALL(flush_sig_insns, swift_flush_sig_insns, BTFIXUPCALL_NORM);
 	BTFIXUPSET_CALL(flush_page_for_dma, swift_flush_page_for_dma, BTFIXUPCALL_NORM);
 
-	BTFIXUPSET_CALL(update_mmu_cache, swift_update_mmu_cache, BTFIXUPCALL_NORM);
-
 	flush_page_for_dma_global = 0;
 
 	/*
@@ -1982,8 +1944,6 @@ void __init load_mmu(void)
 	extern void ld_mmu_iounit(void);
 
 	/* Functions */
-	BTFIXUPSET_CALL(update_mmu_cache, srmmu_update_mmu_cache, BTFIXUPCALL_NOP);
-
 	get_srmmu_type();
 
 #ifdef CONFIG_SMP