From patchwork Fri Apr 19 17:40:51 2013
X-Patchwork-Submitter: Dave Kleikamp
X-Patchwork-Id: 238072
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <517181A3.5090905@oracle.com>
Date: Fri, 19 Apr 2013 12:40:51 -0500
From: Dave Kleikamp
To: David Miller
CC: sparclinux@vger.kernel.org
Subject: Re: [PATCH] sparc64: Don't pass a pointer to xcall_flush_tlb_pending
References: <20130417.173127.1051335998167836570.davem@davemloft.net>
 <516F1D1E.5020307@oracle.com>
 <20130417.184445.2148323167004439141.davem@davemloft.net>
 <20130418.205139.332106502286987132.davem@davemloft.net>
In-Reply-To: <20130418.205139.332106502286987132.davem@davemloft.net>

Here's a lightly-tested incremental patch to yours that limits
smp_flush_tlb_page's cross calls to the mm's active set.

Signed-off-by: Dave Kleikamp

---

diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/tlbflush_64.h
index d4b56bb..f3c8a0f 100644
--- a/arch/sparc/include/asm/tlbflush_64.h
+++ b/arch/sparc/include/asm/tlbflush_64.h
@@ -54,21 +54,21 @@
 do {	flush_tsb_kernel_range(start,end); \
 	__flush_tlb_kernel_range(start,end); \
 } while (0)
 
-#define global_flush_tlb_page(context, vaddr) \
-	__flush_tlb_page(context, vaddr)
+#define global_flush_tlb_page(mm, vaddr) \
+	__flush_tlb_page(CTX_HWBITS((mm)->context), vaddr)
 
 #else /* CONFIG_SMP */
 
 extern void smp_flush_tlb_kernel_range(unsigned long start, unsigned long end);
-extern void smp_flush_tlb_page(unsigned long context, unsigned long vaddr);
+extern void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr);
 
 #define flush_tlb_kernel_range(start, end) \
 do {	flush_tsb_kernel_range(start,end); \
 	smp_flush_tlb_kernel_range(start, end); \
 } while (0)
 
-#define global_flush_tlb_page(context, vaddr) \
-	smp_flush_tlb_page(context, vaddr)
+#define global_flush_tlb_page(mm, vaddr) \
+	smp_flush_tlb_page(mm, vaddr)
 
 #endif /* ! CONFIG_SMP */
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 33bd996..1dee6e9 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -1108,10 +1108,16 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
 	put_cpu();
 }
 
-void smp_flush_tlb_page(unsigned long context, unsigned long vaddr)
+void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr)
 {
-	smp_cross_call(&xcall_flush_tlb_page,
-		       context, vaddr, 0);
+	unsigned long context = CTX_HWBITS(mm->context);
+	int cpu = get_cpu();
+
+	if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
+		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
+	else
+		smp_cross_call(&xcall_flush_tlb_page, context, vaddr, 0);
+	__flush_tlb_page(context, vaddr);
 }
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 26ddb58..1e31b30 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -77,7 +77,7 @@ static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
 	}
 
 	if (!tb->active) {
-		global_flush_tlb_page(CTX_HWBITS(mm->context), vaddr);
+		global_flush_tlb_page(mm, vaddr);
 		flush_tsb_user_page(mm, vaddr);
 		return;
 	}