From patchwork Thu Aug 27 16:07:20 2009
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 32258
X-Patchwork-Delegate: davem@davemloft.net
Date: Thu, 27 Aug 2009 19:07:20 +0300
From: "Michael S. Tsirkin"
To: netdev@vger.kernel.org, virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org, mingo@elte.hu,
	linux-mm@kvack.org, akpm@linux-foundation.org, hpa@zytor.com,
	gregory.haskins@gmail.com, Rusty Russell, s.hetze@linux-ag.com
Subject: [PATCHv5 2/3] mm: reduce atomic use on use_mm fast path
Message-ID: <20090827160720.GC23722@redhat.com>

When the mm being switched to matches the active mm, we don't need to
increment and then drop the mm count.  Making this conditional reduces
contention on that cache line on SMP systems.

Acked-by: Andrea Arcangeli
Signed-off-by: Michael S. Tsirkin
---
 mm/mmu_context.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index 9989c2f..0777654 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -27,13 +27,16 @@ void use_mm(struct mm_struct *mm)
 
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
 
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 
 EXPORT_SYMBOL_GPL(use_mm);
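
For readability, here is a sketch of how use_mm() reads with the patch
applied, reconstructed from the hunk above.  The function prologue (the
local declarations and tsk = current) lies outside the hunk context and is
assumed from the surrounding mm/mmu_context.c of that time, so treat it as
illustrative rather than authoritative:

	void use_mm(struct mm_struct *mm)
	{
		struct mm_struct *active_mm;
		struct task_struct *tsk = current;	/* assumed prologue, not part of the hunk */

		task_lock(tsk);
		active_mm = tsk->active_mm;
		if (active_mm != mm) {
			/* Take an mm_count reference only when we actually change mms. */
			atomic_inc(&mm->mm_count);
			tsk->active_mm = mm;
		}
		tsk->mm = mm;
		switch_mm(active_mm, mm, tsk);
		task_unlock(tsk);

		/* Drop the old reference only if we took a new one above. */
		if (active_mm != mm)
			mmdrop(active_mm);
	}

The effect of the conditional is that a kernel thread repeatedly calling
use_mm() with an mm it is already lazily running on (active_mm == mm) no
longer executes the atomic_inc()/mmdrop() pair at all, which is what avoids
bouncing the mm_count cache line between CPUs on SMP systems.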