From patchwork Thu Sep 17 07:22:39 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 33760
X-Patchwork-Delegate: davem@davemloft.net
Return-Path:
X-Original-To: patchwork-incoming@ozlabs.org
Delivered-To: patchwork-incoming@ozlabs.org
Received: from vger.kernel.org (vger.kernel.org [209.132.176.167])
	by ozlabs.org (Postfix) with ESMTP id 83FB0B7334
	for ; Thu, 17 Sep 2009 17:25:18 +1000 (EST)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1759145AbZIQHYe (ORCPT );
	Thu, 17 Sep 2009 03:24:34 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1758905AbZIQHYd (ORCPT );
	Thu, 17 Sep 2009 03:24:33 -0400
Received: from mx1.redhat.com ([209.132.183.28]:30767 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1758654AbZIQHYc (ORCPT );
	Thu, 17 Sep 2009 03:24:32 -0400
Received: from int-mx03.intmail.prod.int.phx2.redhat.com
	(int-mx03.intmail.prod.int.phx2.redhat.com [10.5.11.16])
	by mx1.redhat.com (8.13.8/8.13.8) with ESMTP id n8H7OUO3018510;
	Thu, 17 Sep 2009 03:24:30 -0400
Received: from redhat.com (vpn-10-13.str.redhat.com [10.32.10.13])
	by int-mx03.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id n8H7OMR2004190
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO);
	Thu, 17 Sep 2009 03:24:26 -0400
Date: Thu, 17 Sep 2009 10:22:39 +0300
From: "Michael S. Tsirkin"
To: netdev@vger.kernel.org, virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org, mingo@elte.hu,
	linux-mm@kvack.org, akpm@linux-foundation.org, hpa@zytor.com
Subject: [PATCHv3 2/2] mm: reduce atomic use on use_mm fast path
Message-ID: <20090917072239.GC18115@redhat.com>
References:
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.5.19 (2009-01-05)
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.16
Sender: netdev-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org

When the mm being switched to matches the active mm, we don't need
to increment and then drop the mm count. In a simple benchmark this
happens about 50% of the time. Making the mm_count update conditional
reduces contention on its cache line on SMP systems.

Acked-by: Andrea Arcangeli
Signed-off-by: Michael S. Tsirkin
---
 mm/mmu_context.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index fd473b5..ded9081 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -26,13 +26,16 @@ void use_mm(struct mm_struct *mm)
 
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 
 /*
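
For reference, here is how use_mm() reads with this patch applied. This is
a minimal sketch reconstructed from the hunk above; the includes and local
declarations are assumed from the mm/mmu_context.c of this era and are not
part of the diff itself.

#include <linux/mm.h>
#include <linux/sched.h>
#include <asm/mmu_context.h>

/*
 * Sketch: use_mm() with this patch applied. Makes the calling kernel
 * thread run with the given mm as its address space.
 */
void use_mm(struct mm_struct *mm)
{
	struct mm_struct *active_mm;
	struct task_struct *tsk = current;

	task_lock(tsk);
	active_mm = tsk->active_mm;
	if (active_mm != mm) {
		/*
		 * Only take a reference when actually switching mms: when
		 * active_mm == mm, the task's active_mm already pins it.
		 */
		atomic_inc(&mm->mm_count);
		tsk->active_mm = mm;
	}
	tsk->mm = mm;
	switch_mm(active_mm, mm, tsk);
	task_unlock(tsk);

	if (active_mm != mm)
		mmdrop(active_mm);	/* drop the reference on the old mm */
}

The two conditionals mirror each other: the reference taken in the first
branch replaces the one dropped in the second, so the net count is
unchanged, while the fast path (a kernel thread re-attaching to the mm it
already borrows) never dirties the mm_count cache line at all.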