From patchwork Thu Sep 11 23:37:41 2008
X-Patchwork-Submitter: Andre Detsch
X-Patchwork-Id: 249
X-Patchwork-Delegate: jk@ozlabs.org
From: Andre Detsch
To: cbe-oss-dev@ozlabs.org
Cc: LukeBrowning@us.ibm.com, Jeremy Kerr
Date: Thu, 11 Sep 2008 20:37:41 -0300
Message-Id: <200809112037.41236.adetsch@br.ibm.com>
In-Reply-To: <200809111955.28780.adetsch@br.ibm.com>
References: <200809111955.28780.adetsch@br.ibm.com>
Subject: [Cbe-oss-dev] [PATCH 01/11] powerpc/spufs: Change cbe_spu_info mutex_lock to spin_lock

The cbe_spu_info structure groups the physical spus of each node. Its
list_mutex must be changed to a spin lock, because the runq_lock is a
spin lock and mutexes cannot be nested under spin locks: the lock for
cbe_spu_info[] is taken under the runq_lock, as many spus may need to
be allocated to schedule a gang.

Change spu_bind_context() and spu_unbind_context() so that they are not
called under the new spin lock, as that would cause a deadlock if they
blocked on higher-level allocations (mmap) that are protected by
mutexes.
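For illustration only (not part of the patch): a minimal sketch of the
nesting described above, using a hypothetical helper name
(example_find_free_spu). With list_lock as a spinlock it can legally be
taken while the scheduler's runq_lock spinlock is already held; the old
list_mutex in the same position could have slept with preemption
disabled.

#include <linux/list.h>
#include <linux/spinlock.h>

#include <asm/spu.h>	/* cbe_spu_info[], struct spu, SPU_FREE */

/*
 * Hypothetical helper, not part of this patch: scan one node's physical
 * spus while the caller already holds the runq_lock (a spinlock).
 * Nesting list_lock underneath is legal because it is now a spinlock
 * too; the old list_mutex could sleep here with a spinlock held.
 */
static struct spu *example_find_free_spu(int node)
{
	struct spu *spu, *found = NULL;

	spin_lock(&cbe_spu_info[node].list_lock);
	list_for_each_entry(spu, &cbe_spu_info[node].spus, cbe_list) {
		if (spu->alloc_state == SPU_FREE) {
			found = spu;
			break;
		}
	}
	spin_unlock(&cbe_spu_info[node].list_lock);

	return found;
}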
Signed-off-by: Luke Browning
Signed-off-by: Andre Detsch

diff --git a/arch/powerpc/include/asm/spu.h b/arch/powerpc/include/asm/spu.h
index 8b2eb04..9d799b6 100644
--- a/arch/powerpc/include/asm/spu.h
+++ b/arch/powerpc/include/asm/spu.h
@@ -187,7 +187,7 @@ struct spu {
 };
 
 struct cbe_spu_info {
-	struct mutex list_mutex;
+	spinlock_t list_lock;
 	struct list_head spus;
 	int n_spus;
 	int nr_active;
diff --git a/arch/powerpc/platforms/cell/spu_base.c b/arch/powerpc/platforms/cell/spu_base.c
index a5bdb89..b1a97a1 100644
--- a/arch/powerpc/platforms/cell/spu_base.c
+++ b/arch/powerpc/platforms/cell/spu_base.c
@@ -650,10 +650,10 @@ static int __init create_spu(void *data)
 	if (ret)
 		goto out_free_irqs;
 
-	mutex_lock(&cbe_spu_info[spu->node].list_mutex);
+	spin_lock(&cbe_spu_info[spu->node].list_lock);
 	list_add(&spu->cbe_list, &cbe_spu_info[spu->node].spus);
 	cbe_spu_info[spu->node].n_spus++;
-	mutex_unlock(&cbe_spu_info[spu->node].list_mutex);
+	spin_unlock(&cbe_spu_info[spu->node].list_lock);
 
 	mutex_lock(&spu_full_list_mutex);
 	spin_lock_irqsave(&spu_full_list_lock, flags);
@@ -732,7 +732,7 @@ static int __init init_spu_base(void)
 	int i, ret = 0;
 
 	for (i = 0; i < MAX_NUMNODES; i++) {
-		mutex_init(&cbe_spu_info[i].list_mutex);
+		spin_lock_init(&cbe_spu_info[i].list_lock);
 		INIT_LIST_HEAD(&cbe_spu_info[i].spus);
 	}
diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c
index 897c740..386aa0a 100644
--- a/arch/powerpc/platforms/cell/spufs/sched.c
+++ b/arch/powerpc/platforms/cell/spufs/sched.c
@@ -153,11 +153,11 @@ void spu_update_sched_info(struct spu_context *ctx)
 		node = ctx->spu->node;
 
 		/*
-		 * Take list_mutex to sync with find_victim().
+		 * Take list_lock to sync with find_victim().
 		 */
-		mutex_lock(&cbe_spu_info[node].list_mutex);
+		spin_lock(&cbe_spu_info[node].list_lock);
 		__spu_update_sched_info(ctx);
-		mutex_unlock(&cbe_spu_info[node].list_mutex);
+		spin_unlock(&cbe_spu_info[node].list_lock);
 	} else {
 		__spu_update_sched_info(ctx);
 	}
@@ -179,9 +179,9 @@ static int node_allowed(struct spu_context *ctx, int node)
 {
 	int rval;
 
-	spin_lock(&spu_prio->runq_lock);
+	spin_lock(&cbe_spu_info[node].list_lock);
 	rval = __node_allowed(ctx, node);
-	spin_unlock(&spu_prio->runq_lock);
+	spin_unlock(&cbe_spu_info[node].list_lock);
 
 	return rval;
 }
@@ -199,7 +199,7 @@ void do_notify_spus_active(void)
 	for_each_online_node(node) {
 		struct spu *spu;
 
-		mutex_lock(&cbe_spu_info[node].list_mutex);
+		spin_lock(&cbe_spu_info[node].list_lock);
 		list_for_each_entry(spu, &cbe_spu_info[node].spus, cbe_list) {
 			if (spu->alloc_state != SPU_FREE) {
 				struct spu_context *ctx = spu->ctx;
@@ -209,7 +209,7 @@ void do_notify_spus_active(void)
 				wake_up_all(&ctx->stop_wq);
 			}
 		}
-		mutex_unlock(&cbe_spu_info[node].list_mutex);
+		spin_unlock(&cbe_spu_info[node].list_lock);
 	}
 }
 
@@ -233,7 +233,6 @@ static void spu_bind_context(struct spu *spu, struct spu_context *ctx)
 
 	spu_associate_mm(spu, ctx->owner);
 
 	spin_lock_irq(&spu->register_lock);
-	spu->ctx = ctx;
 	spu->flags = 0;
 	ctx->spu = spu;