From patchwork Wed Dec 10 19:34:09 2008
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andre Detsch
X-Patchwork-Id: 13302
From: Andre Detsch
To: cbe-oss-dev@ozlabs.org
Date: Wed, 10 Dec 2008 17:34:09 -0200
User-Agent: KMail/1.9.6
References: <200812101719.42964.adetsch@br.ibm.com>
In-Reply-To: <200812101719.42964.adetsch@br.ibm.com>
Message-Id: <200812101734.09215.adetsch@br.ibm.com>
Subject: [Cbe-oss-dev] [PATCH 01/18] powerpc/spufs: Change runq_lock to a mutex
List-Id: Discussion about Open Source Software for the Cell Broadband Engine
We'll need to nest runq_lock with the context state mutex, so change
runq_lock itself into a mutex.

Signed-off-by: Jeremy Kerr
Signed-off-by: Andre Detsch
---
 arch/powerpc/platforms/cell/spufs/sched.c |   28 ++++++++++++++--------------
 1 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c
index 142e9c2..8ebbf05 100644
--- a/arch/powerpc/platforms/cell/spufs/sched.c
+++ b/arch/powerpc/platforms/cell/spufs/sched.c
@@ -51,7 +51,7 @@ struct spu_prio_array {
 	DECLARE_BITMAP(bitmap, MAX_PRIO);
 	struct list_head runq[MAX_PRIO];
-	spinlock_t runq_lock;
+	struct mutex runq_lock;
 	int nr_waiting;
 };
 
@@ -179,9 +179,9 @@ static int node_allowed(struct spu_context *ctx, int node)
 {
 	int rval;
 
-	spin_lock(&spu_prio->runq_lock);
+	mutex_lock(&spu_prio->runq_lock);
 	rval = __node_allowed(ctx, node);
-	spin_unlock(&spu_prio->runq_lock);
+	mutex_unlock(&spu_prio->runq_lock);
 
 	return rval;
 }
@@ -514,9 +514,9 @@ static void __spu_add_to_rq(struct spu_context *ctx)
 
 static void spu_add_to_rq(struct spu_context *ctx)
 {
-	spin_lock(&spu_prio->runq_lock);
+	mutex_lock(&spu_prio->runq_lock);
 	__spu_add_to_rq(ctx);
-	spin_unlock(&spu_prio->runq_lock);
+	mutex_unlock(&spu_prio->runq_lock);
 }
 
 static void __spu_del_from_rq(struct spu_context *ctx)
@@ -535,9 +535,9 @@ void spu_del_from_rq(struct spu_context *ctx)
 {
-	spin_lock(&spu_prio->runq_lock);
+	mutex_lock(&spu_prio->runq_lock);
 	__spu_del_from_rq(ctx);
-	spin_unlock(&spu_prio->runq_lock);
+	mutex_unlock(&spu_prio->runq_lock);
 }
 
 static void spu_prio_wait(struct spu_context *ctx)
@@ -551,18 +551,18 @@ static void spu_prio_wait(struct spu_context *ctx)
 	 */
 	BUG_ON(!(ctx->flags & SPU_CREATE_NOSCHED));
 
-	spin_lock(&spu_prio->runq_lock);
+	mutex_lock(&spu_prio->runq_lock);
 	prepare_to_wait_exclusive(&ctx->stop_wq, &wait, TASK_INTERRUPTIBLE);
 	if (!signal_pending(current)) {
 		__spu_add_to_rq(ctx);
-		spin_unlock(&spu_prio->runq_lock);
+		mutex_unlock(&spu_prio->runq_lock);
 		mutex_unlock(&ctx->state_mutex);
 		schedule();
 		mutex_lock(&ctx->state_mutex);
-		spin_lock(&spu_prio->runq_lock);
+		mutex_lock(&spu_prio->runq_lock);
 		__spu_del_from_rq(ctx);
 	}
-	spin_unlock(&spu_prio->runq_lock);
+	mutex_unlock(&spu_prio->runq_lock);
 	__set_current_state(TASK_RUNNING);
 	remove_wait_queue(&ctx->stop_wq, &wait);
 }
@@ -838,7 +838,7 @@ static struct spu_context *grab_runnable_context(int prio, int node)
 	struct spu_context *ctx;
 	int best;
 
-	spin_lock(&spu_prio->runq_lock);
+	mutex_lock(&spu_prio->runq_lock);
 	best = find_first_bit(spu_prio->bitmap, prio);
 	while (best < prio) {
 		struct list_head *rq = &spu_prio->runq[best];
@@ -854,7 +854,7 @@ static struct spu_context *grab_runnable_context(int prio, int node)
 	}
 	ctx = NULL;
 found:
-	spin_unlock(&spu_prio->runq_lock);
+	mutex_unlock(&spu_prio->runq_lock);
 	return ctx;
 }
@@ -1122,7 +1122,7 @@ int __init spu_sched_init(void)
 		INIT_LIST_HEAD(&spu_prio->runq[i]);
 		__clear_bit(i, spu_prio->bitmap);
 	}
-	spin_lock_init(&spu_prio->runq_lock);
+	mutex_init(&spu_prio->runq_lock);
 
 	setup_timer(&spusched_timer, spusched_wake, 0);
 	setup_timer(&spuloadavg_timer, spuloadavg_wake, 0);