{"id":249,"url":"http://patchwork.ozlabs.org/api/1.1/patches/249/?format=json","web_url":"http://patchwork.ozlabs.org/project/cbe-oss-dev/patch/200809112037.41236.adetsch@br.ibm.com/","project":{"id":1,"url":"http://patchwork.ozlabs.org/api/1.1/projects/1/?format=json","name":"Cell Broadband Engine development","link_name":"cbe-oss-dev","list_id":"cbe-oss-dev.ozlabs.org","list_email":"cbe-oss-dev@ozlabs.org","web_url":null,"scm_url":null,"webscm_url":null},"msgid":"<200809112037.41236.adetsch@br.ibm.com>","date":"2008-09-11T23:37:41","name":"powerpc/spufs: Change cbe_spu_info mutex_lock to spin_lock","commit_ref":null,"pull_url":null,"state":"superseded","archived":false,"hash":"79a11a6ed47b9a1f482901fa12f6e42f7a391d8c","submitter":{"id":93,"url":"http://patchwork.ozlabs.org/api/1.1/people/93/?format=json","name":"Andre Detsch","email":"adetsch@br.ibm.com"},"delegate":{"id":1,"url":"http://patchwork.ozlabs.org/api/1.1/users/1/?format=json","username":"jk","first_name":"Jeremy","last_name":"Kerr","email":"jk@ozlabs.org"},"mbox":"http://patchwork.ozlabs.org/project/cbe-oss-dev/patch/200809112037.41236.adetsch@br.ibm.com/mbox/","series":[],"comments":"http://patchwork.ozlabs.org/api/patches/249/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/249/checks/","tags":{},"headers":{"Return-Path":"<cbe-oss-dev-bounces+patchwork=ozlabs.org@ozlabs.org>","X-Original-To":["patchwork@ozlabs.org","cbe-oss-dev@ozlabs.org"],"Delivered-To":["patchwork@ozlabs.org","cbe-oss-dev@ozlabs.org"],"Received":["from ozlabs.org (localhost [127.0.0.1])\n\tby ozlabs.org (Postfix) with ESMTP id 8A824DE1CA\n\tfor <patchwork@ozlabs.org>; Fri, 12 Sep 2008 09:38:57 +1000 (EST)","from igw1.br.ibm.com (igw1.br.ibm.com [32.104.18.24])\n\t(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))\n\t(Client did not present a certificate)\n\tby ozlabs.org (Postfix) with ESMTPS id 77675DDF69;\n\tFri, 12 Sep 2008 09:38:39 +1000 (EST)","from mailhub3.br.ibm.com (mailhub3 
[9.18.232.110])\n\tby igw1.br.ibm.com (Postfix) with ESMTP id C2B7A32C15C;\n\tThu, 11 Sep 2008 20:08:03 -0300 (BRT)","from d24av01.br.ibm.com (d24av01.br.ibm.com [9.18.232.46])\n\tby mailhub3.br.ibm.com (8.13.8/8.13.8/NCO v8.7) with ESMTP id\n\tm8BNcUa92375898; Thu, 11 Sep 2008 20:38:35 -0300","from d24av01.br.ibm.com (loopback [127.0.0.1])\n\tby d24av01.br.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id\n\tm8BNcNel023592; Thu, 11 Sep 2008 20:38:23 -0300","from [9.8.10.86] ([9.8.10.86])\n\tby d24av01.br.ibm.com (8.12.11.20060308/8.12.11) with ESMTP id\n\tm8BNcMfu023587; Thu, 11 Sep 2008 20:38:22 -0300"],"From":"Andre Detsch <adetsch@br.ibm.com>","To":"cbe-oss-dev@ozlabs.org","Date":"Thu, 11 Sep 2008 20:37:41 -0300","User-Agent":"KMail/1.9.6","References":"<200809111955.28780.adetsch@br.ibm.com>","In-Reply-To":"<200809111955.28780.adetsch@br.ibm.com>","MIME-Version":"1.0","Content-Disposition":"inline","Message-Id":"<200809112037.41236.adetsch@br.ibm.com>","Cc":"LukeBrowning@us.ibm.com, Jeremy Kerr <jk@ozlabs.org>","Subject":"[Cbe-oss-dev] [PATCH 01/11] powerpc/spufs: Change cbe_spu_info\n\tmutex_lock to spin_lock","X-BeenThere":"cbe-oss-dev@ozlabs.org","X-Mailman-Version":"2.1.11","Precedence":"list","List-Id":"Discussion about Open Source Software for the Cell Broadband Engine\n\t<cbe-oss-dev.ozlabs.org>","List-Unsubscribe":"<https://ozlabs.org/mailman/options/cbe-oss-dev>,\n\t<mailto:cbe-oss-dev-request@ozlabs.org?subject=unsubscribe>","List-Archive":"<http://ozlabs.org/pipermail/cbe-oss-dev>","List-Post":"<mailto:cbe-oss-dev@ozlabs.org>","List-Help":"<mailto:cbe-oss-dev-request@ozlabs.org?subject=help>","List-Subscribe":"<https://ozlabs.org/mailman/listinfo/cbe-oss-dev>,\n\t<mailto:cbe-oss-dev-request@ozlabs.org?subject=subscribe>","Content-Type":"text/plain; charset=\"us-ascii\"","Content-Transfer-Encoding":"7bit","Sender":"cbe-oss-dev-bounces+patchwork=ozlabs.org@ozlabs.org","Errors-To":"cbe-oss-dev-bounces+patchwork=ozlabs.org@ozlabs.org"},"content":"This 
structure groups the physical spus. The list_mutex must be changed\nto a spin lock, because the runq_lock is a spin_lock.  You can't nest\nmutexes under spin_locks.  The lock for the cbe_spu_info[] is taken\nunder the runq_lock, as many spus may need to be allocated to schedule a gang.\n\nChange spu_bind_context() and spu_unbind_context() so that they are not\ncalled under the new spin lock, as that would cause a deadlock if they\nblocked on higher level allocations (mmap) that are protected by mutexes.\n\nSigned-off-by: Luke Browning <lukebrowning@us.ibm.com>\nSigned-off-by: Andre Detsch <adetsch@br.ibm.com>","diff":"diff --git a/arch/powerpc/include/asm/spu.h b/arch/powerpc/include/asm/spu.h\nindex 8b2eb04..9d799b6 100644\n--- a/arch/powerpc/include/asm/spu.h\n+++ b/arch/powerpc/include/asm/spu.h\n@@ -187,7 +187,7 @@ struct spu {\n };\n \n struct cbe_spu_info {\n-\tstruct mutex list_mutex;\n+\tspinlock_t list_lock;\n \tstruct list_head spus;\n \tint n_spus;\n \tint nr_active;\ndiff --git a/arch/powerpc/platforms/cell/spu_base.c b/arch/powerpc/platforms/cell/spu_base.c\nindex a5bdb89..b1a97a1 100644\n--- a/arch/powerpc/platforms/cell/spu_base.c\n+++ b/arch/powerpc/platforms/cell/spu_base.c\n@@ -650,10 +650,10 @@ static int __init create_spu(void *data)\n \tif (ret)\n \t\tgoto out_free_irqs;\n \n-\tmutex_lock(&cbe_spu_info[spu->node].list_mutex);\n+\tspin_lock(&cbe_spu_info[spu->node].list_lock);\n \tlist_add(&spu->cbe_list, &cbe_spu_info[spu->node].spus);\n \tcbe_spu_info[spu->node].n_spus++;\n-\tmutex_unlock(&cbe_spu_info[spu->node].list_mutex);\n+\tspin_unlock(&cbe_spu_info[spu->node].list_lock);\n \n \tmutex_lock(&spu_full_list_mutex);\n \tspin_lock_irqsave(&spu_full_list_lock, flags);\n@@ -732,7 +732,7 @@ static int __init init_spu_base(void)\n \tint i, ret = 0;\n \n \tfor (i = 0; i < MAX_NUMNODES; i++) {\n-\t\tmutex_init(&cbe_spu_info[i].list_mutex);\n+\t\tspin_lock_init(&cbe_spu_info[i].list_lock);\n \t\tINIT_LIST_HEAD(&cbe_spu_info[i].spus);\n \t}\n \ndiff 
--git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c\nindex 897c740..386aa0a 100644\n--- a/arch/powerpc/platforms/cell/spufs/sched.c\n+++ b/arch/powerpc/platforms/cell/spufs/sched.c\n@@ -153,11 +153,11 @@ void spu_update_sched_info(struct spu_context *ctx)\n \t\tnode = ctx->spu->node;\n \n \t\t/*\n-\t\t * Take list_mutex to sync with find_victim().\n+\t\t * Take list_lock to sync with find_victim().\n \t\t */\n-\t\tmutex_lock(&cbe_spu_info[node].list_mutex);\n+\t\tspin_lock(&cbe_spu_info[node].list_lock);\n \t\t__spu_update_sched_info(ctx);\n-\t\tmutex_unlock(&cbe_spu_info[node].list_mutex);\n+\t\tspin_unlock(&cbe_spu_info[node].list_lock);\n \t} else {\n \t\t__spu_update_sched_info(ctx);\n \t}\n@@ -179,9 +179,9 @@ static int node_allowed(struct spu_context *ctx, int node)\n {\n \tint rval;\n \n-\tspin_lock(&spu_prio->runq_lock);\n+\tspin_lock(&cbe_spu_info[node].list_lock);\n \trval = __node_allowed(ctx, node);\n-\tspin_unlock(&spu_prio->runq_lock);\n+\tspin_unlock(&cbe_spu_info[node].list_lock);\n \n \treturn rval;\n }\n@@ -199,7 +199,7 @@ void do_notify_spus_active(void)\n \tfor_each_online_node(node) {\n \t\tstruct spu *spu;\n \n-\t\tmutex_lock(&cbe_spu_info[node].list_mutex);\n+\t\tspin_lock(&cbe_spu_info[node].list_lock);\n \t\tlist_for_each_entry(spu, &cbe_spu_info[node].spus, cbe_list) {\n \t\t\tif (spu->alloc_state != SPU_FREE) {\n \t\t\t\tstruct spu_context *ctx = spu->ctx;\n@@ -209,7 +209,7 @@ void do_notify_spus_active(void)\n \t\t\t\twake_up_all(&ctx->stop_wq);\n \t\t\t}\n \t\t}\n-\t\tmutex_unlock(&cbe_spu_info[node].list_mutex);\n+\t\tspin_unlock(&cbe_spu_info[node].list_lock);\n \t}\n }\n \n@@ -233,7 +233,6 @@ static void spu_bind_context(struct spu *spu, struct spu_context *ctx)\n \tspu_associate_mm(spu, ctx->owner);\n \n \tspin_lock_irq(&spu->register_lock);\n-\tspu->ctx = ctx;\n \tspu->flags = 0;\n \tctx->spu = spu;\n","prefixes":[]}