powerpc/spufs: Fix possible scheduling of a context to multiple SPEs

Submitter Andre Detsch
Date Sept. 5, 2008, 7:16 a.m.
Message ID <200809050416.27831.adetsch@br.ibm.com>
Permalink /patch/185/
State Accepted
Commit b2e601d14deb2083e2a537b47869ab3895d23a28
Delegated to: Jeremy Kerr

Comments

Andre Detsch - Sept. 5, 2008, 7:16 a.m.
We currently have a race when scheduling a context to an SPE:
after we have found a runnable context in spusched_tick, the same
context may have been scheduled by spu_activate().

This may result in a panic if we try to unschedule a context that has
been freed in the meantime.

This change makes spu_schedule() bail out if the context has already
been scheduled (i.e. it is no longer in SPU_STATE_SAVED), so we don't
end up scheduling it twice.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>

Patch

Index: spufs/arch/powerpc/platforms/cell/spufs/sched.c
===================================================================
--- spufs.orig/arch/powerpc/platforms/cell/spufs/sched.c
+++ spufs/arch/powerpc/platforms/cell/spufs/sched.c
@@ -727,7 +727,8 @@  static void spu_schedule(struct spu *spu
 	/* not a candidate for interruptible because it's called either
 	   from the scheduler thread or from spu_deactivate */
 	mutex_lock(&ctx->state_mutex);
-	__spu_schedule(spu, ctx);
+	if (ctx->state == SPU_STATE_SAVED)
+		__spu_schedule(spu, ctx);
 	spu_release(ctx);
 }
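
For readers less familiar with the spufs scheduler, the following is a
minimal user-space sketch of the check-under-lock pattern the patch
applies. The struct context, schedule_once() and the thread names are
hypothetical stand-ins for illustration only, not the real spufs types:

/*
 * Two paths (a scheduler tick and an activate call) may both try to
 * schedule the same context; re-checking the state after taking the
 * lock ensures only one of them proceeds.
 */
#include <pthread.h>
#include <stdio.h>

enum ctx_state { STATE_SAVED, STATE_RUNNABLE };

struct context {
	pthread_mutex_t state_mutex;
	enum ctx_state state;
};

/* Caller must hold ctx->state_mutex; marks the context as scheduled. */
static void do_schedule(struct context *ctx, const char *who)
{
	ctx->state = STATE_RUNNABLE;
	printf("%s scheduled the context\n", who);
}

/*
 * Mirrors the patched spu_schedule(): take the lock, then re-check
 * that the context is still saved before scheduling it. Without the
 * check, a context already scheduled by the other path would be
 * scheduled a second time.
 */
static void schedule_once(struct context *ctx, const char *who)
{
	pthread_mutex_lock(&ctx->state_mutex);
	if (ctx->state == STATE_SAVED)
		do_schedule(ctx, who);
	else
		printf("%s: already scheduled, backing off\n", who);
	pthread_mutex_unlock(&ctx->state_mutex);
}

static void *tick_thread(void *arg)
{
	schedule_once(arg, "spusched_tick");
	return NULL;
}

int main(void)
{
	struct context ctx = {
		.state_mutex = PTHREAD_MUTEX_INITIALIZER,
		.state = STATE_SAVED,
	};
	pthread_t t;

	/* Race the two scheduling paths; exactly one wins. */
	pthread_create(&t, NULL, tick_thread, &ctx);
	schedule_once(&ctx, "spu_activate");
	pthread_join(t, NULL);
	return 0;
}

The same pattern appears throughout the kernel: any condition tested
before taking a lock must be re-tested once the lock is held, since
the state may have changed in between.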