powerpc/spufs: Fix possible scheduling of a context to multiple SPEs

Submitted by Andre Detsch on Sept. 5, 2008, 7:16 a.m.


Message ID 200809050416.27831.adetsch@br.ibm.com
State Accepted
Commit b2e601d14deb2083e2a537b47869ab3895d23a28
Delegated to: Jeremy Kerr

Commit Message

We currently have a race when scheduling a context to an SPE:
after spusched_tick has found a runnable context, the same context
may already have been scheduled by spu_activate().

This may result in a panic if we try to unschedule a context that has
been freed in the meantime.

This change makes spu_schedule() return early if the context is no
longer in SPU_STATE_SAVED (i.e. it has already been scheduled), so we
don't end up scheduling it twice.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
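The guard the patch adds can be sketched in userspace. This is a
simplified, hypothetical model (enum ctx_state, struct spu_ctx and
spu_schedule_sketch() are stand-ins invented for illustration, not the
kernel's struct spu_context or __spu_schedule()); it only shows the
check-the-state-before-scheduling pattern under the assumption that a
racing spu_activate() has already moved the context out of the saved
state:

```c
#include <stdio.h>

/* Hypothetical stand-ins for the spufs context states; the real code
 * in arch/powerpc/platforms/cell/spufs uses SPU_STATE_SAVED and
 * SPU_STATE_RUNNABLE on struct spu_context. */
enum ctx_state { CTX_STATE_SAVED, CTX_STATE_RUNNABLE };

struct spu_ctx {
	enum ctx_state state;
};

/* Sketch of the fixed spu_schedule(): only bind the context if it is
 * still saved. If another path (spu_activate() in the kernel) won the
 * race and already scheduled it, do nothing rather than schedule it a
 * second time. Returns 1 if we scheduled it, 0 if we lost the race. */
static int spu_schedule_sketch(struct spu_ctx *ctx)
{
	if (ctx->state != CTX_STATE_SAVED)
		return 0;	/* already scheduled elsewhere */
	ctx->state = CTX_STATE_RUNNABLE; /* stands in for __spu_schedule() */
	return 1;
}
```

With this guard, the second of two racing schedulers sees the context
already in the runnable state and backs off, which is exactly what the
one-line `if (ctx->state == SPU_STATE_SAVED)` check in the patch
achieves.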


Index: spufs/arch/powerpc/platforms/cell/spufs/sched.c
--- spufs.orig/arch/powerpc/platforms/cell/spufs/sched.c
+++ spufs/arch/powerpc/platforms/cell/spufs/sched.c
@@ -727,7 +727,8 @@  static void spu_schedule(struct spu *spu
 	/* not a candidate for interruptible because it's called either
 	   from the scheduler thread or from spu_deactivate */
-	__spu_schedule(spu, ctx);
+	if (ctx->state == SPU_STATE_SAVED)
+		__spu_schedule(spu, ctx);