Patchwork [138/241] dm: fix deadlock with request based dm and queue request_fn recursion

Submitter Herton Ronaldo Krzesinski
Date Dec. 13, 2012, 1:58 p.m.
Message ID <>
Permalink /patch/205988/
State New


-stable review patch.  If anyone has any objections, please let me know.


From: Jens Axboe <>

commit a8c32a5c98943d370ea606a2e7dc04717eb92206 upstream.

Request based dm attempts to re-run the request queue off the
request completion path. If used with a driver that potentially does
end_io from its request_fn, we could deadlock trying to recurse
back into request dispatch. Fix this by punting the request queue
run to kblockd.

Tested to fix a quickly reproducible deadlock in such a scenario.

Acked-by: Alasdair G Kergon <>
Signed-off-by: Jens Axboe <>
Signed-off-by: Herton Ronaldo Krzesinski <>
---
 drivers/md/dm.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)


diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 9ff3019..32370ea 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -754,8 +754,14 @@  static void rq_completed(struct mapped_device *md, int rw, int run_queue)
 	if (!md_in_flight(md))
 		wake_up(&md->wait);
 
+	/*
+	 * Run this off this callpath, as drivers could invoke end_io while
+	 * inside their request_fn (and holding the queue lock). Calling
+	 * back into ->request_fn() could deadlock attempting to grab the
+	 * queue lock again.
+	 */
 	if (run_queue)
-		blk_run_queue(md->queue);
+		blk_run_queue_async(md->queue);
 
 	/*
 	 * dm_put() must be at the end of this function. See the comment above