From patchwork Fri Jan 6 03:52:47 2012
Date: Thu, 5 Jan 2012 19:52:47 -0800
From: Tejun Heo
To: Jens Axboe, Hugh Dickins, Shaohua Li
Cc: Andrew Morton, Stephen Rothwell, linux-next@vger.kernel.org, LKML,
 linux-scsi@vger.kernel.org, linux-ide@vger.kernel.org, x86@kernel.org
Subject: [PATCH block:for-3.3/core] block: disable ELEVATOR_INSERT_SORT_MERGE
Message-ID: <20120106035247.GF6276@google.com>
In-Reply-To: <20120106033012.GE6276@google.com>

Commit 5e84ea3a9c ("block: attempt to merge with existing requests on plug
flush") added support for merging requests on plug flush, and 274193224c
("block: recursive merge requests") added recursive merging.  Because these
merges happen before the request is inserted into the elevator, the usual
elv_latter/former_request() can't be used to locate merge candidates.  The
code instead reuses the bio merging mechanism - the last_merge hint and
rqhash; unfortunately, this means the elevator has no say in which requests
are allowed to merge and which aren't.  For cfq, this resulted in merges
across different cfqq's, which led to crashes as requests jumped between
different cfqq's unexpectedly.

The proper solution would be to improve the merge mechanism so that the
elevator can always be queried for merge candidates, allowing rqhash to be
removed; however, the merge window is already upon us.  Disable
INSERT_SORT_MERGE for now.
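To make the failure mode concrete, below is a minimal, self-contained
userspace sketch of the mechanism described above.  It is not the kernel's
actual code: struct request, the cfqq pointer, rqhash_find() and
elevator_allow_merge() are simplified stand-ins for their kernel namesakes.
It shows how a hash keyed on a request's end sector can hand back a
back-merge candidate without ever asking the elevator whether the merge is
allowed, which is exactly how requests from different cfqq's end up paired.

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct cfqq { int id; };		/* stand-in for a cfq queue */

	struct request {
		unsigned long	start_sector;
		unsigned long	nr_sectors;
		struct cfqq	*cfqq;		/* queue this request belongs to */
		struct request	*hash_next;	/* rqhash chain link */
	};

	static unsigned long rq_end_sector(const struct request *rq)
	{
		return rq->start_sector + rq->nr_sectors;
	}

	#define RQHASH_BUCKETS	64
	static struct request *rqhash[RQHASH_BUCKETS];

	static void rqhash_add(struct request *rq)
	{
		unsigned long key = rq_end_sector(rq) % RQHASH_BUCKETS;

		rq->hash_next = rqhash[key];
		rqhash[key] = rq;
	}

	/* Back-merge lookup: find a queued request ending where @sector starts. */
	static struct request *rqhash_find(unsigned long sector)
	{
		struct request *rq;

		for (rq = rqhash[sector % RQHASH_BUCKETS]; rq; rq = rq->hash_next)
			if (rq_end_sector(rq) == sector)
				return rq;
		return NULL;
	}

	/* The question cfq needs answered: do both requests share one cfqq? */
	static bool elevator_allow_merge(const struct request *a,
					 const struct request *b)
	{
		return a->cfqq == b->cfqq;
	}

	int main(void)
	{
		struct cfqq q1 = { 1 }, q2 = { 2 };
		struct request r1 = { .start_sector = 0, .nr_sectors = 8, .cfqq = &q1 };
		struct request r2 = { .start_sector = 8, .nr_sectors = 8, .cfqq = &q2 };
		struct request *cand;

		rqhash_add(&r1);

		/* The hash happily offers r1 as a back-merge candidate for r2 ... */
		cand = rqhash_find(r2.start_sector);

		/* ... even though the elevator would have rejected the merge. */
		if (cand && !elevator_allow_merge(cand, &r2))
			printf("hash found a candidate the elevator would reject\n");

		return 0;
	}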
For detailed discussion of the bug:

  http://thread.gmane.org/gmane.linux.kernel.next/20064/focus=20159

Signed-off-by: Tejun Heo
Reported-by: Hugh Dickins
Cc: stable@vger.kernel.org
---
 block/blk-core.c |    5 ++++-
 block/elevator.c |    5 +++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 8fbdac7..7db6afa 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2859,11 +2859,14 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 		/*
 		 * rq is already accounted, so use raw insert
+		 *
+		 * FIXME: We want INSERT_SORT_MERGE for non-FLUSH/FUA
+		 * requests but it's currently broken.
 		 */
 		if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA))
 			__elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH);
 		else
-			__elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE);
+			__elv_add_request(q, rq, ELEVATOR_INSERT_SORT);
 
 		depth++;
 	}
 
diff --git a/block/elevator.c b/block/elevator.c
index 99838f4..c32f5bc 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -644,6 +644,11 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 	rq->q = q;
 
+	/*
+	 * FIXME: INSERT_SORT_MERGE is broken and blk_flush_plug_list(),
+	 * the only user, is updated to use INSERT_SORT for now.
+	 */
+
 	if (rq->cmd_flags & REQ_SOFTBARRIER) {
 		/* barriers are scheduling boundary, update end_sector */
 		if (rq->cmd_type == REQ_TYPE_FS ||
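As context for why the one-line change in blk_flush_plug_list() is enough:
in __elv_add_request(), ELEVATOR_INSERT_SORT_MERGE first attempts an
insert-time merge and, when no merge is found, falls through to the plain
ELEVATOR_INSERT_SORT case, so switching the caller to INSERT_SORT simply
skips the broken merge attempt.  A hedged userspace model of just that
control flow follows; attempt_insert_merge() is a simplified stand-in for
elv_attempt_insert_merge(), not the kernel function, and here always
reports that no merge was found.

	#include <stdbool.h>
	#include <stdio.h>

	enum {
		ELEVATOR_INSERT_SORT,
		ELEVATOR_INSERT_SORT_MERGE,
	};

	/* Stand-in for elv_attempt_insert_merge(); pretend no merge is found. */
	static bool attempt_insert_merge(void)
	{
		return false;
	}

	static void elv_add_request(int where)
	{
		switch (where) {
		case ELEVATOR_INSERT_SORT_MERGE:
			/* Try merging into an already-queued request first ... */
			if (attempt_insert_merge()) {
				printf("merged into an existing request\n");
				break;
			}
			/* ... otherwise fall through to a plain sorted insert. */
			/* fall through */
		case ELEVATOR_INSERT_SORT:
			printf("sorted insert into the elevator\n");
			break;
		}
	}

	int main(void)
	{
		elv_add_request(ELEVATOR_INSERT_SORT_MERGE);	/* old behavior */
		elv_add_request(ELEVATOR_INSERT_SORT);		/* behavior after the patch */
		return 0;
	}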