From patchwork Wed Nov 27 20:18:12 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1201792
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [xenial:linux-azure][PATCH 07/15] blk-mq: don't dispatch request in
 blk_mq_request_direct_issue if queue is busy
Date: Wed, 27 Nov 2019 17:18:12 -0300
Message-Id: <20191127201820.32174-8-marcelo.cerri@canonical.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191127201820.32174-1-marcelo.cerri@canonical.com>
References: <20191127201820.32174-1-marcelo.cerri@canonical.com>
From: Ming Lei

BugLink: https://bugs.launchpad.net/bugs/1848739

If blk_mq_request_direct_issue() is reached while the queue is busy, we
don't want to dispatch this request into hctx->dispatch_list. Instead,
we need to return the queue-busy status to the caller so that the
caller can deal with it properly.

Fixes: 396eaf21ee ("blk-mq: improve DM's blk-mq IO merging via blk_insert_cloned_request feedback")
Reported-by: Laurence Oberman
Reviewed-by: Mike Snitzer
Signed-off-by: Ming Lei
Signed-off-by: Jens Axboe
(cherry picked from commit 23d4ee19e789ae3dce3e04bd24e3d1537965475f)
Signed-off-by: Marcelo Henrique Cerri
---
 block/blk-mq.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9abc5cbb58f1..d4945ffaf034 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1727,15 +1727,6 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
-static void __blk_mq_fallback_to_insert(struct request *rq,
-					bool run_queue, bool bypass_insert)
-{
-	if (!bypass_insert)
-		blk_mq_sched_insert_request(rq, false, run_queue, false);
-	else
-		blk_mq_request_bypass_insert(rq, run_queue);
-}
-
 static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 						struct request *rq,
 						blk_qc_t *cookie,
@@ -1744,9 +1735,16 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	struct request_queue *q = rq->q;
 	bool run_queue = true;
 
-	/* RCU or SRCU read lock is needed before checking quiesced flag */
+	/*
+	 * RCU or SRCU read lock is needed before checking quiesced flag.
+	 *
+	 * When queue is stopped or quiesced, ignore 'bypass_insert' from
+	 * blk_mq_request_direct_issue(), and return BLK_STS_OK to caller,
+	 * and avoid driver to try to dispatch again.
+	 */
 	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
 		run_queue = false;
+		bypass_insert = false;
 		goto insert;
 	}
 
@@ -1763,10 +1761,10 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 
 	return __blk_mq_issue_directly(hctx, rq, cookie);
 insert:
-	__blk_mq_fallback_to_insert(rq, run_queue, bypass_insert);
 	if (bypass_insert)
 		return BLK_STS_RESOURCE;
 
+	blk_mq_sched_insert_request(rq, false, run_queue, false);
 	return BLK_STS_OK;
 }
 
@@ -1782,7 +1780,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 
 	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
 	if (ret == BLK_STS_RESOURCE)
-		__blk_mq_fallback_to_insert(rq, true, false);
+		blk_mq_sched_insert_request(rq, false, true, false);
 	else if (ret != BLK_STS_OK)
 		blk_mq_end_request(rq, ret);
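
For reviewers unfamiliar with the direct-issue path, below is a small
standalone C model of the control flow that __blk_mq_try_issue_directly()
ends up with after this patch. It is not the patched kernel code:
blk_status_t, struct request and every helper here are simplified
stand-ins chosen for illustration, but the bypass_insert handling mirrors
the hunks above.

/*
 * Standalone model of the post-patch __blk_mq_try_issue_directly()
 * control flow. Not kernel code: all names below are stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { BLK_STS_OK, BLK_STS_RESOURCE } blk_status_t;

struct request {
	bool queue_quiesced;	/* stands in for stopped/quiesced checks */
	bool driver_busy;	/* stands in for tag/budget exhaustion */
};

static bool queue_stopped_or_quiesced(const struct request *rq)
{
	return rq->queue_quiesced;
}

static bool resources_available(const struct request *rq)
{
	return !rq->driver_busy;
}

static blk_status_t issue_to_driver(struct request *rq)
{
	(void)rq;
	return BLK_STS_OK;	/* pretend the driver accepted the request */
}

static void insert_via_scheduler(struct request *rq, bool run_queue)
{
	(void)rq;
	printf("inserted via I/O scheduler, run_queue=%d\n", run_queue);
}

static blk_status_t try_issue_directly(struct request *rq, bool bypass_insert)
{
	bool run_queue = true;

	if (queue_stopped_or_quiesced(rq)) {
		/*
		 * Stopped/quiesced is not "busy": force the plain insert
		 * path so the caller is not told to retry dispatch.
		 */
		run_queue = false;
		bypass_insert = false;
		goto insert;
	}

	if (!resources_available(rq))
		goto insert;

	return issue_to_driver(rq);

insert:
	if (bypass_insert)
		return BLK_STS_RESOURCE;	/* report "busy" to the caller */

	insert_via_scheduler(rq, run_queue);
	return BLK_STS_OK;
}

int main(void)
{
	struct request busy = { .queue_quiesced = false, .driver_busy = true };

	/*
	 * A direct-issue caller (bypass_insert == true) now gets the busy
	 * status back instead of the request being queued internally.
	 */
	if (try_issue_directly(&busy, true) == BLK_STS_RESOURCE)
		printf("caller requeues the request itself\n");
	return 0;
}

The point of the fix is visible in the quiesced branch: by clearing
bypass_insert there, a stopped or quiesced queue falls back to a normal
insert and returns BLK_STS_OK, so only a genuinely busy queue is reported
back as BLK_STS_RESOURCE for the caller (e.g. DM multipath via
blk_insert_cloned_request(), per the Fixes tag) to requeue.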