From patchwork Wed Nov 27 20:18:17 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1201797
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [xenial:linux-azure][PATCH 12/15] blk-mq: issue directly if hw queue
 isn't busy in case of 'none'
Date: Wed, 27 Nov 2019 17:18:17 -0300
Message-Id: <20191127201820.32174-13-marcelo.cerri@canonical.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191127201820.32174-1-marcelo.cerri@canonical.com>
References: <20191127201820.32174-1-marcelo.cerri@canonical.com>
From: Ming Lei

BugLink: https://bugs.launchpad.net/bugs/1848739

In the case of the 'none' io scheduler, when the hw queue isn't busy, there
is no need to enqueue a request to the sw queue and dequeue it again: the
request can be submitted to the hw queue right away at no extra cost. In
that situation there also shouldn't be many requests sitting in the sw
queue, so the effect on IO merging is not a concern.

There are still some single hw queue SCSI HBAs (HPSA, megaraid_sas, ...)
that may be connected to high performance devices, so 'none' is often
required to obtain good performance.

This patch improves IOPS and decreases CPU utilization on megaraid_sas,
per Kashyap's test.

Cc: Kashyap Desai
Cc: Laurence Oberman
Cc: Omar Sandoval
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Hannes Reinecke
Reported-by: Kashyap Desai
Tested-by: Kashyap Desai
Signed-off-by: Ming Lei
Signed-off-by: Jens Axboe
(cherry picked from commit 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8)
Signed-off-by: Marcelo Henrique Cerri
---
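Note below the fold (ignored by git-am, not part of the upstream commit
message): the heart of the change is the bypass in
blk_mq_sched_insert_requests() plus the dispatch_busy bookkeeping in
__blk_mq_issue_directly(). The toy program below is a minimal userspace
sketch of that policy, under simplified assumptions — fake_request,
fake_hctx, and fake_queue_rq are illustrative stand-ins, not kernel APIs:
requests are issued directly until the hw queue reports it is out of
resources, and whatever remains is left for the normal sw-queue insert
path.

    /*
     * Minimal userspace model of the "issue directly if hw queue isn't
     * busy" policy in this patch. All names here are illustrative
     * stand-ins, not kernel APIs.
     */
    #include <stdbool.h>
    #include <stdio.h>

    enum fake_status { FAKE_STS_OK, FAKE_STS_RESOURCE };

    struct fake_request { int tag; };

    struct fake_hctx {
            bool dispatch_busy;     /* models hctx->dispatch_busy */
            int free_slots;         /* pretend hw queue depth */
    };

    /* Stand-in for q->mq_ops->queue_rq(): fails once the hw queue fills. */
    static enum fake_status fake_queue_rq(struct fake_hctx *hctx,
                                          struct fake_request *rq)
    {
            if (hctx->free_slots == 0) {
                    /* like blk_mq_update_dispatch_busy(hctx, true) */
                    hctx->dispatch_busy = true;
                    return FAKE_STS_RESOURCE;
            }
            hctx->free_slots--;
            /* like blk_mq_update_dispatch_busy(hctx, false) */
            hctx->dispatch_busy = false;
            printf("issued rq %d directly\n", rq->tag);
            return FAKE_STS_OK;
    }

    /*
     * Models blk_mq_try_issue_list_directly(): issue from the head of
     * the list until a request is not accepted, then leave the rest
     * for the normal sw-queue path.
     */
    static int try_issue_list_directly(struct fake_hctx *hctx,
                                       struct fake_request *list, int n)
    {
            int i;

            for (i = 0; i < n; i++)
                    if (fake_queue_rq(hctx, &list[i]) != FAKE_STS_OK)
                            break;
            return i;       /* number issued; list[i..n) is re-queued */
    }

    int main(void)
    {
            struct fake_hctx hctx = { .dispatch_busy = false, .free_slots = 3 };
            struct fake_request reqs[5] = { {1}, {2}, {3}, {4}, {5} };
            bool have_elevator = false;     /* 'none' scheduler */
            int issued = 0;

            /* The new fast path: only with no elevator and a non-busy queue. */
            if (!have_elevator && !hctx.dispatch_busy)
                    issued = try_issue_list_directly(&hctx, reqs, 5);

            printf("%d issued directly, %d left for the sw queue\n",
                   issued, 5 - issued);
            return 0;
    }

The sketch mirrors the design choice in the patch: direct issue is only
attempted while dispatch_busy is clear, and a resource failure both
requeues the request and marks the queue busy, so later requests fall
back to the sw queue instead of hammering the driver.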
 block/blk-mq-sched.c | 13 ++++++++++++-
 block/blk-mq.c       | 23 ++++++++++++++++++++++-
 block/blk-mq.h       |  2 ++
 3 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 1518c794a78c..45d8e861fe55 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -465,8 +465,19 @@ void blk_mq_sched_insert_requests(struct request_queue *q,
 
 	if (e && e->type->ops.mq.insert_requests)
 		e->type->ops.mq.insert_requests(hctx, list, false);
-	else
+	else {
+		/*
+		 * try to issue requests directly if the hw queue isn't
+		 * busy in case of 'none' scheduler, and this way may save
+		 * us one extra enqueue & dequeue to sw queue.
+		 */
+		if (!hctx->dispatch_busy && !e && !run_queue_async) {
+			blk_mq_try_issue_list_directly(hctx, list);
+			if (list_empty(list))
+				return;
+		}
 		blk_mq_insert_requests(hctx, ctx, list);
+	}
 
 	blk_mq_run_hw_queue(hctx, run_queue_async);
 }
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 691ed5f8f6d9..ea3feeab1fd0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1768,13 +1768,16 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 	ret = q->mq_ops->queue_rq(hctx, &bd);
 	switch (ret) {
 	case BLK_STS_OK:
+		blk_mq_update_dispatch_busy(hctx, false);
 		*cookie = new_cookie;
 		break;
 	case BLK_STS_RESOURCE:
 	case BLK_STS_DEV_RESOURCE:
+		blk_mq_update_dispatch_busy(hctx, true);
 		__blk_mq_requeue_request(rq);
 		break;
 	default:
+		blk_mq_update_dispatch_busy(hctx, false);
 		*cookie = BLK_QC_T_NONE;
 		break;
 	}
@@ -1857,6 +1860,23 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq)
 	return ret;
 }
 
+void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+		struct list_head *list)
+{
+	while (!list_empty(list)) {
+		blk_status_t ret;
+		struct request *rq = list_first_entry(list, struct request,
+				queuelist);
+
+		list_del_init(&rq->queuelist);
+		ret = blk_mq_request_issue_directly(rq);
+		if (ret != BLK_STS_OK) {
+			list_add(&rq->queuelist, list);
+			break;
+		}
+	}
+}
+
 static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 {
 	const int is_sync = op_is_sync(bio->bi_opf);
@@ -1958,7 +1978,8 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 			blk_mq_try_issue_directly(data.hctx, same_queue_rq,
 					&cookie);
 		}
-	} else if (q->nr_hw_queues > 1 && is_sync) {
+	} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
+			!data.hctx->dispatch_busy)) {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
 		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index c11c627ebd6d..b78cdcad7d7f 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -62,6 +62,8 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 
 /* Used by blk_insert_cloned_request() to issue request directly */
 blk_status_t blk_mq_request_issue_directly(struct request *rq);
+void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+		struct list_head *list);
 
 /*
  * CPU -> queue mappings