From patchwork Wed Nov 27 20:18:11 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1201791
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [xenial:linux-azure][PATCH 06/15] blk-mq-sched: remove unused 'can_block' arg from blk_mq_sched_insert_request
Date: Wed, 27 Nov 2019 17:18:11 -0300
Message-Id: <20191127201820.32174-7-marcelo.cerri@canonical.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191127201820.32174-1-marcelo.cerri@canonical.com>
References: <20191127201820.32174-1-marcelo.cerri@canonical.com>

From: Mike Snitzer

BugLink: https://bugs.launchpad.net/bugs/1848739

After commit:

923218f6166a ("blk-mq: don't allocate driver tag upfront for flush rq")

we no longer use the 'can_block' argument in
blk_mq_sched_insert_request(). Kill it.

Signed-off-by: Mike Snitzer

Added actual commit message as to why it's being removed.

Signed-off-by: Jens Axboe
(cherry picked from commit 9e97d2951a7e6ee6e204f87f6bda4ff754a8cede)
[marcelo.cerri@canonical.com: fixed conflict in blk_mq_requeue_work()
 because the commit aef1897cd36d ("blk-mq: insert rq with DONTPREP to
 hctx dispatch list when requeue") was already applied]
Signed-off-by: Marcelo Henrique Cerri
---
 block/blk-exec.c     |  2 +-
 block/blk-mq-sched.c |  2 +-
 block/blk-mq-sched.h |  2 +-
 block/blk-mq.c       | 16 +++++++---------
 4 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/block/blk-exec.c b/block/blk-exec.c
index 5c0f3dc446dc..f7b292f12449 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -61,7 +61,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 	 * be reused after dying flag is set
 	 */
 	if (q->mq_ops) {
-		blk_mq_sched_insert_request(rq, at_head, true, false, false);
+		blk_mq_sched_insert_request(rq, at_head, true, false);
 		return;
 	}
 
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index fc64558241c9..f3380331e5f3 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -429,7 +429,7 @@ void blk_mq_sched_restart(struct blk_mq_hw_ctx *const hctx)
 }
 
 void blk_mq_sched_insert_request(struct request *rq, bool at_head,
-				 bool run_queue, bool async, bool can_block)
+				 bool run_queue, bool async)
 {
 	struct request_queue *q = rq->q;
 	struct elevator_queue *e = q->elevator;
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index ba1d1418a96d..1e9c9018ace1 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -18,7 +18,7 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq);
 void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
 
 void blk_mq_sched_insert_request(struct request *rq, bool at_head,
-				 bool run_queue, bool async, bool can_block);
+				 bool run_queue, bool async);
 void blk_mq_sched_insert_requests(struct request_queue *q,
 				  struct blk_mq_ctx *ctx,
 				  struct list_head *list, bool run_queue_async);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 90050e6ac9bd..9abc5cbb58f1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -733,13 +733,13 @@ static void blk_mq_requeue_work(struct work_struct *work)
 		if (rq->rq_flags & RQF_DONTPREP)
 			blk_mq_request_bypass_insert(rq, false);
 		else
-			blk_mq_sched_insert_request(rq, true, false, false, true);
+			blk_mq_sched_insert_request(rq, true, false, false);
 	}
 
 	while (!list_empty(&rq_list)) {
 		rq = list_entry(rq_list.next, struct request, queuelist);
 		list_del_init(&rq->queuelist);
-		blk_mq_sched_insert_request(rq, false, false, false, true);
+		blk_mq_sched_insert_request(rq, false, false, false);
 	}
 
 	blk_mq_run_hw_queues(q, false);
@@ -1727,13 +1727,11 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
-static void __blk_mq_fallback_to_insert(struct blk_mq_hw_ctx *hctx,
-					struct request *rq,
+static void __blk_mq_fallback_to_insert(struct request *rq,
 					bool run_queue, bool bypass_insert)
 {
 	if (!bypass_insert)
-		blk_mq_sched_insert_request(rq, false, run_queue, false,
-					    hctx->flags & BLK_MQ_F_BLOCKING);
+		blk_mq_sched_insert_request(rq, false, run_queue, false);
 	else
 		blk_mq_request_bypass_insert(rq, run_queue);
 }
@@ -1765,7 +1763,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	return __blk_mq_issue_directly(hctx, rq, cookie);
 insert:
-	__blk_mq_fallback_to_insert(hctx, rq, run_queue, bypass_insert);
+	__blk_mq_fallback_to_insert(rq, run_queue, bypass_insert);
 	if (bypass_insert)
 		return BLK_STS_RESOURCE;
 
@@ -1784,7 +1782,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
 	if (ret == BLK_STS_RESOURCE)
-		__blk_mq_fallback_to_insert(hctx, rq, true, false);
+		__blk_mq_fallback_to_insert(rq, true, false);
 	else if (ret != BLK_STS_OK)
 		blk_mq_end_request(rq, ret);
 
@@ -1914,7 +1912,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	} else if (q->elevator) {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
-		blk_mq_sched_insert_request(rq, false, true, true, true);
+		blk_mq_sched_insert_request(rq, false, true, true);
 	} else {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
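
Note for reviewers (not part of the patch): the change boils down to the narrowed prototype sketched below. This is a hypothetical, standalone C illustration; only the before/after signatures are taken from the hunks above, while the struct, the stub body and the example caller are placeholders so it compiles outside the kernel.

#include <stdbool.h>
#include <stdio.h>

struct request { int tag; };	/* placeholder, not the kernel struct */

/*
 * Old prototype (removed by this patch):
 *   void blk_mq_sched_insert_request(struct request *rq, bool at_head,
 *                                    bool run_queue, bool async, bool can_block);
 *
 * New prototype kept by this patch; the body is a stand-in for the
 * real scheduler insert path.
 */
static void blk_mq_sched_insert_request(struct request *rq, bool at_head,
					bool run_queue, bool async)
{
	printf("insert rq %d: at_head=%d run_queue=%d async=%d\n",
	       rq->tag, at_head, run_queue, async);
}

int main(void)
{
	struct request rq = { .tag = 1 };

	/* Callers simply drop the trailing 'can_block' argument. */
	blk_mq_sched_insert_request(&rq, false, true, true);
	return 0;
}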