From patchwork Thu Oct 29 03:07:29 2020
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1389815
From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][B][PATCH 1/7] md: add md_submit_discard_bio() for submitting discard bio
Date: Thu, 29 Oct 2020 16:07:29 +1300
Message-Id: <20201029030737.21204-3-matthew.ruffell@canonical.com>
In-Reply-To: <20201029030737.21204-1-matthew.ruffell@canonical.com>
References: <20201029030737.21204-1-matthew.ruffell@canonical.com>

From: Xiao Ni

BugLink: https://bugs.launchpad.net/bugs/1896578

Move this logic from raid0.c to md.c, so that it can also be used in
raid10.c.

Reviewed-by: Coly Li
Reviewed-by: Guoqing Jiang
Signed-off-by: Xiao Ni
Signed-off-by: Song Liu
(backported from commit 2628089b74d5a64bd0bcb5d247a18f78d7b6f4d0)
[mruffell: change submit_bio_noacct() to generic_make_request(),
 bio_clone_blkg_association() to bio_clone_blkcg_association()]
Signed-off-by: Matthew Ruffell
---
 drivers/md/md.c    | 20 ++++++++++++++++++++
 drivers/md/md.h    |  2 ++
 drivers/md/raid0.c | 14 ++------------
 3 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 8b9c58e291b3..a4bb51764ebf 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8216,6 +8216,26 @@ void md_write_end(struct mddev *mddev)
 EXPORT_SYMBOL(md_write_end);
 
+/* This is used by raid0 and raid10 */
+void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+			struct bio *bio, sector_t start, sector_t size)
+{
+	struct bio *discard_bio = NULL;
+
+	if (__blkdev_issue_discard(rdev->bdev, start, size,
+			GFP_NOIO, 0, &discard_bio) || !discard_bio)
+		return;
+
+	bio_chain(discard_bio, bio);
+	bio_clone_blkcg_association(discard_bio, bio);
+	if (mddev->gendisk)
+		trace_block_bio_remap(bdev_get_queue(rdev->bdev),
+				discard_bio, disk_devt(mddev->gendisk),
+				bio->bi_iter.bi_sector);
+	generic_make_request(discard_bio);
+}
+EXPORT_SYMBOL(md_submit_discard_bio);
+
 /* md_allow_write(mddev)
  * Calling this ensures that the array is marked 'active' so that writes
  * may proceed without blocking.
 * It is important to call this before
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 8960b462e0b2..3096fe2f601e 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -677,6 +677,8 @@ extern void md_write_end(struct mddev *mddev);
 extern void md_done_sync(struct mddev *mddev, int blocks, int ok);
 extern void md_error(struct mddev *mddev, struct md_rdev *rdev);
 extern void md_finish_reshape(struct mddev *mddev);
+extern void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+			struct bio *bio, sector_t start, sector_t size);
 extern int mddev_congested(struct mddev *mddev, int bits);
 extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index ad75284f27a9..0125c79b889a 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -531,7 +531,6 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
 
 	for (disk = 0; disk < zone->nb_dev; disk++) {
 		sector_t dev_start, dev_end;
-		struct bio *discard_bio = NULL;
 		struct md_rdev *rdev;
 
 		if (disk < start_disk_index)
@@ -554,18 +553,9 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
 		rdev = conf->devlist[(zone - conf->strip_zone) *
 			conf->strip_zone[0].nb_dev + disk];
 
-		if (__blkdev_issue_discard(rdev->bdev,
+		md_submit_discard_bio(mddev, rdev, bio,
 			dev_start + zone->dev_start + rdev->data_offset,
-			dev_end - dev_start, GFP_NOIO, 0, &discard_bio) ||
-		    !discard_bio)
-			continue;
-		bio_chain(discard_bio, bio);
-		bio_clone_blkcg_association(discard_bio, bio);
-		if (mddev->gendisk)
-			trace_block_bio_remap(bdev_get_queue(rdev->bdev),
-				discard_bio, disk_devt(mddev->gendisk),
-				bio->bi_iter.bi_sector);
-		generic_make_request(discard_bio);
+			dev_end - dev_start);
 	}
 	bio_endio(bio);
 }
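
The helper's contract is easiest to see at the call site: it either chains a
discard bio to the parent or returns silently when the device cannot
discard, so the caller needs no error handling of its own. A minimal sketch
of the calling pattern, with the start/size arithmetic taken from the raid0
hunk above (the surrounding loop and locking are elided; this is an
illustration, not a further change to the patch):

	/* one chained discard per member disk; the helper returns
	 * quietly if __blkdev_issue_discard() produces no bio */
	md_submit_discard_bio(mddev, rdev, bio,
			dev_start + zone->dev_start + rdev->data_offset,
			dev_end - dev_start);

Because each child is attached with bio_chain(), the parent bio's
bio_endio() completion only fires after every chained child has finished.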
From patchwork Thu Oct 29 03:07:30 2020
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1389817

From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][B][F][G][PATCH 2/7] md/raid10: extend r10bio devs to raid disks
Date: Thu, 29 Oct 2020 16:07:30 +1300
Message-Id: <20201029030737.21204-4-matthew.ruffell@canonical.com>
In-Reply-To: <20201029030737.21204-1-matthew.ruffell@canonical.com>
References: <20201029030737.21204-1-matthew.ruffell@canonical.com>

From: Xiao Ni

BugLink: https://bugs.launchpad.net/bugs/1896578

Currently r10bio->devs[conf->copies] is allocated. A discard bio needs to
be submitted to all member disks, and it needs an r10bio to do so, so
extend the allocation to r10bio->devs[geo.raid_disks].
Reviewed-by: Coly Li
Signed-off-by: Xiao Ni
Signed-off-by: Song Liu
(cherry picked from commit 8650a889017cb1f6ea6813ccf83a2e9f6fa49dd3)
Signed-off-by: Matthew Ruffell
---
 drivers/md/raid10.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index ec136e44aef7..d37a83fd1ccf 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -91,7 +91,7 @@ static inline struct r10bio *get_resync_r10bio(struct bio *bio)
 static void * r10bio_pool_alloc(gfp_t gfp_flags, void *data)
 {
 	struct r10conf *conf = data;
-	int size = offsetof(struct r10bio, devs[conf->copies]);
+	int size = offsetof(struct r10bio, devs[conf->geo.raid_disks]);
 
 	/* allocate a r10bio with room for raid_disks entries in the
 	 * bios array */
@@ -238,7 +238,7 @@ static void put_all_bios(struct r10conf *conf, struct r10bio *r10_bio)
 {
 	int i;
 
-	for (i = 0; i < conf->copies; i++) {
+	for (i = 0; i < conf->geo.raid_disks; i++) {
 		struct bio **bio = & r10_bio->devs[i].bio;
 		if (!BIO_SPECIAL(*bio))
 			bio_put(*bio);
@@ -327,7 +327,7 @@ static int find_bio_disk(struct r10conf *conf, struct r10bio *r10_bio,
 	int slot;
 	int repl = 0;
 
-	for (slot = 0; slot < conf->copies; slot++) {
+	for (slot = 0; slot < conf->geo.raid_disks; slot++) {
 		if (r10_bio->devs[slot].bio == bio)
 			break;
 		if (r10_bio->devs[slot].repl_bio == bio) {
@@ -336,7 +336,6 @@ static int find_bio_disk(struct r10conf *conf, struct r10bio *r10_bio,
 		}
 	}
 
-	BUG_ON(slot == conf->copies);
 	update_head_pos(slot, r10_bio);
 
 	if (slotp)
@@ -1510,7 +1509,7 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
 	r10_bio->mddev = mddev;
 	r10_bio->sector = bio->bi_iter.bi_sector;
 	r10_bio->state = 0;
-	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * conf->copies);
+	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * conf->geo.raid_disks);
 
 	if (bio_data_dir(bio) == READ)
 		raid10_read_request(mddev, bio, r10_bio);
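
The sizing change above is the standard flexible-array idiom. A
self-contained userspace sketch (the struct and field names are stand-ins,
not the kernel's) of what offsetof(struct r10bio, devs[n]) computes:

	#include <stddef.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct demo_r10bio {
		long state;
		struct { void *bio; void *repl_bio; } devs[];	/* flexible array */
	};

	int main(void)
	{
		int raid_disks = 8;	/* was conf->copies before this patch */
		/* header plus exactly raid_disks trailing slots, one allocation */
		size_t size = offsetof(struct demo_r10bio, devs[raid_disks]);
		struct demo_r10bio *r = malloc(size);

		printf("%zu bytes for %d slots\n", size, raid_disks);
		free(r);
		return 0;
	}

Sizing by geo.raid_disks rather than copies costs a few unused slots on
mirrored layouts, but it lets one r10bio address every member disk, which
the discard path added later in this series relies on.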
From patchwork Thu Oct 29 03:07:31 2020
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1389816

From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][B][F][G][PATCH 3/7] md/raid10: pull codes that wait for blocked dev into one function
Date: Thu, 29 Oct 2020 16:07:31 +1300
Message-Id: <20201029030737.21204-5-matthew.ruffell@canonical.com>
In-Reply-To: <20201029030737.21204-1-matthew.ruffell@canonical.com>
References: <20201029030737.21204-1-matthew.ruffell@canonical.com>

From: Xiao Ni

BugLink: https://bugs.launchpad.net/bugs/1896578

The following patch will reuse this logic, so pull the duplicated code into
one function.
Signed-off-by: Xiao Ni
Signed-off-by: Song Liu
(cherry picked from commit f046f5d0d79cdb968f219ce249e497fd1accf484)
Signed-off-by: Matthew Ruffell
---
 drivers/md/raid10.c | 118 +++++++++++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 51 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index d37a83fd1ccf..51c483d71562 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1292,12 +1292,75 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
 	}
 }
 
+static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio)
+{
+	int i;
+	struct r10conf *conf = mddev->private;
+	struct md_rdev *blocked_rdev;
+
+retry_wait:
+	blocked_rdev = NULL;
+	rcu_read_lock();
+	for (i = 0; i < conf->copies; i++) {
+		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
+		struct md_rdev *rrdev = rcu_dereference(
+			conf->mirrors[i].replacement);
+		if (rdev == rrdev)
+			rrdev = NULL;
+		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
+			atomic_inc(&rdev->nr_pending);
+			blocked_rdev = rdev;
+			break;
+		}
+		if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
+			atomic_inc(&rrdev->nr_pending);
+			blocked_rdev = rrdev;
+			break;
+		}
+
+		if (rdev && test_bit(WriteErrorSeen, &rdev->flags)) {
+			sector_t first_bad;
+			sector_t dev_sector = r10_bio->devs[i].addr;
+			int bad_sectors;
+			int is_bad;
+
+			/* Discard request doesn't care the write result
+			 * so it doesn't need to wait blocked disk here.
+			 */
+			if (!r10_bio->sectors)
+				continue;
+
+			is_bad = is_badblock(rdev, dev_sector, r10_bio->sectors,
+					&first_bad, &bad_sectors);
+			if (is_bad < 0) {
+				/* Mustn't write here until the bad block
+				 * is acknowledged
+				 */
+				atomic_inc(&rdev->nr_pending);
+				set_bit(BlockedBadBlocks, &rdev->flags);
+				blocked_rdev = rdev;
+				break;
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	if (unlikely(blocked_rdev)) {
+		/* Have to wait for this device to get unblocked, then retry */
+		allow_barrier(conf);
+		raid10_log(conf->mddev, "%s wait rdev %d blocked",
+				__func__, blocked_rdev->raid_disk);
+		md_wait_for_blocked_rdev(blocked_rdev, mddev);
+		wait_barrier(conf);
+		goto retry_wait;
+	}
+}
+
 static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 				 struct r10bio *r10_bio)
 {
 	struct r10conf *conf = mddev->private;
 	int i;
-	struct md_rdev *blocked_rdev;
 	sector_t sectors;
 	int max_sectors;
 
@@ -1355,8 +1418,9 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	r10_bio->read_slot = -1; /* make sure repl_bio gets freed */
 	raid10_find_phys(conf, r10_bio);
-retry_write:
-	blocked_rdev = NULL;
+
+	wait_blocked_dev(mddev, r10_bio);
+
 	rcu_read_lock();
 	max_sectors = r10_bio->sectors;
 
@@ -1367,16 +1431,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 			conf->mirrors[d].replacement);
 		if (rdev == rrdev)
 			rrdev = NULL;
-		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
-			atomic_inc(&rdev->nr_pending);
-			blocked_rdev = rdev;
-			break;
-		}
-		if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
-			atomic_inc(&rrdev->nr_pending);
-			blocked_rdev = rrdev;
-			break;
-		}
 		if (rdev && (test_bit(Faulty, &rdev->flags)))
 			rdev = NULL;
 		if (rrdev && (test_bit(Faulty, &rrdev->flags)))
@@ -1397,15 +1451,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 
 			is_bad = is_badblock(rdev, dev_sector, max_sectors,
 					     &first_bad, &bad_sectors);
-			if (is_bad < 0) {
-				/* Mustn't write here until the bad block
-				 * is acknowledged
-				 */
-				atomic_inc(&rdev->nr_pending);
-				set_bit(BlockedBadBlocks, &rdev->flags);
-				blocked_rdev = rdev;
-				break;
-			}
 			if (is_bad && first_bad <= dev_sector) {
 				/* Cannot write here at all */
 				bad_sectors -= (dev_sector - first_bad);
@@ -1441,35 +1486,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	}
 	rcu_read_unlock();
 
-	if (unlikely(blocked_rdev)) {
-		/* Have to wait for this device to get unblocked, then retry */
-		int j;
-		int d;
-
-		for (j = 0; j < i; j++) {
-			if (r10_bio->devs[j].bio) {
-				d = r10_bio->devs[j].devnum;
-				rdev_dec_pending(conf->mirrors[d].rdev, mddev);
-			}
-			if (r10_bio->devs[j].repl_bio) {
-				struct md_rdev *rdev;
-				d = r10_bio->devs[j].devnum;
-				rdev = conf->mirrors[d].replacement;
-				if (!rdev) {
-					/* Race with remove_disk */
-					smp_mb();
-					rdev = conf->mirrors[d].rdev;
-				}
-				rdev_dec_pending(rdev, mddev);
-			}
-		}
-		allow_barrier(conf);
-		raid10_log(conf->mddev, "wait rdev %d blocked", blocked_rdev->raid_disk);
-		md_wait_for_blocked_rdev(blocked_rdev, mddev);
-		wait_barrier(conf);
-		goto retry_write;
-	}
-
 	if (max_sectors < r10_bio->sectors)
 		r10_bio->sectors = max_sectors;
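
Reduced to its control flow, the factored-out wait is a scan-pin-sleep-retry
loop. A compilable stand-alone sketch of that shape (every helper here is a
stand-in for the driver's RCU, barrier, and md_wait_for_blocked_rdev()
machinery, as noted in the comments):

	#include <stdbool.h>
	#include <stdio.h>

	#define NDISKS 4
	static bool blocked[NDISKS] = { false, true, false, false };

	/* stand-in for md_wait_for_blocked_rdev() */
	static void wait_until_unblocked(int i) { blocked[i] = false; }

	/* sketch of wait_blocked_dev(): scan every device; if one is blocked,
	 * drop the barrier, sleep until it clears, retake the barrier, and
	 * rescan from the top. The real code also pins the rdev with
	 * atomic_inc(&rdev->nr_pending) and treats an unacknowledged bad
	 * block in the write range as "blocked". */
	static void wait_blocked_dev(void)
	{
	retry_wait:
		for (int i = 0; i < NDISKS; i++) {
			if (blocked[i]) {
				/* allow_barrier(conf); */
				wait_until_unblocked(i);
				/* wait_barrier(conf); */
				goto retry_wait;
			}
		}
	}

	int main(void)
	{
		wait_blocked_dev();
		printf("no device blocked; safe to submit writes\n");
		return 0;
	}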
From patchwork Thu Oct 29 03:07:32 2020
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1389820

From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][F][G][PATCH 4/7] md/raid10: improve raid10 discard request
Date: Thu, 29 Oct 2020 16:07:32 +1300
Message-Id: <20201029030737.21204-6-matthew.ruffell@canonical.com>
In-Reply-To: <20201029030737.21204-1-matthew.ruffell@canonical.com>
References: <20201029030737.21204-1-matthew.ruffell@canonical.com>

From: Xiao Ni

BugLink: https://bugs.launchpad.net/bugs/1896578

Currently the discard request is split by chunk size, so it takes a long
time to finish mkfs on disks which support the discard function. This
patch improves the handling of raid10 discard requests. It uses a similar
approach to patch 29efc390b (md/md0: optimize raid0 discard handling), but
it is a little more complex than raid0, because raid10 has a different
layout.

If raid10 uses an offset layout and the discard request is smaller than
the stripe size, there are holes when we submit the discard bio to the
underlying disks. For example, with five disks (disk1 - disk5):

	D01 D02 D03 D04 D05
	D05 D01 D02 D03 D04
	D06 D07 D08 D09 D10
	D10 D06 D07 D08 D09

If the discard bio only wants to discard from D03 to D10, then for disk3
there is a hole between D03 and D08, and for disk4 there is a hole between
D04 and D09. D03 is one chunk, and raid10_write_request can handle one
chunk perfectly, so the part that is not aligned with the stripe size is
still handled by raid10_write_request.

If a reshape is running when a discard bio comes in and the discard bio
spans the reshape position, raid10_write_request is responsible for
handling it.

I did a test with this patch set.
Without patch:
  time mkfs.xfs /dev/md0
  real    4m39.775s
  user    0m0.000s
  sys     0m0.298s

With patch:
  time mkfs.xfs /dev/md0
  real    0m0.105s
  user    0m0.000s
  sys     0m0.007s

nvme3n1     259:1    0  477G  0 disk
└─nvme3n1p1 259:10   0   50G  0 part
nvme4n1     259:2    0  477G  0 disk
└─nvme4n1p1 259:11   0   50G  0 part
nvme5n1     259:6    0  477G  0 disk
└─nvme5n1p1 259:12   0   50G  0 part
nvme2n1     259:9    0  477G  0 disk
└─nvme2n1p1 259:15   0   50G  0 part
nvme0n1     259:13   0  477G  0 disk
└─nvme0n1p1 259:14   0   50G  0 part

Reviewed-by: Coly Li
Reviewed-by: Guoqing Jiang
Signed-off-by: Xiao Ni
Signed-off-by: Song Liu
(backported from commit bcc90d280465ebd51ab8688be86e1f00c62dccf9)
[mruffell: change submit_bio_noacct() to generic_make_request()]
Signed-off-by: Matthew Ruffell
---
 drivers/md/raid10.c | 256 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 255 insertions(+), 1 deletion(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 51c483d71562..e67ecd1067a1 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1533,6 +1533,256 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
 		raid10_write_request(mddev, bio, r10_bio);
 }
 
+static struct bio *raid10_split_bio(struct r10conf *conf,
+		struct bio *bio, sector_t sectors, bool want_first)
+{
+	struct bio *split;
+
+	split = bio_split(bio, sectors, GFP_NOIO, &conf->bio_split);
+	bio_chain(split, bio);
+	allow_barrier(conf);
+	if (want_first) {
+		generic_make_request(bio);
+		bio = split;
+	} else
+		generic_make_request(split);
+	wait_barrier(conf);
+
+	return bio;
+}
+
+static void raid10_end_discard_request(struct bio *bio)
+{
+	struct r10bio *r10_bio = bio->bi_private;
+	struct r10conf *conf = r10_bio->mddev->private;
+	struct md_rdev *rdev = NULL;
+	int dev;
+	int slot, repl;
+
+	/*
+	 * We don't care the return value of discard bio
+	 */
+	if (!test_bit(R10BIO_Uptodate, &r10_bio->state))
+		set_bit(R10BIO_Uptodate, &r10_bio->state);
+
+	dev = find_bio_disk(conf, r10_bio, bio, &slot, &repl);
+	if (repl)
+		rdev = conf->mirrors[dev].replacement;
+	if (!rdev) {
+		/* raid10_remove_disk uses smp_mb to make sure rdev is set to
+		 * replacement before setting replacement to NULL. It can read
+		 * rdev first without barrier protect even replacment is NULL
+		 */
+		smp_rmb();
+		rdev = conf->mirrors[dev].rdev;
+	}
+
+	if (atomic_dec_and_test(&r10_bio->remaining)) {
+		md_write_end(r10_bio->mddev);
+		raid_end_bio_io(r10_bio);
+	}
+
+	rdev_dec_pending(rdev, conf->mddev);
+}
+
+/* There are some limitations to handle discard bio
+ * 1st, the discard size is bigger than stripe_size*2.
+ * 2st, if the discard bio spans reshape progress, we use the old way to
+ * handle discard bio
+ */
+static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
+{
+	struct r10conf *conf = mddev->private;
+	struct geom *geo = &conf->geo;
+	struct r10bio *r10_bio;
+
+	int disk;
+	sector_t chunk;
+	unsigned int stripe_size;
+	sector_t split_size;
+
+	sector_t bio_start, bio_end;
+	sector_t first_stripe_index, last_stripe_index;
+	sector_t start_disk_offset;
+	unsigned int start_disk_index;
+	sector_t end_disk_offset;
+	unsigned int end_disk_index;
+	unsigned int remainder;
+
+	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
+		return -EAGAIN;
+
+	wait_barrier(conf);
+
+	/* Check reshape again to avoid reshape happens after checking
+	 * MD_RECOVERY_RESHAPE and before wait_barrier
+	 */
+	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
+		goto out;
+
+	stripe_size = geo->raid_disks << geo->chunk_shift;
+	bio_start = bio->bi_iter.bi_sector;
+	bio_end = bio_end_sector(bio);
+
+	/* Maybe one discard bio is smaller than strip size or across one stripe
+	 * and discard region is larger than one stripe size. For far offset layout,
+	 * if the discard region is not aligned with stripe size, there is hole
+	 * when we submit discard bio to member disk. For simplicity, we only
+	 * handle discard bio which discard region is bigger than stripe_size*2
+	 */
+	if (bio_sectors(bio) < stripe_size*2)
+		goto out;
+
+	/* For far offset layout, if bio is not aligned with stripe size, it splits
+	 * the part that is not aligned with strip size.
+	 */
+	div_u64_rem(bio_start, stripe_size, &remainder);
+	if (geo->far_offset && remainder) {
+		split_size = stripe_size - remainder;
+		bio = raid10_split_bio(conf, bio, split_size, false);
+	}
+	div_u64_rem(bio_end, stripe_size, &remainder);
+	if (geo->far_offset && remainder) {
+		split_size = bio_sectors(bio) - remainder;
+		bio = raid10_split_bio(conf, bio, split_size, true);
+	}
+
+	r10_bio = mempool_alloc(&conf->r10bio_pool, GFP_NOIO);
+	r10_bio->mddev = mddev;
+	r10_bio->state = 0;
+	r10_bio->sectors = 0;
+	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
+
+	wait_blocked_dev(mddev, r10_bio);
+
+	r10_bio->master_bio = bio;
+
+	bio_start = bio->bi_iter.bi_sector;
+	bio_end = bio_end_sector(bio);
+
+	/* raid10 uses chunk as the unit to store data. It's similar like raid0.
+	 * One stripe contains the chunks from all member disk (one chunk from
+	 * one disk at the same HBA address). For layout detail, see 'man md 4'
+	 */
+	chunk = bio_start >> geo->chunk_shift;
+	chunk *= geo->near_copies;
+	first_stripe_index = chunk;
+	start_disk_index = sector_div(first_stripe_index, geo->raid_disks);
+	if (geo->far_offset)
+		first_stripe_index *= geo->far_copies;
+	start_disk_offset = (bio_start & geo->chunk_mask) +
+				(first_stripe_index << geo->chunk_shift);
+
+	chunk = bio_end >> geo->chunk_shift;
+	chunk *= geo->near_copies;
+	last_stripe_index = chunk;
+	end_disk_index = sector_div(last_stripe_index, geo->raid_disks);
+	if (geo->far_offset)
+		last_stripe_index *= geo->far_copies;
+	end_disk_offset = (bio_end & geo->chunk_mask) +
+				(last_stripe_index << geo->chunk_shift);
+
+	rcu_read_lock();
+	for (disk = 0; disk < geo->raid_disks; disk++) {
+		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
+		struct md_rdev *rrdev = rcu_dereference(
+			conf->mirrors[disk].replacement);
+
+		r10_bio->devs[disk].bio = NULL;
+		r10_bio->devs[disk].repl_bio = NULL;
+
+		if (rdev && (test_bit(Faulty, &rdev->flags)))
+			rdev = NULL;
+		if (rrdev && (test_bit(Faulty, &rrdev->flags)))
+			rrdev = NULL;
+		if (!rdev && !rrdev)
+			continue;
+
+		if (rdev) {
+			r10_bio->devs[disk].bio = bio;
+			atomic_inc(&rdev->nr_pending);
+		}
+		if (rrdev) {
+			r10_bio->devs[disk].repl_bio = bio;
+			atomic_inc(&rrdev->nr_pending);
+		}
+	}
+	rcu_read_unlock();
+
+	atomic_set(&r10_bio->remaining, 1);
+	for (disk = 0; disk < geo->raid_disks; disk++) {
+		sector_t dev_start, dev_end;
+		struct bio *mbio, *rbio = NULL;
+		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
+		struct md_rdev *rrdev = rcu_dereference(
+			conf->mirrors[disk].replacement);
+
+		/*
+		 * Now start to calculate the start and end address for each disk.
+		 * The space between dev_start and dev_end is the discard region.
+		 *
+		 * For dev_start, it needs to consider three conditions:
+		 * 1st, the disk is before start_disk, you can imagine the disk in
+		 * the next stripe. So the dev_start is the start address of next
+		 * stripe.
+		 * 2st, the disk is after start_disk, it means the disk is at the
+		 * same stripe of first disk
+		 * 3st, the first disk itself, we can use start_disk_offset directly
+		 */
+		if (disk < start_disk_index)
+			dev_start = (first_stripe_index + 1) * mddev->chunk_sectors;
+		else if (disk > start_disk_index)
+			dev_start = first_stripe_index * mddev->chunk_sectors;
+		else
+			dev_start = start_disk_offset;
+
+		if (disk < end_disk_index)
+			dev_end = (last_stripe_index + 1) * mddev->chunk_sectors;
+		else if (disk > end_disk_index)
+			dev_end = last_stripe_index * mddev->chunk_sectors;
+		else
+			dev_end = end_disk_offset;
+
+		/* It only handles discard bio which size is >= stripe size, so
+		 * dev_end > dev_start all the time
+		 */
+		if (r10_bio->devs[disk].bio) {
+			mbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
+			mbio->bi_end_io = raid10_end_discard_request;
+			mbio->bi_private = r10_bio;
+			r10_bio->devs[disk].bio = mbio;
+			r10_bio->devs[disk].devnum = disk;
+			atomic_inc(&r10_bio->remaining);
+			md_submit_discard_bio(mddev, rdev, mbio,
+					dev_start + choose_data_offset(r10_bio, rdev),
+					dev_end - dev_start);
+			bio_endio(mbio);
+		}
+		if (r10_bio->devs[disk].repl_bio) {
+			rbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
+			rbio->bi_end_io = raid10_end_discard_request;
+			rbio->bi_private = r10_bio;
+			r10_bio->devs[disk].repl_bio = rbio;
+			r10_bio->devs[disk].devnum = disk;
+			atomic_inc(&r10_bio->remaining);
+			md_submit_discard_bio(mddev, rrdev, rbio,
					dev_start + choose_data_offset(r10_bio, rrdev),
+					dev_end - dev_start);
+			bio_endio(rbio);
+		}
+	}
+
+	if (atomic_dec_and_test(&r10_bio->remaining)) {
+		md_write_end(r10_bio->mddev);
+		raid_end_bio_io(r10_bio);
+	}
+
+	return 0;
+out:
+	allow_barrier(conf);
+	return -EAGAIN;
+}
+
 static bool raid10_make_request(struct mddev *mddev, struct bio *bio)
 {
 	struct r10conf *conf = mddev->private;
@@ -1547,6 +1797,10 @@ static bool raid10_make_request(struct mddev *mddev, struct bio *bio)
 	if (!md_write_start(mddev, bio))
 		return false;
 
+	if (unlikely(bio_op(bio) == REQ_OP_DISCARD))
+		if (!raid10_handle_discard(mddev, bio))
+			return true;
+
 	/*
 	 * If this request crosses a chunk boundary, we need to split
 	 * it.
@@ -3777,7 +4031,7 @@ static int raid10_run(struct mddev *mddev)
 	chunk_size = mddev->chunk_sectors << 9;
 	if (mddev->queue) {
 		blk_queue_max_discard_sectors(mddev->queue,
-					      mddev->chunk_sectors);
+					      UINT_MAX);
 		blk_queue_max_write_same_sectors(mddev->queue, 0);
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
 		blk_queue_io_min(mddev->queue, chunk_size);
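
To make the chunk/stripe index arithmetic in raid10_handle_discard()
concrete, here is a small self-contained userspace sketch of the same
computation (near_copies = 1 and far_offset = 0 are assumed, and the input
numbers are illustrative, not taken from the test above):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t raid_disks = 5;	/* disks in the array */
		uint64_t chunk_shift = 9;	/* 512-sector (256 KiB) chunks */
		uint64_t chunk_mask = (1ULL << chunk_shift) - 1;
		uint64_t bio_start = 3000;	/* first sector of the discard */

		/* mirrors the patch: chunk = bio_start >> chunk_shift, then
		 * sector_div() splits the linear chunk number into a stripe
		 * row (first_stripe_index) and a disk column
		 * (start_disk_index); with near_copies == 1 the
		 * '*= near_copies' step is a no-op */
		uint64_t chunk = bio_start >> chunk_shift;
		uint64_t first_stripe_index = chunk / raid_disks;
		uint64_t start_disk_index = chunk % raid_disks;
		uint64_t start_disk_offset = (bio_start & chunk_mask) +
					     (first_stripe_index << chunk_shift);

		/* sector 3000 sits in chunk 5 -> stripe 1, disk 0, offset 952 */
		printf("stripe=%llu disk=%llu dev_offset=%llu\n",
		       (unsigned long long)first_stripe_index,
		       (unsigned long long)start_disk_index,
		       (unsigned long long)start_disk_offset);
		return 0;
	}

The same arithmetic is run once for bio_start and once for bio_end, and
each member disk's dev_start/dev_end is then chosen from the three cases
spelled out in the comment inside the patch.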
From patchwork Thu Oct 29 03:07:34 2020
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1389822

From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][F][G][PATCH 5/7] md/raid10: improve discard request for far layout
Date: Thu, 29 Oct 2020 16:07:34 +1300
Message-Id: <20201029030737.21204-8-matthew.ruffell@canonical.com>
In-Reply-To: <20201029030737.21204-1-matthew.ruffell@canonical.com>
References: <20201029030737.21204-1-matthew.ruffell@canonical.com>

From: Xiao Ni

BugLink: https://bugs.launchpad.net/bugs/1896578

For the far layout, the discard region is not continuous on the disks, so
it needs far_copies r10bios to cover all regions. It also needs a way to
know whether all the r10bios have finished or not. Similar to
raid10_sync_request, only the first r10bio's master_bio records the
discard bio; the other r10bios' master_bio record the first r10bio. The
first r10bio can finish only after the other r10bios finish, and it then
returns the discard bio.

Signed-off-by: Xiao Ni
Signed-off-by: Song Liu
(cherry picked from commit d3ee2d8415a6256c1c41e1be36e80e640c3e6359)
Signed-off-by: Matthew Ruffell
---
 drivers/md/raid10.c | 86 +++++++++++++++++++++++++++++++++------------
 drivers/md/raid10.h |  1 +
 2 files changed, 64 insertions(+), 23 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index e67ecd1067a1..e9afda6e8441 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1551,6 +1551,28 @@ static struct bio *raid10_split_bio(struct r10conf *conf,
 	return bio;
 }
 
+static void raid_end_discard_bio(struct r10bio *r10bio)
+{
+	struct r10conf *conf = r10bio->mddev->private;
+	struct r10bio *first_r10bio;
+
+	while (atomic_dec_and_test(&r10bio->remaining)) {
+
+		allow_barrier(conf);
+
+		if (!test_bit(R10BIO_Discard, &r10bio->state)) {
+			first_r10bio = (struct r10bio *)r10bio->master_bio;
+			free_r10bio(r10bio);
+			r10bio = first_r10bio;
+		} else {
+			md_write_end(r10bio->mddev);
+			bio_endio(r10bio->master_bio);
+			free_r10bio(r10bio);
+			break;
+		}
+	}
+}
+
 static void raid10_end_discard_request(struct bio *bio)
 {
 	struct r10bio *r10_bio = bio->bi_private;
@@ -1577,11 +1599,7 @@ static void raid10_end_discard_request(struct bio *bio)
 		rdev = conf->mirrors[dev].rdev;
 	}
 
-	if (atomic_dec_and_test(&r10_bio->remaining)) {
-		md_write_end(r10_bio->mddev);
-		raid_end_bio_io(r10_bio);
-	}
-
+	raid_end_discard_bio(r10_bio);
 	rdev_dec_pending(rdev, conf->mddev);
 }
 
@@ -1594,7 +1612,9 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 {
 	struct r10conf *conf = mddev->private;
 	struct geom *geo = &conf->geo;
-	struct r10bio *r10_bio;
+	struct r10bio *r10_bio, *first_r10bio;
+	int far_copies = geo->far_copies;
+	bool first_copy = true;
 
 	int disk;
 	sector_t chunk;
@@ -1633,30 +1653,20 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 	if (bio_sectors(bio) < stripe_size*2)
 		goto out;
 
-	/* For far offset layout, if bio is not aligned with stripe size, it splits
-	 * the part that is not aligned with strip size.
+	/* For far and far offset layout, if bio is not aligned with stripe size,
+	 * it splits the part that is not aligned with strip size.
 	 */
 	div_u64_rem(bio_start, stripe_size, &remainder);
-	if (geo->far_offset && remainder) {
+	if ((far_copies > 1) && remainder) {
 		split_size = stripe_size - remainder;
 		bio = raid10_split_bio(conf, bio, split_size, false);
 	}
 	div_u64_rem(bio_end, stripe_size, &remainder);
-	if (geo->far_offset && remainder) {
+	if ((far_copies > 1) && remainder) {
 		split_size = bio_sectors(bio) - remainder;
 		bio = raid10_split_bio(conf, bio, split_size, true);
 	}
 
-	r10_bio = mempool_alloc(&conf->r10bio_pool, GFP_NOIO);
-	r10_bio->mddev = mddev;
-	r10_bio->state = 0;
-	r10_bio->sectors = 0;
-	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
-
-	wait_blocked_dev(mddev, r10_bio);
-
-	r10_bio->master_bio = bio;
-
 	bio_start = bio->bi_iter.bi_sector;
 	bio_end = bio_end_sector(bio);
 
@@ -1682,6 +1692,28 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 	end_disk_offset = (bio_end & geo->chunk_mask) +
 				(last_stripe_index << geo->chunk_shift);
 
+retry_discard:
+	r10_bio = mempool_alloc(&conf->r10bio_pool, GFP_NOIO);
+	r10_bio->mddev = mddev;
+	r10_bio->state = 0;
+	r10_bio->sectors = 0;
+	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
+	wait_blocked_dev(mddev, r10_bio);
+
+	/* For far layout it needs more than one r10bio to cover all regions.
+	 * Inspired by raid10_sync_request, we can use the first r10bio->master_bio
+	 * to record the discard bio. Other r10bio->master_bio record the first
+	 * r10bio. The first r10bio only release after all other r10bios finish.
+	 * The discard bio returns only first r10bio finishes
+	 */
+	if (first_copy) {
+		r10_bio->master_bio = bio;
+		set_bit(R10BIO_Discard, &r10_bio->state);
+		first_copy = false;
+		first_r10bio = r10_bio;
+	} else
+		r10_bio->master_bio = (struct bio *)first_r10bio;
+
 	rcu_read_lock();
 	for (disk = 0; disk < geo->raid_disks; disk++) {
 		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
@@ -1772,11 +1804,19 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 		}
 	}
 
-	if (atomic_dec_and_test(&r10_bio->remaining)) {
-		md_write_end(r10_bio->mddev);
-		raid_end_bio_io(r10_bio);
+	if (!geo->far_offset && --far_copies) {
+		first_stripe_index += geo->stride >> geo->chunk_shift;
+		start_disk_offset += geo->stride;
+		last_stripe_index += geo->stride >> geo->chunk_shift;
+		end_disk_offset += geo->stride;
+		atomic_inc(&first_r10bio->remaining);
+		raid_end_discard_bio(r10_bio);
+		wait_barrier(conf);
+		goto retry_discard;
 	}
 
+	raid_end_discard_bio(r10_bio);
+
 	return 0;
 out:
 	allow_barrier(conf);
diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
index 79cd2b7d3128..1461fd55311b 100644
--- a/drivers/md/raid10.h
+++ b/drivers/md/raid10.h
@@ -179,5 +179,6 @@ enum r10bio_state {
 	R10BIO_Previous,
 /* failfast devices did receive failfast requests.
  */
 	R10BIO_FailFast,
+	R10BIO_Discard,
 };
 #endif
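
The completion chaining this patch introduces is essentially a two-level
reference count. A self-contained userspace sketch of the same shape (the
names are stand-ins; the real code uses atomic_dec_and_test() on
r10bio->remaining and repurposes master_bio as the back-pointer):

	#include <stdio.h>

	struct demo_r10bio {
		int remaining;			/* stands in for atomic_t remaining */
		int is_first;			/* stands in for R10BIO_Discard */
		struct demo_r10bio *first;	/* stands in for the master_bio link */
	};

	/* mirrors raid_end_discard_bio(): a secondary r10bio that reaches
	 * zero drops one reference on the first r10bio; the original
	 * discard bio completes only when the first one reaches zero */
	static void end_discard(struct demo_r10bio *r)
	{
		while (--r->remaining == 0) {
			if (!r->is_first) {
				struct demo_r10bio *first = r->first;
				/* free_r10bio(r) happens here in the driver */
				r = first;
				continue;
			}
			printf("bio_endio() on the master discard bio\n");
			break;
		}
	}

	int main(void)
	{
		/* far_copies == 3: one first r10bio plus two secondaries */
		struct demo_r10bio first = { .remaining = 3, .is_first = 1 };
		struct demo_r10bio copy2 = { .remaining = 1, .first = &first };
		struct demo_r10bio copy3 = { .remaining = 1, .first = &first };

		end_discard(&copy2);	/* drops one reference on 'first' */
		end_discard(&copy3);	/* drops another */
		end_discard(&first);	/* last reference: master bio completes */
		return 0;
	}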
From patchwork Thu Oct 29 03:07:36 2020
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1389824

From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][F][G][PATCH 6/7] dm raid: fix discard limits for raid1 and raid10
Date: Thu, 29 Oct 2020 16:07:36 +1300
Message-Id: <20201029030737.21204-10-matthew.ruffell@canonical.com>
In-Reply-To: <20201029030737.21204-1-matthew.ruffell@canonical.com>
References: <20201029030737.21204-1-matthew.ruffell@canonical.com>

From: Mike Snitzer

BugLink: https://bugs.launchpad.net/bugs/1896578

Block core warned that discard_granularity was 0 for dm-raid with a
personality of raid1. The reason is that raid_io_hints() was incorrectly
special-casing raid1 rather than raid0. But since commit 29efc390b9462
("md/md0: optimize raid0 discard handling") even raid0 properly handles
large discards.

Fix raid_io_hints() by removing the discard limits settings for raid1.
Also fix the limits for raid10 by properly stacking the underlying limits,
as done in blk_stack_limits().

Depends-on: 29efc390b9462 ("md/md0: optimize raid0 discard handling")
Fixes: 61697a6abd24a ("dm: eliminate 'split_discard_bios' flag from DM target interface")
Cc: stable@vger.kernel.org
Reported-by: Zdenek Kabelac
Reported-by: Mikulas Patocka
Signed-off-by: Mike Snitzer
(cherry picked from commit e0910c8e4f87bb9f767e61a778b0d9271c4dc512)
Signed-off-by: Matthew Ruffell
---
 drivers/md/dm-raid.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 10e8b2fe787b..793348ae1e8c 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3744,12 +3744,14 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
 
 	/*
-	 * RAID1 and RAID10 personalities require bio splitting,
-	 * RAID0/4/5/6 don't and process large discard bios properly.
+	 * RAID10 personality requires bio splitting,
+	 * RAID0/1/4/5/6 don't and process large discard bios properly.
 	 */
-	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
-		limits->discard_granularity = chunk_size_bytes;
-		limits->max_discard_sectors = rs->md.chunk_sectors;
+	if (rs_is_raid10(rs)) {
+		limits->discard_granularity = max(chunk_size_bytes,
+						  limits->discard_granularity);
+		limits->max_discard_sectors = min_not_zero(rs->md.chunk_sectors,
+							   limits->max_discard_sectors);
 	}
 }
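
The stacking semantics used above treat zero as "no limit". A small
userspace stand-in for the kernel's min_not_zero() macro (GCC statement
expressions assumed), showing why a lower device that reports 0 does not
clamp the result:

	#include <stdio.h>

	/* stand-in for the kernel's min_not_zero(): the smaller of two
	 * values, except that a zero operand means "unlimited" and loses */
	#define min_not_zero(x, y) ({				\
		__typeof__(x) _x = (x);				\
		__typeof__(y) _y = (y);				\
		_x == 0 ? _y : (_y == 0 ? _x : (_x < _y ? _x : _y)); })

	int main(void)
	{
		unsigned int chunk_sectors = 2048;	/* rs->md.chunk_sectors */
		unsigned int lower_limit = 0;		/* lower queue: unlimited */

		/* prints 2048: the chunk size wins when the lower limit is 0 */
		printf("%u\n", min_not_zero(chunk_sectors, lower_limit));

		lower_limit = 512;			/* lower queue: 512 sectors */
		/* prints 512: the tighter nonzero limit wins */
		printf("%u\n", min_not_zero(chunk_sectors, lower_limit));
		return 0;
	}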
From patchwork Thu Oct 29 03:07:37 2020
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1389825

From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][F][G][PATCH 7/7] dm raid: remove unnecessary discard limits for raid10
Date: Thu, 29 Oct 2020 16:07:37 +1300
Message-Id: <20201029030737.21204-11-matthew.ruffell@canonical.com>
In-Reply-To: <20201029030737.21204-1-matthew.ruffell@canonical.com>
References: <20201029030737.21204-1-matthew.ruffell@canonical.com>

From: Mike Snitzer

BugLink: https://bugs.launchpad.net/bugs/1896578

Commit bcc90d280465e ("md/raid10: improve raid10 discard request") removes
raid10's inability to properly handle large discards, so eliminate the
associated constraint from dm-raid's raid10 support.

Signed-off-by: Mike Snitzer
(cherry picked from commit f0e90b6c663a7e3b4736cb318c6c7c589f152c28)
Signed-off-by: Matthew Ruffell
---
 drivers/md/dm-raid.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 793348ae1e8c..e966cb678506 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3742,17 +3742,6 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
 
 	blk_limits_io_min(limits, chunk_size_bytes);
 	blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
-
-	/*
-	 * RAID10 personality requires bio splitting,
-	 * RAID0/1/4/5/6 don't and process large discard bios properly.
-	 */
-	if (rs_is_raid10(rs)) {
-		limits->discard_granularity = max(chunk_size_bytes,
-						  limits->discard_granularity);
-		limits->max_discard_sectors = min_not_zero(rs->md.chunk_sectors,
-							   limits->max_discard_sectors);
-	}
 }
 
 static void raid_postsuspend(struct dm_target *ti)