From patchwork Thu Oct 29 03:07:35 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1389823
From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][B][PATCH 5/7] md/raid10: improve discard request for far layout
Date: Thu, 29 Oct 2020 16:07:35 +1300
Message-Id: <20201029030737.21204-9-matthew.ruffell@canonical.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201029030737.21204-1-matthew.ruffell@canonical.com>
References: <20201029030737.21204-1-matthew.ruffell@canonical.com>

From: Xiao Ni

BugLink: https://bugs.launchpad.net/bugs/1896578

For the far layout, the discard region is not contiguous on the disks, so
it takes far_copies r10bios to cover all the regions, and we need a way to
know whether all of the r10bios have finished. Similar to
raid10_sync_request, only the first r10bio's master_bio records the discard
bio; the other r10bios' master_bio fields record the first r10bio. The
first r10bio can only finish after the other r10bios have finished, and it
then returns the discard bio.

Signed-off-by: Xiao Ni
Signed-off-by: Song Liu
(backported from commit d3ee2d8415a6256c1c41e1be36e80e640c3e6359)
[mruffell: remove "address of" pointer for mempool_alloc()]
Signed-off-by: Matthew Ruffell
---
 drivers/md/raid10.c | 86 +++++++++++++++++++++++++++++++++------------
 drivers/md/raid10.h |  1 +
 2 files changed, 64 insertions(+), 23 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 81a8aa60dbb9..1ebfd5217791 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1574,6 +1574,28 @@ static struct bio *raid10_split_bio(struct r10conf *conf,
 	return bio;
 }
 
+static void raid_end_discard_bio(struct r10bio *r10bio)
+{
+	struct r10conf *conf = r10bio->mddev->private;
+	struct r10bio *first_r10bio;
+
+	while (atomic_dec_and_test(&r10bio->remaining)) {
+
+		allow_barrier(conf);
+
+		if (!test_bit(R10BIO_Discard, &r10bio->state)) {
+			first_r10bio = (struct r10bio *)r10bio->master_bio;
+			free_r10bio(r10bio);
+			r10bio = first_r10bio;
+		} else {
+			md_write_end(r10bio->mddev);
+			bio_endio(r10bio->master_bio);
+			free_r10bio(r10bio);
+			break;
+		}
+	}
+}
+
 static void raid10_end_discard_request(struct bio *bio)
 {
 	struct r10bio *r10_bio = bio->bi_private;
@@ -1600,11 +1622,7 @@ static void raid10_end_discard_request(struct bio *bio)
 		rdev = conf->mirrors[dev].rdev;
 	}
 
-	if (atomic_dec_and_test(&r10_bio->remaining)) {
-		md_write_end(r10_bio->mddev);
-		raid_end_bio_io(r10_bio);
-	}
-
+	raid_end_discard_bio(r10_bio);
 	rdev_dec_pending(rdev, conf->mddev);
 }
 
@@ -1617,7 +1635,9 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 {
 	struct r10conf *conf = mddev->private;
 	struct geom *geo = &conf->geo;
-	struct r10bio *r10_bio;
+	struct r10bio *r10_bio, *first_r10bio;
+	int far_copies = geo->far_copies;
+	bool first_copy = true;
 
 	int disk;
 	sector_t chunk;
@@ -1656,30 +1676,20 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 	if (bio_sectors(bio) < stripe_size*2)
 		goto out;
 
-	/* For far offset layout, if bio is not aligned with stripe size, it splits
-	 * the part that is not aligned with strip size.
+	/* For far and far offset layout, if bio is not aligned with stripe size,
+	 * it splits the part that is not aligned with strip size.
 	 */
 	div_u64_rem(bio_start, stripe_size, &remainder);
-	if (geo->far_offset && remainder) {
+	if ((far_copies > 1) && remainder) {
 		split_size = stripe_size - remainder;
 		bio = raid10_split_bio(conf, bio, split_size, false);
 	}
 
 	div_u64_rem(bio_end, stripe_size, &remainder);
-	if (geo->far_offset && remainder) {
+	if ((far_copies > 1) && remainder) {
 		split_size = bio_sectors(bio) - remainder;
 		bio = raid10_split_bio(conf, bio, split_size, true);
 	}
-	r10_bio = mempool_alloc(conf->r10bio_pool, GFP_NOIO);
-	r10_bio->mddev = mddev;
-	r10_bio->state = 0;
-	r10_bio->sectors = 0;
-	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
-
-	wait_blocked_dev(mddev, r10_bio);
-
-	r10_bio->master_bio = bio;
-
 	bio_start = bio->bi_iter.bi_sector;
 	bio_end = bio_end_sector(bio);
 
@@ -1705,6 +1715,28 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 	end_disk_offset = (bio_end & geo->chunk_mask) +
 		(last_stripe_index << geo->chunk_shift);
 
+retry_discard:
+	r10_bio = mempool_alloc(conf->r10bio_pool, GFP_NOIO);
+	r10_bio->mddev = mddev;
+	r10_bio->state = 0;
+	r10_bio->sectors = 0;
+	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
+	wait_blocked_dev(mddev, r10_bio);
+
+	/* For far layout it needs more than one r10bio to cover all regions.
+	 * Inspired by raid10_sync_request, we can use the first r10bio->master_bio
+	 * to record the discard bio. Other r10bio->master_bio record the first
+	 * r10bio. The first r10bio only release after all other r10bios finish.
+	 * The discard bio returns only first r10bio finishes
+	 */
+	if (first_copy) {
+		r10_bio->master_bio = bio;
+		set_bit(R10BIO_Discard, &r10_bio->state);
+		first_copy = false;
+		first_r10bio = r10_bio;
+	} else
+		r10_bio->master_bio = (struct bio *)first_r10bio;
+
 	rcu_read_lock();
 	for (disk = 0; disk < geo->raid_disks; disk++) {
 		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
@@ -1795,11 +1827,19 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 		}
 	}
 
-	if (atomic_dec_and_test(&r10_bio->remaining)) {
-		md_write_end(r10_bio->mddev);
-		raid_end_bio_io(r10_bio);
+	if (!geo->far_offset && --far_copies) {
+		first_stripe_index += geo->stride >> geo->chunk_shift;
+		start_disk_offset += geo->stride;
+		last_stripe_index += geo->stride >> geo->chunk_shift;
+		end_disk_offset += geo->stride;
+		atomic_inc(&first_r10bio->remaining);
+		raid_end_discard_bio(r10_bio);
+		wait_barrier(conf);
+		goto retry_discard;
 	}
 
+	raid_end_discard_bio(r10_bio);
+
 	return 0;
 out:
 	allow_barrier(conf);
diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
index e2e8840de9bf..f157ef5ce49c 100644
--- a/drivers/md/raid10.h
+++ b/drivers/md/raid10.h
@@ -179,5 +179,6 @@ enum r10bio_state {
 	R10BIO_Previous,
 /* failfast devices did receive failfast requests. */
 	R10BIO_FailFast,
+	R10BIO_Discard,
 };
 #endif
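
The completion chaining above can be seen in isolation with a minimal
user-space sketch of the same reference-counting shape. Everything here
(struct tracker, end_discard(), the counts in main()) is invented for
illustration; barriers, md_write_end(), error handling, and the real
per-bio accounting of the kernel code are omitted:

/*
 * Minimal user-space sketch (not kernel code) of the chained completion
 * scheme used above: each far copy gets its own tracking object, the
 * non-first objects point back at the first one, and the request as a
 * whole completes only when the first object's counter drains.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct tracker {			/* stands in for an r10bio */
	atomic_int remaining;		/* outstanding references on this copy */
	bool is_first;			/* plays the role of R10BIO_Discard */
	struct tracker *first;		/* like master_bio on non-first r10bios */
};

static void end_discard(struct tracker *t)
{
	/*
	 * Same shape as raid_end_discard_bio(): atomic_fetch_sub() returns
	 * the old value, so old == 1 means the counter just hit zero, i.e.
	 * the kernel's atomic_dec_and_test().
	 */
	while (atomic_fetch_sub(&t->remaining, 1) == 1) {
		if (t->is_first) {
			puts("discard complete");	/* bio_endio() stand-in */
			free(t);
			break;
		}
		struct tracker *first = t->first;
		free(t);			/* free_r10bio() stand-in */
		t = first;	/* loop: release one reference on the first copy */
	}
}

int main(void)
{
	const int far_copies = 2, disks = 4;

	struct tracker *first = calloc(1, sizeof(*first));
	first->is_first = true;
	/*
	 * One reference per sub-request on the first copy, plus one held by
	 * each additional copy (cf. atomic_inc(&first_r10bio->remaining)
	 * before each retry_discard iteration).
	 */
	atomic_init(&first->remaining, disks + (far_copies - 1));

	for (int copy = 1; copy < far_copies; copy++) {
		struct tracker *t = calloc(1, sizeof(*t));
		t->first = first;
		atomic_init(&t->remaining, disks);
		for (int d = 0; d < disks; d++)
			end_discard(t);	/* last call frees t, releases first */
	}
	for (int d = 0; d < disks; d++)
		end_discard(first);	/* last call prints "discard complete" */
	return 0;
}

Built with cc -std=c11, this prints "discard complete" exactly once, only
after both copies' sub-requests have drained, which is the invariant
raid_end_discard_bio() maintains for the discard bio.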