From patchwork Tue Dec 10 16:56:08 2019
X-Patchwork-Submitter: Kirill Tkhai
X-Patchwork-Id: 1207163
Subject: [PATCH RFC 1/3] block: Add support for REQ_OP_ASSIGN_RANGE operation
From: Kirill Tkhai
To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-ext4@vger.kernel.org
Cc: axboe@kernel.dk, tytso@mit.edu, adilger.kernel@dilger.ca,
    ming.lei@redhat.com, osandov@fb.com, jthumshirn@suse.de,
    minwoo.im.dev@gmail.com, damien.lemoal@wdc.com, ktkhai@virtuozzo.com,
    andrea.parri@amarulasolutions.com, hare@suse.com, tj@kernel.org,
    ajay.joshi@wdc.com, sagi@grimberg.me, dsterba@suse.com,
    chaitanya.kulkarni@wdc.com, bvanassche@acm.org, dhowells@redhat.com,
    asml.silence@gmail.com
Date: Tue, 10 Dec 2019 19:56:08 +0300
Message-ID: <157599696813.12112.14140818972910110796.stgit@localhost.localdomain>
In-Reply-To: <157599668662.12112.10184894900037871860.stgit@localhost.localdomain>
References: <157599668662.12112.10184894900037871860.stgit@localhost.localdomain>
User-Agent: StGit/0.19

This operation lets a filesystem notify the device that a range of sectors
has been chosen as a single extent, so the device can do its best to
reflect that (keep the range as a single hunk in its internals, or
represent it as a minimal set of hunks). Put directly, the operation
forwards fallocate(0) requests down to whatever entity the device is
based on.

This may be useful for distributed network filesystems that provide a
block device interface, to optimize block placement across cluster nodes.
Block devices backed by a file (like loop) also benefit, since it lets
them allocate more contiguous extents and batches block allocation
requests. In addition, hypervisors like QEMU may use it for better block
placement.

The patch adds a new blkdev_issue_assign_range() primitive, which is
rather similar to the existing blkdev_issue_{*}() API, and a new queue
limit, max_assign_range_sectors.
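
For illustration only, not part of the diff below: a minimal sketch of an
in-kernel caller, assuming just the API added by this patch. The wrapper
name fs_hint_extent() is hypothetical; blkdev_issue_assign_range(), its
512-byte-sector arguments and the logical-block-size alignment rule come
from the code below.

#include <linux/blkdev.h>

/*
 * Hypothetical caller sketch: hint that the freshly allocated extent
 * [sector, sector + nr_sects) should be kept together by the device.
 * Both arguments are 512-byte sectors and must be aligned to the
 * device's logical block size; a device that does not advertise
 * max_assign_range_sectors turns this into a silent no-op.
 */
static int fs_hint_extent(struct block_device *bdev, sector_t sector,
			  sector_t nr_sects)
{
	return blkdev_issue_assign_range(bdev, sector, nr_sects,
					 GFP_NOFS, 0);
}

Filesystems working on top of sb->s_bdev can instead use the
sb_issue_assign_range() helper added below, which converts filesystem
blocks to 512-byte sectors.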
Signed-off-by: Kirill Tkhai
---
 block/blk-core.c          |  4 +++
 block/blk-lib.c           | 70 +++++++++++++++++++++++++++++++++++++++++++++
 block/blk-merge.c         | 21 ++++++++++++++
 block/bounce.c            |  1 +
 include/linux/bio.h       |  3 ++
 include/linux/blk_types.h |  2 +
 include/linux/blkdev.h    | 29 +++++++++++++++++++
 7 files changed, 130 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index 8d4db6e74496..060cc0ea1246 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -978,6 +978,10 @@ generic_make_request_checks(struct bio *bio)
 		if (!q->limits.max_write_zeroes_sectors)
 			goto not_supported;
 		break;
+	case REQ_OP_ASSIGN_RANGE:
+		if (!q->limits.max_assign_range_sectors)
+			goto not_supported;
+		break;
 	default:
 		break;
 	}
diff --git a/block/blk-lib.c b/block/blk-lib.c
index 5f2c429d4378..fbf780d3ea32 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -252,6 +252,46 @@ static int __blkdev_issue_write_zeroes(struct block_device *bdev,
 	return 0;
 }
 
+static int __blkdev_issue_assign_range(struct block_device *bdev,
+		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
+		struct bio **biop, unsigned flags)
+{
+	struct bio *bio = *biop;
+	unsigned int max_sectors;
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	if (!q)
+		return -ENXIO;
+
+	if (bdev_read_only(bdev))
+		return -EPERM;
+
+	max_sectors = bdev_assign_range_sectors(bdev);
+
+	if (max_sectors == 0)
+		return -EOPNOTSUPP;
+
+	while (nr_sects) {
+		bio = blk_next_bio(bio, 0, gfp_mask);
+		bio->bi_iter.bi_sector = sector;
+		bio_set_dev(bio, bdev);
+		bio->bi_opf = REQ_OP_ASSIGN_RANGE;
+
+		if (nr_sects > max_sectors) {
+			bio->bi_iter.bi_size = max_sectors << 9;
+			nr_sects -= max_sectors;
+			sector += max_sectors;
+		} else {
+			bio->bi_iter.bi_size = nr_sects << 9;
+			nr_sects = 0;
+		}
+		cond_resched();
+	}
+
+	*biop = bio;
+	return 0;
+}
+
 /*
  * Convert a number of 512B sectors to a number of pages.
  * The result is limited to a number of pages that can fit into a BIO.
@@ -405,3 +445,33 @@ int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 	return ret;
 }
 EXPORT_SYMBOL(blkdev_issue_zeroout);
+
+int blkdev_issue_assign_range(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask, unsigned flags)
+{
+	int ret = 0;
+	sector_t bs_mask;
+	struct bio *bio;
+	struct blk_plug plug;
+
+	if (bdev_assign_range_sectors(bdev) == 0)
+		return 0;
+
+	bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
+	if ((sector | nr_sects) & bs_mask)
+		return -EINVAL;
+
+	bio = NULL;
+	blk_start_plug(&plug);
+
+	ret = __blkdev_issue_assign_range(bdev, sector, nr_sects,
+					  gfp_mask, &bio, flags);
+	if (ret == 0 && bio) {
+		ret = submit_bio_wait(bio);
+		bio_put(bio);
+	}
+	blk_finish_plug(&plug);
+
+	return ret;
+}
+EXPORT_SYMBOL(blkdev_issue_assign_range);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index d783bdc4559b..b2ae8b5acd72 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -102,6 +102,22 @@ static struct bio *blk_bio_discard_split(struct request_queue *q,
 	return bio_split(bio, split_sectors, GFP_NOIO, bs);
 }
 
+static struct bio *blk_bio_assign_range_split(struct request_queue *q,
+					      struct bio *bio,
+					      struct bio_set *bs,
+					      unsigned *nsegs)
+{
+	*nsegs = 1;
+
+	if (!q->limits.max_assign_range_sectors)
+		return NULL;
+
+	if (bio_sectors(bio) <= q->limits.max_assign_range_sectors)
+		return NULL;
+
+	return bio_split(bio, q->limits.max_assign_range_sectors, GFP_NOIO, bs);
+}
+
 static struct bio *blk_bio_write_zeroes_split(struct request_queue *q,
 		struct bio *bio, struct bio_set *bs, unsigned *nsegs)
 {
@@ -300,6 +316,10 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
 	case REQ_OP_SECURE_ERASE:
 		split = blk_bio_discard_split(q, *bio, &q->bio_split, nr_segs);
 		break;
+	case REQ_OP_ASSIGN_RANGE:
+		split = blk_bio_assign_range_split(q, *bio, &q->bio_split,
+						   nr_segs);
+		break;
 	case REQ_OP_WRITE_ZEROES:
 		split = blk_bio_write_zeroes_split(q, *bio, &q->bio_split,
 				nr_segs);
@@ -382,6 +402,7 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_ASSIGN_RANGE:
 		return 0;
 	case REQ_OP_WRITE_SAME:
 		return 1;
diff --git a/block/bounce.c b/block/bounce.c
index f8ed677a1bf7..017bedba7b23 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -255,6 +255,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 
 	switch (bio_op(bio)) {
 	case REQ_OP_DISCARD:
+	case REQ_OP_ASSIGN_RANGE:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
 		break;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 3cdb84cdc488..cf235c997e45 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -63,6 +63,7 @@ static inline bool bio_has_data(struct bio *bio)
 	if (bio &&
 	    bio->bi_iter.bi_size &&
 	    bio_op(bio) != REQ_OP_DISCARD &&
+	    bio_op(bio) != REQ_OP_ASSIGN_RANGE &&
 	    bio_op(bio) != REQ_OP_SECURE_ERASE &&
 	    bio_op(bio) != REQ_OP_WRITE_ZEROES)
 		return true;
@@ -73,6 +74,7 @@ static inline bool bio_no_advance_iter(struct bio *bio)
 {
 	return bio_op(bio) == REQ_OP_DISCARD ||
+	       bio_op(bio) == REQ_OP_ASSIGN_RANGE ||
 	       bio_op(bio) == REQ_OP_SECURE_ERASE ||
 	       bio_op(bio) == REQ_OP_WRITE_SAME ||
 	       bio_op(bio) == REQ_OP_WRITE_ZEROES;
@@ -184,6 +186,7 @@ static inline unsigned bio_segments(struct bio *bio)
 
 	switch (bio_op(bio)) {
 	case REQ_OP_DISCARD:
+	case REQ_OP_ASSIGN_RANGE:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
 		return 0;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 70254ae11769..f03dcf25c831 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -296,6 +296,8 @@ enum req_opf {
 	REQ_OP_ZONE_CLOSE	= 11,
 	/* Transition a zone to full */
 	REQ_OP_ZONE_FINISH	= 12,
+	/* assign sector range */
+	REQ_OP_ASSIGN_RANGE	= 15,
 
 	/* SCSI passthrough using struct scsi_request */
 	REQ_OP_SCSI_IN		= 32,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 3cd1853dbdac..9af70120fe57 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -336,6 +336,7 @@ struct queue_limits {
 	unsigned int		max_hw_discard_sectors;
 	unsigned int		max_write_same_sectors;
 	unsigned int		max_write_zeroes_sectors;
+	unsigned int		max_assign_range_sectors;
 	unsigned int		discard_granularity;
 	unsigned int		discard_alignment;
 
@@ -995,6 +996,10 @@ static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
 		return min(q->limits.max_discard_sectors,
 			   UINT_MAX >> SECTOR_SHIFT);
 
+	if (unlikely(op == REQ_OP_ASSIGN_RANGE))
+		return min(q->limits.max_assign_range_sectors,
+			   UINT_MAX >> SECTOR_SHIFT);
+
 	if (unlikely(op == REQ_OP_WRITE_SAME))
 		return q->limits.max_write_same_sectors;
 
@@ -1028,6 +1033,7 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
 	if (!q->limits.chunk_sectors ||
 	    req_op(rq) == REQ_OP_DISCARD ||
+	    req_op(rq) == REQ_OP_ASSIGN_RANGE ||
 	    req_op(rq) == REQ_OP_SECURE_ERASE)
 		return blk_queue_get_max_sectors(q, req_op(rq));
 
@@ -1225,6 +1231,8 @@ extern int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 		unsigned flags);
 extern int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp_mask, unsigned flags);
+extern int blkdev_issue_assign_range(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask, unsigned flags);
 
 static inline int sb_issue_discard(struct super_block *sb, sector_t block,
 		sector_t nr_blocks, gfp_t gfp_mask, unsigned long flags)
@@ -1247,6 +1255,17 @@ static inline int sb_issue_zeroout(struct super_block *sb, sector_t block,
 			gfp_mask, 0);
 }
 
+static inline int sb_issue_assign_range(struct super_block *sb, sector_t block,
+		sector_t nr_blocks, gfp_t gfp_mask)
+{
+	return blkdev_issue_assign_range(sb->s_bdev,
+					 block << (sb->s_blocksize_bits -
+						   SECTOR_SHIFT),
+					 nr_blocks << (sb->s_blocksize_bits -
+						       SECTOR_SHIFT),
+					 gfp_mask, 0);
+}
+
 extern int blk_verify_command(unsigned char *cmd, fmode_t mode);
 
 enum blk_default_limits {
@@ -1428,6 +1447,16 @@ static inline unsigned int bdev_write_zeroes_sectors(struct block_device *bdev)
 	return 0;
 }
 
+static inline unsigned int bdev_assign_range_sectors(struct block_device *bdev)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	if (q)
+		return q->limits.max_assign_range_sectors;
+
+	return 0;
+}
+
 static inline enum blk_zoned_model bdev_zoned_model(struct block_device *bdev)
 {
 	struct request_queue *q = bdev_get_queue(bdev);

From patchwork Tue Dec 10 16:56:13 2019
X-Patchwork-Submitter: Kirill Tkhai
X-Patchwork-Id: 1207164
Subject: [PATCH RFC 2/3] loop: Forward REQ_OP_ASSIGN_RANGE into fallocate(0)
From: Kirill Tkhai
To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-ext4@vger.kernel.org
Cc: axboe@kernel.dk, tytso@mit.edu, adilger.kernel@dilger.ca,
    ming.lei@redhat.com, osandov@fb.com, jthumshirn@suse.de,
    minwoo.im.dev@gmail.com, damien.lemoal@wdc.com, ktkhai@virtuozzo.com,
    andrea.parri@amarulasolutions.com, hare@suse.com, tj@kernel.org,
    ajay.joshi@wdc.com, sagi@grimberg.me, dsterba@suse.com,
    chaitanya.kulkarni@wdc.com, bvanassche@acm.org, dhowells@redhat.com,
    asml.silence@gmail.com
Date: Tue, 10 Dec 2019 19:56:13 +0300
Message-ID: <157599697369.12112.10138136904533871162.stgit@localhost.localdomain>
In-Reply-To: <157599668662.12112.10184894900037871860.stgit@localhost.localdomain>
References: <157599668662.12112.10184894900037871860.stgit@localhost.localdomain>
User-Agent: StGit/0.19

Send an fallocate(0) request to the underlying filesystem after the upper
filesystem has sent a REQ_OP_ASSIGN_RANGE request to the block device.

Signed-off-by: Kirill Tkhai
---
 drivers/block/loop.c |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 739b372a5112..d99d9193de7a 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -609,6 +609,8 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
 				FALLOC_FL_PUNCH_HOLE);
 	case REQ_OP_DISCARD:
 		return lo_fallocate(lo, rq, pos, FALLOC_FL_PUNCH_HOLE);
+	case REQ_OP_ASSIGN_RANGE:
+		return lo_fallocate(lo, rq, pos, 0);
 	case REQ_OP_WRITE:
 		if (lo->transfer)
 			return lo_write_transfer(lo, rq, pos);
@@ -875,6 +877,7 @@ static void loop_config_discard(struct loop_device *lo)
 	    lo->lo_encrypt_key_size) {
 		q->limits.discard_granularity = 0;
 		q->limits.discard_alignment = 0;
+		q->limits.max_assign_range_sectors = 0;
 		blk_queue_max_discard_sectors(q, 0);
 		blk_queue_max_write_zeroes_sectors(q, 0);
 		blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
@@ -883,6 +886,7 @@ static void loop_config_discard(struct loop_device *lo)
 
 	q->limits.discard_granularity = inode->i_sb->s_blocksize;
 	q->limits.discard_alignment = 0;
+	q->limits.max_assign_range_sectors = UINT_MAX >> 9;
 
 	blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
 	blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
@@ -1917,6 +1921,7 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 	case REQ_OP_FLUSH:
 	case REQ_OP_DISCARD:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_ASSIGN_RANGE:
 		cmd->use_aio = false;
 		break;
 	default:
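
For illustration only, not part of the diff above: a rough sketch of what
opting in could look like for another bio-based driver. The my_drv_*
function names are hypothetical; the queue limit and the data-less
handling of REQ_OP_ASSIGN_RANGE mirror the loop changes in this patch.

#include <linux/blkdev.h>

/* Hypothetical driver sketch, modelled on the loop changes above. */
static void my_drv_enable_assign_range(struct request_queue *q)
{
	/*
	 * Advertise support in 512-byte sectors; a limit of 0 keeps
	 * REQ_OP_ASSIGN_RANGE rejected in generic_make_request_checks().
	 */
	q->limits.max_assign_range_sectors = UINT_MAX >> 9;
}

static blk_status_t my_drv_handle_rq(struct request *rq)
{
	switch (req_op(rq)) {
	case REQ_OP_ASSIGN_RANGE:
		/*
		 * As with REQ_OP_DISCARD, the bio carries no data pages;
		 * only blk_rq_pos() and blk_rq_sectors() are meaningful.
		 * Forward the range hint to the backing store here
		 * (loop forwards it as fallocate(0)).
		 */
		return BLK_STS_OK;
	default:
		return BLK_STS_NOTSUPP;
	}
}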
From patchwork Tue Dec 10 16:56:19 2019
X-Patchwork-Submitter: Kirill Tkhai
X-Patchwork-Id: 1207166
Subject: [PATCH RFC 3/3] ext4: Notify block device about fallocate(0)-assigned blocks
From: Kirill Tkhai
To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-ext4@vger.kernel.org
Cc: axboe@kernel.dk, tytso@mit.edu, adilger.kernel@dilger.ca,
    ming.lei@redhat.com, osandov@fb.com, jthumshirn@suse.de,
    minwoo.im.dev@gmail.com, damien.lemoal@wdc.com, ktkhai@virtuozzo.com,
    andrea.parri@amarulasolutions.com, hare@suse.com, tj@kernel.org,
    ajay.joshi@wdc.com, sagi@grimberg.me, dsterba@suse.com,
    chaitanya.kulkarni@wdc.com, bvanassche@acm.org, dhowells@redhat.com,
    asml.silence@gmail.com
Date: Tue, 10 Dec 2019 19:56:19 +0300
Message-ID: <157599697948.12112.3846364542350011691.stgit@localhost.localdomain>
In-Reply-To: <157599668662.12112.10184894900037871860.stgit@localhost.localdomain>
References: <157599668662.12112.10184894900037871860.stgit@localhost.localdomain>
User-Agent: StGit/0.19

Call sb_issue_assign_range() after an extent range has been allocated on
user request. Hopefully this helps the block device maintain its internals
in the best way, where applicable.
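
For illustration only, not part of the diff below: the request is
triggered by an ordinary fallocate() with mode 0 from userspace. The test
program below is hypothetical; the mount point, file name and size are
arbitrary values.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Hypothetical userspace test: preallocate 128M in a file on an ext4
 * filesystem.  With this series, if the filesystem sits on a device
 * that advertises max_assign_range_sectors, the allocated extents are
 * also sent down as REQ_OP_ASSIGN_RANGE.
 */
int main(void)
{
	int fd = open("/mnt/test/prealloc", O_CREAT | O_RDWR, 0644);

	if (fd < 0 || fallocate(fd, 0, 0, 128 << 20) < 0) {
		perror("open/fallocate");
		return EXIT_FAILURE;
	}
	close(fd);
	return EXIT_SUCCESS;
}

In the ext4-on-loop case from patch 2/3, that range then reaches the
backing file as an fallocate(0) call.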
Signed-off-by: Kirill Tkhai
---
 fs/ext4/ext4.h    |    1 +
 fs/ext4/extents.c |   11 +++++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index f8578caba40d..fe2263c00c0e 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -622,6 +622,7 @@ enum {
 	 * allows jbd2 to avoid submitting data before commit. */
 #define EXT4_GET_BLOCKS_IO_SUBMIT		0x0400
+#define EXT4_GET_BLOCKS_SUBMIT_ALLOC		0x0800
 
 /*
  * The bit position of these flags must not overlap with any of the
  * EXT4_GET_BLOCKS_*. They are used by ext4_find_extent(),
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 0e8708b77da6..5f4fc660cbb1 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4490,6 +4490,13 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
 		ar.len = allocated;
 
 got_allocated_blocks:
+	if ((flags & EXT4_GET_BLOCKS_SUBMIT_ALLOC) && sbi->fallocate) {
+		err = sb_issue_assign_range(inode->i_sb, newblock,
+				EXT4_C2B(sbi, allocated_clusters), GFP_NOFS);
+		if (err)
+			goto free_on_err;
+	}
+
 	/* try to insert new extent into found leaf and return */
 	ext4_ext_store_pblock(&newex, newblock + offset);
 	newex.ee_len = cpu_to_le16(ar.len);
@@ -4506,7 +4513,7 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
 	if (!err)
 		err = ext4_ext_insert_extent(handle, inode,
 					     &path, &newex, flags);
-
+free_on_err:
 	if (err && free_on_err) {
 		int fb_flags = flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE ?
 			EXT4_FREE_BLOCKS_NO_QUOT_UPDATE : 0;
@@ -4926,7 +4933,7 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
 	lblk = offset >> blkbits;
 	max_blocks = EXT4_MAX_BLOCKS(len, offset, blkbits);
 
-	flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT;
+	flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT | EXT4_GET_BLOCKS_SUBMIT_ALLOC;
 	if (mode & FALLOC_FL_KEEP_SIZE)
 		flags |= EXT4_GET_BLOCKS_KEEP_SIZE;