From: Gioh Kim <gioh.kim@lge.com>
Date: Tue, 19 Aug 2014 15:52:38 +0900
Message-ID: <53F2F436.4070307@lge.com>
In-Reply-To: <53F2F3E6.1030901@lge.com>
To: Alexander Viro, Andrew Morton, Paul E. McKenney, Peter Zijlstra,
 Jan Kara, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 Theodore Ts'o, Andreas Dilger, linux-ext4@vger.kernel.org
Cc: Minchan Kim, Joonsoo Kim, 이건호
Subject: [PATCHv2 1/3] fs/buffer.c: allocate buffer cache with user specific flag

Buffer cache pages are allocated from the movable area because they are
normally referenced only briefly and released soon afterwards. Some
filesystems, however, hold buffer cache pages for a long time, and such
long-lived pages can disturb page migration.

Introduce a new API that allocates the buffer cache with a
caller-specified gfp flag. For instance, if the caller passes a gfp of
zero, the buffer cache is allocated from the non-movable area.

Signed-off-by: Gioh Kim <gioh.kim@lge.com>
---
 fs/buffer.c                 |   52 +++++++++++++++++++++++++++++--------------
 include/linux/buffer_head.h |   12 +++++++++-
 2 files changed, 46 insertions(+), 18 deletions(-)
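To illustrate the intended use (the caller below is hypothetical, not
part of this patch): a filesystem that pins metadata buffers until
unmount can pass a gfp of 0 so the backing pages never come from the
movable area, while short-lived reads keep the old default.

/*
 * Hypothetical caller, for illustration only.  A gfp of 0 keeps the
 * backing page out of the movable area, so a buffer head that stays
 * referenced until unmount cannot block page migration.
 */
#include <linux/buffer_head.h>

static struct buffer_head *
read_pinned_metadata(struct super_block *sb, sector_t block)
{
	return sb_bread_gfp(sb, block, 0);	/* non-movable page */
}

static struct buffer_head *
read_transient_data(struct super_block *sb, sector_t block)
{
	return sb_bread(sb, block);		/* movable, as before */
}

Existing callers are unaffected: __getblk(), __bread() and sb_bread()
keep passing __GFP_MOVABLE.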
diff --git a/fs/buffer.c b/fs/buffer.c
index 8f05111..14f2f21 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -993,7 +993,7 @@ init_page_buffers(struct page *page, struct block_device *bdev,
  */
 static int
 grow_dev_page(struct block_device *bdev, sector_t block,
-		pgoff_t index, int size, int sizebits)
+		pgoff_t index, int size, int sizebits, gfp_t gfp)
 {
 	struct inode *inode = bdev->bd_inode;
 	struct page *page;
@@ -1002,10 +1002,10 @@ grow_dev_page(struct block_device *bdev, sector_t block,
 	int ret = 0;		/* Will call free_more_memory() */
 	gfp_t gfp_mask;
 
-	gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
-	gfp_mask |= __GFP_MOVABLE;
+	gfp_mask = (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS) | gfp;
+
 	/*
-	 * XXX: __getblk_slow() can not really deal with failure and
+	 * XXX: __getblk_gfp() can not really deal with failure and
 	 * will endlessly loop on improvised global reclaim.  Prefer
 	 * looping in the allocator rather than here, at least that
 	 * code knows what it's doing.
@@ -1058,7 +1058,7 @@ failed:
  * that page was dirty, the buffers are set dirty also.
  */
 static int
-grow_buffers(struct block_device *bdev, sector_t block, int size)
+grow_buffers(struct block_device *bdev, sector_t block, int size, gfp_t gfp)
 {
 	pgoff_t index;
 	int sizebits;
@@ -1085,11 +1085,12 @@ grow_buffers(struct block_device *bdev, sector_t block, int size)
 	}
 
 	/* Create a page with the proper size buffers.. */
-	return grow_dev_page(bdev, block, index, size, sizebits);
+	return grow_dev_page(bdev, block, index, size, sizebits, gfp);
 }
 
-static struct buffer_head *
-__getblk_slow(struct block_device *bdev, sector_t block, int size)
+struct buffer_head *
+__getblk_gfp(struct block_device *bdev, sector_t block,
+	     unsigned size, gfp_t gfp)
 {
 	/* Size must be multiple of hard sectorsize */
 	if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
@@ -1111,13 +1112,14 @@ __getblk_slow(struct block_device *bdev, sector_t block, int size)
 		if (bh)
 			return bh;
 
-		ret = grow_buffers(bdev, block, size);
+		ret = grow_buffers(bdev, block, size, gfp);
 		if (ret < 0)
 			return NULL;
 		if (ret == 0)
 			free_more_memory();
 	}
 }
+EXPORT_SYMBOL(__getblk_gfp);
 
 /*
  * The relationship between dirty buffers and dirty pages:
@@ -1381,12 +1383,7 @@ EXPORT_SYMBOL(__find_get_block);
 struct buffer_head *
 __getblk(struct block_device *bdev, sector_t block, unsigned size)
 {
-	struct buffer_head *bh = __find_get_block(bdev, block, size);
-
-	might_sleep();
-	if (bh == NULL)
-		bh = __getblk_slow(bdev, block, size);
-	return bh;
+	return __getblk_gfp(bdev, block, size, __GFP_MOVABLE);
 }
 EXPORT_SYMBOL(__getblk);
 
@@ -1410,18 +1407,39 @@ EXPORT_SYMBOL(__breadahead);
  * @size: size (in bytes) to read
  *
  * Reads a specified block, and returns buffer head that contains it.
+ * The page cache is allocated from the movable area so that it can be migrated.
  * It returns NULL if the block was unreadable.
  */
 struct buffer_head *
 __bread(struct block_device *bdev, sector_t block, unsigned size)
 {
-	struct buffer_head *bh = __getblk(bdev, block, size);
+	return __bread_gfp(bdev, block, size, __GFP_MOVABLE);
+}
+EXPORT_SYMBOL(__bread);
+
+/**
+ * __bread_gfp() - reads a specified block and returns the bh
+ * @bdev: the block_device to read from
+ * @block: number of block
+ * @size: size (in bytes) to read
+ * @gfp: page allocation flag
+ *
+ * Reads a specified block, and returns buffer head that contains it.
+ * If @gfp is zero, the page cache is allocated from the non-movable
+ * area so that it does not hinder page migration.
+ * It returns NULL if the block was unreadable.
+ */
+struct buffer_head *
+__bread_gfp(struct block_device *bdev, sector_t block,
+		   unsigned size, gfp_t gfp)
+{
+	struct buffer_head *bh = __getblk_gfp(bdev, block, size, gfp);
 
 	if (likely(bh) && !buffer_uptodate(bh))
 		bh = __bread_slow(bh);
 	return bh;
 }
-EXPORT_SYMBOL(__bread);
+EXPORT_SYMBOL(__bread_gfp);
 
 /*
  * invalidate_bh_lrus() is called rarely - but not only at unmount.
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 324329c..a1d73fd 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -177,10 +177,14 @@ struct buffer_head *__find_get_block(struct block_device *bdev, sector_t block,
 			    unsigned size);
 struct buffer_head *__getblk(struct block_device *bdev, sector_t block,
 			    unsigned size);
+struct buffer_head *__getblk_gfp(struct block_device *bdev, sector_t block,
+			    unsigned size, gfp_t gfp);
 void __brelse(struct buffer_head *);
 void __bforget(struct buffer_head *);
 void __breadahead(struct block_device *, sector_t block, unsigned int size);
 struct buffer_head *__bread(struct block_device *, sector_t block, unsigned size);
+struct buffer_head *__bread_gfp(struct block_device *,
+				sector_t block, unsigned size, gfp_t gfp);
 void invalidate_bh_lrus(void);
 struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
 void free_buffer_head(struct buffer_head * bh);
@@ -295,7 +299,13 @@ static inline void bforget(struct buffer_head *bh)
 static inline struct buffer_head *
 sb_bread(struct super_block *sb, sector_t block)
 {
-	return __bread(sb->s_bdev, block, sb->s_blocksize);
+	return __bread_gfp(sb->s_bdev, block, sb->s_blocksize, __GFP_MOVABLE);
+}
+
+static inline struct buffer_head *
+sb_bread_gfp(struct super_block *sb, sector_t block, gfp_t gfp)
+{
+	return __bread_gfp(sb->s_bdev, block, sb->s_blocksize, gfp);
 }
 
 static inline void
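For reference, a small stand-alone sketch of the mask composition that
grow_dev_page() now performs; the flag values below are illustrative
stand-ins, not the kernel's real definitions:

/*
 * Illustration only: mimics the patched line in grow_dev_page(),
 *   gfp_mask = (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS) | gfp;
 * Flag values are mock stand-ins so this compiles in userspace.
 */
#include <stdio.h>

#define FAKE_GFP_FS		0x80u
#define FAKE_GFP_IO		0x40u
#define FAKE_GFP_MOVABLE	0x08u

static unsigned int compose_mask(unsigned int mapping_mask, unsigned int gfp)
{
	return (mapping_mask & ~FAKE_GFP_FS) | gfp;
}

int main(void)
{
	/* A typical bdev mapping mask carries no __GFP_MOVABLE of its own. */
	unsigned int mapping_mask = FAKE_GFP_FS | FAKE_GFP_IO;

	/* __getblk()/__bread() path: page stays migratable. */
	printf("movable case: %#x\n",
	       compose_mask(mapping_mask, FAKE_GFP_MOVABLE));	/* 0x48 */

	/* gfp == 0: no MOVABLE bit, page comes from the non-movable area. */
	printf("pinned case:  %#x\n",
	       compose_mask(mapping_mask, 0));			/* 0x40 */
	return 0;
}

Because the caller's gfp is simply OR-ed in after __GFP_FS is masked
off, a gfp of 0 reproduces the old mask minus __GFP_MOVABLE, which is
exactly what a long-lived buffer wants.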