From patchwork Fri Jan 18 05:39:47 2013
X-Patchwork-Submitter: Theodore Ts'o
X-Patchwork-Id: 213455
Date: Fri, 18 Jan 2013 00:39:47 -0500
From: Theodore Ts'o
To: Zheng Liu
Cc: linux-ext4@vger.kernel.org, Jan Kara, Zheng Liu
Subject: Re: [PATCH 7/7 v2] ext4: reclaim extents from extent status tree
Message-ID: <20130118053947.GD13785@thunk.org>
References: <1357901627-3068-1-git-send-email-wenqing.lz@taobao.com>
 <1357901627-3068-8-git-send-email-wenqing.lz@taobao.com>
 <20130118051921.GC13785@thunk.org>
In-Reply-To: <20130118051921.GC13785@thunk.org>
X-Mailing-List: linux-ext4@vger.kernel.org

On Fri, Jan 18, 2013 at 12:19:21AM -0500, Theodore Ts'o wrote:
> I'm a bit concerned we might be too aggressive,
> because there are two ways that items can be freed from the
> extent_status tree.  One is if the inode is not used at all, and when
> we release the inode, we'll drop all of the entries in the
> extent_status_tree for that inode.  The second way is via the shrinker
> which we've registered.

If we use the sb->s_op->free_cached_objects() approach, something like
the following change to prune_super() in fs/super.c might address the
above concern:

What do folks think?
- Ted

---
diff --git a/fs/super.c b/fs/super.c
index 12f1237..fb57bd2 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -80,6 +80,7 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
 	if (sc->nr_to_scan) {
 		int	dentries;
 		int	inodes;
+		int	fs_to_scan = 0;
 
 		/* proportion the scan between the caches */
 		dentries = (sc->nr_to_scan * sb->s_nr_dentry_unused) /
@@ -87,7 +88,7 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
 		inodes = (sc->nr_to_scan * sb->s_nr_inodes_unused) /
 							total_objects;
 		if (fs_objects)
-			fs_objects = (sc->nr_to_scan * fs_objects) /
+			fs_to_scan = (sc->nr_to_scan * fs_objects) /
 							total_objects;
 		/*
 		 * prune the dcache first as the icache is pinned by it, then
@@ -96,8 +97,23 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
 		prune_dcache_sb(sb, dentries);
 		prune_icache_sb(sb, inodes);
 
-		if (fs_objects && sb->s_op->free_cached_objects) {
-			sb->s_op->free_cached_objects(sb, fs_objects);
+		/*
+		 * If as a result of pruning the icache, we released some
+		 * of the fs_objects, give credit to the fact and
+		 * reduce the number of fs objects that we should try
+		 * to release.
+		 */
+		if (fs_to_scan) {
+			int fs_objects_now = sb->s_op->nr_cached_objects(sb);
+
+			if (fs_objects_now < fs_objects)
+				fs_to_scan -= fs_objects - fs_objects_now;
+			if (fs_to_scan < 0)
+				fs_to_scan = 0;
+		}
+
+		if (fs_to_scan && sb->s_op->free_cached_objects) {
+			sb->s_op->free_cached_objects(sb, fs_to_scan);
 			fs_objects = sb->s_op->nr_cached_objects(sb);
 		}
 		total_objects = sb->s_nr_dentry_unused +
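
For illustration only (a rough sketch, not part of the patch above and not
existing ext4 code): the ext4 side of the hookup might look roughly like
this if the extent status tree were exposed through ->nr_cached_objects /
->free_cached_objects.  ext4_es_count_sb() and ext4_es_shrink_sb() are
assumed helper names (they would count and trim the extent status entries
cached for this superblock); only the super_operations wiring follows the
actual 3.8-era VFS signatures.

/* Sketch: report how many extent status entries this sb is caching. */
static int ext4_nr_cached_objects(struct super_block *sb)
{
	return ext4_es_count_sb(sb);		/* assumed helper */
}

/* Sketch: try to reclaim up to nr_to_scan extent status entries. */
static void ext4_free_cached_objects(struct super_block *sb, int nr_to_scan)
{
	ext4_es_shrink_sb(sb, nr_to_scan);	/* assumed helper */
}

static const struct super_operations ext4_sops = {
	/* ... existing ext4 super_operations ... */
	.nr_cached_objects	= ext4_nr_cached_objects,
	.free_cached_objects	= ext4_free_cached_objects,
};

With something like that in place, prune_super() above would drive the
reclaim in proportion to the dcache/icache pressure on the superblock, and
the fs_to_scan credit keeps us from double-counting entries that were
already dropped when prune_icache_sb() evicted their inodes.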