From patchwork Wed Sep 3 03:37:38 2014
X-Patchwork-Submitter: Theodore Ts'o
X-Patchwork-Id: 385379
Date: Tue, 2 Sep 2014 23:37:38 -0400
From: Theodore Ts'o
To: Jan Kara
Cc: Zheng Liu, linux-ext4@vger.kernel.org, Andreas Dilger, Zheng Liu
Subject: Re: [PATCH v3 4/6] ext4: change lru to round-robin in extent status tree shrinker
Message-ID: <20140903033738.GB2504@thunk.org>
References: <1407382553-24256-1-git-send-email-wenqing.lz@taobao.com>
 <1407382553-24256-5-git-send-email-wenqing.lz@taobao.com>
 <20140827150121.GC22211@quack.suse.cz>
In-Reply-To: <20140827150121.GC22211@quack.suse.cz>
On Wed, Aug 27, 2014 at 05:01:21PM +0200, Jan Kara wrote:
> On Thu 07-08-14 11:35:51, Zheng Liu wrote:
> This comment is not directly related to this patch but looking into the
> code made me think about it. It seems ugly to call __es_shrink() from
> internals of ext4_es_insert_extent(). Also thinking about locking
> implications makes me shudder a bit and finally this may make the pressure
> on the extent cache artificially bigger because MM subsystem is not aware
> of the shrinking you do here. I would prefer to leave shrinking on
> the slab subsystem itself.

If we fail the allocation, we only try to free at most one extent, so
I don't think it's going to make the slab system that confused; it's
the equivalent of freeing an entry and then allocating it again.

> Now GFP_ATOMIC allocation we use for extent cache makes it hard for the
> slab subsystem and actually we could fairly easily use GFP_NOFS. We can just
> allocate the structure before grabbing i_es_lock with GFP_NOFS allocation and
> in case we don't need the structure, we can just free it again. It may
> introduce some overhead from unnecessary alloc/free but things get simpler
> that way (no need for that locked_ei argument for __es_shrink(), no need
> for internal calls to __es_shrink() from within the filesystem).

The tricky bit is that even __es_remove_extent() can require a memory
allocation, and in the worst case, it's possible that
ext4_es_insert_extent() can require *two* allocations. For example, if
you start with a single large extent, and then need to insert a
subregion with a different set of flags into the already existing
extent, thus resulting in three extents where you started with one.
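The worst case above is easy to see with a little arithmetic. Here is a
hypothetical user-space sketch (the struct and function names are invented
for illustration, not the actual ext4 code) of how many extent-status nodes
can be needed after punching a subrange into one cached extent:

```c
#include <assert.h>

/* Hypothetical sketch, not the kernel code: inserting a subrange with
 * different flags into a single cached extent can leave up to three
 * extents behind, i.e. up to two extra node allocations beyond the node
 * being inserted. */
struct es_node {
	unsigned int lblk;	/* first logical block */
	unsigned int len;	/* number of blocks */
	unsigned int flags;
};

/* How many nodes describe `old` after `sub` has been punched into it? */
static int nodes_after_insert(struct es_node old, struct es_node sub)
{
	int n = 1;				/* the inserted subrange itself */

	if (sub.lblk > old.lblk)
		n++;				/* remainder before the subrange */
	if (sub.lblk + sub.len < old.lblk + old.len)
		n++;				/* remainder after the subrange */
	return n;
}
```

Inserting blocks [10, 30) into a cached extent [0, 100) leaves three
extents, which is why preallocating a single node before taking i_es_lock
is not always sufficient for the insert path.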
And in some cases, no allocation is required at all....

One thing that can help is that so long as we haven't done something
critical, such as erase a delalloc region, we can always release the
write lock, retry the allocation with GFP_NOFS, and then try the
operation again.

So we may need to think a bit about what's the best way to improve
this, although it is a separate topic from making the shrinker less
heavyweight.

> Nothing seems to prevent reclaim from freeing the inode after we drop
> s_es_lock. So we could use freed memory. I don't think we want to pin the
> inode here by grabbing a refcount since we don't want to deal with iput()
> in the shrinker (that could mean having to delete the inode from shrinker
> context). But what we could do it to grab ei->i_es_lock before dropping
> s_es_lock. Since ext4_es_remove_extent() called from ext4_clear_inode()
> always grabs i_es_lock, we are protected from inode being freed while we
> hold that lock. But please add comments about this both to the
> __es_shrink() and ext4_es_remove_extent().

Something like this should work, yes?

					- Ted

diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
index 25da1bf..4768f7f 100644
--- a/fs/ext4/extents_status.c
+++ b/fs/ext4/extents_status.c
@@ -981,32 +981,27 @@ retry:
 		list_del_init(&ei->i_es_list);
 		sbi->s_es_nr_inode--;
-		spin_unlock(&sbi->s_es_lock);
+		if (ei->i_es_shk_nr == 0)
+			continue;
 
 		/*
 		 * Normally we try hard to avoid shrinking precached inodes,
 		 * but we will as a last resort.
 		 */
-		if (!retried && ext4_test_inode_state(&ei->vfs_inode,
-					EXT4_STATE_EXT_PRECACHED)) {
+		if ((!retried && ext4_test_inode_state(&ei->vfs_inode,
+					EXT4_STATE_EXT_PRECACHED)) ||
+		    ei == locked_ei ||
+		    !write_trylock(&ei->i_es_lock)) {
 			nr_skipped++;
-			spin_lock(&sbi->s_es_lock);
-			__ext4_es_list_add(sbi, ei);
-			continue;
-		}
-
-		if (ei->i_es_shk_nr == 0) {
-			spin_lock(&sbi->s_es_lock);
-			continue;
-		}
-
-		if (ei == locked_ei || !write_trylock(&ei->i_es_lock)) {
-			nr_skipped++;
-			spin_lock(&sbi->s_es_lock);
 			__ext4_es_list_add(sbi, ei);
+			if (spin_is_contended(&sbi->s_es_lock)) {
+				spin_unlock(&sbi->s_es_lock);
+				spin_lock(&sbi->s_es_lock);
+			}
 			continue;
 		}
-
+		/* we only release s_es_lock once we have i_es_lock */
+		spin_unlock(&sbi->s_es_lock);
 		shrunk = __es_try_to_reclaim_extents(ei, nr_to_scan);
 		write_unlock(&ei->i_es_lock);