From patchwork Wed May 13 18:13:44 2009
X-Patchwork-Submitter: bugzilla-daemon@bugzilla.kernel.org
X-Patchwork-Id: 27161
From: bugzilla-daemon@bugzilla.kernel.org
To: linux-ext4@vger.kernel.org
Subject: [Bug 13232] ext3/4 with synchronous writes gets wedged by Postfix
Date: Wed, 13 May 2009 18:13:44 GMT
Message-Id: <200905131813.n4DIDigN000780@demeter.kernel.org>
X-Bugzilla-Product: File System
X-Bugzilla-Component: ext3
X-Bugzilla-Severity: normal
X-Bugzilla-Priority: P1
X-Bugzilla-Status: NEW
X-Bugzilla-Assigned-To: fs_ext3@kernel-bugs.osdl.org
X-Mailing-List: linux-ext4@vger.kernel.org

http://bugzilla.kernel.org/show_bug.cgi?id=13232

--- Comment #7 from Anonymous Emailer  2009-05-13 18:13:42 ---
Reply-To: viro@ZenIV.linux.org.uk

On Wed, May 13, 2009 at 05:52:54PM +0100, Al Viro wrote:
> On Wed, May 13, 2009 at 03:48:02PM +0200, Jan Kara wrote:
> > > Here, we have started a transaction in ext3_create() and then wait in
> > > find_inode_fast() for I_FREEING to be cleared (obviously we have
> > > reallocated the inode and squeezed the allocation before journal_stop()
> > > from the delete was called).
> > > Nasty deadlock and I don't see how to fix it now - have to go home for
> > > today... Tomorrow I'll have a look what we can do about it.
> > OK, the deadlock has been introduced by the ext3 variant of
> > 261bca86ed4f7f391d1938167624e78da61dcc6b (adding Al to CC). The deadlock
> > is really tough to avoid - we have to first allocate the inode on disk so
> > that we know the inode number.
> > For this we need the transaction open, but we cannot afford waiting for
> > an old inode with the same INO to be freed while we have the transaction
> > open, because of the above deadlock. So we'd have to wait for inode
> > release only after everything is done and we have closed the transaction.
> > But that would mean reordering a lot of code in ext3/namei.c so that all
> > the dcache handling is done after all the IO is done.
> > Hmm, maybe we could change the delete side of the deadlock, but that's
> > going to be tricky as well :(.
> > Al, any idea if we could somehow get away without waiting on I_FREEING?
>
> At which point do we actually run into the deadlock on the delete side?
> We could, in principle, skip everything like that in insert_inode_locked(),
> but I would rather avoid the "two inodes in icache at the same time, with
> the same inumber" situations completely. We might get away with that, since
> everything else *will* wait, so we can afford a bunch of inodes past the
> point in foo_delete_inode() that has cleared it in the bitmap + a new
> locked one, but if it's at all possible to avoid, I'd rather avoid it.

OK, that's probably the easiest way to do it, as much as I don't like it...
Since iget() et al. will not accept I_FREEING (they will wait for it to go
away and restart), and since we'd better have serialization between new/free
on fs data structures anyway, we can afford to simply skip I_FREEING et al.
in insert_inode_locked(). We do that from new_inode, so it won't race with
free_inode in any interesting ways, and it won't race with iget (of any
origin; nfsd, or a lookup in case of fs corruption) since both will still
wait for I_LOCK.

Tentative patch follows; folks, I would very much like review on this one,
since I'm far too low on caffeine and the area is nasty.

diff --git a/fs/inode.c b/fs/inode.c
index 9d26490..4406952 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -1053,13 +1053,22 @@ int insert_inode_locked(struct inode *inode)
 	struct super_block *sb = inode->i_sb;
 	ino_t ino = inode->i_ino;
 	struct hlist_head *head = inode_hashtable + hash(sb, ino);
-	struct inode *old;
 
 	inode->i_state |= I_LOCK|I_NEW;
 	while (1) {
+		struct hlist_node *node;
+		struct inode *old = NULL;
 		spin_lock(&inode_lock);
-		old = find_inode_fast(sb, head, ino);
-		if (likely(!old)) {
+		hlist_for_each_entry(old, node, head, i_hash) {
+			if (old->i_ino != ino)
+				continue;
+			if (old->i_sb != sb)
+				continue;
+			if (old->i_state & (I_FREEING|I_CLEAR|I_WILL_FREE))
+				continue;
+			break;
+		}
+		if (likely(!node)) {
 			hlist_add_head(&inode->i_hash, head);
 			spin_unlock(&inode_lock);
 			return 0;
@@ -1081,14 +1090,24 @@ int insert_inode_locked4(struct inode *inode, unsigned long hashval,
 {
 	struct super_block *sb = inode->i_sb;
 	struct hlist_head *head = inode_hashtable + hash(sb, hashval);
-	struct inode *old;
 
 	inode->i_state |= I_LOCK|I_NEW;
 
 	while (1) {
+		struct hlist_node *node;
+		struct inode *old = NULL;
+
 		spin_lock(&inode_lock);
-		old = find_inode(sb, head, test, data);
-		if (likely(!old)) {
+		hlist_for_each_entry(old, node, head, i_hash) {
+			if (old->i_sb != sb)
+				continue;
+			if (!test(old, data))
+				continue;
+			if (old->i_state & (I_FREEING|I_CLEAR|I_WILL_FREE))
+				continue;
+			break;
+		}
+		if (likely(!node)) {
 			hlist_add_head(&inode->i_hash, head);
 			spin_unlock(&inode_lock);
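[Editor's note] For readers following the discussion outside the kernel tree, here is a minimal userspace sketch of the idea the patch implements: when inserting a newly allocated inode into the hash, entries that are on their way out (I_FREEING/I_CLEAR/I_WILL_FREE in the real code) are skipped rather than waited on, which is what breaks the transaction-vs-I_FREEING deadlock described above. This is not kernel code; all names (toy_inode, toy_insert_locked, the freeing flag) are made up for illustration, and real locking, hashing and the icache are ignored.

/*
 * Toy model of the insert_inode_locked() change: skip entries that are
 * being torn down instead of waiting for them.  Illustration only.
 */
#include <stdio.h>
#include <stddef.h>

struct toy_inode {
	unsigned long ino;
	int freeing;              /* stands in for I_FREEING|I_CLEAR|I_WILL_FREE */
	struct toy_inode *next;   /* stands in for the i_hash chain */
};

/* Return a live entry with this ino, ignoring inodes that are being freed. */
static struct toy_inode *toy_find_live(struct toy_inode *head, unsigned long ino)
{
	struct toy_inode *p;

	for (p = head; p; p = p->next) {
		if (p->ino != ino)
			continue;
		if (p->freeing)
			continue;   /* old inode on its way out: do not wait on it */
		return p;
	}
	return NULL;
}

/* Insert 'inode' unless a *live* inode with the same ino already exists. */
static int toy_insert_locked(struct toy_inode **head, struct toy_inode *inode)
{
	if (toy_find_live(*head, inode->ino))
		return -1;          /* live duplicate: caller must back off */
	inode->next = *head;
	*head = inode;
	return 0;
}

int main(void)
{
	struct toy_inode dying = { 42, 1, NULL };   /* reused ino, still being freed */
	struct toy_inode *head = &dying;
	struct toy_inode fresh = { 42, 0, NULL };   /* newly allocated inode, same ino */

	/* ino 42 exists only as a dying inode, so the insert succeeds instead
	 * of blocking the way the old find_inode_fast()-based code did. */
	printf("insert: %d\n", toy_insert_locked(&head, &fresh));
	return 0;
}

In the sketch, as in the patch, the new and the dying inode briefly coexist in the table; that is acceptable here because, per the discussion above, every other lookup path (iget and friends) still waits on I_LOCK/I_FREEING, so only the creating path observes both at once.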