Patchwork [102/241] tmpfs: change final i_blocks BUG to WARNING

Submitter Herton Ronaldo Krzesinski
Date Dec. 13, 2012, 1:57 p.m.
Message ID <>
Permalink /patch/205952/
State New


-stable review patch.  If anyone has any objections, please let me know.


From: Hugh Dickins <>

commit 0f3c42f522dc1ad7e27affc0a4aa8c790bce0a66 upstream.

Under a particular load on one machine, I have hit shmem_evict_inode()'s
BUG_ON(inode->i_blocks), enough times to narrow it down to a particular
race between swapout and eviction.

It comes from the "if (freed > 0)" asymmetry in shmem_recalc_inode(),
and the lack of coherent locking between mapping's nrpages and shmem's
swapped count.  There's a window in shmem_writepage(), between lowering
nrpages in shmem_delete_from_page_cache() and then raising swapped
count, when the freed count appears to be +1 when it should be 0, and
then the asymmetry stops it from being corrected with -1 before hitting
the BUG.

One answer is coherent locking: using tree_lock throughout, without
info->lock; reasonable, but the raw_spin_lock in percpu_counter_add() on
used_blocks makes that messier than expected.  Another answer may be a
further effort to eliminate the weird shmem_recalc_inode() altogether,
but previous attempts at that failed.

So far undecided, but for now change the BUG_ON to WARN_ON: in usual
circumstances it remains a useful consistency check.

Signed-off-by: Hugh Dickins <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
[ herton: adjust context ]
Signed-off-by: Herton Ronaldo Krzesinski <>
 mm/shmem.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


diff --git a/mm/shmem.c b/mm/shmem.c
index c7f7a77..8d0c102 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -654,7 +654,7 @@  static void shmem_evict_inode(struct inode *inode)
-	BUG_ON(inode->i_blocks);
+	WARN_ON(inode->i_blocks);