Patchwork [3.5.y.z,extended,stable] Patch "tmpfs: fix shmem_getpage_gfp() VM_BUG_ON" has been added to staging queue

Submitter Herton Ronaldo Krzesinski
Date Dec. 7, 2012, 4:06 p.m.
Message ID <>
Permalink /patch/204565/
State New


Herton Ronaldo Krzesinski - Dec. 7, 2012, 4:06 p.m.
This is a note to let you know that I have just added a patch titled

    tmpfs: fix shmem_getpage_gfp() VM_BUG_ON

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree 
which can be found at:;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.5.y.z tree, see



From ccb8539a11acffd2012be03ac3dc70ae4597bf62 Mon Sep 17 00:00:00 2001
From: Hugh Dickins <>
Date: Fri, 16 Nov 2012 14:15:03 -0800
Subject: [PATCH] tmpfs: fix shmem_getpage_gfp() VM_BUG_ON

commit 215c02bc33bbd5ff4d7379a909462d11f0103218 upstream.

Fuzzing with trinity hit the "impossible" VM_BUG_ON(error) (which Fedora
has converted to WARNING) in shmem_getpage_gfp():

  WARNING: at mm/shmem.c:1151 shmem_getpage_gfp+0xa5c/0xa70()
  Pid: 29795, comm: trinity-child4 Not tainted 3.7.0-rc2+ #49
  Call Trace:

Thanks to Johannes for pointing to truncation: free_swap_and_cache()
only does a trylock on the page, so the page lock we've held since
before confirming swap is not enough to protect against truncation.

What cleanup is needed in this case? Just delete_from_swap_cache(),
which takes care of the memcg uncharge.
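
To make the window concrete, here is a purely illustrative user-space
sketch (plain pthreads; none of these names come from mm/shmem.c): the
truncating thread only trylocks the per-entry lock, just as
free_swap_and_cache() only trylocks the page, so the thread that holds
the lock can still find the entry gone and has to clean up after
itself, which is what the delete_from_swap_cache() call in the hunk
below does.

/*
 * Illustrative only: a user-space analogy, not kernel code.  "present"
 * stands in for the swap entry in the radix tree; the truncator, like
 * free_swap_and_cache(), only trylocks, so it clears the entry whether
 * or not the other side is holding the per-entry lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t entry_lock = PTHREAD_MUTEX_INITIALIZER; /* "page lock" */
static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;   /* protects 'present' */
static bool present = true;

static void *truncator(void *arg)
{
	(void)arg;

	if (pthread_mutex_trylock(&entry_lock) == 0) {
		/* Got the lock: the full cleanup would happen here. */
		pthread_mutex_unlock(&entry_lock);
	}
	/* Trylock failed or not, the entry itself still goes away. */
	pthread_mutex_lock(&map_lock);
	present = false;
	pthread_mutex_unlock(&map_lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_mutex_lock(&entry_lock);	/* held since before "confirming swap" */
	pthread_create(&t, NULL, truncator, NULL);
	pthread_join(t, NULL);

	pthread_mutex_lock(&map_lock);
	if (!present)				/* the insert would fail here */
		printf("entry vanished despite the lock: clean up and retry\n");
	pthread_mutex_unlock(&map_lock);

	pthread_mutex_unlock(&entry_lock);
	return 0;
}

Holding entry_lock does not stop the truncator; it only decides which
side performs the cleanup, hence the new error path in place of the
VM_BUG_ON().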

Signed-off-by: Hugh Dickins <>
Reported-by: Dave Jones <>
Cc: Johannes Weiner <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
Signed-off-by: Herton Ronaldo Krzesinski <>
---
 mm/shmem.c |   16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)



diff --git a/mm/shmem.c b/mm/shmem.c
index 06d48ca..c7f7a77 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1154,8 +1154,20 @@  repeat:
 		if (!error) {
 			error = shmem_add_to_page_cache(page, mapping, index,
 						gfp, swp_to_radix_entry(swap));
-			/* We already confirmed swap, and make no allocation */
-			VM_BUG_ON(error);
+			/*
+			 * We already confirmed swap under page lock, and make
+			 * no memory allocation here, so usually no possibility
+			 * of error; but free_swap_and_cache() only trylocks a
+			 * page, so it is just possible that the entry has been
+			 * truncated or holepunched since swap was confirmed.
+			 * shmem_undo_range() will have done some of the
+			 * unaccounting, now delete_from_swap_cache() will do
+			 * the rest (including mem_cgroup_uncharge_swapcache).
+			 * Reset swap.val? No, leave it so "failed" goes back to
+			 * "repeat": reading a hole and writing should succeed.
+			 */
+			if (error)
+				delete_from_swap_cache(page);
 		}
 		if (error)
 			goto failed;