
[3.8.y.z,extended,stable] Patch "mm/hugetlb.c: add cond_resched_lock() in return_unused_surplus_pages()" has been added to staging queue

Message ID 1402611913-19071-1-git-send-email-kamal@canonical.com
State New

Commit Message

Kamal Mostafa June 12, 2014, 10:25 p.m. UTC
This is a note to let you know that I have just added a patch titled

    mm/hugetlb.c: add cond_resched_lock() in return_unused_surplus_pages()

to the linux-3.8.y-queue branch of the 3.8.y.z extended stable tree 
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.8.y-queue

This patch is scheduled to be released in version 3.8.13.24.

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.8.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

------

From e986d8fc096346a9ae01b38b0c1317ad884a878d Mon Sep 17 00:00:00 2001
From: "Mizuma, Masayoshi" <m.mizuma@jp.fujitsu.com>
Date: Fri, 18 Apr 2014 15:07:18 -0700
Subject: mm/hugetlb.c: add cond_resched_lock() in
 return_unused_surplus_pages()

commit 7848a4bf51b34f41fcc9bd77e837126d99ae84e3 upstream.

The soft lockup when freeing a gigantic hugepage, fixed in commit 55f67141a892
("mm: hugetlb: fix softlockup when a large number of hugepages are freed."),
can also happen in return_unused_surplus_pages(), so fix it there as well.

Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
---
 mm/hugetlb.c | 1 +
 1 file changed, 1 insertion(+)

--
1.9.1

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index db8bc0a..35fc5eb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1076,6 +1076,7 @@ static void return_unused_surplus_pages(struct hstate *h,
 	while (nr_pages--) {
 		if (!free_pool_huge_page(h, &node_states[N_MEMORY], 1))
 			break;
+		cond_resched_lock(&hugetlb_lock);
 	}
 }
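
------

For readers unfamiliar with cond_resched_lock(), the pattern the fix relies on
is: while walking a long loop under hugetlb_lock, periodically drop the lock,
give the scheduler (and any lock waiters) a chance to run, then take the lock
again and continue, so the CPU is never held long enough to trip the soft
lockup watchdog. Below is a minimal userspace sketch of that pattern using
POSIX threads. It is an analogy only, not kernel code; cond_resched_mutex(),
reclaim_thread(), contender_thread() and the file name resched_demo.c are made
up for illustration.

/*
 * Userspace analogy of the cond_resched_lock() pattern -- NOT the kernel
 * implementation.  Build with: cc -O2 -pthread resched_demo.c -o resched_demo
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long freed;		/* work done by the "reclaim" thread */
static long other_ran;		/* times the contending thread got the lock */

/* Rough analogue of cond_resched_lock(): drop, yield, reacquire. */
static void cond_resched_mutex(pthread_mutex_t *m)
{
	pthread_mutex_unlock(m);
	sched_yield();		/* let a contending thread grab the mutex */
	pthread_mutex_lock(m);
}

/* Long-running loop, analogous to return_unused_surplus_pages(). */
static void *reclaim_thread(void *arg)
{
	long nr_pages = 1000000;

	(void)arg;
	pthread_mutex_lock(&lock);
	while (nr_pages--) {
		freed++;				/* stand-in for freeing one page */
		if ((nr_pages & 1023) == 0)		/* every so often ... */
			cond_resched_mutex(&lock);	/* ... let others in */
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* A second thread that also needs the lock; without the periodic drop above
 * it would be starved until the whole loop finished. */
static void *contender_thread(void *arg)
{
	int i;

	(void)arg;
	for (i = 0; i < 100; i++) {
		pthread_mutex_lock(&lock);
		other_ran++;
		pthread_mutex_unlock(&lock);
		sched_yield();
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, reclaim_thread, NULL);
	pthread_create(&b, NULL, contender_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	printf("freed=%ld, contender acquisitions=%ld\n", freed, other_ran);
	return 0;
}

Unlike this sketch, which yields unconditionally every 1024 iterations, the
real cond_resched_lock() only drops and reacquires the spinlock when a
reschedule is pending or the lock is contended, so the common case stays
cheap.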