From patchwork Tue Sep 14 16:04:04 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stefan Bader
X-Patchwork-Id: 64726
X-Patchwork-Delegate: stefan.bader@canonical.com
From: Stefan Bader
To: kernel-team@lists.ubuntu.com
Subject: [Lucid, Maverick] SRU: Fix (hopefully) the last stack guard page effect
Date: Tue, 14 Sep 2010 18:04:04 +0200
Message-Id: <1284480244-5648-1-git-send-email-stefan.bader@canonical.com>
X-Mailer: git-send-email 1.7.0.4
List-Id: Kernel team discussions
Sender: kernel-team-bounces@lists.ubuntu.com

SRU Justification:

Impact: When
introducing the stack guard page, a follow-up patch tried to minimize the
effects user-space sees from this change. Unfortunately, a lot of the checks
did not take into account that user-space may mlock a portion of the stack,
but not all of it. This causes the stack vma to be split and a lot of
assumptions to go down the drain. Most places have been fixed by now, but
looking at /proc/<pid>/maps when a stack vma has been split still hides a
page in every area, and not only at the head of it as was intended.

Fix: The following upstream change makes one inline function available to
other code and uses it when checking for the guard page in the proc output.

Testcase: A program that mlocks an area of the stack and then dumps its maps
will see holes before the patch and none after.

Note: When providing incremental patches to Linus, the cc for stable got
lost. I pinged Greg but have not received an answer yet; in the end it
hopefully comes from stable. I have not yet created a bug report for it as I
am not really sure how urgent this is to fix.

-Stefan

Acked-by: Leann Ogasawara
Acked-by: Brad Figg
Acked-by: Steve Conklin

---

From 39aa3cb3e8250db9188a6f1e3fb62ffa1a717678 Mon Sep 17 00:00:00 2001
From: Stefan Bader
Date: Tue, 31 Aug 2010 15:52:27 +0200
Subject: [PATCH] mm: (pre-stable) Move vma_stack_continue into mm.h

So it can be used by all that need to check for that.
Signed-off-by: Stefan Bader
Signed-off-by: Linus Torvalds

(cherry-picked from commit 39aa3cb3e8250db9188a6f1e3fb62ffa1a717678 upstream)

Signed-off-by: Stefan Bader
---
 fs/proc/task_mmu.c |    3 ++-
 include/linux/mm.h |    6 ++++++
 mm/mlock.c         |    6 ------
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 439fc1f..271afc4 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -224,7 +224,8 @@ static void show_map_vma(struct seq_file *m, struct vm_area_struct *vma)
 	/* We don't show the stack guard page in /proc/maps */
 	start = vma->vm_start;
 	if (vma->vm_flags & VM_GROWSDOWN)
-		start += PAGE_SIZE;
+		if (!vma_stack_continue(vma->vm_prev, vma->vm_start))
+			start += PAGE_SIZE;
 
 	seq_printf(m, "%08lx-%08lx %c%c%c%c %08llx %02x:%02x %lu %n",
 			start,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e6b1210..74949fb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -864,6 +864,12 @@ int set_page_dirty(struct page *page);
 int set_page_dirty_lock(struct page *page);
 int clear_page_dirty_for_io(struct page *page);
 
+/* Is the vma a continuation of the stack vma above it? */
+static inline int vma_stack_continue(struct vm_area_struct *vma, unsigned long addr)
+{
+	return vma && (vma->vm_end == addr) && (vma->vm_flags & VM_GROWSDOWN);
+}
+
 extern unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len);
diff --git a/mm/mlock.c b/mm/mlock.c
index cbae7c5..b70919c 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -135,12 +135,6 @@ void munlock_vma_page(struct page *page)
 	}
 }
 
-/* Is the vma a continuation of the stack vma above it? */
-static inline int vma_stack_continue(struct vm_area_struct *vma, unsigned long addr)
-{
-	return vma && (vma->vm_end == addr) && (vma->vm_flags & VM_GROWSDOWN);
-}
-
 static inline int stack_guard_page(struct vm_area_struct *vma, unsigned long addr)
 {
 	return (vma->vm_flags & VM_GROWSDOWN) &&