From patchwork Tue Oct 18 16:05:45 2011
X-Patchwork-Submitter: Stefan Bader
X-Patchwork-Id: 120461
From: Stefan Bader
To: kernel-team@lists.ubuntu.com
Subject: [PATCH Maverick, Lucid] UBUNTU: SAUCE: x86/paravirt: Partially revert "remove lazy mode in interrupts"
Date: Tue, 18 Oct 2011 18:05:45 +0200
Message-Id: <1318953945-10171-3-git-send-email-stefan.bader@canonical.com>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1318953945-10171-1-git-send-email-stefan.bader@canonical.com>
References: <1318953945-10171-1-git-send-email-stefan.bader@canonical.com>
List-Id: Kernel team discussions

From: Konrad Rzeszutek Wilk

This partially reverts "x86/paravirt: remove lazy mode in interrupts",
which is git commit b8bcfe997e46150fedcc3f5b26b846400122fdd9.

The unintended consequence of removing the flushing of MMU updates
when doing kmap_atomic() (or kunmap_atomic()) is that we can hit a
dereference bug when processing a fork() on a heavily loaded machine.
Specifically we can hit:

BUG: unable to handle kernel paging request at f573fc8c
IP: [] swap_count_continued+0x104/0x180
*pdpt = 000000002a3b9027 *pde = 0000000001bed067 *pte = 0000000000000000
Oops: 0000 [#1] SMP
Modules linked in:
Pid: 1638, comm: apache2 Not tainted 3.0.4-linode37 #1
EIP: 0061:[] EFLAGS: 00210246 CPU: 3
EIP is at swap_count_continued+0x104/0x180
.. snip..
Call Trace:
 [] ? __swap_duplicate+0xc2/0x160
 [] ? pte_mfn_to_pfn+0x87/0xe0
 [] ? swap_duplicate+0x14/0x40
 [] ? copy_pte_range+0x45b/0x500
 [] ? copy_page_range+0x195/0x200
 [] ? dup_mmap+0x1c6/0x2c0
 [] ? dup_mm+0xa8/0x130
 [] ? copy_process+0x98a/0xb30
 [] ? do_fork+0x4f/0x280
 [] ? getnstimeofday+0x43/0x100
 [] ? sys_clone+0x30/0x40
 [] ? ptregs_clone+0x15/0x48
 [] ? syscall_call+0x7/0xb

The problem appears to be that copy_page_range() turns lazy mode on,
and swap_entry_free() then calls swap_count_continued(), which ends up
doing:

        map = kmap_atomic(page, KM_USER0) + offset;

and later touches *map. Since we are running in batched (lazy) mode,
the PTE mapping is not actually set up synchronously by kmap_atomic(),
so we end up dereferencing a page whose PTE has not been installed.

Looking at kmap_atomic_prot_pfn(), it uses arch_flush_lazy_mmu_mode(),
and sprinkling that call into kmap_atomic_prot() and __kunmap_atomic()
makes the problem go away.
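To make the race concrete, here is a minimal sketch of the pattern the
trace above describes (illustrative only, not code from this patch: the
lazy-mode and kmap APIs are the era's real ones, but the wrapper
function is invented for demonstration):

/*
 * Sketch of the failure mode, NOT kernel code from the patch.
 * API names match the 2.6.32/2.6.35-era tree.
 */
#include <linux/highmem.h>
#include <asm/pgtable.h>

static void lazy_kmap_hazard(struct page *page, unsigned long offset)
{
        unsigned char *map;

        /* copy_pte_range() batches PTE updates like this during fork() */
        arch_enter_lazy_mmu_mode();

        /*
         * kmap_atomic() -> kmap_atomic_prot() -> set_pte(). Under a
         * paravirt guest in lazy mode that set_pte() may merely be
         * queued for a later hypercall batch instead of taking effect
         * immediately.
         */
        map = kmap_atomic(page, KM_USER0) + offset;

        /*
         * The fixmap PTE may still be empty at this point, so this
         * dereference can oops exactly as in the trace above.
         */
        (*map)++;

        kunmap_atomic(map, KM_USER0);
        arch_leave_lazy_mmu_mode();
}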
CC: Thomas Gleixner
CC: Ingo Molnar
CC: "H. Peter Anvin"
CC: x86@kernel.org
CC: Peter Zijlstra
CC: Jeremy Fitzhardinge
CC: stable@kernel.org
Signed-off-by: Konrad Rzeszutek Wilk
BugLink: http://bugs.launchpad.net/bugs/854050
(backported from ab67482036cee590753dd42b7f66aada97e6dcde linux-next)
Signed-off-by: Stefan Bader
---
 arch/x86/mm/highmem_32.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index 63a6ba6..b7d3265 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -44,6 +44,7 @@ void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot)
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
 	BUG_ON(!pte_none(*(kmap_pte-idx)));
 	set_pte(kmap_pte-idx, mk_pte(page, prot));
+	arch_flush_lazy_mmu_mode();
 
 	return (void *)vaddr;
 }
@@ -73,6 +74,7 @@ void kunmap_atomic(void *kvaddr, enum km_type type)
 #endif
 	}
 
+	arch_flush_lazy_mmu_mode();
 	pagefault_enable();
 }
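For context on why the two added calls are sufficient: in the kernels
this targets, arch_flush_lazy_mmu_mode() was an out-of-line helper in
arch/x86/kernel/paravirt.c that looked roughly like the following
(paraphrased from memory, not part of this diff):

/*
 * Paraphrased sketch of the era's arch_flush_lazy_mmu_mode(), shown
 * for context only. If a lazy-MMU batch is open, leaving and
 * re-entering lazy mode forces the queued PTE updates (including the
 * set_pte() in kmap_atomic_prot() above) out to the hypervisor now.
 */
void arch_flush_lazy_mmu_mode(void)
{
        preempt_disable();

        if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
                arch_leave_lazy_mmu_mode();
                arch_enter_lazy_mmu_mode();
        }

        preempt_enable();
}

On native hardware no lazy batch is ever open, so the check fails and
the call degenerates to a preempt_disable()/preempt_enable() pair; only
paravirt guests that batch MMU updates (e.g. Xen) pay for the flush.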