From patchwork Thu Jul 24 18:48:00 2014
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 373497
From: Andy Lutomirski
Date: Thu, 24 Jul 2014 11:48:00 -0700
Subject: (request for -mm inclusion) Re: [PATCH v3] arm64, ia64, ppc, s390,
 sh, tile, um, x86, mm: Remove default gate area
To: Richard Weinberger, Andrew Morton
Peter Anvin" , linux-arch , "linux-s390@vger.kernel.org" , X86 ML , Ingo Molnar , Benjamin Herrenschmidt , Fenghua Yu , "user-mode-linux-devel@lists.sourceforge.net" , Will Deacon , Jeff Dike , Chris Metcalf , Thomas Gleixner , "linux-arm-kernel@lists.infradead.org" , Tony Luck , Nathan Lynch , "linux-kernel@vger.kernel.org" , Martin Schwidefsky , "linux390@de.ibm.com" , linuxppc-dev@lists.ozlabs.org X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org Hi akpm- One more try before I spam the world with a new thread. Is this patch okay for -mm? So far, it's accumulated: Acked-by: Nathan Lynch Acked-by: H. Peter Anvin Acked-by: Benjamin Herrenschmidt [in principle] Acked-by: Richard Weinberger [for um] Acked-by: Will Deacon [for arm64] For convenience, I've attached the patch w/ the acked-by's folded in and with no other changes. Thanks, Andy On Fri, Jul 18, 2014 at 9:53 AM, Andy Lutomirski wrote: > > On Jul 18, 2014 3:20 AM, "Richard Weinberger" wrote: >> >> Am 18.07.2014 12:14, schrieb Will Deacon: >> > On Tue, Jul 15, 2014 at 03:47:26PM +0100, Andy Lutomirski wrote: >> >> On Sun, Jul 13, 2014 at 1:01 PM, Andy Lutomirski >> >> wrote: >> >>> The core mm code will provide a default gate area based on >> >>> FIXADDR_USER_START and FIXADDR_USER_END if >> >>> !defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR). >> >>> >> >>> This default is only useful for ia64. arm64, ppc, s390, sh, tile, >> >>> 64-bit UML, and x86_32 have their own code just to disable it. arm, >> >>> 32-bit UML, and x86_64 have gate areas, but they have their own >> >>> implementations. >> >>> >> >>> This gets rid of the default and moves the code into ia64. >> >>> >> >>> This should save some code on architectures without a gate area: it's >> >>> now possible to inline the gate_area functions in the default case. >> >> >> >> Can one of you pull this somewhere? Otherwise I can put it somewhere >> >> stable and ask for -next inclusion, but that seems like overkill for a >> >> single patch. >> >> For the um bits: >> Acked-by: Richard Weinberger >> >> > I'd be happy to take the arm64 part, but it doesn't feel right for mm/* >> > changes (or changes to other archs) to go via our tree. >> > >> > I'm not sure what the best approach is if you want to send this via a >> > single >> > tree. Maybe you could ask akpm nicely? >> >> Going though Andrew's tree sounds sane to me. > > Splitting this will be annoying: I'd probably have to add a flag asking for > the new behavior, update all the arches, then remove the flag. The chance > of screwing up bisectability in the process seems pretty high. This seems > like overkill for a patch that mostly deletes code. > > Akpm, can you take this? > > --Andy > >> >> Thanks, >> //richard From 3a4ddfaab96d1dd06b9cd6298e74a91c5a956ece Mon Sep 17 00:00:00 2001 Message-Id: <3a4ddfaab96d1dd06b9cd6298e74a91c5a956ece.1406227593.git.luto@amacapital.net> From: Andy Lutomirski Date: Tue, 17 Jun 2014 10:28:20 -0700 Subject: [PATCH] arm64,ia64,ppc,s390,sh,tile,um,x86,mm: Remove default gate area The core mm code will provide a default gate area based on FIXADDR_USER_START and FIXADDR_USER_END if !defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR). This default is only useful for ia64. 
arm64, ppc, s390, sh, tile, 64-bit UML, and x86_32 have their own code
just to disable it. arm, 32-bit UML, and x86_64 have gate areas, but
they have their own implementations.

This gets rid of the default and moves the code into ia64.

This should save some code on architectures without a gate area: it's
now possible to inline the gate_area functions in the default case.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: linux390@de.ibm.com
Cc: Chris Metcalf
Cc: Jeff Dike
Cc: Richard Weinberger
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Nathan Lynch
Cc: x86@kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: user-mode-linux-devel@lists.sourceforge.net
Cc: linux-mm@kvack.org
Acked-by: Nathan Lynch
Acked-by: H. Peter Anvin
Acked-by: Benjamin Herrenschmidt [in principle]
Acked-by: Richard Weinberger [for um]
Acked-by: Will Deacon [for arm64]
Signed-off-by: Andy Lutomirski
---
 arch/arm64/include/asm/page.h      |  3 ---
 arch/arm64/kernel/vdso.c           | 19 -------------------
 arch/ia64/include/asm/page.h       |  2 ++
 arch/ia64/mm/init.c                | 26 ++++++++++++++++++++++++++
 arch/powerpc/include/asm/page.h    |  3 ---
 arch/powerpc/kernel/vdso.c         | 16 ----------------
 arch/s390/include/asm/page.h       |  2 --
 arch/s390/kernel/vdso.c            | 15 ---------------
 arch/sh/include/asm/page.h         |  5 -----
 arch/sh/kernel/vsyscall/vsyscall.c | 15 ---------------
 arch/tile/include/asm/page.h       |  6 ------
 arch/tile/kernel/vdso.c            | 15 ---------------
 arch/um/include/asm/page.h         |  5 +++++
 arch/x86/include/asm/page.h        |  1 -
 arch/x86/include/asm/page_64.h     |  2 ++
 arch/x86/um/asm/elf.h              |  1 -
 arch/x86/um/mem_64.c               | 15 ---------------
 arch/x86/vdso/vdso32-setup.c       | 19 +------------------
 include/linux/mm.h                 | 17 ++++++++++++-----
 mm/memory.c                        | 38 --------------------------------------
 mm/nommu.c                         |  5 -----
 21 files changed, 48 insertions(+), 182 deletions(-)

diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 46bf666..992710f 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -28,9 +28,6 @@
 #define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
 #define PAGE_MASK (~(PAGE_SIZE-1))
 
-/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
-#define __HAVE_ARCH_GATE_AREA 1
-
 #ifndef __ASSEMBLY__
 
 #ifdef CONFIG_ARM64_64K_PAGES
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 50384fe..f630626 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -187,25 +187,6 @@ const char *arch_vma_name(struct vm_area_struct *vma)
 }
 
 /*
- * We define AT_SYSINFO_EHDR, so we need these function stubs to keep
- * Linux happy.
- */
-int in_gate_area_no_mm(unsigned long addr)
-{
-	return 0;
-}
-
-int in_gate_area(struct mm_struct *mm, unsigned long addr)
-{
-	return 0;
-}
-
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
-{
-	return NULL;
-}
-
-/*
  * Update the vDSO data page to keep in sync with kernel timekeeping.
  */
 void update_vsyscall(struct timekeeper *tk)
diff --git a/arch/ia64/include/asm/page.h b/arch/ia64/include/asm/page.h
index f1e1b2e..1f1bf14 100644
--- a/arch/ia64/include/asm/page.h
+++ b/arch/ia64/include/asm/page.h
@@ -231,4 +231,6 @@ get_order (unsigned long size)
 #define PERCPU_ADDR (-PERCPU_PAGE_SIZE)
 #define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE)
 
+#define __HAVE_ARCH_GATE_AREA 1
+
 #endif /* _ASM_IA64_PAGE_H */
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 25c3502..35efaa3 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -278,6 +278,32 @@ setup_gate (void)
 	ia64_patch_gate();
 }
 
+static struct vm_area_struct gate_vma;
+
+static int __init gate_vma_init(void)
+{
+	gate_vma.vm_mm = NULL;
+	gate_vma.vm_start = FIXADDR_USER_START;
+	gate_vma.vm_end = FIXADDR_USER_END;
+	gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
+	gate_vma.vm_page_prot = __P101;
+
+	return 0;
+}
+__initcall(gate_vma_init);
+
+struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+	return &gate_vma;
+}
+
+int in_gate_area_no_mm(unsigned long addr)
+{
+	if ((addr >= FIXADDR_USER_START) && (addr < FIXADDR_USER_END))
+		return 1;
+	return 0;
+}
+
 void
 ia64_mmu_init(void *my_cpu_data)
 {
 	unsigned long pta, impl_va_bits;
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 32e4e21..26fe1ae 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -48,9 +48,6 @@ extern unsigned int HPAGE_SHIFT;
 #define HUGE_MAX_HSTATE (MMU_PAGE_COUNT-1)
 #endif
 
-/* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */
-#define __HAVE_ARCH_GATE_AREA 1
-
 /*
  * Subtle: (1 << PAGE_SHIFT) is an int, not an unsigned long. So if we
  * assign PAGE_MASK to a larger type it gets extended the way we want
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index ce74c33..f174351 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -840,19 +840,3 @@ static int __init vdso_init(void)
 	return 0;
 }
 arch_initcall(vdso_init);
-
-int in_gate_area_no_mm(unsigned long addr)
-{
-	return 0;
-}
-
-int in_gate_area(struct mm_struct *mm, unsigned long addr)
-{
-	return 0;
-}
-
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
-{
-	return NULL;
-}
-
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index 114258e..7b2ac6e 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -162,6 +162,4 @@ static inline int devmem_is_allowed(unsigned long pfn)
 #include
 #include
 
-#define __HAVE_ARCH_GATE_AREA 1
-
 #endif /* _S390_PAGE_H */
diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c
index 6136490..0bbb7e0 100644
--- a/arch/s390/kernel/vdso.c
+++ b/arch/s390/kernel/vdso.c
@@ -316,18 +316,3 @@ static int __init vdso_init(void)
 	return 0;
 }
 early_initcall(vdso_init);
-
-int in_gate_area_no_mm(unsigned long addr)
-{
-	return 0;
-}
-
-int in_gate_area(struct mm_struct *mm, unsigned long addr)
-{
-	return 0;
-}
-
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
-{
-	return NULL;
-}
diff --git a/arch/sh/include/asm/page.h b/arch/sh/include/asm/page.h
index 15d9703..fe20d14 100644
--- a/arch/sh/include/asm/page.h
+++ b/arch/sh/include/asm/page.h
@@ -186,11 +186,6 @@ typedef struct page *pgtable_t;
 #include
 #include
 
-/* vDSO support */
-#ifdef CONFIG_VSYSCALL
-#define __HAVE_ARCH_GATE_AREA
-#endif
-
 /*
  * Some drivers need to perform DMA into kmalloc'ed buffers
  * and so we have to increase the kmalloc minalign for this.
diff --git a/arch/sh/kernel/vsyscall/vsyscall.c b/arch/sh/kernel/vsyscall/vsyscall.c
index 5ca5797..ea2aa13 100644
--- a/arch/sh/kernel/vsyscall/vsyscall.c
+++ b/arch/sh/kernel/vsyscall/vsyscall.c
@@ -92,18 +92,3 @@ const char *arch_vma_name(struct vm_area_struct *vma)
 
 	return NULL;
 }
-
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
-{
-	return NULL;
-}
-
-int in_gate_area(struct mm_struct *mm, unsigned long address)
-{
-	return 0;
-}
-
-int in_gate_area_no_mm(unsigned long address)
-{
-	return 0;
-}
diff --git a/arch/tile/include/asm/page.h b/arch/tile/include/asm/page.h
index 6727680..a213a8d 100644
--- a/arch/tile/include/asm/page.h
+++ b/arch/tile/include/asm/page.h
@@ -39,12 +39,6 @@
 #define HPAGE_MASK (~(HPAGE_SIZE - 1))
 
 /*
- * We do define AT_SYSINFO_EHDR to support vDSO,
- * but don't use the gate mechanism.
- */
-#define __HAVE_ARCH_GATE_AREA 1
-
-/*
  * If the Kconfig doesn't specify, set a maximum zone order that
  * is enough so that we can create huge pages from small pages given
  * the respective sizes of the two page types. See .
diff --git a/arch/tile/kernel/vdso.c b/arch/tile/kernel/vdso.c
index 1533af2..5bc51d7 100644
--- a/arch/tile/kernel/vdso.c
+++ b/arch/tile/kernel/vdso.c
@@ -121,21 +121,6 @@ const char *arch_vma_name(struct vm_area_struct *vma)
 	return NULL;
 }
 
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
-{
-	return NULL;
-}
-
-int in_gate_area(struct mm_struct *mm, unsigned long address)
-{
-	return 0;
-}
-
-int in_gate_area_no_mm(unsigned long address)
-{
-	return 0;
-}
-
 int setup_vdso_pages(void)
 {
 	struct page **pagelist;
diff --git a/arch/um/include/asm/page.h b/arch/um/include/asm/page.h
index 5ff53d9..71c5d13 100644
--- a/arch/um/include/asm/page.h
+++ b/arch/um/include/asm/page.h
@@ -119,4 +119,9 @@ extern unsigned long uml_physmem;
 #include
 
 #endif /* __ASSEMBLY__ */
+
+#ifdef CONFIG_X86_32
+#define __HAVE_ARCH_GATE_AREA 1
+#endif
+
 #endif /* __UM_PAGE_H */
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 775873d..802dde3 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -70,7 +70,6 @@ extern bool __virt_addr_valid(unsigned long kaddr);
 #include
 #include
 
-#define __HAVE_ARCH_GATE_AREA 1
 #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
 
 #endif /* __KERNEL__ */
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 0f1ddee..f408caf 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -39,4 +39,6 @@ void copy_page(void *to, void *from);
 
 #endif /* !__ASSEMBLY__ */
 
+#define __HAVE_ARCH_GATE_AREA 1
+
 #endif /* _ASM_X86_PAGE_64_H */
diff --git a/arch/x86/um/asm/elf.h b/arch/x86/um/asm/elf.h
index 0feee2f..25a1022 100644
--- a/arch/x86/um/asm/elf.h
+++ b/arch/x86/um/asm/elf.h
@@ -216,6 +216,5 @@ extern long elf_aux_hwcap;
 #define ELF_HWCAP (elf_aux_hwcap)
 
 #define SET_PERSONALITY(ex) do ; while(0)
-#define __HAVE_ARCH_GATE_AREA 1
 
 #endif
diff --git a/arch/x86/um/mem_64.c b/arch/x86/um/mem_64.c
index c6492e7..f8fecad 100644
--- a/arch/x86/um/mem_64.c
+++ b/arch/x86/um/mem_64.c
@@ -9,18 +9,3 @@ const char *arch_vma_name(struct vm_area_struct *vma)
 
 	return NULL;
 }
-
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
-{
-	return NULL;
-}
-
-int in_gate_area(struct mm_struct *mm, unsigned long addr)
-{
-	return 0;
-}
-
-int in_gate_area_no_mm(unsigned long addr)
-{
-	return 0;
-}
diff --git a/arch/x86/vdso/vdso32-setup.c b/arch/x86/vdso/vdso32-setup.c
index e4f7781..e904c27 100644
--- a/arch/x86/vdso/vdso32-setup.c
+++ b/arch/x86/vdso/vdso32-setup.c
@@ -115,23 +115,6 @@ static __init int ia32_binfmt_init(void)
 	return 0;
 }
 __initcall(ia32_binfmt_init);
-#endif
-
-#else /* CONFIG_X86_32 */
-
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
-{
-	return NULL;
-}
-
-int in_gate_area(struct mm_struct *mm, unsigned long addr)
-{
-	return 0;
-}
-
-int in_gate_area_no_mm(unsigned long addr)
-{
-	return 0;
-}
+#endif /* CONFIG_SYSCTL */
 
 #endif /* CONFIG_X86_64 */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e03dd29..8981cc8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2014,13 +2014,20 @@ static inline bool kernel_page_present(struct page *page) { return true; }
 #endif /* CONFIG_HIBERNATION */
 #endif
 
+#ifdef __HAVE_ARCH_GATE_AREA
 extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
-#ifdef __HAVE_ARCH_GATE_AREA
-int in_gate_area_no_mm(unsigned long addr);
-int in_gate_area(struct mm_struct *mm, unsigned long addr);
+extern int in_gate_area_no_mm(unsigned long addr);
+extern int in_gate_area(struct mm_struct *mm, unsigned long addr);
 #else
-int in_gate_area_no_mm(unsigned long addr);
-#define in_gate_area(mm, addr) ({(void)mm; in_gate_area_no_mm(addr);})
+static inline struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+	return NULL;
+}
+static inline int in_gate_area_no_mm(unsigned long addr) { return 0; }
+static inline int in_gate_area(struct mm_struct *mm, unsigned long addr)
+{
+	return 0;
+}
 #endif /* __HAVE_ARCH_GATE_AREA */
 
 #ifdef CONFIG_SYSCTL
diff --git a/mm/memory.c b/mm/memory.c
index d67fd9f..099d234 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3399,44 +3399,6 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-#if !defined(__HAVE_ARCH_GATE_AREA)
-
-#if defined(AT_SYSINFO_EHDR)
-static struct vm_area_struct gate_vma;
-
-static int __init gate_vma_init(void)
-{
-	gate_vma.vm_mm = NULL;
-	gate_vma.vm_start = FIXADDR_USER_START;
-	gate_vma.vm_end = FIXADDR_USER_END;
-	gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
-	gate_vma.vm_page_prot = __P101;
-
-	return 0;
-}
-__initcall(gate_vma_init);
-#endif
-
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
-{
-#ifdef AT_SYSINFO_EHDR
-	return &gate_vma;
-#else
-	return NULL;
-#endif
-}
-
-int in_gate_area_no_mm(unsigned long addr)
-{
-#ifdef AT_SYSINFO_EHDR
-	if ((addr >= FIXADDR_USER_START) && (addr < FIXADDR_USER_END))
-		return 1;
-#endif
-	return 0;
-}
-
-#endif /* __HAVE_ARCH_GATE_AREA */
-
 static int __follow_pte(struct mm_struct *mm, unsigned long address,
 		pte_t **ptepp, spinlock_t **ptlp)
 {
diff --git a/mm/nommu.c b/mm/nommu.c
index 4a852f6..a881d96 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1981,11 +1981,6 @@ error:
 	return -ENOMEM;
 }
 
-int in_gate_area_no_mm(unsigned long addr)
-{
-	return 0;
-}
-
 int filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	BUG();
-- 
1.9.3