
mm: Use phys_addr_t for reserve_bootmem_region arguments

Message ID 1463491221-10573-1-git-send-email-stefan.bader@canonical.com
State New

Commit Message

Stefan Bader May 17, 2016, 1:20 p.m. UTC
Re-posting to a hopefully better suited audience. I hit this problem
when trying to boot an i386 dom0 (PAE enabled) on a 64bit Xen host using
a config which would result in a reserved memory range starting at 4MB.
Due to the usage of unsigned long as arguments for the start and end
address, this would wrap and actually mark the lower memory range
starting from 0 as reserved. Between kernel versions 4.2 and 4.4 this
somehow still boots, but starting with 4.4 the result is a panic and
reboot.

Not sure this special Xen case is the only one affected, but in general
it seems more correct to use phys_addr_t as the type for start and end
as that is the type used in the memblock region definitions and those
are 64bit (at least with PAE enabled).
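
To make the wrap concrete, here is a minimal user-space sketch (plain C,
not kernel code; the 4GB start and just-below-8GB end values as well as
the reserve_demo() helper are made up for illustration) of how a 64-bit
physical address is silently truncated when it crosses a 32-bit
unsigned long parameter boundary:

#include <inttypes.h>
#include <stdio.h>

/* On i386 with PAE, phys_addr_t is 64 bits while unsigned long is only
 * 32 bits. Model both with fixed-width types so the demo behaves the
 * same on any host. */
typedef uint64_t phys_addr_demo;   /* stand-in for phys_addr_t          */
typedef uint32_t ulong32_demo;     /* stand-in for i386's unsigned long */

/* Shaped like the pre-patch reserve_bootmem_region() prototype: the
 * 64-bit arguments are silently truncated at the call boundary. */
static void reserve_demo(ulong32_demo start, ulong32_demo end)
{
	printf("callee sees:   start=%#" PRIx32 " end=%#" PRIx32 "\n",
	       start, end);
}

int main(void)
{
	phys_addr_demo start = 0x100000000ULL;  /* 4GB            */
	phys_addr_demo end   = 0x1fffff000ULL;  /* just below 8GB */

	printf("caller passes: start=%#" PRIx64 " end=%#" PRIx64 "\n",
	       start, end);
	reserve_demo(start, end);  /* high bits dropped: 0 .. ~4GB */
	return 0;
}

The callee ends up seeing start=0 and end=0xfffff000, i.e. the low
0-4GB range instead of the intended 4GB-8GB one.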

-Stefan



From 1588a8b3983f63f8e690b91e99fe631902e38805 Mon Sep 17 00:00:00 2001
From: Stefan Bader <stefan.bader@canonical.com>
Date: Tue, 10 May 2016 19:05:16 +0200
Subject: [PATCH] mm: Use phys_addr_t for reserve_bootmem_region arguments

Since commit 92923ca the reserved bit is set on the pages of reserved
memblock regions. However, the start and end addresses are passed as
unsigned long, which is only 32bit on i386, so it can end up marking
the wrong pages reserved for ranges at 4GB and above.

This was observed on a 32bit Xen dom0 which was booted with initial
memory set to a value below 4G but allowed to balloon in memory
(dom0_mem=1024M for example). This would define a reserved bootmem
region for the additional memory (for example, on an 8GB system there
was a reserved region covering the 4GB-8GB range). But since the
addresses were passed as unsigned long, this was actually marking all
pages from 0 to 4GB as reserved.

Fixes: 92923ca ("mm: meminit: only set page reserved in the memblock region")
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Cc: <stable@kernel.org> # 4.2+
---
 include/linux/mm.h | 2 +-
 mm/page_alloc.c    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
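
To quantify the effect described in the commit message, the following
sketch (again user space only; the simplified PFN_DOWN()/PFN_UP()
macros assume 4KB pages and the 4GB-8GB region is the same made-up
example as above) compares the page frame range computed from full
phys_addr_t values with the one computed after truncation to 32 bits:

#include <inttypes.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's PFN helpers, assuming 4KB pages. */
#define PAGE_SHIFT 12
#define PFN_DOWN(x) ((uint64_t)(x) >> PAGE_SHIFT)
#define PFN_UP(x)   (((uint64_t)(x) + (1 << PAGE_SHIFT) - 1) >> PAGE_SHIFT)

int main(void)
{
	uint64_t start = 0x100000000ULL;  /* 4GB            */
	uint64_t end   = 0x1fffff000ULL;  /* just below 8GB */

	/* With phys_addr_t arguments the intended PFN range is preserved. */
	printf("intended:  pfn %#" PRIx64 "..%#" PRIx64 "\n",
	       PFN_DOWN(start), PFN_UP(end));

	/* With 32-bit unsigned long arguments the addresses are truncated
	 * before the PFN conversion, so the range collapses onto low
	 * memory (0 .. ~4GB). */
	uint32_t start32 = (uint32_t)start;  /* 0x00000000 */
	uint32_t end32   = (uint32_t)end;    /* 0xfffff000 */
	printf("truncated: pfn %#" PRIx64 "..%#" PRIx64 "\n",
	       PFN_DOWN(start32), PFN_UP(end32));
	return 0;
}

Switching the prototype to phys_addr_t keeps the full 64-bit values all
the way to the PFN_DOWN()/PFN_UP() calls, which is all the two one-line
changes below do.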

Comments

Stefan Bader May 17, 2016, 2:14 p.m. UTC | #1
On 17.05.2016 15:20, Stefan Bader wrote:
> Re-posting to a hopefully better suited audience. I hit this problem
> when trying to boot an i386 dom0 (PAE enabled) on a 64bit Xen host using
> a config which would result in a reserved memory range starting at 4MB.

Of course that ^ should be "starting at 4GB" not MB...

> Due to the usage of unsigned long as arguments for the start and end
> address, this would wrap and actually mark the lower memory range
> starting from 0 as reserved. Between kernel versions 4.2 and 4.4 this
> somehow still boots, but starting with 4.4 the result is a panic and
> reboot.
> 
> Not sure this special Xen case is the only one affected, but in general
> it seems more correct to use phys_addr_t as the type for start and end
> as that is the type used in the memblock region definitions and those
> are 64bit (at least with PAE enabled).
> 
> -Stefan
> 
> 
> 
> From 1588a8b3983f63f8e690b91e99fe631902e38805 Mon Sep 17 00:00:00 2001
> From: Stefan Bader <stefan.bader@canonical.com>
> Date: Tue, 10 May 2016 19:05:16 +0200
> Subject: [PATCH] mm: Use phys_addr_t for reserve_bootmem_region arguments
> 
> Since commit 92923ca the reserved bit is set on the pages of reserved
> memblock regions. However, the start and end addresses are passed as
> unsigned long, which is only 32bit on i386, so it can end up marking
> the wrong pages reserved for ranges at 4GB and above.
> 
> This was observed on a 32bit Xen dom0 which was booted with initial
> memory set to a value below 4G but allowed to balloon in memory
> (dom0_mem=1024M for example). This would define a reserved bootmem
> region for the additional memory (for example, on an 8GB system there
> was a reserved region covering the 4GB-8GB range). But since the
> addresses were passed as unsigned long, this was actually marking all
> pages from 0 to 4GB as reserved.
> 
> Fixes: 92923ca ("mm: meminit: only set page reserved in the memblock region")
> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
> Cc: <stable@kernel.org> # 4.2+
> ---
>  include/linux/mm.h | 2 +-
>  mm/page_alloc.c    | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b56ff72..4c1ff62 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1715,7 +1715,7 @@ extern void free_highmem_page(struct page *page);
>  extern void adjust_managed_page_count(struct page *page, long count);
>  extern void mem_init_print_info(const char *str);
>  
> -extern void reserve_bootmem_region(unsigned long start, unsigned long end);
> +extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
>  
>  /* Free the reserved page into the buddy system, so it gets managed. */
>  static inline void __free_reserved_page(struct page *page)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c69531a..eb66f89 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -951,7 +951,7 @@ static inline void init_reserved_page(unsigned long pfn)
>   * marks the pages PageReserved. The remaining valid pages are later
>   * sent to the buddy page allocator.
>   */
> -void __meminit reserve_bootmem_region(unsigned long start, unsigned long end)
> +void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
>  {
>  	unsigned long start_pfn = PFN_DOWN(start);
>  	unsigned long end_pfn = PFN_UP(end);
>

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b56ff72..4c1ff62 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1715,7 +1715,7 @@ extern void free_highmem_page(struct page *page);
 extern void adjust_managed_page_count(struct page *page, long count);
 extern void mem_init_print_info(const char *str);
 
-extern void reserve_bootmem_region(unsigned long start, unsigned long end);
+extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
 
 /* Free the reserved page into the buddy system, so it gets managed. */
 static inline void __free_reserved_page(struct page *page)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c69531a..eb66f89 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -951,7 +951,7 @@ static inline void init_reserved_page(unsigned long pfn)
  * marks the pages PageReserved. The remaining valid pages are later
  * sent to the buddy page allocator.
  */
-void __meminit reserve_bootmem_region(unsigned long start, unsigned long end)
+void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long end_pfn = PFN_UP(end);