
[bpf] bpf: change size to u64 for bpf_map_{area_alloc,charge_init}()

Message ID 20191029154307.23053-1-bjorn.topel@gmail.com
State Accepted
Delegated to: BPF Maintainers
Series [bpf] bpf: change size to u64 for bpf_map_{area_alloc,charge_init}()

Commit Message

Björn Töpel Oct. 29, 2019, 3:43 p.m. UTC
From: Björn Töpel <bjorn.topel@intel.com>

Prior to this commit, the functions bpf_map_area_alloc() and
bpf_map_charge_init() took the size parameter as size_t. This commit
changes it to u64.

All users of these functions avoid size_t overflows on 32-bit systems
by explicitly using u64 when calculating the allocation size and
memory charge cost. However, since the result was implicitly narrowed
back to size_t when passed to these functions, the overflow handling
was rendered ineffective.

Instead of changing all call sites to size_t and handling overflow at
each call site, the parameter is changed to u64 and checked inside the
functions above.

Fixes: d407bd25a204 ("bpf: don't trigger OOM killer under pressure with map alloc")
Fixes: c85d69135a91 ("bpf: move memory size checks to bpf_map_charge_init()")
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
 include/linux/bpf.h  | 4 ++--
 kernel/bpf/syscall.c | 7 +++++--
 2 files changed, 7 insertions(+), 4 deletions(-)
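
A minimal userspace sketch of the problem the commit message describes,
assuming a 32-bit target where size_t is 32 bits wide; demo_alloc() and
the sizes used are made up for illustration and are not part of the patch:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the old prototype: on a 32-bit kernel size_t is 32 bits
 * wide, so a u64 argument is silently truncated at the call boundary.
 * The explicit cast below models that implicit narrowing. */
static void demo_alloc(uint32_t size)
{
	printf("allocating %" PRIu32 " bytes\n", size);
}

int main(void)
{
	uint64_t value_size = 64;
	uint64_t max_entries = 0x08000000;        /* 128M entries */
	uint64_t cost = value_size * max_entries; /* 8 GiB, no u64 overflow */

	demo_alloc((uint32_t)cost);               /* prints "allocating 0 bytes" */
	return 0;
}

The caller's u64 arithmetic never overflows, yet the allocator sees a
request for 0 bytes; this is exactly the narrowing the patch guards against.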

Comments

Jakub Kicinski Oct. 29, 2019, 4:12 p.m. UTC | #1
On Tue, 29 Oct 2019 16:43:07 +0100, Björn Töpel wrote:
> From: Björn Töpel <bjorn.topel@intel.com>
> 
> Prior to this commit, the functions bpf_map_area_alloc() and
> bpf_map_charge_init() took the size parameter as size_t. This commit
> changes it to u64.
> 
> All users of these functions avoid size_t overflows on 32-bit systems
> by explicitly using u64 when calculating the allocation size and
> memory charge cost. However, since the result was implicitly narrowed
> back to size_t when passed to these functions, the overflow handling
> was rendered ineffective.
> 
> Instead of changing all call sites to size_t and handling overflow at
> each call site, the parameter is changed to u64 and checked inside the
> functions above.
> 
> Fixes: d407bd25a204 ("bpf: don't trigger OOM killer under pressure with map alloc")
> Fixes: c85d69135a91 ("bpf: move memory size checks to bpf_map_charge_init()")
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>

Okay, I guess that's the smallest change we can make here.

I'd prefer we went the way of using the standard overflow handling the
kernel has, rather than proliferating this u64 + U32_MAX comparison
stuff. But it's hard to argue with the patch length in light of the
necessary backports...

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Björn Töpel Oct. 29, 2019, 4:16 p.m. UTC | #2
On Tue, 29 Oct 2019 at 17:12, Jakub Kicinski
<jakub.kicinski@netronome.com> wrote:
>
> On Tue, 29 Oct 2019 16:43:07 +0100, Björn Töpel wrote:
> > From: Björn Töpel <bjorn.topel@intel.com>
> >
> > Prior to this commit, the functions bpf_map_area_alloc() and
> > bpf_map_charge_init() took the size parameter as size_t. This commit
> > changes it to u64.
> >
> > All users of these functions avoid size_t overflows on 32-bit systems
> > by explicitly using u64 when calculating the allocation size and
> > memory charge cost. However, since the result was implicitly narrowed
> > back to size_t when passed to these functions, the overflow handling
> > was rendered ineffective.
> >
> > Instead of changing all call sites to size_t and handling overflow at
> > each call site, the parameter is changed to u64 and checked inside the
> > functions above.
> >
> > Fixes: d407bd25a204 ("bpf: don't trigger OOM killer under pressure with map alloc")
> > Fixes: c85d69135a91 ("bpf: move memory size checks to bpf_map_charge_init()")
> > Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
>
> Okay, I guess that's the smallest change we can make here.
>
> I'd prefer we went the way of using the standard overflow handling the
> kernel has, rather than proliferating this u64 + U32_MAX comparison
> stuff. But it's hard to argue with the patch length in light of the
> necessary backports...
>

I agree with you, but this is a start, and then maps can gradually
move over to standard overflow handling.

> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
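
The "standard overflow handling" referred to above is the helper set in
include/linux/overflow.h. A rough sketch, assuming a hypothetical
call-site wrapper (demo_map_charge() and its parameters are not from the
patch), of how a map could use check_mul_overflow() instead of u64
widening plus a range check:

#include <linux/bpf.h>
#include <linux/errno.h>
#include <linux/overflow.h>

/* Hypothetical call-site wrapper: detect the overflow with
 * check_mul_overflow() instead of computing the cost in u64 and
 * comparing against U32_MAX afterwards. */
static int demo_map_charge(struct bpf_map_memory *mem,
			   u32 value_size, u32 max_entries)
{
	size_t cost;

	if (check_mul_overflow((size_t)value_size, (size_t)max_entries, &cost))
		return -E2BIG;

	return bpf_map_charge_init(mem, cost);
}

A multiplication that would previously have wrapped on 32-bit fails the
check here, and the charge is rejected cleanly with -E2BIG.
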
Daniel Borkmann Oct. 31, 2019, 8:59 p.m. UTC | #3
On Tue, Oct 29, 2019 at 04:43:07PM +0100, Björn Töpel wrote:
> From: Björn Töpel <bjorn.topel@intel.com>
> 
> Prior to this commit, the functions bpf_map_area_alloc() and
> bpf_map_charge_init() took the size parameter as size_t. This commit
> changes it to u64.
> 
> All users of these functions avoid size_t overflows on 32-bit systems
> by explicitly using u64 when calculating the allocation size and
> memory charge cost. However, since the result was implicitly narrowed
> back to size_t when passed to these functions, the overflow handling
> was rendered ineffective.
> 
> Instead of changing all call sites to size_t and handling overflow at
> each call site, the parameter is changed to u64 and checked inside the
> functions above.
> 
> Fixes: d407bd25a204 ("bpf: don't trigger OOM killer under pressure with map alloc")
> Fixes: c85d69135a91 ("bpf: move memory size checks to bpf_map_charge_init()")
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>

Applied, thanks!

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 5b9d22338606..3bf3835d0e86 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -656,11 +656,11 @@ void bpf_map_put_with_uref(struct bpf_map *map);
 void bpf_map_put(struct bpf_map *map);
 int bpf_map_charge_memlock(struct bpf_map *map, u32 pages);
 void bpf_map_uncharge_memlock(struct bpf_map *map, u32 pages);
-int bpf_map_charge_init(struct bpf_map_memory *mem, size_t size);
+int bpf_map_charge_init(struct bpf_map_memory *mem, u64 size);
 void bpf_map_charge_finish(struct bpf_map_memory *mem);
 void bpf_map_charge_move(struct bpf_map_memory *dst,
 			 struct bpf_map_memory *src);
-void *bpf_map_area_alloc(size_t size, int numa_node);
+void *bpf_map_area_alloc(u64 size, int numa_node);
 void bpf_map_area_free(void *base);
 void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr);
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 0937719b87e2..ace1cfaa24b6 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -126,7 +126,7 @@ static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
 	return map;
 }
 
-void *bpf_map_area_alloc(size_t size, int numa_node)
+void *bpf_map_area_alloc(u64 size, int numa_node)
 {
 	/* We really just want to fail instead of triggering OOM killer
 	 * under memory pressure, therefore we set __GFP_NORETRY to kmalloc,
@@ -141,6 +141,9 @@ void *bpf_map_area_alloc(size_t size, int numa_node)
 	const gfp_t flags = __GFP_NOWARN | __GFP_ZERO;
 	void *area;
 
+	if (size >= SIZE_MAX)
+		return NULL;
+
 	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
 		area = kmalloc_node(size, GFP_USER | __GFP_NORETRY | flags,
 				    numa_node);
@@ -197,7 +200,7 @@ static void bpf_uncharge_memlock(struct user_struct *user, u32 pages)
 		atomic_long_sub(pages, &user->locked_vm);
 }
 
-int bpf_map_charge_init(struct bpf_map_memory *mem, size_t size)
+int bpf_map_charge_init(struct bpf_map_memory *mem, u64 size)
 {
 	u32 pages = round_up(size, PAGE_SIZE) >> PAGE_SHIFT;
 	struct user_struct *user;