[bpf-next] bpf, xskmap: fix crash in xsk_map_alloc error path handling

Message ID 20180504142753.10621-1-daniel@iogearbox.net
State Accepted, archived
Delegated to: BPF Maintainers
Headers show
Series [bpf-next] bpf, xskmap: fix crash in xsk_map_alloc error path handling

Commit Message

Daniel Borkmann May 4, 2018, 2:27 p.m. UTC
If bpf_map_precharge_memlock() did not fail, then we set err to zero.
However, any subsequent failure from either alloc_percpu() or
bpf_map_area_alloc() will return ERR_PTR(0), which in find_and_alloc_map()
causes a NULL pointer dereference.

In devmap we have the convention that we return -EINVAL on page count
overflow, so keep the same logic here and just set err to -ENOMEM
after successful bpf_map_precharge_memlock().

Fixes: fbfc504a24f5 ("bpf: introduce new bpf AF_XDP map type BPF_MAP_TYPE_XSKMAP")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Björn Töpel <bjorn.topel@intel.com>
---
 kernel/bpf/xskmap.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

David Miller May 4, 2018, 3:39 p.m. UTC | #1
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri,  4 May 2018 16:27:53 +0200

> If bpf_map_precharge_memlock() did not fail, then we set err to zero.
> However, any subsequent failure from either alloc_percpu() or
> bpf_map_area_alloc() will return ERR_PTR(0), which in find_and_alloc_map()
> causes a NULL pointer dereference.
> 
> In devmap we have the convention that we return -EINVAL on page count
> overflow, so keep the same logic here and just set err to -ENOMEM
> after successful bpf_map_precharge_memlock().
> 
> Fixes: fbfc504a24f5 ("bpf: introduce new bpf AF_XDP map type BPF_MAP_TYPE_XSKMAP")
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

Acked-by: David S. Miller <davem@davemloft.net>
Patch

diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
index 869dbb1..cb3a121 100644
--- a/kernel/bpf/xskmap.c
+++ b/kernel/bpf/xskmap.c
@@ -56,6 +56,8 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 	if (err)
 		goto free_m;
 
+	err = -ENOMEM;
+
 	m->flush_list = alloc_percpu(struct list_head);
 	if (!m->flush_list)
 		goto free_m;