From patchwork Fri Jan 12 04:29:04 2018
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 859485
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jakub Kicinski
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net, davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, tehnerd@fb.com, Jakub Kicinski
Subject: [PATCH bpf-next v2 02/15] bpf: hashtab: move attribute validation before allocation
Date: Thu, 11 Jan 2018 20:29:04 -0800
Message-Id: <20180112042917.10348-3-jakub.kicinski@netronome.com>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180112042917.10348-1-jakub.kicinski@netronome.com>
References: <20180112042917.10348-1-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

A number of attribute checks are currently performed only after the
hashtab has already been allocated.  Move them before the allocation,
to make it possible to split them out into a separate check function
later on.  The checks now have to be performed on the attr union
directly instead of on the members of bpf_map, since bpf_map will only
be allocated later.
No functional changes.

Signed-off-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
---
 kernel/bpf/hashtab.c | 47 +++++++++++++++++++++++------------------------
 1 file changed, 23 insertions(+), 24 deletions(-)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 3905d4bc5b80..b80f42adf068 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -269,6 +269,28 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	if (numa_node != NUMA_NO_NODE && (percpu || percpu_lru))
 		return ERR_PTR(-EINVAL);
 
+	/* check sanity of attributes.
+	 * value_size == 0 may be allowed in the future to use map as a set
+	 */
+	if (attr->max_entries == 0 || attr->key_size == 0 ||
+	    attr->value_size == 0)
+		return ERR_PTR(-EINVAL);
+
+	if (attr->key_size > MAX_BPF_STACK)
+		/* eBPF programs initialize keys on stack, so they cannot be
+		 * larger than max stack size
+		 */
+		return ERR_PTR(-E2BIG);
+
+	if (attr->value_size >= KMALLOC_MAX_SIZE -
+	    MAX_BPF_STACK - sizeof(struct htab_elem))
+		/* if value_size is bigger, the user space won't be able to
+		 * access the elements via bpf syscall. This check also makes
+		 * sure that the elem_size doesn't overflow and it's
+		 * kmalloc-able later in htab_map_update_elem()
+		 */
+		return ERR_PTR(-E2BIG);
+
 	htab = kzalloc(sizeof(*htab), GFP_USER);
 	if (!htab)
 		return ERR_PTR(-ENOMEM);
@@ -281,14 +303,6 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	htab->map.map_flags = attr->map_flags;
 	htab->map.numa_node = numa_node;
 
-	/* check sanity of attributes.
-	 * value_size == 0 may be allowed in the future to use map as a set
-	 */
-	err = -EINVAL;
-	if (htab->map.max_entries == 0 || htab->map.key_size == 0 ||
-	    htab->map.value_size == 0)
-		goto free_htab;
-
 	if (percpu_lru) {
 		/* ensure each CPU's lru list has >=1 elements.
 		 * since we are at it, make each lru list has the same
@@ -304,22 +318,6 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	/* hash table size must be power of 2 */
 	htab->n_buckets = roundup_pow_of_two(htab->map.max_entries);
 
-	err = -E2BIG;
-	if (htab->map.key_size > MAX_BPF_STACK)
-		/* eBPF programs initialize keys on stack, so they cannot be
-		 * larger than max stack size
-		 */
-		goto free_htab;
-
-	if (htab->map.value_size >= KMALLOC_MAX_SIZE -
-	    MAX_BPF_STACK - sizeof(struct htab_elem))
-		/* if value_size is bigger, the user space won't be able to
-		 * access the elements via bpf syscall. This check also makes
-		 * sure that the elem_size doesn't overflow and it's
-		 * kmalloc-able later in htab_map_update_elem()
-		 */
-		goto free_htab;
-
 	htab->elem_size = sizeof(struct htab_elem) +
 			  round_up(htab->map.key_size, 8);
 	if (percpu)
@@ -327,6 +325,7 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	else
 		htab->elem_size += round_up(htab->map.value_size, 8);
 
+	err = -E2BIG;
 	/* prevent zero size kmalloc and check for u32 overflow */
 	if (htab->n_buckets == 0 ||
 	    htab->n_buckets > U32_MAX / sizeof(struct bucket))