From patchwork Wed Dec 18 10:53:53 2019
X-Patchwork-Id: 1212216
From: Björn Töpel
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: Björn Töpel, bpf@vger.kernel.org, davem@davemloft.net, jakub.kicinski@netronome.com, hawk@kernel.org, john.fastabend@gmail.com, magnus.karlsson@intel.com, jonathan.lemon@gmail.com
Subject: [PATCH bpf-next 1/8] xdp: simplify devmap cleanup
Date: Wed, 18 Dec 2019 11:53:53 +0100
Message-Id: <20191218105400.2895-2-bjorn.topel@gmail.com>
In-Reply-To: <20191218105400.2895-1-bjorn.topel@gmail.com>
References: <20191218105400.2895-1-bjorn.topel@gmail.com>

From: Björn Töpel

After the RCU flavor consolidation [1], call_rcu() and
synchronize_rcu() wait for preempt-disable regions (NAPI) in addition
to the read-side critical sections. As a result, the cleanup code in
devmap can be simplified:

* There is no longer a need to flush in __dev_map_entry_free(), since
  we know that the flush has been done by the time the call_rcu()
  callback is triggered.

* When freeing the map, there is no need to explicitly wait for a
  flush. It is guaranteed to be done after the synchronize_rcu() call
  in dev_map_free(). The rcu_barrier() is still needed, so that the
  map is not freed prior to the elements.

[1] https://lwn.net/Articles/777036/

Signed-off-by: Björn Töpel
Acked-by: Toke Høiland-Jørgensen
---
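For readers following the argument, here is a minimal sketch of the
ordering the simplified cleanup relies on. All names below are
illustrative, not the kernel's:

/* Sketch only: after the RCU flavor consolidation, synchronize_rcu()
 * also waits for preempt-disabled regions such as NAPI poll, and
 * rcu_barrier() waits for all previously queued call_rcu() callbacks
 * to run. Together they make the old busy-wait on the per-CPU flush
 * lists unnecessary.
 */
static void map_teardown_sketch(struct sketch_map *m)
{
        disconnect_map_from_progs(m);   /* no new redirects after this */
        synchronize_rcu();              /* all NAPI users have left */
        rcu_barrier();                  /* element free callbacks done */
        kfree(m);                       /* map freed after its elements */
}
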
 kernel/bpf/devmap.c | 41 ++++------------------------------------
 1 file changed, 4 insertions(+), 37 deletions(-)

diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 3d3d61b5985b..1fcafc641c12 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -221,18 +221,6 @@ static void dev_map_free(struct bpf_map *map)
 	/* Make sure prior __dev_map_entry_free() have completed. */
 	rcu_barrier();
 
-	/* To ensure all pending flush operations have completed wait for flush
-	 * list to empty on _all_ cpus.
-	 * Because the above synchronize_rcu() ensures the map is disconnected
-	 * from the program we can assume no new items will be added.
-	 */
-	for_each_online_cpu(cpu) {
-		struct list_head *flush_list = per_cpu_ptr(dtab->flush_list, cpu);
-
-		while (!list_empty(flush_list))
-			cond_resched();
-	}
-
 	if (dtab->map.map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
 		for (i = 0; i < dtab->n_buckets; i++) {
 			struct bpf_dtab_netdev *dev;
@@ -345,8 +333,7 @@ static int dev_map_hash_get_next_key(struct bpf_map *map, void *key,
 	return -ENOENT;
 }
 
-static int bq_xmit_all(struct xdp_bulk_queue *bq, u32 flags,
-		       bool in_napi_ctx)
+static int bq_xmit_all(struct xdp_bulk_queue *bq, u32 flags)
 {
 	struct bpf_dtab_netdev *obj = bq->obj;
 	struct net_device *dev = obj->dev;
@@ -384,11 +371,7 @@ static int bq_xmit_all(struct xdp_bulk_queue *bq, u32 flags,
 	for (i = 0; i < bq->count; i++) {
 		struct xdp_frame *xdpf = bq->q[i];
 
-		/* RX path under NAPI protection, can return frames faster */
-		if (likely(in_napi_ctx))
-			xdp_return_frame_rx_napi(xdpf);
-		else
-			xdp_return_frame(xdpf);
+		xdp_return_frame_rx_napi(xdpf);
 		drops++;
 	}
 	goto out;
@@ -409,7 +392,7 @@ void __dev_map_flush(struct bpf_map *map)
 
 	rcu_read_lock();
 	list_for_each_entry_safe(bq, tmp, flush_list, flush_node)
-		bq_xmit_all(bq, XDP_XMIT_FLUSH, true);
+		bq_xmit_all(bq, XDP_XMIT_FLUSH);
 	rcu_read_unlock();
 }
 
@@ -440,7 +423,7 @@ static int bq_enqueue(struct bpf_dtab_netdev *obj, struct xdp_frame *xdpf,
 	struct xdp_bulk_queue *bq = this_cpu_ptr(obj->bulkq);
 
 	if (unlikely(bq->count == DEV_MAP_BULK_SIZE))
-		bq_xmit_all(bq, 0, true);
+		bq_xmit_all(bq, 0);
 
 	/* Ingress dev_rx will be the same for all xdp_frame's in
 	 * bulk_queue, because bq stored per-CPU and must be flushed
@@ -509,27 +492,11 @@ static void *dev_map_hash_lookup_elem(struct bpf_map *map, void *key)
 
 	return dev ? &dev->ifindex : NULL;
 }
 
-static void dev_map_flush_old(struct bpf_dtab_netdev *dev)
-{
-	if (dev->dev->netdev_ops->ndo_xdp_xmit) {
-		struct xdp_bulk_queue *bq;
-		int cpu;
-
-		rcu_read_lock();
-		for_each_online_cpu(cpu) {
-			bq = per_cpu_ptr(dev->bulkq, cpu);
-			bq_xmit_all(bq, XDP_XMIT_FLUSH, false);
-		}
-		rcu_read_unlock();
-	}
-}
-
 static void __dev_map_entry_free(struct rcu_head *rcu)
 {
 	struct bpf_dtab_netdev *dev;
 
 	dev = container_of(rcu, struct bpf_dtab_netdev, rcu);
-	dev_map_flush_old(dev);
 	free_percpu(dev->bulkq);
 	dev_put(dev->dev);
 	kfree(dev);

From patchwork Wed Dec 18 10:53:54 2019
X-Patchwork-Id: 1212218
From: Björn Töpel
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: Björn Töpel, bpf@vger.kernel.org, davem@davemloft.net, jakub.kicinski@netronome.com, hawk@kernel.org, john.fastabend@gmail.com, magnus.karlsson@intel.com, jonathan.lemon@gmail.com
Subject: [PATCH bpf-next 2/8] xdp: simplify cpumap cleanup
Date: Wed, 18 Dec 2019 11:53:54 +0100
Message-Id: <20191218105400.2895-3-bjorn.topel@gmail.com>
In-Reply-To: <20191218105400.2895-1-bjorn.topel@gmail.com>
References: <20191218105400.2895-1-bjorn.topel@gmail.com>

From: Björn Töpel

After the RCU flavor consolidation [1], call_rcu() and
synchronize_rcu() wait for preempt-disable regions (NAPI) in addition
to the read-side critical sections. As a result, the cleanup code in
cpumap can be simplified:

* There is no longer a need to flush in __cpu_map_entry_free(), since
  we know that the flush has been done by the time the call_rcu()
  callback is triggered.

* When freeing the map, there is no need to explicitly wait for a
  flush. It is guaranteed to be done after the synchronize_rcu() call
  in cpu_map_free().

[1] https://lwn.net/Articles/777036/

Signed-off-by: Björn Töpel
Acked-by: Toke Høiland-Jørgensen
---
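As a reminder of the mechanism being simplified, here is a condensed
sketch of the per-CPU bulk-queue pattern that cpumap (and devmap) use.
This is illustrative, not the file's exact code:

/* Sketch: frames are staged in a small per-CPU queue and drained
 * either when the queue fills up or when the flush list is walked at
 * the end of the NAPI poll cycle. All of this runs with preemption
 * disabled, which is why the RCU callback can now assume no flush is
 * in flight.
 */
static int bq_enqueue_sketch(struct xdp_bulk_queue *bq,
                             struct xdp_frame *xdpf,
                             struct list_head *flush_list)
{
        if (bq->count == CPU_MAP_BULK_SIZE)
                bq_flush_to_queue(bq);          /* drain when full */

        bq->q[bq->count++] = xdpf;

        if (!bq->flush_node.prev)               /* first frame this cycle */
                list_add(&bq->flush_node, flush_list);
        return 0;
}
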
 kernel/bpf/cpumap.c | 33 +++++----------------------------
 1 file changed, 5 insertions(+), 28 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index ef49e17ae47c..fbf176e0a2ab 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -75,7 +75,7 @@ struct bpf_cpu_map {
 	struct list_head __percpu *flush_list;
 };
 
-static int bq_flush_to_queue(struct xdp_bulk_queue *bq, bool in_napi_ctx);
+static int bq_flush_to_queue(struct xdp_bulk_queue *bq);
 
 static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
 {
@@ -399,7 +399,6 @@ static struct bpf_cpu_map_entry *__cpu_map_entry_alloc(u32 qsize, u32 cpu,
 static void __cpu_map_entry_free(struct rcu_head *rcu)
 {
 	struct bpf_cpu_map_entry *rcpu;
-	int cpu;
 
 	/* This cpu_map_entry have been disconnected from map and one
 	 * RCU graze-period have elapsed. Thus, XDP cannot queue any
@@ -408,13 +407,6 @@ static void __cpu_map_entry_free(struct rcu_head *rcu)
 	 */
 	rcpu = container_of(rcu, struct bpf_cpu_map_entry, rcu);
 
-	/* Flush remaining packets in percpu bulkq */
-	for_each_online_cpu(cpu) {
-		struct xdp_bulk_queue *bq = per_cpu_ptr(rcpu->bulkq, cpu);
-
-		/* No concurrent bq_enqueue can run at this point */
-		bq_flush_to_queue(bq, false);
-	}
 	free_percpu(rcpu->bulkq);
 	/* Cannot kthread_stop() here, last put free rcpu resources */
 	put_cpu_map_entry(rcpu);
@@ -522,18 +514,6 @@ static void cpu_map_free(struct bpf_map *map)
 	bpf_clear_redirect_map(map);
 	synchronize_rcu();
 
-	/* To ensure all pending flush operations have completed wait for flush
-	 * list be empty on _all_ cpus. Because the above synchronize_rcu()
-	 * ensures the map is disconnected from the program we can assume no new
-	 * items will be added to the list.
-	 */
-	for_each_online_cpu(cpu) {
-		struct list_head *flush_list = per_cpu_ptr(cmap->flush_list, cpu);
-
-		while (!list_empty(flush_list))
-			cond_resched();
-	}
-
 	/* For cpu_map the remote CPUs can still be using the entries
 	 * (struct bpf_cpu_map_entry).
 	 */
@@ -599,7 +579,7 @@ const struct bpf_map_ops cpu_map_ops = {
 	.map_check_btf	= map_check_no_btf,
 };
 
-static int bq_flush_to_queue(struct xdp_bulk_queue *bq, bool in_napi_ctx)
+static int bq_flush_to_queue(struct xdp_bulk_queue *bq)
 {
 	struct bpf_cpu_map_entry *rcpu = bq->obj;
 	unsigned int processed = 0, drops = 0;
@@ -620,10 +600,7 @@ static int bq_flush_to_queue(struct xdp_bulk_queue *bq, bool in_napi_ctx)
 		err = __ptr_ring_produce(q, xdpf);
 		if (err) {
 			drops++;
-			if (likely(in_napi_ctx))
-				xdp_return_frame_rx_napi(xdpf);
-			else
-				xdp_return_frame(xdpf);
+			xdp_return_frame_rx_napi(xdpf);
 		}
 		processed++;
 	}
@@ -646,7 +623,7 @@ static int bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf)
 	struct xdp_bulk_queue *bq = this_cpu_ptr(rcpu->bulkq);
 
 	if (unlikely(bq->count == CPU_MAP_BULK_SIZE))
-		bq_flush_to_queue(bq, true);
+		bq_flush_to_queue(bq);
 
 	/* Notice, xdp_buff/page MUST be queued here, long enough for
 	 * driver to code invoking us to finished, due to driver
@@ -688,7 +665,7 @@ void __cpu_map_flush(struct bpf_map *map)
 	struct xdp_bulk_queue *bq, *tmp;
 
 	list_for_each_entry_safe(bq, tmp, flush_list, flush_node) {
-		bq_flush_to_queue(bq, true);
+		bq_flush_to_queue(bq);
 
 		/* If already running, costs spin_lock_irqsave + smb_mb */
 		wake_up_process(bq->obj->kthread);

From patchwork Wed Dec 18 10:53:55 2019
X-Patchwork-Id: 1212220
From: Björn Töpel
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: Björn Töpel, bpf@vger.kernel.org, davem@davemloft.net, jakub.kicinski@netronome.com, hawk@kernel.org, john.fastabend@gmail.com, magnus.karlsson@intel.com, jonathan.lemon@gmail.com
Subject: [PATCH bpf-next 3/8] xdp: fix graze->grace type-o in cpumap comments
Date: Wed, 18 Dec 2019 11:53:55 +0100
Message-Id: <20191218105400.2895-4-bjorn.topel@gmail.com>
In-Reply-To: <20191218105400.2895-1-bjorn.topel@gmail.com>
References: <20191218105400.2895-1-bjorn.topel@gmail.com>

From: Björn Töpel

Simple spelling fix.

Signed-off-by: Björn Töpel
Acked-by: Toke Høiland-Jørgensen
---
 kernel/bpf/cpumap.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index fbf176e0a2ab..66948fbc58d8 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -401,7 +401,7 @@ static void __cpu_map_entry_free(struct rcu_head *rcu)
 	struct bpf_cpu_map_entry *rcpu;
 
 	/* This cpu_map_entry have been disconnected from map and one
-	 * RCU graze-period have elapsed. Thus, XDP cannot queue any
+	 * RCU grace-period have elapsed. Thus, XDP cannot queue any
 	 * new packets and cannot change/set flush_needed that can
 	 * find this entry.
 	 */
@@ -428,7 +428,7 @@ static void __cpu_map_entry_free(struct rcu_head *rcu)
  * percpu bulkq to queue. Due to caller map_delete_elem() disable
  * preemption, cannot call kthread_stop() to make sure queue is empty.
  * Instead a work_queue is started for stopping kthread,
- * cpu_map_kthread_stop, which waits for an RCU graze period before
+ * cpu_map_kthread_stop, which waits for an RCU grace period before
  * stopping kthread, emptying the queue.
  */
 static void __cpu_map_entry_replace(struct bpf_cpu_map *cmap,
@@ -524,7 +524,7 @@ static void cpu_map_free(struct bpf_map *map)
 		if (!rcpu)
 			continue;
 
-		/* bq flush and cleanup happens after RCU graze-period */
+		/* bq flush and cleanup happens after RCU grace-period */
 		__cpu_map_entry_replace(cmap, i, NULL); /* call_rcu */
 	}
 	free_percpu(cmap->flush_list);

From patchwork Wed Dec 18 10:53:56 2019
X-Patchwork-Id: 1212222
From: Björn Töpel
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: Björn Töpel, bpf@vger.kernel.org, davem@davemloft.net, jakub.kicinski@netronome.com, hawk@kernel.org, john.fastabend@gmail.com, magnus.karlsson@intel.com, jonathan.lemon@gmail.com
Subject: [PATCH bpf-next 4/8] xsk: make xskmap flush_list common for all map instances
Date: Wed, 18 Dec 2019 11:53:56 +0100
Message-Id: <20191218105400.2895-5-bjorn.topel@gmail.com>
In-Reply-To: <20191218105400.2895-1-bjorn.topel@gmail.com>
References: <20191218105400.2895-1-bjorn.topel@gmail.com>

From: Björn Töpel

The xskmap flush list is used to track entries that need to be
flushed via the xdp_do_flush_map() function. This list used to be
per-map, but there is really no reason for that. Instead make the
flush list global for all xskmaps, which simplifies __xsk_map_flush()
and xsk_map_alloc().

Signed-off-by: Björn Töpel
Acked-by: Toke Høiland-Jørgensen
---
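The per-map percpu allocation is replaced by a single static percpu
list. A minimal sketch of the pattern (illustrative names):

/* One static per-CPU list_head shared by all map instances,
 * initialized once at boot and accessed lock-free from the current
 * CPU only.
 */
static DEFINE_PER_CPU(struct list_head, sketch_flush_list);

static int __init sketch_init(void)
{
        int cpu;

        for_each_possible_cpu(cpu)
                INIT_LIST_HEAD(&per_cpu(sketch_flush_list, cpu));
        return 0;
}

Callers then use this_cpu_ptr(&sketch_flush_list) instead of
dereferencing a per-map pointer, which removes one percpu allocation
per map and one pointer chase per packet.
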
 include/net/xdp_sock.h | 11 ++++-------
 kernel/bpf/xskmap.c    | 16 ++--------------
 net/core/filter.c      |  9 ++++-----
 net/xdp/xsk.c          | 17 +++++++++--------
 4 files changed, 19 insertions(+), 34 deletions(-)

diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index e3780e4b74e1..48594740d67c 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -72,7 +72,6 @@ struct xdp_umem {
 
 struct xsk_map {
 	struct bpf_map map;
-	struct list_head __percpu *flush_list;
 	spinlock_t lock; /* Synchronize map updates */
 	struct xdp_sock *xsk_map[];
 };
@@ -139,9 +138,8 @@ void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
 			     struct xdp_sock **map_entry);
 int xsk_map_inc(struct xsk_map *map);
 void xsk_map_put(struct xsk_map *map);
-int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp,
-		       struct xdp_sock *xs);
-void __xsk_map_flush(struct bpf_map *map);
+int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp);
+void __xsk_map_flush(void);
 
 static inline struct xdp_sock *__xsk_map_lookup_elem(struct bpf_map *map,
 						     u32 key)
@@ -369,13 +367,12 @@ static inline u64 xsk_umem_adjust_offset(struct xdp_umem *umem, u64 handle,
 	return 0;
 }
 
-static inline int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp,
-				     struct xdp_sock *xs)
+static inline int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp)
 {
 	return -EOPNOTSUPP;
 }
 
-static inline void __xsk_map_flush(struct bpf_map *map)
+static inline void __xsk_map_flush(void)
 {
 }
 
diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
index 90c4fce1c981..3757a7a50ab7 100644
--- a/kernel/bpf/xskmap.c
+++ b/kernel/bpf/xskmap.c
@@ -74,7 +74,7 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 	struct bpf_map_memory mem;
 	int cpu, err, numa_node;
 	struct xsk_map *m;
-	u64 cost, size;
+	u64 size;
 
 	if (!capable(CAP_NET_ADMIN))
 		return ERR_PTR(-EPERM);
@@ -86,9 +86,8 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 	numa_node = bpf_map_attr_numa_node(attr);
 	size = struct_size(m, xsk_map, attr->max_entries);
-	cost = size + array_size(sizeof(*m->flush_list), num_possible_cpus());
 
-	err = bpf_map_charge_init(&mem, cost);
+	err = bpf_map_charge_init(&mem, size);
 	if (err < 0)
 		return ERR_PTR(err);
@@ -102,16 +101,6 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 	bpf_map_charge_move(&m->map.memory, &mem);
 	spin_lock_init(&m->lock);
 
-	m->flush_list = alloc_percpu(struct list_head);
-	if (!m->flush_list) {
-		bpf_map_charge_finish(&m->map.memory);
-		bpf_map_area_free(m);
-		return ERR_PTR(-ENOMEM);
-	}
-
-	for_each_possible_cpu(cpu)
-		INIT_LIST_HEAD(per_cpu_ptr(m->flush_list, cpu));
-
 	return &m->map;
 }
 
@@ -121,7 +110,6 @@ static void xsk_map_free(struct bpf_map *map)
 
 	bpf_clear_redirect_map(map);
 	synchronize_net();
-	free_percpu(m->flush_list);
 	bpf_map_area_free(m);
 }
 
diff --git a/net/core/filter.c b/net/core/filter.c
index a411f7835dee..c51678c473c5 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3511,8 +3511,7 @@ xdp_do_redirect_slow(struct net_device *dev, struct xdp_buff *xdp,
 
 static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd,
 			    struct bpf_map *map,
-			    struct xdp_buff *xdp,
-			    u32 index)
+			    struct xdp_buff *xdp)
 {
 	int err;
 
@@ -3537,7 +3536,7 @@ static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd,
 	case BPF_MAP_TYPE_XSKMAP: {
 		struct xdp_sock *xs = fwd;
 
-		err = __xsk_map_redirect(map, xdp, xs);
+		err = __xsk_map_redirect(xs, xdp);
 		return err;
 	}
 	default:
@@ -3562,7 +3561,7 @@ void xdp_do_flush_map(void)
 			__cpu_map_flush(map);
 			break;
 		case BPF_MAP_TYPE_XSKMAP:
-			__xsk_map_flush(map);
+			__xsk_map_flush();
 			break;
 		default:
 			break;
@@ -3619,7 +3618,7 @@ static int xdp_do_redirect_map(struct net_device *dev, struct xdp_buff *xdp,
 	if (ri->map_to_flush && unlikely(ri->map_to_flush != map))
 		xdp_do_flush_map();
 
-	err = __bpf_tx_xdp_map(dev, fwd, map, xdp, index);
+	err = __bpf_tx_xdp_map(dev, fwd, map, xdp);
 	if (unlikely(err))
 		goto err;
 
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 956793893c9d..e45c27f5cfca 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -31,6 +31,8 @@
 
 #define TX_BATCH_SIZE 16
 
+static DEFINE_PER_CPU(struct list_head, xskmap_flush_list);
+
 bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs)
 {
 	return READ_ONCE(xs->rx) && READ_ONCE(xs->umem) &&
@@ -264,11 +266,9 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
 	return err;
 }
 
-int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp,
-		       struct xdp_sock *xs)
+int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp)
 {
-	struct xsk_map *m = container_of(map, struct xsk_map, map);
-	struct list_head *flush_list = this_cpu_ptr(m->flush_list);
+	struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list);
 	int err;
 
 	err = xsk_rcv(xs, xdp);
@@ -281,10 +281,9 @@ int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp,
 	return 0;
 }
 
-void __xsk_map_flush(struct bpf_map *map)
+void __xsk_map_flush(void)
 {
-	struct xsk_map *m = container_of(map, struct xsk_map, map);
-	struct list_head *flush_list = this_cpu_ptr(m->flush_list);
+	struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list);
 	struct xdp_sock *xs, *tmp;
 
 	list_for_each_entry_safe(xs, tmp, flush_list, flush_node) {
@@ -1177,7 +1176,7 @@ static struct pernet_operations xsk_net_ops = {
 
 static int __init xsk_init(void)
 {
-	int err;
+	int err, cpu;
 
 	err = proto_register(&xsk_proto, 0 /* no slab */);
 	if (err)
@@ -1195,6 +1194,8 @@ static int __init xsk_init(void)
 	if (err)
 		goto out_pernet;
 
+	for_each_possible_cpu(cpu)
+		INIT_LIST_HEAD(&per_cpu(xskmap_flush_list, cpu));
 	return 0;
 
 out_pernet:

From patchwork Wed Dec 18 10:53:57 2019
X-Patchwork-Id: 1212224
From: Björn Töpel
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: Björn Töpel, bpf@vger.kernel.org, davem@davemloft.net, jakub.kicinski@netronome.com, hawk@kernel.org, john.fastabend@gmail.com, magnus.karlsson@intel.com, jonathan.lemon@gmail.com
Subject: [PATCH bpf-next 5/8] xdp: make devmap flush_list common for all map instances
Date: Wed, 18 Dec 2019 11:53:57 +0100
Message-Id: <20191218105400.2895-6-bjorn.topel@gmail.com>
In-Reply-To: <20191218105400.2895-1-bjorn.topel@gmail.com>
References: <20191218105400.2895-1-bjorn.topel@gmail.com>

From: Björn Töpel
The devmap flush list is used to track entries that need to be
flushed via the xdp_do_flush_map() function. This list used to be
per-map, but there is really no reason for that. Instead make the
flush list global for all devmaps, which simplifies __dev_map_flush()
and dev_map_init_map().

Signed-off-by: Björn Töpel
Acked-by: Toke Høiland-Jørgensen
---
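A knock-on effect worth noting: the memlock charge no longer needs the
percpu term. Roughly, as a sketch of the accounting (not verbatim
kernel code):

/* Before: each map instance was charged for its own percpu flush
 * list on top of the map itself:
 *         cost = map_cost + sizeof(struct list_head) * num_possible_cpus();
 * After: the flush list is a single static per-CPU variable, so only
 * the map itself is charged:
 *         cost = map_cost;
 */
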
 include/linux/bpf.h |  4 ++--
 kernel/bpf/devmap.c | 37 ++++++++++++++-----------------------
 net/core/filter.c   |  2 +-
 3 files changed, 17 insertions(+), 26 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d467983e61bb..31191804ca09 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -959,7 +959,7 @@ struct sk_buff;
 
 struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key);
 struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key);
-void __dev_map_flush(struct bpf_map *map);
+void __dev_map_flush(void);
 int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp,
 		    struct net_device *dev_rx);
 int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
@@ -1068,7 +1068,7 @@ static inline struct net_device *__dev_map_hash_lookup_elem(struct bpf_map *map
 	return NULL;
 }
 
-static inline void __dev_map_flush(struct bpf_map *map)
+static inline void __dev_map_flush(void)
 {
 }
 
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 1fcafc641c12..da9c832fc5c8 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -75,7 +75,6 @@ struct bpf_dtab_netdev {
 struct bpf_dtab {
 	struct bpf_map map;
 	struct bpf_dtab_netdev **netdev_map; /* DEVMAP type only */
-	struct list_head __percpu *flush_list;
 	struct list_head list;
 
 	/* these are only used for DEVMAP_HASH type maps */
@@ -85,6 +84,7 @@ struct bpf_dtab {
 	u32 n_buckets;
 };
 
+static DEFINE_PER_CPU(struct list_head, dev_map_flush_list);
 static DEFINE_SPINLOCK(dev_map_lock);
 static LIST_HEAD(dev_map_list);
 
@@ -109,8 +109,8 @@ static inline struct hlist_head *dev_map_index_hash(struct bpf_dtab *dtab,
 
 static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
 {
-	int err, cpu;
-	u64 cost;
+	u64 cost = 0;
+	int err;
 
 	/* check sanity of attributes */
 	if (attr->max_entries == 0 || attr->key_size != 4 ||
@@ -125,9 +125,6 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
 
 	bpf_map_init_from_attr(&dtab->map, attr);
 
-	/* make sure page count doesn't overflow */
-	cost = (u64) sizeof(struct list_head) * num_possible_cpus();
-
 	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
 		dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);
 
@@ -143,17 +140,10 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
 	if (err)
 		return -EINVAL;
 
-	dtab->flush_list = alloc_percpu(struct list_head);
-	if (!dtab->flush_list)
-		goto free_charge;
-
-	for_each_possible_cpu(cpu)
-		INIT_LIST_HEAD(per_cpu_ptr(dtab->flush_list, cpu));
-
 	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
 		dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets);
 		if (!dtab->dev_index_head)
-			goto free_percpu;
+			goto free_charge;
 
 		spin_lock_init(&dtab->index_lock);
 	} else {
@@ -161,13 +151,11 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
 						      sizeof(struct bpf_dtab_netdev *),
 						      dtab->map.numa_node);
 		if (!dtab->netdev_map)
-			goto free_percpu;
+			goto free_charge;
 	}
 
 	return 0;
 
-free_percpu:
-	free_percpu(dtab->flush_list);
 free_charge:
 	bpf_map_charge_finish(&dtab->map.memory);
 	return -ENOMEM;
@@ -201,7 +189,7 @@ static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
 static void dev_map_free(struct bpf_map *map)
 {
 	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
-	int i, cpu;
+	int i;
 
 	/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
 	 * so the programs (can be more than one that used this map) were
@@ -254,7 +242,6 @@ static void dev_map_free(struct bpf_map *map)
 		bpf_map_area_free(dtab->netdev_map);
 	}
 
-	free_percpu(dtab->flush_list);
 	kfree(dtab);
 }
 
@@ -384,10 +371,9 @@ static int bq_xmit_all(struct xdp_bulk_queue *bq, u32 flags)
  * net device can be torn down. On devmap tear down we ensure the flush list
  * is empty before completing to ensure all flush operations have completed.
  */
-void __dev_map_flush(struct bpf_map *map)
+void __dev_map_flush(void)
 {
-	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
-	struct list_head *flush_list = this_cpu_ptr(dtab->flush_list);
+	struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list);
 	struct xdp_bulk_queue *bq, *tmp;
 
 	rcu_read_lock();
@@ -419,7 +405,7 @@ static int bq_enqueue(struct bpf_dtab_netdev *obj, struct xdp_frame *xdpf,
 		      struct net_device *dev_rx)
 {
-	struct list_head *flush_list = this_cpu_ptr(obj->dtab->flush_list);
+	struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list);
 	struct xdp_bulk_queue *bq = this_cpu_ptr(obj->bulkq);
 
 	if (unlikely(bq->count == DEV_MAP_BULK_SIZE))
@@ -777,10 +763,15 @@ static struct notifier_block dev_map_notifier = {
 
 static int __init dev_map_init(void)
 {
+	int cpu;
+
 	/* Assure tracepoint shadow struct _bpf_dtab_netdev is in sync */
 	BUILD_BUG_ON(offsetof(struct bpf_dtab_netdev, dev) !=
 		     offsetof(struct _bpf_dtab_netdev, dev));
 	register_netdevice_notifier(&dev_map_notifier);
+
+	for_each_possible_cpu(cpu)
+		INIT_LIST_HEAD(&per_cpu(dev_map_flush_list, cpu));
 	return 0;
 }
 
diff --git a/net/core/filter.c b/net/core/filter.c
index c51678c473c5..b7570cb84902 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3555,7 +3555,7 @@ void xdp_do_flush_map(void)
 		switch (map->map_type) {
 		case BPF_MAP_TYPE_DEVMAP:
 		case BPF_MAP_TYPE_DEVMAP_HASH:
-			__dev_map_flush(map);
+			__dev_map_flush();
 			break;
 		case BPF_MAP_TYPE_CPUMAP:
 			__cpu_map_flush(map);

From patchwork Wed Dec 18 10:53:58 2019
X-Patchwork-Id: 1212226
From: Björn Töpel
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: Björn Töpel, bpf@vger.kernel.org, davem@davemloft.net, jakub.kicinski@netronome.com, hawk@kernel.org, john.fastabend@gmail.com, magnus.karlsson@intel.com, jonathan.lemon@gmail.com
Subject: [PATCH bpf-next 6/8] xdp: make cpumap flush_list common for all map instances
Date: Wed, 18 Dec 2019 11:53:58 +0100
Message-Id: <20191218105400.2895-7-bjorn.topel@gmail.com>
In-Reply-To: <20191218105400.2895-1-bjorn.topel@gmail.com>
References: <20191218105400.2895-1-bjorn.topel@gmail.com>

From: Björn Töpel

The cpumap flush list is used to track entries that need to be
flushed via the xdp_do_flush_map() function. This list used to be
per-map, but there is really no reason for that. Instead make the
flush list global for all cpumaps, which simplifies __cpu_map_flush()
and cpu_map_alloc().

Signed-off-by: Björn Töpel
Acked-by: Toke Høiland-Jørgensen
---
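Unlike devmap and xsk, cpumap had no existing init hook to piggyback
on, so one is added below. A note on the ordering, as a sketch of the
reasoning (not kernel text):

/* Why subsys_initcall() is early enough: the per-CPU flush lists only
 * need to be valid before the first cpumap is created, and maps are
 * created via the bpf(2) syscall, which cannot run until long after
 * subsys initcalls have completed.
 */
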
 include/linux/bpf.h |  4 ++--
 kernel/bpf/cpumap.c | 37 ++++++++++++++++++-------------------
 net/core/filter.c   |  2 +-
 3 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 31191804ca09..8f3e00c84f39 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -966,7 +966,7 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
 			     struct bpf_prog *xdp_prog);
 
 struct bpf_cpu_map_entry *__cpu_map_lookup_elem(struct bpf_map *map, u32 key);
-void __cpu_map_flush(struct bpf_map *map);
+void __cpu_map_flush(void);
 int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
 		    struct net_device *dev_rx);
 
@@ -1097,7 +1097,7 @@ struct bpf_cpu_map_entry *__cpu_map_lookup_elem(struct bpf_map *map, u32 key)
 	return NULL;
 }
 
-static inline void __cpu_map_flush(struct bpf_map *map)
+static inline void __cpu_map_flush(void)
 {
 }
 
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 66948fbc58d8..70f71b154fa5 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -72,17 +72,18 @@ struct bpf_cpu_map {
 	struct bpf_map map;
 	/* Below members specific for map type */
 	struct bpf_cpu_map_entry **cpu_map;
-	struct list_head __percpu *flush_list;
 };
 
+static DEFINE_PER_CPU(struct list_head, cpu_map_flush_list);
+
 static int bq_flush_to_queue(struct xdp_bulk_queue *bq);
 
 static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
 {
 	struct bpf_cpu_map *cmap;
 	int err = -ENOMEM;
-	int ret, cpu;
 	u64 cost;
+	int ret;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return ERR_PTR(-EPERM);
@@ -106,7 +107,6 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
 
 	/* make sure page count doesn't overflow */
 	cost = (u64) cmap->map.max_entries * sizeof(struct bpf_cpu_map_entry *);
-	cost += sizeof(struct list_head) * num_possible_cpus();
 
 	/* Notice returns -EPERM on if map size is larger than memlock limit */
 	ret = bpf_map_charge_init(&cmap->map.memory, cost);
@@ -115,23 +115,14 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
 		goto free_cmap;
 	}
 
-	cmap->flush_list = alloc_percpu(struct list_head);
-	if (!cmap->flush_list)
-		goto free_charge;
-
-	for_each_possible_cpu(cpu)
-		INIT_LIST_HEAD(per_cpu_ptr(cmap->flush_list, cpu));
-
 	/* Alloc array for possible remote "destination" CPUs */
 	cmap->cpu_map = bpf_map_area_alloc(cmap->map.max_entries *
 					   sizeof(struct bpf_cpu_map_entry *),
 					   cmap->map.numa_node);
 	if (!cmap->cpu_map)
-		goto free_percpu;
+		goto free_charge;
 
 	return &cmap->map;
-free_percpu:
-	free_percpu(cmap->flush_list);
 free_charge:
 	bpf_map_charge_finish(&cmap->map.memory);
 free_cmap:
@@ -499,7 +490,6 @@ static int cpu_map_update_elem(struct bpf_map *map, void *key, void *value,
 static void cpu_map_free(struct bpf_map *map)
 {
 	struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map);
-	int cpu;
 	u32 i;
 
 	/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
@@ -527,7 +517,6 @@ static void cpu_map_free(struct bpf_map *map)
 		/* bq flush and cleanup happens after RCU grace-period */
 		__cpu_map_entry_replace(cmap, i, NULL); /* call_rcu */
 	}
-	free_percpu(cmap->flush_list);
 	bpf_map_area_free(cmap->cpu_map);
 	kfree(cmap);
 }
@@ -619,7 +608,7 @@ static int bq_flush_to_queue(struct xdp_bulk_queue *bq)
  */
 static int bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf)
 {
-	struct list_head *flush_list = this_cpu_ptr(rcpu->cmap->flush_list);
+	struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list);
 	struct xdp_bulk_queue *bq = this_cpu_ptr(rcpu->bulkq);
 
 	if (unlikely(bq->count == CPU_MAP_BULK_SIZE))
@@ -658,10 +647,9 @@ int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
 	return 0;
 }
 
-void __cpu_map_flush(struct bpf_map *map)
+void __cpu_map_flush(void)
 {
-	struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map);
-	struct list_head *flush_list = this_cpu_ptr(cmap->flush_list);
+	struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list);
 	struct xdp_bulk_queue *bq, *tmp;
 
 	list_for_each_entry_safe(bq, tmp, flush_list, flush_node) {
@@ -671,3 +659,14 @@ void __cpu_map_flush(struct bpf_map *map)
 		wake_up_process(bq->obj->kthread);
 	}
 }
+
+static int __init cpu_map_init(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		INIT_LIST_HEAD(&per_cpu(cpu_map_flush_list, cpu));
+	return 0;
+}
+
+subsys_initcall(cpu_map_init);
 
diff --git a/net/core/filter.c b/net/core/filter.c
index b7570cb84902..c706325b3e66 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3558,7 +3558,7 @@ void xdp_do_flush_map(void)
 			__dev_map_flush();
 			break;
 		case BPF_MAP_TYPE_CPUMAP:
-			__cpu_map_flush(map);
+			__cpu_map_flush();
 			break;
 		case BPF_MAP_TYPE_XSKMAP:
 			__xsk_map_flush();

From patchwork Wed Dec 18 10:53:59 2019
X-Patchwork-Id: 1212228
From: Björn Töpel
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: Björn Töpel, bpf@vger.kernel.org, davem@davemloft.net, jakub.kicinski@netronome.com, hawk@kernel.org, john.fastabend@gmail.com, magnus.karlsson@intel.com, jonathan.lemon@gmail.com
Subject: [PATCH bpf-next 7/8] xdp: remove map_to_flush and map swap detection
Date: Wed, 18 Dec 2019 11:53:59 +0100
Message-Id: <20191218105400.2895-8-bjorn.topel@gmail.com>
In-Reply-To: <20191218105400.2895-1-bjorn.topel@gmail.com>
References: <20191218105400.2895-1-bjorn.topel@gmail.com>

From: Björn Töpel

Now that all XDP maps that can be used with bpf_redirect_map() track
entries to be flushed in a global fashion, there is no need to track
that the map has changed and flush from xdp_do_redirect_map()
anymore. All entries will be flushed in xdp_do_flush_map(). This
means that the map_to_flush member, and the corresponding checks, can
be removed. Moving the flush logic to one place, xdp_do_flush_map(),
gives bulking behavior and a performance boost.

Signed-off-by: Björn Töpel
Acked-by: Toke Høiland-Jørgensen
---
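The bulking argument, shown as a hypothetical driver NAPI poll loop
(illustrative; the sketch_* names below are made up):

/* With map_to_flush gone, the XDP program may redirect each packet
 * into a different map type, and the driver still performs exactly
 * one flush per poll cycle instead of one per map switch.
 */
static int sketch_napi_poll(int budget)
{
        int work = 0;

        while (work < budget && sketch_rx_pending()) {
                sketch_run_xdp();       /* may hit devmap/cpumap/xskmap */
                work++;
        }

        xdp_do_flush_map();     /* single bulk flush of all three lists */
        return work;
}
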
 include/linux/filter.h |  1 -
 net/core/filter.c      | 27 +++------------------------
 2 files changed, 3 insertions(+), 25 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 37ac7025031d..69d6706fc889 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -592,7 +592,6 @@ struct bpf_redirect_info {
 	u32 tgt_index;
 	void *tgt_value;
 	struct bpf_map *map;
-	struct bpf_map *map_to_flush;
 	u32 kern_flags;
 };
 
diff --git a/net/core/filter.c b/net/core/filter.c
index c706325b3e66..d9caa3e57ea1 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3547,26 +3547,9 @@ static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd,
 
 void xdp_do_flush_map(void)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
-	struct bpf_map *map = ri->map_to_flush;
-
-	ri->map_to_flush = NULL;
-	if (map) {
-		switch (map->map_type) {
-		case BPF_MAP_TYPE_DEVMAP:
-		case BPF_MAP_TYPE_DEVMAP_HASH:
-			__dev_map_flush();
-			break;
-		case BPF_MAP_TYPE_CPUMAP:
-			__cpu_map_flush();
-			break;
-		case BPF_MAP_TYPE_XSKMAP:
-			__xsk_map_flush();
-			break;
-		default:
-			break;
-		}
-	}
+	__dev_map_flush();
+	__cpu_map_flush();
+	__xsk_map_flush();
 }
 EXPORT_SYMBOL_GPL(xdp_do_flush_map);
 
@@ -3615,14 +3598,10 @@ static int xdp_do_redirect_map(struct net_device *dev, struct xdp_buff *xdp,
 	ri->tgt_value = NULL;
 	WRITE_ONCE(ri->map, NULL);
 
-	if (ri->map_to_flush && unlikely(ri->map_to_flush != map))
-		xdp_do_flush_map();
-
 	err = __bpf_tx_xdp_map(dev, fwd, map, xdp);
 	if (unlikely(err))
 		goto err;
 
-	ri->map_to_flush = map;
 	_trace_xdp_redirect_map(dev, xdp_prog, fwd, map, index);
 	return 0;
 err:

From patchwork Wed Dec 18 10:54:00 2019
X-Patchwork-Id: 1212230
From: Björn Töpel
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net
Cc: Björn Töpel, bpf@vger.kernel.org, davem@davemloft.net, jakub.kicinski@netronome.com, hawk@kernel.org, john.fastabend@gmail.com, magnus.karlsson@intel.com, jonathan.lemon@gmail.com
Subject: [PATCH bpf-next 8/8] xdp: simplify __bpf_tx_xdp_map()
Date: Wed, 18 Dec 2019 11:54:00 +0100
Message-Id: <20191218105400.2895-9-bjorn.topel@gmail.com>
In-Reply-To: <20191218105400.2895-1-bjorn.topel@gmail.com>
References: <20191218105400.2895-1-bjorn.topel@gmail.com>

From: Björn Töpel

The explicit error checking is not needed. Simply return the error
instead.

Signed-off-by: Björn Töpel
Acked-by: Toke Høiland-Jørgensen
---
 net/core/filter.c | 33 +++++++--------------------------
 1 file changed, 7 insertions(+), 26 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index d9caa3e57ea1..217af9974c86 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3510,35 +3510,16 @@ xdp_do_redirect_slow(struct net_device *dev, struct xdp_buff *xdp,
 }
 
 static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd,
-			    struct bpf_map *map,
-			    struct xdp_buff *xdp)
+			    struct bpf_map *map, struct xdp_buff *xdp)
 {
-	int err;
-
 	switch (map->map_type) {
 	case BPF_MAP_TYPE_DEVMAP:
-	case BPF_MAP_TYPE_DEVMAP_HASH: {
-		struct bpf_dtab_netdev *dst = fwd;
-
-		err = dev_map_enqueue(dst, xdp, dev_rx);
-		if (unlikely(err))
-			return err;
-		break;
-	}
-	case BPF_MAP_TYPE_CPUMAP: {
-		struct bpf_cpu_map_entry *rcpu = fwd;
-
-		err = cpu_map_enqueue(rcpu, xdp, dev_rx);
-		if (unlikely(err))
-			return err;
-		break;
-	}
-	case BPF_MAP_TYPE_XSKMAP: {
-		struct xdp_sock *xs = fwd;
-
-		err = __xsk_map_redirect(xs, xdp);
-		return err;
-	}
+	case BPF_MAP_TYPE_DEVMAP_HASH:
+		return dev_map_enqueue(fwd, xdp, dev_rx);
+	case BPF_MAP_TYPE_CPUMAP:
+		return cpu_map_enqueue(fwd, xdp, dev_rx);
+	case BPF_MAP_TYPE_XSKMAP:
+		return __xsk_map_redirect(fwd, xdp);
 	default:
 		break;
 	}