From patchwork Wed May 22 13:37:41 2019
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 1103341
X-Patchwork-Delegate: bpf@iogearbox.net
From: Björn Töpel
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org
Cc: Björn Töpel, magnus.karlsson@intel.com, bruce.richardson@intel.com,
 bpf@vger.kernel.org
Subject: [PATCH bpf-next v2 1/2] xsk: remove AF_XDP socket from map when the
 socket is released
Date: Wed, 22 May 2019 15:37:41 +0200
Message-Id: <20190522133742.7654-2-bjorn.topel@gmail.com>
In-Reply-To: <20190522133742.7654-1-bjorn.topel@gmail.com>
References: <20190522133742.7654-1-bjorn.topel@gmail.com>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: bpf@vger.kernel.org

From: Björn Töpel

When an AF_XDP socket is released/closed, the XSKMAP still holds a
reference to the socket in a "released" state. The socket will still
use the netdev queue resource and block newly created sockets from
attaching to that queue, but no user application can access the
fill/completion/Rx/Tx queues. As a result, every application has to
explicitly clear the map entry for the old "zombie state" socket. This
should be done automatically.

After this patch, when a socket is released, it removes itself from
all the XSKMAPs it resides in, allowing the application to drop the
code that cleans up the XSKMAP entry. This behavior is also closer to
that of SOCKMAP, making the two socket maps more consistent.

Suggested-by: Bruce Richardson
Signed-off-by: Björn Töpel
---
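For context: before this change, every AF_XDP application had to clear
its XSKMAP entry by hand at teardown, or the closed socket would linger
in the map and keep the netdev queue occupied. A minimal sketch of that
userspace teardown (the map fd, key, and helper name below are
illustrative, not part of this patch; bpf_map_delete_elem() and close()
are the regular libbpf/libc calls):

#include <unistd.h>
#include <linux/types.h>
#include <bpf/bpf.h>

static void xsk_teardown(int xsks_map_fd, __u32 queue_idx, int xsk_fd)
{
	/* Previously required: clear the XSKMAP slot by hand,
	 * otherwise the released socket stayed behind in a
	 * "zombie state" and blocked the netdev queue.
	 */
	bpf_map_delete_elem(xsks_map_fd, &queue_idx);

	/* With this patch, close() alone is enough: the socket
	 * unlinks itself from every XSKMAP it resides in.
	 */
	close(xsk_fd);
}

After this patch, the bpf_map_delete_elem() call above can simply be
dropped.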
 include/net/xdp_sock.h |   3 ++
 kernel/bpf/xskmap.c    | 101 +++++++++++++++++++++++++++++++++++------
 net/xdp/xsk.c          |  25 ++++++++++
 3 files changed, 116 insertions(+), 13 deletions(-)

diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index d074b6d60f8a..b5f8f9f826d0 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -68,6 +68,8 @@ struct xdp_sock {
 	 */
 	spinlock_t tx_completion_lock;
 	u64 rx_dropped;
+	struct list_head map_list;
+	spinlock_t map_list_lock;
 };
 
 struct xdp_buff;
@@ -87,6 +89,7 @@ struct xdp_umem_fq_reuse *xsk_reuseq_swap(struct xdp_umem *umem,
 					  struct xdp_umem_fq_reuse *newq);
 void xsk_reuseq_free(struct xdp_umem_fq_reuse *rq);
 struct xdp_umem *xdp_get_umem_from_qid(struct net_device *dev, u16 queue_id);
+void xsk_map_delete_from_node(struct xdp_sock *xs, struct list_head *node);
 
 static inline char *xdp_umem_get_data(struct xdp_umem *umem, u64 addr)
 {
diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
index 686d244e798d..318f6a07fa31 100644
--- a/kernel/bpf/xskmap.c
+++ b/kernel/bpf/xskmap.c
@@ -13,8 +13,58 @@ struct xsk_map {
 	struct bpf_map map;
 	struct xdp_sock **xsk_map;
 	struct list_head __percpu *flush_list;
+	spinlock_t lock;
 };
 
+/* Nodes are linked in the struct xdp_sock map_list field, and used to
+ * track which maps a certain socket resides in.
+ */
+struct xsk_map_node {
+	struct list_head node;
+	struct xsk_map *map;
+	struct xdp_sock **map_entry;
+};
+
+static struct xsk_map_node *xsk_map_node_alloc(void)
+{
+	return kzalloc(sizeof(struct xsk_map_node), GFP_ATOMIC | __GFP_NOWARN);
+}
+
+static void xsk_map_node_free(struct xsk_map_node *node)
+{
+	kfree(node);
+}
+
+static void xsk_map_node_init(struct xsk_map_node *node,
+			      struct xsk_map *map,
+			      struct xdp_sock **map_entry)
+{
+	node->map = map;
+	node->map_entry = map_entry;
+}
+
+static void xsk_map_add_node(struct xdp_sock *xs, struct xsk_map_node *node)
+{
+	spin_lock_bh(&xs->map_list_lock);
+	list_add_tail(&node->node, &xs->map_list);
+	spin_unlock_bh(&xs->map_list_lock);
+}
+
+static void xsk_map_del_node(struct xdp_sock *xs, struct xdp_sock **map_entry)
+{
+	struct xsk_map_node *n, *tmp;
+
+	spin_lock_bh(&xs->map_list_lock);
+	list_for_each_entry_safe(n, tmp, &xs->map_list, node) {
+		if (map_entry == n->map_entry) {
+			list_del(&n->node);
+			xsk_map_node_free(n);
+		}
+	}
+	spin_unlock_bh(&xs->map_list_lock);
+}
+
 static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 {
 	int cpu, err = -EINVAL;
@@ -34,6 +84,7 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 		return ERR_PTR(-ENOMEM);
 
 	bpf_map_init_from_attr(&m->map, attr);
+	spin_lock_init(&m->lock);
 
 	cost = (u64)m->map.max_entries * sizeof(struct xdp_sock *);
 	cost += sizeof(struct list_head) * num_possible_cpus();
@@ -78,15 +129,16 @@ static void xsk_map_free(struct bpf_map *map)
 	bpf_clear_redirect_map(map);
 	synchronize_net();
 
+	spin_lock_bh(&m->lock);
 	for (i = 0; i < map->max_entries; i++) {
-		struct xdp_sock *xs;
-
-		xs = m->xsk_map[i];
-		if (!xs)
-			continue;
+		struct xdp_sock **map_entry = &m->xsk_map[i];
+		struct xdp_sock *old_xs;
 
-		sock_put((struct sock *)xs);
+		old_xs = xchg(map_entry, NULL);
+		if (old_xs)
+			xsk_map_del_node(old_xs, map_entry);
 	}
+	spin_unlock_bh(&m->lock);
 
 	free_percpu(m->flush_list);
 	bpf_map_area_free(m->xsk_map);
@@ -162,7 +214,8 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
 {
 	struct xsk_map *m = container_of(map, struct xsk_map, map);
 	u32 i = *(u32 *)key, fd = *(u32 *)value;
-	struct xdp_sock *xs, *old_xs;
+	struct xdp_sock *xs, *old_xs, **entry;
+	struct xsk_map_node *node;
 	struct socket *sock;
 	int err;
 
@@ -189,11 +242,20 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
 		return -EOPNOTSUPP;
 	}
 
-	sock_hold(sock->sk);
+	node = xsk_map_node_alloc();
+	if (!node) {
+		sockfd_put(sock);
+		return -ENOMEM;
+	}
 
-	old_xs = xchg(&m->xsk_map[i], xs);
+	spin_lock_bh(&m->lock);
+	entry = &m->xsk_map[i];
+	xsk_map_node_init(node, m, entry);
+	xsk_map_add_node(xs, node);
+	old_xs = xchg(entry, xs);
 	if (old_xs)
-		sock_put((struct sock *)old_xs);
+		xsk_map_del_node(old_xs, entry);
+	spin_unlock_bh(&m->lock);
 
 	sockfd_put(sock);
 	return 0;
@@ -202,19 +264,32 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
 static int xsk_map_delete_elem(struct bpf_map *map, void *key)
 {
 	struct xsk_map *m = container_of(map, struct xsk_map, map);
-	struct xdp_sock *old_xs;
+	struct xdp_sock *old_xs, **map_entry;
 	int k = *(u32 *)key;
 
 	if (k >= map->max_entries)
 		return -EINVAL;
 
-	old_xs = xchg(&m->xsk_map[k], NULL);
+	spin_lock_bh(&m->lock);
+	map_entry = &m->xsk_map[k];
+	old_xs = xchg(map_entry, NULL);
 	if (old_xs)
-		sock_put((struct sock *)old_xs);
+		xsk_map_del_node(old_xs, map_entry);
+	spin_unlock_bh(&m->lock);
 
 	return 0;
 }
 
+void xsk_map_delete_from_node(struct xdp_sock *xs, struct list_head *node)
+{
+	struct xsk_map_node *n =
+		list_entry(node, struct xsk_map_node, node);
+
+	spin_lock_bh(&n->map->lock);
+	*n->map_entry = NULL;
+	spin_unlock_bh(&n->map->lock);
+	xsk_map_node_free(n);
+}
+
 const struct bpf_map_ops xsk_map_ops = {
	.map_alloc = xsk_map_alloc,
	.map_free = xsk_map_free,
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index a14e8864e4fa..1931d98a7754 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -335,6 +335,27 @@ static int xsk_init_queue(u32 entries, struct xsk_queue **queue,
 	return 0;
 }
 
+static struct list_head *xsk_map_list_pop(struct xdp_sock *xs)
+{
+	struct list_head *node = NULL;
+
+	spin_lock_bh(&xs->map_list_lock);
+	if (!list_empty(&xs->map_list)) {
+		node = xs->map_list.next;
+		list_del(node);
+	}
+	spin_unlock_bh(&xs->map_list_lock);
+	return node;
+}
+
+static void xsk_delete_from_maps(struct xdp_sock *xs)
+{
+	struct list_head *node;
+
+	while ((node = xsk_map_list_pop(xs)))
+		xsk_map_delete_from_node(xs, node);
+}
+
 static int xsk_release(struct socket *sock)
 {
 	struct sock *sk = sock->sk;
@@ -354,6 +375,7 @@ static int xsk_release(struct socket *sock)
 		sock_prot_inuse_add(net, sk->sk_prot, -1);
 	local_bh_enable();
 
+	xsk_delete_from_maps(xs);
 	if (xs->dev) {
 		struct net_device *dev = xs->dev;
 
@@ -767,6 +789,9 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
 	mutex_init(&xs->mutex);
 	spin_lock_init(&xs->tx_completion_lock);
 
+	INIT_LIST_HEAD(&xs->map_list);
+	spin_lock_init(&xs->map_list_lock);
+
 	mutex_lock(&net->xdp.lock);
 	sk_add_node_rcu(sk, &net->xdp.list);
 	mutex_unlock(&net->xdp.lock);
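For readers who have not used AF_XDP: an XSKMAP is consumed from an XDP
program through the bpf_redirect_map() helper, which is why stale
"zombie" entries were visible to applications in the first place. A
rough sketch in the style of the kernel's samples/bpf of this era (map
size and all names are illustrative; BPF_MAP_TYPE_XSKMAP, struct xdp_md
and bpf_redirect_map() are the real interfaces):

#include <linux/bpf.h>
#include "bpf_helpers.h"

/* One slot per Rx queue; userspace stores AF_XDP socket fds here. */
struct bpf_map_def SEC("maps") xsks_map = {
	.type = BPF_MAP_TYPE_XSKMAP,
	.key_size = sizeof(int),
	.value_size = sizeof(int),
	.max_entries = 64,
};

SEC("xdp_sock")
int xdp_sock_prog(struct xdp_md *ctx)
{
	int idx = ctx->rx_queue_index;

	/* Redirect the frame to the AF_XDP socket bound to this Rx
	 * queue; if the slot is empty, the redirect fails and the
	 * frame is dropped (XDP_ABORTED).
	 */
	return bpf_redirect_map(&xsks_map, idx, 0);
}

char _license[] SEC("license") = "GPL";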
From patchwork Wed May 22 13:37:42 2019
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 1103343
X-Patchwork-Delegate: bpf@iogearbox.net
From: Björn Töpel
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org
Cc: Björn Töpel, magnus.karlsson@intel.com, bruce.richardson@intel.com,
 bpf@vger.kernel.org
Subject: [PATCH bpf-next v2 2/2] xsk: support BPF_EXIST and BPF_NOEXIST flags
 in XSKMAP
Date: Wed, 22 May 2019 15:37:42 +0200
Message-Id: <20190522133742.7654-3-bjorn.topel@gmail.com>
In-Reply-To: <20190522133742.7654-1-bjorn.topel@gmail.com>
References: <20190522133742.7654-1-bjorn.topel@gmail.com>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: netdev@vger.kernel.org

From: Björn Töpel

The XSKMAP did not honor the BPF_EXIST/BPF_NOEXIST flags when updating
an entry. This patch addresses that.

Signed-off-by: Björn Töpel
---
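With both flags honored, an XSKMAP update behaves like other map types,
so userspace can distinguish "create only" from "replace only". A
minimal sketch (map fd, key, and the helper name are illustrative;
bpf_map_update_elem() with BPF_ANY/BPF_NOEXIST/BPF_EXIST is the regular
bpf(2)/libbpf interface, which returns non-zero and sets errno on
failure):

#include <errno.h>
#include <linux/types.h>
#include <bpf/bpf.h>

/* Insert an AF_XDP socket into an XSKMAP slot, preferring creation
 * and falling back to replacing an already-bound socket.
 */
static int xsks_map_insert(int map_fd, __u32 queue_idx, int xsk_fd)
{
	__u32 value = xsk_fd;

	/* Create only: after this patch, fails with EEXIST if the
	 * slot is already occupied (it used to fail unconditionally).
	 */
	if (!bpf_map_update_elem(map_fd, &queue_idx, &value, BPF_NOEXIST))
		return 0;
	if (errno != EEXIST)
		return -errno;

	/* Replace only: fails with ENOENT if the slot is empty. */
	if (bpf_map_update_elem(map_fd, &queue_idx, &value, BPF_EXIST))
		return -errno;
	return 0;
}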
 kernel/bpf/xskmap.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
index 318f6a07fa31..7f4f75ff466b 100644
--- a/kernel/bpf/xskmap.c
+++ b/kernel/bpf/xskmap.c
@@ -223,8 +223,6 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
 		return -EINVAL;
 	if (unlikely(i >= m->map.max_entries))
 		return -E2BIG;
-	if (unlikely(map_flags == BPF_NOEXIST))
-		return -EEXIST;
 
 	sock = sockfd_lookup(fd, &err);
 	if (!sock)
@@ -250,15 +248,29 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
 
 	spin_lock_bh(&m->lock);
 	entry = &m->xsk_map[i];
+	old_xs = *entry;
+	if (old_xs && map_flags == BPF_NOEXIST) {
+		err = -EEXIST;
+		goto out;
+	} else if (!old_xs && map_flags == BPF_EXIST) {
+		err = -ENOENT;
+		goto out;
+	}
 	xsk_map_node_init(node, m, entry);
 	xsk_map_add_node(xs, node);
-	old_xs = xchg(entry, xs);
+	*entry = xs;
 	if (old_xs)
 		xsk_map_del_node(old_xs, entry);
 	spin_unlock_bh(&m->lock);
 
 	sockfd_put(sock);
 	return 0;
+
+out:
+	spin_unlock_bh(&m->lock);
+	sockfd_put(sock);
+	xsk_map_node_free(node);
+	return err;
 }
 
 static int xsk_map_delete_elem(struct bpf_map *map, void *key)