From patchwork Fri Sep 4 13:53:26 2020
From: Björn Töpel
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: magnus.karlsson@intel.com, davem@davemloft.net, kuba@kernel.org, hawk@kernel.org, john.fastabend@gmail.com, intel-wired-lan@lists.osuosl.org
Subject: [PATCH bpf-next 1/6] xsk: improve xdp_do_redirect() error codes
Date: Fri, 4 Sep 2020 15:53:26 +0200
Message-Id: <20200904135332.60259-2-bjorn.topel@gmail.com>

The error codes returned by xdp_do_redirect() when redirecting a frame to an AF_XDP socket have not been very useful. A driver could not distinguish between different errors. Prior to this change the following codes were used:

  Socket not bound or incorrect queue/netdev: EINVAL
  XDP frame/AF_XDP buffer size mismatch:      ENOSPC
  Could not allocate buffer (copy mode):      ENOSPC
  AF_XDP Rx buffer full:                      ENOSPC

After this change:

  Socket not bound or incorrect queue/netdev: EINVAL
  XDP frame/AF_XDP buffer size mismatch:      ENOSPC
  Could not allocate buffer (copy mode):      ENOMEM
  AF_XDP Rx buffer full:                      ENOBUFS

An AF_XDP zero-copy driver can now potentially determine if the failure was due to a full Rx buffer, and if so stop processing more frames, yielding to the userland AF_XDP application.
Signed-off-by: Björn Töpel
---
 net/xdp/xsk.c       | 2 +-
 net/xdp/xsk_queue.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 3895697f8540..db38560c4af7 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -197,7 +197,7 @@ static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len,
 	xsk_xdp = xsk_buff_alloc(xs->pool);
 	if (!xsk_xdp) {
 		xs->rx_dropped++;
-		return -ENOSPC;
+		return -ENOMEM;
 	}
 
 	xsk_copy_xdp(xsk_xdp, xdp, len);

diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 2d883f631c85..b76966cf122e 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -305,7 +305,7 @@ static inline int xskq_prod_reserve_desc(struct xsk_queue *q,
 	u32 idx;
 
 	if (xskq_prod_is_full(q))
-		return -ENOSPC;
+		return -ENOBUFS;
 
 	/* A, matches D */
 	idx = q->cached_prod++ & q->ring_mask;

From patchwork Fri Sep 4 13:53:27 2020
From: Björn Töpel
Subject: [PATCH bpf-next 2/6] xdp: introduce xdp_do_redirect_ext() function
Date: Fri, 4 Sep 2020 15:53:27 +0200
Message-Id: <20200904135332.60259-3-bjorn.topel@gmail.com>

Introduce xdp_do_redirect_ext(), which returns additional information to the caller. For now, that is the type of map the packet was redirected to. This enables the driver to have more fine-grained control, e.g. if the redirect fails due to a full AF_XDP Rx queue (error code ENOBUFS and the map is an XSKMAP), a zero-copy enabled driver should yield to userland as soon as possible.
Signed-off-by: Björn Töpel
---
 include/linux/filter.h |  2 ++
 net/core/filter.c      | 16 ++++++++++++++--
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 995625950cc1..0060c2c8abc3 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -942,6 +942,8 @@ static inline int xdp_ok_fwd_dev(const struct net_device *fwd,
  */
 int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
 			    struct xdp_buff *xdp, struct bpf_prog *prog);
+int xdp_do_redirect_ext(struct net_device *dev, struct xdp_buff *xdp,
+			struct bpf_prog *xdp_prog, enum bpf_map_type *map_type);
 int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 		    struct bpf_prog *prog);

diff --git a/net/core/filter.c b/net/core/filter.c
index 47eef9a0be6a..ce6098210a23 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3596,8 +3596,8 @@ void bpf_clear_redirect_map(struct bpf_map *map)
 	}
 }
 
-int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
-		    struct bpf_prog *xdp_prog)
+int xdp_do_redirect_ext(struct net_device *dev, struct xdp_buff *xdp,
+			struct bpf_prog *xdp_prog, enum bpf_map_type *map_type)
 {
 	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
 	struct bpf_map *map = READ_ONCE(ri->map);
@@ -3609,6 +3609,8 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 	ri->tgt_value = NULL;
 	WRITE_ONCE(ri->map, NULL);
 
+	*map_type = BPF_MAP_TYPE_UNSPEC;
+
 	if (unlikely(!map)) {
 		fwd = dev_get_by_index_rcu(dev_net(dev), index);
 		if (unlikely(!fwd)) {
@@ -3618,6 +3620,7 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 		err = dev_xdp_enqueue(fwd, xdp, dev);
 	} else {
+		*map_type = map->map_type;
 		err = __bpf_tx_xdp_map(dev, fwd, map, xdp);
 	}
 
@@ -3630,6 +3633,15 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 	_trace_xdp_redirect_map_err(dev, xdp_prog, fwd, map, index, err);
 	return err;
 }
+EXPORT_SYMBOL_GPL(xdp_do_redirect_ext);
+
+int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
+		    struct bpf_prog *xdp_prog)
+{
+	enum bpf_map_type dummy;
+
+	return xdp_do_redirect_ext(dev, xdp, xdp_prog, &dummy);
+}
 EXPORT_SYMBOL_GPL(xdp_do_redirect);
 
 static int xdp_do_generic_redirect_map(struct net_device *dev,

From patchwork Fri Sep 4 13:53:28 2020
From: Björn Töpel
Subject: [PATCH bpf-next 3/6] xsk: introduce xsk_do_redirect_rx_full() helper
Date: Fri, 4 Sep 2020 15:53:28 +0200
Message-Id: <20200904135332.60259-4-bjorn.topel@gmail.com>

The xsk_do_redirect_rx_full() helper can be used to check whether a failure of xdp_do_redirect() was due to the AF_XDP socket having a full Rx ring.

Signed-off-by: Björn Töpel
---
 include/net/xdp_sock_drv.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 5b1ee8a9976d..34c58b5fbc28 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -116,6 +116,11 @@ static inline void xsk_buff_raw_dma_sync_for_device(struct xsk_buff_pool *pool,
 	xp_dma_sync_for_device(pool, dma, size);
 }
 
+static inline bool xsk_do_redirect_rx_full(int err, enum bpf_map_type map_type)
+{
+	return err == -ENOBUFS && map_type == BPF_MAP_TYPE_XSKMAP;
+}
+
 #else
 
 static inline void xsk_tx_completed(struct xsk_buff_pool *pool, u32 nb_entries)
@@ -235,6 +240,10 @@ static inline void xsk_buff_raw_dma_sync_for_device(struct xsk_buff_pool *pool,
 {
 }
 
+static inline bool xsk_do_redirect_rx_full(int err, enum bpf_map_type map_type)
+{
+	return false;
+}
 #endif /* CONFIG_XDP_SOCKETS */
 
 #endif /* _LINUX_XDP_SOCK_DRV_H */

From patchwork Fri Sep 4 13:53:29 2020
From: Björn Töpel
Subject: [PATCH bpf-next 4/6] i40e, xsk: finish napi loop if AF_XDP Rx queue is full
Date: Fri, 4 Sep 2020 15:53:29 +0200
Message-Id: <20200904135332.60259-5-bjorn.topel@gmail.com>

Make the AF_XDP zero-copy path aware when the reason for a redirect failure is a full Rx queue. If so, exit the napi loop as soon as possible (exit the softirq processing), so that the userspace AF_XDP process can hopefully empty the Rx queue.
This mainly helps the "one core scenario", where the userland process and the Rx softirq processing are on the same core. Note that the early exit can only be performed if the "need wakeup" feature is enabled, because otherwise there is no notification mechanism available from the kernel side.

This requires that the driver starts using the newly introduced xdp_do_redirect_ext() and xsk_do_redirect_rx_full() functions.

Signed-off-by: Björn Töpel
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 23 +++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 2a1153d8957b..3ac803ee8d51 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -142,13 +142,15 @@ int i40e_xsk_pool_setup(struct i40e_vsi *vsi, struct xsk_buff_pool *pool,
  * i40e_run_xdp_zc - Executes an XDP program on an xdp_buff
  * @rx_ring: Rx ring
  * @xdp: xdp_buff used as input to the XDP program
+ * @early_exit: true means that the napi loop should exit early
  *
  * Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR}
  **/
-static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
+static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp, bool *early_exit)
 {
 	int err, result = I40E_XDP_PASS;
 	struct i40e_ring *xdp_ring;
+	enum bpf_map_type map_type;
 	struct bpf_prog *xdp_prog;
 	u32 act;
 
@@ -167,8 +169,13 @@ static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
 		result = i40e_xmit_xdp_tx_ring(xdp, xdp_ring);
 		break;
 	case XDP_REDIRECT:
-		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED;
+		err = xdp_do_redirect_ext(rx_ring->netdev, xdp, xdp_prog, &map_type);
+		if (err) {
+			*early_exit = xsk_do_redirect_rx_full(err, map_type);
+			result = I40E_XDP_CONSUMED;
+		} else {
+			result = I40E_XDP_REDIR;
+		}
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -268,8 +275,8 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 {
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
+	bool early_exit = false, failure = false;
 	unsigned int xdp_res, xdp_xmit = 0;
-	bool failure = false;
 	struct sk_buff *skb;
 
 	while (likely(total_rx_packets < (unsigned int)budget)) {
@@ -316,7 +323,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 		(*bi)->data_end = (*bi)->data + size;
 		xsk_buff_dma_sync_for_cpu(*bi, rx_ring->xsk_pool);
 
-		xdp_res = i40e_run_xdp_zc(rx_ring, *bi);
+		xdp_res = i40e_run_xdp_zc(rx_ring, *bi, &early_exit);
 		if (xdp_res) {
 			if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR))
 				xdp_xmit |= xdp_res;
@@ -329,6 +336,8 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 			cleaned_count++;
 			i40e_inc_ntc(rx_ring);
+			if (early_exit)
+				break;
 			continue;
 		}
 
@@ -363,12 +372,12 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 	i40e_update_rx_stats(rx_ring, total_rx_bytes, total_rx_packets);
 
 	if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) {
-		if (failure || rx_ring->next_to_clean == rx_ring->next_to_use)
+		if (early_exit || failure || rx_ring->next_to_clean == rx_ring->next_to_use)
 			xsk_set_rx_need_wakeup(rx_ring->xsk_pool);
 		else
 			xsk_clear_rx_need_wakeup(rx_ring->xsk_pool);
 
-		return (int)total_rx_packets;
+		return early_exit ? 0 : (int)total_rx_packets;
 	}
 
 	return failure ? budget : (int)total_rx_packets;
 }

From patchwork Fri Sep 4 13:53:30 2020
From: Björn Töpel
Subject: [PATCH bpf-next 5/6] ice, xsk: finish napi loop if AF_XDP Rx queue is full
Date: Fri, 4 Sep 2020 15:53:30 +0200
Message-Id: <20200904135332.60259-6-bjorn.topel@gmail.com>
Make the AF_XDP zero-copy path aware when the reason for a redirect failure is a full Rx queue. If so, exit the napi loop as soon as possible (exit the softirq processing), so that the userspace AF_XDP process can hopefully empty the Rx queue.

This mainly helps the "one core scenario", where the userland process and the Rx softirq processing are on the same core. Note that the early exit can only be performed if the "need wakeup" feature is enabled, because otherwise there is no notification mechanism available from the kernel side.

This requires that the driver starts using the newly introduced xdp_do_redirect_ext() and xsk_do_redirect_rx_full() functions.

Signed-off-by: Björn Töpel
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 797886524054..f698d0199b0a 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -502,13 +502,15 @@ ice_construct_skb_zc(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
  * ice_run_xdp_zc - Executes an XDP program in zero-copy path
  * @rx_ring: Rx ring
  * @xdp: xdp_buff used as input to the XDP program
+ * @early_exit: true means that the napi loop should exit early
  *
  * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
  */
 static int
-ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp)
+ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp, bool *early_exit)
 {
 	int err, result = ICE_XDP_PASS;
+	enum bpf_map_type map_type;
 	struct bpf_prog *xdp_prog;
 	struct ice_ring *xdp_ring;
 	u32 act;
 
@@ -529,8 +531,13 @@ ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp)
 		result = ice_xmit_xdp_buff(xdp, xdp_ring);
 		break;
 	case XDP_REDIRECT:
-		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		result = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;
+		err = xdp_do_redirect_ext(rx_ring->netdev, xdp, xdp_prog, &map_type);
+		if (err) {
+			*early_exit = xsk_do_redirect_rx_full(err, map_type);
+			result = ICE_XDP_CONSUMED;
+		} else {
+			result = ICE_XDP_REDIR;
+		}
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -558,8 +565,8 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
 {
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
+	bool early_exit = false, failure = false;
 	unsigned int xdp_xmit = 0;
-	bool failure = false;
 
 	while (likely(total_rx_packets < (unsigned int)budget)) {
 		union ice_32b_rx_flex_desc *rx_desc;
@@ -597,7 +604,7 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
 		rx_buf->xdp->data_end = rx_buf->xdp->data + size;
 		xsk_buff_dma_sync_for_cpu(rx_buf->xdp, rx_ring->xsk_pool);
 
-		xdp_res = ice_run_xdp_zc(rx_ring, rx_buf->xdp);
+		xdp_res = ice_run_xdp_zc(rx_ring, rx_buf->xdp, &early_exit);
 		if (xdp_res) {
 			if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))
 				xdp_xmit |= xdp_res;
@@ -610,6 +617,8 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
 			cleaned_count++;
 			ice_bump_ntc(rx_ring);
+			if (early_exit)
+				break;
 			continue;
 		}
 
@@ -646,12 +655,12 @@ int ice_clean_rx_irq_zc(struct ice_ring *rx_ring, int budget)
 	ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
 
 	if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) {
-		if (failure || rx_ring->next_to_clean == rx_ring->next_to_use)
+		if (early_exit || failure || rx_ring->next_to_clean == rx_ring->next_to_use)
 			xsk_set_rx_need_wakeup(rx_ring->xsk_pool);
 		else
 			xsk_clear_rx_need_wakeup(rx_ring->xsk_pool);
 
-		return (int)total_rx_packets;
+		return early_exit ? 0 : (int)total_rx_packets;
 	}
 
 	return failure ? budget : (int)total_rx_packets;

From patchwork Fri Sep 4 13:53:31 2020
From: Björn Töpel
To: ast@kernel.org, daniel@iogearbox.net, netdev@vger.kernel.org,
    bpf@vger.kernel.org
Cc: Björn Töpel, magnus.karlsson@intel.com, davem@davemloft.net,
    kuba@kernel.org, hawk@kernel.org, john.fastabend@gmail.com,
    intel-wired-lan@lists.osuosl.org
Subject: [PATCH bpf-next 6/6] ixgbe, xsk: finish napi loop if AF_XDP Rx queue is full
Date: Fri, 4 Sep 2020 15:53:31 +0200
Message-Id: <20200904135332.60259-7-bjorn.topel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200904135332.60259-1-bjorn.topel@gmail.com>
References: <20200904135332.60259-1-bjorn.topel@gmail.com>
MIME-Version: 1.0

From: Björn Töpel

Make the AF_XDP zero-copy path aware when a redirect failure was caused
by a full Rx queue. If so, exit the napi loop as soon as possible (exit
the softirq processing), so that the userspace AF_XDP process can
hopefully empty the Rx queue. This mainly helps the "one core scenario",
where the userland process and the Rx softirq processing run on the same
core.

Note that the early exit can only be performed if the "need wakeup"
feature is enabled, because otherwise there is no notification mechanism
available from the kernel side.

This requires that the driver starts using the newly introduced
xdp_do_redirect_ext() and xsk_do_redirect_rx_full() functions.

Signed-off-by: Björn Töpel
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 23 ++++++++++++++------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index 3771857cf887..a4aebfd986b3 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -93,9 +93,11 @@ int ixgbe_xsk_pool_setup(struct ixgbe_adapter *adapter,
 
 static int ixgbe_run_xdp_zc(struct ixgbe_adapter *adapter,
 			    struct ixgbe_ring *rx_ring,
-			    struct xdp_buff *xdp)
+			    struct xdp_buff *xdp,
+			    bool *early_exit)
 {
 	int err, result = IXGBE_XDP_PASS;
+	enum bpf_map_type map_type;
 	struct bpf_prog *xdp_prog;
 	struct xdp_frame *xdpf;
 	u32 act;
@@ -116,8 +118,13 @@ static int ixgbe_run_xdp_zc(struct ixgbe_adapter *adapter,
 		result = ixgbe_xmit_xdp_ring(adapter, xdpf);
 		break;
 	case XDP_REDIRECT:
-		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		result = !err ? IXGBE_XDP_REDIR : IXGBE_XDP_CONSUMED;
+		err = xdp_do_redirect_ext(rx_ring->netdev, xdp, xdp_prog, &map_type);
+		if (err) {
+			*early_exit = xsk_do_redirect_rx_full(err, map_type);
+			result = IXGBE_XDP_CONSUMED;
+		} else {
+			result = IXGBE_XDP_REDIR;
+		}
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -235,8 +242,8 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	struct ixgbe_adapter *adapter = q_vector->adapter;
 	u16 cleaned_count = ixgbe_desc_unused(rx_ring);
+	bool early_exit = false, failure = false;
 	unsigned int xdp_res, xdp_xmit = 0;
-	bool failure = false;
 	struct sk_buff *skb;
 
 	while (likely(total_rx_packets < budget)) {
@@ -288,7 +295,7 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 		bi->xdp->data_end = bi->xdp->data + size;
 		xsk_buff_dma_sync_for_cpu(bi->xdp, rx_ring->xsk_pool);
 
-		xdp_res = ixgbe_run_xdp_zc(adapter, rx_ring, bi->xdp);
+		xdp_res = ixgbe_run_xdp_zc(adapter, rx_ring, bi->xdp, &early_exit);
 
 		if (xdp_res) {
 			if (xdp_res & (IXGBE_XDP_TX | IXGBE_XDP_REDIR))
@@ -302,6 +309,8 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 			cleaned_count++;
 			ixgbe_inc_ntc(rx_ring);
+			if (early_exit)
+				break;
 			continue;
 		}
@@ -346,12 +355,12 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 	q_vector->rx.total_bytes += total_rx_bytes;
 
 	if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) {
-		if (failure || rx_ring->next_to_clean == rx_ring->next_to_use)
+		if (early_exit || failure || rx_ring->next_to_clean == rx_ring->next_to_use)
 			xsk_set_rx_need_wakeup(rx_ring->xsk_pool);
 		else
 			xsk_clear_rx_need_wakeup(rx_ring->xsk_pool);
 
-		return (int)total_rx_packets;
+		return early_exit ? 0 : (int)total_rx_packets;
 	}
 
 	return failure ? budget : (int)total_rx_packets;
 }
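Both drivers in this series end up with the same shape: consume descriptors until the napi budget is exhausted, break out of the loop when the redirect failure was a full XSK Rx queue, arm the need-wakeup flag, and return 0 so softirq processing finishes and userspace can drain the queue. As a rough, hypothetical userspace sketch of just that accounting (the ring, pool, and XDP machinery are stubbed out; struct and function names below are invented for illustration, not kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the driver's Rx state; not the real ring structures. */
struct sim_ring {
	int packets_avail;   /* descriptors ready on the HW ring */
	bool redirect_full;  /* models xsk_do_redirect_rx_full() == true */
	bool need_wakeup;    /* what xsk_set/clear_rx_need_wakeup would record */
};

/*
 * Models the loop and return-value logic of the patched
 * ice_clean_rx_irq_zc()/ixgbe_clean_rx_irq_zc(): process up to
 * @budget packets, break out early when the XSK Rx queue is full,
 * and return 0 in that case so napi polling stops and the AF_XDP
 * process gets a chance to empty the queue.
 */
static int sim_clean_rx_irq_zc(struct sim_ring *rx, int budget)
{
	bool early_exit = false, failure = false;
	int total_rx_packets = 0;

	while (total_rx_packets < budget && rx->packets_avail > 0) {
		rx->packets_avail--;
		total_rx_packets++;
		if (rx->redirect_full) {
			/* redirect hit a full XSK Rx queue: stop early */
			early_exit = true;
			break;
		}
	}

	/* "need wakeup" is set on early exit, failure, or an empty ring. */
	rx->need_wakeup = early_exit || failure || rx->packets_avail == 0;

	return early_exit ? 0 : total_rx_packets;
}
```

The key design point mirrored here is that returning 0 on early exit (rather than the packet count) tells the napi core that the poll is done, instead of letting the driver keep getting rescheduled against a queue userspace has not drained yet.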