From patchwork Fri Jul 10 13:56:54 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 1326860
X-Patchwork-Delegate: davem@davemloft.net
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com,
 petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com,
 saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org,
 kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com,
 xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel
Subject: [PATCH net-next 01/13] net: sched: Pass qdisc reference in struct
 flow_block_offload
Date: Fri, 10 Jul 2020 16:56:54 +0300
Message-Id: <20200710135706.601409-2-idosch@idosch.org>
In-Reply-To: <20200710135706.601409-1-idosch@idosch.org>
References: <20200710135706.601409-1-idosch@idosch.org>
X-Mailing-List: netdev@vger.kernel.org

From: Petr Machata

Previously, shared blocks were only relevant for the pseudo-qdiscs ingress
and clsact. Recently, a qevent facility was introduced, which allows binding
blocks to well-defined slots of a qdisc instance. RED in particular got two
qevents: early_drop and mark. Drivers that wish to offload these blocks will
be sent the usual notification, and need to know which qdisc the block is
related to.

To that end, extend flow_block_offload with a "sch" pointer, and initialize
it as appropriate. This prompts changes in the indirect block facility, which
now tracks the scheduler instead of the netdevice. Update the signatures of
several functions similarly. Deduce the device from the scheduler where
necessary.

Signed-off-by: Petr Machata
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c  | 11 ++++++----
 .../ethernet/mellanox/mlx5/core/en/rep/tc.c   | 11 +++++-----
 .../net/ethernet/netronome/nfp/flower/main.h  |  2 +-
 .../ethernet/netronome/nfp/flower/offload.c   | 11 ++++++----
 include/net/flow_offload.h                    |  9 ++++----
 net/core/flow_offload.c                       | 12 +++++------
 net/netfilter/nf_flow_table_offload.c         | 17 +++++++--------
 net/netfilter/nf_tables_offload.c             | 20 ++++++++++--------
 net/sched/cls_api.c                           | 21 +++++++++++--------
 9 files changed, 63 insertions(+), 51 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
index 0a9a4467d7c7..fd016adfde5d 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
@@ -1888,10 +1888,11 @@ static void bnxt_tc_setup_indr_rel(void *cb_priv)
 	kfree(priv);
 }

-static int bnxt_tc_setup_indr_block(struct net_device *netdev, struct bnxt *bp,
+static int bnxt_tc_setup_indr_block(struct Qdisc *sch, struct bnxt *bp,
 				    struct flow_block_offload *f, void *data,
 				    void (*cleanup)(struct flow_block_cb *block_cb))
 {
+	struct net_device *netdev = sch->dev_queue->dev;
 	struct bnxt_flower_indr_block_cb_priv *cb_priv;
 	struct flow_block_cb *block_cb;

@@ -1911,7 +1912,7 @@ static int bnxt_tc_setup_indr_block(struct net_device *netdev, struct bnxt *bp,
 	block_cb = flow_indr_block_cb_alloc(bnxt_tc_setup_indr_block_cb,
 					    cb_priv, cb_priv,
 					    bnxt_tc_setup_indr_rel, f,
-					    netdev, data, bp, cleanup);
+					    sch, data, bp, cleanup);
 	if (IS_ERR(block_cb)) {
 		list_del(&cb_priv->list);
 		kfree(cb_priv);
@@ -1946,17 +1947,19 @@ static bool bnxt_is_netdev_indr_offload(struct net_device *netdev)
 	return netif_is_vxlan(netdev);
 }

-static int bnxt_tc_setup_indr_cb(struct net_device *netdev, void *cb_priv,
+static int bnxt_tc_setup_indr_cb(struct Qdisc *sch, void *cb_priv,
 				 enum tc_setup_type type, void *type_data,
 				 void *data,
 				 void (*cleanup)(struct flow_block_cb *block_cb))
 {
+	struct net_device *netdev = sch->dev_queue->dev;
+
 	if (!bnxt_is_netdev_indr_offload(netdev))
 		return -EOPNOTSUPP;

 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return bnxt_tc_setup_indr_block(netdev, cb_priv, type_data, data,
+		return bnxt_tc_setup_indr_block(sch, cb_priv, type_data, data,
 						cleanup);
 	default:
 		break;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
index eefeb1cdc2ee..4fc42c1955ff 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
@@ -404,7 +404,7 @@ static void mlx5e_rep_indr_block_unbind(void *cb_priv)
 static LIST_HEAD(mlx5e_block_cb_list);

 static int
-mlx5e_rep_indr_setup_block(struct net_device *netdev,
+mlx5e_rep_indr_setup_block(struct Qdisc *sch,
 			   struct mlx5e_rep_priv *rpriv,
 			   struct flow_block_offload *f,
 			   flow_setup_cb_t *setup_cb,
@@ -412,6 +412,7 @@ mlx5e_rep_indr_setup_block(struct net_device *netdev,
 			   void (*cleanup)(struct flow_block_cb *block_cb))
 {
 	struct mlx5e_priv *priv = netdev_priv(rpriv->netdev);
+	struct net_device *netdev = sch->dev_queue->dev;
 	struct mlx5e_rep_indr_block_priv *indr_priv;
 	struct flow_block_cb *block_cb;

@@ -442,7 +443,7 @@ mlx5e_rep_indr_setup_block(struct net_device *netdev,

 	block_cb = flow_indr_block_cb_alloc(setup_cb, indr_priv, indr_priv,
 					    mlx5e_rep_indr_block_unbind,
-					    f, netdev, data, rpriv,
+					    f, sch, data, rpriv,
 					    cleanup);
 	if (IS_ERR(block_cb)) {
 		list_del(&indr_priv->list);
@@ -472,18 +473,18 @@ mlx5e_rep_indr_setup_block(struct net_device *netdev,
 }

 static
-int mlx5e_rep_indr_setup_cb(struct net_device *netdev, void *cb_priv,
+int mlx5e_rep_indr_setup_cb(struct Qdisc *sch, void *cb_priv,
 			    enum tc_setup_type type, void *type_data,
 			    void *data,
 			    void (*cleanup)(struct flow_block_cb *block_cb))
 {
 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return mlx5e_rep_indr_setup_block(netdev, cb_priv, type_data,
+		return mlx5e_rep_indr_setup_block(sch, cb_priv, type_data,
 						  mlx5e_rep_indr_setup_tc_cb,
 						  data, cleanup);
 	case TC_SETUP_FT:
-		return mlx5e_rep_indr_setup_block(netdev, cb_priv, type_data,
+		return mlx5e_rep_indr_setup_block(sch, cb_priv, type_data,
 						  mlx5e_rep_indr_setup_ft_cb,
 						  data, cleanup);
 	default:
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index 7f54a620acad..84f1b69bc6dd 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -458,7 +458,7 @@ void nfp_flower_qos_cleanup(struct nfp_app *app);
 int nfp_flower_setup_qos_offload(struct nfp_app *app, struct net_device *netdev,
 				 struct tc_cls_matchall_offload *flow);
 void nfp_flower_stats_rlim_reply(struct nfp_app *app, struct sk_buff *skb);
-int nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv,
+int nfp_flower_indr_setup_tc_cb(struct Qdisc *sch, void *cb_priv,
 				enum tc_setup_type type, void *type_data,
 				void *data,
 				void (*cleanup)(struct flow_block_cb *block_cb));
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 3af27bb5f4b0..f2acce64613c 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -1646,10 +1646,11 @@ void nfp_flower_setup_indr_tc_release(void *cb_priv)
 }

 static int
-nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
+nfp_flower_setup_indr_tc_block(struct Qdisc *sch, struct nfp_app *app,
 			       struct flow_block_offload *f, void *data,
 			       void (*cleanup)(struct flow_block_cb *block_cb))
 {
+	struct net_device *netdev = sch->dev_queue->dev;
 	struct nfp_flower_indr_block_cb_priv *cb_priv;
 	struct nfp_flower_priv *priv = app->priv;
 	struct flow_block_cb *block_cb;
@@ -1680,7 +1681,7 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
 	block_cb = flow_indr_block_cb_alloc(nfp_flower_setup_indr_block_cb,
 					    cb_priv, cb_priv,
 					    nfp_flower_setup_indr_tc_release,
-					    f, netdev, data, app, cleanup);
+					    f, sch, data, app, cleanup);
 	if (IS_ERR(block_cb)) {
 		list_del(&cb_priv->list);
 		kfree(cb_priv);
@@ -1711,17 +1712,19 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
 }

 int
-nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv,
+nfp_flower_indr_setup_tc_cb(struct Qdisc *sch, void *cb_priv,
 			    enum tc_setup_type type, void *type_data,
 			    void *data,
 			    void (*cleanup)(struct flow_block_cb *block_cb))
 {
+	struct net_device *netdev = sch->dev_queue->dev;
+
 	if (!nfp_fl_is_netdev_to_offload(netdev))
 		return -EOPNOTSUPP;

 	switch (type) {
 	case TC_SETUP_BLOCK:
-		return nfp_flower_setup_indr_tc_block(netdev, cb_priv,
+		return nfp_flower_setup_indr_tc_block(sch, cb_priv,
 						      type_data, data, cleanup);
 	default:
 		return -EOPNOTSUPP;
diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index de395498440d..fda29140bdc5 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -444,6 +444,7 @@ struct flow_block_offload {
 	struct list_head cb_list;
 	struct list_head *driver_block_list;
 	struct netlink_ext_ack *extack;
+	struct Qdisc *sch;
 };

 enum tc_setup_type;
@@ -454,7 +455,7 @@ struct flow_block_cb;

 struct flow_block_indr {
 	struct list_head		list;
-	struct net_device		*dev;
+	struct Qdisc			*sch;
 	enum flow_block_binder_type	binder_type;
 	void				*data;
 	void				*cb_priv;
@@ -479,7 +480,7 @@ struct flow_block_cb *flow_indr_block_cb_alloc(flow_setup_cb_t *cb,
 					       void *cb_ident, void *cb_priv,
 					       void (*release)(void *cb_priv),
 					       struct flow_block_offload *bo,
-					       struct net_device *dev, void *data,
+					       struct Qdisc *sch, void *data,
 					       void *indr_cb_priv,
 					       void (*cleanup)(struct flow_block_cb *block_cb));
 void flow_block_cb_free(struct flow_block_cb *block_cb);
@@ -553,7 +554,7 @@ static inline void flow_block_init(struct flow_block *flow_block)
 	INIT_LIST_HEAD(&flow_block->cb_list);
 }

-typedef int flow_indr_block_bind_cb_t(struct net_device *dev, void *cb_priv,
+typedef int flow_indr_block_bind_cb_t(struct Qdisc *sch, void *cb_priv,
 				      enum tc_setup_type type, void *type_data,
 				      void *data,
 				      void (*cleanup)(struct flow_block_cb *block_cb));
@@ -561,7 +562,7 @@ typedef int flow_indr_block_bind_cb_t(struct net_device *dev, void *cb_priv,
 int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv);
 void flow_indr_dev_unregister(flow_indr_block_bind_cb_t *cb, void *cb_priv,
 			      void (*release)(void *cb_priv));
-int flow_indr_dev_setup_offload(struct net_device *dev,
+int flow_indr_dev_setup_offload(struct Qdisc *sch,
 				enum tc_setup_type type, void *data,
 				struct flow_block_offload *bo,
 				void (*cleanup)(struct flow_block_cb *block_cb));
diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
index b739cfab796e..9877c55f3e77 100644
--- a/net/core/flow_offload.c
+++ b/net/core/flow_offload.c
@@ -429,14 +429,14 @@ EXPORT_SYMBOL(flow_indr_dev_unregister);

 static void flow_block_indr_init(struct flow_block_cb *flow_block,
 				 struct flow_block_offload *bo,
-				 struct net_device *dev, void *data,
+				 struct Qdisc *sch, void *data,
 				 void *cb_priv,
 				 void (*cleanup)(struct flow_block_cb *block_cb))
 {
 	flow_block->indr.binder_type = bo->binder_type;
 	flow_block->indr.data = data;
 	flow_block->indr.cb_priv = cb_priv;
-	flow_block->indr.dev = dev;
+	flow_block->indr.sch = sch;
 	flow_block->indr.cleanup = cleanup;
 }

@@ -444,7 +444,7 @@ struct flow_block_cb *flow_indr_block_cb_alloc(flow_setup_cb_t *cb,
 					       void *cb_ident, void *cb_priv,
 					       void (*release)(void *cb_priv),
 					       struct flow_block_offload *bo,
-					       struct net_device *dev, void *data,
+					       struct Qdisc *sch, void *data,
 					       void *indr_cb_priv,
 					       void (*cleanup)(struct flow_block_cb *block_cb))
 {
@@ -454,7 +454,7 @@ struct flow_block_cb *flow_indr_block_cb_alloc(flow_setup_cb_t *cb,
 	if (IS_ERR(block_cb))
 		goto out;

-	flow_block_indr_init(block_cb, bo, dev, data, indr_cb_priv, cleanup);
+	flow_block_indr_init(block_cb, bo, sch, data, indr_cb_priv, cleanup);
 	list_add(&block_cb->indr.list, &flow_block_indr_list);

 out:
@@ -462,7 +462,7 @@ struct flow_block_cb *flow_indr_block_cb_alloc(flow_setup_cb_t *cb,
 }
 EXPORT_SYMBOL(flow_indr_block_cb_alloc);

-int flow_indr_dev_setup_offload(struct net_device *dev,
+int flow_indr_dev_setup_offload(struct Qdisc *sch,
 				enum tc_setup_type type, void *data,
 				struct flow_block_offload *bo,
 				void (*cleanup)(struct flow_block_cb *block_cb))
@@ -471,7 +471,7 @@ int flow_indr_dev_setup_offload(struct net_device *dev,

 	mutex_lock(&flow_indr_block_lock);
 	list_for_each_entry(this, &flow_block_indr_dev_list, list)
-		this->cb(dev, this->cb_priv, type, bo, data, cleanup);
+		this->cb(sch, this->cb_priv, type, bo, data, cleanup);

 	mutex_unlock(&flow_indr_block_lock);
diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
index 5fff1e040168..2319190b1364 100644
--- a/net/netfilter/nf_flow_table_offload.c
+++ b/net/netfilter/nf_flow_table_offload.c
@@ -928,26 +928,27 @@ static int nf_flow_table_block_setup(struct nf_flowtable *flowtable,
 }

 static void nf_flow_table_block_offload_init(struct flow_block_offload *bo,
-					     struct net *net,
+					     struct net_device *dev,
 					     enum flow_block_command cmd,
 					     struct nf_flowtable *flowtable,
 					     struct netlink_ext_ack *extack)
 {
 	memset(bo, 0, sizeof(*bo));
-	bo->net		= net;
+	bo->net		= dev_net(dev);
 	bo->block	= &flowtable->flow_block;
 	bo->command	= cmd;
 	bo->binder_type	= FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
 	bo->extack	= extack;
+	bo->sch		= dev_ingress_queue(dev)->qdisc_sleeping;
 	INIT_LIST_HEAD(&bo->cb_list);
 }

 static void nf_flow_table_indr_cleanup(struct flow_block_cb *block_cb)
 {
 	struct nf_flowtable *flowtable = block_cb->indr.data;
-	struct net_device *dev = block_cb->indr.dev;
+	struct Qdisc *sch = block_cb->indr.sch;

-	nf_flow_table_gc_cleanup(flowtable, dev);
+	nf_flow_table_gc_cleanup(flowtable, sch->dev_queue->dev);
 	down_write(&flowtable->flow_block_lock);
 	list_del(&block_cb->list);
 	list_del(&block_cb->driver_list);
@@ -961,10 +962,9 @@ static int nf_flow_table_indr_offload_cmd(struct flow_block_offload *bo,
 					  enum flow_block_command cmd,
 					  struct netlink_ext_ack *extack)
 {
-	nf_flow_table_block_offload_init(bo, dev_net(dev), cmd, flowtable,
-					 extack);
+	nf_flow_table_block_offload_init(bo, dev, cmd, flowtable, extack);

-	return flow_indr_dev_setup_offload(dev, TC_SETUP_FT, flowtable, bo,
+	return flow_indr_dev_setup_offload(bo->sch, TC_SETUP_FT, flowtable, bo,
 					   nf_flow_table_indr_cleanup);
 }

@@ -976,8 +976,7 @@ static int nf_flow_table_offload_cmd(struct flow_block_offload *bo,
 {
 	int err;

-	nf_flow_table_block_offload_init(bo, dev_net(dev), cmd, flowtable,
-					 extack);
+	nf_flow_table_block_offload_init(bo, dev, cmd, flowtable, extack);
 	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_FT, bo);
 	if (err < 0)
 		return err;
diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
index c7cf1cde46de..78dc93607d09 100644
--- a/net/netfilter/nf_tables_offload.c
+++ b/net/netfilter/nf_tables_offload.c
@@ -254,17 +254,18 @@ static int nft_block_setup(struct nft_base_chain *basechain,
 }

 static void nft_flow_block_offload_init(struct flow_block_offload *bo,
-					struct net *net,
+					struct net_device *dev,
 					enum flow_block_command cmd,
 					struct nft_base_chain *basechain,
 					struct netlink_ext_ack *extack)
 {
 	memset(bo, 0, sizeof(*bo));
-	bo->net		= net;
+	bo->net		= dev_net(dev);
 	bo->block	= &basechain->flow_block;
 	bo->command	= cmd;
 	bo->binder_type	= FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
 	bo->extack	= extack;
+	bo->sch		= dev_ingress_queue(dev)->qdisc_sleeping;
 	INIT_LIST_HEAD(&bo->cb_list);
 }

@@ -276,7 +277,7 @@ static int nft_block_offload_cmd(struct nft_base_chain *chain,
 	struct flow_block_offload bo;
 	int err;

-	nft_flow_block_offload_init(&bo, dev_net(dev), cmd, chain, &extack);
+	nft_flow_block_offload_init(&bo, dev, cmd, chain, &extack);

 	err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo);
 	if (err < 0)
@@ -288,13 +289,14 @@ static int nft_block_offload_cmd(struct nft_base_chain *chain,
 static void nft_indr_block_cleanup(struct flow_block_cb *block_cb)
 {
 	struct nft_base_chain *basechain = block_cb->indr.data;
-	struct net_device *dev = block_cb->indr.dev;
+	struct Qdisc *sch = block_cb->indr.sch;
 	struct netlink_ext_ack extack = {};
-	struct net *net = dev_net(dev);
+	struct net *net = qdisc_net(sch);
 	struct flow_block_offload bo;
+	struct net_device *dev;

-	nft_flow_block_offload_init(&bo, dev_net(dev), FLOW_BLOCK_UNBIND,
-				    basechain, &extack);
+	dev = sch->dev_queue->dev;
+	nft_flow_block_offload_init(&bo, dev, FLOW_BLOCK_UNBIND, basechain, &extack);
 	mutex_lock(&net->nft.commit_mutex);
 	list_del(&block_cb->driver_list);
 	list_move(&block_cb->list, &bo.cb_list);
@@ -310,9 +312,9 @@ static int nft_indr_block_offload_cmd(struct nft_base_chain *basechain,
 	struct flow_block_offload bo;
 	int err;

-	nft_flow_block_offload_init(&bo, dev_net(dev), cmd, basechain, &extack);
+	nft_flow_block_offload_init(&bo, dev, cmd, basechain, &extack);

-	err = flow_indr_dev_setup_offload(dev, TC_SETUP_BLOCK, basechain, &bo,
+	err = flow_indr_dev_setup_offload(bo.sch, TC_SETUP_BLOCK, basechain, &bo,
 					  nft_indr_block_cleanup);
 	if (err < 0)
 		return err;
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index e9e119ea6813..0e80b4a7f5fd 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -622,18 +622,21 @@ static int tcf_block_setup(struct tcf_block *block,
 			   struct flow_block_offload *bo);

 static void tcf_block_offload_init(struct flow_block_offload *bo,
-				   struct net_device *dev,
+				   struct Qdisc *sch,
 				   enum flow_block_command command,
 				   enum flow_block_binder_type binder_type,
 				   struct flow_block *flow_block,
 				   bool shared, struct netlink_ext_ack *extack)
 {
+	struct net_device *dev = sch->dev_queue->dev;
+
 	bo->net = dev_net(dev);
 	bo->command = command;
 	bo->binder_type = binder_type;
 	bo->block = flow_block;
 	bo->block_shared = shared;
 	bo->extack = extack;
+	bo->sch = sch;
 	INIT_LIST_HEAD(&bo->cb_list);
 }

@@ -643,11 +646,11 @@ static void tcf_block_unbind(struct tcf_block *block,
 static void tc_block_indr_cleanup(struct flow_block_cb *block_cb)
 {
 	struct tcf_block *block = block_cb->indr.data;
-	struct net_device *dev = block_cb->indr.dev;
+	struct Qdisc *sch = block_cb->indr.sch;
 	struct netlink_ext_ack extack = {};
 	struct flow_block_offload bo;

-	tcf_block_offload_init(&bo, dev, FLOW_BLOCK_UNBIND,
+	tcf_block_offload_init(&bo, sch, FLOW_BLOCK_UNBIND,
 			       block_cb->indr.binder_type,
 			       &block->flow_block, tcf_block_shared(block),
 			       &extack);
@@ -666,14 +669,15 @@ static bool tcf_block_offload_in_use(struct tcf_block *block)
 }

 static int tcf_block_offload_cmd(struct tcf_block *block,
-				 struct net_device *dev,
+				 struct Qdisc *sch,
 				 struct tcf_block_ext_info *ei,
 				 enum flow_block_command command,
 				 struct netlink_ext_ack *extack)
 {
+	struct net_device *dev = sch->dev_queue->dev;
 	struct flow_block_offload bo = {};

-	tcf_block_offload_init(&bo, dev, command, ei->binder_type,
+	tcf_block_offload_init(&bo, sch, command, ei->binder_type,
 			       &block->flow_block, tcf_block_shared(block),
 			       extack);

@@ -690,7 +694,7 @@ static int tcf_block_offload_cmd(struct tcf_block *block,
 		return tcf_block_setup(block, &bo);
 	}

-	flow_indr_dev_setup_offload(dev, TC_SETUP_BLOCK, block, &bo,
+	flow_indr_dev_setup_offload(sch, TC_SETUP_BLOCK, block, &bo,
 				    tc_block_indr_cleanup);
 	tcf_block_setup(block, &bo);

@@ -717,7 +721,7 @@ static int tcf_block_offload_bind(struct tcf_block *block, struct Qdisc *q,
 		goto err_unlock;
 	}

-	err = tcf_block_offload_cmd(block, dev, ei, FLOW_BLOCK_BIND, extack);
+	err = tcf_block_offload_cmd(block, q, ei, FLOW_BLOCK_BIND, extack);
 	if (err == -EOPNOTSUPP)
 		goto no_offload_dev_inc;
 	if (err)
@@ -740,11 +744,10 @@ static int tcf_block_offload_bind(struct tcf_block *block, struct Qdisc *q,
 static void tcf_block_offload_unbind(struct tcf_block *block, struct Qdisc *q,
 				     struct tcf_block_ext_info *ei)
 {
-	struct net_device *dev = q->dev_queue->dev;
 	int err;

 	down_write(&block->cb_lock);
-	err = tcf_block_offload_cmd(block, dev, ei, FLOW_BLOCK_UNBIND, NULL);
+	err = tcf_block_offload_cmd(block, q, ei, FLOW_BLOCK_UNBIND, NULL);
 	if (err == -EOPNOTSUPP)
 		goto no_offload_dev_dec;
 	up_write(&block->cb_lock);
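An illustration of the resulting driver-facing contract: after this patch, an
indirect block callback is handed the qdisc and derives the netdevice itself,
exactly as the bnxt/mlx5/nfp hunks above do. A minimal sketch, assuming a
hypothetical "foo" driver (only the flow_indr_block_bind_cb_t typedef and
flow_indr_dev_register() come from the kernel; everything else is
illustrative):

static int foo_indr_setup_cb(struct Qdisc *sch, void *cb_priv,
			     enum tc_setup_type type, void *type_data,
			     void *data,
			     void (*cleanup)(struct flow_block_cb *block_cb))
{
	/* The netdevice is no longer passed in; deduce it from the qdisc. */
	struct net_device *netdev = sch->dev_queue->dev;

	if (type != TC_SETUP_BLOCK)
		return -EOPNOTSUPP;

	/* A real driver would validate netdev, then allocate its block_cb
	 * with flow_indr_block_cb_alloc(..., sch, ...), as the drivers
	 * above do.
	 */
	return 0;
}

/* Registration itself is unchanged:
 *	flow_indr_dev_register(foo_indr_setup_cb, foo_priv);
 */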
From patchwork Fri Jul 10 13:56:55 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 1326861
X-Patchwork-Delegate: davem@davemloft.net
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com,
 petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com,
 saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org,
 kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com,
 xiyou.wangcong@gmail.com, simon.horman@netronome.com, Amit Cohen,
 Ido Schimmel
Subject: [PATCH net-next 02/13] mlxsw: reg: Add Monitoring Mirror Trigger
 Enable Register
Date: Fri, 10 Jul 2020 16:56:55 +0300
Message-Id: <20200710135706.601409-3-idosch@idosch.org>
In-Reply-To: <20200710135706.601409-1-idosch@idosch.org>
References: <20200710135706.601409-1-idosch@idosch.org>
X-Mailing-List: netdev@vger.kernel.org

From: Amit Cohen

This register is used to configure the enablement of mirroring for the
different mirror reasons.

Signed-off-by: Amit Cohen
Reviewed-by: Jiri Pirko
Reviewed-by: Petr Machata
Signed-off-by: Petr Machata
Signed-off-by: Ido Schimmel
---
 drivers/net/ethernet/mellanox/mlxsw/reg.h | 50 +++++++++++++++++++++++
 1 file changed, 50 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index b76c839223b5..aa2fd7debec2 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -9502,6 +9502,55 @@ MLXSW_ITEM32(reg, mogcr, ptp_iftc, 0x00, 1, 1);
  */
 MLXSW_ITEM32(reg, mogcr, ptp_eftc, 0x00, 0, 1);

+/* MOMTE - Monitoring Mirror Trigger Enable Register
+ * -------------------------------------------------
+ * This register is used to configure the mirror enable for different mirror
+ * reasons.
+ */
+#define MLXSW_REG_MOMTE_ID 0x908D
+#define MLXSW_REG_MOMTE_LEN 0x10
+
+MLXSW_REG_DEFINE(momte, MLXSW_REG_MOMTE_ID, MLXSW_REG_MOMTE_LEN);
+
+/* reg_momte_local_port
+ * Local port number.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, momte, local_port, 0x00, 16, 8);
+
+enum mlxsw_reg_momte_type {
+	MLXSW_REG_MOMTE_TYPE_WRED = 0x20,
+	MLXSW_REG_MOMTE_TYPE_SHARED_BUFFER_TCLASS = 0x31,
+	MLXSW_REG_MOMTE_TYPE_SHARED_BUFFER_TCLASS_DESCRIPTORS = 0x32,
+	MLXSW_REG_MOMTE_TYPE_SHARED_BUFFER_EGRESS_PORT = 0x33,
+	MLXSW_REG_MOMTE_TYPE_ING_CONG = 0x40,
+	MLXSW_REG_MOMTE_TYPE_EGR_CONG = 0x50,
+	MLXSW_REG_MOMTE_TYPE_ECN = 0x60,
+	MLXSW_REG_MOMTE_TYPE_HIGH_LATENCY = 0x70,
+};
+
+/* reg_momte_type
+ * Type of mirroring.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, momte, type, 0x04, 0, 8);
+
+/* reg_momte_tclass_en
+ * TClass/PG mirror enable. Each bit represents corresponding tclass.
+ * 0: disable (default)
+ * 1: enable
+ * Access: RW
+ */
+MLXSW_ITEM_BIT_ARRAY(reg, momte, tclass_en, 0x08, 0x08, 1);
+
+static inline void mlxsw_reg_momte_pack(char *payload, u8 local_port,
+					enum mlxsw_reg_momte_type type)
+{
+	MLXSW_REG_ZERO(momte, payload);
+	mlxsw_reg_momte_local_port_set(payload, local_port);
+	mlxsw_reg_momte_type_set(payload, type);
+}
+
 /* MTPPPC - Time Precision Packet Port Configuration
  * -------------------------------------------------
  * This register serves for configuration of which PTP messages should be
@@ -10853,6 +10902,7 @@ static const struct mlxsw_reg_info *mlxsw_reg_infos[] = {
 	MLXSW_REG(mgpc),
 	MLXSW_REG(mprs),
 	MLXSW_REG(mogcr),
+	MLXSW_REG(momte),
 	MLXSW_REG(mtpppc),
 	MLXSW_REG(mtpptr),
 	MLXSW_REG(mtptpt),
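To see how MOMTE would be driven, here is a usage sketch built only from the
helpers added above plus the pre-existing mlxsw_reg_write() interface. The
wrapper function is hypothetical, and enabling all traffic classes is an
assumption for illustration (IEEE_8021QAZ_MAX_TCS is the standard 8-TC
constant used elsewhere in mlxsw):

static int foo_momte_wred_enable(struct mlxsw_sp *mlxsw_sp, u8 local_port)
{
	char momte_pl[MLXSW_REG_MOMTE_LEN];
	int i;

	/* Enable WRED ("early drop") mirroring on every traffic class of
	 * the given local port.
	 */
	mlxsw_reg_momte_pack(momte_pl, local_port, MLXSW_REG_MOMTE_TYPE_WRED);
	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
		mlxsw_reg_momte_tclass_en_set(momte_pl, i, true);

	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(momte), momte_pl);
}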
From patchwork Fri Jul 10 13:56:56 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 1326862
X-Patchwork-Delegate: davem@davemloft.net
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com,
 petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com,
 saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org,
 kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com,
 xiyou.wangcong@gmail.com, simon.horman@netronome.com, Amit Cohen,
 Ido Schimmel
Subject: [PATCH net-next 03/13] mlxsw: reg: Add Monitoring Port Analyzer
 Global Register
Date: Fri, 10 Jul 2020 16:56:56 +0300
Message-Id: <20200710135706.601409-4-idosch@idosch.org>
In-Reply-To: <20200710135706.601409-1-idosch@idosch.org>
References: <20200710135706.601409-1-idosch@idosch.org>
X-Mailing-List: netdev@vger.kernel.org

From: Amit Cohen

This register is used for global port analyzer configurations.

Signed-off-by: Amit Cohen
Reviewed-by: Jiri Pirko
Reviewed-by: Petr Machata
Signed-off-by: Petr Machata
Signed-off-by: Ido Schimmel
---
 drivers/net/ethernet/mellanox/mlxsw/reg.h | 52 +++++++++++++++++++++++
 1 file changed, 52 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index aa2fd7debec2..76f61bef03f8 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -9502,6 +9502,57 @@ MLXSW_ITEM32(reg, mogcr, ptp_iftc, 0x00, 1, 1);
  */
 MLXSW_ITEM32(reg, mogcr, ptp_eftc, 0x00, 0, 1);

+/* MPAGR - Monitoring Port Analyzer Global Register
+ * ------------------------------------------------
+ * This register is used for global port analyzer configurations.
+ * Note: This register is not supported by current FW versions for Spectrum-1.
+ */
+#define MLXSW_REG_MPAGR_ID 0x9089
+#define MLXSW_REG_MPAGR_LEN 0x0C
+
+MLXSW_REG_DEFINE(mpagr, MLXSW_REG_MPAGR_ID, MLXSW_REG_MPAGR_LEN);
+
+enum mlxsw_reg_mpagr_trigger {
+	MLXSW_REG_MPAGR_TRIGGER_EGRESS,
+	MLXSW_REG_MPAGR_TRIGGER_INGRESS,
+	MLXSW_REG_MPAGR_TRIGGER_INGRESS_WRED,
+	MLXSW_REG_MPAGR_TRIGGER_INGRESS_SHARED_BUFFER,
+	MLXSW_REG_MPAGR_TRIGGER_INGRESS_ING_CONG,
+	MLXSW_REG_MPAGR_TRIGGER_INGRESS_EGR_CONG,
+	MLXSW_REG_MPAGR_TRIGGER_EGRESS_ECN,
+	MLXSW_REG_MPAGR_TRIGGER_EGRESS_HIGH_LATENCY,
+};
+
+/* reg_mpagr_trigger
+ * Mirror trigger.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, mpagr, trigger, 0x00, 0, 4);
+
+/* reg_mpagr_pa_id
+ * Port analyzer ID.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, mpagr, pa_id, 0x04, 0, 4);
+
+/* reg_mpagr_probability_rate
+ * Sampling rate.
+ * Valid values are: 1 to 3.5*10^9
+ * Value of 1 means "sample all". Default is 1.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, mpagr, probability_rate, 0x08, 0, 32);
+
+static inline void mlxsw_reg_mpagr_pack(char *payload,
+					enum mlxsw_reg_mpagr_trigger trigger,
+					u8 pa_id, u32 probability_rate)
+{
+	MLXSW_REG_ZERO(mpagr, payload);
+	mlxsw_reg_mpagr_trigger_set(payload, trigger);
+	mlxsw_reg_mpagr_pa_id_set(payload, pa_id);
+	mlxsw_reg_mpagr_probability_rate_set(payload, probability_rate);
+}
+
 /* MOMTE - Monitoring Mirror Trigger Enable Register
  * -------------------------------------------------
  * This register is used to configure the mirror enable for different mirror
@@ -10902,6 +10953,7 @@ static const struct mlxsw_reg_info *mlxsw_reg_infos[] = {
 	MLXSW_REG(mgpc),
 	MLXSW_REG(mprs),
 	MLXSW_REG(mogcr),
+	MLXSW_REG(mpagr),
 	MLXSW_REG(momte),
 	MLXSW_REG(mtpppc),
 	MLXSW_REG(mtpptr),
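A corresponding usage sketch for MPAGR, again using only the helpers added
above. The wrapper is hypothetical, but patch 06/13 in this series performs
essentially this sequence when binding a global mirroring trigger:

static int foo_mpagr_bind(struct mlxsw_sp *mlxsw_sp, u8 pa_id)
{
	char mpagr_pl[MLXSW_REG_MPAGR_LEN];

	/* Mirror all packets hitting the ingress-WRED trigger to port
	 * analyzer pa_id; a probability_rate of 1 means "sample all".
	 */
	mlxsw_reg_mpagr_pack(mpagr_pl, MLXSW_REG_MPAGR_TRIGGER_INGRESS_WRED,
			     pa_id, 1);
	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(mpagr), mpagr_pl);
}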
From patchwork Fri Jul 10 13:56:57 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 1326863
X-Patchwork-Delegate: davem@davemloft.net
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com,
 petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com,
 saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org,
 kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com,
 xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel
Subject: [PATCH net-next 04/13] mlxsw: spectrum_span: Move SPAN operations
 out of global file
Date: Fri, 10 Jul 2020 16:56:57 +0300
Message-Id: <20200710135706.601409-5-idosch@idosch.org>
In-Reply-To: <20200710135706.601409-1-idosch@idosch.org>
References: <20200710135706.601409-1-idosch@idosch.org>
X-Mailing-List: netdev@vger.kernel.org

From: Ido Schimmel

The per-ASIC SPAN operations are relevant to the SPAN module and therefore
should be implemented there, not in the main driver file. Move them. These
operations will be extended later on.

Reviewed-by: Jiri Pirko
Reviewed-by: Petr Machata
Signed-off-by: Petr Machata
Signed-off-by: Ido Schimmel
---
 .../net/ethernet/mellanox/mlxsw/spectrum.c    | 50 -------------------
 .../net/ethernet/mellanox/mlxsw/spectrum.h    |  1 -
 .../ethernet/mellanox/mlxsw/spectrum_span.c   | 47 +++++++++++++++++
 .../ethernet/mellanox/mlxsw/spectrum_span.h   |  8 +++
 4 files changed, 55 insertions(+), 51 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index eeeafd1d82ce..636dd09cbbbc 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -175,10 +175,6 @@ struct mlxsw_sp_mlxfw_dev {
 	struct mlxsw_sp *mlxsw_sp;
 };

-struct mlxsw_sp_span_ops {
-	u32 (*buffsize_get)(int mtu, u32 speed);
-};
-
 static int mlxsw_sp_component_query(struct mlxfw_dev *mlxfw_dev,
 				    u16 component_index, u32 *p_max_size,
 				    u8 *p_align_bits, u16 *p_max_write_size)
@@ -2812,52 +2808,6 @@ static const struct mlxsw_sp_ptp_ops mlxsw_sp2_ptp_ops = {
 	.get_stats	= mlxsw_sp2_get_stats,
 };

-static u32 mlxsw_sp1_span_buffsize_get(int mtu, u32 speed)
-{
-	return mtu * 5 / 2;
-}
-
-static const struct mlxsw_sp_span_ops mlxsw_sp1_span_ops = {
-	.buffsize_get = mlxsw_sp1_span_buffsize_get,
-};
-
-#define MLXSW_SP2_SPAN_EG_MIRROR_BUFFER_FACTOR 38
-#define MLXSW_SP3_SPAN_EG_MIRROR_BUFFER_FACTOR 50
-
-static u32 __mlxsw_sp_span_buffsize_get(int mtu, u32 speed, u32 buffer_factor)
-{
-	return 3 * mtu + buffer_factor * speed / 1000;
-}
-
-static u32 mlxsw_sp2_span_buffsize_get(int mtu, u32 speed)
-{
-	int factor = MLXSW_SP2_SPAN_EG_MIRROR_BUFFER_FACTOR;
-
-	return __mlxsw_sp_span_buffsize_get(mtu, speed, factor);
-}
-
-static const struct mlxsw_sp_span_ops mlxsw_sp2_span_ops = {
-	.buffsize_get = mlxsw_sp2_span_buffsize_get,
-};
-
-static u32 mlxsw_sp3_span_buffsize_get(int mtu, u32 speed)
-{
-	int factor = MLXSW_SP3_SPAN_EG_MIRROR_BUFFER_FACTOR;
-
-	return __mlxsw_sp_span_buffsize_get(mtu, speed, factor);
-}
-
-static const struct mlxsw_sp_span_ops mlxsw_sp3_span_ops = {
-	.buffsize_get = mlxsw_sp3_span_buffsize_get,
-};
-
-u32 mlxsw_sp_span_buffsize_get(struct mlxsw_sp *mlxsw_sp, int mtu, u32 speed)
-{
-	u32 buffsize = mlxsw_sp->span_ops->buffsize_get(speed, mtu);
-
-	return mlxsw_sp_bytes_cells(mlxsw_sp, buffsize) + 1;
-}
-
 static int mlxsw_sp_netdevice_event(struct notifier_block *unused,
 				    unsigned long event, void *ptr);

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index 1d6b2bc2774c..18c64f7b265d 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -539,7 +539,6 @@ int mlxsw_sp_flow_counter_alloc(struct mlxsw_sp *mlxsw_sp,
 				unsigned int *p_counter_index);
 void mlxsw_sp_flow_counter_free(struct mlxsw_sp *mlxsw_sp,
 				unsigned int counter_index);
-u32 mlxsw_sp_span_buffsize_get(struct mlxsw_sp *mlxsw_sp, int mtu, u32 speed);
 bool mlxsw_sp_port_dev_check(const struct net_device *dev);
 struct mlxsw_sp *mlxsw_sp_lower_get(struct net_device *dev);
 struct mlxsw_sp_port *mlxsw_sp_port_dev_lower_find(struct net_device *dev);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
index 92351a79addc..49e2a417ec0e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
@@ -766,6 +766,14 @@ static int mlxsw_sp_span_entry_put(struct mlxsw_sp *mlxsw_sp,
 	return 0;
 }

+static u32 mlxsw_sp_span_buffsize_get(struct mlxsw_sp *mlxsw_sp, int mtu,
+				      u32 speed)
+{
+	u32 buffsize = mlxsw_sp->span_ops->buffsize_get(speed, mtu);
+
+	return mlxsw_sp_bytes_cells(mlxsw_sp, buffsize) + 1;
+}
+
 static int
 mlxsw_sp_span_port_buffer_update(struct mlxsw_sp_port *mlxsw_sp_port, u16 mtu)
 {
@@ -1207,3 +1215,42 @@ void mlxsw_sp_span_agent_unbind(struct mlxsw_sp *mlxsw_sp,

 	mlxsw_sp_span_trigger_entry_destroy(mlxsw_sp->span, trigger_entry);
 }
+
+static u32 mlxsw_sp1_span_buffsize_get(int mtu, u32 speed)
+{
+	return mtu * 5 / 2;
+}
+
+const struct mlxsw_sp_span_ops mlxsw_sp1_span_ops = {
+	.buffsize_get = mlxsw_sp1_span_buffsize_get,
+};
+
+#define MLXSW_SP2_SPAN_EG_MIRROR_BUFFER_FACTOR 38
+#define MLXSW_SP3_SPAN_EG_MIRROR_BUFFER_FACTOR 50
+
+static u32 __mlxsw_sp_span_buffsize_get(int mtu, u32 speed, u32 buffer_factor)
+{
+	return 3 * mtu + buffer_factor * speed / 1000;
+}
+
+static u32 mlxsw_sp2_span_buffsize_get(int mtu, u32 speed)
+{
+	int factor = MLXSW_SP2_SPAN_EG_MIRROR_BUFFER_FACTOR;
+
+	return __mlxsw_sp_span_buffsize_get(mtu, speed, factor);
+}
+
+const struct mlxsw_sp_span_ops mlxsw_sp2_span_ops = {
+	.buffsize_get = mlxsw_sp2_span_buffsize_get,
+};
+
+static u32 mlxsw_sp3_span_buffsize_get(int mtu, u32 speed)
+{
+	int factor = MLXSW_SP3_SPAN_EG_MIRROR_BUFFER_FACTOR;
+
+	return __mlxsw_sp_span_buffsize_get(mtu, speed, factor);
+}
+
+const struct mlxsw_sp_span_ops mlxsw_sp3_span_ops = {
+	.buffsize_get = mlxsw_sp3_span_buffsize_get,
+};
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
index 9f6dd2d0f4e6..440551ec0dba 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
@@ -34,6 +34,10 @@ struct mlxsw_sp_span_trigger_parms {

 struct mlxsw_sp_span_entry_ops;

+struct mlxsw_sp_span_ops {
+	u32 (*buffsize_get)(int mtu, u32 speed);
+};
+
 struct mlxsw_sp_span_entry {
 	const struct net_device *to_dev;
 	const struct mlxsw_sp_span_entry_ops *ops;
@@ -82,4 +86,8 @@ mlxsw_sp_span_agent_unbind(struct mlxsw_sp *mlxsw_sp,
 			   struct mlxsw_sp_port *mlxsw_sp_port,
 			   const struct mlxsw_sp_span_trigger_parms *parms);

+extern const struct mlxsw_sp_span_ops mlxsw_sp1_span_ops;
+extern const struct mlxsw_sp_span_ops mlxsw_sp2_span_ops;
+extern const struct mlxsw_sp_span_ops mlxsw_sp3_span_ops;
+
 #endif
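The point of exporting the three ops structures is that the per-ASIC init
code can wire them up once while everything else dispatches through the
pointer. A sketch of both sides, assuming the existing mlxsw_sp->span_ops
pointer; the init function below is a stand-in for the real per-generation
init in spectrum.c, which is not part of this diff:

/* Wiring, once per ASIC generation: */
static int foo_sp1_init(struct mlxsw_sp *mlxsw_sp)
{
	mlxsw_sp->span_ops = &mlxsw_sp1_span_ops;	/* now extern, see above */
	return 0;
}

/* Dispatch, generation-agnostic; argument order here follows the
 * buffsize_get prototype (int mtu, u32 speed) declared above.
 */
static u32 foo_span_buffsize(struct mlxsw_sp *mlxsw_sp, int mtu, u32 speed)
{
	return mlxsw_sp->span_ops->buffsize_get(mtu, speed);
}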
From patchwork Fri Jul 10 13:56:58 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 1326865
X-Patchwork-Delegate: davem@davemloft.net
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com,
 petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com,
 saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org,
 kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com,
 xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel
Subject: [PATCH net-next 05/13] mlxsw: spectrum_span: Prepare for global
 mirroring triggers
Date: Fri, 10 Jul 2020 16:56:58 +0300
Message-Id: <20200710135706.601409-6-idosch@idosch.org>
In-Reply-To: <20200710135706.601409-1-idosch@idosch.org>
References: <20200710135706.601409-1-idosch@idosch.org>
X-Mailing-List: netdev@vger.kernel.org

From: Ido Schimmel

Currently, a SPAN agent can only be bound to a per-port trigger, where the
trigger is either an incoming packet (INGRESS) or an outgoing packet
(EGRESS) to / from the port.

The subsequent patch will introduce the concept of global mirroring
triggers. The binding / unbinding of global triggers is different from that
of per-port triggers. Such triggers also need to be enabled / disabled on a
per-{port, TC} basis and are only supported from Spectrum-2 onwards.

Add trigger operations that allow us to abstract these differences. Only
implement the operations for per-port triggers. The next patch will
implement the operations for global triggers.

Reviewed-by: Petr Machata
Reviewed-by: Jiri Pirko
Signed-off-by: Petr Machata
Signed-off-by: Ido Schimmel
---
 .../ethernet/mellanox/mlxsw/spectrum_span.c   | 119 +++++++++++++++---
 .../ethernet/mellanox/mlxsw/spectrum_span.h   |   1 +
 2 files changed, 103 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
index 49e2a417ec0e..b20422dde147 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
@@ -21,6 +21,7 @@
 struct mlxsw_sp_span {
 	struct work_struct work;
 	struct mlxsw_sp *mlxsw_sp;
+	const struct mlxsw_sp_span_trigger_ops **span_trigger_ops_arr;
 	struct list_head analyzed_ports_list;
 	struct mutex analyzed_ports_lock; /* Protects analyzed_ports_list */
 	struct list_head trigger_entries_list;
@@ -38,12 +39,26 @@ struct mlxsw_sp_span_analyzed_port {

 struct mlxsw_sp_span_trigger_entry {
 	struct list_head list; /* Member of trigger_entries_list */
+	struct mlxsw_sp_span *span;
+	const struct mlxsw_sp_span_trigger_ops *ops;
 	refcount_t ref_count;
 	u8 local_port;
 	enum mlxsw_sp_span_trigger trigger;
 	struct mlxsw_sp_span_trigger_parms parms;
 };

+enum mlxsw_sp_span_trigger_type {
+	MLXSW_SP_SPAN_TRIGGER_TYPE_PORT,
+};
+
+struct mlxsw_sp_span_trigger_ops {
+	int (*bind)(struct mlxsw_sp_span_trigger_entry *trigger_entry);
+	void (*unbind)(struct mlxsw_sp_span_trigger_entry *trigger_entry);
+	bool (*matches)(struct mlxsw_sp_span_trigger_entry *trigger_entry,
+			enum mlxsw_sp_span_trigger trigger,
+			struct mlxsw_sp_port *mlxsw_sp_port);
+};
+
 static void mlxsw_sp_span_respin_work(struct work_struct *work);

 static u64 mlxsw_sp_span_occ_get(void *priv)
@@ -57,7 +72,7 @@ int mlxsw_sp_span_init(struct mlxsw_sp *mlxsw_sp)
 {
 	struct devlink *devlink = priv_to_devlink(mlxsw_sp->core);
 	struct mlxsw_sp_span *span;
-	int i, entries_count;
+	int i, entries_count, err;

 	if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, MAX_SPAN))
 		return -EIO;
@@ -77,11 +92,20 @@ int mlxsw_sp_span_init(struct mlxsw_sp *mlxsw_sp)
 	for (i = 0; i < mlxsw_sp->span->entries_count; i++)
 		mlxsw_sp->span->entries[i].id = i;

+	err = mlxsw_sp->span_ops->init(mlxsw_sp);
+	if (err)
+		goto err_init;
+
 	devlink_resource_occ_get_register(devlink, MLXSW_SP_RESOURCE_SPAN,
 					  mlxsw_sp_span_occ_get, mlxsw_sp);
 	INIT_WORK(&span->work, mlxsw_sp_span_respin_work);

 	return 0;
+
+err_init:
+	mutex_destroy(&mlxsw_sp->span->analyzed_ports_lock);
+	kfree(mlxsw_sp->span);
+	return err;
 }

 void mlxsw_sp_span_fini(struct mlxsw_sp *mlxsw_sp)
@@ -1059,9 +1083,9 @@ void mlxsw_sp_span_analyzed_port_put(struct mlxsw_sp_port *mlxsw_sp_port,
 }

 static int
-__mlxsw_sp_span_trigger_entry_bind(struct mlxsw_sp_span *span,
-				   struct mlxsw_sp_span_trigger_entry *
-				   trigger_entry, bool enable)
+__mlxsw_sp_span_trigger_port_bind(struct mlxsw_sp_span *span,
+				  struct mlxsw_sp_span_trigger_entry *
+				  trigger_entry, bool enable)
 {
 	char mpar_pl[MLXSW_REG_MPAR_LEN];
 	enum mlxsw_reg_mpar_i_e i_e;
@@ -1084,19 +1108,60 @@ __mlxsw_sp_span_trigger_entry_bind(struct mlxsw_sp_span *span,
 }

 static int
-mlxsw_sp_span_trigger_entry_bind(struct mlxsw_sp_span *span,
-				 struct mlxsw_sp_span_trigger_entry *
-				 trigger_entry)
+mlxsw_sp_span_trigger_port_bind(struct mlxsw_sp_span_trigger_entry *
+				trigger_entry)
 {
-	return __mlxsw_sp_span_trigger_entry_bind(span, trigger_entry, true);
+	return __mlxsw_sp_span_trigger_port_bind(trigger_entry->span,
+						 trigger_entry, true);
 }

 static void
-mlxsw_sp_span_trigger_entry_unbind(struct mlxsw_sp_span *span,
-				   struct mlxsw_sp_span_trigger_entry *
-				   trigger_entry)
+mlxsw_sp_span_trigger_port_unbind(struct mlxsw_sp_span_trigger_entry *
+				  trigger_entry)
 {
-	__mlxsw_sp_span_trigger_entry_bind(span, trigger_entry, false);
+	__mlxsw_sp_span_trigger_port_bind(trigger_entry->span, trigger_entry,
+					  false);
+}
+
+static bool
+mlxsw_sp_span_trigger_port_matches(struct mlxsw_sp_span_trigger_entry *
+				   trigger_entry,
+				   enum mlxsw_sp_span_trigger trigger,
+				   struct mlxsw_sp_port *mlxsw_sp_port)
+{
+	return trigger_entry->trigger == trigger &&
+	       trigger_entry->local_port == mlxsw_sp_port->local_port;
+}
+
+static const struct mlxsw_sp_span_trigger_ops
+mlxsw_sp_span_trigger_port_ops = {
+	.bind = mlxsw_sp_span_trigger_port_bind,
+	.unbind = mlxsw_sp_span_trigger_port_unbind,
+	.matches = mlxsw_sp_span_trigger_port_matches,
+};
+
+static const struct mlxsw_sp_span_trigger_ops *
+mlxsw_sp_span_trigger_ops_arr[] = {
+	[MLXSW_SP_SPAN_TRIGGER_TYPE_PORT] = &mlxsw_sp_span_trigger_port_ops,
+};
+
+static void
+mlxsw_sp_span_trigger_ops_set(struct mlxsw_sp_span_trigger_entry *trigger_entry)
+{
+	struct mlxsw_sp_span *span = trigger_entry->span;
+	enum mlxsw_sp_span_trigger_type type;
+
+	switch (trigger_entry->trigger) {
+	case MLXSW_SP_SPAN_TRIGGER_INGRESS: /* fall-through */
+	case MLXSW_SP_SPAN_TRIGGER_EGRESS:
+		type = MLXSW_SP_SPAN_TRIGGER_TYPE_PORT;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return;
+	}
+
+	trigger_entry->ops = span->span_trigger_ops_arr[type];
 }

 static struct mlxsw_sp_span_trigger_entry *
@@ -1114,12 +1179,15 @@ mlxsw_sp_span_trigger_entry_create(struct mlxsw_sp_span *span,
 		return ERR_PTR(-ENOMEM);

 	refcount_set(&trigger_entry->ref_count, 1);
-	trigger_entry->local_port = mlxsw_sp_port->local_port;
+	trigger_entry->local_port = mlxsw_sp_port ? mlxsw_sp_port->local_port :
+						    0;
 	trigger_entry->trigger = trigger;
 	memcpy(&trigger_entry->parms, parms, sizeof(trigger_entry->parms));
+	trigger_entry->span = span;
+	mlxsw_sp_span_trigger_ops_set(trigger_entry);
 	list_add_tail(&trigger_entry->list, &span->trigger_entries_list);

-	err = mlxsw_sp_span_trigger_entry_bind(span, trigger_entry);
+	err = trigger_entry->ops->bind(trigger_entry);
 	if (err)
 		goto err_trigger_entry_bind;

@@ -1136,7 +1204,7 @@ mlxsw_sp_span_trigger_entry_destroy(struct mlxsw_sp_span *span,
 				    struct mlxsw_sp_span_trigger_entry *
 				    trigger_entry)
 {
-	mlxsw_sp_span_trigger_entry_unbind(span, trigger_entry);
+	trigger_entry->ops->unbind(trigger_entry);
 	list_del(&trigger_entry->list);
 	kfree(trigger_entry);
 }
@@ -1149,8 +1217,8 @@ mlxsw_sp_span_trigger_entry_find(struct mlxsw_sp_span *span,
 	struct mlxsw_sp_span_trigger_entry *trigger_entry;

 	list_for_each_entry(trigger_entry, &span->trigger_entries_list, list) {
-		if (trigger_entry->trigger == trigger &&
-		    trigger_entry->local_port == mlxsw_sp_port->local_port)
+		if (trigger_entry->ops->matches(trigger_entry, trigger,
+						mlxsw_sp_port))
 			return trigger_entry;
 	}

@@ -1216,15 +1284,30 @@ void mlxsw_sp_span_agent_unbind(struct mlxsw_sp *mlxsw_sp,
 	mlxsw_sp_span_trigger_entry_destroy(mlxsw_sp->span, trigger_entry);
 }

+static int mlxsw_sp1_span_init(struct mlxsw_sp *mlxsw_sp)
+{
+	mlxsw_sp->span->span_trigger_ops_arr = mlxsw_sp_span_trigger_ops_arr;
+
+	return 0;
+}
+
 static u32 mlxsw_sp1_span_buffsize_get(int mtu, u32 speed)
 {
 	return mtu * 5 / 2;
 }

 const struct mlxsw_sp_span_ops mlxsw_sp1_span_ops = {
+	.init = mlxsw_sp1_span_init,
 	.buffsize_get = mlxsw_sp1_span_buffsize_get,
 };

+static int mlxsw_sp2_span_init(struct mlxsw_sp *mlxsw_sp)
+{
+	mlxsw_sp->span->span_trigger_ops_arr = mlxsw_sp_span_trigger_ops_arr;
+
+	return 0;
+}
+
 #define MLXSW_SP2_SPAN_EG_MIRROR_BUFFER_FACTOR 38
 #define MLXSW_SP3_SPAN_EG_MIRROR_BUFFER_FACTOR 50

@@ -1241,6 +1324,7 @@ static u32 mlxsw_sp2_span_buffsize_get(int mtu, u32 speed)
 }

 const struct mlxsw_sp_span_ops mlxsw_sp2_span_ops = {
+	.init = mlxsw_sp2_span_init,
 	.buffsize_get = mlxsw_sp2_span_buffsize_get,
 };

@@ -1252,5 +1336,6 @@ static u32 mlxsw_sp3_span_buffsize_get(int mtu, u32 speed)
 }

 const struct mlxsw_sp_span_ops mlxsw_sp3_span_ops = {
+	.init = mlxsw_sp2_span_init,
 	.buffsize_get = mlxsw_sp3_span_buffsize_get,
 };
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
index 440551ec0dba..b9acecaf6ee2 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
@@ -35,6 +35,7 @@ struct mlxsw_sp_span_trigger_parms {
 struct mlxsw_sp_span_entry_ops;

 struct mlxsw_sp_span_ops {
+	int (*init)(struct mlxsw_sp *mlxsw_sp);
 	u32 (*buffsize_get)(int mtu, u32 speed);
 };
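The reason ->matches() takes the port even though a global trigger will not
need it becomes clearer with a contrasting sketch. The matcher below is
hypothetical (the next patch adds the real global ops), but it shows why the
lookup in mlxsw_sp_span_trigger_entry_find() can stay generic:

static bool
foo_span_trigger_global_matches(struct mlxsw_sp_span_trigger_entry *
				trigger_entry,
				enum mlxsw_sp_span_trigger trigger,
				struct mlxsw_sp_port *mlxsw_sp_port)
{
	/* A global trigger is keyed by the trigger alone; the port
	 * argument is deliberately ignored.
	 */
	return trigger_entry->trigger == trigger;
}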
header.d=messagingengine.com header.i=@messagingengine.com header.a=rsa-sha256 header.s=fm3 header.b=J0wV6Rd+; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4B3F4Y52vlz9sDX for ; Fri, 10 Jul 2020 23:58:21 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728206AbgGJN6U (ORCPT ); Fri, 10 Jul 2020 09:58:20 -0400 Received: from new4-smtp.messagingengine.com ([66.111.4.230]:43831 "EHLO new4-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727906AbgGJN6T (ORCPT ); Fri, 10 Jul 2020 09:58:19 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailnew.nyi.internal (Postfix) with ESMTP id 85A7758059A; Fri, 10 Jul 2020 09:58:18 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute4.internal (MEProxy); Fri, 10 Jul 2020 09:58:18 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm3; bh=blA7I+RWGFsXSmmtUbIYN0TNsaWuWKmmmdYwxNVmAK4=; b=J0wV6Rd+ cjzn45er2TYhQIenB2sausG1vkuwbBJCzYn5Q1cpSmBh/Bwpy+HkE4DC6TdWqzcT fKCAKHCNUObrjAI6JuF9LClmpv2qrnJ1JQBC5MpMS7TG4GayDFneWLXIlYLbbXUB BpdMC3wi0UN9xSY41zHD8Ag3un6YdeQX0wGSkqQCcFhLV81swH19E6yNgu402g1i dIsdlWfDQ9r4WG874TJTOEXbr8pTmncwm+dTC8LiadQC8AsHQNhzaOVunD6ukRZo 0BaBY8Z98Vo38HyzcrlhoiXJLBXc68m+Mu6Rx1xsLPTAFtwSx/47nlASZgrf+x+t wYexPzOr2umtkg== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduiedrvddugdejgecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecunecujfgurhephffvufffkffojghfggfgsedtkeertd ertddtnecuhfhrohhmpefkughoucfutghhihhmmhgvlhcuoehiughoshgthhesihguohhs tghhrdhorhhgqeenucggtffrrghtthgvrhhnpeduteeiveffffevleekleejffekhfekhe fgtdfftefhledvjefggfehgfevjeekhfenucfkphepuddtledrieeirdduledrudeffeen ucevlhhushhtvghrufhiiigvpeehnecurfgrrhgrmhepmhgrihhlfhhrohhmpehiughosh gthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com (bzq-109-66-19-133.red.bezeqint.net [109.66.19.133]) by mail.messagingengine.com (Postfix) with ESMTPA id 7C72D328005E; Fri, 10 Jul 2020 09:58:14 -0400 (EDT) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com, petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com, saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org, kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com, xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel Subject: [PATCH net-next 06/13] mlxsw: spectrum_span: Add support for global mirroring triggers Date: Fri, 10 Jul 2020 16:56:59 +0300 Message-Id: <20200710135706.601409-7-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200710135706.601409-1-idosch@idosch.org> References: <20200710135706.601409-1-idosch@idosch.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Ido Schimmel Global mirroring triggers are triggers that are only keyed by their trigger, as opposed to per-port triggers, which are keyed by their trigger and port. Such triggers allow mirroring packets that were tail/early dropped or ECN marked to a SPAN agent. Implement the previously added trigger operations for these global triggers. 
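As an illustration of the dispatch this relies on, here is a minimal user-space sketch (the names are invented for the example, not taken from the driver): each trigger entry carries an ops pointer chosen by trigger type, and the two "matches" implementations differ only in whether the port participates in the lookup key.

/* Illustrative sketch only; compiles as plain C, not driver code. */
#include <stdbool.h>
#include <stdio.h>

enum trigger { TRIGGER_INGRESS, TRIGGER_EARLY_DROP };

struct trigger_ops;

struct trigger_entry {
	enum trigger trigger;
	unsigned char local_port;	/* 0 for global triggers */
	const struct trigger_ops *ops;
};

struct trigger_ops {
	bool (*matches)(const struct trigger_entry *entry,
			enum trigger trigger, unsigned char local_port);
};

static bool port_matches(const struct trigger_entry *entry,
			 enum trigger trigger, unsigned char local_port)
{
	/* Per-port trigger: keyed by both the trigger and the port. */
	return entry->trigger == trigger && entry->local_port == local_port;
}

static bool global_matches(const struct trigger_entry *entry,
			   enum trigger trigger, unsigned char local_port)
{
	/* Global trigger: only the trigger itself is part of the key. */
	(void)local_port;
	return entry->trigger == trigger;
}

static const struct trigger_ops port_ops = { .matches = port_matches };
static const struct trigger_ops global_ops = { .matches = global_matches };

int main(void)
{
	struct trigger_entry global_entry = {
		.trigger = TRIGGER_EARLY_DROP,
		.local_port = 0,
		.ops = &global_ops,
	};
	struct trigger_entry port_entry = {
		.trigger = TRIGGER_INGRESS,
		.local_port = 1,
		.ops = &port_ops,
	};

	/* A global entry is found regardless of which port asks; a
	 * per-port entry is found only for its own port.
	 */
	printf("early_drop, port 1: %d, port 7: %d\n",
	       global_entry.ops->matches(&global_entry, TRIGGER_EARLY_DROP, 1),
	       global_entry.ops->matches(&global_entry, TRIGGER_EARLY_DROP, 7));
	printf("ingress, port 1: %d, port 7: %d\n",
	       port_entry.ops->matches(&port_entry, TRIGGER_INGRESS, 1),
	       port_entry.ops->matches(&port_entry, TRIGGER_INGRESS, 7));
	return 0;
}

Keeping that difference behind an ops callback is what lets the entry lookup in mlxsw_sp_span_trigger_entry_find() stay generic.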
Since such triggers are only supported from Spectrum-2 onwards, have the Spectrum-1 operations return an error. Reviewed-by: Petr Machata Reviewed-by: Jiri Pirko Signed-off-by: Petr Machata Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_span.c | 104 +++++++++++++++++- .../ethernet/mellanox/mlxsw/spectrum_span.h | 3 + 2 files changed, 104 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c index b20422dde147..fa223c1351b4 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c @@ -49,6 +49,7 @@ struct mlxsw_sp_span_trigger_entry { enum mlxsw_sp_span_trigger_type { MLXSW_SP_SPAN_TRIGGER_TYPE_PORT, + MLXSW_SP_SPAN_TRIGGER_TYPE_GLOBAL, }; struct mlxsw_sp_span_trigger_ops { @@ -1140,9 +1141,101 @@ mlxsw_sp_span_trigger_port_ops = { .matches = mlxsw_sp_span_trigger_port_matches, }; +static int +mlxsw_sp1_span_trigger_global_bind(struct mlxsw_sp_span_trigger_entry * + trigger_entry) +{ + return -EOPNOTSUPP; +} + +static void +mlxsw_sp1_span_trigger_global_unbind(struct mlxsw_sp_span_trigger_entry * + trigger_entry) +{ +} + +static bool +mlxsw_sp1_span_trigger_global_matches(struct mlxsw_sp_span_trigger_entry * + trigger_entry, + enum mlxsw_sp_span_trigger trigger, + struct mlxsw_sp_port *mlxsw_sp_port) +{ + WARN_ON_ONCE(1); + return false; +} + +static const struct mlxsw_sp_span_trigger_ops +mlxsw_sp1_span_trigger_global_ops = { + .bind = mlxsw_sp1_span_trigger_global_bind, + .unbind = mlxsw_sp1_span_trigger_global_unbind, + .matches = mlxsw_sp1_span_trigger_global_matches, +}; + +static const struct mlxsw_sp_span_trigger_ops * +mlxsw_sp1_span_trigger_ops_arr[] = { + [MLXSW_SP_SPAN_TRIGGER_TYPE_PORT] = &mlxsw_sp_span_trigger_port_ops, + [MLXSW_SP_SPAN_TRIGGER_TYPE_GLOBAL] = + &mlxsw_sp1_span_trigger_global_ops, +}; + +static int +mlxsw_sp2_span_trigger_global_bind(struct mlxsw_sp_span_trigger_entry * + trigger_entry) +{ + struct mlxsw_sp *mlxsw_sp = trigger_entry->span->mlxsw_sp; + enum mlxsw_reg_mpagr_trigger trigger; + char mpagr_pl[MLXSW_REG_MPAGR_LEN]; + + switch (trigger_entry->trigger) { + case MLXSW_SP_SPAN_TRIGGER_TAIL_DROP: + trigger = MLXSW_REG_MPAGR_TRIGGER_INGRESS_SHARED_BUFFER; + break; + case MLXSW_SP_SPAN_TRIGGER_EARLY_DROP: + trigger = MLXSW_REG_MPAGR_TRIGGER_INGRESS_WRED; + break; + case MLXSW_SP_SPAN_TRIGGER_ECN: + trigger = MLXSW_REG_MPAGR_TRIGGER_EGRESS_ECN; + break; + default: + WARN_ON_ONCE(1); + return -EINVAL; + } + + mlxsw_reg_mpagr_pack(mpagr_pl, trigger, trigger_entry->parms.span_id, + 1); + return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(mpagr), mpagr_pl); +} + +static void +mlxsw_sp2_span_trigger_global_unbind(struct mlxsw_sp_span_trigger_entry * + trigger_entry) +{ + /* There is no unbinding for global triggers. The trigger should be + * disabled on all ports by now. 
+ */ +} + +static bool +mlxsw_sp2_span_trigger_global_matches(struct mlxsw_sp_span_trigger_entry * + trigger_entry, + enum mlxsw_sp_span_trigger trigger, + struct mlxsw_sp_port *mlxsw_sp_port) +{ + return trigger_entry->trigger == trigger; +} + +static const struct mlxsw_sp_span_trigger_ops +mlxsw_sp2_span_trigger_global_ops = { + .bind = mlxsw_sp2_span_trigger_global_bind, + .unbind = mlxsw_sp2_span_trigger_global_unbind, + .matches = mlxsw_sp2_span_trigger_global_matches, +}; + static const struct mlxsw_sp_span_trigger_ops * -mlxsw_sp_span_trigger_ops_arr[] = { +mlxsw_sp2_span_trigger_ops_arr[] = { [MLXSW_SP_SPAN_TRIGGER_TYPE_PORT] = &mlxsw_sp_span_trigger_port_ops, + [MLXSW_SP_SPAN_TRIGGER_TYPE_GLOBAL] = + &mlxsw_sp2_span_trigger_global_ops, }; static void @@ -1156,6 +1249,11 @@ mlxsw_sp_span_trigger_ops_set(struct mlxsw_sp_span_trigger_entry *trigger_entry) case MLXSW_SP_SPAN_TRIGGER_EGRESS: type = MLXSW_SP_SPAN_TRIGGER_TYPE_PORT; break; + case MLXSW_SP_SPAN_TRIGGER_TAIL_DROP: /* fall-through */ + case MLXSW_SP_SPAN_TRIGGER_EARLY_DROP: /* fall-through */ + case MLXSW_SP_SPAN_TRIGGER_ECN: + type = MLXSW_SP_SPAN_TRIGGER_TYPE_GLOBAL; + break; default: WARN_ON_ONCE(1); return; @@ -1286,7 +1384,7 @@ void mlxsw_sp_span_agent_unbind(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp1_span_init(struct mlxsw_sp *mlxsw_sp) { - mlxsw_sp->span->span_trigger_ops_arr = mlxsw_sp_span_trigger_ops_arr; + mlxsw_sp->span->span_trigger_ops_arr = mlxsw_sp1_span_trigger_ops_arr; return 0; } @@ -1303,7 +1401,7 @@ const struct mlxsw_sp_span_ops mlxsw_sp1_span_ops = { static int mlxsw_sp2_span_init(struct mlxsw_sp *mlxsw_sp) { - mlxsw_sp->span->span_trigger_ops_arr = mlxsw_sp_span_trigger_ops_arr; + mlxsw_sp->span->span_trigger_ops_arr = mlxsw_sp2_span_trigger_ops_arr; return 0; } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h index b9acecaf6ee2..bb7939b3f09c 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h @@ -26,6 +26,9 @@ struct mlxsw_sp_span_parms { enum mlxsw_sp_span_trigger { MLXSW_SP_SPAN_TRIGGER_INGRESS, MLXSW_SP_SPAN_TRIGGER_EGRESS, + MLXSW_SP_SPAN_TRIGGER_TAIL_DROP, + MLXSW_SP_SPAN_TRIGGER_EARLY_DROP, + MLXSW_SP_SPAN_TRIGGER_ECN, }; struct mlxsw_sp_span_trigger_parms { From patchwork Fri Jul 10 13:57:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 1326866 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=idosch.org Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=messagingengine.com header.i=@messagingengine.com header.a=rsa-sha256 header.s=fm3 header.b=sRLCLnPm; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4B3F4j18wHz9sT6 for ; Fri, 10 Jul 2020 23:58:29 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727948AbgGJN60 (ORCPT ); Fri, 10 Jul 2020 09:58:26 -0400 Received: from new4-smtp.messagingengine.com ([66.111.4.230]:44053 "EHLO 
new4-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726962AbgGJN6Y (ORCPT ); Fri, 10 Jul 2020 09:58:24 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailnew.nyi.internal (Postfix) with ESMTP id AD5C658058B; Fri, 10 Jul 2020 09:58:22 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute4.internal (MEProxy); Fri, 10 Jul 2020 09:58:22 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm3; bh=MCQpCMz5d4VBJYiaWzCnTWcQsN9XpgaUMe53K0M2Bu4=; b=sRLCLnPm HGiQF7eMWlDheQy85ELvXoas6gJBQkopNx+RN9EM0SxW83nHBujdrfdpDBqlc8SQ YdBp800HTUU93Fo09ilsIltd1OicSIHOmqcwi1C45ijgpjfQNYbeoaT476Qt53Sf tARt0k1qOOWOpQa5qfGs+2r5v6mZ8dNlx6stBRxDfVG6siD5qyI/t2+4lZttVBvu juo9/P+xn5JQadfH6qkICAP2FQGqNx86rPQCqlBdiTgr246lZpxVz44Sl6hWGUoT laujt3yCMKYmiSFSm3gF0K1d32lydiOK4IWMD4Y+Bl07+QvDq3C5em7Z+cma+sJ2 eaQZK9/Yd0lnAA== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduiedrvddugdejgecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecunecujfgurhephffvufffkffojghfggfgsedtkeertd ertddtnecuhfhrohhmpefkughoucfutghhihhmmhgvlhcuoehiughoshgthhesihguohhs tghhrdhorhhgqeenucggtffrrghtthgvrhhnpeduteeiveffffevleekleejffekhfekhe fgtdfftefhledvjefggfehgfevjeekhfenucfkphepuddtledrieeirdduledrudeffeen ucevlhhushhtvghrufhiiigvpeehnecurfgrrhgrmhepmhgrihhlfhhrohhmpehiughosh gthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com (bzq-109-66-19-133.red.bezeqint.net [109.66.19.133]) by mail.messagingengine.com (Postfix) with ESMTPA id C54E8328005A; Fri, 10 Jul 2020 09:58:18 -0400 (EDT) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com, petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com, saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org, kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com, xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel Subject: [PATCH net-next 07/13] mlxsw: spectrum_span: Add APIs to enable / disable global mirroring triggers Date: Fri, 10 Jul 2020 16:57:00 +0300 Message-Id: <20200710135706.601409-8-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200710135706.601409-1-idosch@idosch.org> References: <20200710135706.601409-1-idosch@idosch.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Ido Schimmel While the binding of global mirroring triggers to a SPAN agent is global, packets are only mirrored if they belong to a port and TC on which the trigger was enabled. This allows, for example, to mirror packets that were tail-dropped on a specific netdev. Implement the operations that allow to enable / disable a global mirroring trigger on a specific port and TC. 
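The Spectrum-2 enable / disable implementation below is a read-modify-write of a per-port, per-TC state (via the MOMTE register): query the current configuration, change only the requested traffic class, and write the result back. A compilable user-space model of that pattern, with invented stand-ins for the register accessors, looks like this:

/* Sketch only: reg_query()/reg_write() and the bitmap stand in for the
 * real register interface; they are not the driver's API.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_TCS 8

static uint8_t hw_tclass_en;	/* stands in for per-port device state */

static int reg_query(uint8_t *bitmap)
{
	*bitmap = hw_tclass_en;	/* a driver would query the register here */
	return 0;
}

static int reg_write(uint8_t bitmap)
{
	hw_tclass_en = bitmap;	/* a driver would write the register here */
	return 0;
}

static int trigger_tc_set(uint8_t tc, bool enable)
{
	uint8_t bitmap;
	int err;

	if (tc >= NUM_TCS)
		return -1;	/* a driver would return -EINVAL */

	err = reg_query(&bitmap);	/* read the current per-TC state */
	if (err)
		return err;

	if (enable)			/* touch only the requested TC */
		bitmap |= (uint8_t)(1u << tc);
	else
		bitmap &= (uint8_t)~(1u << tc);

	return reg_write(bitmap);	/* write the result back */
}

int main(void)
{
	trigger_tc_set(3, true);
	trigger_tc_set(5, true);
	trigger_tc_set(3, false);
	printf("tclass_en bitmap: 0x%02x\n", hw_tclass_en); /* prints 0x20 */
	return 0;
}

The query step is what allows triggers to be enabled and disabled on individual traffic classes of the same port independently, without clobbering one another.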
Reviewed-by: Petr Machata Reviewed-by: Jiri Pirko Signed-off-by: Petr Machata Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_span.c | 135 ++++++++++++++++++ .../ethernet/mellanox/mlxsw/spectrum_span.h | 4 + 2 files changed, 139 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c index fa223c1351b4..6374765a112d 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c @@ -58,6 +58,10 @@ struct mlxsw_sp_span_trigger_ops { bool (*matches)(struct mlxsw_sp_span_trigger_entry *trigger_entry, enum mlxsw_sp_span_trigger trigger, struct mlxsw_sp_port *mlxsw_sp_port); + int (*enable)(struct mlxsw_sp_span_trigger_entry *trigger_entry, + struct mlxsw_sp_port *mlxsw_sp_port, u8 tc); + void (*disable)(struct mlxsw_sp_span_trigger_entry *trigger_entry, + struct mlxsw_sp_port *mlxsw_sp_port, u8 tc); }; static void mlxsw_sp_span_respin_work(struct work_struct *work); @@ -1134,11 +1138,29 @@ mlxsw_sp_span_trigger_port_matches(struct mlxsw_sp_span_trigger_entry * trigger_entry->local_port == mlxsw_sp_port->local_port; } +static int +mlxsw_sp_span_trigger_port_enable(struct mlxsw_sp_span_trigger_entry * + trigger_entry, + struct mlxsw_sp_port *mlxsw_sp_port, u8 tc) +{ + /* Port trigger are enabled during binding. */ + return 0; +} + +static void +mlxsw_sp_span_trigger_port_disable(struct mlxsw_sp_span_trigger_entry * + trigger_entry, + struct mlxsw_sp_port *mlxsw_sp_port, u8 tc) +{ +} + static const struct mlxsw_sp_span_trigger_ops mlxsw_sp_span_trigger_port_ops = { .bind = mlxsw_sp_span_trigger_port_bind, .unbind = mlxsw_sp_span_trigger_port_unbind, .matches = mlxsw_sp_span_trigger_port_matches, + .enable = mlxsw_sp_span_trigger_port_enable, + .disable = mlxsw_sp_span_trigger_port_disable, }; static int @@ -1164,11 +1186,30 @@ mlxsw_sp1_span_trigger_global_matches(struct mlxsw_sp_span_trigger_entry * return false; } +static int +mlxsw_sp1_span_trigger_global_enable(struct mlxsw_sp_span_trigger_entry * + trigger_entry, + struct mlxsw_sp_port *mlxsw_sp_port, + u8 tc) +{ + return -EOPNOTSUPP; +} + +static void +mlxsw_sp1_span_trigger_global_disable(struct mlxsw_sp_span_trigger_entry * + trigger_entry, + struct mlxsw_sp_port *mlxsw_sp_port, + u8 tc) +{ +} + static const struct mlxsw_sp_span_trigger_ops mlxsw_sp1_span_trigger_global_ops = { .bind = mlxsw_sp1_span_trigger_global_bind, .unbind = mlxsw_sp1_span_trigger_global_unbind, .matches = mlxsw_sp1_span_trigger_global_matches, + .enable = mlxsw_sp1_span_trigger_global_enable, + .disable = mlxsw_sp1_span_trigger_global_disable, }; static const struct mlxsw_sp_span_trigger_ops * @@ -1224,11 +1265,71 @@ mlxsw_sp2_span_trigger_global_matches(struct mlxsw_sp_span_trigger_entry * return trigger_entry->trigger == trigger; } +static int +__mlxsw_sp2_span_trigger_global_enable(struct mlxsw_sp_span_trigger_entry * + trigger_entry, + struct mlxsw_sp_port *mlxsw_sp_port, + u8 tc, bool enable) +{ + struct mlxsw_sp *mlxsw_sp = trigger_entry->span->mlxsw_sp; + char momte_pl[MLXSW_REG_MOMTE_LEN]; + enum mlxsw_reg_momte_type type; + int err; + + switch (trigger_entry->trigger) { + case MLXSW_SP_SPAN_TRIGGER_TAIL_DROP: + type = MLXSW_REG_MOMTE_TYPE_SHARED_BUFFER_TCLASS; + break; + case MLXSW_SP_SPAN_TRIGGER_EARLY_DROP: + type = MLXSW_REG_MOMTE_TYPE_WRED; + break; + case MLXSW_SP_SPAN_TRIGGER_ECN: + type = MLXSW_REG_MOMTE_TYPE_ECN; + break; + default: + WARN_ON_ONCE(1); + return -EINVAL; + } + + /* Query existing 
configuration in order to only change the state of + * the specified traffic class. + */ + mlxsw_reg_momte_pack(momte_pl, mlxsw_sp_port->local_port, type); + err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(momte), momte_pl); + if (err) + return err; + + mlxsw_reg_momte_tclass_en_set(momte_pl, tc, enable); + return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(momte), momte_pl); +} + +static int +mlxsw_sp2_span_trigger_global_enable(struct mlxsw_sp_span_trigger_entry * + trigger_entry, + struct mlxsw_sp_port *mlxsw_sp_port, + u8 tc) +{ + return __mlxsw_sp2_span_trigger_global_enable(trigger_entry, + mlxsw_sp_port, tc, true); +} + +static void +mlxsw_sp2_span_trigger_global_disable(struct mlxsw_sp_span_trigger_entry * + trigger_entry, + struct mlxsw_sp_port *mlxsw_sp_port, + u8 tc) +{ + __mlxsw_sp2_span_trigger_global_enable(trigger_entry, mlxsw_sp_port, tc, + false); +} + static const struct mlxsw_sp_span_trigger_ops mlxsw_sp2_span_trigger_global_ops = { .bind = mlxsw_sp2_span_trigger_global_bind, .unbind = mlxsw_sp2_span_trigger_global_unbind, .matches = mlxsw_sp2_span_trigger_global_matches, + .enable = mlxsw_sp2_span_trigger_global_enable, + .disable = mlxsw_sp2_span_trigger_global_disable, }; static const struct mlxsw_sp_span_trigger_ops * @@ -1382,6 +1483,40 @@ void mlxsw_sp_span_agent_unbind(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_span_trigger_entry_destroy(mlxsw_sp->span, trigger_entry); } +int mlxsw_sp_span_trigger_enable(struct mlxsw_sp_port *mlxsw_sp_port, + enum mlxsw_sp_span_trigger trigger, u8 tc) +{ + struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp; + struct mlxsw_sp_span_trigger_entry *trigger_entry; + + ASSERT_RTNL(); + + trigger_entry = mlxsw_sp_span_trigger_entry_find(mlxsw_sp->span, + trigger, + mlxsw_sp_port); + if (WARN_ON_ONCE(!trigger_entry)) + return -EINVAL; + + return trigger_entry->ops->enable(trigger_entry, mlxsw_sp_port, tc); +} + +void mlxsw_sp_span_trigger_disable(struct mlxsw_sp_port *mlxsw_sp_port, + enum mlxsw_sp_span_trigger trigger, u8 tc) +{ + struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp; + struct mlxsw_sp_span_trigger_entry *trigger_entry; + + ASSERT_RTNL(); + + trigger_entry = mlxsw_sp_span_trigger_entry_find(mlxsw_sp->span, + trigger, + mlxsw_sp_port); + if (WARN_ON_ONCE(!trigger_entry)) + return; + + return trigger_entry->ops->disable(trigger_entry, mlxsw_sp_port, tc); +} + static int mlxsw_sp1_span_init(struct mlxsw_sp *mlxsw_sp) { mlxsw_sp->span->span_trigger_ops_arr = mlxsw_sp1_span_trigger_ops_arr; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h index bb7939b3f09c..29b96b222e25 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h @@ -89,6 +89,10 @@ mlxsw_sp_span_agent_unbind(struct mlxsw_sp *mlxsw_sp, enum mlxsw_sp_span_trigger trigger, struct mlxsw_sp_port *mlxsw_sp_port, const struct mlxsw_sp_span_trigger_parms *parms); +int mlxsw_sp_span_trigger_enable(struct mlxsw_sp_port *mlxsw_sp_port, + enum mlxsw_sp_span_trigger trigger, u8 tc); +void mlxsw_sp_span_trigger_disable(struct mlxsw_sp_port *mlxsw_sp_port, + enum mlxsw_sp_span_trigger trigger, u8 tc); extern const struct mlxsw_sp_span_ops mlxsw_sp1_span_ops; extern const struct mlxsw_sp_span_ops mlxsw_sp2_span_ops; From patchwork Fri Jul 10 13:57:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 1326867 X-Patchwork-Delegate: davem@davemloft.net 
Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=idosch.org Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=messagingengine.com header.i=@messagingengine.com header.a=rsa-sha256 header.s=fm3 header.b=cqq740kS; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4B3F4k4NbYz9sTZ for ; Fri, 10 Jul 2020 23:58:30 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728088AbgGJN63 (ORCPT ); Fri, 10 Jul 2020 09:58:29 -0400 Received: from new4-smtp.messagingengine.com ([66.111.4.230]:43837 "EHLO new4-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726832AbgGJN61 (ORCPT ); Fri, 10 Jul 2020 09:58:27 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailnew.nyi.internal (Postfix) with ESMTP id 96F3458059E; Fri, 10 Jul 2020 09:58:26 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute4.internal (MEProxy); Fri, 10 Jul 2020 09:58:26 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm3; bh=nEljDNIKQHe9wQAKAG8KeEF/05EWeQUJlco/xfMtRkU=; b=cqq740kS GbOIhxCtGUMCMYs8Z/y4C1u7LsdN3BEmVJrxPZggInsj+IHHv3X+zVGwZR6o77ja 3FDGkJ/mpc+qfr2TxRHv0KlYHHOqBoOWCwNF6DaCi4iT3BVimhtmFqkRI1l3z2M1 6LBbvcLprRkM/CaXcG5hh0vWryH6mVGU+66GBiZ09ZTdXepoGXnpoFtvOlrxfpdu 0z28PTJMzDxb03Iv6aT7cMy8s9LQm86/iJOsahS6SkZEqOEMs2E+40CCckG4ZJUN qBWqBybcvuUiklxlzQuHzmNmFM1EgxARYFBm1NtVTx1sKI25/l0ft4GqWVy3PmqZ o3NeDYLj/EZedg== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduiedrvddugdejgecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecunecujfgurhephffvufffkffojghfggfgsedtkeertd ertddtnecuhfhrohhmpefkughoucfutghhihhmmhgvlhcuoehiughoshgthhesihguohhs tghhrdhorhhgqeenucggtffrrghtthgvrhhnpeduteeiveffffevleekleejffekhfekhe fgtdfftefhledvjefggfehgfevjeekhfenucfkphepuddtledrieeirdduledrudeffeen ucevlhhushhtvghrufhiiigvpeejnecurfgrrhgrmhepmhgrihhlfhhrohhmpehiughosh gthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com (bzq-109-66-19-133.red.bezeqint.net [109.66.19.133]) by mail.messagingengine.com (Postfix) with ESMTPA id 02DBC328005A; Fri, 10 Jul 2020 09:58:22 -0400 (EDT) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com, petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com, saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org, kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com, xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel Subject: [PATCH net-next 08/13] mlxsw: spectrum_flow: Convert a goto to a return Date: Fri, 10 Jul 2020 16:57:01 +0300 Message-Id: <20200710135706.601409-9-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200710135706.601409-1-idosch@idosch.org> References: <20200710135706.601409-1-idosch@idosch.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: 
bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Petr Machata No clean-up is performed at the target label of this goto. Convert it to a direct return. Signed-off-by: Petr Machata Reviewed-by: Jiri Pirko Signed-off-by: Ido Schimmel --- drivers/net/ethernet/mellanox/mlxsw/spectrum_flow.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flow.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flow.c index 47b66f347ff1..421581a85cd6 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flow.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flow.c @@ -219,8 +219,7 @@ static int mlxsw_sp_setup_tc_block_bind(struct mlxsw_sp_port *mlxsw_sp_port, mlxsw_sp_tc_block_release); if (IS_ERR(block_cb)) { mlxsw_sp_flow_block_destroy(flow_block); - err = PTR_ERR(block_cb); - goto err_cb_register; + return PTR_ERR(block_cb); } register_block = true; } else { @@ -247,7 +246,6 @@ static int mlxsw_sp_setup_tc_block_bind(struct mlxsw_sp_port *mlxsw_sp_port, err_block_bind: if (!flow_block_cb_decref(block_cb)) flow_block_cb_free(block_cb); -err_cb_register: return err; } From patchwork Fri Jul 10 13:57:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 1326869 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=idosch.org Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=messagingengine.com header.i=@messagingengine.com header.a=rsa-sha256 header.s=fm3 header.b=hBItTu1y; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4B3F4p48ZMz9sT6 for ; Fri, 10 Jul 2020 23:58:34 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728210AbgGJN6d (ORCPT ); Fri, 10 Jul 2020 09:58:33 -0400 Received: from new4-smtp.messagingengine.com ([66.111.4.230]:42923 "EHLO new4-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726832AbgGJN6b (ORCPT ); Fri, 10 Jul 2020 09:58:31 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailnew.nyi.internal (Postfix) with ESMTP id 9642B58058A; Fri, 10 Jul 2020 09:58:30 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute4.internal (MEProxy); Fri, 10 Jul 2020 09:58:30 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm3; bh=vmfC79TsNsez2Zs9reURvIWYj20QOcgf4gCsymBQrK0=; b=hBItTu1y W2cSF5kCx36zU7f5U8+o/3qBCbAaIOj5MVbnQNCmDklw/FHJuZdIKSyJbAHrj1qs rfRCWonIIDXTWtqis8PjPc06FxTdHCBfjKZn7R6GIqHIs9LvGyLm+pmVk20nsUtR 5cVDGtCusG6xm384y8uG6hEV5E4l4nk4ujR9yJkl+vgziRifbDFJbpTINc6HFCBb cd2qR7BoV8FbVIb8QvBbBx0NLQqGK2fXGNo+HiABMM3iH5YdGcyg3IQzH6/SAhY2 Kz3hKlZZWnsg30z2x5AHGN55n9hzDHmpApzQZf0DCLGXoR0TVeKOvXZafLmNDWyH A8i5i40RF58JpQ== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduiedrvddugdejgecutefuodetggdotefrodftvf 
curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecunecujfgurhephffvufffkffojghfggfgsedtkeertd ertddtnecuhfhrohhmpefkughoucfutghhihhmmhgvlhcuoehiughoshgthhesihguohhs tghhrdhorhhgqeenucggtffrrghtthgvrhhnpeduteeiveffffevleekleejffekhfekhe fgtdfftefhledvjefggfehgfevjeekhfenucfkphepuddtledrieeirdduledrudeffeen ucevlhhushhtvghrufhiiigvpeeknecurfgrrhgrmhepmhgrihhlfhhrohhmpehiughosh gthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com (bzq-109-66-19-133.red.bezeqint.net [109.66.19.133]) by mail.messagingengine.com (Postfix) with ESMTPA id C33EB3280063; Fri, 10 Jul 2020 09:58:26 -0400 (EDT) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com, petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com, saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org, kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com, xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel Subject: [PATCH net-next 09/13] mlxsw: spectrum_flow: Drop an unused field Date: Fri, 10 Jul 2020 16:57:02 +0300 Message-Id: <20200710135706.601409-10-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200710135706.601409-1-idosch@idosch.org> References: <20200710135706.601409-1-idosch@idosch.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Petr Machata The field "dev" in struct mlxsw_sp_flow_block_binding is not used. Drop it. Signed-off-by: Petr Machata Reviewed-by: Jiri Pirko Signed-off-by: Ido Schimmel --- drivers/net/ethernet/mellanox/mlxsw/spectrum.h | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h index 18c64f7b265d..ab54790d2955 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h @@ -710,7 +710,6 @@ struct mlxsw_sp_flow_block { struct mlxsw_sp_flow_block_binding { struct list_head list; - struct net_device *dev; struct mlxsw_sp_port *mlxsw_sp_port; bool ingress; }; From patchwork Fri Jul 10 13:57:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 1326870 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=idosch.org Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=messagingengine.com header.i=@messagingengine.com header.a=rsa-sha256 header.s=fm3 header.b=gEs0U88a; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4B3F4v5Wthz9sRR for ; Fri, 10 Jul 2020 23:58:39 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728235AbgGJN6h (ORCPT ); Fri, 10 Jul 2020 09:58:37 -0400 Received: from new4-smtp.messagingengine.com ([66.111.4.230]:38005 "EHLO new4-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726832AbgGJN6f (ORCPT ); Fri, 10 Jul 2020 09:58:35 -0400 Received: from compute4.internal 
(compute4.nyi.internal [10.202.2.44]) by mailnew.nyi.internal (Postfix) with ESMTP id 6DABD58058A; Fri, 10 Jul 2020 09:58:34 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute4.internal (MEProxy); Fri, 10 Jul 2020 09:58:34 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm3; bh=V99AkssyUfTMO/32s6GAy48cqgcqeeUq/6za1QuWhWs=; b=gEs0U88a q9B3ti2JKCbfoU49k/XMaB6Rc33B9rcYdKPjHho5HvDoluYYwHTv23zqIzN66jnh WpOW/ERabwHDqxfxe/5NQaCQVsvitYX/Dv8Sz993MzfrxAgNEVuZRK4EIqWvlRlx B59BHWpjyNuJ/BNbACM5iYLsuPZN/lNkWwu+NXho8hYk+2O1wubnyreNYjztpWhA xRXUa1ImJ2GoQkFokDM1MbT3Mrx0iWUYZiBiMhcVdQsJJKOMhxs4vQpp8pdmfqQe Mwz241J6WG6YqGgEwTB3d6+ouJPsXeRFa6KUcKqes0Db3SZrYYI1BWEutBMhWB00 FhbAp96tKcHWkw== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduiedrvddugdejgecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecunecujfgurhephffvufffkffojghfggfgsedtkeertd ertddtnecuhfhrohhmpefkughoucfutghhihhmmhgvlhcuoehiughoshgthhesihguohhs tghhrdhorhhgqeenucggtffrrghtthgvrhhnpeduteeiveffffevleekleejffekhfekhe fgtdfftefhledvjefggfehgfevjeekhfenucfkphepuddtledrieeirdduledrudeffeen ucevlhhushhtvghrufhiiigvpeeknecurfgrrhgrmhepmhgrihhlfhhrohhmpehiughosh gthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com (bzq-109-66-19-133.red.bezeqint.net [109.66.19.133]) by mail.messagingengine.com (Postfix) with ESMTPA id C90F83280063; Fri, 10 Jul 2020 09:58:30 -0400 (EDT) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com, petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com, saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org, kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com, xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel Subject: [PATCH net-next 10/13] mlxsw: spectrum_matchall: Publish matchall data structures Date: Fri, 10 Jul 2020 16:57:03 +0300 Message-Id: <20200710135706.601409-11-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200710135706.601409-1-idosch@idosch.org> References: <20200710135706.601409-1-idosch@idosch.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Petr Machata A following patch introduces offloading of filters attached to blocks bound to the RED tail_drop qevent. The only classifier that mlxsw will permit in this role is matchall. mlxsw currently offloads matchall filters used with clsact qdisc. The data structures used for that offload will come handy for the qevent offload as well. Publish them in spectrum.h. 
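The value of publishing these structures is that struct mlxsw_sp_mall_entry is a tagged union: a type field selects which union member is valid, so the existing clsact path and the new qevent path can share one entry layout and dispatch on the same field. A small self-contained C sketch of the idiom (invented names, not the kernel structures):

/* Illustrative sketch of a tagged-union entry; compiles with C11. */
#include <stdio.h>

enum mall_action_type {
	MALL_ACTION_MIRROR,
	MALL_ACTION_SAMPLE,
};

struct mall_mirror { int span_id; };
struct mall_sample { unsigned int rate; };

struct mall_entry {
	unsigned long cookie;
	enum mall_action_type type;	/* selects the valid union member */
	union {
		struct mall_mirror mirror;
		struct mall_sample sample;
	};
};

static void configure(const struct mall_entry *entry)
{
	switch (entry->type) {
	case MALL_ACTION_MIRROR:
		printf("mirror to SPAN agent %d\n", entry->mirror.span_id);
		break;
	case MALL_ACTION_SAMPLE:
		printf("sample at rate %u\n", entry->sample.rate);
		break;
	}
}

int main(void)
{
	struct mall_entry m = {
		.cookie = 1,
		.type = MALL_ACTION_MIRROR,
		.mirror = { .span_id = 4 },
	};
	struct mall_entry s = {
		.cookie = 2,
		.type = MALL_ACTION_SAMPLE,
		.sample = { .rate = 100 },
	};

	configure(&m);
	configure(&s);
	return 0;
}

The qevent code added later in this series dispatches on exactly such a type field in mlxsw_sp_qevent_entry_configure().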
Signed-off-by: Petr Machata Reviewed-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../net/ethernet/mellanox/mlxsw/spectrum.h | 24 +++++++++++++++++++ .../mellanox/mlxsw/spectrum_matchall.c | 23 ------------------ 2 files changed, 24 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h index ab54790d2955..51047b1aa23a 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h @@ -960,6 +960,30 @@ extern const struct mlxsw_afk_ops mlxsw_sp1_afk_ops; extern const struct mlxsw_afk_ops mlxsw_sp2_afk_ops; /* spectrum_matchall.c */ +enum mlxsw_sp_mall_action_type { + MLXSW_SP_MALL_ACTION_TYPE_MIRROR, + MLXSW_SP_MALL_ACTION_TYPE_SAMPLE, + MLXSW_SP_MALL_ACTION_TYPE_TRAP, +}; + +struct mlxsw_sp_mall_mirror_entry { + const struct net_device *to_dev; + int span_id; +}; + +struct mlxsw_sp_mall_entry { + struct list_head list; + unsigned long cookie; + unsigned int priority; + enum mlxsw_sp_mall_action_type type; + bool ingress; + union { + struct mlxsw_sp_mall_mirror_entry mirror; + struct mlxsw_sp_port_sample sample; + }; + struct rcu_head rcu; +}; + int mlxsw_sp_mall_replace(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_flow_block *block, struct tc_cls_matchall_offload *f); diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_matchall.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_matchall.c index f1a44a8eda55..195e28ab8e65 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_matchall.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_matchall.c @@ -10,29 +10,6 @@ #include "spectrum_span.h" #include "reg.h" -enum mlxsw_sp_mall_action_type { - MLXSW_SP_MALL_ACTION_TYPE_MIRROR, - MLXSW_SP_MALL_ACTION_TYPE_SAMPLE, -}; - -struct mlxsw_sp_mall_mirror_entry { - const struct net_device *to_dev; - int span_id; -}; - -struct mlxsw_sp_mall_entry { - struct list_head list; - unsigned long cookie; - unsigned int priority; - enum mlxsw_sp_mall_action_type type; - bool ingress; - union { - struct mlxsw_sp_mall_mirror_entry mirror; - struct mlxsw_sp_port_sample sample; - }; - struct rcu_head rcu; -}; - static struct mlxsw_sp_mall_entry * mlxsw_sp_mall_entry_find(struct mlxsw_sp_flow_block *block, unsigned long cookie) { From patchwork Fri Jul 10 13:57:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 1326871 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=idosch.org Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=messagingengine.com header.i=@messagingengine.com header.a=rsa-sha256 header.s=fm3 header.b=b1jZ3QnD; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4B3F503ggfz9sTH for ; Fri, 10 Jul 2020 23:58:44 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728254AbgGJN6n (ORCPT ); Fri, 10 Jul 2020 09:58:43 -0400 Received: from new4-smtp.messagingengine.com ([66.111.4.230]:45879 "EHLO new4-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) 
by vger.kernel.org with ESMTP id S1728212AbgGJN6l (ORCPT ); Fri, 10 Jul 2020 09:58:41 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailnew.nyi.internal (Postfix) with ESMTP id ED13258058A; Fri, 10 Jul 2020 09:58:40 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute4.internal (MEProxy); Fri, 10 Jul 2020 09:58:40 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm3; bh=xAw5wtejAkxmVhl45XYwlhOXAeuxG9k8SIO5DVKAuoY=; b=b1jZ3QnD rwgx1mLSWKO+sDgMjoBXMuMK0ATB6uyXyE6853wpaxAbposUo2pT8pCyLsZjpEPS mYF1z0IGyB6P8ZL9sK7IL+i2hBpd6DYnn/z5SAT14prLoMh+mKEPB3DthaX0G76+ JY0L/6WhmjDsPveUDyXJI1Y3X+SrkrSposHHSIVYBMOqy+kSaLIqLbI7Sv+jSaeU be5ht9fsuJkdnVD07g2QzBDLDwn2SR9oecen0TEzLpfOUhDokozDo2WqxLYFwEUn E9Dyyq5y2pA7D2vendd3KPCHiLtJYGvkZXbVDYNbOV3PwTnPUuliHSZoCLn8kom8 qvr2isPwjzWGIw== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduiedrvddugdejgecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecunecujfgurhephffvufffkffojghfggfgsedtkeertd ertddtnecuhfhrohhmpefkughoucfutghhihhmmhgvlhcuoehiughoshgthhesihguohhs tghhrdhorhhgqeenucggtffrrghtthgvrhhnpeduteeiveffffevleekleejffekhfekhe fgtdfftefhledvjefggfehgfevjeekhfenucfkphepuddtledrieeirdduledrudeffeen ucevlhhushhtvghrufhiiigvpedutdenucfrrghrrghmpehmrghilhhfrhhomhepihguoh hstghhsehiughoshgthhdrohhrgh X-ME-Proxy: Received: from shredder.mtl.com (bzq-109-66-19-133.red.bezeqint.net [109.66.19.133]) by mail.messagingengine.com (Postfix) with ESMTPA id 8FA1B328005D; Fri, 10 Jul 2020 09:58:34 -0400 (EDT) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com, petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com, saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org, kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com, xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel Subject: [PATCH net-next 11/13] mlxsw: spectrum_flow: Promote binder-type dispatch to spectrum.c Date: Fri, 10 Jul 2020 16:57:04 +0300 Message-Id: <20200710135706.601409-12-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200710135706.601409-1-idosch@idosch.org> References: <20200710135706.601409-1-idosch@idosch.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Petr Machata Two RED qevents have been introduced recently. From the point of view of a driver, qevents are simply blocks with unusual binder types. However they need to be handled by different logic than ACL-like flows. Thus rename mlxsw_sp_setup_tc_block() to mlxsw_sp_setup_tc_block_clsact() and move the binder-type dispatch from there to spectrum.c into a new function of the original name. The new dispatcher is easier to extend with new binder types. 
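The shape of the new dispatcher is a plain switch on the binder type: adding a qevent binder later in the series means adding one case, rather than threading qevent knowledge through the ACL-flow code. A user-space sketch of the idea (the enum values and handlers here are stand-ins, not the kernel's):

/* Illustrative sketch of the binder-type dispatch; not driver code. */
#include <stdio.h>

enum binder_type {
	BINDER_CLSACT_INGRESS,
	BINDER_CLSACT_EGRESS,
	BINDER_RED_EARLY_DROP,	/* a qevent binder type */
};

static int setup_block_clsact(int ingress)
{
	printf("clsact bind, ingress=%d\n", ingress);
	return 0;
}

static int setup_block_qevent_early_drop(void)
{
	printf("qevent early_drop bind\n");
	return 0;
}

/* Each new binder type is one more case here, instead of more
 * special-casing inside the flow-offload code.
 */
static int setup_tc_block(enum binder_type type)
{
	switch (type) {
	case BINDER_CLSACT_INGRESS:
		return setup_block_clsact(1);
	case BINDER_CLSACT_EGRESS:
		return setup_block_clsact(0);
	case BINDER_RED_EARLY_DROP:
		return setup_block_qevent_early_drop();
	default:
		return -95;	/* mirrors the kernel's -EOPNOTSUPP */
	}
}

int main(void)
{
	setup_tc_block(BINDER_CLSACT_INGRESS);
	setup_tc_block(BINDER_RED_EARLY_DROP);
	return 0;
}

Patch 12 in this series extends the real switch with FLOW_BLOCK_BINDER_TYPE_RED_EARLY_DROP in exactly this way.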
Signed-off-by: Petr Machata Reviewed-by: Jiri Pirko Signed-off-by: Ido Schimmel --- drivers/net/ethernet/mellanox/mlxsw/spectrum.c | 13 +++++++++++++ drivers/net/ethernet/mellanox/mlxsw/spectrum.h | 5 +++-- .../net/ethernet/mellanox/mlxsw/spectrum_flow.c | 14 +++----------- 3 files changed, 19 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c index 636dd09cbbbc..2235c4bf330d 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c @@ -1329,6 +1329,19 @@ static int mlxsw_sp_port_kill_vid(struct net_device *dev, return 0; } +static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port, + struct flow_block_offload *f) +{ + switch (f->binder_type) { + case FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS: + return mlxsw_sp_setup_tc_block_clsact(mlxsw_sp_port, f, true); + case FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS: + return mlxsw_sp_setup_tc_block_clsact(mlxsw_sp_port, f, false); + default: + return -EOPNOTSUPP; + } +} + static int mlxsw_sp_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data) { diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h index 51047b1aa23a..ee9a19f28b97 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h @@ -767,8 +767,9 @@ mlxsw_sp_flow_block_is_mixed_bound(const struct mlxsw_sp_flow_block *block) struct mlxsw_sp_flow_block *mlxsw_sp_flow_block_create(struct mlxsw_sp *mlxsw_sp, struct net *net); void mlxsw_sp_flow_block_destroy(struct mlxsw_sp_flow_block *block); -int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port, - struct flow_block_offload *f); +int mlxsw_sp_setup_tc_block_clsact(struct mlxsw_sp_port *mlxsw_sp_port, + struct flow_block_offload *f, + bool ingress); /* spectrum_acl.c */ struct mlxsw_sp_acl_ruleset; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flow.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flow.c index 421581a85cd6..0456cda33808 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flow.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flow.c @@ -277,18 +277,10 @@ static void mlxsw_sp_setup_tc_block_unbind(struct mlxsw_sp_port *mlxsw_sp_port, } } -int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port, - struct flow_block_offload *f) +int mlxsw_sp_setup_tc_block_clsact(struct mlxsw_sp_port *mlxsw_sp_port, + struct flow_block_offload *f, + bool ingress) { - bool ingress; - - if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS) - ingress = true; - else if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS) - ingress = false; - else - return -EOPNOTSUPP; - f->driver_block_list = &mlxsw_sp_block_cb_list; switch (f->command) { From patchwork Fri Jul 10 13:57:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 1326872 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=idosch.org Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; 
unprotected) header.d=messagingengine.com header.i=@messagingengine.com header.a=rsa-sha256 header.s=fm3 header.b=FKG8zWW4; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4B3F56365Bz9sT6 for ; Fri, 10 Jul 2020 23:58:50 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728260AbgGJN6t (ORCPT ); Fri, 10 Jul 2020 09:58:49 -0400 Received: from new4-smtp.messagingengine.com ([66.111.4.230]:51899 "EHLO new4-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728212AbgGJN6s (ORCPT ); Fri, 10 Jul 2020 09:58:48 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailnew.nyi.internal (Postfix) with ESMTP id 4A4A158058A; Fri, 10 Jul 2020 09:58:46 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute4.internal (MEProxy); Fri, 10 Jul 2020 09:58:46 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm3; bh=Tj6aNTnbslWDGgcv0ADpqSM03qdpe+ARy33X4GCQXZc=; b=FKG8zWW4 P9XFELwYqnzHMQOIak+vODsNmrfoibczP7oteQmazrEuzUgaMEPyHoH5n3tbvwwC tP9jCJOZtVxqyIlubd6DqYTJqJCQ06TfhxJrJw71PnenMPNw8yZ2cD8ryET7TB6K QCBNrJeA5HbRueD6pBIBia8HlIobRLnFBFxGSLIYwQBaGYaPZMvvDrvLiegmrNEn hLm1tuNpQF+aMcLOk9tuoXvTmtVii5h85Lmv0Ooj2Qhw3QoxplyfNQ2c4UKHLYj5 sNAVAaGwByNCC+t4s+tZBFwyjltp3BKjy7x2p+pzzMGkb3npOik9JDXWldZ98PvP 8qCGZiM+Fyy5Cw== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduiedrvddugdejgecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecunecujfgurhephffvufffkffojghfggfgsedtkeertd ertddtnecuhfhrohhmpefkughoucfutghhihhmmhgvlhcuoehiughoshgthhesihguohhs tghhrdhorhhgqeenucggtffrrghtthgvrhhnpeduteeiveffffevleekleejffekhfekhe fgtdfftefhledvjefggfehgfevjeekhfenucfkphepuddtledrieeirdduledrudeffeen ucevlhhushhtvghrufhiiigvpeduudenucfrrghrrghmpehmrghilhhfrhhomhepihguoh hstghhsehiughoshgthhdrohhrgh X-ME-Proxy: Received: from shredder.mtl.com (bzq-109-66-19-133.red.bezeqint.net [109.66.19.133]) by mail.messagingengine.com (Postfix) with ESMTPA id 52729328005A; Fri, 10 Jul 2020 09:58:41 -0400 (EDT) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com, petrm@mellanox.com, mlxsw@mellanox.com, michael.chan@broadcom.com, saeedm@mellanox.com, leon@kernel.org, pablo@netfilter.org, kadlec@netfilter.org, fw@strlen.de, jhs@mojatatu.com, xiyou.wangcong@gmail.com, simon.horman@netronome.com, Ido Schimmel Subject: [PATCH net-next 12/13] mlxsw: spectrum_qdisc: Offload mirroring on RED qevent early_drop Date: Fri, 10 Jul 2020 16:57:05 +0300 Message-Id: <20200710135706.601409-13-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200710135706.601409-1-idosch@idosch.org> References: <20200710135706.601409-1-idosch@idosch.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Petr Machata The RED qevents early_drop and mark can be offloaded under the following fairly strict conditions: - At most one filter is configured at the qevent block - The protocol is "any" - The classifier is matchall - The action is trap, sample, or mirror with the same conditions as with other SPAN offloads - The hw_counters type is none In this patchset, implement offload of mirror 
for early_drop qevent. The ECN trigger is currently not implemented in the FW and therefore the mark qevent is not supported. The qevent notifications look exactly like regular block binding notifications with a binder type that identifies them as qevents. Therefore the details of processing this binding are fairly similar to the matchall offload. struct flow_block_offload.sch points at the qdisc in question. Use it to figure out if the qdisc is offloaded at all and what TC it configures. Bounce bindings on not-offloaded qdiscs. Individual bindings are kept in a list so that several qevents can share the same block and all binding points get configured as the configured filters change. Signed-off-by: Petr Machata Signed-off-by: Ido Schimmel --- .../net/ethernet/mellanox/mlxsw/spectrum.c | 2 + .../net/ethernet/mellanox/mlxsw/spectrum.h | 2 + .../ethernet/mellanox/mlxsw/spectrum_qdisc.c | 472 ++++++++++++++++++ 3 files changed, 476 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c index 2235c4bf330d..4ac634bd3571 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c @@ -1337,6 +1337,8 @@ static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port, return mlxsw_sp_setup_tc_block_clsact(mlxsw_sp_port, f, true); case FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS: return mlxsw_sp_setup_tc_block_clsact(mlxsw_sp_port, f, false); + case FLOW_BLOCK_BINDER_TYPE_RED_EARLY_DROP: + return mlxsw_sp_setup_tc_block_qevent_early_drop(mlxsw_sp_port, f); default: return -EOPNOTSUPP; } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h index ee9a19f28b97..c00811178637 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h @@ -1031,6 +1031,8 @@ int mlxsw_sp_setup_tc_tbf(struct mlxsw_sp_port *mlxsw_sp_port, struct tc_tbf_qopt_offload *p); int mlxsw_sp_setup_tc_fifo(struct mlxsw_sp_port *mlxsw_sp_port, struct tc_fifo_qopt_offload *p); +int mlxsw_sp_setup_tc_block_qevent_early_drop(struct mlxsw_sp_port *mlxsw_sp_port, + struct flow_block_offload *f); /* spectrum_fid.c */ bool mlxsw_sp_fid_is_dummy(struct mlxsw_sp *mlxsw_sp, u16 fid_index); diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c index 670a43fe2a00..901acd87353f 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c @@ -8,6 +8,7 @@ #include #include "spectrum.h" +#include "spectrum_span.h" #include "reg.h" #define MLXSW_SP_PRIO_BAND_TO_TCLASS(band) (IEEE_8021QAZ_MAX_TCS - band - 1) @@ -1272,6 +1273,477 @@ int mlxsw_sp_setup_tc_ets(struct mlxsw_sp_port *mlxsw_sp_port, } } +struct mlxsw_sp_qevent_block { + struct list_head binding_list; + struct list_head mall_entry_list; + struct mlxsw_sp *mlxsw_sp; +}; + +struct mlxsw_sp_qevent_binding { + struct list_head list; + struct mlxsw_sp_port *mlxsw_sp_port; + u32 handle; + int tclass_num; + enum mlxsw_sp_span_trigger span_trigger; +}; + +static LIST_HEAD(mlxsw_sp_qevent_block_cb_list); + +static int mlxsw_sp_qevent_mirror_configure(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_mall_entry *mall_entry, + struct mlxsw_sp_qevent_binding *qevent_binding) +{ + struct mlxsw_sp_port *mlxsw_sp_port = qevent_binding->mlxsw_sp_port; + struct mlxsw_sp_span_trigger_parms trigger_parms = {}; + int span_id; + int err; + + err = 
mlxsw_sp_span_agent_get(mlxsw_sp, mall_entry->mirror.to_dev, &span_id); + if (err) + return err; + + err = mlxsw_sp_span_analyzed_port_get(mlxsw_sp_port, true); + if (err) + goto err_analyzed_port_get; + + trigger_parms.span_id = span_id; + err = mlxsw_sp_span_agent_bind(mlxsw_sp, qevent_binding->span_trigger, mlxsw_sp_port, + &trigger_parms); + if (err) + goto err_agent_bind; + + err = mlxsw_sp_span_trigger_enable(mlxsw_sp_port, qevent_binding->span_trigger, + qevent_binding->tclass_num); + if (err) + goto err_trigger_enable; + + mall_entry->mirror.span_id = span_id; + return 0; + +err_trigger_enable: + mlxsw_sp_span_agent_unbind(mlxsw_sp, qevent_binding->span_trigger, mlxsw_sp_port, + &trigger_parms); +err_agent_bind: + mlxsw_sp_span_analyzed_port_put(mlxsw_sp_port, true); +err_analyzed_port_get: + mlxsw_sp_span_agent_put(mlxsw_sp, span_id); + return err; +} + +static void mlxsw_sp_qevent_mirror_deconfigure(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_mall_entry *mall_entry, + struct mlxsw_sp_qevent_binding *qevent_binding) +{ + struct mlxsw_sp_port *mlxsw_sp_port = qevent_binding->mlxsw_sp_port; + struct mlxsw_sp_span_trigger_parms trigger_parms = { + .span_id = mall_entry->mirror.span_id, + }; + + mlxsw_sp_span_trigger_disable(mlxsw_sp_port, qevent_binding->span_trigger, + qevent_binding->tclass_num); + mlxsw_sp_span_agent_unbind(mlxsw_sp, qevent_binding->span_trigger, mlxsw_sp_port, + &trigger_parms); + mlxsw_sp_span_analyzed_port_put(mlxsw_sp_port, true); + mlxsw_sp_span_agent_put(mlxsw_sp, mall_entry->mirror.span_id); +} + +static int mlxsw_sp_qevent_entry_configure(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_mall_entry *mall_entry, + struct mlxsw_sp_qevent_binding *qevent_binding) +{ + switch (mall_entry->type) { + case MLXSW_SP_MALL_ACTION_TYPE_MIRROR: + return mlxsw_sp_qevent_mirror_configure(mlxsw_sp, mall_entry, qevent_binding); + default: + /* This should have been validated away. 
	 */
+		WARN_ON(1);
+		return -EOPNOTSUPP;
+	}
+}
+
+static void mlxsw_sp_qevent_entry_deconfigure(struct mlxsw_sp *mlxsw_sp,
+					      struct mlxsw_sp_mall_entry *mall_entry,
+					      struct mlxsw_sp_qevent_binding *qevent_binding)
+{
+	switch (mall_entry->type) {
+	case MLXSW_SP_MALL_ACTION_TYPE_MIRROR:
+		return mlxsw_sp_qevent_mirror_deconfigure(mlxsw_sp, mall_entry, qevent_binding);
+	default:
+		WARN_ON(1);
+		return;
+	}
+}
+
+static int mlxsw_sp_qevent_binding_configure(struct mlxsw_sp_qevent_block *qevent_block,
+					     struct mlxsw_sp_qevent_binding *qevent_binding)
+{
+	struct mlxsw_sp_mall_entry *mall_entry;
+	int err;
+
+	list_for_each_entry(mall_entry, &qevent_block->mall_entry_list, list) {
+		err = mlxsw_sp_qevent_entry_configure(qevent_block->mlxsw_sp, mall_entry,
+						      qevent_binding);
+		if (err)
+			goto err_entry_configure;
+	}
+
+	return 0;
+
+err_entry_configure:
+	list_for_each_entry_continue_reverse(mall_entry, &qevent_block->mall_entry_list, list)
+		mlxsw_sp_qevent_entry_deconfigure(qevent_block->mlxsw_sp, mall_entry,
+						  qevent_binding);
+	return err;
+}
+
+static void mlxsw_sp_qevent_binding_deconfigure(struct mlxsw_sp_qevent_block *qevent_block,
+						struct mlxsw_sp_qevent_binding *qevent_binding)
+{
+	struct mlxsw_sp_mall_entry *mall_entry;
+
+	list_for_each_entry(mall_entry, &qevent_block->mall_entry_list, list)
+		mlxsw_sp_qevent_entry_deconfigure(qevent_block->mlxsw_sp, mall_entry,
+						  qevent_binding);
+}
+
+static int mlxsw_sp_qevent_block_configure(struct mlxsw_sp_qevent_block *qevent_block)
+{
+	struct mlxsw_sp_qevent_binding *qevent_binding;
+	int err;
+
+	list_for_each_entry(qevent_binding, &qevent_block->binding_list, list) {
+		err = mlxsw_sp_qevent_binding_configure(qevent_block, qevent_binding);
+		if (err)
+			goto err_binding_configure;
+	}
+
+	return 0;
+
+err_binding_configure:
+	list_for_each_entry_continue_reverse(qevent_binding, &qevent_block->binding_list, list)
+		mlxsw_sp_qevent_binding_deconfigure(qevent_block, qevent_binding);
+	return err;
+}
+
+static void mlxsw_sp_qevent_block_deconfigure(struct mlxsw_sp_qevent_block *qevent_block)
+{
+	struct mlxsw_sp_qevent_binding *qevent_binding;
+
+	list_for_each_entry(qevent_binding, &qevent_block->binding_list, list)
+		mlxsw_sp_qevent_binding_deconfigure(qevent_block, qevent_binding);
+}
+
+static struct mlxsw_sp_mall_entry *
+mlxsw_sp_qevent_mall_entry_find(struct mlxsw_sp_qevent_block *block, unsigned long cookie)
+{
+	struct mlxsw_sp_mall_entry *mall_entry;
+
+	list_for_each_entry(mall_entry, &block->mall_entry_list, list)
+		if (mall_entry->cookie == cookie)
+			return mall_entry;
+
+	return NULL;
+}
+
+static int mlxsw_sp_qevent_mall_replace(struct mlxsw_sp *mlxsw_sp,
+					struct mlxsw_sp_qevent_block *qevent_block,
+					struct tc_cls_matchall_offload *f)
+{
+	struct mlxsw_sp_mall_entry *mall_entry;
+	struct flow_action_entry *act;
+	int err;
+
+	/* It should not currently be possible to replace a matchall rule. So
+	 * this must be a new rule.
+	 */
+	if (!list_empty(&qevent_block->mall_entry_list)) {
+		NL_SET_ERR_MSG(f->common.extack, "At most one filter supported");
+		return -EOPNOTSUPP;
+	}
+	if (f->rule->action.num_entries != 1) {
+		NL_SET_ERR_MSG(f->common.extack, "Only singular actions supported");
+		return -EOPNOTSUPP;
+	}
+	if (f->common.chain_index) {
+		NL_SET_ERR_MSG(f->common.extack, "Only chain 0 is supported");
+		return -EOPNOTSUPP;
+	}
+	if (f->common.protocol != htons(ETH_P_ALL)) {
+		NL_SET_ERR_MSG(f->common.extack, "Protocol matching not supported");
+		return -EOPNOTSUPP;
+	}
+
+	act = &f->rule->action.entries[0];
+	if (!(act->hw_stats & FLOW_ACTION_HW_STATS_DISABLED)) {
+		NL_SET_ERR_MSG(f->common.extack, "HW counters not supported on qevents");
+		return -EOPNOTSUPP;
+	}
+
+	mall_entry = kzalloc(sizeof(*mall_entry), GFP_KERNEL);
+	if (!mall_entry)
+		return -ENOMEM;
+	mall_entry->cookie = f->cookie;
+
+	if (act->id == FLOW_ACTION_MIRRED) {
+		mall_entry->type = MLXSW_SP_MALL_ACTION_TYPE_MIRROR;
+		mall_entry->mirror.to_dev = act->dev;
+	} else {
+		NL_SET_ERR_MSG(f->common.extack, "Unsupported action");
+		err = -EOPNOTSUPP;
+		goto err_unsupported_action;
+	}
+
+	list_add_tail(&mall_entry->list, &qevent_block->mall_entry_list);
+
+	err = mlxsw_sp_qevent_block_configure(qevent_block);
+	if (err)
+		goto err_block_configure;
+
+	return 0;
+
+err_block_configure:
+	list_del(&mall_entry->list);
+err_unsupported_action:
+	kfree(mall_entry);
+	return err;
+}
+
+static void mlxsw_sp_qevent_mall_destroy(struct mlxsw_sp_qevent_block *qevent_block,
+					 struct tc_cls_matchall_offload *f)
+{
+	struct mlxsw_sp_mall_entry *mall_entry;
+
+	mall_entry = mlxsw_sp_qevent_mall_entry_find(qevent_block, f->cookie);
+	if (!mall_entry)
+		return;
+
+	mlxsw_sp_qevent_block_deconfigure(qevent_block);
+
+	list_del(&mall_entry->list);
+	kfree(mall_entry);
+}
+
+static int mlxsw_sp_qevent_block_mall_cb(struct mlxsw_sp_qevent_block *qevent_block,
+					 struct tc_cls_matchall_offload *f)
+{
+	struct mlxsw_sp *mlxsw_sp = qevent_block->mlxsw_sp;
+
+	switch (f->command) {
+	case TC_CLSMATCHALL_REPLACE:
+		return mlxsw_sp_qevent_mall_replace(mlxsw_sp, qevent_block, f);
+	case TC_CLSMATCHALL_DESTROY:
+		mlxsw_sp_qevent_mall_destroy(qevent_block, f);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int mlxsw_sp_qevent_block_cb(enum tc_setup_type type, void *type_data, void *cb_priv)
+{
+	struct mlxsw_sp_qevent_block *qevent_block = cb_priv;
+
+	switch (type) {
+	case TC_SETUP_CLSMATCHALL:
+		return mlxsw_sp_qevent_block_mall_cb(qevent_block, type_data);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static struct mlxsw_sp_qevent_block *mlxsw_sp_qevent_block_create(struct mlxsw_sp *mlxsw_sp,
+								  struct net *net)
+{
+	struct mlxsw_sp_qevent_block *qevent_block;
+
+	qevent_block = kzalloc(sizeof(*qevent_block), GFP_KERNEL);
+	if (!qevent_block)
+		return NULL;
+
+	INIT_LIST_HEAD(&qevent_block->binding_list);
+	INIT_LIST_HEAD(&qevent_block->mall_entry_list);
+	qevent_block->mlxsw_sp = mlxsw_sp;
+	return qevent_block;
+}
+
+static void
+mlxsw_sp_qevent_block_destroy(struct mlxsw_sp_qevent_block *qevent_block)
+{
+	WARN_ON(!list_empty(&qevent_block->binding_list));
+	WARN_ON(!list_empty(&qevent_block->mall_entry_list));
+	kfree(qevent_block);
+}
+
+static void mlxsw_sp_qevent_block_release(void *cb_priv)
+{
+	struct mlxsw_sp_qevent_block *qevent_block = cb_priv;
+
+	mlxsw_sp_qevent_block_destroy(qevent_block);
+}
+
+static struct mlxsw_sp_qevent_binding *
+mlxsw_sp_qevent_binding_create(struct mlxsw_sp_port *mlxsw_sp_port, u32 handle, int tclass_num,
+			       enum mlxsw_sp_span_trigger span_trigger)
+{
+	struct mlxsw_sp_qevent_binding *binding;
+
+	binding = kzalloc(sizeof(*binding), GFP_KERNEL);
+	if (!binding)
+		return ERR_PTR(-ENOMEM);
+
+	binding->mlxsw_sp_port = mlxsw_sp_port;
+	binding->handle = handle;
+	binding->tclass_num = tclass_num;
+	binding->span_trigger = span_trigger;
+	return binding;
+}
+
+static void
+mlxsw_sp_qevent_binding_destroy(struct mlxsw_sp_qevent_binding *binding)
+{
+	kfree(binding);
+}
+
+static struct mlxsw_sp_qevent_binding *
+mlxsw_sp_qevent_binding_lookup(struct mlxsw_sp_qevent_block *block,
+			       struct mlxsw_sp_port *mlxsw_sp_port,
+			       u32 handle,
+			       enum mlxsw_sp_span_trigger span_trigger)
+{
+	struct mlxsw_sp_qevent_binding *qevent_binding;
+
+	list_for_each_entry(qevent_binding, &block->binding_list, list)
+		if (qevent_binding->mlxsw_sp_port == mlxsw_sp_port &&
+		    qevent_binding->handle == handle &&
+		    qevent_binding->span_trigger == span_trigger)
+			return qevent_binding;
+	return NULL;
+}
+
+static int mlxsw_sp_setup_tc_block_qevent_bind(struct mlxsw_sp_port *mlxsw_sp_port,
+					       struct flow_block_offload *f,
+					       enum mlxsw_sp_span_trigger span_trigger)
+{
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+	struct mlxsw_sp_qevent_binding *qevent_binding;
+	struct mlxsw_sp_qevent_block *qevent_block;
+	struct flow_block_cb *block_cb;
+	struct mlxsw_sp_qdisc *qdisc;
+	bool register_block = false;
+	int err;
+
+	block_cb = flow_block_cb_lookup(f->block, mlxsw_sp_qevent_block_cb, mlxsw_sp);
+	if (!block_cb) {
+		qevent_block = mlxsw_sp_qevent_block_create(mlxsw_sp, f->net);
+		if (!qevent_block)
+			return -ENOMEM;
+		block_cb = flow_block_cb_alloc(mlxsw_sp_qevent_block_cb, mlxsw_sp, qevent_block,
+					       mlxsw_sp_qevent_block_release);
+		if (IS_ERR(block_cb)) {
+			mlxsw_sp_qevent_block_destroy(qevent_block);
+			return PTR_ERR(block_cb);
+		}
+		register_block = true;
+	} else {
+		qevent_block = flow_block_cb_priv(block_cb);
+	}
+	flow_block_cb_incref(block_cb);
+
+	qdisc = mlxsw_sp_qdisc_find_by_handle(mlxsw_sp_port, f->sch->handle);
+	if (!qdisc) {
+		NL_SET_ERR_MSG(f->extack, "Qdisc not offloaded");
+		err = -ENOENT;
+		goto err_find_qdisc;
+	}
+
+	if (WARN_ON(mlxsw_sp_qevent_binding_lookup(qevent_block, mlxsw_sp_port, f->sch->handle,
+						   span_trigger))) {
+		err = -EEXIST;
+		goto err_binding_exists;
+	}
+
+	qevent_binding = mlxsw_sp_qevent_binding_create(mlxsw_sp_port, f->sch->handle,
+							qdisc->tclass_num, span_trigger);
+	if (IS_ERR(qevent_binding)) {
+		err = PTR_ERR(qevent_binding);
+		goto err_binding_create;
+	}
+
+	err = mlxsw_sp_qevent_binding_configure(qevent_block, qevent_binding);
+	if (err)
+		goto err_binding_configure;
+
+	list_add(&qevent_binding->list, &qevent_block->binding_list);
+
+	if (register_block) {
+		flow_block_cb_add(block_cb, f);
+		list_add_tail(&block_cb->driver_list, &mlxsw_sp_qevent_block_cb_list);
+	}
+
+	return 0;
+
+err_binding_configure:
+	mlxsw_sp_qevent_binding_destroy(qevent_binding);
+err_binding_create:
+err_binding_exists:
+err_find_qdisc:
+	if (!flow_block_cb_decref(block_cb))
+		flow_block_cb_free(block_cb);
+	return err;
+}
+
+static void mlxsw_sp_setup_tc_block_qevent_unbind(struct mlxsw_sp_port *mlxsw_sp_port,
+						  struct flow_block_offload *f,
+						  enum mlxsw_sp_span_trigger span_trigger)
+{
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+	struct mlxsw_sp_qevent_binding *qevent_binding;
+	struct mlxsw_sp_qevent_block *qevent_block;
+	struct flow_block_cb *block_cb;
+
+	block_cb = flow_block_cb_lookup(f->block, mlxsw_sp_qevent_block_cb, mlxsw_sp);
+	if (!block_cb)
+		return;
+	qevent_block = flow_block_cb_priv(block_cb);
+
+	qevent_binding = mlxsw_sp_qevent_binding_lookup(qevent_block, mlxsw_sp_port, f->sch->handle,
+							span_trigger);
+	if (!qevent_binding)
+		return;
+
+	list_del(&qevent_binding->list);
+	mlxsw_sp_qevent_binding_deconfigure(qevent_block, qevent_binding);
+	mlxsw_sp_qevent_binding_destroy(qevent_binding);
+
+	if (!flow_block_cb_decref(block_cb)) {
+		flow_block_cb_remove(block_cb, f);
+		list_del(&block_cb->driver_list);
+	}
+}
+
+static int mlxsw_sp_setup_tc_block_qevent(struct mlxsw_sp_port *mlxsw_sp_port,
+					  struct flow_block_offload *f,
+					  enum mlxsw_sp_span_trigger span_trigger)
+{
+	f->driver_block_list = &mlxsw_sp_qevent_block_cb_list;
+
+	switch (f->command) {
+	case FLOW_BLOCK_BIND:
+		return mlxsw_sp_setup_tc_block_qevent_bind(mlxsw_sp_port, f, span_trigger);
+	case FLOW_BLOCK_UNBIND:
+		mlxsw_sp_setup_tc_block_qevent_unbind(mlxsw_sp_port, f, span_trigger);
+		return 0;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+int mlxsw_sp_setup_tc_block_qevent_early_drop(struct mlxsw_sp_port *mlxsw_sp_port,
+					      struct flow_block_offload *f)
+{
+	return mlxsw_sp_setup_tc_block_qevent(mlxsw_sp_port, f, MLXSW_SP_SPAN_TRIGGER_EARLY_DROP);
+}
+
 int mlxsw_sp_tc_qdisc_init(struct mlxsw_sp_port *mlxsw_sp_port)
 {
 	struct mlxsw_sp_qdisc_state *qdisc_state;

From patchwork Fri Jul 10 13:57:06 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 1326873
X-Patchwork-Delegate: davem@davemloft.net
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@mellanox.com, petrm@mellanox.com,
 mlxsw@mellanox.com, michael.chan@broadcom.com, saeedm@mellanox.com,
 leon@kernel.org, pablo@netfilter.org, kadlec@netfilter.org, fw@strlen.de,
 jhs@mojatatu.com, xiyou.wangcong@gmail.com, simon.horman@netronome.com,
 Ido Schimmel
Subject: [PATCH net-next 13/13] selftests: mlxsw: RED: Test offload of mirror on RED early_drop qevent
Date: Fri, 10 Jul 2020 16:57:06 +0300
Message-Id: <20200710135706.601409-14-idosch@idosch.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200710135706.601409-1-idosch@idosch.org>
References: <20200710135706.601409-1-idosch@idosch.org>
X-Mailing-List: netdev@vger.kernel.org

From: Petr Machata

Add a selftest for offloading a mirror action attached to the block
associated with RED early_drop qevent.

Signed-off-by: Petr Machata
Signed-off-by: Ido Schimmel
---
 .../drivers/net/mlxsw/sch_red_core.sh         | 106 +++++++++++++++++-
 .../drivers/net/mlxsw/sch_red_ets.sh          |  11 ++
 .../drivers/net/mlxsw/sch_red_root.sh         |   8 ++
 3 files changed, 122 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
index 0d347d48c112..45042105ead7 100644
--- a/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh
@@ -121,6 +121,7 @@ h1_destroy()
 h2_create()
 {
 	host_create $h2 2
+	tc qdisc add dev $h2 clsact
 
 	# Some of the tests in this suite use multicast traffic. As this traffic
 	# enters BR2_10 resp. BR2_11, it is flooded to all other ports. Thus
@@ -141,6 +142,7 @@ h2_destroy()
 {
 	ethtool -s $h2 autoneg on
+	tc qdisc del dev $h2 clsact
 	host_destroy $h2
 }
 
@@ -336,6 +338,17 @@ get_qdisc_npackets()
 	qdisc_stats_get $swp3 $(get_qdisc_handle $vlan) .packets
 }
 
+send_packets()
+{
+	local vlan=$1; shift
+	local proto=$1; shift
+	local pkts=$1; shift
+
+	$MZ $h2.$vlan -p 8000 -a own -b $h3_mac \
+		-A $(ipaddr 2 $vlan) -B $(ipaddr 3 $vlan) \
+		-t $proto -q -c $pkts "$@"
+}
+
 # This sends traffic in an attempt to build a backlog of $size. Returns 0 on
 # success. After 10 failed attempts it bails out and returns 1. It dumps the
 # backlog size to stdout.
@@ -364,9 +377,7 @@ build_backlog()
 			return 1
 		fi
 
-		$MZ $h2.$vlan -p 8000 -a own -b $h3_mac \
-			-A $(ipaddr 2 $vlan) -B $(ipaddr 3 $vlan) \
-			-t $proto -q -c $pkts "$@"
+		send_packets $vlan $proto $pkts "$@"
 	done
 }
 
@@ -531,3 +542,92 @@ do_mc_backlog_test()
 
 	log_test "TC $((vlan - 10)): Qdisc reports MC backlog"
 }
+
+do_drop_test()
+{
+	local vlan=$1; shift
+	local limit=$1; shift
+	local trigger=$1; shift
+	local subtest=$1; shift
+	local fetch_counter=$1; shift
+	local backlog
+	local base
+	local now
+	local pct
+
+	RET=0
+
+	start_traffic $h1.$vlan $(ipaddr 1 $vlan) $(ipaddr 3 $vlan) $h3_mac
+
+	# Create a bit of a backlog and observe no mirroring due to drops.
+	qevent_rule_install_$subtest
+	base=$($fetch_counter)
+
+	build_backlog $vlan $((2 * limit / 3)) udp >/dev/null
+
+	busywait 1100 until_counter_is ">= $((base + 1))" $fetch_counter >/dev/null
+	check_fail $? "Spurious packets observed without buffer pressure"
+
+	qevent_rule_uninstall_$subtest
+
+	# Push to the queue until it's at the limit. The configured limit is
+	# rounded by the qdisc and then by the driver, so this is the best we
+	# can do to get to the real limit of the system. Do this with the rules
+	# uninstalled so that the inevitable drops don't get counted.
+	build_backlog $vlan $((3 * limit / 2)) udp >/dev/null
+
+	qevent_rule_install_$subtest
+	base=$($fetch_counter)
+
+	send_packets $vlan udp 11
+
+	now=$(busywait 1100 until_counter_is ">= $((base + 10))" $fetch_counter)
+	check_err $? "Dropped packets not observed: 11 expected, $((now - base)) seen"
+
+	# When no extra traffic is injected, there should be no mirroring.
+	busywait 1100 until_counter_is ">= $((base + 20))" $fetch_counter >/dev/null
+	check_fail $? "Spurious packets observed"
+
+	# When the rule is uninstalled, there should be no mirroring.
+	qevent_rule_uninstall_$subtest
+	send_packets $vlan udp 11
+	busywait 1100 until_counter_is ">= $((base + 20))" $fetch_counter >/dev/null
+	check_fail $? "Spurious packets observed after uninstall"
+
+	log_test "TC $((vlan - 10)): ${trigger}ped packets $subtest'd"
+
+	stop_traffic
+	sleep 1
+}
+
+qevent_rule_install_mirror()
+{
+	tc filter add block 10 pref 1234 handle 102 matchall skip_sw \
+		action mirred egress mirror dev $swp2 hw_stats disabled
+}
+
+qevent_rule_uninstall_mirror()
+{
+	tc filter del block 10 pref 1234 handle 102 matchall
+}
+
+qevent_counter_fetch_mirror()
+{
+	tc_rule_handle_stats_get "dev $h2 ingress" 101
+}
+
+do_drop_mirror_test()
+{
+	local vlan=$1; shift
+	local limit=$1; shift
+	local qevent_name=$1; shift
+
+	tc filter add dev $h2 ingress pref 1 handle 101 prot ip \
+		flower skip_sw ip_proto udp \
+		action drop
+
+	do_drop_test "$vlan" "$limit" "$qevent_name" mirror \
+		qevent_counter_fetch_mirror
+
+	tc filter del dev $h2 ingress pref 1 handle 101 flower
+}
diff --git a/tools/testing/selftests/drivers/net/mlxsw/sch_red_ets.sh b/tools/testing/selftests/drivers/net/mlxsw/sch_red_ets.sh
index 1c36c576613b..c8968b041bea 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/sch_red_ets.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/sch_red_ets.sh
@@ -7,6 +7,7 @@ ALL_TESTS="
 	ecn_nodrop_test
 	red_test
 	mc_backlog_test
+	red_mirror_test
 "
 : ${QDISC:=ets}
 source sch_red_core.sh
@@ -83,6 +84,16 @@ mc_backlog_test()
 	uninstall_qdisc
 }
 
+red_mirror_test()
+{
+	install_qdisc qevent early_drop block 10
+
+	do_drop_mirror_test 10 $BACKLOG1 early_drop
+	do_drop_mirror_test 11 $BACKLOG2 early_drop
+
+	uninstall_qdisc
+}
+
 trap cleanup EXIT
 
 setup_prepare
diff --git a/tools/testing/selftests/drivers/net/mlxsw/sch_red_root.sh b/tools/testing/selftests/drivers/net/mlxsw/sch_red_root.sh
index 558667ea11ec..ede9c38d3eff 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/sch_red_root.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/sch_red_root.sh
@@ -7,6 +7,7 @@ ALL_TESTS="
 	ecn_nodrop_test
 	red_test
 	mc_backlog_test
+	red_mirror_test
 "
 source sch_red_core.sh
 
@@ -57,6 +58,13 @@ mc_backlog_test()
 	uninstall_qdisc
 }
 
+red_mirror_test()
+{
+	install_qdisc qevent early_drop block 10
+	do_drop_mirror_test 10 $BACKLOG
+	uninstall_qdisc
+}
+
 trap cleanup EXIT
 
 setup_prepare