From patchwork Tue Jan 16 13:33:35 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Pirko X-Patchwork-Id: 861527 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=resnulli-us.20150623.gappssmtp.com header.i=@resnulli-us.20150623.gappssmtp.com header.b="fDk5XPPF"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3zLWRV3j6Kz9ryk for ; Wed, 17 Jan 2018 00:34:46 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751433AbeAPNdy (ORCPT ); Tue, 16 Jan 2018 08:33:54 -0500 Received: from mail-wm0-f66.google.com ([74.125.82.66]:43241 "EHLO mail-wm0-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751214AbeAPNdv (ORCPT ); Tue, 16 Jan 2018 08:33:51 -0500 Received: by mail-wm0-f66.google.com with SMTP id g1so8395313wmg.2 for ; Tue, 16 Jan 2018 05:33:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=resnulli-us.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=w0NgEuxwmF21Pu1gQFICFAgd3X9Aa05MWwQCAFEVx6s=; b=fDk5XPPFikof9L3Ywn2Jffp3Pn2J19ZVcEvVN94pFDijtp3UZP/lFa2zExW9TjFqPx zQmI/O8AovBWEE/Dqjdv4lBUVcU9Mx1R4533a2j+KOD379i8BJKYfyku2T2g14vb9eYT 4lflZf+wSW9VL2ovGz+Hr01Ee4IRl1ffmN0qbW94nTpM9+vhdZnHCPTUUQ+bhIAnLwDZ bxgiB3wJkDiUpleGMQmXxo1EuJd9HTjtmSuuPuhqYH9yMYu1zmyCcK2DGKE6p01hltSB jXT+op2aEL0PKykEW24mH0ZQbQQ3bPzyzqTw62lA+6DH7QwgEwfdtcX8BfqeZSkJ8hJe dqxA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=w0NgEuxwmF21Pu1gQFICFAgd3X9Aa05MWwQCAFEVx6s=; b=O+w7E3AqkwRYZERfyhB0Yze7i+4XZhfreJ1nhfy8R4qdj/zKF2jFN2R0TYidORGN0t F3aCOLJyoHs6enCy9Mspp36y7jkAHCh4rVmCs6ctnV7sCBvsbjSx75PrA7TCvgvZ2ufp /f7orKTf/+83REanmaP942o6LmhZsN8BH/RHfmGnZBeydf9NfXs4cEZGIeF+hUYpH8zo HSMbcJUwoKbRi2WJ/g/xcAhEAI/R0IZvAAW9C+Bh1SnI5H1yQjM3iNBBYl8OEndGqSpd NIWlby/uCw5SdcldpAr9WeMalSlNRRbB/qx6rsthe9N3DaEcroig37mKbUWJhXtGDiIr zfhg== X-Gm-Message-State: AKwxytfJaDEvaTUsjHvI6b59gtAzHtY6ABHEXinJfmxpUIzy7yfG8tOe ifDIN8n5vtxQIONrnQJH4prdDmMT X-Google-Smtp-Source: ACJfBot07xH53OJs81Ylb5X+FI+1ZgQIGHP2+OF6waYLlXxswp05b8QNhW65x2EwRAhl5W/UFnV2/Q== X-Received: by 10.28.136.84 with SMTP id k81mr11953291wmd.62.1516109629531; Tue, 16 Jan 2018 05:33:49 -0800 (PST) Received: from localhost (ip-89-177-135-29.net.upcbroadband.cz. [89.177.135.29]) by smtp.gmail.com with ESMTPSA id 32sm2470454wrm.14.2018.01.16.05.33.48 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 16 Jan 2018 05:33:48 -0800 (PST) From: Jiri Pirko To: netdev@vger.kernel.org Cc: davem@davemloft.net, jhs@mojatatu.com, xiyou.wangcong@gmail.com, mlxsw@mellanox.com, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com, f.fainelli@gmail.com, michael.chan@broadcom.com, ganeshgr@chelsio.com, saeedm@mellanox.com, matanb@mellanox.com, leonro@mellanox.com, idosch@mellanox.com, jakub.kicinski@netronome.com, simon.horman@netronome.com, pieter.jansenvanvuuren@netronome.com, john.hurley@netronome.com, alexander.h.duyck@intel.com, ogerlitz@mellanox.com, john.fastabend@gmail.com, daniel@iogearbox.net, dsahern@gmail.com, roopa@cumulusnetworks.com Subject: [patch net-next v9 01/13] net: sched: introduce support for multiple filter chain pointers registration Date: Tue, 16 Jan 2018 14:33:35 +0100 Message-Id: <20180116133347.2207-2-jiri@resnulli.us> X-Mailer: git-send-email 2.9.5 In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us> References: 
<20180116133347.2207-1-jiri@resnulli.us>
Sender: netdev-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org

From: Jiri Pirko

So far, it was only possible to register a single filter chain pointer to
block->chain[0]. However, once blocks become shareable, we need to allow
registration of multiple filter chain pointers.

Signed-off-by: Jiri Pirko
---
v6->v7:
- unsquashed shared block patch that was previously squashed by mistake
- fixed error path in block create - freeing chain 0
v3->v4:
- rebased on top of the current net-next
- added some extack strings
v2->v3:
- rebased on top of the current net-next
---
 include/net/sch_generic.h |  3 +-
 net/sched/cls_api.c       | 77 ++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 70 insertions(+), 10 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index ac029d5..ddb96bb 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -275,8 +275,7 @@ typedef void tcf_chain_head_change_t(struct tcf_proto *tp_head, void *priv);
 
 struct tcf_chain {
 	struct tcf_proto __rcu *filter_chain;
-	tcf_chain_head_change_t *chain_head_change;
-	void *chain_head_change_priv;
+	struct list_head filter_chain_list;
 	struct list_head list;
 	struct tcf_block *block;
 	u32 index; /* chain index */

diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 6708b69..e6b16b3 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -179,6 +179,12 @@ static void tcf_proto_destroy(struct tcf_proto *tp)
 	kfree_rcu(tp, rcu);
 }
 
+struct tcf_filter_chain_list_item {
+	struct list_head list;
+	tcf_chain_head_change_t *chain_head_change;
+	void *chain_head_change_priv;
+};
+
 static struct tcf_chain *tcf_chain_create(struct tcf_block *block,
					  u32 chain_index)
 {
@@ -187,6 +193,7 @@ static struct tcf_chain *tcf_chain_create(struct tcf_block *block,
 	chain = kzalloc(sizeof(*chain), GFP_KERNEL);
 	if (!chain)
 		return NULL;
+	INIT_LIST_HEAD(&chain->filter_chain_list);
 	list_add_tail(&chain->list, &block->chain_list);
 	chain->block = block;
 	chain->index = chain_index;
@@ -194,12 +201,19 @@ static struct tcf_chain *tcf_chain_create(struct tcf_block *block,
 	return chain;
 }
 
+static void tcf_chain_head_change_item(struct tcf_filter_chain_list_item *item,
+				       struct tcf_proto *tp_head)
+{
+	if (item->chain_head_change)
+		item->chain_head_change(tp_head, item->chain_head_change_priv);
+}
+
 static void tcf_chain_head_change(struct tcf_chain *chain,
				  struct tcf_proto *tp_head)
 {
-	if (chain->chain_head_change)
-		chain->chain_head_change(tp_head,
-					 chain->chain_head_change_priv);
+	struct tcf_filter_chain_list_item *item;
+
+	list_for_each_entry(item, &chain->filter_chain_list, list)
+		tcf_chain_head_change_item(item, tp_head);
 }
 
 static void tcf_chain_flush(struct tcf_chain *chain)
@@ -280,6 +294,50 @@ static void tcf_block_offload_unbind(struct tcf_block *block, struct Qdisc *q,
 	tcf_block_offload_cmd(block, q, ei, TC_BLOCK_UNBIND);
 }
 
+static int
+tcf_chain_head_change_cb_add(struct tcf_chain *chain,
+			     struct tcf_block_ext_info *ei,
+			     struct netlink_ext_ack *extack)
+{
+	struct tcf_filter_chain_list_item *item;
+
+	item = kmalloc(sizeof(*item), GFP_KERNEL);
+	if (!item) {
+		NL_SET_ERR_MSG(extack, "Memory allocation for head change callback item failed");
+		return -ENOMEM;
+	}
+	item->chain_head_change = ei->chain_head_change;
+	item->chain_head_change_priv = ei->chain_head_change_priv;
+	if (chain->filter_chain)
+		tcf_chain_head_change_item(item, chain->filter_chain);
+	list_add(&item->list, &chain->filter_chain_list);
+	return 0;
+}
+
+static void
+tcf_chain_head_change_cb_del(struct tcf_chain *chain,
+			     struct tcf_block_ext_info *ei)
+{
+	struct tcf_filter_chain_list_item *item;
+
+	list_for_each_entry(item, &chain->filter_chain_list, list) {
+		if ((!ei->chain_head_change && !ei->chain_head_change_priv) ||
+		    (item->chain_head_change == ei->chain_head_change &&
+		     item->chain_head_change_priv == ei->chain_head_change_priv)) {
+			tcf_chain_head_change_item(item, NULL);
+			list_del(&item->list);
+			kfree(item);
+			return;
+		}
+	}
+	WARN_ON(1);
+}
+
+static struct tcf_chain *tcf_block_chain_zero(struct tcf_block *block)
+{
+	return list_first_entry(&block->chain_list, struct tcf_chain, list);
+}
+
 int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
		      struct tcf_block_ext_info *ei,
		      struct netlink_ext_ack *extack)
@@ -302,9 +360,10 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
 		err = -ENOMEM;
 		goto err_chain_create;
 	}
-	WARN_ON(!ei->chain_head_change);
-	chain->chain_head_change = ei->chain_head_change;
-	chain->chain_head_change_priv = ei->chain_head_change_priv;
+	err = tcf_chain_head_change_cb_add(tcf_block_chain_zero(block),
+					   ei, extack);
+	if (err)
+		goto err_chain_head_change_cb_add;
 	block->net = qdisc_net(q);
 	block->q = q;
 	tcf_block_offload_bind(block, q, ei);
@@ -313,6 +372,8 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
 
 err_chain_create:
 	kfree(block);
+err_chain_head_change_cb_add:
+	kfree(chain);
 	return err;
 }
 EXPORT_SYMBOL(tcf_block_get_ext);
@@ -351,6 +412,7 @@ void tcf_block_put_ext(struct tcf_block *block, struct Qdisc *q,
	 */
 	if (!block)
 		return;
+	tcf_chain_head_change_cb_del(tcf_block_chain_zero(block), ei);
 	list_for_each_entry(chain, &block->chain_list, list)
 		tcf_chain_hold(chain);
@@ -364,8 +426,7 @@ void tcf_block_put_ext(struct tcf_block *block, struct Qdisc *q,
 		tcf_chain_put(chain);
 
	/* Finally, put chain 0 and allow block to be freed.
	 */
-	chain = list_first_entry(&block->chain_list, struct tcf_chain, list);
-	tcf_chain_put(chain);
+	tcf_chain_put(tcf_block_chain_zero(block));
 }
 EXPORT_SYMBOL(tcf_block_put_ext);

From patchwork Tue Jan 16 13:33:36 2018
X-Patchwork-Submitter: Jiri Pirko
X-Patchwork-Id: 861517
X-Patchwork-Delegate: davem@davemloft.net
From: Jiri Pirko
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, jhs@mojatatu.com, xiyou.wangcong@gmail.com, mlxsw@mellanox.com, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com, f.fainelli@gmail.com, michael.chan@broadcom.com, ganeshgr@chelsio.com, saeedm@mellanox.com, matanb@mellanox.com, leonro@mellanox.com, idosch@mellanox.com, jakub.kicinski@netronome.com, simon.horman@netronome.com, pieter.jansenvanvuuren@netronome.com, john.hurley@netronome.com, alexander.h.duyck@intel.com, ogerlitz@mellanox.com, john.fastabend@gmail.com, daniel@iogearbox.net, dsahern@gmail.com, roopa@cumulusnetworks.com
Subject: [patch net-next v9 02/13] net: sched: introduce shared filter blocks infrastructure
Date: Tue, 16 Jan 2018 14:33:36 +0100
Message-Id: <20180116133347.2207-3-jiri@resnulli.us>
In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us>
References: <20180116133347.2207-1-jiri@resnulli.us>

From: Jiri Pirko

Allow qdiscs to share filter blocks among themselves. Each qdisc type has
to use the extended block get/put variants that enable sharing. Shared
blocks are tracked within each net namespace and identified by a u32
index. This index is passed from userspace during qdisc creation. If the
user passes an index that is not used by any other qdisc, a new block is
created. If the user passes an index that is already in use, the existing
block is re-used.

Signed-off-by: Jiri Pirko
---
v6->v7:
- new patch - split from the previous one as it got accidentally squashed
  in the rebasing process in the past
- converted to idr extended
- removed auto-generating of block indexes. Callers have to explicitly
  tell that the block is shared by passing a non-zero block index
- fixed error path in block get ext - freeing chain 0
---
 include/net/pkt_cls.h     |   7 ++
 include/net/sch_generic.h |   2 +
 net/sched/cls_api.c       | 163 +++++++++++++++++++++++++++++++++++++++------
 3 files changed, 148 insertions(+), 24 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 9c341f0..c564638 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -29,6 +29,7 @@ struct tcf_block_ext_info {
 	enum tcf_block_binder_type binder_type;
 	tcf_chain_head_change_t *chain_head_change;
 	void *chain_head_change_priv;
+	u32 block_index;
 };
 
 struct tcf_block_cb;
@@ -48,8 +49,14 @@ void tcf_block_put(struct tcf_block *block);
 void tcf_block_put_ext(struct tcf_block *block, struct Qdisc *q,
		       struct tcf_block_ext_info *ei);
 
+static inline bool tcf_block_shared(struct tcf_block *block)
+{
+	return block->index;
+}
+
 static inline struct Qdisc *tcf_block_q(struct tcf_block *block)
 {
+	WARN_ON(tcf_block_shared(block));
 	return block->q;
 }

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index ddb96bb..5cc4d71 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -284,6 +284,8 @@ struct tcf_chain {
 
 struct tcf_block {
 	struct list_head chain_list;
+	u32 index; /* block index for shared blocks */
+	unsigned int refcnt;
 	struct net *net;
 	struct Qdisc *q;
 	struct list_head cb_list;

diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index e6b16b3..9b45950 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -333,22 +334,44 @@ tcf_chain_head_change_cb_del(struct tcf_chain *chain,
 	WARN_ON(1);
 }
 
-static struct tcf_chain *tcf_block_chain_zero(struct tcf_block *block)
+struct tcf_net {
+	struct idr idr;
+};
+
+static unsigned int tcf_net_id;
+
+static int tcf_block_insert(struct tcf_block *block, struct net *net,
+			    u32 block_index, struct netlink_ext_ack *extack)
 {
-	return list_first_entry(&block->chain_list, struct tcf_chain, list);
+	struct tcf_net *tn = net_generic(net, tcf_net_id);
+	int err;
+
+	err = idr_alloc_ext(&tn->idr, block, NULL, block_index,
+			    block_index + 1, GFP_KERNEL);
+	if (err)
+		return err;
+	block->index = block_index;
+	return 0;
 }
 
-int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
-		      struct tcf_block_ext_info *ei,
-		      struct netlink_ext_ack *extack)
+static void tcf_block_remove(struct tcf_block *block, struct net *net)
+{
+	struct tcf_net *tn = net_generic(net, tcf_net_id);
+
+	idr_remove_ext(&tn->idr, block->index);
+}
+
+static struct tcf_block *tcf_block_create(struct net *net, struct Qdisc *q,
+					  struct netlink_ext_ack *extack)
 {
-	struct tcf_block *block = kzalloc(sizeof(*block), GFP_KERNEL);
+	struct tcf_block *block;
 	struct tcf_chain *chain;
 	int err;
 
+	block = kzalloc(sizeof(*block), GFP_KERNEL);
 	if (!block) {
 		NL_SET_ERR_MSG(extack, "Memory allocation for block failed");
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);
 	}
 	INIT_LIST_HEAD(&block->chain_list);
 	INIT_LIST_HEAD(&block->cb_list);
@@ -360,20 +383,76 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
 		err = -ENOMEM;
 		goto err_chain_create;
 	}
+	block->net = qdisc_net(q);
+	block->refcnt = 1;
+	block->net = net;
+	block->q = q;
+	return block;
+
+err_chain_create:
+	kfree(block);
+	return ERR_PTR(err);
+}
+
+static struct tcf_block *tcf_block_lookup(struct net *net, u32 block_index)
+{
+	struct tcf_net *tn = net_generic(net, tcf_net_id);
+
+	return idr_find_ext(&tn->idr, block_index);
+}
+
+static struct tcf_chain *tcf_block_chain_zero(struct tcf_block *block)
+{
+	return list_first_entry(&block->chain_list, struct tcf_chain, list);
+}
+
+int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
+		      struct tcf_block_ext_info *ei,
+		      struct netlink_ext_ack *extack)
+{
+	struct net *net = qdisc_net(q);
+	struct tcf_block *block = NULL;
+	bool created = false;
+	int err;
+
+	if (ei->block_index) {
+		/* block_index not 0 means the shared block is requested */
+		block = tcf_block_lookup(net, ei->block_index);
+		if (block)
+			block->refcnt++;
+	}
+
+	if (!block) {
+		block = tcf_block_create(net, q, extack);
+		if (IS_ERR(block))
+			return PTR_ERR(block);
+		created = true;
+		if (ei->block_index) {
+			err = tcf_block_insert(block, net,
+					       ei->block_index, extack);
+			if (err)
+				goto err_block_insert;
+		}
+	}
+
 	err = tcf_chain_head_change_cb_add(tcf_block_chain_zero(block),
					   ei, extack);
 	if (err)
 		goto err_chain_head_change_cb_add;
-	block->net = qdisc_net(q);
-	block->q = q;
 	tcf_block_offload_bind(block, q, ei);
 	*p_block = block;
 	return 0;
 
-err_chain_create:
-	kfree(block);
 err_chain_head_change_cb_add:
-	kfree(chain);
+	if (created) {
+		if (tcf_block_shared(block))
+			tcf_block_remove(block, net);
+err_block_insert:
+		kfree(tcf_block_chain_zero(block));
+		kfree(block);
+	} else {
+		block->refcnt--;
+	}
 	return err;
 }
 EXPORT_SYMBOL(tcf_block_get_ext);
@@ -407,26 +486,34 @@ void tcf_block_put_ext(struct tcf_block *block, struct Qdisc *q,
 {
 	struct tcf_chain *chain, *tmp;
 
-	/* Hold a refcnt for all chains, so that they don't disappear
-	 * while we are iterating.
-	 */
 	if (!block)
 		return;
 	tcf_chain_head_change_cb_del(tcf_block_chain_zero(block), ei);
 
-	list_for_each_entry(chain, &block->chain_list, list)
-		tcf_chain_hold(chain);
-	list_for_each_entry(chain, &block->chain_list, list)
-		tcf_chain_flush(chain);
+	if (--block->refcnt == 0) {
+		if (tcf_block_shared(block))
+			tcf_block_remove(block, block->net);
+
+		/* Hold a refcnt for all chains, so that they don't disappear
+		 * while we are iterating.
+		 */
+		list_for_each_entry(chain, &block->chain_list, list)
+			tcf_chain_hold(chain);
+
+		list_for_each_entry(chain, &block->chain_list, list)
+			tcf_chain_flush(chain);
+	}
 
 	tcf_block_offload_unbind(block, q, ei);
 
-	/* At this point, all the chains should have refcnt >= 1.
	 */
-	list_for_each_entry_safe(chain, tmp, &block->chain_list, list)
-		tcf_chain_put(chain);
+	if (block->refcnt == 0) {
+		/* At this point, all the chains should have refcnt >= 1. */
+		list_for_each_entry_safe(chain, tmp, &block->chain_list, list)
+			tcf_chain_put(chain);
 
-	/* Finally, put chain 0 and allow block to be freed. */
-	tcf_chain_put(tcf_block_chain_zero(block));
+		/* Finally, put chain 0 and allow block to be freed. */
+		tcf_chain_put(tcf_block_chain_zero(block));
+	}
 }
 EXPORT_SYMBOL(tcf_block_put_ext);
@@ -1313,12 +1400,40 @@ int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
 }
 EXPORT_SYMBOL(tc_setup_cb_call);
 
+static __net_init int tcf_net_init(struct net *net)
+{
+	struct tcf_net *tn = net_generic(net, tcf_net_id);
+
+	idr_init(&tn->idr);
+	return 0;
+}
+
+static void __net_exit tcf_net_exit(struct net *net)
+{
+	struct tcf_net *tn = net_generic(net, tcf_net_id);
+
+	idr_destroy(&tn->idr);
+}
+
+static struct pernet_operations tcf_net_ops = {
+	.init = tcf_net_init,
+	.exit = tcf_net_exit,
+	.id = &tcf_net_id,
+	.size = sizeof(struct tcf_net),
+};
+
 static int __init tc_filter_init(void)
 {
+	int err;
+
 	tc_filter_wq = alloc_ordered_workqueue("tc_filter_workqueue", 0);
 	if (!tc_filter_wq)
 		return -ENOMEM;
 
+	err = register_pernet_subsys(&tcf_net_ops);
+	if (err)
+		return err;
+
 	rtnl_register(PF_UNSPEC, RTM_NEWTFILTER, tc_ctl_tfilter, NULL, 0);
 	rtnl_register(PF_UNSPEC, RTM_DELTFILTER, tc_ctl_tfilter, NULL, 0);
 	rtnl_register(PF_UNSPEC, RTM_GETTFILTER, tc_ctl_tfilter,

From patchwork Tue Jan 16 13:33:37 2018
X-Patchwork-Submitter: Jiri Pirko
X-Patchwork-Id: 861529
X-Patchwork-Delegate: davem@davemloft.net
From: Jiri Pirko
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, jhs@mojatatu.com, xiyou.wangcong@gmail.com, mlxsw@mellanox.com, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com, f.fainelli@gmail.com, michael.chan@broadcom.com, ganeshgr@chelsio.com, saeedm@mellanox.com, matanb@mellanox.com, leonro@mellanox.com, idosch@mellanox.com, jakub.kicinski@netronome.com, simon.horman@netronome.com, pieter.jansenvanvuuren@netronome.com, john.hurley@netronome.com, alexander.h.duyck@intel.com, ogerlitz@mellanox.com, john.fastabend@gmail.com, daniel@iogearbox.net, dsahern@gmail.com, roopa@cumulusnetworks.com
Subject: [patch net-next v9 03/13] net: sched: avoid usage of tp->q in tcf_classify
Date: Tue, 16 Jan 2018 14:33:37 +0100
Message-Id: <20180116133347.2207-4-jiri@resnulli.us>
In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us>
References: <20180116133347.2207-1-jiri@resnulli.us>

From: Jiri Pirko

Use block index in the messages instead.

Signed-off-by: Jiri Pirko
---
 net/sched/cls_api.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 9b45950..31e91dc 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -672,8 +672,9 @@ int tcf_classify(struct sk_buff *skb, const struct tcf_proto *tp,
 #ifdef CONFIG_NET_CLS_ACT
 reset:
 	if (unlikely(limit++ >= max_reclassify_loop)) {
-		net_notice_ratelimited("%s: reclassify loop, rule prio %u, protocol %02x\n",
-				       tp->q->ops->id, tp->prio & 0xffff,
+		net_notice_ratelimited("%u: reclassify loop, rule prio %u, protocol %02x\n",
+				       tp->chain->block->index,
+				       tp->prio & 0xffff,
				       ntohs(tp->protocol));
 		return TC_ACT_SHOT;
 	}

From patchwork Tue Jan 16 13:33:38 2018
X-Patchwork-Submitter: Jiri Pirko
X-Patchwork-Id: 861528
X-Patchwork-Delegate: davem@davemloft.net
From: Jiri Pirko
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, jhs@mojatatu.com, xiyou.wangcong@gmail.com, mlxsw@mellanox.com, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com, f.fainelli@gmail.com, michael.chan@broadcom.com, ganeshgr@chelsio.com, saeedm@mellanox.com, matanb@mellanox.com, leonro@mellanox.com, idosch@mellanox.com, jakub.kicinski@netronome.com, simon.horman@netronome.com, pieter.jansenvanvuuren@netronome.com, john.hurley@netronome.com, alexander.h.duyck@intel.com, ogerlitz@mellanox.com, john.fastabend@gmail.com, daniel@iogearbox.net, dsahern@gmail.com, roopa@cumulusnetworks.com
Subject: [patch net-next v9 04/13] net: sched: introduce block mechanism to handle netif_keep_dst calls
Date: Tue, 16 Jan 2018 14:33:38 +0100
Message-Id: <20180116133347.2207-5-jiri@resnulli.us>
In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us>
References: <20180116133347.2207-1-jiri@resnulli.us>

From: Jiri Pirko

A couple of classifiers call netif_keep_dst directly on q->dev. That is
not possible for a shared block, where multiple qdiscs own the block. So
introduce an infrastructure that keeps track of the block owners in a
list, and use this list to implement a block variant of netif_keep_dst.

Signed-off-by: Jiri Pirko
---
v3->v4:
- rebased on top of the current net-next
v1->v2:
- fix binder_type to check "egress" as well as pointed out by Daniel
---
 include/net/pkt_cls.h     |  1 +
 include/net/sch_generic.h |  2 ++
 net/sched/cls_api.c       | 69 +++++++++++++++++++++++++++++++++++++++++++++++
 net/sched/cls_bpf.c       |  4 +--
 net/sched/cls_flow.c      |  2 +-
 net/sched/cls_route.c     |  2 +-
 6 files changed, 76 insertions(+), 4 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index c564638..1b42c5c 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -39,6 +39,7 @@ bool tcf_queue_work(struct work_struct *work);
 struct tcf_chain *tcf_chain_get(struct tcf_block *block, u32 chain_index,
				bool create);
 void tcf_chain_put(struct tcf_chain *chain);
+void tcf_block_netif_keep_dst(struct tcf_block *block);
 int tcf_block_get(struct tcf_block **p_block,
		  struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q,
		  struct netlink_ext_ack *extack);

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 5cc4d71..df97c3e 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -289,6 +289,8 @@ struct tcf_block {
 	struct net *net;
 	struct Qdisc *q;
 	struct list_head cb_list;
+	struct list_head owner_list;
+	bool keep_dst;
 };
 
 static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)

diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 31e91dc..11bdb89 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -375,6 +375,7 @@ static struct tcf_block *tcf_block_create(struct net *net, struct Qdisc *q,
 	}
 	INIT_LIST_HEAD(&block->chain_list);
 	INIT_LIST_HEAD(&block->cb_list);
+	INIT_LIST_HEAD(&block->owner_list);
 
	/* Create chain 0 by default, it has to be always present.
	 */
 	chain = tcf_chain_create(block, 0);
@@ -406,6 +407,65 @@ static struct tcf_chain *tcf_block_chain_zero(struct tcf_block *block)
 	return list_first_entry(&block->chain_list, struct tcf_chain, list);
 }
 
+struct tcf_block_owner_item {
+	struct list_head list;
+	struct Qdisc *q;
+	enum tcf_block_binder_type binder_type;
+};
+
+static void
+tcf_block_owner_netif_keep_dst(struct tcf_block *block,
+			       struct Qdisc *q,
+			       enum tcf_block_binder_type binder_type)
+{
+	if (block->keep_dst &&
+	    binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS &&
+	    binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
+		netif_keep_dst(qdisc_dev(q));
+}
+
+void tcf_block_netif_keep_dst(struct tcf_block *block)
+{
+	struct tcf_block_owner_item *item;
+
+	block->keep_dst = true;
+	list_for_each_entry(item, &block->owner_list, list)
+		tcf_block_owner_netif_keep_dst(block, item->q,
+					       item->binder_type);
+}
+EXPORT_SYMBOL(tcf_block_netif_keep_dst);
+
+static int tcf_block_owner_add(struct tcf_block *block,
+			       struct Qdisc *q,
+			       enum tcf_block_binder_type binder_type)
+{
+	struct tcf_block_owner_item *item;
+
+	item = kmalloc(sizeof(*item), GFP_KERNEL);
+	if (!item)
+		return -ENOMEM;
+	item->q = q;
+	item->binder_type = binder_type;
+	list_add(&item->list, &block->owner_list);
+	return 0;
+}
+
+static void tcf_block_owner_del(struct tcf_block *block,
+				struct Qdisc *q,
+				enum tcf_block_binder_type binder_type)
+{
+	struct tcf_block_owner_item *item;
+
+	list_for_each_entry(item, &block->owner_list, list) {
+		if (item->q == q && item->binder_type == binder_type) {
+			list_del(&item->list);
+			kfree(item);
+			return;
+		}
+	}
+	WARN_ON(1);
+}
+
 int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
		      struct tcf_block_ext_info *ei,
		      struct netlink_ext_ack *extack)
@@ -435,6 +495,12 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
 		}
 	}
 
+	err = tcf_block_owner_add(block, q, ei->binder_type);
+	if (err)
+		goto err_block_owner_add;
+
+	tcf_block_owner_netif_keep_dst(block, q, ei->binder_type);
+
 	err = tcf_chain_head_change_cb_add(tcf_block_chain_zero(block),
					   ei, extack);
 	if (err)
@@ -444,6 +510,8 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q,
 	return 0;
 
 err_chain_head_change_cb_add:
+	tcf_block_owner_del(block, q, ei->binder_type);
+err_block_owner_add:
 	if (created) {
 		if (tcf_block_shared(block))
 			tcf_block_remove(block, net);
@@ -489,6 +557,7 @@ void tcf_block_put_ext(struct tcf_block *block, struct Qdisc *q,
 	if (!block)
 		return;
 	tcf_chain_head_change_cb_del(tcf_block_chain_zero(block), ei);
+	tcf_block_owner_del(block, q, ei->binder_type);
 
 	if (--block->refcnt == 0) {
 		if (tcf_block_shared(block))

diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index 8d78e7f..d79cc50 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -392,8 +392,8 @@ static int cls_bpf_prog_from_efd(struct nlattr **tb, struct cls_bpf_prog *prog,
 	prog->bpf_name = name;
 	prog->filter = fp;
 
-	if (fp->dst_needed && !(tp->q->flags & TCQ_F_INGRESS))
-		netif_keep_dst(qdisc_dev(tp->q));
+	if (fp->dst_needed)
+		tcf_block_netif_keep_dst(tp->chain->block);
 
 	return 0;
 }

diff --git a/net/sched/cls_flow.c b/net/sched/cls_flow.c
index 25c2a88..28cd6fb 100644
--- a/net/sched/cls_flow.c
+++ b/net/sched/cls_flow.c
@@ -526,7 +526,7 @@ static int flow_change(struct net *net, struct sk_buff *in_skb,
 	timer_setup(&fnew->perturb_timer, flow_perturbation, TIMER_DEFERRABLE);
 
-	netif_keep_dst(qdisc_dev(tp->q));
+	tcf_block_netif_keep_dst(tp->chain->block);
 
 	if (tb[TCA_FLOW_KEYS]) {
 		fnew->keymask = keymask;

diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c
index ac9a5b8..a1f2b1b 100644
--- a/net/sched/cls_route.c
+++ b/net/sched/cls_route.c
@@ -527,7 +527,7 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
 			if (f->handle < f1->handle)
 				break;
 
-	netif_keep_dst(qdisc_dev(tp->q));
+	tcf_block_netif_keep_dst(tp->chain->block);
 	rcu_assign_pointer(f->next, f1);
 	rcu_assign_pointer(*fp, f);

From patchwork Tue Jan 16 13:33:39 2018 Content-Type:
text/plain; charset="utf-8"
From: Jiri Pirko
To: netdev@vger.kernel.org
Subject: [patch net-next v9 05/13] net: sched: remove classid and q fields from tcf_proto
Date: Tue, 16 Jan 2018 14:33:39 +0100
Message-Id: <20180116133347.2207-6-jiri@resnulli.us>
In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us>
Sender:
netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Jiri Pirko Both are no longer used, so remove them. Signed-off-by: Jiri Pirko --- include/net/sch_generic.h | 2 -- net/sched/cls_api.c | 7 ++----- 2 files changed, 2 insertions(+), 7 deletions(-) diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h index df97c3e..dba2214 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -255,8 +255,6 @@ struct tcf_proto { /* All the rest */ u32 prio; - u32 classid; - struct Qdisc *q; void *data; const struct tcf_proto_ops *ops; struct tcf_chain *chain; diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index 11bdb89..8a130e2 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -122,8 +122,7 @@ static inline u32 tcf_auto_prio(struct tcf_proto *tp) } static struct tcf_proto *tcf_proto_create(const char *kind, u32 protocol, - u32 prio, u32 parent, struct Qdisc *q, - struct tcf_chain *chain) + u32 prio, struct tcf_chain *chain) { struct tcf_proto *tp; int err; @@ -157,8 +156,6 @@ static struct tcf_proto *tcf_proto_create(const char *kind, u32 protocol, tp->classify = tp->ops->classify; tp->protocol = protocol; tp->prio = prio; - tp->classid = parent; - tp->q = q; tp->chain = chain; err = tp->ops->init(tp); @@ -1069,7 +1066,7 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n, prio = tcf_auto_prio(tcf_chain_tp_prev(&chain_info)); tp = tcf_proto_create(nla_data(tca[TCA_KIND]), - protocol, prio, parent, q, chain); + protocol, prio, chain); if (IS_ERR(tp)) { err = PTR_ERR(tp); goto errout; From patchwork Tue Jan 16 13:33:40 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Pirko X-Patchwork-Id: 861518 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) 
From: Jiri Pirko
To: netdev@vger.kernel.org
Subject: [patch net-next v9 06/13] net: sched: keep track of offloaded filters and check tc offload feature
Date: Tue, 16 Jan 2018 14:33:40 +0100
Message-Id: <20180116133347.2207-7-jiri@resnulli.us>
In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us>

During block bind, we need to check the tc offload feature. If it is disabled yet the block still contains offloaded filters, forbid the bind.
Also forbid registering a callback for a block that already contains offloaded filters, as playback is not supported now. For keeping track of offloaded filters there is a new counter introduced, along with a couple of helpers called from cls_* code. These helpers set and clear the TCA_CLS_FLAGS_IN_HW flag. Signed-off-by: Jiri Pirko --- v4->v5: - add tracking of binding of devs that are unable to offload and check that before block cbs call. v3->v4: - propagate netdev_ops->ndo_setup_tc error up to tcf_block_offload_bind caller v2->v3: - new patch --- include/net/sch_generic.h | 18 +++++++++++ net/sched/cls_api.c | 80 ++++++++++++++++++++++++++++++++++++++--------- net/sched/cls_bpf.c | 5 ++- net/sched/cls_flower.c | 3 +- net/sched/cls_matchall.c | 3 +- net/sched/cls_u32.c | 13 ++++---- 6 files changed, 99 insertions(+), 23 deletions(-) diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h index dba2214..ab86b64 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -289,8 +289,26 @@ struct tcf_block { struct list_head cb_list; struct list_head owner_list; bool keep_dst; + unsigned int offloadcnt; /* Number of offloaded filters */ + unsigned int nooffloaddevcnt; /* Number of devs unable to do offload */ }; +static inline void tcf_block_offload_inc(struct tcf_block *block, u32 *flags) +{ + if (*flags & TCA_CLS_FLAGS_IN_HW) + return; + *flags |= TCA_CLS_FLAGS_IN_HW; + block->offloadcnt++; +} + +static inline void tcf_block_offload_dec(struct tcf_block *block, u32 *flags) +{ + if (!(*flags & TCA_CLS_FLAGS_IN_HW)) + return; + *flags &= ~TCA_CLS_FLAGS_IN_HW; + block->offloadcnt--; +} + static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz) { struct qdisc_skb_cb *qcb; diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index 8a130e2..0ff8ae9 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -265,31 +265,66 @@ void tcf_chain_put(struct tcf_chain *chain) } EXPORT_SYMBOL(tcf_chain_put); -static void
tcf_block_offload_cmd(struct tcf_block *block, struct Qdisc *q, - struct tcf_block_ext_info *ei, - enum tc_block_command command) +static bool tcf_block_offload_in_use(struct tcf_block *block) +{ + return block->offloadcnt; +} + +static int tcf_block_offload_cmd(struct tcf_block *block, + struct net_device *dev, + struct tcf_block_ext_info *ei, + enum tc_block_command command) { - struct net_device *dev = q->dev_queue->dev; struct tc_block_offload bo = {}; - if (!dev->netdev_ops->ndo_setup_tc) - return; bo.command = command; bo.binder_type = ei->binder_type; bo.block = block; - dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo); + return dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo); } -static void tcf_block_offload_bind(struct tcf_block *block, struct Qdisc *q, - struct tcf_block_ext_info *ei) +static int tcf_block_offload_bind(struct tcf_block *block, struct Qdisc *q, + struct tcf_block_ext_info *ei) { - tcf_block_offload_cmd(block, q, ei, TC_BLOCK_BIND); + struct net_device *dev = q->dev_queue->dev; + int err; + + if (!dev->netdev_ops->ndo_setup_tc) + goto no_offload_dev_inc; + + /* If tc offload feature is disabled and the block we try to bind + * to already has some offloaded filters, forbid to bind. 
+ */ + if (!tc_can_offload(dev) && tcf_block_offload_in_use(block)) + return -EOPNOTSUPP; + + err = tcf_block_offload_cmd(block, dev, ei, TC_BLOCK_BIND); + if (err == -EOPNOTSUPP) + goto no_offload_dev_inc; + return err; + +no_offload_dev_inc: + if (tcf_block_offload_in_use(block)) + return -EOPNOTSUPP; + block->nooffloaddevcnt++; + return 0; } static void tcf_block_offload_unbind(struct tcf_block *block, struct Qdisc *q, struct tcf_block_ext_info *ei) { - tcf_block_offload_cmd(block, q, ei, TC_BLOCK_UNBIND); + struct net_device *dev = q->dev_queue->dev; + int err; + + if (!dev->netdev_ops->ndo_setup_tc) + goto no_offload_dev_dec; + err = tcf_block_offload_cmd(block, dev, ei, TC_BLOCK_UNBIND); + if (err == -EOPNOTSUPP) + goto no_offload_dev_dec; + return; + +no_offload_dev_dec: + WARN_ON(block->nooffloaddevcnt-- == 0); } static int @@ -502,10 +537,16 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q, ei, extack); if (err) goto err_chain_head_change_cb_add; - tcf_block_offload_bind(block, q, ei); + + err = tcf_block_offload_bind(block, q, ei); + if (err) + goto err_block_offload_bind; + *p_block = block; return 0; +err_block_offload_bind: + tcf_chain_head_change_cb_del(tcf_block_chain_zero(block), ei); err_chain_head_change_cb_add: tcf_block_owner_del(block, q, ei->binder_type); err_block_owner_add: @@ -637,9 +678,16 @@ struct tcf_block_cb *__tcf_block_cb_register(struct tcf_block *block, { struct tcf_block_cb *block_cb; + /* At this point, playback of previous block cb calls is not supported, + * so forbid to register to block which already has some offloaded + * filters present. 
+ */ + if (tcf_block_offload_in_use(block)) + return ERR_PTR(-EOPNOTSUPP); + block_cb = kzalloc(sizeof(*block_cb), GFP_KERNEL); if (!block_cb) - return NULL; + return ERR_PTR(-ENOMEM); block_cb->cb = cb; block_cb->cb_ident = cb_ident; block_cb->cb_priv = cb_priv; @@ -655,7 +703,7 @@ int tcf_block_cb_register(struct tcf_block *block, struct tcf_block_cb *block_cb; block_cb = __tcf_block_cb_register(block, cb, cb_ident, cb_priv); - return block_cb ? 0 : -ENOMEM; + return IS_ERR(block_cb) ? PTR_ERR(block_cb) : 0; } EXPORT_SYMBOL(tcf_block_cb_register); @@ -685,6 +733,10 @@ static int tcf_block_cb_call(struct tcf_block *block, enum tc_setup_type type, int ok_count = 0; int err; + /* Make sure all netdevs sharing this block are offload-capable. */ + if (block->nooffloaddevcnt && err_stop) + return -EOPNOTSUPP; + list_for_each_entry(block_cb, &block->cb_list, list) { err = block_cb->cb(type, type_data, block_cb->cb_priv); if (err) { diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c index d79cc50..cf72aef 100644 --- a/net/sched/cls_bpf.c +++ b/net/sched/cls_bpf.c @@ -167,13 +167,16 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog, cls_bpf.exts_integrated = obj->exts_integrated; cls_bpf.gen_flags = obj->gen_flags; + if (oldprog) + tcf_block_offload_dec(block, &oldprog->gen_flags); + err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSBPF, &cls_bpf, skip_sw); if (prog) { if (err < 0) { cls_bpf_offload_cmd(tp, oldprog, prog); return err; } else if (err > 0) { - prog->gen_flags |= TCA_CLS_FLAGS_IN_HW; + tcf_block_offload_inc(block, &prog->gen_flags); } } diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index 6132a73..f61df19 100644 --- a/net/sched/cls_flower.c +++ b/net/sched/cls_flower.c @@ -229,6 +229,7 @@ static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f) tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER, &cls_flower, false); + tcf_block_offload_dec(block, &f->flags); } static int 
fl_hw_replace_filter(struct tcf_proto *tp, @@ -256,7 +257,7 @@ static int fl_hw_replace_filter(struct tcf_proto *tp, fl_hw_destroy_filter(tp, f); return err; } else if (err > 0) { - f->flags |= TCA_CLS_FLAGS_IN_HW; + tcf_block_offload_inc(block, &f->flags); } if (skip_sw && !(f->flags & TCA_CLS_FLAGS_IN_HW)) diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c index 66d4e00..d0e57c8 100644 --- a/net/sched/cls_matchall.c +++ b/net/sched/cls_matchall.c @@ -81,6 +81,7 @@ static void mall_destroy_hw_filter(struct tcf_proto *tp, cls_mall.cookie = cookie; tc_setup_cb_call(block, NULL, TC_SETUP_CLSMATCHALL, &cls_mall, false); + tcf_block_offload_dec(block, &head->flags); } static int mall_replace_hw_filter(struct tcf_proto *tp, @@ -103,7 +104,7 @@ static int mall_replace_hw_filter(struct tcf_proto *tp, mall_destroy_hw_filter(tp, head, cookie); return err; } else if (err > 0) { - head->flags |= TCA_CLS_FLAGS_IN_HW; + tcf_block_offload_inc(block, &head->flags); } if (skip_sw && !(head->flags & TCA_CLS_FLAGS_IN_HW)) diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c index 507859c..020d328 100644 --- a/net/sched/cls_u32.c +++ b/net/sched/cls_u32.c @@ -529,16 +529,17 @@ static int u32_replace_hw_hnode(struct tcf_proto *tp, struct tc_u_hnode *h, return 0; } -static void u32_remove_hw_knode(struct tcf_proto *tp, u32 handle) +static void u32_remove_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n) { struct tcf_block *block = tp->chain->block; struct tc_cls_u32_offload cls_u32 = {}; tc_cls_common_offload_init(&cls_u32.common, tp); cls_u32.command = TC_CLSU32_DELETE_KNODE; - cls_u32.knode.handle = handle; + cls_u32.knode.handle = n->handle; tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, false); + tcf_block_offload_dec(block, &n->flags); } static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n, @@ -567,10 +568,10 @@ static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n, err = tc_setup_cb_call(block, NULL, 
TC_SETUP_CLSU32, &cls_u32, skip_sw); if (err < 0) { - u32_remove_hw_knode(tp, n->handle); + u32_remove_hw_knode(tp, n); return err; } else if (err > 0) { - n->flags |= TCA_CLS_FLAGS_IN_HW; + tcf_block_offload_inc(block, &n->flags); } if (skip_sw && !(n->flags & TCA_CLS_FLAGS_IN_HW)) @@ -589,7 +590,7 @@ static void u32_clear_hnode(struct tcf_proto *tp, struct tc_u_hnode *ht) RCU_INIT_POINTER(ht->ht[h], rtnl_dereference(n->next)); tcf_unbind_filter(tp, &n->res); - u32_remove_hw_knode(tp, n->handle); + u32_remove_hw_knode(tp, n); idr_remove_ext(&ht->handle_idr, n->handle); if (tcf_exts_get_net(&n->exts)) call_rcu(&n->rcu, u32_delete_key_freepf_rcu); @@ -682,7 +683,7 @@ static int u32_delete(struct tcf_proto *tp, void *arg, bool *last) goto out; if (TC_U32_KEY(ht->handle)) { - u32_remove_hw_knode(tp, ht->handle); + u32_remove_hw_knode(tp, (struct tc_u_knode *)ht); ret = u32_delete_key(tp, (struct tc_u_knode *)ht); goto out; } From patchwork Tue Jan 16 13:33:41 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Pirko X-Patchwork-Id: 861524 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=resnulli-us.20150623.gappssmtp.com header.i=@resnulli-us.20150623.gappssmtp.com header.b="oGyqm7rK"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3zLWR95szXz9s7n for ; Wed, 17 Jan 2018 00:34:29 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751966AbeAPNe2 (ORCPT ); Tue, 16 Jan 2018 08:34:28 -0500 Received: from mail-wr0-f194.google.com 
From: Jiri Pirko
To: netdev@vger.kernel.org
Subject: [patch net-next v9 07/13] net: sched: use block index as a handle instead of qdisc when block is shared
Date: Tue, 16 Jan 2018 14:33:41 +0100
Message-Id: <20180116133347.2207-8-jiri@resnulli.us>
In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us>

As the tcm_ifindex value TCM_IFINDEX_MAGIC_BLOCK is an invalid ifindex, reuse it to indicate that we work with a block instead of a qdisc. So if tcm_ifindex is set to TCM_IFINDEX_MAGIC_BLOCK, tcm_parent is used to carry block_index. If the block is shared between at least two qdiscs, it is forbidden to use the qdisc handle to add/delete filters; in that case, userspace has to pass block_index. Also, when filters are dumped for a block shared between at least two qdiscs, each filter is dumped with tcm_ifindex set to TCM_IFINDEX_MAGIC_BLOCK and tcm_parent set to block_index. That gives the user a clear indication that the filter belongs to a shared block and not only to the one qdisc under which it is dumped.
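The ifindex/parent overloading described above amounts to a small dispatch rule. A hypothetical userspace sketch (`struct tcmsg_model` is a simplified stand-in for `struct tcmsg`; the magic value matches the define this patch adds to rtnetlink.h):

```c
#include <stdbool.h>
#include <stdint.h>

#define TCM_IFINDEX_MAGIC_BLOCK 0xFFFFFFFFU

/* Minimal model of the request dispatch: tcm_ifindex selects between
 * the classic per-qdisc path and the shared-block path, in which case
 * tcm_parent is reinterpreted as the block index. */
struct tcmsg_model {
    uint32_t tcm_ifindex;
    uint32_t tcm_parent;   /* aliased as tcm_block_index for blocks */
};

bool msg_targets_block(const struct tcmsg_model *t, uint32_t *block_index)
{
    if (t->tcm_ifindex != TCM_IFINDEX_MAGIC_BLOCK)
        return false;  /* classic path: ifindex plus parent qdisc handle */
    *block_index = t->tcm_parent;
    return true;
}
```

Because 0xFFFFFFFF can never be a real ifindex, existing userspace that always fills in a device index is unaffected, while block-aware userspace gets a second addressing mode without any new tcmsg field.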
Suggested-by: David Ahern Signed-off-by: Jiri Pirko --- v7->v8: - added comment to ifindex block magic v6->v7: - changed extack message for block index handle as suggested by DaveA - added extack message when block index does not exist - the block ifindex magic is in define and change to 0xffffffff as suggested by Jamal v5->v6: - new patch --- include/uapi/linux/rtnetlink.h | 10 ++ net/sched/cls_api.c | 202 ++++++++++++++++++++++++----------------- 2 files changed, 128 insertions(+), 84 deletions(-) diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h index 843e29a..da878f2 100644 --- a/include/uapi/linux/rtnetlink.h +++ b/include/uapi/linux/rtnetlink.h @@ -541,9 +541,19 @@ struct tcmsg { int tcm_ifindex; __u32 tcm_handle; __u32 tcm_parent; +/* tcm_block_index is used instead of tcm_parent + * in case tcm_ifindex == TCM_IFINDEX_MAGIC_BLOCK + */ +#define tcm_block_index tcm_parent __u32 tcm_info; }; +/* For manipulation of filters in shared block, tcm_ifindex is set to + * TCM_IFINDEX_MAGIC_BLOCK, and tcm_parent is aliased to tcm_block_index + * which is the block index. 
+ */ +#define TCM_IFINDEX_MAGIC_BLOCK (0xFFFFFFFFU) + enum { TCA_UNSPEC, TCA_KIND, diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index 0ff8ae9..d687e58 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -865,8 +865,9 @@ static struct tcf_proto *tcf_chain_tp_find(struct tcf_chain *chain, } static int tcf_fill_node(struct net *net, struct sk_buff *skb, - struct tcf_proto *tp, struct Qdisc *q, u32 parent, - void *fh, u32 portid, u32 seq, u16 flags, int event) + struct tcf_proto *tp, struct tcf_block *block, + struct Qdisc *q, u32 parent, void *fh, + u32 portid, u32 seq, u16 flags, int event) { struct tcmsg *tcm; struct nlmsghdr *nlh; @@ -879,8 +880,13 @@ static int tcf_fill_node(struct net *net, struct sk_buff *skb, tcm->tcm_family = AF_UNSPEC; tcm->tcm__pad1 = 0; tcm->tcm__pad2 = 0; - tcm->tcm_ifindex = qdisc_dev(q)->ifindex; - tcm->tcm_parent = parent; + if (q) { + tcm->tcm_ifindex = qdisc_dev(q)->ifindex; + tcm->tcm_parent = parent; + } else { + tcm->tcm_ifindex = TCM_IFINDEX_MAGIC_BLOCK; + tcm->tcm_block_index = block->index; + } tcm->tcm_info = TC_H_MAKE(tp->prio, tp->protocol); if (nla_put_string(skb, TCA_KIND, tp->ops->kind)) goto nla_put_failure; @@ -903,8 +909,8 @@ static int tcf_fill_node(struct net *net, struct sk_buff *skb, static int tfilter_notify(struct net *net, struct sk_buff *oskb, struct nlmsghdr *n, struct tcf_proto *tp, - struct Qdisc *q, u32 parent, - void *fh, int event, bool unicast) + struct tcf_block *block, struct Qdisc *q, + u32 parent, void *fh, int event, bool unicast) { struct sk_buff *skb; u32 portid = oskb ? 
NETLINK_CB(oskb).portid : 0; @@ -913,8 +919,8 @@ static int tfilter_notify(struct net *net, struct sk_buff *oskb, if (!skb) return -ENOBUFS; - if (tcf_fill_node(net, skb, tp, q, parent, fh, portid, n->nlmsg_seq, - n->nlmsg_flags, event) <= 0) { + if (tcf_fill_node(net, skb, tp, block, q, parent, fh, portid, + n->nlmsg_seq, n->nlmsg_flags, event) <= 0) { kfree_skb(skb); return -EINVAL; } @@ -928,8 +934,8 @@ static int tfilter_notify(struct net *net, struct sk_buff *oskb, static int tfilter_del_notify(struct net *net, struct sk_buff *oskb, struct nlmsghdr *n, struct tcf_proto *tp, - struct Qdisc *q, u32 parent, - void *fh, bool unicast, bool *last) + struct tcf_block *block, struct Qdisc *q, + u32 parent, void *fh, bool unicast, bool *last) { struct sk_buff *skb; u32 portid = oskb ? NETLINK_CB(oskb).portid : 0; @@ -939,8 +945,8 @@ static int tfilter_del_notify(struct net *net, struct sk_buff *oskb, if (!skb) return -ENOBUFS; - if (tcf_fill_node(net, skb, tp, q, parent, fh, portid, n->nlmsg_seq, - n->nlmsg_flags, RTM_DELTFILTER) <= 0) { + if (tcf_fill_node(net, skb, tp, block, q, parent, fh, portid, + n->nlmsg_seq, n->nlmsg_flags, RTM_DELTFILTER) <= 0) { kfree_skb(skb); return -EINVAL; } @@ -959,15 +965,16 @@ static int tfilter_del_notify(struct net *net, struct sk_buff *oskb, } static void tfilter_notify_chain(struct net *net, struct sk_buff *oskb, - struct Qdisc *q, u32 parent, - struct nlmsghdr *n, + struct tcf_block *block, struct Qdisc *q, + u32 parent, struct nlmsghdr *n, struct tcf_chain *chain, int event) { struct tcf_proto *tp; for (tp = rtnl_dereference(chain->filter_chain); tp; tp = rtnl_dereference(tp->next)) - tfilter_notify(net, oskb, n, tp, q, parent, 0, event, false); + tfilter_notify(net, oskb, n, tp, block, + q, parent, 0, event, false); } /* Add/change/delete/get a filter node */ @@ -983,13 +990,11 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n, bool prio_allocate; u32 parent; u32 chain_index; - struct net_device *dev; - struct 
Qdisc *q; + struct Qdisc *q = NULL; struct tcf_chain_info chain_info; struct tcf_chain *chain = NULL; struct tcf_block *block; struct tcf_proto *tp; - const struct Qdisc_class_ops *cops; unsigned long cl; void *fh; int err; @@ -1036,41 +1041,58 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n, /* Find head of filter chain. */ - /* Find link */ - dev = __dev_get_by_index(net, t->tcm_ifindex); - if (dev == NULL) - return -ENODEV; - - /* Find qdisc */ - if (!parent) { - q = dev->qdisc; - parent = q->handle; + if (t->tcm_ifindex == TCM_IFINDEX_MAGIC_BLOCK) { + block = tcf_block_lookup(net, t->tcm_block_index); + if (!block) { + NL_SET_ERR_MSG(extack, "Block of given index was not found"); + err = -EINVAL; + goto errout; + } } else { - q = qdisc_lookup(dev, TC_H_MAJ(t->tcm_parent)); - if (q == NULL) - return -EINVAL; - } + const struct Qdisc_class_ops *cops; + struct net_device *dev; - /* Is it classful? */ - cops = q->ops->cl_ops; - if (!cops) - return -EINVAL; + /* Find link */ + dev = __dev_get_by_index(net, t->tcm_ifindex); + if (!dev) + return -ENODEV; - if (!cops->tcf_block) - return -EOPNOTSUPP; + /* Find qdisc */ + if (!parent) { + q = dev->qdisc; + parent = q->handle; + } else { + q = qdisc_lookup(dev, TC_H_MAJ(t->tcm_parent)); + if (!q) + return -EINVAL; + } - /* Do we search for filter, attached to class? */ - if (TC_H_MIN(parent)) { - cl = cops->find(q, parent); - if (cl == 0) - return -ENOENT; - } + /* Is it classful? */ + cops = q->ops->cl_ops; + if (!cops) + return -EINVAL; - /* And the last stroke */ - block = cops->tcf_block(q, cl, extack); - if (!block) { - err = -EINVAL; - goto errout; + if (!cops->tcf_block) + return -EOPNOTSUPP; + + /* Do we search for filter, attached to class? 
*/ + if (TC_H_MIN(parent)) { + cl = cops->find(q, parent); + if (cl == 0) + return -ENOENT; + } + + /* And the last stroke */ + block = cops->tcf_block(q, cl, extack); + if (!block) { + err = -EINVAL; + goto errout; + } + if (tcf_block_shared(block)) { + NL_SET_ERR_MSG(extack, "This filter block is shared. Please use the block index to manipulate the filters"); + err = -EOPNOTSUPP; + goto errout; + } } chain_index = tca[TCA_CHAIN] ? nla_get_u32(tca[TCA_CHAIN]) : 0; @@ -1086,7 +1108,7 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n, } if (n->nlmsg_type == RTM_DELTFILTER && prio == 0) { - tfilter_notify_chain(net, skb, q, parent, n, + tfilter_notify_chain(net, skb, block, q, parent, n, chain, RTM_DELTFILTER); tcf_chain_flush(chain); err = 0; @@ -1134,7 +1156,7 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n, if (!fh) { if (n->nlmsg_type == RTM_DELTFILTER && t->tcm_handle == 0) { tcf_chain_tp_remove(chain, &chain_info, tp); - tfilter_notify(net, skb, n, tp, q, parent, fh, + tfilter_notify(net, skb, n, tp, block, q, parent, fh, RTM_DELTFILTER, false); tcf_proto_destroy(tp); err = 0; @@ -1159,8 +1181,8 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n, } break; case RTM_DELTFILTER: - err = tfilter_del_notify(net, skb, n, tp, q, parent, - fh, false, &last); + err = tfilter_del_notify(net, skb, n, tp, block, + q, parent, fh, false, &last); if (err) goto errout; if (last) { @@ -1169,8 +1191,8 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n, } goto errout; case RTM_GETTFILTER: - err = tfilter_notify(net, skb, n, tp, q, parent, fh, - RTM_NEWTFILTER, true); + err = tfilter_notify(net, skb, n, tp, block, q, parent, + fh, RTM_NEWTFILTER, true); goto errout; default: err = -EINVAL; @@ -1183,7 +1205,7 @@ static int tc_ctl_tfilter(struct sk_buff *skb, struct nlmsghdr *n, if (err == 0) { if (tp_created) tcf_chain_tp_insert(chain, &chain_info, tp); - tfilter_notify(net, skb, n, tp, q, parent, fh, + 
tfilter_notify(net, skb, n, tp, block, q, parent, fh, RTM_NEWTFILTER, false); } else { if (tp_created) @@ -1203,6 +1225,7 @@ struct tcf_dump_args { struct tcf_walker w; struct sk_buff *skb; struct netlink_callback *cb; + struct tcf_block *block; struct Qdisc *q; u32 parent; }; @@ -1212,7 +1235,7 @@ static int tcf_node_dump(struct tcf_proto *tp, void *n, struct tcf_walker *arg) struct tcf_dump_args *a = (void *)arg; struct net *net = sock_net(a->skb->sk); - return tcf_fill_node(net, a->skb, tp, a->q, a->parent, + return tcf_fill_node(net, a->skb, tp, a->block, a->q, a->parent, n, NETLINK_CB(a->cb->skb).portid, a->cb->nlh->nlmsg_seq, NLM_F_MULTI, RTM_NEWTFILTER); @@ -1223,6 +1246,7 @@ static bool tcf_chain_dump(struct tcf_chain *chain, struct Qdisc *q, u32 parent, long index_start, long *p_index) { struct net *net = sock_net(skb->sk); + struct tcf_block *block = chain->block; struct tcmsg *tcm = nlmsg_data(cb->nlh); struct tcf_dump_args arg; struct tcf_proto *tp; @@ -1241,7 +1265,7 @@ static bool tcf_chain_dump(struct tcf_chain *chain, struct Qdisc *q, u32 parent, memset(&cb->args[1], 0, sizeof(cb->args) - sizeof(cb->args[0])); if (cb->args[1] == 0) { - if (tcf_fill_node(net, skb, tp, q, parent, 0, + if (tcf_fill_node(net, skb, tp, block, q, parent, 0, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, NLM_F_MULTI, RTM_NEWTFILTER) <= 0) @@ -1254,6 +1278,7 @@ static bool tcf_chain_dump(struct tcf_chain *chain, struct Qdisc *q, u32 parent, arg.w.fn = tcf_node_dump; arg.skb = skb; arg.cb = cb; + arg.block = block; arg.q = q; arg.parent = parent; arg.w.stop = 0; @@ -1272,13 +1297,10 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb) { struct net *net = sock_net(skb->sk); struct nlattr *tca[TCA_MAX + 1]; - struct net_device *dev; - struct Qdisc *q; + struct Qdisc *q = NULL; struct tcf_block *block; struct tcf_chain *chain; struct tcmsg *tcm = nlmsg_data(cb->nlh); - unsigned long cl = 0; - const struct Qdisc_class_ops *cops; long index_start; long 
index; u32 parent; @@ -1291,32 +1313,44 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb) if (err) return err; - dev = __dev_get_by_index(net, tcm->tcm_ifindex); - if (!dev) - return skb->len; - - parent = tcm->tcm_parent; - if (!parent) { - q = dev->qdisc; - parent = q->handle; + if (tcm->tcm_ifindex == TCM_IFINDEX_MAGIC_BLOCK) { + block = tcf_block_lookup(net, tcm->tcm_block_index); + if (!block) + goto out; } else { - q = qdisc_lookup(dev, TC_H_MAJ(tcm->tcm_parent)); - } - if (!q) - goto out; - cops = q->ops->cl_ops; - if (!cops) - goto out; - if (!cops->tcf_block) - goto out; - if (TC_H_MIN(tcm->tcm_parent)) { - cl = cops->find(q, tcm->tcm_parent); - if (cl == 0) + const struct Qdisc_class_ops *cops; + struct net_device *dev; + unsigned long cl = 0; + + dev = __dev_get_by_index(net, tcm->tcm_ifindex); + if (!dev) + return skb->len; + + parent = tcm->tcm_parent; + if (!parent) { + q = dev->qdisc; + parent = q->handle; + } else { + q = qdisc_lookup(dev, TC_H_MAJ(tcm->tcm_parent)); + } + if (!q) goto out; + cops = q->ops->cl_ops; + if (!cops) + goto out; + if (!cops->tcf_block) + goto out; + if (TC_H_MIN(tcm->tcm_parent)) { + cl = cops->find(q, tcm->tcm_parent); + if (cl == 0) + goto out; + } + block = cops->tcf_block(q, cl, NULL); + if (!block) + goto out; + if (tcf_block_shared(block)) + q = NULL; } - block = cops->tcf_block(q, cl, NULL); - if (!block) - goto out; index_start = cb->args[0]; index = 0; From patchwork Tue Jan 16 13:33:42 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Pirko X-Patchwork-Id: 861523 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) 
From: Jiri Pirko To: netdev@vger.kernel.org Subject: [patch net-next v9 08/13] net: sched: introduce ingress/egress block index attributes for qdisc Date: Tue, 16 Jan 2018 14:33:42 +0100 Message-Id: <20180116133347.2207-9-jiri@resnulli.us> In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us> From: Jiri Pirko Introduce two new attributes to be used for qdisc creation and dumping: one for the ingress block, one for the egress block. Introduce a set of ops that a qdisc which supports block sharing would implement. Passing block indexes in qdisc change is not supported yet; it is checked for and forbidden.
In future, these attributes are to be reused for specifying block indexes for classes as well. As of this moment however, it is not supported so a check is in place to forbid it. Suggested-by: Roopa Prabhu Signed-off-by: Jiri Pirko --- v7->v8: - new patch --- include/net/sch_generic.h | 7 +++++ include/uapi/linux/rtnetlink.h | 2 ++ net/sched/sch_api.c | 60 ++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 69 insertions(+) diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h index ab86b64..83602fe 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -204,6 +204,13 @@ struct Qdisc_ops { int (*dump)(struct Qdisc *, struct sk_buff *); int (*dump_stats)(struct Qdisc *, struct gnet_dump *); + void (*ingress_block_set)(struct Qdisc *sch, + u32 block_index); + void (*egress_block_set)(struct Qdisc *sch, + u32 block_index); + u32 (*ingress_block_get)(struct Qdisc *sch); + u32 (*egress_block_get)(struct Qdisc *sch); + struct module *owner; }; diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h index da878f2..9b15005 100644 --- a/include/uapi/linux/rtnetlink.h +++ b/include/uapi/linux/rtnetlink.h @@ -568,6 +568,8 @@ enum { TCA_DUMP_INVISIBLE, TCA_CHAIN, TCA_HW_OFFLOAD, + TCA_INGRESS_BLOCK, + TCA_EGRESS_BLOCK, __TCA_MAX }; diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c index 8a04c36..9e0e5bf 100644 --- a/net/sched/sch_api.c +++ b/net/sched/sch_api.c @@ -791,6 +791,7 @@ static int tc_fill_qdisc(struct sk_buff *skb, struct Qdisc *q, u32 clid, unsigned char *b = skb_tail_pointer(skb); struct gnet_dump d; struct qdisc_size_table *stab; + u32 block_index; __u32 qlen; cond_resched(); @@ -807,6 +808,18 @@ static int tc_fill_qdisc(struct sk_buff *skb, struct Qdisc *q, u32 clid, tcm->tcm_info = refcount_read(&q->refcnt); if (nla_put_string(skb, TCA_KIND, q->ops->id)) goto nla_put_failure; + if (q->ops->ingress_block_get) { + block_index = q->ops->ingress_block_get(q); + if (block_index && + nla_put_u32(skb, 
TCA_INGRESS_BLOCK, block_index)) + goto nla_put_failure; + } + if (q->ops->egress_block_get) { + block_index = q->ops->egress_block_get(q); + if (block_index && + nla_put_u32(skb, TCA_EGRESS_BLOCK, block_index)) + goto nla_put_failure; + } if (q->ops->dump && q->ops->dump(q, skb) < 0) goto nla_put_failure; if (nla_put_u8(skb, TCA_HW_OFFLOAD, !!(q->flags & TCQ_F_OFFLOADED))) @@ -994,6 +1007,40 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent, return err; } +static int qdisc_block_indexes_set(struct Qdisc *sch, struct nlattr **tca, + struct netlink_ext_ack *extack) +{ + u32 block_index; + + if (tca[TCA_INGRESS_BLOCK]) { + block_index = nla_get_u32(tca[TCA_INGRESS_BLOCK]); + + if (!block_index) { + NL_SET_ERR_MSG(extack, "Ingress block index cannot be 0"); + return -EINVAL; + } + if (!sch->ops->ingress_block_set) { + NL_SET_ERR_MSG(extack, "Ingress block sharing is not supported"); + return -EOPNOTSUPP; + } + sch->ops->ingress_block_set(sch, block_index); + } + if (tca[TCA_EGRESS_BLOCK]) { + block_index = nla_get_u32(tca[TCA_EGRESS_BLOCK]); + + if (!block_index) { + NL_SET_ERR_MSG(extack, "Egress block index cannot be 0"); + return -EINVAL; + } + if (!sch->ops->egress_block_set) { + NL_SET_ERR_MSG(extack, "Egress block sharing is not supported"); + return -EOPNOTSUPP; + } + sch->ops->egress_block_set(sch, block_index); + } + return 0; +} + /* lockdep annotation is needed for ingress; egress gets it only for name */ static struct lock_class_key qdisc_tx_lock; static struct lock_class_key qdisc_rx_lock; @@ -1088,6 +1135,10 @@ static struct Qdisc *qdisc_create(struct net_device *dev, netdev_info(dev, "Caught tx_queue_len zero misconfig\n"); } + err = qdisc_block_indexes_set(sch, tca, extack); + if (err) + goto err_out3; + if (ops->init) { err = ops->init(sch, tca[TCA_OPTIONS], extack); if (err != 0) @@ -1182,6 +1233,10 @@ static int qdisc_change(struct Qdisc *sch, struct nlattr **tca, NL_SET_ERR_MSG(extack, "Change operation not supported by 
specified qdisc"); return -EINVAL; } + if (tca[TCA_INGRESS_BLOCK] || tca[TCA_EGRESS_BLOCK]) { + NL_SET_ERR_MSG(extack, "Change of blocks is not supported"); + return -EOPNOTSUPP; + } err = sch->ops->change(sch, tca[TCA_OPTIONS], extack); if (err) return err; @@ -1907,6 +1962,11 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n, } } + if (tca[TCA_INGRESS_BLOCK] || tca[TCA_EGRESS_BLOCK]) { + NL_SET_ERR_MSG(extack, "Shared blocks are not supported for classes"); + return -EOPNOTSUPP; + } + new_cl = cl; err = -EOPNOTSUPP; if (cops->change) From patchwork Tue Jan 16 13:33:43 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Pirko X-Patchwork-Id: 861526 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=resnulli-us.20150623.gappssmtp.com header.i=@resnulli-us.20150623.gappssmtp.com header.b="VOv83qHV"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 3zLWRQ54hxz9s7n for ; Wed, 17 Jan 2018 00:34:42 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751925AbeAPNeZ (ORCPT ); Tue, 16 Jan 2018 08:34:25 -0500 Received: from mail-wm0-f68.google.com ([74.125.82.68]:37016 "EHLO mail-wm0-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751644AbeAPNd6 (ORCPT ); Tue, 16 Jan 2018 08:33:58 -0500 Received: by mail-wm0-f68.google.com with SMTP id v71so8705515wmv.2 for ; Tue, 16 Jan 2018 05:33:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=resnulli-us.20150623.gappssmtp.com; 
From: Jiri Pirko To: netdev@vger.kernel.org Subject: [patch net-next v9 09/13] net: sched: allow ingress and clsact qdiscs to share filter blocks Date: Tue, 16 Jan 2018 14:33:43 +0100 Message-Id: <20180116133347.2207-10-jiri@resnulli.us> In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us> From: Jiri Pirko Benefit from the previously introduced shared filter block infrastructure and allow ingress and clsact qdisc instances to share filter blocks. The block index is passed from userspace as a qdisc option.
Signed-off-by: Jiri Pirko --- v7->v8: - base this on the patch that introduces qdisc-generic block index attributes parsing/dumping v6->v7: - adjust to the core changes and check block index attributes for being 0 v3->v4: - rebased on top of the current net-next v2->v3: - removed "p_" prefix from block index function args --- net/sched/sch_ingress.c | 76 ++++++++++++++++++++++++++++++++++++++++--------- 1 file changed, 62 insertions(+), 14 deletions(-) diff --git a/net/sched/sch_ingress.c b/net/sched/sch_ingress.c index 7ca2be2..5f0bf8e 100644 --- a/net/sched/sch_ingress.c +++ b/net/sched/sch_ingress.c @@ -61,6 +61,20 @@ static void clsact_chain_head_change(struct tcf_proto *tp_head, void *priv) struct mini_Qdisc_pair *miniqp = priv; mini_qdisc_pair_swap(miniqp, tp_head); +}; + +static void ingress_ingress_block_set(struct Qdisc *sch, u32 block_index) +{ + struct ingress_sched_data *q = qdisc_priv(sch); + + q->block_info.block_index = block_index; +} + +static u32 ingress_ingress_block_get(struct Qdisc *sch) +{ + struct ingress_sched_data *q = qdisc_priv(sch); + + return q->block_info.block_index; } static int ingress_init(struct Qdisc *sch, struct nlattr *opt, @@ -120,13 +134,15 @@ static const struct Qdisc_class_ops ingress_class_ops = { }; static struct Qdisc_ops ingress_qdisc_ops __read_mostly = { - .cl_ops = &ingress_class_ops, - .id = "ingress", - .priv_size = sizeof(struct ingress_sched_data), - .init = ingress_init, - .destroy = ingress_destroy, - .dump = ingress_dump, - .owner = THIS_MODULE, + .cl_ops = &ingress_class_ops, + .id = "ingress", + .priv_size = sizeof(struct ingress_sched_data), + .init = ingress_init, + .destroy = ingress_destroy, + .dump = ingress_dump, + .ingress_block_set = ingress_ingress_block_set, + .ingress_block_get = ingress_ingress_block_get, + .owner = THIS_MODULE, }; struct clsact_sched_data { @@ -170,6 +186,34 @@ static struct tcf_block *clsact_tcf_block(struct Qdisc *sch, unsigned long cl, } } +static void 
clsact_ingress_block_set(struct Qdisc *sch, u32 block_index) +{ + struct clsact_sched_data *q = qdisc_priv(sch); + + q->ingress_block_info.block_index = block_index; +} + +static void clsact_egress_block_set(struct Qdisc *sch, u32 block_index) +{ + struct clsact_sched_data *q = qdisc_priv(sch); + + q->egress_block_info.block_index = block_index; +} + +static u32 clsact_ingress_block_get(struct Qdisc *sch) +{ + struct clsact_sched_data *q = qdisc_priv(sch); + + return q->ingress_block_info.block_index; +} + +static u32 clsact_egress_block_get(struct Qdisc *sch) +{ + struct clsact_sched_data *q = qdisc_priv(sch); + + return q->egress_block_info.block_index; +} + static int clsact_init(struct Qdisc *sch, struct nlattr *opt, struct netlink_ext_ack *extack) { @@ -228,13 +272,17 @@ static const struct Qdisc_class_ops clsact_class_ops = { }; static struct Qdisc_ops clsact_qdisc_ops __read_mostly = { - .cl_ops = &clsact_class_ops, - .id = "clsact", - .priv_size = sizeof(struct clsact_sched_data), - .init = clsact_init, - .destroy = clsact_destroy, - .dump = ingress_dump, - .owner = THIS_MODULE, + .cl_ops = &clsact_class_ops, + .id = "clsact", + .priv_size = sizeof(struct clsact_sched_data), + .init = clsact_init, + .destroy = clsact_destroy, + .dump = ingress_dump, + .ingress_block_set = clsact_ingress_block_set, + .egress_block_set = clsact_egress_block_set, + .ingress_block_get = clsact_ingress_block_get, + .egress_block_get = clsact_egress_block_get, + .owner = THIS_MODULE, }; static int __init ingress_module_init(void) From patchwork Tue Jan 16 13:33:44 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiri Pirko X-Patchwork-Id: 861520 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; 
helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) From: Jiri Pirko To: netdev@vger.kernel.org Subject: [patch net-next v9 10/13] mlxsw: spectrum_acl: Reshuffle code around mlxsw_sp_acl_ruleset_create/destroy Date: Tue, 16 Jan 2018 14:33:44 +0100 Message-Id: <20180116133347.2207-11-jiri@resnulli.us> In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us> From: Jiri Pirko In order to prepare for follow-up changes, make the bind/unbind helpers very simple. That requires moving the ht insertion/removal and the bind/unbind calls into mlxsw_sp_acl_ruleset_create/destroy.
Signed-off-by: Jiri Pirko --- drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c | 102 ++++++++++----------- 1 file changed, 46 insertions(+), 56 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c index 93dcd31..ead4cb8 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c @@ -118,8 +118,26 @@ struct mlxsw_sp_fid *mlxsw_sp_acl_dummy_fid(struct mlxsw_sp *mlxsw_sp) return mlxsw_sp->acl->dummy_fid; } +static int mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_ruleset *ruleset, + struct net_device *dev, bool ingress) +{ + const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; + + return ops->ruleset_bind(mlxsw_sp, ruleset->priv, dev, ingress); +} + +static void mlxsw_sp_acl_ruleset_unbind(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_ruleset *ruleset) +{ + const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; + + ops->ruleset_unbind(mlxsw_sp, ruleset->priv); +} + static struct mlxsw_sp_acl_ruleset * -mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp, +mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp, struct net_device *dev, + bool ingress, u32 chain_index, const struct mlxsw_sp_acl_profile_ops *ops) { struct mlxsw_sp_acl *acl = mlxsw_sp->acl; @@ -132,6 +150,9 @@ mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp, if (!ruleset) return ERR_PTR(-ENOMEM); ruleset->ref_count = 1; + ruleset->ht_key.dev = dev; + ruleset->ht_key.ingress = ingress; + ruleset->ht_key.chain_index = chain_index; ruleset->ht_key.ops = ops; err = rhashtable_init(&ruleset->rule_ht, &mlxsw_sp_acl_rule_ht_params); @@ -142,68 +163,49 @@ mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp, if (err) goto err_ops_ruleset_add; - return ruleset; - -err_ops_ruleset_add: - rhashtable_destroy(&ruleset->rule_ht); -err_rhashtable_init: - kfree(ruleset); - return ERR_PTR(err); -} - -static void 
mlxsw_sp_acl_ruleset_destroy(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_ruleset *ruleset) -{ - const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; - - ops->ruleset_del(mlxsw_sp, ruleset->priv); - rhashtable_destroy(&ruleset->rule_ht); - kfree(ruleset); -} - -static int mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_ruleset *ruleset, - struct net_device *dev, bool ingress, - u32 chain_index) -{ - const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; - struct mlxsw_sp_acl *acl = mlxsw_sp->acl; - int err; - - ruleset->ht_key.dev = dev; - ruleset->ht_key.ingress = ingress; - ruleset->ht_key.chain_index = chain_index; err = rhashtable_insert_fast(&acl->ruleset_ht, &ruleset->ht_node, mlxsw_sp_acl_ruleset_ht_params); if (err) - return err; - if (!ruleset->ht_key.chain_index) { + goto err_ht_insert; + + if (!chain_index) { /* We only need ruleset with chain index 0, the implicit one, * to be directly bound to device. The rest of the rulesets * are bound by "Goto action set". 
*/ - err = ops->ruleset_bind(mlxsw_sp, ruleset->priv, dev, ingress); + err = mlxsw_sp_acl_ruleset_bind(mlxsw_sp, ruleset, + dev, ingress); if (err) - goto err_ops_ruleset_bind; + goto err_ruleset_bind; } - return 0; -err_ops_ruleset_bind: + return ruleset; + +err_ruleset_bind: rhashtable_remove_fast(&acl->ruleset_ht, &ruleset->ht_node, mlxsw_sp_acl_ruleset_ht_params); - return err; +err_ht_insert: + ops->ruleset_del(mlxsw_sp, ruleset->priv); +err_ops_ruleset_add: + rhashtable_destroy(&ruleset->rule_ht); +err_rhashtable_init: + kfree(ruleset); + return ERR_PTR(err); } -static void mlxsw_sp_acl_ruleset_unbind(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_ruleset *ruleset) +static void mlxsw_sp_acl_ruleset_destroy(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_ruleset *ruleset) { const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; struct mlxsw_sp_acl *acl = mlxsw_sp->acl; if (!ruleset->ht_key.chain_index) - ops->ruleset_unbind(mlxsw_sp, ruleset->priv); + mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, ruleset); rhashtable_remove_fast(&acl->ruleset_ht, &ruleset->ht_node, mlxsw_sp_acl_ruleset_ht_params); + ops->ruleset_del(mlxsw_sp, ruleset->priv); + rhashtable_destroy(&ruleset->rule_ht); + kfree(ruleset); } static void mlxsw_sp_acl_ruleset_ref_inc(struct mlxsw_sp_acl_ruleset *ruleset) @@ -216,7 +218,6 @@ static void mlxsw_sp_acl_ruleset_ref_dec(struct mlxsw_sp *mlxsw_sp, { if (--ruleset->ref_count) return; - mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, ruleset); mlxsw_sp_acl_ruleset_destroy(mlxsw_sp, ruleset); } @@ -263,7 +264,6 @@ mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp, struct net_device *dev, const struct mlxsw_sp_acl_profile_ops *ops; struct mlxsw_sp_acl *acl = mlxsw_sp->acl; struct mlxsw_sp_acl_ruleset *ruleset; - int err; ops = acl->ops->profile_ops(mlxsw_sp, profile); if (!ops) @@ -275,18 +275,8 @@ mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp, struct net_device *dev, mlxsw_sp_acl_ruleset_ref_inc(ruleset); return ruleset; } - ruleset = 
mlxsw_sp_acl_ruleset_create(mlxsw_sp, ops); - if (IS_ERR(ruleset)) - return ruleset; - err = mlxsw_sp_acl_ruleset_bind(mlxsw_sp, ruleset, dev, - ingress, chain_index); - if (err) - goto err_ruleset_bind; - return ruleset; - -err_ruleset_bind: - mlxsw_sp_acl_ruleset_destroy(mlxsw_sp, ruleset); - return ERR_PTR(err); + return mlxsw_sp_acl_ruleset_create(mlxsw_sp, dev, ingress, + chain_index, ops); } void mlxsw_sp_acl_ruleset_put(struct mlxsw_sp *mlxsw_sp,

From patchwork Tue Jan 16 13:33:45 2018
X-Patchwork-Submitter: Jiri Pirko
X-Patchwork-Id: 861519
X-Patchwork-Delegate: davem@davemloft.net
From: Jiri Pirko
To: netdev@vger.kernel.org
Subject: [patch net-next v9 11/13] mlxsw: spectrum_acl: Don't store netdev and ingress for ruleset unbind
Date: Tue, 16 Jan 2018 14:33:45 +0100
Message-Id: <20180116133347.2207-12-jiri@resnulli.us>
In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us>

From: Jiri Pirko

Instead, pass netdev and ingress flag to ruleset unbind op.
Signed-off-by: Jiri Pirko --- drivers/net/ethernet/mellanox/mlxsw/spectrum.h | 3 +- drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c | 9 ++++-- .../ethernet/mellanox/mlxsw/spectrum_acl_tcam.c | 33 +++++++++++----------- 3 files changed, 24 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h index 16f8fbd..0e827c5 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h @@ -456,7 +456,8 @@ struct mlxsw_sp_acl_profile_ops { void (*ruleset_del)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv); int (*ruleset_bind)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv, struct net_device *dev, bool ingress); - void (*ruleset_unbind)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv); + void (*ruleset_unbind)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv, + struct net_device *dev, bool ingress); u16 (*ruleset_group_id)(void *ruleset_priv); size_t rule_priv_size; int (*rule_add)(struct mlxsw_sp *mlxsw_sp, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c index ead4cb8..7fb41a4 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c @@ -128,11 +128,12 @@ static int mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp, } static void mlxsw_sp_acl_ruleset_unbind(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_ruleset *ruleset) + struct mlxsw_sp_acl_ruleset *ruleset, + struct net_device *dev, bool ingress) { const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; - ops->ruleset_unbind(mlxsw_sp, ruleset->priv); + ops->ruleset_unbind(mlxsw_sp, ruleset->priv, dev, ingress); } static struct mlxsw_sp_acl_ruleset * @@ -200,7 +201,9 @@ static void mlxsw_sp_acl_ruleset_destroy(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl *acl = mlxsw_sp->acl; if (!ruleset->ht_key.chain_index) - mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, 
ruleset); + mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, ruleset, + ruleset->ht_key.dev, + ruleset->ht_key.ingress); rhashtable_remove_fast(&acl->ruleset_ht, &ruleset->ht_node, mlxsw_sp_acl_ruleset_ht_params); ops->ruleset_del(mlxsw_sp, ruleset->priv); diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c index 7e8284b..50b2f9a 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c @@ -154,10 +154,6 @@ struct mlxsw_sp_acl_tcam_group { struct list_head region_list; unsigned int region_count; struct rhashtable chunk_ht; - struct { - u16 local_port; - bool ingress; - } bound; struct mlxsw_sp_acl_tcam_group_ops *ops; const struct mlxsw_sp_acl_tcam_pattern *patterns; unsigned int patterns_count; @@ -271,26 +267,28 @@ mlxsw_sp_acl_tcam_group_bind(struct mlxsw_sp *mlxsw_sp, return -EINVAL; mlxsw_sp_port = netdev_priv(dev); - group->bound.local_port = mlxsw_sp_port->local_port; - group->bound.ingress = ingress; - mlxsw_reg_ppbt_pack(ppbt_pl, - group->bound.ingress ? MLXSW_REG_PXBT_E_IACL : - MLXSW_REG_PXBT_E_EACL, - MLXSW_REG_PXBT_OP_BIND, group->bound.local_port, + mlxsw_reg_ppbt_pack(ppbt_pl, ingress ? MLXSW_REG_PXBT_E_IACL : + MLXSW_REG_PXBT_E_EACL, + MLXSW_REG_PXBT_OP_BIND, mlxsw_sp_port->local_port, group->id); return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ppbt), ppbt_pl); } static void mlxsw_sp_acl_tcam_group_unbind(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_tcam_group *group) + struct mlxsw_sp_acl_tcam_group *group, + struct net_device *dev, bool ingress) { + struct mlxsw_sp_port *mlxsw_sp_port; char ppbt_pl[MLXSW_REG_PPBT_LEN]; - mlxsw_reg_ppbt_pack(ppbt_pl, - group->bound.ingress ? MLXSW_REG_PXBT_E_IACL : - MLXSW_REG_PXBT_E_EACL, - MLXSW_REG_PXBT_OP_UNBIND, group->bound.local_port, + if (WARN_ON(!mlxsw_sp_port_dev_check(dev))) + return; + + mlxsw_sp_port = netdev_priv(dev); + mlxsw_reg_ppbt_pack(ppbt_pl, ingress ? 
MLXSW_REG_PXBT_E_IACL : + MLXSW_REG_PXBT_E_EACL, + MLXSW_REG_PXBT_OP_UNBIND, mlxsw_sp_port->local_port, group->id); mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ppbt), ppbt_pl); } @@ -1066,11 +1064,12 @@ mlxsw_sp_acl_tcam_flower_ruleset_bind(struct mlxsw_sp *mlxsw_sp, static void mlxsw_sp_acl_tcam_flower_ruleset_unbind(struct mlxsw_sp *mlxsw_sp, - void *ruleset_priv) + void *ruleset_priv, + struct net_device *dev, bool ingress) { struct mlxsw_sp_acl_tcam_flower_ruleset *ruleset = ruleset_priv; - mlxsw_sp_acl_tcam_group_unbind(mlxsw_sp, &ruleset->group); + mlxsw_sp_acl_tcam_group_unbind(mlxsw_sp, &ruleset->group, dev, ingress); } static u16

From patchwork Tue Jan 16 13:33:46 2018
X-Patchwork-Submitter: Jiri Pirko
X-Patchwork-Id: 861521
X-Patchwork-Delegate: davem@davemloft.net
From: Jiri Pirko
To: netdev@vger.kernel.org
Subject: [patch net-next v9 12/13] mlxsw: spectrum_acl: Implement TC block sharing
Date: Tue, 16 Jan 2018 14:33:46 +0100
Message-Id: <20180116133347.2207-13-jiri@resnulli.us>
In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us>

From: Jiri Pirko

Benefit from the prepared TC and in-driver ACL infrastructure and introduce block sharing offload. For that, a new struct "block" is introduced in spectrum_acl in order to hold a list of specific block-port bindings.
Signed-off-by: Jiri Pirko --- v7->v8: - rebased on top of current net-next v2->v3: - add tc offload feature handling v1->v2: - new patch --- drivers/net/ethernet/mellanox/mlxsw/spectrum.c | 182 +++++++++++++--- drivers/net/ethernet/mellanox/mlxsw/spectrum.h | 36 +++- drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c | 237 ++++++++++++++++++--- .../net/ethernet/mellanox/mlxsw/spectrum_flower.c | 41 ++-- 4 files changed, 401 insertions(+), 95 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c index f78bfe3..9e78d32 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c @@ -1747,72 +1747,186 @@ static int mlxsw_sp_setup_tc_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port, } static int -mlxsw_sp_setup_tc_cls_flower(struct mlxsw_sp_port *mlxsw_sp_port, - struct tc_cls_flower_offload *f, - bool ingress) +mlxsw_sp_setup_tc_cls_flower(struct mlxsw_sp_acl_block *acl_block, + struct tc_cls_flower_offload *f) { + struct mlxsw_sp *mlxsw_sp = mlxsw_sp_acl_block_mlxsw_sp(acl_block); + switch (f->command) { case TC_CLSFLOWER_REPLACE: - return mlxsw_sp_flower_replace(mlxsw_sp_port, ingress, f); + return mlxsw_sp_flower_replace(mlxsw_sp, acl_block, f); case TC_CLSFLOWER_DESTROY: - mlxsw_sp_flower_destroy(mlxsw_sp_port, ingress, f); + mlxsw_sp_flower_destroy(mlxsw_sp, acl_block, f); return 0; case TC_CLSFLOWER_STATS: - return mlxsw_sp_flower_stats(mlxsw_sp_port, ingress, f); + return mlxsw_sp_flower_stats(mlxsw_sp, acl_block, f); default: return -EOPNOTSUPP; } } -static int mlxsw_sp_setup_tc_block_cb(enum tc_setup_type type, void *type_data, - void *cb_priv, bool ingress) +static int mlxsw_sp_setup_tc_block_cb_matchall(enum tc_setup_type type, + void *type_data, + void *cb_priv, bool ingress) { struct mlxsw_sp_port *mlxsw_sp_port = cb_priv; - if (!tc_can_offload(mlxsw_sp_port->dev)) - return -EOPNOTSUPP; - switch (type) { case TC_SETUP_CLSMATCHALL: + if 
(!tc_can_offload(mlxsw_sp_port->dev)) + return -EOPNOTSUPP; + return mlxsw_sp_setup_tc_cls_matchall(mlxsw_sp_port, type_data, ingress); case TC_SETUP_CLSFLOWER: - return mlxsw_sp_setup_tc_cls_flower(mlxsw_sp_port, type_data, - ingress); + return 0; default: return -EOPNOTSUPP; } } -static int mlxsw_sp_setup_tc_block_cb_ig(enum tc_setup_type type, - void *type_data, void *cb_priv) +static int mlxsw_sp_setup_tc_block_cb_matchall_ig(enum tc_setup_type type, + void *type_data, + void *cb_priv) { - return mlxsw_sp_setup_tc_block_cb(type, type_data, cb_priv, true); + return mlxsw_sp_setup_tc_block_cb_matchall(type, type_data, + cb_priv, true); } -static int mlxsw_sp_setup_tc_block_cb_eg(enum tc_setup_type type, - void *type_data, void *cb_priv) +static int mlxsw_sp_setup_tc_block_cb_matchall_eg(enum tc_setup_type type, + void *type_data, + void *cb_priv) { - return mlxsw_sp_setup_tc_block_cb(type, type_data, cb_priv, false); + return mlxsw_sp_setup_tc_block_cb_matchall(type, type_data, + cb_priv, false); +} + +static int mlxsw_sp_setup_tc_block_cb_flower(enum tc_setup_type type, + void *type_data, void *cb_priv) +{ + struct mlxsw_sp_acl_block *acl_block = cb_priv; + + switch (type) { + case TC_SETUP_CLSMATCHALL: + return 0; + case TC_SETUP_CLSFLOWER: + if (mlxsw_sp_acl_block_disabled(acl_block)) + return -EOPNOTSUPP; + + return mlxsw_sp_setup_tc_cls_flower(acl_block, type_data); + default: + return -EOPNOTSUPP; + } +} + +static int +mlxsw_sp_setup_tc_block_flower_bind(struct mlxsw_sp_port *mlxsw_sp_port, + struct tcf_block *block, bool ingress) +{ + struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp; + struct mlxsw_sp_acl_block *acl_block; + struct tcf_block_cb *block_cb; + int err; + + block_cb = tcf_block_cb_lookup(block, mlxsw_sp_setup_tc_block_cb_flower, + mlxsw_sp); + if (!block_cb) { + acl_block = mlxsw_sp_acl_block_create(mlxsw_sp, block->net); + if (!acl_block) + return -ENOMEM; + block_cb = __tcf_block_cb_register(block, + mlxsw_sp_setup_tc_block_cb_flower, + 
mlxsw_sp, acl_block); + if (IS_ERR(block_cb)) { + err = PTR_ERR(block_cb); + goto err_cb_register; + } + } else { + acl_block = tcf_block_cb_priv(block_cb); + } + tcf_block_cb_incref(block_cb); + err = mlxsw_sp_acl_block_bind(mlxsw_sp, acl_block, + mlxsw_sp_port, ingress); + if (err) + goto err_block_bind; + + if (ingress) + mlxsw_sp_port->ing_acl_block = acl_block; + else + mlxsw_sp_port->eg_acl_block = acl_block; + + return 0; + +err_block_bind: + if (!tcf_block_cb_decref(block_cb)) { + __tcf_block_cb_unregister(block_cb); +err_cb_register: + mlxsw_sp_acl_block_destroy(acl_block); + } + return err; +} + +static void +mlxsw_sp_setup_tc_block_flower_unbind(struct mlxsw_sp_port *mlxsw_sp_port, + struct tcf_block *block, bool ingress) +{ + struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp; + struct mlxsw_sp_acl_block *acl_block; + struct tcf_block_cb *block_cb; + int err; + + block_cb = tcf_block_cb_lookup(block, mlxsw_sp_setup_tc_block_cb_flower, + mlxsw_sp); + if (!block_cb) + return; + + if (ingress) + mlxsw_sp_port->ing_acl_block = NULL; + else + mlxsw_sp_port->eg_acl_block = NULL; + + acl_block = tcf_block_cb_priv(block_cb); + err = mlxsw_sp_acl_block_unbind(mlxsw_sp, acl_block, + mlxsw_sp_port, ingress); + if (!err && !tcf_block_cb_decref(block_cb)) { + __tcf_block_cb_unregister(block_cb); + mlxsw_sp_acl_block_destroy(acl_block); + } } static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port, struct tc_block_offload *f) { tc_setup_cb_t *cb; + bool ingress; + int err; - if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) - cb = mlxsw_sp_setup_tc_block_cb_ig; - else if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS) - cb = mlxsw_sp_setup_tc_block_cb_eg; - else + if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) { + cb = mlxsw_sp_setup_tc_block_cb_matchall_ig; + ingress = true; + } else if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS) { + cb = mlxsw_sp_setup_tc_block_cb_matchall_eg; + ingress = false; + } else { 
return -EOPNOTSUPP; + } switch (f->command) { case TC_BLOCK_BIND: - return tcf_block_cb_register(f->block, cb, mlxsw_sp_port, - mlxsw_sp_port); + err = tcf_block_cb_register(f->block, cb, mlxsw_sp_port, + mlxsw_sp_port); + if (err) + return err; + err = mlxsw_sp_setup_tc_block_flower_bind(mlxsw_sp_port, + f->block, ingress); + if (err) { + tcf_block_cb_unregister(f->block, cb, mlxsw_sp_port); + return err; + } + return 0; case TC_BLOCK_UNBIND: + mlxsw_sp_setup_tc_block_flower_unbind(mlxsw_sp_port, + f->block, ingress); tcf_block_cb_unregister(f->block, cb, mlxsw_sp_port); return 0; default: @@ -1842,10 +1956,18 @@ static int mlxsw_sp_feature_hw_tc(struct net_device *dev, bool enable) { struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev); - if (!enable && (mlxsw_sp_port->acl_rule_count || - !list_empty(&mlxsw_sp_port->mall_tc_list))) { - netdev_err(dev, "Active offloaded tc filters, can't turn hw_tc_offload off\n"); - return -EINVAL; + if (!enable) { + if (mlxsw_sp_acl_block_rule_count(mlxsw_sp_port->ing_acl_block) || + mlxsw_sp_acl_block_rule_count(mlxsw_sp_port->eg_acl_block) || + !list_empty(&mlxsw_sp_port->mall_tc_list)) { + netdev_err(dev, "Active offloaded tc filters, can't turn hw_tc_offload off\n"); + return -EINVAL; + } + mlxsw_sp_acl_block_disable_inc(mlxsw_sp_port->ing_acl_block); + mlxsw_sp_acl_block_disable_inc(mlxsw_sp_port->eg_acl_block); + } else { + mlxsw_sp_acl_block_disable_dec(mlxsw_sp_port->ing_acl_block); + mlxsw_sp_acl_block_disable_dec(mlxsw_sp_port->eg_acl_block); } return 0; } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h index 0e827c5..f7d10bb 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h @@ -248,6 +248,8 @@ struct mlxsw_sp_port { struct list_head vlans_list; struct mlxsw_sp_qdisc *root_qdisc; unsigned acl_rule_count; + struct mlxsw_sp_acl_block *ing_acl_block; + struct mlxsw_sp_acl_block *eg_acl_block; }; 
static inline bool @@ -477,17 +479,34 @@ struct mlxsw_sp_acl_ops { enum mlxsw_sp_acl_profile profile); }; +struct mlxsw_sp_acl_block; struct mlxsw_sp_acl_ruleset; /* spectrum_acl.c */ struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl); +struct mlxsw_sp *mlxsw_sp_acl_block_mlxsw_sp(struct mlxsw_sp_acl_block *block); +unsigned int mlxsw_sp_acl_block_rule_count(struct mlxsw_sp_acl_block *block); +void mlxsw_sp_acl_block_disable_inc(struct mlxsw_sp_acl_block *block); +void mlxsw_sp_acl_block_disable_dec(struct mlxsw_sp_acl_block *block); +bool mlxsw_sp_acl_block_disabled(struct mlxsw_sp_acl_block *block); +struct mlxsw_sp_acl_block *mlxsw_sp_acl_block_create(struct mlxsw_sp *mlxsw_sp, + struct net *net); +void mlxsw_sp_acl_block_destroy(struct mlxsw_sp_acl_block *block); +int mlxsw_sp_acl_block_bind(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, + struct mlxsw_sp_port *mlxsw_sp_port, + bool ingress); +int mlxsw_sp_acl_block_unbind(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, + struct mlxsw_sp_port *mlxsw_sp_port, + bool ingress); struct mlxsw_sp_acl_ruleset * -mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp, struct net_device *dev, - bool ingress, u32 chain_index, +mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, u32 chain_index, enum mlxsw_sp_acl_profile profile); struct mlxsw_sp_acl_ruleset * -mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp, struct net_device *dev, - bool ingress, u32 chain_index, +mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, u32 chain_index, enum mlxsw_sp_acl_profile profile); void mlxsw_sp_acl_ruleset_put(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_acl_ruleset *ruleset); @@ -554,11 +573,14 @@ void mlxsw_sp_acl_fini(struct mlxsw_sp *mlxsw_sp); extern const struct mlxsw_sp_acl_ops mlxsw_sp_acl_tcam_ops; /* spectrum_flower.c */ -int mlxsw_sp_flower_replace(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress, +int 
mlxsw_sp_flower_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, struct tc_cls_flower_offload *f); -void mlxsw_sp_flower_destroy(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress, +void mlxsw_sp_flower_destroy(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, struct tc_cls_flower_offload *f); -int mlxsw_sp_flower_stats(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress, +int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, struct tc_cls_flower_offload *f); /* spectrum_qdisc.c */ diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c index 7fb41a4..f98bca9 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c @@ -39,6 +39,7 @@ #include #include #include +#include #include #include "reg.h" @@ -70,9 +71,23 @@ struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl) return acl->afk; } -struct mlxsw_sp_acl_ruleset_ht_key { - struct net_device *dev; /* dev this ruleset is bound to */ +struct mlxsw_sp_acl_block_binding { + struct list_head list; + struct net_device *dev; + struct mlxsw_sp_port *mlxsw_sp_port; bool ingress; +}; + +struct mlxsw_sp_acl_block { + struct list_head binding_list; + struct mlxsw_sp_acl_ruleset *ruleset_zero; + struct mlxsw_sp *mlxsw_sp; + unsigned int rule_count; + unsigned int disable_count; +}; + +struct mlxsw_sp_acl_ruleset_ht_key { + struct mlxsw_sp_acl_block *block; u32 chain_index; const struct mlxsw_sp_acl_profile_ops *ops; }; @@ -118,27 +133,185 @@ struct mlxsw_sp_fid *mlxsw_sp_acl_dummy_fid(struct mlxsw_sp *mlxsw_sp) return mlxsw_sp->acl->dummy_fid; } -static int mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_ruleset *ruleset, - struct net_device *dev, bool ingress) +struct mlxsw_sp *mlxsw_sp_acl_block_mlxsw_sp(struct mlxsw_sp_acl_block *block) +{ + return block->mlxsw_sp; +} + +unsigned int 
mlxsw_sp_acl_block_rule_count(struct mlxsw_sp_acl_block *block) +{ + return block ? block->rule_count : 0; +} + +void mlxsw_sp_acl_block_disable_inc(struct mlxsw_sp_acl_block *block) +{ + if (block) + block->disable_count++; +} + +void mlxsw_sp_acl_block_disable_dec(struct mlxsw_sp_acl_block *block) +{ + if (block) + block->disable_count--; +} + +bool mlxsw_sp_acl_block_disabled(struct mlxsw_sp_acl_block *block) { + return block->disable_count; +} + +static int +mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, + struct mlxsw_sp_acl_block_binding *binding) +{ + struct mlxsw_sp_acl_ruleset *ruleset = block->ruleset_zero; const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; - return ops->ruleset_bind(mlxsw_sp, ruleset->priv, dev, ingress); + return ops->ruleset_bind(mlxsw_sp, ruleset->priv, + binding->mlxsw_sp_port->dev, binding->ingress); } -static void mlxsw_sp_acl_ruleset_unbind(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_acl_ruleset *ruleset, - struct net_device *dev, bool ingress) +static void +mlxsw_sp_acl_ruleset_unbind(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, + struct mlxsw_sp_acl_block_binding *binding) { + struct mlxsw_sp_acl_ruleset *ruleset = block->ruleset_zero; const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops; - ops->ruleset_unbind(mlxsw_sp, ruleset->priv, dev, ingress); + ops->ruleset_unbind(mlxsw_sp, ruleset->priv, + binding->mlxsw_sp_port->dev, binding->ingress); +} + +static bool mlxsw_sp_acl_ruleset_block_bound(struct mlxsw_sp_acl_block *block) +{ + return block->ruleset_zero; +} + +static int +mlxsw_sp_acl_ruleset_block_bind(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_ruleset *ruleset, + struct mlxsw_sp_acl_block *block) +{ + struct mlxsw_sp_acl_block_binding *binding; + int err; + + block->ruleset_zero = ruleset; + list_for_each_entry(binding, &block->binding_list, list) { + err = mlxsw_sp_acl_ruleset_bind(mlxsw_sp, block, binding); + if (err) + goto 
rollback; + } + return 0; + +rollback: + list_for_each_entry_continue_reverse(binding, &block->binding_list, + list) + mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, block, binding); + block->ruleset_zero = NULL; + + return err; +} + +static void +mlxsw_sp_acl_ruleset_block_unbind(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_ruleset *ruleset, + struct mlxsw_sp_acl_block *block) +{ + struct mlxsw_sp_acl_block_binding *binding; + + list_for_each_entry(binding, &block->binding_list, list) + mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, block, binding); + block->ruleset_zero = NULL; +} + +struct mlxsw_sp_acl_block *mlxsw_sp_acl_block_create(struct mlxsw_sp *mlxsw_sp, + struct net *net) +{ + struct mlxsw_sp_acl_block *block; + + block = kzalloc(sizeof(*block), GFP_KERNEL); + if (!block) + return NULL; + INIT_LIST_HEAD(&block->binding_list); + block->mlxsw_sp = mlxsw_sp; + return block; +} + +void mlxsw_sp_acl_block_destroy(struct mlxsw_sp_acl_block *block) +{ + WARN_ON(!list_empty(&block->binding_list)); + kfree(block); +} + +static struct mlxsw_sp_acl_block_binding * +mlxsw_sp_acl_block_lookup(struct mlxsw_sp_acl_block *block, + struct mlxsw_sp_port *mlxsw_sp_port, bool ingress) +{ + struct mlxsw_sp_acl_block_binding *binding; + + list_for_each_entry(binding, &block->binding_list, list) + if (binding->mlxsw_sp_port == mlxsw_sp_port && + binding->ingress == ingress) + return binding; + return NULL; +} + +int mlxsw_sp_acl_block_bind(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_acl_block *block, + struct mlxsw_sp_port *mlxsw_sp_port, + bool ingress) +{ + struct mlxsw_sp_acl_block_binding *binding; + int err; + + if (WARN_ON(mlxsw_sp_acl_block_lookup(block, mlxsw_sp_port, ingress))) + return -EEXIST; + + binding = kzalloc(sizeof(*binding), GFP_KERNEL); + if (!binding) + return -ENOMEM; + binding->mlxsw_sp_port = mlxsw_sp_port; + binding->ingress = ingress; + + if (mlxsw_sp_acl_ruleset_block_bound(block)) { + err = mlxsw_sp_acl_ruleset_bind(mlxsw_sp, block, binding); + if (err) + goto 
err_ruleset_bind;
+	}
+
+	list_add(&binding->list, &block->binding_list);
+	return 0;
+
+err_ruleset_bind:
+	kfree(binding);
+	return err;
+}
+
+int mlxsw_sp_acl_block_unbind(struct mlxsw_sp *mlxsw_sp,
+			      struct mlxsw_sp_acl_block *block,
+			      struct mlxsw_sp_port *mlxsw_sp_port,
+			      bool ingress)
+{
+	struct mlxsw_sp_acl_block_binding *binding;
+
+	binding = mlxsw_sp_acl_block_lookup(block, mlxsw_sp_port, ingress);
+	if (!binding)
+		return -ENOENT;
+
+	list_del(&binding->list);
+
+	if (mlxsw_sp_acl_ruleset_block_bound(block))
+		mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, block, binding);
+
+	kfree(binding);
+	return 0;
 }
 
 static struct mlxsw_sp_acl_ruleset *
-mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
-			    bool ingress, u32 chain_index,
+mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block, u32 chain_index,
 			    const struct mlxsw_sp_acl_profile_ops *ops)
 {
 	struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
@@ -151,8 +324,7 @@ mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
 	if (!ruleset)
 		return ERR_PTR(-ENOMEM);
 	ruleset->ref_count = 1;
-	ruleset->ht_key.dev = dev;
-	ruleset->ht_key.ingress = ingress;
+	ruleset->ht_key.block = block;
 	ruleset->ht_key.chain_index = chain_index;
 	ruleset->ht_key.ops = ops;
@@ -174,8 +346,7 @@ mlxsw_sp_acl_ruleset_create(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
 	 * to be directly bound to device. The rest of the rulesets
 	 * are bound by "Goto action set".
 	 */
-	err = mlxsw_sp_acl_ruleset_bind(mlxsw_sp, ruleset,
-					dev, ingress);
+	err = mlxsw_sp_acl_ruleset_block_bind(mlxsw_sp, ruleset, block);
 	if (err)
 		goto err_ruleset_bind;
 	}
@@ -198,12 +369,12 @@ static void mlxsw_sp_acl_ruleset_destroy(struct mlxsw_sp *mlxsw_sp,
 					 struct mlxsw_sp_acl_ruleset *ruleset)
 {
 	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
+	struct mlxsw_sp_acl_block *block = ruleset->ht_key.block;
+	u32 chain_index = ruleset->ht_key.chain_index;
 	struct mlxsw_sp_acl *acl = mlxsw_sp->acl;
 
-	if (!ruleset->ht_key.chain_index)
-		mlxsw_sp_acl_ruleset_unbind(mlxsw_sp, ruleset,
-					    ruleset->ht_key.dev,
-					    ruleset->ht_key.ingress);
+	if (!chain_index)
+		mlxsw_sp_acl_ruleset_block_unbind(mlxsw_sp, ruleset, block);
 	rhashtable_remove_fast(&acl->ruleset_ht, &ruleset->ht_node,
 			       mlxsw_sp_acl_ruleset_ht_params);
 	ops->ruleset_del(mlxsw_sp, ruleset->priv);
@@ -225,15 +396,14 @@ static void mlxsw_sp_acl_ruleset_ref_dec(struct mlxsw_sp *mlxsw_sp,
 }
 
 static struct mlxsw_sp_acl_ruleset *
-__mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp_acl *acl, struct net_device *dev,
-			      bool ingress, u32 chain_index,
+__mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp_acl *acl,
+			      struct mlxsw_sp_acl_block *block, u32 chain_index,
 			      const struct mlxsw_sp_acl_profile_ops *ops)
 {
 	struct mlxsw_sp_acl_ruleset_ht_key ht_key;
 
 	memset(&ht_key, 0, sizeof(ht_key));
-	ht_key.dev = dev;
-	ht_key.ingress = ingress;
+	ht_key.block = block;
 	ht_key.chain_index = chain_index;
 	ht_key.ops = ops;
 	return rhashtable_lookup_fast(&acl->ruleset_ht, &ht_key,
@@ -241,8 +411,8 @@ __mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp_acl *acl, struct net_device *dev,
 }
 
 struct mlxsw_sp_acl_ruleset *
-mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
-			    bool ingress, u32 chain_index,
+mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block, u32 chain_index,
 			    enum mlxsw_sp_acl_profile profile)
 {
 	const struct mlxsw_sp_acl_profile_ops *ops;
@@ -252,16 +422,15 @@ mlxsw_sp_acl_ruleset_lookup(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
 	ops = acl->ops->profile_ops(mlxsw_sp, profile);
 	if (!ops)
 		return ERR_PTR(-EINVAL);
-	ruleset = __mlxsw_sp_acl_ruleset_lookup(acl, dev, ingress,
-						chain_index, ops);
+	ruleset = __mlxsw_sp_acl_ruleset_lookup(acl, block, chain_index, ops);
 	if (!ruleset)
 		return ERR_PTR(-ENOENT);
 	return ruleset;
 }
 
 struct mlxsw_sp_acl_ruleset *
-mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
-			 bool ingress, u32 chain_index,
+mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp,
+			 struct mlxsw_sp_acl_block *block, u32 chain_index,
 			 enum mlxsw_sp_acl_profile profile)
 {
 	const struct mlxsw_sp_acl_profile_ops *ops;
@@ -272,14 +441,12 @@ mlxsw_sp_acl_ruleset_get(struct mlxsw_sp *mlxsw_sp, struct net_device *dev,
 	if (!ops)
 		return ERR_PTR(-EINVAL);
 
-	ruleset = __mlxsw_sp_acl_ruleset_lookup(acl, dev, ingress,
-						chain_index, ops);
+	ruleset = __mlxsw_sp_acl_ruleset_lookup(acl, block, chain_index, ops);
 	if (ruleset) {
 		mlxsw_sp_acl_ruleset_ref_inc(ruleset);
 		return ruleset;
 	}
-	return mlxsw_sp_acl_ruleset_create(mlxsw_sp, dev, ingress,
-					   chain_index, ops);
+	return mlxsw_sp_acl_ruleset_create(mlxsw_sp, block, chain_index, ops);
 }
 
 void mlxsw_sp_acl_ruleset_put(struct mlxsw_sp *mlxsw_sp,
@@ -528,6 +695,7 @@ int mlxsw_sp_acl_rule_add(struct mlxsw_sp *mlxsw_sp,
 		goto err_rhashtable_insert;
 
 	list_add_tail(&rule->list, &mlxsw_sp->acl->rules);
+	ruleset->ht_key.block->rule_count++;
 	return 0;
 
 err_rhashtable_insert:
@@ -541,6 +709,7 @@ void mlxsw_sp_acl_rule_del(struct mlxsw_sp *mlxsw_sp,
 	struct mlxsw_sp_acl_ruleset *ruleset = rule->ruleset;
 	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
 
+	ruleset->ht_key.block->rule_count--;
 	list_del(&rule->list);
 	rhashtable_remove_fast(&ruleset->rule_ht, &rule->ht_node,
 			       mlxsw_sp_acl_rule_ht_params);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
index 42e8a36..cf7b97d4 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -45,7 +46,7 @@
 #include "core_acl_flex_keys.h"
 
 static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
-					 struct net_device *dev, bool ingress,
+					 struct mlxsw_sp_acl_block *block,
 					 struct mlxsw_sp_acl_rule_info *rulei,
 					 struct tcf_exts *exts)
 {
@@ -80,8 +81,7 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
 			struct mlxsw_sp_acl_ruleset *ruleset;
 			u16 group_id;
 
-			ruleset = mlxsw_sp_acl_ruleset_lookup(mlxsw_sp, dev,
-							      ingress,
+			ruleset = mlxsw_sp_acl_ruleset_lookup(mlxsw_sp, block,
 							      chain_index,
 							      MLXSW_SP_ACL_PROFILE_FLOWER);
 			if (IS_ERR(ruleset))
@@ -104,9 +104,6 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
 				return err;
 
 			out_dev = tcf_mirred_dev(a);
-			if (out_dev == dev)
-				out_dev = NULL;
-
 			err = mlxsw_sp_acl_rulei_act_fwd(mlxsw_sp, rulei,
 							 out_dev);
 			if (err)
@@ -265,7 +262,7 @@ static int mlxsw_sp_flower_parse_ip(struct mlxsw_sp *mlxsw_sp,
 }
 
 static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
-				 struct net_device *dev, bool ingress,
+				 struct mlxsw_sp_acl_block *block,
 				 struct mlxsw_sp_acl_rule_info *rulei,
 				 struct tc_cls_flower_offload *f)
 {
@@ -383,21 +380,19 @@ static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
 	if (err)
 		return err;
 
-	return mlxsw_sp_flower_parse_actions(mlxsw_sp, dev, ingress,
-					     rulei, f->exts);
+	return mlxsw_sp_flower_parse_actions(mlxsw_sp, block, rulei, f->exts);
 }
 
-int mlxsw_sp_flower_replace(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
+int mlxsw_sp_flower_replace(struct mlxsw_sp *mlxsw_sp,
+			    struct mlxsw_sp_acl_block *block,
 			    struct tc_cls_flower_offload *f)
 {
-	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
-	struct net_device *dev = mlxsw_sp_port->dev;
 	struct mlxsw_sp_acl_rule_info *rulei;
 	struct mlxsw_sp_acl_ruleset *ruleset;
 	struct mlxsw_sp_acl_rule *rule;
 	int err;
 
-	ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, dev, ingress,
+	ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
 					   f->common.chain_index,
 					   MLXSW_SP_ACL_PROFILE_FLOWER);
 	if (IS_ERR(ruleset))
@@ -410,7 +405,7 @@ int mlxsw_sp_flower_replace(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
 	}
 
 	rulei = mlxsw_sp_acl_rule_rulei(rule);
-	err = mlxsw_sp_flower_parse(mlxsw_sp, dev, ingress, rulei, f);
+	err = mlxsw_sp_flower_parse(mlxsw_sp, block, rulei, f);
 	if (err)
 		goto err_flower_parse;
@@ -423,7 +418,6 @@ int mlxsw_sp_flower_replace(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
 		goto err_rule_add;
 
 	mlxsw_sp_acl_ruleset_put(mlxsw_sp, ruleset);
-	mlxsw_sp_port->acl_rule_count++;
 	return 0;
 
 err_rule_add:
@@ -435,15 +429,15 @@ int mlxsw_sp_flower_replace(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
 	return err;
 }
 
-void mlxsw_sp_flower_destroy(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
+void mlxsw_sp_flower_destroy(struct mlxsw_sp *mlxsw_sp,
+			     struct mlxsw_sp_acl_block *block,
 			     struct tc_cls_flower_offload *f)
 {
-	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
 	struct mlxsw_sp_acl_ruleset *ruleset;
 	struct mlxsw_sp_acl_rule *rule;
 
-	ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, mlxsw_sp_port->dev,
-					   ingress, f->common.chain_index,
+	ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
+					   f->common.chain_index,
 					   MLXSW_SP_ACL_PROFILE_FLOWER);
 	if (IS_ERR(ruleset))
 		return;
@@ -455,13 +449,12 @@ void mlxsw_sp_flower_destroy(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
 	}
 
 	mlxsw_sp_acl_ruleset_put(mlxsw_sp, ruleset);
-	mlxsw_sp_port->acl_rule_count--;
 }
 
-int mlxsw_sp_flower_stats(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
+int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
+			  struct mlxsw_sp_acl_block *block,
 			  struct tc_cls_flower_offload *f)
 {
-	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
 	struct mlxsw_sp_acl_ruleset *ruleset;
 	struct mlxsw_sp_acl_rule *rule;
 	u64 packets;
@@ -469,8 +462,8 @@ int mlxsw_sp_flower_stats(struct mlxsw_sp_port *mlxsw_sp_port, bool ingress,
 	u64 bytes;
 	int err;
 
-	ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, mlxsw_sp_port->dev,
-					   ingress, f->common.chain_index,
+	ruleset = mlxsw_sp_acl_ruleset_get(mlxsw_sp, block,
+					   f->common.chain_index,
 					   MLXSW_SP_ACL_PROFILE_FLOWER);
 	if (WARN_ON(IS_ERR(ruleset)))
 		return -EINVAL;

From patchwork Tue Jan 16 13:33:47 2018
X-Patchwork-Submitter: Jiri Pirko
X-Patchwork-Id: 861522
X-Patchwork-Delegate: davem@davemloft.net
From: Jiri Pirko
To: netdev@vger.kernel.org
Subject: [patch net-next v9 13/13] mlxsw: spectrum_acl: Pass mlxsw_sp_port down to ruleset bind/unbind ops
Date: Tue, 16 Jan 2018 14:33:47 +0100
Message-Id: <20180116133347.2207-14-jiri@resnulli.us>
In-Reply-To: <20180116133347.2207-1-jiri@resnulli.us>
References: <20180116133347.2207-1-jiri@resnulli.us>

From: Jiri Pirko

No need to convert from mlxsw_sp_port to net_device and back again.
Signed-off-by: Jiri Pirko
---
 drivers/net/ethernet/mellanox/mlxsw/spectrum.h     |  6 +++--
 drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c |  4 ++--
 .../ethernet/mellanox/mlxsw/spectrum_acl_tcam.c    | 27 +++++++++-------------
 3 files changed, 17 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index f7d10bb..98f6132 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -457,9 +457,11 @@ struct mlxsw_sp_acl_profile_ops {
 			   void *priv, void *ruleset_priv);
 	void (*ruleset_del)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv);
 	int (*ruleset_bind)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv,
-			    struct net_device *dev, bool ingress);
+			    struct mlxsw_sp_port *mlxsw_sp_port,
+			    bool ingress);
 	void (*ruleset_unbind)(struct mlxsw_sp *mlxsw_sp, void *ruleset_priv,
-			       struct net_device *dev, bool ingress);
+			       struct mlxsw_sp_port *mlxsw_sp_port,
+			       bool ingress);
 	u16 (*ruleset_group_id)(void *ruleset_priv);
 	size_t rule_priv_size;
 	int (*rule_add)(struct mlxsw_sp *mlxsw_sp,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
index f98bca9..9439bfa 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
@@ -169,7 +169,7 @@ mlxsw_sp_acl_ruleset_bind(struct mlxsw_sp *mlxsw_sp,
 	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
 
 	return ops->ruleset_bind(mlxsw_sp, ruleset->priv,
-				 binding->mlxsw_sp_port->dev, binding->ingress);
+				 binding->mlxsw_sp_port, binding->ingress);
 }
 
 static void
@@ -181,7 +181,7 @@ mlxsw_sp_acl_ruleset_unbind(struct mlxsw_sp *mlxsw_sp,
 	const struct mlxsw_sp_acl_profile_ops *ops = ruleset->ht_key.ops;
 
 	ops->ruleset_unbind(mlxsw_sp, ruleset->priv,
-			    binding->mlxsw_sp_port->dev, binding->ingress);
+			    binding->mlxsw_sp_port, binding->ingress);
 }
 
 static bool mlxsw_sp_acl_ruleset_block_bound(struct mlxsw_sp_acl_block *block)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
index 50b2f9a..c6e180c 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
@@ -258,15 +258,11 @@ static void mlxsw_sp_acl_tcam_group_del(struct mlxsw_sp *mlxsw_sp,
 static int
 mlxsw_sp_acl_tcam_group_bind(struct mlxsw_sp *mlxsw_sp,
 			     struct mlxsw_sp_acl_tcam_group *group,
-			     struct net_device *dev, bool ingress)
+			     struct mlxsw_sp_port *mlxsw_sp_port,
+			     bool ingress)
 {
-	struct mlxsw_sp_port *mlxsw_sp_port;
 	char ppbt_pl[MLXSW_REG_PPBT_LEN];
 
-	if (!mlxsw_sp_port_dev_check(dev))
-		return -EINVAL;
-
-	mlxsw_sp_port = netdev_priv(dev);
 	mlxsw_reg_ppbt_pack(ppbt_pl, ingress ? MLXSW_REG_PXBT_E_IACL :
 			    MLXSW_REG_PXBT_E_EACL,
 			    MLXSW_REG_PXBT_OP_BIND, mlxsw_sp_port->local_port,
@@ -277,15 +273,11 @@ mlxsw_sp_acl_tcam_group_bind(struct mlxsw_sp *mlxsw_sp,
 
 static void
 mlxsw_sp_acl_tcam_group_unbind(struct mlxsw_sp *mlxsw_sp,
 			       struct mlxsw_sp_acl_tcam_group *group,
-			       struct net_device *dev, bool ingress)
+			       struct mlxsw_sp_port *mlxsw_sp_port,
+			       bool ingress)
 {
-	struct mlxsw_sp_port *mlxsw_sp_port;
 	char ppbt_pl[MLXSW_REG_PPBT_LEN];
 
-	if (WARN_ON(!mlxsw_sp_port_dev_check(dev)))
-		return;
-
-	mlxsw_sp_port = netdev_priv(dev);
 	mlxsw_reg_ppbt_pack(ppbt_pl, ingress ? MLXSW_REG_PXBT_E_IACL :
 			    MLXSW_REG_PXBT_E_EACL,
 			    MLXSW_REG_PXBT_OP_UNBIND, mlxsw_sp_port->local_port,
@@ -1054,22 +1046,25 @@ mlxsw_sp_acl_tcam_flower_ruleset_del(struct mlxsw_sp *mlxsw_sp,
 
 static int
 mlxsw_sp_acl_tcam_flower_ruleset_bind(struct mlxsw_sp *mlxsw_sp,
 				      void *ruleset_priv,
-				      struct net_device *dev, bool ingress)
+				      struct mlxsw_sp_port *mlxsw_sp_port,
+				      bool ingress)
 {
 	struct mlxsw_sp_acl_tcam_flower_ruleset *ruleset = ruleset_priv;
 
 	return mlxsw_sp_acl_tcam_group_bind(mlxsw_sp, &ruleset->group,
-					    dev, ingress);
+					    mlxsw_sp_port, ingress);
 }
 
 static void
 mlxsw_sp_acl_tcam_flower_ruleset_unbind(struct mlxsw_sp *mlxsw_sp,
 					void *ruleset_priv,
-					struct net_device *dev, bool ingress)
+					struct mlxsw_sp_port *mlxsw_sp_port,
+					bool ingress)
 {
 	struct mlxsw_sp_acl_tcam_flower_ruleset *ruleset = ruleset_priv;
 
-	mlxsw_sp_acl_tcam_group_unbind(mlxsw_sp, &ruleset->group, dev, ingress);
+	mlxsw_sp_acl_tcam_group_unbind(mlxsw_sp, &ruleset->group,
+				       mlxsw_sp_port, ingress);
 }
 
 static u16