From patchwork Tue Jan 9 14:07:27 2018
X-Patchwork-Submitter: Jiri Pirko
X-Patchwork-Id: 857523
X-Patchwork-Delegate: davem@davemloft.net
From: Jiri Pirko
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, jhs@mojatatu.com, xiyou.wangcong@gmail.com,
    mlxsw@mellanox.com, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com,
    f.fainelli@gmail.com, michael.chan@broadcom.com, ganeshgr@chelsio.com,
    saeedm@mellanox.com, matanb@mellanox.com, leonro@mellanox.com,
    idosch@mellanox.com, jakub.kicinski@netronome.com,
    simon.horman@netronome.com, pieter.jansenvanvuuren@netronome.com,
    john.hurley@netronome.com, alexander.h.duyck@intel.com,
    ogerlitz@mellanox.com, john.fastabend@gmail.com, daniel@iogearbox.net,
    dsahern@gmail.com
Subject: [patch net-next v7 09/13] net: sched: allow ingress and clsact qdiscs to share filter blocks
Date: Tue, 9 Jan 2018 15:07:27 +0100
Message-Id: <20180109140731.1022-10-jiri@resnulli.us>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20180109140731.1022-1-jiri@resnulli.us>
References: <20180109140731.1022-1-jiri@resnulli.us>

From: Jiri Pirko

Benefit from the previously introduced shared filter block infrastructure
and allow ingress and clsact qdisc instances to share filter blocks. The
block index is passed from userspace as a qdisc option.

Signed-off-by: Jiri Pirko
---
v6->v7:
- adjust to the core changes and check block index attributes for being 0
v3->v4:
- rebased on top of the current net-next
v2->v3:
- removed "p_" prefix from block index function args
---
 include/uapi/linux/pkt_sched.h |  11 +++++
 net/sched/sch_ingress.c        | 101 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 111 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 37b5096..8cc554a 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -934,4 +934,15 @@ enum {
 
 #define TCA_CBS_MAX (__TCA_CBS_MAX - 1)
 
+/* Ingress/clsact */
+
+enum {
+        TCA_CLSACT_UNSPEC,
+        TCA_CLSACT_INGRESS_BLOCK,
+        TCA_CLSACT_EGRESS_BLOCK,
+        __TCA_CLSACT_MAX
+};
+
+#define TCA_CLSACT_MAX (__TCA_CLSACT_MAX - 1)
+
 #endif
diff --git a/net/sched/sch_ingress.c b/net/sched/sch_ingress.c
index 7ca2be2..1bef8d4 100644
--- a/net/sched/sch_ingress.c
+++ b/net/sched/sch_ingress.c
@@ -61,6 +61,32 @@ static void clsact_chain_head_change(struct tcf_proto *tp_head, void *priv)
         struct mini_Qdisc_pair *miniqp = priv;
 
         mini_qdisc_pair_swap(miniqp, tp_head);
+};
+
+static const struct nla_policy ingress_policy[TCA_CLSACT_MAX + 1] = {
+        [TCA_CLSACT_INGRESS_BLOCK]      = { .type = NLA_U32 },
+};
+
+static int ingress_parse_opt(struct nlattr *opt, struct tcf_block_ext_info *ei,
+                             struct netlink_ext_ack *extack)
+{
+        struct nlattr *tb[TCA_CLSACT_MAX + 1];
+        int err;
+
+        if (!opt)
+                return 0;
+        err = nla_parse_nested(tb, TCA_CLSACT_MAX, opt, ingress_policy, NULL);
+        if (err)
+                return err;
+
+        if (tb[TCA_CLSACT_INGRESS_BLOCK]) {
+                ei->block_index = nla_get_u32(tb[TCA_CLSACT_INGRESS_BLOCK]);
+                if (!ei->block_index) {
+                        NL_SET_ERR_MSG(extack, "Block index cannot be 0");
+                        return -EINVAL;
+                }
+        }
+        return 0;
 }
 
 static int ingress_init(struct Qdisc *sch, struct nlattr *opt,
@@ -74,6 +100,10 @@ static int ingress_init(struct Qdisc *sch, struct nlattr *opt,
 
         mini_qdisc_pair_init(&q->miniqp, sch, &dev->miniq_ingress);
 
+        err = ingress_parse_opt(opt, &q->block_info, extack);
+        if (err)
+                return err;
+
         q->block_info.binder_type = TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
         q->block_info.chain_head_change = clsact_chain_head_change;
         q->block_info.chain_head_change_priv = &q->miniqp;
@@ -97,11 +127,15 @@ static void ingress_destroy(struct Qdisc *sch)
 
 static int ingress_dump(struct Qdisc *sch, struct sk_buff *skb)
 {
+        struct ingress_sched_data *q = qdisc_priv(sch);
         struct nlattr *nest;
 
         nest = nla_nest_start(skb, TCA_OPTIONS);
         if (nest == NULL)
                 goto nla_put_failure;
+        if (q->block->index &&
+            nla_put_u32(skb, TCA_CLSACT_INGRESS_BLOCK, q->block->index))
+                goto nla_put_failure;
 
         return nla_nest_end(skb, nest);
 
@@ -170,6 +204,44 @@ static struct tcf_block *clsact_tcf_block(struct Qdisc *sch, unsigned long cl,
         }
 }
 
+static const struct nla_policy clsact_policy[TCA_CLSACT_MAX + 1] = {
+        [TCA_CLSACT_INGRESS_BLOCK]      = { .type = NLA_U32 },
+        [TCA_CLSACT_EGRESS_BLOCK]       = { .type = NLA_U32 },
+};
+
+static int clsact_parse_opt(struct nlattr *opt,
+                            struct tcf_block_ext_info *ei_ingress,
+                            struct tcf_block_ext_info *ei_egress,
+                            struct netlink_ext_ack *extack)
+{
+        struct nlattr *tb[TCA_CLSACT_MAX + 1];
+        int err;
+
+        if (!opt)
+                return 0;
+        err = nla_parse_nested(tb, TCA_CLSACT_MAX, opt, clsact_policy, NULL);
+        if (err)
+                return err;
+
+        if (tb[TCA_CLSACT_INGRESS_BLOCK]) {
+                ei_ingress->block_index =
+                        nla_get_u32(tb[TCA_CLSACT_INGRESS_BLOCK]);
+                if (!ei_ingress->block_index) {
+                        NL_SET_ERR_MSG(extack, "Block index cannot be 0");
+                        return -EINVAL;
+                }
+        }
+        if (tb[TCA_CLSACT_EGRESS_BLOCK]) {
+                ei_egress->block_index =
+                        nla_get_u32(tb[TCA_CLSACT_EGRESS_BLOCK]);
+                if (!ei_egress->block_index) {
+                        NL_SET_ERR_MSG(extack, "Block index cannot be 0");
+                        return -EINVAL;
+                }
+        }
+        return 0;
+}
+
 static int clsact_init(struct Qdisc *sch, struct nlattr *opt,
                        struct netlink_ext_ack *extack)
 {
@@ -182,6 +254,11 @@ static int clsact_init(struct Qdisc *sch, struct nlattr *opt,
 
         mini_qdisc_pair_init(&q->miniqp_ingress, sch, &dev->miniq_ingress);
 
+        err = clsact_parse_opt(opt, &q->ingress_block_info,
+                               &q->egress_block_info, extack);
+        if (err)
+                return err;
+
         q->ingress_block_info.binder_type = TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
         q->ingress_block_info.chain_head_change = clsact_chain_head_change;
         q->ingress_block_info.chain_head_change_priv = &q->miniqp_ingress;
@@ -218,6 +295,28 @@ static void clsact_destroy(struct Qdisc *sch)
         net_dec_egress_queue();
 }
 
+static int clsact_dump(struct Qdisc *sch, struct sk_buff *skb)
+{
+        struct clsact_sched_data *q = qdisc_priv(sch);
+        struct nlattr *nest;
+
+        nest = nla_nest_start(skb, TCA_OPTIONS);
+        if (!nest)
+                goto nla_put_failure;
+        if (q->ingress_block->index &&
+            nla_put_u32(skb, TCA_CLSACT_INGRESS_BLOCK, q->ingress_block->index))
+                goto nla_put_failure;
+        if (q->egress_block->index &&
+            nla_put_u32(skb, TCA_CLSACT_EGRESS_BLOCK, q->egress_block->index))
+                goto nla_put_failure;
+
+        return nla_nest_end(skb, nest);
+
+nla_put_failure:
+        nla_nest_cancel(skb, nest);
+        return -1;
+}
+
 static const struct Qdisc_class_ops clsact_class_ops = {
         .leaf           =       ingress_leaf,
         .find           =       clsact_find,
@@ -233,7 +332,7 @@ static struct Qdisc_ops clsact_qdisc_ops __read_mostly = {
         .priv_size      =       sizeof(struct clsact_sched_data),
         .init           =       clsact_init,
         .destroy        =       clsact_destroy,
-        .dump           =       ingress_dump,
+        .dump           =       clsact_dump,
         .owner          =       THIS_MODULE,
 };
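For illustration only, not part of the patch: a minimal userspace sketch of how
the new attributes could be filled in. It builds TCA_KIND and the TCA_OPTIONS
nest of an RTM_NEWQDISC request for a clsact qdisc using libmnl; the helper
name clsact_put_block_opts and the surrounding message setup are assumptions,
only TCA_KIND, TCA_OPTIONS and the TCA_CLSACT_* attributes come from the uapi
touched by this patch.

/*
 * Illustrative sketch (assumes libmnl). nlh must already carry the nlmsghdr
 * and struct tcmsg for the target device; this only appends the qdisc kind
 * and its options.
 */
#include <stdint.h>
#include <libmnl/libmnl.h>
#include <linux/rtnetlink.h>
#include <linux/pkt_sched.h>

static void clsact_put_block_opts(struct nlmsghdr *nlh,
                                  uint32_t ingress_block,
                                  uint32_t egress_block)
{
        struct nlattr *opts;

        mnl_attr_put_strz(nlh, TCA_KIND, "clsact");

        opts = mnl_attr_nest_start(nlh, TCA_OPTIONS);
        /* Index 0 is rejected by ingress_parse_opt()/clsact_parse_opt(). */
        if (ingress_block)
                mnl_attr_put_u32(nlh, TCA_CLSACT_INGRESS_BLOCK, ingress_block);
        if (egress_block)
                mnl_attr_put_u32(nlh, TCA_CLSACT_EGRESS_BLOCK, egress_block);
        mnl_attr_nest_end(nlh, opts);
}

With matching iproute2 support, the same request corresponds to something like
"tc qdisc add dev eth0 ingress_block 22 egress_block 23 clsact"; filters added
to block 22 or 23 are then shared by every qdisc bound to that block.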