From patchwork Fri Jan 19 14:12:50 2018
X-Patchwork-Id: 863537
From: Davide Caratti
To: "David S. Miller"
Cc: netdev@vger.kernel.org, Cong Wang, Jiri Pirko
Subject: [PATCH net-next v2 1/2] net/sched: act_csum: use per-core statistics
Date: Fri, 19 Jan 2018 15:12:50 +0100
Message-Id: <88d6e0d86f107270a907e256d866757cc79bf254.1516366191.git.dcaratti@redhat.com>

Use per-CPU counters, like other TC actions do, instead of maintaining one
set of stats shared by all cores. This allows act_csum statistics to be
updated without protecting them with spin_{,un}lock_bh() invocations.
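The idea, for readers less familiar with these helpers: every CPU owns a
private slot for packet and drop counters, the fast path bumps only its own
slot, and the slow path sums all slots when the statistics are dumped. The
kernel side uses bstats_cpu_update() and qstats_drop_inc() on this_cpu_ptr()
pointers, as in the diff below. A minimal userspace sketch of the same scheme
follows (illustration only, not the kernel API; NCPU, count_packet() and
read_stats() are made-up names):

/* Userspace analogy of per-CPU stats (not the kernel API). */
#include <inttypes.h>
#include <stdio.h>

#define NCPU 4

struct cpu_stats {
	uint64_t packets;
	uint64_t drops;
} __attribute__((aligned(64)));		/* avoid false sharing between slots */

static struct cpu_stats stats[NCPU];

/* fast path: each "CPU" touches only its own slot, no lock needed */
static void count_packet(int cpu, int dropped)
{
	stats[cpu].packets++;
	if (dropped)
		stats[cpu].drops++;
}

/* slow path (stats dump): aggregate all slots */
static void read_stats(uint64_t *packets, uint64_t *drops)
{
	*packets = *drops = 0;
	for (int i = 0; i < NCPU; i++) {
		*packets += stats[i].packets;
		*drops += stats[i].drops;
	}
}

int main(void)
{
	uint64_t p, d;

	count_packet(0, 0);
	count_packet(1, 1);
	read_stats(&p, &d);
	printf("packets=%" PRIu64 " drops=%" PRIu64 "\n", p, d);
	return 0;
}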
Signed-off-by: Davide Caratti
---
 net/sched/act_csum.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/net/sched/act_csum.c b/net/sched/act_csum.c
index af4b8ec60d9a..df22da365cd9 100644
--- a/net/sched/act_csum.c
+++ b/net/sched/act_csum.c
@@ -67,7 +67,7 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla,
 
 	if (!tcf_idr_check(tn, parm->index, a, bind)) {
 		ret = tcf_idr_create(tn, parm->index, est, a,
-				     &act_csum_ops, bind, false);
+				     &act_csum_ops, bind, true);
 		if (ret)
 			return ret;
 		ret = ACT_P_CREATED;
@@ -542,9 +542,9 @@ static int tcf_csum(struct sk_buff *skb, const struct tc_action *a,
 	int action;
 	u32 update_flags;
 
-	spin_lock(&p->tcf_lock);
 	tcf_lastuse_update(&p->tcf_tm);
-	bstats_update(&p->tcf_bstats, skb);
+	bstats_cpu_update(this_cpu_ptr(p->common.cpu_bstats), skb);
+	spin_lock(&p->tcf_lock);
 	action = p->tcf_action;
 	update_flags = p->update_flags;
 	spin_unlock(&p->tcf_lock);
@@ -566,9 +566,7 @@ static int tcf_csum(struct sk_buff *skb, const struct tc_action *a,
 	return action;
 
 drop:
-	spin_lock(&p->tcf_lock);
-	p->tcf_qstats.drops++;
-	spin_unlock(&p->tcf_lock);
+	qstats_drop_inc(this_cpu_ptr(p->common.cpu_qstats));
 	return TC_ACT_SHOT;
 }

From patchwork Fri Jan 19 14:12:51 2018
X-Patchwork-Id: 863539
From: Davide Caratti
To: "David S. Miller"
Cc: netdev@vger.kernel.org, Cong Wang, Jiri Pirko
Subject: [PATCH net-next v2 2/2] net/sched: act_csum: don't use spinlock in the fast path
Date: Fri, 19 Jan 2018 15:12:51 +0100
Message-Id: <88f12956018ca5a21d793428ac0b9dc5b3183319.1516366191.git.dcaratti@redhat.com>

Use RCU instead of spin_{,un}lock_bh() to protect concurrent reads and
writes of the act_csum configuration, to reduce the effects of contention
in the data path when multiple readers are present.
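For background, the shape of the conversion: the fast path dereferences the
current parameter block without taking a lock, while the control path
publishes a freshly allocated copy and defers freeing the old one until no
reader can still hold a reference. The userspace sketch below models this
with C11 atomics (illustration only, not the kernel API; csum_params,
read_flags() and update_params() are made-up names, and the immediate free()
stands in for what kfree_rcu() does safely only after an RCU grace period):

/* Userspace analogy of the RCU read/update scheme (not the kernel API). */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct csum_params {			/* loosely modeled on tcf_csum_params */
	int action;
	uint32_t update_flags;
};

static _Atomic(struct csum_params *) params;

/* fast path: lockless read, like rcu_read_lock()/rcu_dereference() */
static uint32_t read_flags(void)
{
	struct csum_params *p;

	p = atomic_load_explicit(&params, memory_order_acquire);
	return p ? p->update_flags : 0;
}

/* control path: swap in a new configuration, like rcu_assign_pointer() */
static int update_params(int action, uint32_t flags)
{
	struct csum_params *new, *old;

	new = malloc(sizeof(*new));
	if (!new)
		return -1;
	new->action = action;
	new->update_flags = flags;
	old = atomic_exchange_explicit(&params, new, memory_order_acq_rel);

	/*
	 * The kernel defers this free with kfree_rcu(), so readers still
	 * inside their read-side section never see freed memory; freeing
	 * immediately is only safe in this single-threaded toy.
	 */
	free(old);
	return 0;
}

int main(void)
{
	update_params(0, 0x1);
	printf("update_flags=0x%x\n", (unsigned)read_flags());
	update_params(0, 0x7);
	printf("update_flags=0x%x\n", (unsigned)read_flags());
	free(atomic_load(&params));
	return 0;
}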
Signed-off-by: Davide Caratti
---
 include/net/tc_act/tc_csum.h | 16 ++++++++++--
 net/sched/act_csum.c         | 58 ++++++++++++++++++++++++++++++++++----------
 2 files changed, 59 insertions(+), 15 deletions(-)

diff --git a/include/net/tc_act/tc_csum.h b/include/net/tc_act/tc_csum.h
index 781f3433a0be..9470fd7e4350 100644
--- a/include/net/tc_act/tc_csum.h
+++ b/include/net/tc_act/tc_csum.h
@@ -6,10 +6,16 @@
 #include
 #include
 
+struct tcf_csum_params {
+	int action;
+	u32 update_flags;
+	struct rcu_head rcu;
+};
+
 struct tcf_csum {
 	struct tc_action common;
 
-	u32 update_flags;
+	struct tcf_csum_params __rcu *params;
 };
 
 #define to_tcf_csum(a) ((struct tcf_csum *)a)
@@ -24,7 +30,13 @@ static inline bool is_tcf_csum(const struct tc_action *a)
 
 static inline u32 tcf_csum_update_flags(const struct tc_action *a)
 {
-	return to_tcf_csum(a)->update_flags;
+	u32 update_flags;
+
+	rcu_read_lock();
+	update_flags = rcu_dereference(to_tcf_csum(a)->params)->update_flags;
+	rcu_read_unlock();
+
+	return update_flags;
 }
 
 #endif /* __NET_TC_CSUM_H */
diff --git a/net/sched/act_csum.c b/net/sched/act_csum.c
index df22da365cd9..93acbaa2bb25 100644
--- a/net/sched/act_csum.c
+++ b/net/sched/act_csum.c
@@ -49,6 +49,7 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla,
 			 int bind)
 {
 	struct tc_action_net *tn = net_generic(net, csum_net_id);
+	struct tcf_csum_params *params_old, *params_new;
 	struct nlattr *tb[TCA_CSUM_MAX + 1];
 	struct tc_csum *parm;
 	struct tcf_csum *p;
@@ -80,10 +81,21 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla,
 	}
 
 	p = to_tcf_csum(*a);
-	spin_lock_bh(&p->tcf_lock);
-	p->tcf_action = parm->action;
-	p->update_flags = parm->update_flags;
-	spin_unlock_bh(&p->tcf_lock);
+	ASSERT_RTNL();
+
+	params_new = kzalloc(sizeof(*params_new), GFP_KERNEL);
+	if (unlikely(!params_new)) {
+		if (ret == ACT_P_CREATED)
+			tcf_idr_release(*a, bind);
+		return -ENOMEM;
+	}
+	params_old = rtnl_dereference(p->params);
+
+	params_new->action = parm->action;
+	params_new->update_flags = parm->update_flags;
+	rcu_assign_pointer(p->params, params_new);
+	if (params_old)
+		kfree_rcu(params_old, rcu);
 
 	if (ret == ACT_P_CREATED)
 		tcf_idr_insert(tn, *a);
@@ -539,19 +551,21 @@ static int tcf_csum(struct sk_buff *skb, const struct tc_action *a,
 		    struct tcf_result *res)
 {
 	struct tcf_csum *p = to_tcf_csum(a);
-	int action;
+	struct tcf_csum_params *params;
 	u32 update_flags;
+	int action;
+
+	rcu_read_lock();
+	params = rcu_dereference(p->params);
 
 	tcf_lastuse_update(&p->tcf_tm);
 	bstats_cpu_update(this_cpu_ptr(p->common.cpu_bstats), skb);
-	spin_lock(&p->tcf_lock);
-	action = p->tcf_action;
-	update_flags = p->update_flags;
-	spin_unlock(&p->tcf_lock);
 
+	action = params->action;
 	if (unlikely(action == TC_ACT_SHOT))
-		goto drop;
+		goto drop_stats;
 
+	update_flags = params->update_flags;
 	switch (tc_skb_protocol(skb)) {
 	case cpu_to_be16(ETH_P_IP):
 		if (!tcf_csum_ipv4(skb, update_flags))
@@ -563,11 +577,16 @@ static int tcf_csum(struct sk_buff *skb, const struct tc_action *a,
 		break;
 	}
 
+unlock:
+	rcu_read_unlock();
 	return action;
 
 drop:
+	action = TC_ACT_SHOT;
+
+drop_stats:
 	qstats_drop_inc(this_cpu_ptr(p->common.cpu_qstats));
-	return TC_ACT_SHOT;
+	goto unlock;
 }
 
 static int tcf_csum_dump(struct sk_buff *skb, struct tc_action *a, int bind,
@@ -575,15 +594,18 @@ static int tcf_csum_dump(struct sk_buff *skb, struct tc_action *a, int bind,
 {
 	unsigned char *b = skb_tail_pointer(skb);
 	struct tcf_csum *p = to_tcf_csum(a);
+	struct tcf_csum_params *params;
 	struct tc_csum opt = {
-		.update_flags = p->update_flags,
 		.index = p->tcf_index,
-		.action = p->tcf_action,
 		.refcnt = p->tcf_refcnt - ref,
 		.bindcnt = p->tcf_bindcnt - bind,
 	};
 	struct tcf_t t;
 
+	params = rcu_dereference(p->params);
+	opt.action = params->action;
+	opt.update_flags = params->update_flags;
+
 	if (nla_put(skb, TCA_CSUM_PARMS, sizeof(opt), &opt))
 		goto nla_put_failure;
@@ -598,6 +620,15 @@ static int tcf_csum_dump(struct sk_buff *skb, struct tc_action *a, int bind,
 	return -1;
 }
 
+static void tcf_csum_cleanup(struct tc_action *a)
+{
+	struct tcf_csum *p = to_tcf_csum(a);
+	struct tcf_csum_params *params;
+
+	params = rcu_dereference_protected(p->params, 1);
+	kfree_rcu(params, rcu);
+}
+
 static int tcf_csum_walker(struct net *net, struct sk_buff *skb,
 			   struct netlink_callback *cb, int type,
 			   const struct tc_action_ops *ops)
@@ -621,6 +652,7 @@ static struct tc_action_ops act_csum_ops = {
 	.act = tcf_csum,
 	.dump = tcf_csum_dump,
 	.init = tcf_csum_init,
+	.cleanup = tcf_csum_cleanup,
 	.walk = tcf_csum_walker,
 	.lookup = tcf_csum_search,
 	.size = sizeof(struct tcf_csum),
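A note on the update path above: the new tcf_csum_params block is allocated
with GFP_KERNEL because tcf_csum_init() runs in process context with RTNL
held (hence the ASSERT_RTNL() and rtnl_dereference()), so sleeping is allowed
there. The old parameter block is released with kfree_rcu() rather than
kfree(), so a CPU still inside the rcu_read_lock() section of tcf_csum() can
never dereference freed memory, and tcf_csum_cleanup() can use
rcu_dereference_protected(p->params, 1) because no concurrent updater can
exist while the action is being destroyed.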