From patchwork Thu Feb 14 07:47:04 2019
X-Patchwork-Submitter: Vlad Buslov
X-Patchwork-Id: 1041880
X-Patchwork-Delegate: davem@davemloft.net
From: Vlad Buslov <vladbu@mellanox.com>
To: netdev@vger.kernel.org
Cc: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
    davem@davemloft.net, Vlad Buslov <vladbu@mellanox.com>
Subject: [PATCH net-next 04/12] net: sched: flower: track filter deletion with flag
Date: Thu, 14 Feb 2019 09:47:04 +0200
Message-Id: <20190214074712.17846-5-vladbu@mellanox.com>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20190214074712.17846-1-vladbu@mellanox.com>
References: <20190214074712.17846-1-vladbu@mellanox.com>

In order to prevent double deletion of a filter by concurrent tasks when
the rtnl lock is not used for synchronization, add a 'deleted' filter
field. Check the value of this field when modifying filters and return
an error if a concurrent deletion is detected.

Refactor __fl_delete() to accept a pointer to a 'last' boolean as an
argument, and to return an error code as its return value instead. This
is necessary to signal a concurrent filter delete to the caller.

Signed-off-by: Vlad Buslov
Acked-by: Jiri Pirko
---
 net/sched/cls_flower.c | 55 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 38 insertions(+), 17 deletions(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index b216ed26f344..fa5465f890e1 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -110,6 +110,7 @@ struct cls_fl_filter {
 	 * synchronization. Use atomic reference counter to be concurrency-safe.
 	 */
 	refcount_t refcnt;
+	bool deleted;
 };
 
 static const struct rhashtable_params mask_ht_params = {
@@ -454,6 +455,8 @@ static void __fl_put(struct cls_fl_filter *f)
 	if (!refcount_dec_and_test(&f->refcnt))
 		return;
 
+	WARN_ON(!f->deleted);
+
 	if (tcf_exts_get_net(&f->exts))
 		tcf_queue_work(&f->rwork, fl_destroy_filter_work);
 	else
@@ -490,22 +493,31 @@ static struct cls_fl_filter *fl_get_next_filter(struct tcf_proto *tp,
 	return f;
 }
 
-static bool __fl_delete(struct tcf_proto *tp, struct cls_fl_filter *f,
-			struct netlink_ext_ack *extack)
+static int __fl_delete(struct tcf_proto *tp, struct cls_fl_filter *f,
+		       bool *last, struct netlink_ext_ack *extack)
 {
 	struct cls_fl_head *head = fl_head_dereference(tp);
 	bool async = tcf_exts_get_net(&f->exts);
-	bool last;
-
-	idr_remove(&head->handle_idr, f->handle);
-	list_del_rcu(&f->list);
-	last = fl_mask_put(head, f->mask, async);
-	if (!tc_skip_hw(f->flags))
-		fl_hw_destroy_filter(tp, f, extack);
-	tcf_unbind_filter(tp, &f->res);
-	__fl_put(f);
+	int err = 0;
+
+	(*last) = false;
+
+	if (!f->deleted) {
+		f->deleted = true;
+		rhashtable_remove_fast(&f->mask->ht, &f->ht_node,
+				       f->mask->filter_ht_params);
+		idr_remove(&head->handle_idr, f->handle);
+		list_del_rcu(&f->list);
+		(*last) = fl_mask_put(head, f->mask, async);
+		if (!tc_skip_hw(f->flags))
+			fl_hw_destroy_filter(tp, f, extack);
+		tcf_unbind_filter(tp, &f->res);
+		__fl_put(f);
+	} else {
+		err = -ENOENT;
+	}
 
-	return last;
+	return err;
 }
 
 static void fl_destroy_sleepable(struct work_struct *work)
@@ -525,10 +537,12 @@ static void fl_destroy(struct tcf_proto *tp, bool rtnl_held,
 	struct cls_fl_head *head = fl_head_dereference(tp);
 	struct fl_flow_mask *mask, *next_mask;
 	struct cls_fl_filter *f, *next;
+	bool last;
 
 	list_for_each_entry_safe(mask, next_mask, &head->masks, list) {
 		list_for_each_entry_safe(f, next, &mask->filters, list) {
-			if (__fl_delete(tp, f, extack))
+			__fl_delete(tp, f, &last, extack);
+			if (last)
 				break;
 		}
 	}
@@ -1439,6 +1453,12 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	refcount_inc(&fnew->refcnt);
 	if (fold) {
+		/* Fold filter was deleted concurrently. Retry lookup. */
+		if (fold->deleted) {
+			err = -EAGAIN;
+			goto errout_hw;
+		}
+
 		fnew->handle = handle;
 
 		err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
@@ -1451,6 +1471,7 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 					       fold->mask->filter_ht_params);
 		idr_replace(&head->handle_idr, fnew, fnew->handle);
 		list_replace_rcu(&fold->list, &fnew->list);
+		fold->deleted = true;
 
 		if (!tc_skip_hw(fold->flags))
 			fl_hw_destroy_filter(tp, fold, NULL);
@@ -1520,14 +1541,14 @@ static int fl_delete(struct tcf_proto *tp, void *arg, bool *last,
 {
 	struct cls_fl_head *head = fl_head_dereference(tp);
 	struct cls_fl_filter *f = arg;
+	bool last_on_mask;
+	int err = 0;
 
-	rhashtable_remove_fast(&f->mask->ht, &f->ht_node,
-			       f->mask->filter_ht_params);
-	__fl_delete(tp, f, extack);
+	err = __fl_delete(tp, f, &last_on_mask, extack);
 	*last = list_empty(&head->masks);
 	__fl_put(f);
 
-	return 0;
+	return err;
 }
 
 static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg,