From patchwork Thu Jun  5 15:08:24 2014
X-Patchwork-Submitter: Pablo Neira Ayuso
X-Patchwork-Id: 356494
X-Patchwork-Delegate: davem@davemloft.net
From: Pablo Neira Ayuso <pablo@netfilter.org>
To: netfilter-devel@vger.kernel.org
Cc: davem@davemloft.net, netdev@vger.kernel.org
Subject: [PATCH 4/6] netfilter: nft_rbtree: introduce locking
Date: Thu, 5 Jun 2014 17:08:24 +0200
Message-Id: <1401980906-25290-5-git-send-email-pablo@netfilter.org>
In-Reply-To: <1401980906-25290-1-git-send-email-pablo@netfilter.org>
References: <1401980906-25290-1-git-send-email-pablo@netfilter.org>
X-Mailing-List: netdev@vger.kernel.org

There's no rbtree RCU version yet, so let's fall back on the spinlock to
protect concurrent access to this structure from both user-space (to update
the set content) and kernel-space (in the packet path).
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 net/netfilter/nft_rbtree.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/net/netfilter/nft_rbtree.c b/net/netfilter/nft_rbtree.c
index 072e611..e1836ff 100644
--- a/net/netfilter/nft_rbtree.c
+++ b/net/netfilter/nft_rbtree.c
@@ -18,6 +18,8 @@
 #include <linux/netfilter/nf_tables.h>
 #include <net/netfilter/nf_tables.h>
 
+static DEFINE_SPINLOCK(nft_rbtree_lock);
+
 struct nft_rbtree {
 	struct rb_root root;
 };
@@ -38,6 +40,7 @@ static bool nft_rbtree_lookup(const struct nft_set *set,
 	const struct rb_node *parent = priv->root.rb_node;
 	int d;
 
+	spin_lock_bh(&nft_rbtree_lock);
 	while (parent != NULL) {
 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
 
@@ -53,6 +56,8 @@ found:
 				goto out;
 			if (set->flags & NFT_SET_MAP)
 				nft_data_copy(data, rbe->data);
+
+			spin_unlock_bh(&nft_rbtree_lock);
 			return true;
 		}
 	}
@@ -62,6 +67,7 @@ found:
 			goto found;
 	}
 out:
+	spin_unlock_bh(&nft_rbtree_lock);
 	return false;
 }
 
@@ -124,9 +130,12 @@ static int nft_rbtree_insert(const struct nft_set *set,
 	    !(rbe->flags & NFT_SET_ELEM_INTERVAL_END))
 		nft_data_copy(rbe->data, &elem->data);
 
+	spin_lock_bh(&nft_rbtree_lock);
 	err = __nft_rbtree_insert(set, rbe);
 	if (err < 0)
 		kfree(rbe);
+
+	spin_unlock_bh(&nft_rbtree_lock);
 	return err;
 }
 
@@ -136,7 +145,9 @@ static void nft_rbtree_remove(const struct nft_set *set,
 	struct nft_rbtree *priv = nft_set_priv(set);
 	struct nft_rbtree_elem *rbe = elem->cookie;
 
+	spin_lock_bh(&nft_rbtree_lock);
 	rb_erase(&rbe->node, &priv->root);
+	spin_unlock_bh(&nft_rbtree_lock);
 	kfree(rbe);
 }
 
@@ -147,6 +158,7 @@ static int nft_rbtree_get(const struct nft_set *set, struct nft_set_elem *elem)
 	struct nft_rbtree_elem *rbe;
 	int d;
 
+	spin_lock_bh(&nft_rbtree_lock);
 	while (parent != NULL) {
 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
 
@@ -161,9 +173,11 @@ static int nft_rbtree_get(const struct nft_set *set, struct nft_set_elem *elem)
 			    !(rbe->flags & NFT_SET_ELEM_INTERVAL_END))
 				nft_data_copy(&elem->data, rbe->data);
 			elem->flags = rbe->flags;
+			spin_unlock_bh(&nft_rbtree_lock);
 			return 0;
 		}
 	}
+	spin_unlock_bh(&nft_rbtree_lock);
 	return -ENOENT;
 }
 
@@ -176,6 +190,7 @@ static void nft_rbtree_walk(const struct nft_ctx *ctx,
 	struct nft_set_elem elem;
 	struct rb_node *node;
 
+	spin_lock_bh(&nft_rbtree_lock);
 	for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
 		if (iter->count < iter->skip)
 			goto cont;
@@ -188,11 +203,14 @@ static void nft_rbtree_walk(const struct nft_ctx *ctx,
 		elem.flags = rbe->flags;
 
 		iter->err = iter->fn(ctx, set, iter, &elem);
-		if (iter->err < 0)
+		if (iter->err < 0) {
+			spin_unlock_bh(&nft_rbtree_lock);
 			return;
+		}
cont:
 		iter->count++;
 	}
+	spin_unlock_bh(&nft_rbtree_lock);
 }
 
 static unsigned int nft_rbtree_privsize(const struct nlattr * const nla[])
@@ -216,11 +234,13 @@ static void nft_rbtree_destroy(const struct nft_set *set)
 	struct nft_rbtree_elem *rbe;
 	struct rb_node *node;
 
+	spin_lock_bh(&nft_rbtree_lock);
 	while ((node = priv->root.rb_node) != NULL) {
 		rb_erase(node, &priv->root);
 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
 		nft_rbtree_elem_destroy(set, rbe);
 	}
+	spin_unlock_bh(&nft_rbtree_lock);
 }
 
 static bool nft_rbtree_estimate(const struct nft_set_desc *desc, u32 features,