Patchwork inetpeer: fix a race in inetpeer_gc_worker()

Submitter Eric Dumazet
Date June 5, 2012, 9:28 a.m.
Message ID <1338888507.2760.2146.camel@edumazet-glaptop>
Permalink /patch/163054/
State Superseded
Delegated to: David Miller

Comments

Eric Dumazet - June 5, 2012, 9:28 a.m.
From: Eric Dumazet <edumazet@google.com>

commit 5faa5df1fa2024 (inetpeer: Invalidate the inetpeer tree along with
the routing cache) added a race:

Before freeing an inetpeer, we must respect an RCU grace period, and make
sure no user will attempt to increase refcnt.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
---
 net/ipv4/inetpeer.c |   14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)



--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Steffen Klassert - June 5, 2012, 11:56 a.m.
On Tue, Jun 05, 2012 at 11:28:27AM +0200, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@google.com>
> 
> commit 5faa5df1fa2024 (inetpeer: Invalidate the inetpeer tree along with
> the routing cache) added a race:
> 
> Before freeing an inetpeer, we must respect an RCU grace period, and make
> sure no user will attempt to increase refcnt.
> 

As already mentioned in the other mail: in this case, I think
we can just delete the inetpeer once the refcount reaches zero.


Patch

diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index d4d61b6..f936e95 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -108,6 +108,11 @@  int inet_peer_threshold __read_mostly = 65536 + 128;	/* start to throw entries m
 int inet_peer_minttl __read_mostly = 120 * HZ;	/* TTL under high load: 120 sec */
 int inet_peer_maxttl __read_mostly = 10 * 60 * HZ;	/* usual time to live: 10 min */
 
+static void inetpeer_free_rcu(struct rcu_head *head)
+{
+	kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
+}
+
 static void inetpeer_gc_worker(struct work_struct *work)
 {
 	struct inet_peer *p, *n;
@@ -137,9 +142,9 @@  static void inetpeer_gc_worker(struct work_struct *work)
 
 		n = list_entry(p->gc_list.next, struct inet_peer, gc_list);
 
-		if (!atomic_read(&p->refcnt)) {
+		if (atomic_cmpxchg(&p->refcnt, 0, -1) == 0) {
 			list_del(&p->gc_list);
-			kmem_cache_free(peer_cachep, p);
+			call_rcu(&p->rcu, inetpeer_free_rcu);
 		}
 	}
 
@@ -364,11 +369,6 @@  do {								\
 	peer_avl_rebalance(stack, stackptr, base);		\
 } while (0)
 
-static void inetpeer_free_rcu(struct rcu_head *head)
-{
-	kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
-}
-
 static void unlink_from_pool(struct inet_peer *p, struct inet_peer_base *base,
 			     struct inet_peer __rcu **stack[PEER_MAXDEPTH])
 {