
net: nf_conntrack_alloc() fixes

Message ID 4A5E33D9.2030602@gmail.com
State Not Applicable, archived
Delegated to: David Miller

Commit Message

Eric Dumazet July 15, 2009, 7:54 p.m. UTC
Patrick McHardy wrote:
> Eric Dumazet wrote:
>> [PATCH] net: nf_conntrack_alloc() should not use kmem_cache_zalloc()
>>
>> When a slab cache uses SLAB_DESTROY_BY_RCU, we must be careful when allocating
>> objects, since the slab allocator can return a freed object that is still in use
>> by lockless readers.
>>
>> In particular, nf_conntrack RCU lookups rely on ct->tuplehash[xxx].hnnode.next
>> always being valid (i.e. containing either a valid 'nulls' value or a valid
>> pointer to the next object in the hash chain).
>>
>> kmem_cache_zalloc() sets up the object with NULL values, but NULL is not a valid
>> value for ct->tuplehash[xxx].hnnode.next.
>>
>> The fix is to call kmem_cache_alloc() and do the zeroing ourselves.
> 
> I think this is still racy, please see below:

Nice catch indeed!

> 
>> diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
>> index 7508f11..23feafa 100644
>> --- a/net/netfilter/nf_conntrack_core.c
>> +++ b/net/netfilter/nf_conntrack_core.c
>> @@ -561,17 +561,28 @@ struct nf_conn *nf_conntrack_alloc(struct net *net,
>>  		}
>>  	}
>>  
>> -	ct = kmem_cache_zalloc(nf_conntrack_cachep, gfp);
>> +	/*
>> +	 * Do not use kmem_cache_zalloc(), as this cache uses
>> +	 * SLAB_DESTROY_BY_RCU.
>> +	 */
>> +	ct = kmem_cache_alloc(nf_conntrack_cachep, gfp);
>>  	if (ct == NULL) {
>>  		pr_debug("nf_conntrack_alloc: Can't alloc conntrack.\n");
>>  		atomic_dec(&net->ct.count);
>>  		return ERR_PTR(-ENOMEM);
>>  	}
>> -
> 
> __nf_conntrack_find() on another CPU finds the entry at this point.
> 
>> +	/*
>> +	 * Let ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode.next
>> +	 * and ct->tuplehash[IP_CT_DIR_REPLY].hnnode.next unchanged.
>> +	 */
>> +	memset(&ct->tuplehash[IP_CT_DIR_MAX], 0,
>> +	       sizeof(*ct) - offsetof(struct nf_conn, tuplehash[IP_CT_DIR_MAX]));
>>  	spin_lock_init(&ct->lock);
>>  	atomic_set(&ct->ct_general.use, 1);
> 
> nf_conntrack_find_get() then successfully performs atomic_inc_not_zero()
> at this point, followed by another tuple comparison, which also
> succeeds.
> 
> Am I missing something? I think we need to make sure the reference
> count is not increased until the new tuples are visible.
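
For context, the reader path Patrick describes looks roughly like this
(a simplified sketch of the nf_conntrack_find_get() of that era, not the
exact code):

	struct nf_conntrack_tuple_hash *h;
	struct nf_conn *ct;

	rcu_read_lock();
begin:
	h = __nf_conntrack_find(net, tuple);	/* lockless nulls-list walk */
	if (h) {
		ct = nf_ct_tuplehash_to_ctrack(h);
		if (unlikely(!atomic_inc_not_zero(&ct->ct_general.use)))
			h = NULL;		/* refcnt was 0: being freed */
		else if (unlikely(!nf_ct_tuple_equal(tuple, &h->tuple))) {
			/* The object was recycled for another tuple between
			 * the find and the refcount increment: drop the
			 * reference and retry.  This re-check only works if
			 * the allocation side makes the new tuple visible
			 * before the new nonzero refcnt. */
			nf_ct_put(ct);
			goto begin;
		}
	}
	rcu_read_unlock();

With kmem_cache_zalloc() and the early atomic_set(..., 1), both checks can
succeed against a half-initialized, recycled object, which is exactly the
window described above.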

Yes, you are right, and Documentation/RCU/rculist_nulls.txt should be
updated to reflect this as well (in the insert algorithm, we must use
smp_wmb() between the obj->key assignment and the refcnt assignment).
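
In outline, the corrected insert side then becomes (generic sketch,
mirroring the rculist_nulls.txt update in the patch below):

	obj = kmem_cache_alloc(cachep, GFP_ATOMIC);
	lock_chain();			/* typically a spin_lock() */
	obj->key = key;			/* the object's new identity */
	/*
	 * The new key must be visible by the time a reader can observe
	 * the nonzero refcnt; otherwise a reader can take a reference
	 * and still see the stale key, so its re-check wrongly succeeds.
	 */
	smp_wmb();
	atomic_set(&obj->refcnt, 1);
	hlist_add_head_rcu(&obj->obj_node, list);
	unlock_chain();			/* typically a spin_unlock() */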

We'll have to change socket allocation too; this will be addressed
by a follow-up patch.

Thanks, Patrick!

[PATCH] net: nf_conntrack_alloc() fixes

When a slab cache uses SLAB_DESTROY_BY_RCU, we must be careful when allocating
objects, since the slab allocator can return a freed object that is still in use
by lockless readers.

In particular, nf_conntrack RCU lookups rely on ct->tuplehash[xxx].hnnode.next
always being valid (i.e. containing either a valid 'nulls' value or a valid
pointer to the next object in the hash chain).

kmem_cache_zalloc() sets up the object with NULL values, but NULL is not a valid
value for ct->tuplehash[xxx].hnnode.next.

The fix is to call kmem_cache_alloc() and do the zeroing ourselves.

As spotted by Patrick, we also need to make sure the lookup keys are committed to
memory before the refcount is set to 1, or a lockless reader could get a reference
to the old version of the object, and its key re-check could then wrongly succeed.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
 Documentation/RCU/rculist_nulls.txt |    7 ++++++-
 net/netfilter/nf_conntrack_core.c   |   21 ++++++++++++++++++---
 2 files changed, 24 insertions(+), 4 deletions(-)
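
For reference, the partial memset in the conntrack patch below depends on
the struct nf_conn layout of that time: tuplehash[] sits near the start of
the structure, so &ct->tuplehash[IP_CT_DIR_MAX] (one past the last entry)
is the first byte that is safe to clear on reuse. An abridged sketch, not
the full definition:

	struct nf_conn {
		struct nf_conntrack ct_general;	/* refcnt: set last, after smp_wmb() */
		spinlock_t lock;		/* re-initialized explicitly */
		/* .hnnode.next is left untouched for lockless readers;
		 * .hnnode.pprev and .tuple are assigned explicitly */
		struct nf_conntrack_tuple_hash tuplehash[IP_CT_DIR_MAX];
		unsigned long status;		/* memset starts here... */
		/* ... remaining fields, all zeroed on reuse ... */
	};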


Comments

Patrick McHardy July 16, 2009, 12:05 p.m. UTC | #1
Eric Dumazet wrote:
> [PATCH] net: nf_conntrack_alloc() fixes
> 
> When a slab cache uses SLAB_DESTROY_BY_RCU, we must be careful when allocating
> objects, since the slab allocator can return a freed object that is still in use
> by lockless readers.
>
> In particular, nf_conntrack RCU lookups rely on ct->tuplehash[xxx].hnnode.next
> always being valid (i.e. containing either a valid 'nulls' value or a valid
> pointer to the next object in the hash chain).
>
> kmem_cache_zalloc() sets up the object with NULL values, but NULL is not a valid
> value for ct->tuplehash[xxx].hnnode.next.
>
> The fix is to call kmem_cache_alloc() and do the zeroing ourselves.
>
> As spotted by Patrick, we also need to make sure the lookup keys are committed to
> memory before the refcount is set to 1, or a lockless reader could get a reference
> to the old version of the object, and its key re-check could then wrongly succeed.

Looks good to me. Applied, thanks Eric. I'll push it to -stable
with the other fixes in a couple of days.

Patch

diff --git a/Documentation/RCU/rculist_nulls.txt b/Documentation/RCU/rculist_nulls.txt
index 93cb28d..18f9651 100644
--- a/Documentation/RCU/rculist_nulls.txt
+++ b/Documentation/RCU/rculist_nulls.txt
@@ -83,11 +83,12 @@  not detect it missed following items in original chain.
 obj = kmem_cache_alloc(...);
 lock_chain(); // typically a spin_lock()
 obj->key = key;
-atomic_inc(&obj->refcnt);
 /*
  * we need to make sure obj->key is updated before obj->next
+ * or obj->refcnt
  */
 smp_wmb();
+atomic_set(&obj->refcnt, 1);
 hlist_add_head_rcu(&obj->obj_node, list);
 unlock_chain(); // typically a spin_unlock()
 
@@ -159,6 +160,10 @@  out:
 obj = kmem_cache_alloc(cachep);
 lock_chain(); // typically a spin_lock()
 obj->key = key;
+/*
+ * changes to obj->key must be visible before refcnt one
+ */
+smp_wmb();
 atomic_set(&obj->refcnt, 1);
 /*
  * insert obj in RCU way (readers might be traversing chain)
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 7508f11..b5869b9 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -561,23 +561,38 @@  struct nf_conn *nf_conntrack_alloc(struct net *net,
 		}
 	}
 
-	ct = kmem_cache_zalloc(nf_conntrack_cachep, gfp);
+	/*
+	 * Do not use kmem_cache_zalloc(), as this cache uses
+	 * SLAB_DESTROY_BY_RCU.
+	 */
+	ct = kmem_cache_alloc(nf_conntrack_cachep, gfp);
 	if (ct == NULL) {
 		pr_debug("nf_conntrack_alloc: Can't alloc conntrack.\n");
 		atomic_dec(&net->ct.count);
 		return ERR_PTR(-ENOMEM);
 	}
-
+	/*
+	 * Let ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode.next
+	 * and ct->tuplehash[IP_CT_DIR_REPLY].hnnode.next unchanged.
+	 */
+	memset(&ct->tuplehash[IP_CT_DIR_MAX], 0,
+	       sizeof(*ct) - offsetof(struct nf_conn, tuplehash[IP_CT_DIR_MAX]));
 	spin_lock_init(&ct->lock);
-	atomic_set(&ct->ct_general.use, 1);
 	ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple = *orig;
+	ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode.pprev = NULL;
 	ct->tuplehash[IP_CT_DIR_REPLY].tuple = *repl;
+	ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev = NULL;
 	/* Don't set timer yet: wait for confirmation */
 	setup_timer(&ct->timeout, death_by_timeout, (unsigned long)ct);
 #ifdef CONFIG_NET_NS
 	ct->ct_net = net;
 #endif
 
+	/*
+	 * changes to lookup keys must be done before setting refcnt to 1
+	 */
+	smp_wmb();
+	atomic_set(&ct->ct_general.use, 1);
 	return ct;
 }
 EXPORT_SYMBOL_GPL(nf_conntrack_alloc);