
net: fix kmemcheck annotations

Message ID 4AE96A1D.8050300@gmail.com
State Accepted, archived
Delegated to: David Miller

Commit Message

Eric Dumazet Oct. 29, 2009, 10:10 a.m. UTC
The kmemcheck annotations in struct sk_buff enlarged the structure by 8/16 bytes.

Fix this by moving 'protocol' inside the flags1 bitfield
and queue_mapping inside the flags2 bitfield.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
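To see where the extra bytes come from, here is a minimal standalone sketch
(not the real sk_buff; the marker macros are reproduced on the assumption
that, with kmemcheck enabled, they expand to zero-length int arrays as in
the kmemcheck.h of that era). A 16-bit field placed after the end marker
starts a new int-aligned slot, while declaring it as a :16 bitfield inside
the marked region lets it pack with the flag bits:

#include <stdio.h>

/* Assumed expansion of the kmemcheck markers: zero-length int arrays. */
#define kmemcheck_bitfield_begin(name)	int name##_begin[0];
#define kmemcheck_bitfield_end(name)	int name##_end[0];

/* Simplified stand-ins for sk_buff, before and after this patch. */
struct skb_before {
	kmemcheck_bitfield_begin(flags1);
	unsigned char	ipvs_property:1,
			peeked:1,
			nf_trace:1;
	kmemcheck_bitfield_end(flags1);
	unsigned short	protocol;	/* new, int-aligned slot + padding */
};

struct skb_after {
	kmemcheck_bitfield_begin(flags1);
	unsigned char	ipvs_property:1,
			peeked:1,
			nf_trace:1;
	unsigned short	protocol:16;	/* packs with the flag bits */
	kmemcheck_bitfield_end(flags1);
};

int main(void)
{
	printf("before: %zu bytes, after: %zu bytes\n",
	       sizeof(struct skb_before), sizeof(struct skb_after));
	return 0;
}

On a typical gcc build the "before" layout is padded out past the end
marker while the "after" layout is not; the exact numbers are ABI
dependent, but this is the same packing effect the patch uses for
protocol and queue_mapping.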


Comments

David Miller Oct. 29, 2009, 10:17 a.m. UTC | #1
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Thu, 29 Oct 2009 11:10:37 +0100

> @@ -367,7 +367,6 @@ struct sk_buff {
>  #endif
>  
>  	int			iif;
> -	__u16			queue_mapping;
>  #ifdef CONFIG_NET_SCHED
>  	__u16			tc_index;	/* traffic control index */
>  #ifdef CONFIG_NET_CLS_ACT
> @@ -376,6 +375,7 @@ struct sk_buff {
>  #endif
>  
>  	kmemcheck_bitfield_begin(flags2);
> +	__u16			queue_mapping:16;
>  #ifdef CONFIG_IPV6_NDISC_NODETYPE
>  	__u8			ndisc_nodetype:2;
>  #endif

We may be trading size for performance here. I wonder if
it's wise to move queue_mapping like that, and what
locality change we get as a result.
Eric Dumazet Oct. 29, 2009, 10:44 a.m. UTC | #2
David Miller wrote:
> 
> We may be trading size for performance here. I wonder if
> it's wise to move queue_mapping like that, and what
> locality change we get as a result.

I agree, but I could not convince myself it makes a difference.

On 64-bit arches, queue_mapping doesn't change its cache line location
(0xa4 -> 0xa8).

On 32-bit arches, sizeof(sk_buff)=0xB0, and the rx and tx paths touch
all 3 cache lines anyway.

The only thing I can think of at the moment is to reorder the skb so that
the TX completion path doesn't touch all cache lines (grouping cb[] together
with some fields it doesn't touch). We could probably save one cache line miss.

It's a bit hard because the next/prev fields must be the first members
of the structure, but I believe you had some work in progress in this
area to switch to a standard list_head?
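
As a sanity check on the numbers above, a trivial sketch of the cache-line
arithmetic, assuming 64-byte lines (the offsets are the ones quoted in this
thread, not recomputed here):

#include <stdio.h>

#define CACHE_LINE_SIZE	64	/* assumed L1 line size */

/* Which cache line of the struct a given field offset falls into. */
static unsigned int cache_line_of(unsigned int offset)
{
	return offset / CACHE_LINE_SIZE;
}

int main(void)
{
	/* 64-bit case: old vs new queue_mapping offset */
	printf("offset 0xa4 -> line %u\n", cache_line_of(0xa4));
	printf("offset 0xa8 -> line %u\n", cache_line_of(0xa8));
	/* 32-bit case: sizeof(sk_buff) = 0xb0 */
	printf("0xb0 bytes -> %u lines\n",
	       (0xb0 + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE);
	return 0;
}

Both 0xa4 and 0xa8 fall in the third 64-byte line, and 0xb0 spans three
lines, which is why the move looks like a wash for locality.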

David Miller Oct. 30, 2009, 5:54 a.m. UTC | #3
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Thu, 29 Oct 2009 11:44:48 +0100

> David Miller wrote:
>> 
>> We may be trading size for performance here. I wonder if
>> it's wise to move queue_mapping like that, and what
>> locality change we get as a result.
> 
> I agree, but I could not convince myself it makes a difference.

Ok, I'll apply your patch as-is to net-2.6, thanks!

Patch

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 0c68fbd..defd51d 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -354,8 +354,8 @@  struct sk_buff {
 				ipvs_property:1,
 				peeked:1,
 				nf_trace:1;
+	__be16			protocol:16;
 	kmemcheck_bitfield_end(flags1);
-	__be16			protocol;
 
 	void			(*destructor)(struct sk_buff *skb);
 #if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE)
@@ -367,7 +367,6 @@  struct sk_buff {
 #endif
 
 	int			iif;
-	__u16			queue_mapping;
 #ifdef CONFIG_NET_SCHED
 	__u16			tc_index;	/* traffic control index */
 #ifdef CONFIG_NET_CLS_ACT
@@ -376,6 +375,7 @@  struct sk_buff {
 #endif
 
 	kmemcheck_bitfield_begin(flags2);
+	__u16			queue_mapping:16;
 #ifdef CONFIG_IPV6_NDISC_NODETYPE
 	__u8			ndisc_nodetype:2;
 #endif