RFS seems to have incompatibilities with bridged vlans

Submitter: Eric Dumazet
Date: June 8, 2010, 2:18 p.m.
Message ID: <1276006721.2486.141.camel@edumazet-laptop>
Permalink: /patch/54984/
State: Superseded
Delegated to: David Miller

Comments

Eric Dumazet - June 8, 2010, 2:18 p.m.
On Monday, June 7, 2010 at 15:30 -0700, John Fastabend wrote:

> There is always a possibility that the underlying device sets the
> queue_mapping to be greater than num_cpus.  I also suspect the same
> issue exists with bonding devices.  Maybe something like the following
> is worthwhile? Compile tested only.
> 
> [PATCH] 8021q: vlan reassigns dev without check queue_mapping
> 
> recv path reassigns skb->dev without sanity checking the
> queue_mapping field.  This can result in the queue_mapping
> field being set incorrectly if the new dev supports fewer
> queues than the underlying device.
> 
> This patch just resets the queue_mapping to 0, which should
> resolve this issue.  Any thoughts?
> 
> The same issue could happen on bonding devices as well.
> 
> Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
> ---
> 
>   net/8021q/vlan_core.c |    6 ++++++
>   1 files changed, 6 insertions(+), 0 deletions(-)
> 
> diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c
> index bd537fc..ad309f8 100644
> --- a/net/8021q/vlan_core.c
> +++ b/net/8021q/vlan_core.c
> @@ -21,6 +21,9 @@ int __vlan_hwaccel_rx(struct sk_buff *skb, struct vlan_group *grp,
>   	if (!skb->dev)
>   		goto drop;
> 
> +	if (unlikely(skb->queue_mapping >= skb->dev->real_num_tx_queues))
> +		skb_set_queue_mapping(skb, 0);
> +
>   	return (polling ? netif_receive_skb(skb) : netif_rx(skb));
> 
>   drop:
> @@ -93,6 +96,9 @@ vlan_gro_common(struct napi_struct *napi, struct vlan_group *grp,
>   	if (!skb->dev)
>   		goto drop;
> 
> +	if (unlikely(skb->queue_mapping >= skb->dev->real_num_tx_queues))
> +		skb_set_queue_mapping(skb, 0);
> +
>   	for (p = napi->gro_list; p; p = p->next) {
>   		NAPI_GRO_CB(p)->same_flow =
>   			p->dev == skb->dev && !compare_ether_header(
> --

This is only a workaround, and it adds a test in the hot path of an
otherwise 'good' driver (multiqueue enabled and ready).

eth0  -------> bond / bridge ---------> vlan.id
(nbtxq=8)      (nbtxq=1)        (nbtxq=X)

X is capped to 1 because of the bond/bridge, while the bond itself has
no "queue" at all (it is an LLTX driver).
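
For context, and strictly from memory: the vlan code allocates its netdev
with the real device's TX queue count, so X simply inherits whatever the
bond/bridge advertises. Roughly (net/8021q/vlan.c, register_vlan_device(),
untested recollection):

	/* the vlan device copies its TX queue count from real_dev */
	new_dev = alloc_netdev_mq(sizeof(struct vlan_dev_info), name,
				  vlan_setup, real_dev->num_tx_queues);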

Solutions:

1) queue_mapping could be silently tested in get_rps_cpu()...




diff --git a/net/core/dev.c b/net/core/dev.c
index 6f330ce..3a3f7f6 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2272,14 +2272,11 @@  static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 
 	if (skb_rx_queue_recorded(skb)) {
 		u16 index = skb_get_rx_queue(skb);
-		if (unlikely(index >= dev->num_rx_queues)) {
-			if (net_ratelimit()) {
-				pr_warning("%s received packet on queue "
-					"%u, but number of RX queues is %u\n",
-					dev->name, index, dev->num_rx_queues);
-			}
-			goto done;
-		}
+		if (WARN_ONCE(index >= dev->num_rx_queues,
+				KERN_WARNING "%s received packet on queue %u, "
+				"but number of RX queues is %u\n",
+				dev->name, index, dev->num_rx_queues))
+			index %= dev->num_rx_queues;
 		rxqueue = dev->_rx + index;
 	} else
 		rxqueue = dev->_rx;



2) bond/bridge should set up more queues, just in case.
   We probably need to be able to make things more dynamic
   (propagate nbtxq between layers), but not for 2.6.35.
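
   For the bridge side, an analogous change would presumably be needed
   where the bridge netdev is allocated; a rough, untested sketch (from
   my recollection of new_bridge_dev() in net/bridge/br_if.c), mirroring
   the bond patch below:

	/* allocate the bridge device multiqueue as well, instead of the
	 * single-queue alloc_netdev() -- untested sketch
	 */
	dev = alloc_netdev_mq(sizeof(struct net_bridge), name,
			      br_dev_setup, max(64, nr_cpu_ids));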



diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 5e12462..ce813dd 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -5012,8 +5012,8 @@  int bond_create(struct net *net, const char *name)
 
 	rtnl_lock();
 
-	bond_dev = alloc_netdev(sizeof(struct bonding), name ? name : "",
-				bond_setup);
+	bond_dev = alloc_netdev_mq(sizeof(struct bonding), name ? name : "",
+				   bond_setup, max(64, nr_cpu_ids));
 	if (!bond_dev) {
 		pr_err("%s: eek! can't alloc netdev!\n", name);
 		rtnl_unlock();