
[net-next,RESEND] xen-netfront: avoid packet loss when ethernet header crosses page boundary

Message ID: 1474023554-24520-1-git-send-email-vkuznets@redhat.com
State: Changes Requested, archived
Delegated to: David Miller

Commit Message

Vitaly Kuznetsov Sept. 16, 2016, 10:59 a.m. UTC
Small packet loss is reported on complex multi-host network configurations
involving tunnels, NAT, ... My investigation led me to the following check
in netback, which drops packets:

        if (unlikely(txreq.size < ETH_HLEN)) {
                netdev_err(queue->vif->dev,
                           "Bad packet size: %d\n", txreq.size);
                xenvif_tx_err(queue, &txreq, extra_count, idx);
                break;
        }

The check itself is legitimate. SKBs consist of a linear part (which has
to contain the ethernet header) and, optionally, a number of frags.
Netfront transmits the head of the linear part up to the page boundary as
the first request and everything after it becomes frags, so when netback
reconstructs the SKB it cannot distinguish the original frags from the
'tail' of the linear part. The first request therefore needs to be at
least ETH_HLEN bytes. So if an SKB's linear part starts too close to the
page boundary, the packet is lost.
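
To illustrate (the offset value below is made up): if the linear part
happens to start 6 bytes before the end of a 4 KiB page, the first request
can carry only 6 bytes, which is less than the 14-byte ethernet header:

        /* Illustration only, not part of the patch: a linear part that
         * starts 6 bytes before the end of a 4 KiB page leaves the first
         * tx request smaller than the ethernet header.
         */
        #include <stdio.h>

        #define PAGE_SIZE 4096UL
        #define ETH_HLEN  14UL  /* size of the ethernet header */

        int main(void)
        {
                unsigned long offset = 4090;                  /* hypothetical offset_in_page(skb->data) */
                unsigned long first_req = PAGE_SIZE - offset; /* bytes left in the page for the first request */

                if (first_req < ETH_HLEN)
                        printf("first request is only %lu bytes; netback drops the packet\n",
                               first_req);
                return 0;
        }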

I see two ways to fix the issue:
- Change the 'wire' protocol between netfront and netback to preserve the
  original SKB structure. We would have to add a flag indicating that a
  particular request is part of the original linear part rather than a
  frag, and netback would need to know the length of the linear part to
  pre-allocate memory.
- Avoid transmitting SKBs whose linear part starts too close to the page
  boundary. This seems preferable short-term and shouldn't bring
  significant performance degradation, as such packets are rare. That's
  what this patch does, using skb_copy().

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Acked-by: David Vrabel <david.vrabel@citrix.com>
---
- This is just a RESEND with David's ACK added.
---
 drivers/net/xen-netfront.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

Comments

David Miller Sept. 19, 2016, 2:26 a.m. UTC | #1
From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Fri, 16 Sep 2016 12:59:14 +0200

> @@ -595,6 +596,19 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	offset = offset_in_page(skb->data);
>  	len = skb_headlen(skb);
>  
> +	/* The first req should be at least ETH_HLEN size or the packet will be
> +	 * dropped by netback.
> +	 */
> +	if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
> +		nskb = skb_copy(skb, GFP_ATOMIC);
> +		if (!nskb)
> +			goto drop;
> +		dev_kfree_skb_any(skb);
> +		skb = nskb;
> +		page = virt_to_page(skb->data);
> +		offset = offset_in_page(skb->data);
> +	}
> +
>  	spin_lock_irqsave(&queue->tx_lock, flags);

I think you also have to recalculate 'len' in this case too, as
skb_headlen() will definitely be different for nskb.

In fact, I can't see how this code can work properly without that fix.
Vitaly Kuznetsov Sept. 19, 2016, 10:22 a.m. UTC | #2
David Miller <davem@davemloft.net> writes:

> From: Vitaly Kuznetsov <vkuznets@redhat.com>
> Date: Fri, 16 Sep 2016 12:59:14 +0200
>
>> @@ -595,6 +596,19 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>  	offset = offset_in_page(skb->data);
>>  	len = skb_headlen(skb);
>>  
>> +	/* The first req should be at least ETH_HLEN size or the packet will be
>> +	 * dropped by netback.
>> +	 */
>> +	if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
>> +		nskb = skb_copy(skb, GFP_ATOMIC);
>> +		if (!nskb)
>> +			goto drop;
>> +		dev_kfree_skb_any(skb);
>> +		skb = nskb;
>> +		page = virt_to_page(skb->data);
>> +		offset = offset_in_page(skb->data);
>> +	}
>> +
>>  	spin_lock_irqsave(&queue->tx_lock, flags);
>
> I think you also have to recalculate 'len' in this case too, as
> skb_headlen() will definitely be different for nskb.
>
> In fact, I can't see how this code can work properly without that fix.

Thank you for your feedback, David.

In my testing (even when I tried doing skb_copy() for all skbs
unconditionally) skb_headlen(nskb) always equals 'len', so I was under
the impression that both 'skb->len' and 'skb->data_len' remain the same
when we do skb_copy(). However, if you think there are cases where
headlen changes, I see no problem with re-calculating 'len', as it won't
bring any significant performance penalty compared to the already added
skb_copy().

I'll send 'v2'.
David Vrabel Sept. 19, 2016, 10:23 a.m. UTC | #3
On 19/09/16 11:22, Vitaly Kuznetsov wrote:
> David Miller <davem@davemloft.net> writes:
> 
>> From: Vitaly Kuznetsov <vkuznets@redhat.com>
>> Date: Fri, 16 Sep 2016 12:59:14 +0200
>>
>>> @@ -595,6 +596,19 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>>  	offset = offset_in_page(skb->data);
>>>  	len = skb_headlen(skb);
>>>  
>>> +	/* The first req should be at least ETH_HLEN size or the packet will be
>>> +	 * dropped by netback.
>>> +	 */
>>> +	if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
>>> +		nskb = skb_copy(skb, GFP_ATOMIC);
>>> +		if (!nskb)
>>> +			goto drop;
>>> +		dev_kfree_skb_any(skb);
>>> +		skb = nskb;
>>> +		page = virt_to_page(skb->data);
>>> +		offset = offset_in_page(skb->data);
>>> +	}
>>> +
>>>  	spin_lock_irqsave(&queue->tx_lock, flags);
>>
>> I think you also have to recalculate 'len' in this case too, as
>> skb_headlen() will definitely be different for nskb.
>>
>> In fact, I can't see how this code can work properly without that fix.
> 
> Thank you for your feedback, David.
> 
> In my testing (even when I tried doing skb_copy() for all skbs
> unconditionally) skb_headlen(nskb) always equals 'len', so I was under
> the impression that both 'skb->len' and 'skb->data_len' remain the same
> when we do skb_copy(). However, if you think there are cases where
> headlen changes, I see no problem with re-calculating 'len', as it won't
> bring any significant performance penalty compared to the already added
> skb_copy().

I think you can just move the len = skb_headlen(skb) assignment to after
the if; there's no need to recalculate it.

David
Vitaly Kuznetsov Sept. 19, 2016, 10:36 a.m. UTC | #4
David Vrabel <david.vrabel@citrix.com> writes:

> On 19/09/16 11:22, Vitaly Kuznetsov wrote:
>> David Miller <davem@davemloft.net> writes:
>> 
>>> From: Vitaly Kuznetsov <vkuznets@redhat.com>
>>> Date: Fri, 16 Sep 2016 12:59:14 +0200
>>>
>>>> @@ -595,6 +596,19 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>>>  	offset = offset_in_page(skb->data);
>>>>  	len = skb_headlen(skb);
>>>>  
>>>> +	/* The first req should be at least ETH_HLEN size or the packet will be
>>>> +	 * dropped by netback.
>>>> +	 */
>>>> +	if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
>>>> +		nskb = skb_copy(skb, GFP_ATOMIC);
>>>> +		if (!nskb)
>>>> +			goto drop;
>>>> +		dev_kfree_skb_any(skb);
>>>> +		skb = nskb;
>>>> +		page = virt_to_page(skb->data);
>>>> +		offset = offset_in_page(skb->data);
>>>> +	}
>>>> +
>>>>  	spin_lock_irqsave(&queue->tx_lock, flags);
>>>
>>> I think you also have to recalculate 'len' in this case too, as
>>> skb_headlen() will definitely be different for nskb.
>>>
>>> In fact, I can't see how this code can work properly without that fix.
>> 
>> Thank you for your feedback, David.
>>
>> In my testing (even when I tried doing skb_copy() for all skbs
>> unconditionally) skb_headlen(nskb) always equals 'len', so I was under
>> the impression that both 'skb->len' and 'skb->data_len' remain the same
>> when we do skb_copy(). However, if you think there are cases where
>> headlen changes, I see no problem with re-calculating 'len', as it won't
>> bring any significant performance penalty compared to the already added
>> skb_copy().
>
> I think you can just move the len = skb_headlen(skb) assignment to after
> the if; there's no need to recalculate it.

Sorry, I was too quick to send 'v2' and did it the other way around.
I'll send v3.
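
For reference, following David Vrabel's suggestion, v3 would presumably
just move the len assignment below the copy. A sketch based on the
discussion above (not the posted v3):

        offset = offset_in_page(skb->data);

        /* The first req should be at least ETH_HLEN size or the packet will be
         * dropped by netback.
         */
        if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
                nskb = skb_copy(skb, GFP_ATOMIC);
                if (!nskb)
                        goto drop;
                dev_kfree_skb_any(skb);
                skb = nskb;
                page = virt_to_page(skb->data);
                offset = offset_in_page(skb->data);
        }

        /* taken after the (possible) copy, so no recalculation is needed */
        len = skb_headlen(skb);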

Patch

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 96ccd4e..28c4a66 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -565,6 +565,7 @@  static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct netfront_queue *queue = NULL;
 	unsigned int num_queues = dev->real_num_tx_queues;
 	u16 queue_index;
+	struct sk_buff *nskb;
 
 	/* Drop the packet if no queues are set up */
 	if (num_queues < 1)
@@ -595,6 +596,19 @@  static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	offset = offset_in_page(skb->data);
 	len = skb_headlen(skb);
 
+	/* The first req should be at least ETH_HLEN size or the packet will be
+	 * dropped by netback.
+	 */
+	if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
+		nskb = skb_copy(skb, GFP_ATOMIC);
+		if (!nskb)
+			goto drop;
+		dev_kfree_skb_any(skb);
+		skb = nskb;
+		page = virt_to_page(skb->data);
+		offset = offset_in_page(skb->data);
+	}
+
 	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||