
xen-netfront: Fix handling packets on compound pages with skb_linearize

Message ID 1407778343-13622-1-git-send-email-zoltan.kiss@citrix.com
State Accepted, archived
Delegated to: David Miller

Commit Message

Zoltan Kiss Aug. 11, 2014, 5:32 p.m. UTC
There is a long-known problem with the netfront/netback interface: if the guest
tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
it gets dropped. The reason is that netback maps each of these slots to a frag in
the frags array, which is limited in size. Having so many slots has been possible
since compound pages were introduced, as the ring protocol slices them up into
individual (non-compound) page-aligned slots. The theoretical worst case
scenario looks like this (note, skbs are limited to 64 KB here):
linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping a page boundary,
using 2 slots
first 15 frags: 1 + PAGE_SIZE + 1 bytes long, with the first and last bytes at the
end and the beginning of a page respectively, therefore using 3 * 15 = 45 slots
last 2 frags: 1 + 1 bytes each, overlapping a page boundary, 2 * 2 = 4 slots
Although I don't think this 51-slot skb can really happen, we need a solution
which can deal with every scenario. In real life the limit is exceeded by only a
few slots, but that usually causes the TCP stream to stall, as the retry will
most likely have the same buffer layout.
This patch solves the problem by linearizing the packet. This is not the
fastest way, and it can fail much more easily, since it tries to allocate one big
linear area for the whole packet, but it is probably an order of magnitude simpler
than anything else. This code path is probably not hit very frequently anyway.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
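
As a sanity check on the arithmetic above, here is a minimal user-space sketch
of the same per-buffer rounding the driver applies to the linear area and to
each frag (the slots() helper below is illustrative, not driver code, and
PAGE_SIZE is assumed to be 4096):

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Slots consumed by one buffer of 'len' bytes whose first byte sits at
 * in-page offset 'off' -- the rounding xennet_start_xmit() applies. */
static unsigned long slots(unsigned long off, unsigned long len)
{
	return DIV_ROUND_UP(off + len, PAGE_SIZE);
}

int main(void)
{
	unsigned long end = PAGE_SIZE - 1;	/* first byte is a page's last byte */
	unsigned long n = 0;

	n += slots(end, PAGE_SIZE - 17 * 2);		/* linear buffer:   2 slots */
	n += 15 * slots(end, 1 + PAGE_SIZE + 1);	/* first 15 frags: 45 slots */
	n += 2 * slots(end, 1 + 1);			/* last 2 frags:    4 slots */

	/* The payload sums to exactly 65536 bytes, yet needs 51 slots. */
	printf("worst case: %lu slots\n", n);
	return 0;
}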


Comments

David Miller Aug. 11, 2014, 9:57 p.m. UTC | #1
From: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Mon, 11 Aug 2014 18:32:23 +0100

> [full commit message quoted above, trimmed]
>
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

Applied.

You may wish to now make your queue stop/wake point be MAX_SKB_FRAGS + 1 slots.
That way you will always abide by the netdev queue management rules, in that
if the queue is awake you will always be able to accept at least one more SKB.
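
A hedged sketch of that stop/wake discipline, assuming a hypothetical helper
tx_free_slots() that derives the number of free ring slots from the producer
and consumer pointers (the real driver's fields differ by version):

	/* In ->ndo_start_xmit(), after queuing this skb: stop the queue
	 * while fewer than MAX_SKB_FRAGS + 1 slots remain, so an awake
	 * queue can always take one more worst-case skb. */
	if (tx_free_slots(queue) < MAX_SKB_FRAGS + 1)
		netif_stop_queue(dev);

	/* In the TX completion handler: wake the queue once a full
	 * worst-case skb fits again. */
	if (netif_queue_stopped(dev) &&
	    tx_free_slots(queue) >= MAX_SKB_FRAGS + 1)
		netif_wake_queue(dev);

netif_stop_queue(), netif_wake_queue() and netif_queue_stopped() are the
standard netdev queue-control calls; only tx_free_slots() is invented here.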
Stefan Bader Dec. 1, 2014, 8:55 a.m. UTC | #2
On 11.08.2014 19:32, Zoltan Kiss wrote:
> [full commit message and tags quoted above, trimmed]

This does not seem to be marked explicitly as stable. Has someone already asked
David Miller to put it on his stable queue? IMO it qualifies quite well and the
actual change should be simple to pick/backport.

-Stefan

David Vrabel Dec. 1, 2014, 1:36 p.m. UTC | #3
On 01/12/14 08:55, Stefan Bader wrote:
> On 11.08.2014 19:32, Zoltan Kiss wrote:
>> [full commit message and tags quoted above, trimmed]
> 
> This does not seem to be marked explicitly as stable. Has someone already asked
> David Miller to put it on his stable queue? IMO it qualifies quite well and the
> actual change should be simple to pick/backport.

I think it's a candidate, yes.

Can you expand on the user visible impact of the bug this patch fixes?
I think it results in certain types of traffic not working (because the
domU always generates skb's with the problematic frag layout), but I
can't remember the details.

David
Zoltan Kiss Dec. 1, 2014, 1:59 p.m. UTC | #4
On 01/12/14 13:36, David Vrabel wrote:
> On 01/12/14 08:55, Stefan Bader wrote:
>> On 11.08.2014 19:32, Zoltan Kiss wrote:
>>> [full commit message and tags quoted above, trimmed]
>>
>> This does not seem to be marked explicitly as stable. Has someone already asked
>> David Miller to put it on his stable queue? IMO it qualifies quite well and the
>> actual change should be simple to pick/backport.
>
> I think it's a candidate, yes.
>
> Can you expand on the user visible impact of the bug this patch fixes?
> I think it results in certain types of traffic not working (because the
> domU always generates skb's with the problematic frag layout), but I
> can't remember the details.

Yes, this line in the commit message talks about it: "In real life the limit is 
exceeded by only a few slots, but that usually causes the TCP stream to stall, 
as the retry will most likely have the same buffer layout."
Maybe we can add what kind of traffic has triggered this so far; AFAIK NFS was 
one of them, and Stefan had another use case. But my memories of this are 
blurry.

Zoli
Stefan Bader Dec. 1, 2014, 2:13 p.m. UTC | #5
On 01.12.2014 14:59, Zoltan Kiss wrote:
> 
> 
> On 01/12/14 13:36, David Vrabel wrote:
>> On 01/12/14 08:55, Stefan Bader wrote:
>>> On 11.08.2014 19:32, Zoltan Kiss wrote:
>>>> [full commit message and tags quoted above, trimmed]
>>>
>>> This does not seem to be marked explicitly as stable. Has someone already asked
>>> David Miller to put it on his stable queue? IMO it qualifies quite well and the
>>> actual change should be simple to pick/backport.
>>
>> I think it's a candidate, yes.
>>
>> Can you expand on the user visible impact of the bug this patch fixes?
>> I think it results in certain types of traffic not working (because the
>> domU always generates skb's with the problematic frag layout), but I
>> can't remember the details.
> 
> Yes, this line in the commit message talks about it: "In real life the limit is
> exceeded by only a few slots, but that usually causes the TCP stream to stall,
> as the retry will most likely have the same buffer layout."
> Maybe we can add what kind of traffic has triggered this so far; AFAIK NFS was one
> of them, and Stefan had another use case. But my memories of this are blurry.

We had a report about a web app hitting packet losses. I suspect that was also
streaming something. As an easy trigger we found that redis-benchmark (part of
the redis key-value store) with a larger (IIRC 1 kB) payload would trigger the
fragmentation that exceeds the slot limit. Though I think it did not fail
outright but showed a performance drop instead (from memory, which also suffers
from losing detail).

-Stefan
Luis Henriques Dec. 8, 2014, 10:19 a.m. UTC | #6
On Mon, Dec 01, 2014 at 09:55:24AM +0100, Stefan Bader wrote:
> On 11.08.2014 19:32, Zoltan Kiss wrote:
> > [full commit message and tags quoted above, trimmed]
> 
> This does not seem to be marked explicitly as stable. Has someone already asked
> David Miller to put it on his stable queue? IMO it qualifies quite well and the
> actual change should be simple to pick/backport.
> 

Thank you Stefan, I'm queuing this for the next 3.16 kernel release.

Cheers,
--
Luís

David Vrabel Dec. 8, 2014, 11:11 a.m. UTC | #7
On 08/12/14 10:19, Luis Henriques wrote:
> On Mon, Dec 01, 2014 at 09:55:24AM +0100, Stefan Bader wrote:
>> On 11.08.2014 19:32, Zoltan Kiss wrote:
>>> [full commit message and tags quoted above, trimmed]
>>
>> This does not seem to be marked explicitly as stable. Has someone already asked
>> David Miller to put it on his stable queue? IMO it qualifies quite well and the
>> actual change should be simple to pick/backport.
>>
> 
> Thank you Stefan, I'm queuing this for the next 3.16 kernel release.

Don't backport this yet.  It's broken.  It produces malformed requests
and netback will report a fatal error and stop all traffic on the VIF.

David
Luis Henriques Dec. 9, 2014, 9:54 a.m. UTC | #8
On Mon, Dec 08, 2014 at 11:11:15AM +0000, David Vrabel wrote:
> On 08/12/14 10:19, Luis Henriques wrote:
> > On Mon, Dec 01, 2014 at 09:55:24AM +0100, Stefan Bader wrote:
> >> On 11.08.2014 19:32, Zoltan Kiss wrote:
> >>> [full commit message and tags quoted above, trimmed]
> >>
> >> This does not seem to be marked explicitly as stable. Has someone already asked
> >> David Miller to put it on his stable queue? IMO it qualifies quite well and the
> >> actual change should be simple to pick/backport.
> >>
> > 
> > Thank you Stefan, I'm queuing this for the next 3.16 kernel release.
> 
> Don't backport this yet.  It's broken.  It produces malformed requests
> and netback will report a fatal error and stop all traffic on the VIF.
> 
> David

Ok, thank you.  I've dropped it already.

Cheers,
--
Luís

Patch

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 055222b..23359ae 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -628,9 +628,10 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	slots = DIV_ROUND_UP(offset + len, PAGE_SIZE) +
 		xennet_count_skb_frag_slots(skb);
 	if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
-		net_alert_ratelimited(
-			"xennet: skb rides the rocket: %d slots\n", slots);
-		goto drop;
+		net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n",
+				    slots, skb->len);
+		if (skb_linearize(skb))
+			goto drop;
 	}
 
 	spin_lock_irqsave(&queue->tx_lock, flags);
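
For context, skb_linearize() returns 0 on success and a negative errno when the
single large (possibly high-order) allocation fails, which is why the new code
drops the packet only when linearization fails. The slot count itself comes from
xennet_count_skb_frag_slots(); a paraphrased sketch of that accounting follows
(accessors are the modern spellings, e.g. skb_frag_off(), so treat it as a
sketch rather than the exact body in this kernel version):

static int count_frag_slots_sketch(struct sk_buff *skb)
{
	int i, frags = skb_shinfo(skb)->nr_frags;
	int slots = 0;

	for (i = 0; i < frags; i++) {
		skb_frag_t *frag = skb_shinfo(skb)->frags + i;
		unsigned long size = skb_frag_size(frag);
		unsigned long offset = skb_frag_off(frag) & ~PAGE_MASK;

		/* Each frag pays one slot per page it touches, so a frag
		 * straddling page boundaries costs extra slots. */
		slots += DIV_ROUND_UP(offset + size, PAGE_SIZE);
	}
	return slots;
}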