
[v3] virtio: Work around frames incorrectly marked as gso

Message ID 20200224132550.2083-1-anton.ivanov@cambridgegreys.com
State Superseded
Series [v3] virtio: Work around frames incorrectly marked as gso

Commit Message

Anton Ivanov Feb. 24, 2020, 1:25 p.m. UTC
From: Anton Ivanov <anton.ivanov@cambridgegreys.com>

Some of the locally generated frames marked as GSO which
arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
fragments (data_len = 0) and length significantly shorter
than the MTU (752 in my experiments).

This is observed on raw sockets reading off vEth interfaces
in all 4.x and 5.x kernels. The frames are reported as
invalid, while they are in fact gso-less frames.

The easiest way to reproduce is to connect a User Mode
Linux instance to the host using the vector raw transport
and a vEth interface. Vector raw uses recvmmsg/sendmmsg
with virtio headers on af_packet sockets. When running iperf
between the UML and the host, UML regularly complains about
EINVAL return from recvmmsg.

This patch marks the vnet header as non-GSO instead of
reporting it as invalid.

Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
---
 include/linux/virtio_net.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
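
As background for the reproduction described in the commit message, the
following is a minimal userspace sketch of the kind of consumer involved: an
af_packet socket with PACKET_VNET_HDR enabled, drained in batches with
recvmmsg(). The interface name, batch and buffer sizes are illustrative
assumptions and this is not the actual UML vector transport code. A frame
that virtio_net_hdr_from_skb() rejects on the kernel side is what surfaces
here as EINVAL from recvmmsg().

/* sketch: af_packet + PACKET_VNET_HDR + recvmmsg receiver */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <errno.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <linux/virtio_net.h>

#define BATCH 64
#define FRAME 2048

int main(void)
{
	struct sockaddr_ll sll = { .sll_family = AF_PACKET,
				   .sll_protocol = htons(ETH_P_ALL) };
	static char bufs[BATCH][FRAME];
	struct iovec iov[BATCH];
	struct mmsghdr msgs[BATCH];
	int one = 1, fd, i, n;

	fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	sll.sll_ifindex = if_nametoindex("p-veth0");	/* assumed name */
	bind(fd, (struct sockaddr *)&sll, sizeof(sll));

	/* ask the kernel to prepend a struct virtio_net_hdr to each frame */
	setsockopt(fd, SOL_PACKET, PACKET_VNET_HDR, &one, sizeof(one));

	for (i = 0; i < BATCH; i++) {
		iov[i].iov_base = bufs[i];
		iov[i].iov_len = FRAME;
		memset(&msgs[i], 0, sizeof(msgs[i]));
		msgs[i].msg_hdr.msg_iov = &iov[i];
		msgs[i].msg_hdr.msg_iovlen = 1;
	}

	n = recvmmsg(fd, msgs, BATCH, 0, NULL);
	if (n < 0)
		perror("recvmmsg");	/* EINVAL when the vnet header cannot be built */

	for (i = 0; i < n; i++) {
		struct virtio_net_hdr *vh = (struct virtio_net_hdr *)bufs[i];

		printf("len=%u gso_type=%u gso_size=%u\n",
		       msgs[i].msg_len, vh->gso_type, vh->gso_size);
	}
	return 0;
}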

Comments

Michael S. Tsirkin Feb. 24, 2020, 2:25 p.m. UTC | #1
On Mon, Feb 24, 2020 at 01:25:50PM +0000, anton.ivanov@cambridgegreys.com wrote:
> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
> 
> Some of the locally generated frames marked as GSO which
> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
> fragments (data_len = 0) and length significantly shorter
> than the MTU (752 in my experiments).
> 
> This is observed on raw sockets reading off vEth interfaces
> in all 4.x and 5.x kernels. The frames are reported as
> invalid, while they are in fact gso-less frames.
> 
> The easiest way to reproduce is to connect a User Mode
> Linux instance to the host using the vector raw transport
> and a vEth interface. Vector raw uses recvmmsg/sendmmsg
> with virtio headers on af_packet sockets. When running iperf
> between the UML and the host, UML regularly complains about
> EINVAL return from recvmmsg.
> 
> This patch marks the vnet header as non-GSO instead of
> reporting it as invalid.
> 
> Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  include/linux/virtio_net.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
> index 0d1fe9297ac6..2c99c752cb20 100644
> --- a/include/linux/virtio_net.h
> +++ b/include/linux/virtio_net.h
> @@ -98,10 +98,11 @@ static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
>  					  bool has_data_valid,
>  					  int vlan_hlen)
>  {
> +	struct skb_shared_info *sinfo = skb_shinfo(skb);
> +
>  	memset(hdr, 0, sizeof(*hdr));   /* no info leak */
>  
> -	if (skb_is_gso(skb)) {
> -		struct skb_shared_info *sinfo = skb_shinfo(skb);
> +	if (skb_is_gso(skb) && sinfo->gso_type) {
>  
>  		/* This is a hint as to how much should be linear. */
>  		hdr->hdr_len = __cpu_to_virtio16(little_endian,
> -- 
> 2.20.1
Willem de Bruijn Feb. 24, 2020, 7:27 p.m. UTC | #2
On Mon, Feb 24, 2020 at 8:26 AM <anton.ivanov@cambridgegreys.com> wrote:
>
> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
>
> Some of the locally generated frames marked as GSO which
> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
> fragments (data_len = 0) and length significantly shorter
> than the MTU (752 in my experiments).

Do we understand how these packets are generated? Else it seems this
might be papering over a deeper problem.

The stack should not create GSO packets less than or equal to
skb_shinfo(skb)->gso_size. See for instance the check in
tcp_gso_segment after pulling the tcp header:

        mss = skb_shinfo(skb)->gso_size;
        if (unlikely(skb->len <= mss))
                goto out;

What is the gso_type, and does it include SKB_GSO_DODGY?
Anton Ivanov Feb. 24, 2020, 7:54 p.m. UTC | #3
On 24/02/2020 19:27, Willem de Bruijn wrote:
> On Mon, Feb 24, 2020 at 8:26 AM <anton.ivanov@cambridgegreys.com> wrote:
>>
>> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
>>
>> Some of the locally generated frames marked as GSO which
>> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
>> fragments (data_len = 0) and length significantly shorter
>> than the MTU (752 in my experiments).
> 
> Do we understand how these packets are generated? 

No, we have not been able to trace them.

The only thing we know is that this is specific to locally generated 
packets. Something arriving from the network does not show this.

> Else it seems this
> might be papering over a deeper problem.
> 
> The stack should not create GSO packets less than or equal to
> skb_shinfo(skb)->gso_size. See for instance the check in
> tcp_gso_segment after pulling the tcp header:
> 
>          mss = skb_shinfo(skb)->gso_size;
>          if (unlikely(skb->len <= mss))
>                  goto out;
> 
> What is the gso_type, and does it include SKB_GSO_DODGY?
> 


0 - not set.
Willem de Bruijn Feb. 24, 2020, 8:20 p.m. UTC | #4
On Mon, Feb 24, 2020 at 2:55 PM Anton Ivanov
<anton.ivanov@cambridgegreys.com> wrote:
>
> On 24/02/2020 19:27, Willem de Bruijn wrote:
> > On Mon, Feb 24, 2020 at 8:26 AM <anton.ivanov@cambridgegreys.com> wrote:
> >>
> >> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
> >>
> >> Some of the locally generated frames marked as GSO which
> >> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
> >> fragments (data_len = 0) and length significantly shorter
> >> than the MTU (752 in my experiments).
> >
> > Do we understand how these packets are generated?
>
> No, we have not been able to trace them.
>
> The only thing we know is that this is specific to locally generated
> packets. Something arriving from the network does not show this.
>
> > Else it seems this
> > might be papering over a deeper problem.
> >
> > The stack should not create GSO packets less than or equal to
> > skb_shinfo(skb)->gso_size. See for instance the check in
> > tcp_gso_segment after pulling the tcp header:
> >
> >          mss = skb_shinfo(skb)->gso_size;
> >          if (unlikely(skb->len <= mss))
> >                  goto out;
> >
> > What is the gso_type, and does it include SKB_GSO_DODGY?
> >
>
>
> 0 - not set.

Thanks for the follow-up details. Is this something that you can trigger easily?

An skb_dump() + dump_stack() when the packet socket gets such a
packet may point us to the root cause and fix that.
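
A sketch of that instrumentation (the exact placement inside packet_rcv() in
net/packet/af_packet.c is an assumption) could be as simple as:

	/* debug-only sketch, somewhere in packet_rcv(): dump frames that
	 * carry a gso_size but no gso_type, together with a backtrace */
	if (skb_shinfo(skb)->gso_size && !skb_shinfo(skb)->gso_type) {
		skb_dump(KERN_ERR, skb, false);
		dump_stack();
	}
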
Anton Ivanov Feb. 24, 2020, 8:59 p.m. UTC | #5
On 24/02/2020 20:20, Willem de Bruijn wrote:
> On Mon, Feb 24, 2020 at 2:55 PM Anton Ivanov
> <anton.ivanov@cambridgegreys.com> wrote:
>> On 24/02/2020 19:27, Willem de Bruijn wrote:
>>> On Mon, Feb 24, 2020 at 8:26 AM <anton.ivanov@cambridgegreys.com> wrote:
>>>> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
>>>>
>>>> Some of the locally generated frames marked as GSO which
>>>> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
>>>> fragments (data_len = 0) and length significantly shorter
>>>> than the MTU (752 in my experiments).
>>> Do we understand how these packets are generated?
>> No, we have not been able to trace them.
>>
>> The only thing we know is that this is specific to locally generated
>> packets. Something arriving from the network does not show this.
>>
>>> Else it seems this
>>> might be papering over a deeper problem.
>>>
>>> The stack should not create GSO packets less than or equal to
>>> skb_shinfo(skb)->gso_size. See for instance the check in
>>> tcp_gso_segment after pulling the tcp header:
>>>
>>>           mss = skb_shinfo(skb)->gso_size;
>>>           if (unlikely(skb->len <= mss))
>>>                   goto out;
>>>
>>> What is the gso_type, and does it include SKB_GSO_DODGY?
>>>
>>
>> 0 - not set.
> Thanks for the follow-up details. Is this something that you can trigger easily?

Yes, if you have a UML instance handy.

Running iperf between the host and a UML guest using raw socket 
transport triggers it immediately.

This is my UML command line:

vmlinux mem=2048M umid=OPX \
     ubd0=OPX-3.0-Work.img \
vec0:transport=raw,ifname=p-veth0,depth=128,gro=1,mac=92:9b:36:5e:38:69 \
     root=/dev/ubda ro con=null con0=null,fd:2 con1=fd:0,fd:1

p-veth0 is part of a vEth pair:

ip link add l-veth0 type veth peer name p-veth0 && ifconfig p-veth0 up

iperf server is on host, iperf -c in the guest.

>
> An skb_dump() + dump_stack() when the packet socket gets such a
> packet may point us to the root cause and fix that.

We tried dump stack, it was not informative - it was just the recvmmsg 
call stack coming from the UML until it hits the relevant recv bit in 
af_packet - it does not tell us where the packet is coming from.

Quoting from the message earlier in the thread:

[ 2334.180854] Call Trace:
[ 2334.181947]  dump_stack+0x5c/0x80
[ 2334.183021]  packet_recvmsg.cold+0x23/0x49
[ 2334.184063]  ___sys_recvmsg+0xe1/0x1f0
[ 2334.185034]  ? packet_poll+0xca/0x130
[ 2334.186014]  ? sock_poll+0x77/0xb0
[ 2334.186977]  ? ep_item_poll.isra.0+0x3f/0xb0
[ 2334.187936]  ? ep_send_events_proc+0xf1/0x240
[ 2334.188901]  ? dequeue_signal+0xdb/0x180
[ 2334.189848]  do_recvmmsg+0xc8/0x2d0
[ 2334.190728]  ? ep_poll+0x8c/0x470
[ 2334.191581]  __sys_recvmmsg+0x108/0x150
[ 2334.192441]  __x64_sys_recvmmsg+0x25/0x30
[ 2334.193346]  do_syscall_64+0x53/0x140
[ 2334.194262]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

>
Willem de Bruijn Feb. 24, 2020, 10:22 p.m. UTC | #6
On Mon, Feb 24, 2020 at 4:00 PM Anton Ivanov
<anton.ivanov@cambridgegreys.com> wrote:
>
> On 24/02/2020 20:20, Willem de Bruijn wrote:
> > On Mon, Feb 24, 2020 at 2:55 PM Anton Ivanov
> > <anton.ivanov@cambridgegreys.com> wrote:
> >> On 24/02/2020 19:27, Willem de Bruijn wrote:
> >>> On Mon, Feb 24, 2020 at 8:26 AM <anton.ivanov@cambridgegreys.com> wrote:
> >>>> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
> >>>>
> >>>> Some of the locally generated frames marked as GSO which
> >>>> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
> >>>> fragments (data_len = 0) and length significantly shorter
> >>>> than the MTU (752 in my experiments).
> >>> Do we understand how these packets are generated?
> >> No, we have not been able to trace them.
> >>
> >> The only thing we know is that this is specific to locally generated
> >> packets. Something arriving from the network does not show this.
> >>
> >>> Else it seems this
> >>> might be papering over a deeper problem.
> >>>
> >>> The stack should not create GSO packets less than or equal to
> >>> skb_shinfo(skb)->gso_size. See for instance the check in
> >>> tcp_gso_segment after pulling the tcp header:
> >>>
> >>>           mss = skb_shinfo(skb)->gso_size;
> >>>           if (unlikely(skb->len <= mss))
> >>>                   goto out;
> >>>
> >>> What is the gso_type, and does it include SKB_GSO_DODGY?
> >>>
> >>
> >> 0 - not set.
> > Thanks for the follow-up details. Is this something that you can trigger easily?
>
> Yes, if you have a UML instance handy.
>
> Running iperf between the host and a UML guest using raw socket
> transport triggers it immediately.
>
> This is my UML command line:
>
> vmlinux mem=2048M umid=OPX \
>      ubd0=OPX-3.0-Work.img \
> vec0:transport=raw,ifname=p-veth0,depth=128,gro=1,mac=92:9b:36:5e:38:69 \
>      root=/dev/ubda ro con=null con0=null,fd:2 con1=fd:0,fd:1
>
> p-right is a part of a vEth pair:
>
> ip link add l-veth0 type veth peer name p-veth0 && ifconfig p-veth0 up
>
> iperf server is on host, iperf -c in the guest.
>
> >
> > An skb_dump() + dump_stack() when the packet socket gets such a
> > packet may point us to the root cause and fix that.
>
> We tried dump stack, it was not informative - it was just the recvmmsg
> call stack coming from the UML until it hits the relevant recv bit in
> af_packet - it does not tell us where the packet is coming from.
>
> Quoting from the message earlier in the thread:
>
> [ 2334.180854] Call Trace:
> [ 2334.181947]  dump_stack+0x5c/0x80
> [ 2334.183021]  packet_recvmsg.cold+0x23/0x49
> [ 2334.184063]  ___sys_recvmsg+0xe1/0x1f0
> [ 2334.185034]  ? packet_poll+0xca/0x130
> [ 2334.186014]  ? sock_poll+0x77/0xb0
> [ 2334.186977]  ? ep_item_poll.isra.0+0x3f/0xb0
> [ 2334.187936]  ? ep_send_events_proc+0xf1/0x240
> [ 2334.188901]  ? dequeue_signal+0xdb/0x180
> [ 2334.189848]  do_recvmmsg+0xc8/0x2d0
> [ 2334.190728]  ? ep_poll+0x8c/0x470
> [ 2334.191581]  __sys_recvmmsg+0x108/0x150
> [ 2334.192441]  __x64_sys_recvmmsg+0x25/0x30
> [ 2334.193346]  do_syscall_64+0x53/0x140
> [ 2334.194262]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

That makes sense. skb_dump might show more interesting details about
the packet. From the previous thread, these are assumed to be TCP
packets?

I had missed the original thread. If the packet has

    sinfo(skb)->gso_size = 752.
    skb->len = 818

then this is a GSO packet. Even though UML will correctly process it
as a normal 818 B packet if psock_rcv pretends that it is, treating it
like that is not strictly correct. A related question is how the setup
arrived at that low MTU size, assuming that is not explicitly
configured that low.

As of commit 51466a7545b7 ("tcp: fill shinfo->gso_type at last
moment") tcp unconditionally sets gso_type, even for non gso packets.
So either this is not a tcp packet or the field gets zeroed somewhere
along the way. I could not quickly find a possible path to
skb_gso_reset or a raw write.

It may be useful to insert tests for this condition (skb_is_gso(skb)
&& !skb_shinfo(skb)->gso_type) that call skb_dump at other points in
the network stack. For instance in __ip_queue_xmit and
__dev_queue_xmit.
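
A sketch of such a test (the helper name and call sites are assumptions)
might be:

/* debug helper sketch: report skbs marked GSO but lacking a gso_type.
 * Intended to be called from candidate spots such as __ip_queue_xmit()
 * and __dev_queue_xmit() to narrow down where the type is lost. */
static inline void dbg_check_gso_type(struct sk_buff *skb, const char *where)
{
	if (skb_is_gso(skb) && !skb_shinfo(skb)->gso_type) {
		pr_warn("gso_size without gso_type at %s\n", where);
		skb_dump(KERN_WARNING, skb, false);
		dump_stack();
	}
}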

Since skb segmentation fails in tcp_gso_segment for such packets, it
may also be informative to disable TSO on the veth device and see if
the test fails.
Jason Wang Feb. 25, 2020, 4:02 a.m. UTC | #7
On 2020/2/25 6:22 AM, Willem de Bruijn wrote:
> On Mon, Feb 24, 2020 at 4:00 PM Anton Ivanov
> <anton.ivanov@cambridgegreys.com> wrote:
>> On 24/02/2020 20:20, Willem de Bruijn wrote:
>>> On Mon, Feb 24, 2020 at 2:55 PM Anton Ivanov
>>> <anton.ivanov@cambridgegreys.com> wrote:
>>>> On 24/02/2020 19:27, Willem de Bruijn wrote:
>>>>> On Mon, Feb 24, 2020 at 8:26 AM <anton.ivanov@cambridgegreys.com> wrote:
>>>>>> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
>>>>>>
>>>>>> Some of the locally generated frames marked as GSO which
>>>>>> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
>>>>>> fragments (data_len = 0) and length significantly shorter
>>>>>> than the MTU (752 in my experiments).
>>>>> Do we understand how these packets are generated?
>>>> No, we have not been able to trace them.
>>>>
>>>> The only thing we know is that this is specific to locally generated
>>>> packets. Something arriving from the network does not show this.
>>>>
>>>>> Else it seems this
>>>>> might be papering over a deeper problem.
>>>>>
>>>>> The stack should not create GSO packets less than or equal to
>>>>> skb_shinfo(skb)->gso_size. See for instance the check in
>>>>> tcp_gso_segment after pulling the tcp header:
>>>>>
>>>>>            mss = skb_shinfo(skb)->gso_size;
>>>>>            if (unlikely(skb->len <= mss))
>>>>>                    goto out;
>>>>>
>>>>> What is the gso_type, and does it include SKB_GSO_DODGY?
>>>>>
>>>> 0 - not set.
>>> Thanks for the follow-up details. Is this something that you can trigger easily?
>> Yes, if you have a UML instance handy.
>>
>> Running iperf between the host and a UML guest using raw socket
>> transport triggers it immediately.
>>
>> This is my UML command line:
>>
>> vmlinux mem=2048M umid=OPX \
>>       ubd0=OPX-3.0-Work.img \
>> vec0:transport=raw,ifname=p-veth0,depth=128,gro=1,mac=92:9b:36:5e:38:69 \
>>       root=/dev/ubda ro con=null con0=null,fd:2 con1=fd:0,fd:1
>>
>> p-right is a part of a vEth pair:
>>
>> ip link add l-veth0 type veth peer name p-veth0 && ifconfig p-veth0 up
>>
>> iperf server is on host, iperf -c in the guest.
>>
>>> An skb_dump() + dump_stack() when the packet socket gets such a
>>> packet may point us to the root cause and fix that.
>> We tried dump stack, it was not informative - it was just the recvmmsg
>> call stack coming from the UML until it hits the relevant recv bit in
>> af_packet - it does not tell us where the packet is coming from.
>>
>> Quoting from the message earlier in the thread:
>>
>> [ 2334.180854] Call Trace:
>> [ 2334.181947]  dump_stack+0x5c/0x80
>> [ 2334.183021]  packet_recvmsg.cold+0x23/0x49
>> [ 2334.184063]  ___sys_recvmsg+0xe1/0x1f0
>> [ 2334.185034]  ? packet_poll+0xca/0x130
>> [ 2334.186014]  ? sock_poll+0x77/0xb0
>> [ 2334.186977]  ? ep_item_poll.isra.0+0x3f/0xb0
>> [ 2334.187936]  ? ep_send_events_proc+0xf1/0x240
>> [ 2334.188901]  ? dequeue_signal+0xdb/0x180
>> [ 2334.189848]  do_recvmmsg+0xc8/0x2d0
>> [ 2334.190728]  ? ep_poll+0x8c/0x470
>> [ 2334.191581]  __sys_recvmmsg+0x108/0x150
>> [ 2334.192441]  __x64_sys_recvmmsg+0x25/0x30
>> [ 2334.193346]  do_syscall_64+0x53/0x140
>> [ 2334.194262]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> That makes sense. skb_dump might show more interesting details about
> the packet. From the previous thread, these are assumed to be TCP
> packets?
>
> I had missed the original thread. If the packet has
>
>      sinfo(skb)->gso_size = 752.
>      skb->len = 818
>
> then this is a GSO packet. Even though UML will correctly process it
> as a normal 818 B packet if psock_rcv pretends that it is, treating it
> like that is not strictly correct. A related question is how the setup
> arrived at that low MTU size, assuming that is not explicitly
> configured that low.
>
> As of commit 51466a7545b7 ("tcp: fill shinfo->gso_type at last
> moment") tcp unconditionally sets gso_type, even for non gso packets.
> So either this is not a tcp packet or the field gets zeroed somewhere
> along the way. I could not quickly find a possible path to
> skb_gso_reset or a raw write.
>
> It may be useful to insert tests for this condition (skb_is_gso(skb)
> && !skb_shinfo(skb)->gso_type) that call skb_dump at other points in
> the network stack. For instance in __ip_queue_xmit and
> __dev_queue_xmit.


+1

We have seen some customers hit this condition as well; it leads to an
over-MTU packet being queued by TAP, which crashes their buggy userspace
application.

We suspect it's the issue of wrong gso_type vs gso_size.

Thanks


>
> Since skb segmentation fails in tcp_gso_segment for such packets, it
> may also be informative to disable TSO on the veth device and see if
> the test fails.
>
Anton Ivanov Feb. 25, 2020, 7:43 a.m. UTC | #8
On 25/02/2020 04:02, Jason Wang wrote:
> 
> On 2020/2/25 6:22 AM, Willem de Bruijn wrote:
>> On Mon, Feb 24, 2020 at 4:00 PM Anton Ivanov
>> <anton.ivanov@cambridgegreys.com> wrote:
>>> On 24/02/2020 20:20, Willem de Bruijn wrote:
>>>> On Mon, Feb 24, 2020 at 2:55 PM Anton Ivanov
>>>> <anton.ivanov@cambridgegreys.com> wrote:
>>>>> On 24/02/2020 19:27, Willem de Bruijn wrote:
>>>>>> On Mon, Feb 24, 2020 at 8:26 AM <anton.ivanov@cambridgegreys.com> wrote:
>>>>>>> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
>>>>>>>
>>>>>>> Some of the locally generated frames marked as GSO which
>>>>>>> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
>>>>>>> fragments (data_len = 0) and length significantly shorter
>>>>>>> than the MTU (752 in my experiments).
>>>>>> Do we understand how these packets are generated?
>>>>> No, we have not been able to trace them.
>>>>>
>>>>> The only thing we know is that this is specific to locally generated
>>>>> packets. Something arriving from the network does not show this.
>>>>>
>>>>>> Else it seems this
>>>>>> might be papering over a deeper problem.
>>>>>>
>>>>>> The stack should not create GSO packets less than or equal to
>>>>>> skb_shinfo(skb)->gso_size. See for instance the check in
>>>>>> tcp_gso_segment after pulling the tcp header:
>>>>>>
>>>>>>            mss = skb_shinfo(skb)->gso_size;
>>>>>>            if (unlikely(skb->len <= mss))
>>>>>>                    goto out;
>>>>>>
>>>>>> What is the gso_type, and does it include SKB_GSO_DODGY?
>>>>>>
>>>>> 0 - not set.
>>>> Thanks for the follow-up details. Is this something that you can trigger easily?
>>> Yes, if you have a UML instance handy.
>>>
>>> Running iperf between the host and a UML guest using raw socket
>>> transport triggers it immediately.
>>>
>>> This is my UML command line:
>>>
>>> vmlinux mem=2048M umid=OPX \
>>>       ubd0=OPX-3.0-Work.img \
>>> vec0:transport=raw,ifname=p-veth0,depth=128,gro=1,mac=92:9b:36:5e:38:69 \
>>>       root=/dev/ubda ro con=null con0=null,fd:2 con1=fd:0,fd:1
>>>
>>> p-right is a part of a vEth pair:
>>>
>>> ip link add l-veth0 type veth peer name p-veth0 && ifconfig p-veth0 up
>>>
>>> iperf server is on host, iperf -c in the guest.
>>>
>>>> An skb_dump() + dump_stack() when the packet socket gets such a
>>>> packet may point us to the root cause and fix that.
>>> We tried dump stack, it was not informative - it was just the recvmmsg
>>> call stack coming from the UML until it hits the relevant recv bit in
>>> af_packet - it does not tell us where the packet is coming from.
>>>
>>> Quoting from the message earlier in the thread:
>>>
>>> [ 2334.180854] Call Trace:
>>> [ 2334.181947]  dump_stack+0x5c/0x80
>>> [ 2334.183021]  packet_recvmsg.cold+0x23/0x49
>>> [ 2334.184063]  ___sys_recvmsg+0xe1/0x1f0
>>> [ 2334.185034]  ? packet_poll+0xca/0x130
>>> [ 2334.186014]  ? sock_poll+0x77/0xb0
>>> [ 2334.186977]  ? ep_item_poll.isra.0+0x3f/0xb0
>>> [ 2334.187936]  ? ep_send_events_proc+0xf1/0x240
>>> [ 2334.188901]  ? dequeue_signal+0xdb/0x180
>>> [ 2334.189848]  do_recvmmsg+0xc8/0x2d0
>>> [ 2334.190728]  ? ep_poll+0x8c/0x470
>>> [ 2334.191581]  __sys_recvmmsg+0x108/0x150
>>> [ 2334.192441]  __x64_sys_recvmmsg+0x25/0x30
>>> [ 2334.193346]  do_syscall_64+0x53/0x140
>>> [ 2334.194262]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>> That makes sense. skb_dump might show more interesting details about
>> the packet. From the previous thread, these are assumed to be TCP
>> packets?
>>
>> I had missed the original thread. If the packet has
>>
>>      sinfo(skb)->gso_size = 752.
>>      skb->len = 818
>>
>> then this is a GSO packet. Even though UML will correctly process it
>> as a normal 818 B packet if psock_rcv pretends that it is, treating it
>> like that is not strictly correct. A related question is how the setup
>> arrived at that low MTU size, assuming that is not explicitly
>> configured that low.
>>
>> As of commit 51466a7545b7 ("tcp: fill shinfo->gso_type at last
>> moment") tcp unconditionally sets gso_type, even for non gso packets.
>> So either this is not a tcp packet or the field gets zeroed somewhere
>> along the way. I could not quickly find a possible path to
>> skb_gso_reset or a raw write.
>>
>> It may be useful to insert tests for this condition (skb_is_gso(skb)
>> && !skb_shinfo(skb)->gso_type) that call skb_dump at other points in
>> the network stack. For instance in __ip_queue_xmit and
>> __dev_queue_xmit.
> 
> 
> +1
> 
> We meet some customer hit such condition as well which lead over MTU packet to be queued by TAP which crashes their buggy userspace application.
> 
> We suspect it's the issue of wrong gso_type vs gso_size.

Well, we now have a test case where all the code is available and 100% under our control :)

Brgds,

> 
> Thanks
> 
> 
>>
>> Since skb segmentation fails in tcp_gso_segment for such packets, it
>> may also be informative to disable TSO on the veth device and see if
>> the test fails.
>>
> 
> 
> _______________________________________________
> linux-um mailing list
> linux-um@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-um
Anton Ivanov Feb. 25, 2020, 7:48 a.m. UTC | #9
On 24/02/2020 22:22, Willem de Bruijn wrote:
> On Mon, Feb 24, 2020 at 4:00 PM Anton Ivanov
> <anton.ivanov@cambridgegreys.com> wrote:
>>
>> On 24/02/2020 20:20, Willem de Bruijn wrote:
>>> On Mon, Feb 24, 2020 at 2:55 PM Anton Ivanov
>>> <anton.ivanov@cambridgegreys.com> wrote:
>>>> On 24/02/2020 19:27, Willem de Bruijn wrote:
>>>>> On Mon, Feb 24, 2020 at 8:26 AM <anton.ivanov@cambridgegreys.com> wrote:
>>>>>> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
>>>>>>
>>>>>> Some of the locally generated frames marked as GSO which
>>>>>> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
>>>>>> fragments (data_len = 0) and length significantly shorter
>>>>>> than the MTU (752 in my experiments).
>>>>> Do we understand how these packets are generated?
>>>> No, we have not been able to trace them.
>>>>
>>>> The only thing we know is that this is specific to locally generated
>>>> packets. Something arriving from the network does not show this.
>>>>
>>>>> Else it seems this
>>>>> might be papering over a deeper problem.
>>>>>
>>>>> The stack should not create GSO packets less than or equal to
>>>>> skb_shinfo(skb)->gso_size. See for instance the check in
>>>>> tcp_gso_segment after pulling the tcp header:
>>>>>
>>>>>            mss = skb_shinfo(skb)->gso_size;
>>>>>            if (unlikely(skb->len <= mss))
>>>>>                    goto out;
>>>>>
>>>>> What is the gso_type, and does it include SKB_GSO_DODGY?
>>>>>
>>>>
>>>> 0 - not set.
>>> Thanks for the follow-up details. Is this something that you can trigger easily?
>>
>> Yes, if you have a UML instance handy.
>>
>> Running iperf between the host and a UML guest using raw socket
>> transport triggers it immediately.
>>
>> This is my UML command line:
>>
>> vmlinux mem=2048M umid=OPX \
>>       ubd0=OPX-3.0-Work.img \
>> vec0:transport=raw,ifname=p-veth0,depth=128,gro=1,mac=92:9b:36:5e:38:69 \
>>       root=/dev/ubda ro con=null con0=null,fd:2 con1=fd:0,fd:1
>>
>> p-right is a part of a vEth pair:
>>
>> ip link add l-veth0 type veth peer name p-veth0 && ifconfig p-veth0 up
>>
>> iperf server is on host, iperf -c in the guest.
>>
>>>
>>> An skb_dump() + dump_stack() when the packet socket gets such a
>>> packet may point us to the root cause and fix that.
>>
>> We tried dump stack, it was not informative - it was just the recvmmsg
>> call stack coming from the UML until it hits the relevant recv bit in
>> af_packet - it does not tell us where the packet is coming from.
>>
>> Quoting from the message earlier in the thread:
>>
>> [ 2334.180854] Call Trace:
>> [ 2334.181947]  dump_stack+0x5c/0x80
>> [ 2334.183021]  packet_recvmsg.cold+0x23/0x49
>> [ 2334.184063]  ___sys_recvmsg+0xe1/0x1f0
>> [ 2334.185034]  ? packet_poll+0xca/0x130
>> [ 2334.186014]  ? sock_poll+0x77/0xb0
>> [ 2334.186977]  ? ep_item_poll.isra.0+0x3f/0xb0
>> [ 2334.187936]  ? ep_send_events_proc+0xf1/0x240
>> [ 2334.188901]  ? dequeue_signal+0xdb/0x180
>> [ 2334.189848]  do_recvmmsg+0xc8/0x2d0
>> [ 2334.190728]  ? ep_poll+0x8c/0x470
>> [ 2334.191581]  __sys_recvmmsg+0x108/0x150
>> [ 2334.192441]  __x64_sys_recvmmsg+0x25/0x30
>> [ 2334.193346]  do_syscall_64+0x53/0x140
>> [ 2334.194262]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> 
> That makes sense. skb_dump might show more interesting details about
> the packet.

I will add that and retest later today.

> From the previous thread, these are assumed to be TCP
> packets?

Yes

> 
> I had missed the original thread. If the packet has
> 
>      sinfo(skb)->gso_size = 752.
>      skb->len = 818
> 
> then this is a GSO packet. Even though UML will correctly process it
> as a normal 818 B packet if psock_rcv pretends that it is, treating it
> like that is not strictly correct. A related question is how the setup
> arrived at that low MTU size, assuming that is not explicitly
> configured that low.

The mtu on the interface is normal. I suspect it is one of the first packets
in the stream or something iperf uses for communication between the server and
the client which always ends up that size.

> 
> As of commit 51466a7545b7 ("tcp: fill shinfo->gso_type at last
> moment") tcp unconditionally sets gso_type, even for non gso packets.
> So either this is not a tcp packet or the field gets zeroed somewhere
> along the way. I could not quickly find a possible path to
> skb_gso_reset or a raw write.

Same. I have tried to trace a possible origin and I have not seen anything which may cause it.

> 
> It may be useful to insert tests for this condition (skb_is_gso(skb)
> && !skb_shinfo(skb)->gso_type) that call skb_dump at other points in
> the network stack. For instance in __ip_queue_xmit and
> __dev_queue_xmit.
> 
> Since skb segmentation fails in tcp_gso_segment for such packets, it
> may also be informative to disable TSO on the veth device and see if
> the test fails.

Ack.

>
Anton Ivanov Feb. 25, 2020, 9:40 a.m. UTC | #10
On 25/02/2020 07:48, Anton Ivanov wrote:
> 
> 
> On 24/02/2020 22:22, Willem de Bruijn wrote:
>> On Mon, Feb 24, 2020 at 4:00 PM Anton Ivanov
>> <anton.ivanov@cambridgegreys.com> wrote:
>>>
>>> On 24/02/2020 20:20, Willem de Bruijn wrote:
>>>> On Mon, Feb 24, 2020 at 2:55 PM Anton Ivanov
>>>> <anton.ivanov@cambridgegreys.com> wrote:
>>>>> On 24/02/2020 19:27, Willem de Bruijn wrote:
>>>>>> On Mon, Feb 24, 2020 at 8:26 AM <anton.ivanov@cambridgegreys.com> wrote:
>>>>>>> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
>>>>>>>
>>>>>>> Some of the locally generated frames marked as GSO which
>>>>>>> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
>>>>>>> fragments (data_len = 0) and length significantly shorter
>>>>>>> than the MTU (752 in my experiments).
>>>>>> Do we understand how these packets are generated?
>>>>> No, we have not been able to trace them.
>>>>>
>>>>> The only thing we know is that this is specific to locally generated
>>>>> packets. Something arriving from the network does not show this.
>>>>>
>>>>>> Else it seems this
>>>>>> might be papering over a deeper problem.
>>>>>>
>>>>>> The stack should not create GSO packets less than or equal to
>>>>>> skb_shinfo(skb)->gso_size. See for instance the check in
>>>>>> tcp_gso_segment after pulling the tcp header:
>>>>>>
>>>>>>            mss = skb_shinfo(skb)->gso_size;
>>>>>>            if (unlikely(skb->len <= mss))
>>>>>>                    goto out;
>>>>>>
>>>>>> What is the gso_type, and does it include SKB_GSO_DODGY?
>>>>>>
>>>>>
>>>>> 0 - not set.
>>>> Thanks for the follow-up details. Is this something that you can trigger easily?
>>>
>>> Yes, if you have a UML instance handy.
>>>
>>> Running iperf between the host and a UML guest using raw socket
>>> transport triggers it immediately.
>>>
>>> This is my UML command line:
>>>
>>> vmlinux mem=2048M umid=OPX \
>>>       ubd0=OPX-3.0-Work.img \
>>> vec0:transport=raw,ifname=p-veth0,depth=128,gro=1,mac=92:9b:36:5e:38:69 \
>>>       root=/dev/ubda ro con=null con0=null,fd:2 con1=fd:0,fd:1
>>>
>>> p-right is a part of a vEth pair:
>>>
>>> ip link add l-veth0 type veth peer name p-veth0 && ifconfig p-veth0 up
>>>
>>> iperf server is on host, iperf -c in the guest.
>>>
>>>>
>>>> An skb_dump() + dump_stack() when the packet socket gets such a
>>>> packet may point us to the root cause and fix that.
>>>
>>> We tried dump stack, it was not informative - it was just the recvmmsg
>>> call stack coming from the UML until it hits the relevant recv bit in
>>> af_packet - it does not tell us where the packet is coming from.
>>>
>>> Quoting from the message earlier in the thread:
>>>
>>> [ 2334.180854] Call Trace:
>>> [ 2334.181947]  dump_stack+0x5c/0x80
>>> [ 2334.183021]  packet_recvmsg.cold+0x23/0x49
>>> [ 2334.184063]  ___sys_recvmsg+0xe1/0x1f0
>>> [ 2334.185034]  ? packet_poll+0xca/0x130
>>> [ 2334.186014]  ? sock_poll+0x77/0xb0
>>> [ 2334.186977]  ? ep_item_poll.isra.0+0x3f/0xb0
>>> [ 2334.187936]  ? ep_send_events_proc+0xf1/0x240
>>> [ 2334.188901]  ? dequeue_signal+0xdb/0x180
>>> [ 2334.189848]  do_recvmmsg+0xc8/0x2d0
>>> [ 2334.190728]  ? ep_poll+0x8c/0x470
>>> [ 2334.191581]  __sys_recvmmsg+0x108/0x150
>>> [ 2334.192441]  __x64_sys_recvmmsg+0x25/0x30
>>> [ 2334.193346]  do_syscall_64+0x53/0x140
>>> [ 2334.194262]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>
>> That makes sense. skb_dump might show more interesting details about
>> the packet.
> 
> I will add that and retest later today.


skb len=818 headroom=2 headlen=818 tailroom=908
mac=(2,14) net=(16,0) trans=16
shinfo(txflags=0 nr_frags=0 gso(size=752 type=0 segs=1))
csum(0x100024 ip_summed=3 complete_sw=0 valid=0 level=0)
hash(0x0 sw=0 l4=0) proto=0x0800 pkttype=4 iif=0
sk family=17 type=3 proto=0

Deciphering the actual packet data gives a

TCP packet, ACK and PSH set.

The PSH flag looks like the only "interesting" thing about it in first read.

> 
>> From the previous thread, these are assumed to be TCP
>> packets?
> 
> Yes
> 
>>
>> I had missed the original thread. If the packet has
>>
>>      sinfo(skb)->gso_size = 752.
>>      skb->len = 818
>>
>> then this is a GSO packet. Even though UML will correctly process it
>> as a normal 818 B packet if psock_rcv pretends that it is, treating it
>> like that is not strictly correct. A related question is how the setup
>> arrived at that low MTU size, assuming that is not explicitly
>> configured that low.
> 
> The mtu on the interface is normal. I suspect it is one of the first packets
> in the stream or something iperf uses for communication between the server and
> the client which always ends up that size.
> 
>>
>> As of commit 51466a7545b7 ("tcp: fill shinfo->gso_type at last
>> moment") tcp unconditionally sets gso_type, even for non gso packets.
>> So either this is not a tcp packet or the field gets zeroed somewhere
>> along the way. I could not quickly find a possible path to
>> skb_gso_reset or a raw write.
> 
> Same. I have tried to trace a possible origin and I have not seen anything which may cause it.
> 
>>
>> It may be useful to insert tests for this condition (skb_is_gso(skb)
>> && !skb_shinfo(skb)->gso_type) that call skb_dump at other points in
>> the network stack. For instance in __ip_queue_xmit and
>> __dev_queue_xmit.
>>
>> Since skb segmentation fails in tcp_gso_segment for such packets, it
>> may also be informative to disable TSO on the veth device and see if
>> the test fails.
> 
> Ack.
> 
>>
>
Willem de Bruijn Feb. 25, 2020, 4:26 p.m. UTC | #11
> >>>> An skb_dump() + dump_stack() when the packet socket gets such a
> >>>> packet may point us to the root cause and fix that.
> >>>
> >>> We tried dump stack, it was not informative - it was just the recvmmsg
> >>> call stack coming from the UML until it hits the relevant recv bit in
> >>> af_packet - it does not tell us where the packet is coming from.
> >>>
> >>> Quoting from the message earlier in the thread:
> >>>
> >>> [ 2334.180854] Call Trace:
> >>> [ 2334.181947]  dump_stack+0x5c/0x80
> >>> [ 2334.183021]  packet_recvmsg.cold+0x23/0x49
> >>> [ 2334.184063]  ___sys_recvmsg+0xe1/0x1f0
> >>> [ 2334.185034]  ? packet_poll+0xca/0x130
> >>> [ 2334.186014]  ? sock_poll+0x77/0xb0
> >>> [ 2334.186977]  ? ep_item_poll.isra.0+0x3f/0xb0
> >>> [ 2334.187936]  ? ep_send_events_proc+0xf1/0x240
> >>> [ 2334.188901]  ? dequeue_signal+0xdb/0x180
> >>> [ 2334.189848]  do_recvmmsg+0xc8/0x2d0
> >>> [ 2334.190728]  ? ep_poll+0x8c/0x470
> >>> [ 2334.191581]  __sys_recvmmsg+0x108/0x150
> >>> [ 2334.192441]  __x64_sys_recvmmsg+0x25/0x30
> >>> [ 2334.193346]  do_syscall_64+0x53/0x140
> >>> [ 2334.194262]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> >>
> >> That makes sense. skb_dump might show more interesting details about
> >> the packet.
> >
> > I will add that and retest later today.
>
>
> skb len=818 headroom=2 headlen=818 tailroom=908
> mac=(2,14) net=(16,0) trans=16
> shinfo(txflags=0 nr_frags=0 gso(size=752 type=0 segs=1))
> csum(0x100024 ip_summed=3 complete_sw=0 valid=0 level=0)
> hash(0x0 sw=0 l4=0) proto=0x0800 pkttype=4 iif=0
> sk family=17 type=3 proto=0
>
> Deciphering the actual packet data gives a
>
> TCP packet, ACK and PSH set.
>
> The PSH flag looks like the only "interesting" thing about it in first read.

Thanks.

TCP always sets the PSH bit on a GSO packet as of commit
051ba67447de  ("tcp: force a PSH flag on TSO packets"), so that is
definitely informative.

The lower gso size might come from a path mtu probing depending on
tcp_base_mss, but that's definitely wild speculation. Increasing that
value to, say, 1024, could tell us.

In this case it may indeed not be a GSO packet, as 752 (the MSS) plus a
28 B TCP header including timestamps, a 20 B IPv4 header and a 14 B Eth
header adds up to 814 already.

Not sure what those 2 B between skb->data and mac_header are. Was this
captured inside packet_rcv? network_header and transport_header both
at 16B offset is also sketchy, but again may be an artifact of where
exactly this is being read.

Perhaps this is a segment of a larger GSO packet that is retransmitted
in part. Like an mtu probe or loss probe. See for instance this in
tcp_send_loss_probe for  how a single MSS is extracted:

       if ((pcount > 1) && (skb->len > (pcount - 1) * mss)) {
                if (unlikely(tcp_fragment(sk, TCP_FRAG_IN_RTX_QUEUE, skb,
                                          (pcount - 1) * mss, mss,
                                          GFP_ATOMIC)))
                        goto rearm_timer;
                skb = skb_rb_next(skb);
        }

Note that I'm not implicating this specific code. I don't see anything
wrong with it. Just an indication that a trace would be very
informative, as it could tell if any of these edge cases is being hit.
Anton Ivanov Feb. 26, 2020, 7:53 a.m. UTC | #12
On 25/02/2020 16:26, Willem de Bruijn wrote:
>>>>>> An skb_dump() + dump_stack() when the packet socket gets such a
>>>>>> packet may point us to the root cause and fix that.
>>>>>
>>>>> We tried dump stack, it was not informative - it was just the recvmmsg
>>>>> call stack coming from the UML until it hits the relevant recv bit in
>>>>> af_packet - it does not tell us where the packet is coming from.
>>>>>
>>>>> Quoting from the message earlier in the thread:
>>>>>
>>>>> [ 2334.180854] Call Trace:
>>>>> [ 2334.181947]  dump_stack+0x5c/0x80
>>>>> [ 2334.183021]  packet_recvmsg.cold+0x23/0x49
>>>>> [ 2334.184063]  ___sys_recvmsg+0xe1/0x1f0
>>>>> [ 2334.185034]  ? packet_poll+0xca/0x130
>>>>> [ 2334.186014]  ? sock_poll+0x77/0xb0
>>>>> [ 2334.186977]  ? ep_item_poll.isra.0+0x3f/0xb0
>>>>> [ 2334.187936]  ? ep_send_events_proc+0xf1/0x240
>>>>> [ 2334.188901]  ? dequeue_signal+0xdb/0x180
>>>>> [ 2334.189848]  do_recvmmsg+0xc8/0x2d0
>>>>> [ 2334.190728]  ? ep_poll+0x8c/0x470
>>>>> [ 2334.191581]  __sys_recvmmsg+0x108/0x150
>>>>> [ 2334.192441]  __x64_sys_recvmmsg+0x25/0x30
>>>>> [ 2334.193346]  do_syscall_64+0x53/0x140
>>>>> [ 2334.194262]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>>>
>>>> That makes sense. skb_dump might show more interesting details about
>>>> the packet.
>>>
>>> I will add that and retest later today.
>>
>>
>> skb len=818 headroom=2 headlen=818 tailroom=908
>> mac=(2,14) net=(16,0) trans=16
>> shinfo(txflags=0 nr_frags=0 gso(size=752 type=0 segs=1))
>> csum(0x100024 ip_summed=3 complete_sw=0 valid=0 level=0)
>> hash(0x0 sw=0 l4=0) proto=0x0800 pkttype=4 iif=0
>> sk family=17 type=3 proto=0
>>
>> Deciphering the actual packet data gives a
>>
>> TCP packet, ACK and PSH set.
>>
>> The PSH flag looks like the only "interesting" thing about it in first read.
> 
> Thanks.
> 
> TCP always sets the PSH bit on a GSO packet as of commit commit
> 051ba67447de  ("tcp: force a PSH flag on TSO packets"), so that is
> definitely informative.
> 
> The lower gso size might come from a path mtu probing depending on
> tcp_base_mss, but that's definitely wild speculation. Increasing that
> value to, say, 1024, could tell us.
> 
> In this case it may indeed not be a GSO packet. As 752 is the MSS + 28
> B TCP header including timestamp + 20 B IPv4 header + 14B Eth header.
> Which adds up to 814 already.
> 
> Not sure what those 2 B between skb->data and mac_header are. Was this
> captured inside packet_rcv? 

af_packet, packet_rcv

https://elixir.bootlin.com/linux/latest/source/net/packet/af_packet.c#L2026

> network_header and transport_header both
> at 16B offset is also sketchy, but again may be an artifact of where
> exactly this is being read.
> 
> Perhaps this is a segment of a larger GSO packet that is retransmitted
> in part. Like an mtu probe or loss probe. See for instance this in
> tcp_send_loss_probe for  how a single MSS is extracted:
> 
>         if ((pcount > 1) && (skb->len > (pcount - 1) * mss)) {
>                  if (unlikely(tcp_fragment(sk, TCP_FRAG_IN_RTX_QUEUE, skb,
>                                            (pcount - 1) * mss, mss,
>                                            GFP_ATOMIC)))
>                          goto rearm_timer;
>                  skb = skb_rb_next(skb);
>          }
> 
> Note that I'm not implicating this specific code. I don't see anything
> wrong with it. Just an indication that a trace would be very
> informative, as it could tell if any of these edge cases is being hit.

I will be honest, I have found it a bit difficult to trace.

At the point where this is detected, the packet is already in the vEth 
interface queue and is being read by recvmmsg on a raw socket.

The flags + gso size combination happened long before that - even before 
it was being placed in the queue.

What is clear so far is that while the packet has invalid 
gso_size/gso_type combination, it is an otherwise valid tcp frame.

I will stick the debug into is_gso (with a backtrace) instead and re-run 
it later today to see if this can pick it up elsewhere in the stack.
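
A minimal sketch of that debug change (assuming it goes into the skb_is_gso()
inline in include/linux/skbuff.h; rate limiting is omitted) would be:

/* include/linux/skbuff.h -- debug-only variant, a sketch */
static inline bool skb_is_gso(const struct sk_buff *skb)
{
	if (skb_shinfo(skb)->gso_size && !skb_shinfo(skb)->gso_type)
		dump_stack();	/* backtrace for the suspect combination */

	return skb_shinfo(skb)->gso_size;
}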

> 
> _______________________________________________
> linux-um mailing list
> linux-um@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-um
>
David Miller Feb. 26, 2020, 7:09 p.m. UTC | #13
From: anton.ivanov@cambridgegreys.com
Date: Mon, 24 Feb 2020 13:25:50 +0000

> From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
> 
> Some of the locally generated frames marked as GSO which
> arrive at virtio_net_hdr_from_skb() have no GSO_TYPE, no
> fragments (data_len = 0) and length significantly shorter
> than the MTU (752 in my experiments).
> 
> This is observed on raw sockets reading off vEth interfaces
> in all 4.x and 5.x kernels. The frames are reported as
> invalid, while they are in fact gso-less frames.
> 
> The easiest way to reproduce is to connect a User Mode
> Linux instance to the host using the vector raw transport
> and a vEth interface. Vector raw uses recvmmsg/sendmmsg
> with virtio headers on af_packet sockets. When running iperf
> between the UML and the host, UML regularly complains about
> EINVAL return from recvmmsg.
> 
> This patch marks the vnet header as non-GSO instead of
> reporting it as invalid.
> 
> Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>

I don't feel comfortable applying this until we know where these
weird frames are coming from and how they are created.

Please respin this patch once you know this information and make
sure to mention it in the commit log.

Thank you.

Patch

diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index 0d1fe9297ac6..2c99c752cb20 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -98,10 +98,11 @@  static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb,
 					  bool has_data_valid,
 					  int vlan_hlen)
 {
+	struct skb_shared_info *sinfo = skb_shinfo(skb);
+
 	memset(hdr, 0, sizeof(*hdr));   /* no info leak */
 
-	if (skb_is_gso(skb)) {
-		struct skb_shared_info *sinfo = skb_shinfo(skb);
+	if (skb_is_gso(skb) && sinfo->gso_type) {
 
 		/* This is a hint as to how much should be linear. */
 		hdr->hdr_len = __cpu_to_virtio16(little_endian,
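
For context, the effect of the guard above is that a frame carrying a stale
gso_size but no gso_type now takes the existing non-GSO path instead of
hitting the -EINVAL return for unknown GSO types. A condensed sketch of the
resulting flow (a paraphrase, not a verbatim copy of
include/linux/virtio_net.h):

	/* condensed sketch of virtio_net_hdr_from_skb() after this patch */
	memset(hdr, 0, sizeof(*hdr));			/* no info leak */

	if (skb_is_gso(skb) && sinfo->gso_type) {
		/* genuine GSO frame: fill hdr_len, gso_size and gso_type,
		 * or return -EINVAL for an unknown gso_type */
	} else {
		hdr->gso_type = VIRTIO_NET_HDR_GSO_NONE;
	}

	/* checksum fields are filled in independently of the branch above */
	return 0;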