diff mbox

[net] flow_dissector: Fix unaligned access in __skb_flow_dissector when used by eth_get_headlen

Message ID 20160130024556.6227.51798.stgit@localhost.localdomain
State Changes Requested, archived
Delegated to: David Miller
Headers show

Commit Message

Alexander Duyck Jan. 30, 2016, 2:49 a.m. UTC
This patch fixes an issue with unaligned accesses when using
eth_get_headlen on a page that was DMA aligned instead of being IP aligned.
The fact is that when checking the length we don't need to look at
the flow label, so we can reorder the checks to first test whether we
are supposed to gather the flow label and only then make the call to
actually get it.

Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
---

 net/core/flow_dissector.c |   15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

Comments

Eric Dumazet Jan. 30, 2016, 3:23 a.m. UTC | #1
On Fri, 2016-01-29 at 18:49 -0800, Alexander Duyck wrote:
> This patch fixes an issue with unaligned accesses when using
> eth_get_headlen on a page that was DMA aligned instead of being IP aligned.
> The fact is when trying to check the length we don't need to be looking at
> the flow label so we can reorder the checks to first check if we are
> supposed to gather the flow label and then make the call to actually get
> it.
> 
> Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
> Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
> ---


You did not quite follow the discussion, Alexander; we did not actually
reach a decision about this bug.

As we mentioned, there are other parts that need care.

key_keyid->keyid = *keyid;

Please address them, instead of having to 'wait' for the next crash.

BTW, even a memcpy(&key_addrs->v4addrs, &iph->saddr, 8) could crash, as
the compiler can certainly assume src and dst are 4 bytes aligned, and
could use word accesses when inlining memcpy() even on Sparc.

Apparently the compiler used by Sowmini is gentle.

Thanks.
David Miller Jan. 30, 2016, 3:35 a.m. UTC | #2
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Fri, 29 Jan 2016 19:23:54 -0800

> On Fri, 2016-01-29 at 18:49 -0800, Alexander Duyck wrote:
>> This patch fixes an issue with unaligned accesses when using
>> eth_get_headlen on a page that was DMA aligned instead of being IP aligned.
>> The fact is when trying to check the length we don't need to be looking at
>> the flow label so we can reorder the checks to first check if we are
>> supposed to gather the flow label and then make the call to actually get
>> it.
>> 
>> Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
>> Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
>> ---
> 
> 
> You did not quite follow the discussion Alexander, we did not exactly
> took a decision about this bug.
> 
> As we mentioned, there are other parts that need care.
> 
> key_keyid->keyid = *keyid;
> 
> Please address them, instead of having to 'wait' for the next crash.

Indeed, this is a more fundamental issue.

This change in and of itself isn't bad, and is probably a suitable
micro-optimization for net-next, but it doesn't fix the bug in
question.
Alexander H Duyck Jan. 30, 2016, 4:46 a.m. UTC | #3
On Fri, Jan 29, 2016 at 7:35 PM, David Miller <davem@davemloft.net> wrote:
> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Fri, 29 Jan 2016 19:23:54 -0800
>
>> On Fri, 2016-01-29 at 18:49 -0800, Alexander Duyck wrote:
>>> This patch fixes an issue with unaligned accesses when using
>>> eth_get_headlen on a page that was DMA aligned instead of being IP aligned.
>>> The fact is when trying to check the length we don't need to be looking at
>>> the flow label so we can reorder the checks to first check if we are
>>> supposed to gather the flow label and then make the call to actually get
>>> it.
>>>
>>> Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
>>> Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
>>> ---
>>
>>
>> You did not quite follow the discussion Alexander, we did not exactly
>> took a decision about this bug.
>>
>> As we mentioned, there are other parts that need care.
>>
>> key_keyid->keyid = *keyid;
>>
>> Please address them, instead of having to 'wait' for the next crash.

The key_id bit is not part of the "basic" keys.  The basic keys only
require a 2-byte alignment; it is when you start parsing to populate a
hash that you get into the wider fields.  As far as I know there
aren't any length fields that are wider than 2 bytes.

> Indeed, this is a more fundamental issue.
>
> This change in and of itself isn't bad, and is probably a suitable
> micro-optimization for net-next, but it doesn't fix the bug in
> question.

Actually it does fix the bug as reported.  The problem as reported was
for ixgbe using the function to parse the length.  In the case of
just getting the length my patch resolves the issue, as there are no
other accesses wider than 2 bytes used when *just* computing the
length.  This is something that will have to get addressed sooner or
later anyway, and at least with this patch the basic case of IPv6
over ixgbe won't cause any issues.

The key_id bit that Eric is pointing out is reached in a path that is
only executed if you are actually attempting to compute a hash.  Odds
are the fix for that is going to be much more complicated and extend
well outside this function, since it is a fundamental issue with VXLAN
and GRE TEB tunnels: either the inner or outer headers end up being
unaligned, unless you want to break them into two buffers so that they
can both be aligned at the same time.

Extending beyond this, has anyone taken a look at the transmit path for
these tunnels?  I would suspect we probably also have a number of
issues there, since either the inner or outer IP headers have to be
unaligned when we are setting those up.

- Alex
Sowmini Varadhan Jan. 30, 2016, 4:17 p.m. UTC | #4
On (01/29/16 19:23), Eric Dumazet wrote:
> BTW, even a memcpy(&key_addrs->v4addrs, &iph->saddr, 8) could crash, as
> the compiler can certainly assume src and dst are 4 bytes aligned, and
> could use word accesses when inlining memcpy() even on Sparc.
> 
> Apparently the compiler used by Sowmini is gentle.

One more subtlety that I missed until now...

eth_get_headlen passes in flow_keys_buf_dissector (NOT flow_keys_dissector!)

So FLOW_DISSECTOR_KEY_IPV4_ADDRS is not set, and this helps to dodge
the unaligned iph->saddr access.

But as others have pointed out, much of this code is brittle
because it's accessing the data before the driver has had a chance
to align things. The page_offset initialization of NET_IP_ALIGN,
with all its weaknesses, at least matches (in principle) the prescription
used for the xmit path.

--Sowmini
Tom Herbert Jan. 30, 2016, 5:43 p.m. UTC | #5
On Sat, Jan 30, 2016 at 8:17 AM, Sowmini Varadhan
<sowmini.varadhan@oracle.com> wrote:
> On (01/29/16 19:23), Eric Dumazet wrote:
>> BTW, even a memcpy(&key_addrs->v4addrs, &iph->saddr, 8) could crash, as
>> the compiler can certainly assume src and dst are 4 bytes aligned, and
>> could use word accesses when inlining memcpy() even on Sparc.
>>
>> Apparently the compiler used by Sowmini is gentle.
>
> One more subtlety that I missed until now..
>
> eth_get_headlen passes in flow_keys_buf_dissector (NOT flow_keys_dissector!)
>
> So FLOW_DISSECTOR_KEY_IPV4_ADDRS is not set, and this helps to dodge
> the unaligned iph->saddr access.
>
> But as others have pointed out, much of this code is brittle
> because it's accessing the data before the driver has had a chance
> to align things. The page_offset initialization of NET_IP_ALIGN,
> with all its weaknesses, at least matches (in principle) the prescription
> used for the xmit path.
>
That is not the only case.  If a GRE TEB packet is ever received and
the flow dissector is called for any reason (like skb_get_hash) there
are going to be problems, and this doesn't require GRE to even be
configured on the host.

I have a patch that changes the 32-bit accesses in flow_dissector to
use get_unaligned_be32.  I don't have access to any machines that care
about alignment (only x86).  Do you know if there is an alternate way
to test this other than running on an architecture like Sparc?

Thanks,
Tom

> --Sowmini
>
Alexander H Duyck Jan. 30, 2016, 6:36 p.m. UTC | #6
On Sat, Jan 30, 2016 at 8:17 AM, Sowmini Varadhan
<sowmini.varadhan@oracle.com> wrote:
> On (01/29/16 19:23), Eric Dumazet wrote:
>> BTW, even a memcpy(&key_addrs->v4addrs, &iph->saddr, 8) could crash, as
>> the compiler can certainly assume src and dst are 4 bytes aligned, and
>> could use word accesses when inlining memcpy() even on Sparc.
>>
>> Apparently the compiler used by Sowmini is gentle.
>
> One more subtlety that I missed until now..
>
> eth_get_headlen passes in flow_keys_buf_dissector (NOT flow_keys_dissector!)
>
> So FLOW_DISSECTOR_KEY_IPV4_ADDRS is not set, and this helps to dodge
> the unaligned iph->saddr access.

Right, this is a bug that was introduced in 4.2 when the check for !skb
was dropped in favor of the flow keys.

My thought was that the patch I provided fixes 4.3 and could be applied
to stable for 4.2.  The concerns about GRE TEB and VXLAN go much
further back than that, since, if I am not mistaken, the transmit path
for both is pretty much borked if we try to do GSO: either the inner
headers or the outer headers are not aligned, so assembling the header
will likely result in unaligned accesses.

> But as others have pointed out, much of this code is brittle
> because it's accessing the data before the driver has had a chance
> to align things. The page_offset initialization of NET_IP_ALIGN,
> with all its weaknesses, at least matches (in principle) the prescription
> used for the xmit path.

That is essentially an unfortunate side effect of this being a
function that has two very different roles to perform.  One goes
through and just determines the length of the frame, while the other
goes through and gathers data for computing a hash.

As for applying NET_IP_ALIGN to the page offset, I think it is a
horrible idea.  Basically it means we have to allocate at least 1K more
space than we need, since page sizes are powers of 2 and buffer sizes
in hardware are measured in 1K increments.

Adding the unaligned reads probably doesn't do much to fix the actual
issue, which is the unaligned headers (either inner or outer) on
architectures that don't support unaligned access.  If anything, what I
really think we should look at doing is coming up with a way to just
pad the inner headers by NET_IP_ALIGN bytes if the architecture
doesn't support unaligned access, and then output the inner and outer
headers as two separate buffers.

- Alex
Eric Dumazet Jan. 30, 2016, 7:26 p.m. UTC | #7
On Sat, 2016-01-30 at 10:36 -0800, Alexander Duyck wrote:

> As far as the NET_IP_ALIGN on the page offset I think it is a horrible
> idea.  Basically it means we have to allocate at least 1K more space
> than we need since page sizes are powers of 2, and buffer sizes in
> hardware are measured in 1K increments.

I thought Ethernet frames were something like ~1500 bytes.

Can't you program the NIC to not receive frames bigger than MTU + 18,
or at least 2046?

Of course, if the NIC is able to push 2048 bytes (instead of 2046 max),
using an offset of 2 is not going to work.
Alexander H Duyck Jan. 31, 2016, 1:13 a.m. UTC | #8
On Sat, Jan 30, 2016 at 11:26 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Sat, 2016-01-30 at 10:36 -0800, Alexander Duyck wrote:
>
>> As far as the NET_IP_ALIGN on the page offset I think it is a horrible
>> idea.  Basically it means we have to allocate at least 1K more space
>> than we need since page sizes are powers of 2, and buffer sizes in
>> hardware are measured in 1K increments.
>
> I thought Ethernet frames were something like ~1500 bytes.

They are, but if jumbo frames or LRO are enabled the NIC will
generate larger frames.

> Can't you program the NIC to not receive frames bigger than MTU + 18, or
> at least 2046 ?

Yes and no.  The NIC will already block frames larger than a fixed
size, but there are features like LRO that will let the NIC assemble a
larger frame.

> Of course, if the NIC is able to push 2048 bytes (instead of 2046 max),
> using an offset of 2 is not going to work.

The NIC hardware only allows us to set receive buffer sizes in 1K
increments.  I think fm10k may be an exception in that it may be able
to support 512-byte buffers, but otherwise igb and ixgbe are strictly
set in 1K increments.  So basically, if we did the NET_IP_ALIGN of the
page, we are limited to 1K buffers if we were to retain the current
page reuse, or we could go up to 3K if we were to allocate order-1
pages for receive buffers as currently occurs with FCoE.

I would really prefer to keep the pages DMA aligned, and the skb->data
IP aligned.  If nothing else it has the advantage of actually having
the proper alignment on all the headers if I only pull the outer
headers and leave the inner headers in the page.. :-/

- Alex
David Miller Jan. 31, 2016, 3:45 a.m. UTC | #9
From: Alexander Duyck <alexander.duyck@gmail.com>
Date: Sat, 30 Jan 2016 17:13:35 -0800

> The NIC hardware only allows us to set receive buffer sizes in 1K
> increments.  I think fm10k may be an exception in that it may be able
> to support 512 byte, but otherwise igb and ixgbe are strictly set in
> 1K increments.  So basically if we did the NET_IP_ALIGN of the page we
> are limited to 1K buffers if we were to retain the current page reuse,
> or we could go up to 3K if we were to allocate order 1 pages for
> receive buffers as currently occurs with FCoE.
> 
> I would really prefer to keep the pages DMA aligned, and the skb->data
> IP aligned.  If nothing else it has the advantage of actually having
> the proper alignment on all the headers if I only pull the outer
> headers and leave the inner headers in the page.. :-/

I am unconvinced by your arguments.  People getting hit by these
problems don't care.  For those not hitting the problem, nothing is
going to change.

x86 uses NET_IP_ALIGN==0, and therefore nothing will change for the
only platform people care a lot about.  PowerPC too.

For platforms with NET_IP_ALIGN==2 or similar, all of these cases
where a 1K extra buffer is allocated or _whatever_ other side effect
you think is such a huge deal...  trust me, it's _nothing_ compared to
the alternative:

1) Unaligned accesses

2) Constantly having to say "oops this case, oops that case, oops shit
   we have to handle this too" as we constantly patch up flow
   dissector and other paths.

Please, pass aligned buffers to the flow dissector, and let's put this
behind us, _provably_, for good.

Thanks.
Alexander H Duyck Jan. 31, 2016, 9:12 a.m. UTC | #10
On Sat, Jan 30, 2016 at 7:45 PM, David Miller <davem@davemloft.net> wrote:
> From: Alexander Duyck <alexander.duyck@gmail.com>
> Date: Sat, 30 Jan 2016 17:13:35 -0800
>
>> The NIC hardware only allows us to set receive buffer sizes in 1K
>> increments.  I think fm10k may be an exception in that it may be able
>> to support 512 byte, but otherwise igb and ixgbe are strictly set in
>> 1K increments.  So basically if we did the NET_IP_ALIGN of the page we
>> are limited to 1K buffers if we were to retain the current page reuse,
>> or we could go up to 3K if we were to allocate order 1 pages for
>> receive buffers as currently occurs with FCoE.
>>
>> I would really prefer to keep the pages DMA aligned, and the skb->data
>> IP aligned.  If nothing else it has the advantage of actually having
>> the proper alignment on all the headers if I only pull the outer
>> headers and leave the inner headers in the page.. :-/
>
> I am unconvinced by your arguments.  People getting hit by these
> arguments don't care.  For those not hitting the problem, nothing is
> going to change.
>
> x86 uses NET_IP_ALIGN==0, and therefore nothing will change for the
> only platform people care a lot about.  PowerPC too.

I think you may be getting confused about the scope of what you are
talking about.  The region pointed to by skb->data is IP aligned when
we pass it to the stack.  So the scope of this is very limited and
should only affect the flow dissector.  If needed I would be more than
happy to just fork the function and write up a version that only gets
the length of the headers and is fine with a 2-byte-aligned header.
This was one of my original misgivings about merging the functionality
of the flow dissector and the functions I was maintaining in the Intel
drivers.

> For platforms with NET_IP_ALIGN==2 or similar, all of these cases
> where 1K extra buffer is allocator or _whatever_ other side effect you
> think is such as huge deal...  trust me it's _nothing_ compared to the
> alternative:

We pay a pretty hefty truesize penalty for consuming any more memory
than we need to.  The changes as proposed would essentially kick our
truesize from 3K to 5K per frame.  It would double the space needed,
since it isn't as if I can just tell the page allocator to give me an
additional 2 bytes; it bumps me from 4K to 8K.  There would be a
noticeable performance drop, though I admit I don't know how big a
group would notice it, since we are talking about some lesser-used
architectures.

> 1) Unaligned accesses

Changing the driver doesn't fix the unaligned accesses problem.  It is
still there with the inner headers on GRE Transparent Ethernet Bridge.
That is why I believe Tom is still trying to come up with a solution
for it.

> 2) Constantly having to say "oops this case, oops that case, oops shit
>    we have to handle this too" as we constantly patch up flow
>    dissector and other paths.

The flow dissector is the only spot that has to deal with this.  In
case you forget, I had my own function that we were perfectly happy to
use with the Intel drivers, but I was told we needed to use this one
as we didn't want to duplicate functionality.

The fact is this function served its purpose perfectly fine until
commit b924933cbbfb "flow_dissector: introduce support for ipv6
addressses".  That is when the bug was introduced that is fixed by the
patch I submitted.

> Please, pass aligned buffers to the flow dissector, and let's put this
> behind us, _provably_, for good.

What is the point?  As pointed out earlier there is still an issue
with GRE TEB.  My patch is the surgical approach and fixes the
original issue and is easy to backport to the other kernels that have
this issue.  What you are asking me for is to go through and make
serious changes to igb, ixgbe, ixgbevf, fm10k, and mlx4.  If anything
I would think that fixing this in net with the least invasive patch
would be preferred.  If we want to go through and rewrite the receive
paths in multiple drivers to allow them to use IP aligned pages on
architectures that many of us don't have then we should focus on doing
that in net-next where we can prove out that we aren't making things
worse.

I'm standing by my original patch and still think it is the best fix
for net.  If you want me to go through and rearchitect the drivers so
that they only ever pass IP aligned page buffers to the flow dissector
then I really think it should wait until net-next as it isn't going to
be a small change set, and I don't have the systems or NICs here to be
able to test all of it.

> Thanks.

If nothing else we can discuss this further in Seville as I am just
not convinced changing the drivers is the right way to go about fixing
this, and I won't have the time to really get around to doing any of
it until I have come back from the conference.

Thanks.

- Alex
David Miller Jan. 31, 2016, 7:17 p.m. UTC | #11
From: Alexander Duyck <alexander.duyck@gmail.com>
Date: Sun, 31 Jan 2016 01:12:03 -0800

> If nothing else we can discuss this further in Seville as I am just
> not convinced changing the drivers is the right way to go about fixing
> this, and I won't have the time to really get around to doing any of
> it until I have come back from the conference.

Ok, let me think about this some more.

It's a shame that "put two garbage bytes into the FIFO at the
beginning of every frame" isn't a standard hw feature.  Or maybe
I misunderstand what's going on in all of these drivers...
Sowmini Varadhan Jan. 31, 2016, 9:23 p.m. UTC | #12
On (01/31/16 11:17), David Miller wrote:
> 
> > If nothing else we can discuss this further in Seville as I am just
> > not convinced changing the drivers is the right way to go about fixing
> > this, and I won't have the time to really get around to doing any of
> > it until I have come back from the conference.
> 
> Ok, let me think about this some more.
> 
> It's a shame that "put two garbage bytes into the FIFO at the
> beginning of every frame" isn't a standard hw feature.  Or maybe
> I misunderstand what's going on in all of these drivers...
> 

We can follow up on this at Seville, but if it is possible,
it might be better to not overload eth_get_headlen to also
do the flow extraction even before the buffer has been aligned
by the driver. Even the NET_IP_ALIGN algorithm will fail if we 
are dealing with IEEE 802.3 headers with LLC/SNAP etc.

And in addition to all this, anything we can do to help align
nested encapsulations  (i.e., Tom's concern) is also needed, and
maybe it will be cleaner to do that if we just split off deep packet
inspection for flow extraction away from eth_get_headlen.

--Sowmini
David Laight Feb. 1, 2016, 4:50 p.m. UTC | #13
From: David Miller
> Sent: 31 January 2016 03:45
> To: alexander.duyck@gmail.com
...
> > I would really prefer to keep the pages DMA aligned, and the skb->data
> > IP aligned.  If nothing else it has the advantage of actually having
> > the proper alignment on all the headers if I only pull the outer
> > headers and leave the inner headers in the page.. :-/
> 
> I am unconvinced by your arguments.  People getting hit by these
> arguments don't care.  For those not hitting the problem, nothing is
> going to change.
> 
> x86 uses NET_IP_ALIGN==0, and therefore nothing will change for the
> only platform people care a lot about.  PowerPC too.
> 
> For platforms with NET_IP_ALIGN==2 or similar, all of these cases
> where 1K extra buffer is allocator or _whatever_ other side effect you
> think is such as huge deal...  trust me it's _nothing_ compared to the
> alternative:
> 
> 1) Unaligned accesses

Remember that even if you do a 'realignment copy' of the IP header,
at some point the 'userdata' of the packet has to be accessed.
Mostly this will be with memcpy(), and you want that copy to be aligned.

I really can't believe just how many ethernet chips are now being designed
that can't write received frames to '4n+2' boundaries.
It isn't even a new problem. Sun fixed the sbus 'DMA' part so that it
would do sbus burst transfers to 4n+2 aligned buffers a long time ago...

	David
Alexander H Duyck Feb. 1, 2016, 6:18 p.m. UTC | #14
On Mon, Feb 1, 2016 at 8:50 AM, David Laight <David.Laight@aculab.com> wrote:
> From: David Miller
>> Sent: 31 January 2016 03:45
>> To: alexander.duyck@gmail.com
> ...
>> > I would really prefer to keep the pages DMA aligned, and the skb->data
>> > IP aligned.  If nothing else it has the advantage of actually having
>> > the proper alignment on all the headers if I only pull the outer
>> > headers and leave the inner headers in the page.. :-/
>>
>> I am unconvinced by your arguments.  People getting hit by these
>> arguments don't care.  For those not hitting the problem, nothing is
>> going to change.
>>
>> x86 uses NET_IP_ALIGN==0, and therefore nothing will change for the
>> only platform people care a lot about.  PowerPC too.
>>
>> For platforms with NET_IP_ALIGN==2 or similar, all of these cases
>> where 1K extra buffer is allocator or _whatever_ other side effect you
>> think is such as huge deal...  trust me it's _nothing_ compared to the
>> alternative:
>>
>> 1) Unaligned accesses
>
> Remember the even if you do a 'realignment copy' of the IP header,
> at some point the 'userdata' of the packet has to be accessed.
> Mostly this will be with memcpy() and you want that copy to be aligned.

The problem is, aligned by what?  For x86 anything other than 16 byte
alignment will require some realignment anyway since internally it is
essentially doing some SSE style copy based on rep movsb.  I'm sure it
is similar for other architectures that have moved on from 32b to 64b
in that they want to move by the largest available data unit and 2
byte alignment vs 4 byte doesn't make that much difference anymore.

The thing you have to keep in mind is that your standard TCP frame has
66 bytes of headers(14 Ethernet, 20 IP, 20 TCP, 12 TCP options).  This
means that you are crossing over into the next cache line by either 2
or 4 bytes depending on NET_IP_ALIGN.  On a system with an 8 byte long
it doesn't really make a difference one way or the other since it is
still unaligned in terms of the fastest possible memcpy.

> I really can't believe just how many ethernet chips are now being designed
> that can't write received frames to '4n+2' boundaries.
> It isn't even a new problem. Sun fixed the sbus 'DMA' part so that it would
> to sbus burst transfers to 4n+2 aligned buffers a long time ago...

The problem here is motivation.  It makes sense that Sun would want
their headers to be IP aligned since a Sparc processor still needs
that.  Most NIC vendors don't really care about the more obscure
architectures, and even 32 bit is falling out of favor.  The market
share is just too small and in many cases customers don't run Linux
OSes on them anyway so even driver support is minimal.  They mostly
care about x86_64, PowerPC, and 64 bit ARM.  And all 3 of those
architectures have efficient unaligned accesses.
David Laight Feb. 2, 2016, 9:47 a.m. UTC | #15
From: Alexander Duyck

> Sent: 01 February 2016 18:18
> >> 1) Unaligned accesses
> >
> > Remember the even if you do a 'realignment copy' of the IP header,
> > at some point the 'userdata' of the packet has to be accessed.
> > Mostly this will be with memcpy() and you want that copy to be aligned.
> 
> The problem is, aligned by what?  For x86 anything other than 16 byte
> alignment will require some realignment anyway since internally it is
> essentially doing some SSE style copy based on rep movsb.

That isn't 'SSE' based; there is probably some kind of barrel shifter
associated with the data cache that allows cache-line reads from
the 'wrong' address.

> I'm sure it
> is similar for other architectures that have moved on from 32b to 64b
> in that they want to move by the largest available data unit and 2
> byte alignment vs 4 byte doesn't make that much difference anymore.

There are still plenty of 32-bit systems out there,
including SoC systems with CPUs that fault on misaligned transfers on
the same silicon as Ethernet MACs that can't write 4n+2 aligned frames.

> The thing you have to keep in mind is that your standard TCP frame has
> 66 bytes of headers(14 Ethernet, 20 IP, 20 TCP, 12 TCP options).  This
> means that you are crossing over into the next cache line by either 2
> or 4 bytes depending on NET_IP_ALIGN.  On a system with an 8 byte long
> it doesn't really make a difference one way or the other since it is
> still unaligned in terms of the fastest possible memcpy.

Unless you align the frame on an 8n+6 boundary.

> > I really can't believe just how many ethernet chips are now being designed
> > that can't write received frames to '4n+2' boundaries.
> > It isn't even a new problem. Sun fixed the sbus 'DMA' part so that it would
> > to sbus burst transfers to 4n+2 aligned buffers a long time ago...
> 
> The problem here is motivation.  It makes sense that Sun would want
> their headers to be IP aligned since a Sparc processor still needs
> that.  Most NIC vendors don't really care about the more obscure
> architectures, and even 32 bit is falling out of favor.  The market
> share is just too small and in many cases customers don't run Linux
> OSes on them anyway so even driver support is minimal.  They mostly
> care about x86_64, PowerPC, and 64 bit ARM.  And all 3 of those
> architectures have efficient unaligned accesses.

With the possible exception of the x86-optimised 'rep movsb' on
very recent Intel CPUs, unaligned transfers are very likely
slower than aligned ones - just a lot faster than aligning the
copy in software.

My suspicion is that many hardware engineers are just not aware
of the problem, or just leave it as a 'software problem'.

It is the same family of 'problem' that adds more and more
protocol-specific 1's complement checksum offloading.

	David.
diff mbox

Patch

diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index d79699c9d1b9..f09114f8ee33 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -208,7 +208,6 @@  ip:
 	case htons(ETH_P_IPV6): {
 		const struct ipv6hdr *iph;
 		struct ipv6hdr _iph;
-		__be32 flow_label;
 
 ipv6:
 		iph = __skb_header_pointer(skb, nhoff, sizeof(_iph), data, hlen, &_iph);
@@ -230,17 +229,19 @@  ipv6:
 			key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
 		}
 
-		flow_label = ip6_flowlabel(iph);
-		if (flow_label) {
-			if (dissector_uses_key(flow_dissector,
-					       FLOW_DISSECTOR_KEY_FLOW_LABEL)) {
+		if (dissector_uses_key(flow_dissector,
+				       FLOW_DISSECTOR_KEY_FLOW_LABEL)) {
+			__be32 flow_label = ip6_flowlabel(iph);
+
+			if (flow_label) {
 				key_tags = skb_flow_dissector_target(flow_dissector,
 								     FLOW_DISSECTOR_KEY_FLOW_LABEL,
 								     target_container);
 				key_tags->flow_label = ntohl(flow_label);
+
+				if (flags & FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL)
+					goto out_good;
 			}
-			if (flags & FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL)
-				goto out_good;
 		}
 
 		if (flags & FLOW_DISSECTOR_F_STOP_AT_L3)