
[net-next,v3,1/5] net: Introduce NETIF_F_GRO_HW.

Message ID 1512800879-17934-2-git-send-email-michael.chan@broadcom.com
State Superseded, archived
Delegated to: David Miller
Series [net-next,v3,1/5] net: Introduce NETIF_F_GRO_HW.

Commit Message

Michael Chan Dec. 9, 2017, 6:27 a.m. UTC
Introduce NETIF_F_GRO_HW feature flag for NICs that support hardware
GRO.  With this flag, we can now independently turn on or off hardware
GRO when GRO is on.  Previously, drivers were using NETIF_F_GRO to
control hardware GRO and so it could not be independently turned on or
off without affecting GRO.

Hardware GRO (just like GRO) guarantees that packets can be re-segmented
by TSO/GSO to reconstruct the original packet stream.  It is a subset of
NETIF_F_GRO and depends on it, as well as NETIF_F_RXCSUM.

Since NETIF_F_GRO is not propagated between upper and lower devices,
NETIF_F_GRO_HW should follow suit since it is a subset of GRO.  In other
words, a lower device can independently have GRO/GRO_HW enabled or disabled
and no feature propagation is required.  This will preserve the current
GRO behavior.

Cc: Ariel Elior <Ariel.Elior@cavium.com>
Cc: everest-linux-l2@cavium.com
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 Documentation/networking/netdev-features.txt |  8 ++++++++
 include/linux/netdev_features.h              |  3 +++
 net/core/dev.c                               | 12 ++++++++++++
 net/core/ethtool.c                           |  1 +
 4 files changed, 24 insertions(+)
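
As a rough illustration of how a driver opts in to the new flag (not part of this series; the driver name is hypothetical), the driver advertises NETIF_F_GRO_HW in hw_features so that "rx-gro-hw" can be toggled with ethtool independently of "rx-gro":

#include <linux/netdevice.h>

/* Hypothetical driver "foo" -- a sketch, not code from this series. */
static void foo_init_features(struct net_device *dev)
{
	/* User-toggleable via "ethtool -K <iface> rx-gro-hw on|off". */
	dev->hw_features |= NETIF_F_GRO | NETIF_F_RXCSUM | NETIF_F_GRO_HW;
	/* Enabled by default. */
	dev->features |= NETIF_F_GRO | NETIF_F_RXCSUM | NETIF_F_GRO_HW;
}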

Comments

Alexander H Duyck Dec. 9, 2017, 6:50 p.m. UTC | #1
On Fri, Dec 8, 2017 at 10:27 PM, Michael Chan <michael.chan@broadcom.com> wrote:
> Introduce NETIF_F_GRO_HW feature flag for NICs that support hardware
> GRO.  With this flag, we can now independently turn on or off hardware
> GRO when GRO is on.  Previously, drivers were using NETIF_F_GRO to
> control hardware GRO and so it cannot be independently turned on or
> off without affecting GRO.
>
> Hardware GRO (just like GRO) guarantees that packets can be re-segmented
> by TSO/GSO to reconstruct the original packet stream.  It is a subset of
> NETIF_F_GRO and depends on it, as well as NETIF_F_RXCSUM.

So I would disagree with it being a subset of NETIF_F_GRO. If anything
it is an alternative to NETIF_F_GRO. It is performing GRO much earlier
at the device level in the case of hardware drivers. My concern is
this is probably going to end up applying to things other than just
hardware drivers though. For example what is to prevent this from
being applied to something like a virtio/tap interface? It seems like
this should be something that would be easy to implement in software.
In addition as I said in my earlier comments I think we should
probably look at using this new feature bit to indicate that we allow
GRO to occur at or below this device as opposed to just above it as
currently occurs with conventional GRO.

> Since NETIF_F_GRO is not propagated between upper and lower devices,
> NETIF_F_GRO_HW should follow suit since it is a subset of GRO.  In other
> words, a lower device can independent have GRO/GRO_HW enabled or disabled
> and no feature propagation is required.  This will preserve the current
> GRO behavior.

I'm going to back off on my requirement for you to handle propagation
since after spending a couple hours working on it I did find it was
more complex than I originally thought it would be. With that said,
however, I would want to see this feature implemented in such a way
that we can deal with propagating the bits in the future if we need to,
and that is what I am basing my comments on. My concern is that when
this ends up breaking we need to have a way to fix it, and I don't want
that fix to end up being to disable GRO across the board.

> Cc: Ariel Elior <Ariel.Elior@cavium.com>
> Cc: everest-linux-l2@cavium.com
> Signed-off-by: Michael Chan <michael.chan@broadcom.com>



> ---
>  Documentation/networking/netdev-features.txt |  8 ++++++++
>  include/linux/netdev_features.h              |  3 +++
>  net/core/dev.c                               | 12 ++++++++++++
>  net/core/ethtool.c                           |  1 +
>  4 files changed, 24 insertions(+)
>
> diff --git a/Documentation/networking/netdev-features.txt b/Documentation/networking/netdev-features.txt
> index 7413eb0..8f36527 100644
> --- a/Documentation/networking/netdev-features.txt
> +++ b/Documentation/networking/netdev-features.txt
> @@ -163,3 +163,11 @@ This requests that the NIC receive all possible frames, including errored
>  frames (such as bad FCS, etc).  This can be helpful when sniffing a link with
>  bad packets on it.  Some NICs may receive more packets if also put into normal
>  PROMISC mode.
> +
> +*  rx-gro-hw
> +
> +This requests that the NIC enables Hardware GRO (generic receive offload).
> +Hardware GRO is basically the exact reverse of TSO, and is generally
> +stricter than Hardware LRO.  A packet stream merged by Hardware GRO must
> +be re-segmentable by GSO or TSO back to the exact original packet stream.
> +Hardware GRO is dependent on GRO and RXCSUM.
> diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
> index b1b0ca7..db84c51 100644
> --- a/include/linux/netdev_features.h
> +++ b/include/linux/netdev_features.h
> @@ -78,6 +78,8 @@ enum {
>         NETIF_F_HW_ESP_TX_CSUM_BIT,     /* ESP with TX checksum offload */
>         NETIF_F_RX_UDP_TUNNEL_PORT_BIT, /* Offload of RX port for UDP tunnels */
>
> +       NETIF_F_GRO_HW_BIT,             /* Hardware Generic receive offload */
> +
>         /*
>          * Add your fresh new feature above and remember to update
>          * netdev_features_strings[] in net/core/ethtool.c and maybe
> @@ -97,6 +99,7 @@ enum {
>  #define NETIF_F_FRAGLIST       __NETIF_F(FRAGLIST)
>  #define NETIF_F_FSO            __NETIF_F(FSO)
>  #define NETIF_F_GRO            __NETIF_F(GRO)
> +#define NETIF_F_GRO_HW         __NETIF_F(GRO_HW)
>  #define NETIF_F_GSO            __NETIF_F(GSO)
>  #define NETIF_F_GSO_ROBUST     __NETIF_F(GSO_ROBUST)
>  #define NETIF_F_HIGHDMA                __NETIF_F(HIGHDMA)
> diff --git a/net/core/dev.c b/net/core/dev.c
> index e32cf5c..6ebd0e7 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -7424,6 +7424,18 @@ static netdev_features_t netdev_fix_features(struct net_device *dev,
>                 features &= ~dev->gso_partial_features;
>         }
>
> +       if (features & NETIF_F_GRO_HW) {
> +               /* Hardware GRO depends on GRO and RXCSUM. */
> +               if (!(features & NETIF_F_GRO)) {
> +                       netdev_dbg(dev, "Dropping NETIF_F_GSO_HW since no GRO feature.\n");
> +                       features &= ~NETIF_F_GRO_HW;
> +               }

I still disagree with this bit. I think GRO is a pure software
offload, whereas GRO_HW can represent either a software offload of
some sort occurring in or before the driver, or in the hardware.
Basically the difference between the two as I view it is where the GRO
is occurring. I would like to keep that distinction and make use of
it. As I mentioned before in the case of bonding we currently have no
way to disable GRO on the lower devices partially because GRO is a
pure software feature and always happens at each device along the way.
The nice thing about this new bit is the assumption that it is
pushing GRO to the lowest possible level and not triggering any side
effects like GRO currently does. I hope to use that logic with stacked
devices so that we could clear the bit and have it disable GRO,
GRO_HW, and LRO on all devices below the device that cleared it.

I think this linking of GRO and GRO_HW is something that would be
better served by moving it into the driver if you are wanting to
maintain the behavior of how this was previously linked to GRO. It
also makes it much easier to compare the performance of
GRO_HW against just a pure software GRO since you could then enable
them independently. Software GRO can come at a cost, and leaving it
enabled when you want to do it all in hardware is just adding a
penalty of sorts since I know for many of my routing tests I normally
disable GRO as it has a significant per-packet cost for small packet
workloads.

> +               if (!(features & NETIF_F_RXCSUM)) {
> +                       netdev_dbg(dev, "Dropping NETIF_F_GSO_HW since no RXCSUM feature.\n");
> +                       features &= ~NETIF_F_GRO_HW;
> +               }

So I was thinking about this. For LRO it makes sense to disable it in
the case of RXCSUM being disabled since most implementations leave the
Rx checksum mangled. However for GRO I am not sure it makes complete
sense. For software GRO we perform checksum validation in either
tcp4_gro_receive or tcp6_gro_receive. Why should the hardware
implementation behave differently? When a GRO frame is assembled the
checksum is converted to CHECKSUM_PARTIAL anyway even if Rx checksum
validation is disabled for the driver.

I think this may be a hardware/driver specific implementation detail
and may not be generic enough to belong here. Regular GRO works
without RXCSUM, so why should we make an exception for the hardware
based version? I know in the case of the Intel NICs we don't ever
actually disable the checksum validation, we just don't report the
result we were given from the hardware and hand all the frames up the
stack. If we were implementing something like this we could still
support GRO in the hardware without reporting Rx check-sums otherwise.

The alternative way to look at it is that we shouldn't support any
form of packet mangling at the driver level if RXCSUM is disabled. In
which case, I would say we should probably frame it that way and
disable both LRO and GRO_HW if RXCSUM is disabled, because this is
another case where this looks more like LRO than GRO.

> +       }
> +
>         return features;
>  }
>
> diff --git a/net/core/ethtool.c b/net/core/ethtool.c
> index f8fcf45..50a7920 100644
> --- a/net/core/ethtool.c
> +++ b/net/core/ethtool.c
> @@ -73,6 +73,7 @@ int ethtool_op_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info)
>         [NETIF_F_LLTX_BIT] =             "tx-lockless",
>         [NETIF_F_NETNS_LOCAL_BIT] =      "netns-local",
>         [NETIF_F_GRO_BIT] =              "rx-gro",
> +       [NETIF_F_GRO_HW_BIT] =           "rx-gro-hw",
>         [NETIF_F_LRO_BIT] =              "rx-lro",
>
>         [NETIF_F_TSO_BIT] =              "tx-tcp-segmentation",
> --
> 1.8.3.1
>
Michael Chan Dec. 9, 2017, 9:31 p.m. UTC | #2
On Sat, Dec 9, 2017 at 10:50 AM, Alexander Duyck
<alexander.duyck@gmail.com> wrote:
> On Fri, Dec 8, 2017 at 10:27 PM, Michael Chan <michael.chan@broadcom.com> wrote:
>> Introduce NETIF_F_GRO_HW feature flag for NICs that support hardware
>> GRO.  With this flag, we can now independently turn on or off hardware
>> GRO when GRO is on.  Previously, drivers were using NETIF_F_GRO to
>> control hardware GRO and so it cannot be independently turned on or
>> off without affecting GRO.
>>
>> Hardware GRO (just like GRO) guarantees that packets can be re-segmented
>> by TSO/GSO to reconstruct the original packet stream.  It is a subset of
>> NETIF_F_GRO and depends on it, as well as NETIF_F_RXCSUM.
>
> So I would disagree with it being a subset of NETIF_F_GRO. If anything
> it is an alternative to NETIF_F_GRO. It is performing GRO much earlier
> at the device level in the case of hardware drivers. My concern is
> this is probably going to end up applying to things other than just
> hardware drivers though. For example what is to prevent this from
> being applied to something like a virtio/tap interface? It seems like
> this should be something that would be easy to implement in software.

If you do it in software, it's called NETIF_F_GRO.  We already have
it.  The whole point of the new flag is that if the device has
software GRO enabled, and if the device supports GRO_HW, then we can
do a subset of GRO in hardware (hopefully faster).

> In addition as I said in my earlier comments I think we should
> probably look at using this new feature bit to indicate that we allow
> GRO to occur at or below this device as opposed to just above it as
> currently occurs with conventional GRO.
>
>> Since NETIF_F_GRO is not propagated between upper and lower devices,
>> NETIF_F_GRO_HW should follow suit since it is a subset of GRO.  In other
>> words, a lower device can independent have GRO/GRO_HW enabled or disabled
>> and no feature propagation is required.  This will preserve the current
>> GRO behavior.
>
> I'm going to back off on my requirement for you to handle propagation
> since after spending a couple hours working on it I did find it was
> more complex then I originally thought it would be. With that said
> however I would want to see this feature implemented in such a way
> that we can deal with propagating the bits in the future if we need to
> and that is what I am basing my comments on.

Nothing stops anyone from propagating the flag.  Just add
NETIF_F_GRO_HW to NETIF_F_UPPER_DISABLES and it will be propagated
just like LRO.
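
For reference, the change being described here would be a one-line tweak to the feature mask in include/linux/netdev_features.h (a sketch, not part of this series):

/* Disabling a feature in this mask on an upper (e.g. bonding) device
 * propagates the disable to the lower devices underneath it, as already
 * happens for LRO today.
 */
#define NETIF_F_UPPER_DISABLES	(NETIF_F_LRO | NETIF_F_GRO_HW)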


>> @@ -7424,6 +7424,18 @@ static netdev_features_t netdev_fix_features(struct net_device *dev,
>>                 features &= ~dev->gso_partial_features;
>>         }
>>
>> +       if (features & NETIF_F_GRO_HW) {
>> +               /* Hardware GRO depends on GRO and RXCSUM. */
>> +               if (!(features & NETIF_F_GRO)) {
>> +                       netdev_dbg(dev, "Dropping NETIF_F_GSO_HW since no GRO feature.\n");
>> +                       features &= ~NETIF_F_GRO_HW;
>> +               }
>
> I still disagree with this bit. I think GRO is a pure software
> offload, whereas GRO_HW can represent either a software offload of
> some sort occurring in or before the driver, or in the hardware.
> Basically the difference between the two as I view it is where the GRO
> is occurring. I would like to keep that distinction and make use of
> it. As I mentioned before in the case of bonding we currently have no
> way to disable GRO on the lower devices partially because GRO is a
> pure software feature and always happens at each device along the way.
> The nice thing about this new bit is the assumption is that it is
> pushing GRO to the lowest possible level and not triggering any side
> effects like GRO currently does. I hope to use that logic with stacked
> devices so that we could clear the bit and have it disable GRO,
> GRO_HW, and LRO on all devices below the device that cleared it.
>
> I think this linking of GRO and GRO_HW is something that would be
> better served by moving it into the driver if you are wanting to
> maintain the behavior of how this was previously linked to GRO.

If you insist, I can move this to the driver's ndo_fix_features().
But I feel it is much better to enforce this dependency system wide.
Once again, GRO_HW is hardware accelerated GRO and should depend on
it.
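
Moving the check into a driver's ndo_fix_features() would look roughly like the sketch below ("foo" is a hypothetical driver; this is not code from the series):

static netdev_features_t foo_fix_features(struct net_device *dev,
					  netdev_features_t features)
{
	/* Enforce the GRO_HW -> GRO dependency in the driver rather than
	 * in the core netdev_fix_features().
	 */
	if ((features & NETIF_F_GRO_HW) && !(features & NETIF_F_GRO))
		features &= ~NETIF_F_GRO_HW;

	return features;
}

/* Wired up as .ndo_fix_features = foo_fix_features in foo's net_device_ops. */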

> It
> also makes it so that it is much easier to compare the performance for
> GRO_HW against just a pure software GRO since you could then enable
> them independently. Software GRO can come at a cost, and leaving it
> enabled when you want to do it all in hardware is just adding a
> penalty of sorts since I know for many of my routing tests I normally
> disable GRO as it has a significant per-packet cost for small packet
> workloads.
>
>> +               if (!(features & NETIF_F_RXCSUM)) {
>> +                       netdev_dbg(dev, "Dropping NETIF_F_GSO_HW since no RXCSUM feature.\n");
>> +                       features &= ~NETIF_F_GRO_HW;
>> +               }
>
> So I was thinking about this. For LRO it makes sense to disable it in
> the case of RXCSUM being disabled since most implementations leave the
> Rx checksum mangled. However for GRO I am not sure it makes complete
> sense. For software GRO we perform checksum validation in either
> tcp4_gro_receive or tcp6_gro_receive. Why should the hardware
> implementation behave differently? When a GRO frame is assembled the
> checksum is converted to CHECKSUM_PARTIAL anyway even if Rx checksum
> validation is disabled for the driver.

This is a logical feature dependency that Yuval Mintz suggested.  For
GRO_HW to work, hardware must verify the checksum of a packet before
the packet can be merged.

So if the user does not want to do RXCSUM on this device for whatever
reason, it logically means that he also doesn't want to do GRO_HW with
implied RXCSUM performed on each packet that is merged.

So I agree with Yuval that this dependency makes sense.
Alexander H Duyck Dec. 9, 2017, 10:04 p.m. UTC | #3
On Sat, Dec 9, 2017 at 1:31 PM, Michael Chan <michael.chan@broadcom.com> wrote:
> On Sat, Dec 9, 2017 at 10:50 AM, Alexander Duyck
> <alexander.duyck@gmail.com> wrote:
>> On Fri, Dec 8, 2017 at 10:27 PM, Michael Chan <michael.chan@broadcom.com> wrote:
>>> Introduce NETIF_F_GRO_HW feature flag for NICs that support hardware
>>> GRO.  With this flag, we can now independently turn on or off hardware
>>> GRO when GRO is on.  Previously, drivers were using NETIF_F_GRO to
>>> control hardware GRO and so it cannot be independently turned on or
>>> off without affecting GRO.
>>>
>>> Hardware GRO (just like GRO) guarantees that packets can be re-segmented
>>> by TSO/GSO to reconstruct the original packet stream.  It is a subset of
>>> NETIF_F_GRO and depends on it, as well as NETIF_F_RXCSUM.
>>
>> So I would disagree with it being a subset of NETIF_F_GRO. If anything
>> it is an alternative to NETIF_F_GRO. It is performing GRO much earlier
>> at the device level in the case of hardware drivers. My concern is
>> this is probably going to end up applying to things other than just
>> hardware drivers though. For example what is to prevent this from
>> being applied to something like a virtio/tap interface? It seems like
>> this should be something that would be easy to implement in software.
>
> If you do it in software, it's called NETIF_F_GRO.  We already have
> it.  The whole point of the new flag is that if the device has
> software GRO enabled, and if the device supports GRO_HW, then we can
> do a subset of GRO in hardware (hopefully faster).

I can see what you are getting at. But GRO_HW with GRO stacked on top
of it won't necessarily be the fastest form of GRO. If you have a
GRO_HW implementation that is complete enough people may want to
disable Software GRO in order to avoid the extra overhead involved
with using it.

>> In addition as I said in my earlier comments I think we should
>> probably look at using this new feature bit to indicate that we allow
>> GRO to occur at or below this device as opposed to just above it as
>> currently occurs with conventional GRO.
>>
>>> Since NETIF_F_GRO is not propagated between upper and lower devices,
>>> NETIF_F_GRO_HW should follow suit since it is a subset of GRO.  In other
>>> words, a lower device can independent have GRO/GRO_HW enabled or disabled
>>> and no feature propagation is required.  This will preserve the current
>>> GRO behavior.
>>
>> I'm going to back off on my requirement for you to handle propagation
>> since after spending a couple hours working on it I did find it was
>> more complex then I originally thought it would be. With that said
>> however I would want to see this feature implemented in such a way
>> that we can deal with propagating the bits in the future if we need to
>> and that is what I am basing my comments on.
>
> Nothing stops anyone from propagating the flag.  Just add
> NETIF_F_GRO_HW to NETIF_F_UPPER_DISABLES and it will be propagated
> just like LRO.

Yes, but the problem then is it doesn't solve the secondary issue of
no way to propagate down the desire to disable GRO as well. That is
why I am thinking that the new bit could be used to indicate that we
want GRO to be supported either in the driver or below it instead of
only in "hardware". We are much better off with a generic solution and
that is why I think it might be better to use more of a pipeline or
staged type definition for this. Basically with GRO it occurs in the
GRO logic just after the driver hands off the packet, while this new
bit indicates that GRO happens somewhere before then. If we use that
definition for this then it becomes usable to deal with things like
the stacked devices problem where the stacked devices normally have
the GRO flag disabled since we don't want to run GRO multiple times,
but as a result the stacked devices have no way of saying they don't
want GRO. If we tweak the definition of this bit it solves that
problem since it would allow for us disabling GRO, GRO_HW, and LRO on
any devices below a given device.

>>> @@ -7424,6 +7424,18 @@ static netdev_features_t netdev_fix_features(struct net_device *dev,
>>>                 features &= ~dev->gso_partial_features;
>>>         }
>>>
>>> +       if (features & NETIF_F_GRO_HW) {
>>> +               /* Hardware GRO depends on GRO and RXCSUM. */
>>> +               if (!(features & NETIF_F_GRO)) {
>>> +                       netdev_dbg(dev, "Dropping NETIF_F_GSO_HW since no GRO feature.\n");
>>> +                       features &= ~NETIF_F_GRO_HW;
>>> +               }
>>
>> I still disagree with this bit. I think GRO is a pure software
>> offload, whereas GRO_HW can represent either a software offload of
>> some sort occurring in or before the driver, or in the hardware.
>> Basically the difference between the two as I view it is where the GRO
>> is occurring. I would like to keep that distinction and make use of
>> it. As I mentioned before in the case of bonding we currently have no
>> way to disable GRO on the lower devices partially because GRO is a
>> pure software feature and always happens at each device along the way.
>> The nice thing about this new bit is the assumption is that it is
>> pushing GRO to the lowest possible level and not triggering any side
>> effects like GRO currently does. I hope to use that logic with stacked
>> devices so that we could clear the bit and have it disable GRO,
>> GRO_HW, and LRO on all devices below the device that cleared it.
>>
>> I think this linking of GRO and GRO_HW is something that would be
>> better served by moving it into the driver if you are wanting to
>> maintain the behavior of how this was previously linked to GRO.
>
> If you insist, I can move this to the driver's ndo_fix_features().
> But I feel it is much better to enforce this dependency system wide.
> Once again, GRO_HW is hardware accelerated GRO and should depend on
> it.

The question I would have is why? Where is the dependency? I don't see
it. It is GRO in one spot and/or GRO in the other. The two don't
interact directly and I don't believe you can do software GRO on a
frame that has already been coalesced in hardware, and you take a
performance penalty for trying to offload in software what has
already been handled in hardware.

Also, when we start propagating this up to indicate it is active we
don't want to have the GRO dependency since it would just make things
more expensive since we only need to do GRO in software once.

>> It
>> also makes it so that it is much easier to compare the performance for
>> GRO_HW against just a pure software GRO since you could then enable
>> them independently. Software GRO can come at a cost, and leaving it
>> enabled when you want to do it all in hardware is just adding a
>> penalty of sorts since I know for many of my routing tests I normally
>> disable GRO as it has a significant per-packet cost for small packet
>> workloads.
>>
>>> +               if (!(features & NETIF_F_RXCSUM)) {
>>> +                       netdev_dbg(dev, "Dropping NETIF_F_GSO_HW since no RXCSUM feature.\n");
>>> +                       features &= ~NETIF_F_GRO_HW;
>>> +               }
>>
>> So I was thinking about this. For LRO it makes sense to disable it in
>> the case of RXCSUM being disabled since most implementations leave the
>> Rx checksum mangled. However for GRO I am not sure it makes complete
>> sense. For software GRO we perform checksum validation in either
>> tcp4_gro_receive or tcp6_gro_receive. Why should the hardware
>> implementation behave differently? When a GRO frame is assembled the
>> checksum is converted to CHECKSUM_PARTIAL anyway even if Rx checksum
>> validation is disabled for the driver.
>
> This is a logical feature dependency that Yuval Mintz suggested.  For
> GRO_HW to work, hardware must verify the checksum of a packet before
> the packet can be merged.
>
> So if the user does not want to do RXCSUM on this device for whatever
> reason, it logically means that he also doesn't want to do GRO_HW with
> implied RXCSUM performed on each packet that is merged.
>
> So I agree with Yuval that this dependency makes sense.

Okay then, so if we are going to go that route we may want to be
complete on this and just disable GRO_HW and LRO if RXCSUM is not
enabled. We might also want to add a comment indicating that we don't
support anything that might mangle a packet at the driver level if
RXCSUM is not enabled. Comments explaining all this would be a good
thing just to keep someone from grabbing GRO and lumping it in at some
point in the future.

I'm still working on trying to propagate the Rx checksum properly
since it should probably follow the same UPPER_DISABLES behavior as
LRO, but I will probably only have a few hours over the next week to
really work on any code and there end up being a number of stacked
drivers that have to be updated. I would be good with just flipping
this logic for now and if RXCSUM is not set, and GRO_HW (just noticed
the typo in your message) is set, then print your message and clear
the bit. I can probably come back later and add LRO once I get the
propagation bits worked out.

As far as patch 2 in the set it would probably be better to either
drop it and just accept it as an outstanding issue, or you could take
on the propagation problems with GRO_HW and RXCSUM since we really
need to get those solved in order for this functionality to fully
work.
Michael Chan Dec. 10, 2017, 6:40 a.m. UTC | #4
On Sat, Dec 9, 2017 at 2:04 PM, Alexander Duyck
<alexander.duyck@gmail.com> wrote:
> On Sat, Dec 9, 2017 at 1:31 PM, Michael Chan <michael.chan@broadcom.com> wrote:
>> On Sat, Dec 9, 2017 at 10:50 AM, Alexander Duyck
>> <alexander.duyck@gmail.com> wrote:
>>> So I would disagree with it being a subset of NETIF_F_GRO. If anything
>>> it is an alternative to NETIF_F_GRO. It is performing GRO much earlier
>>> at the device level in the case of hardware drivers. My concern is
>>> this is probably going to end up applying to things other than just
>>> hardware drivers though. For example what is to prevent this from
>>> being applied to something like a virtio/tap interface? It seems like
>>> this should be something that would be easy to implement in software.
>>
>> If you do it in software, it's called NETIF_F_GRO.  We already have
>> it.  The whole point of the new flag is that if the device has
>> software GRO enabled, and if the device supports GRO_HW, then we can
>> do a subset of GRO in hardware (hopefully faster).
>
> I can see what you are getting at. But GRO_HW with GRO stacked on top
> of it won't necessarily be the fastest form of GRO. If you have a
> GRO_HW implementation that is complete enough people may want to
> disable Software GRO in order to avoid the extra overhead involved
> with using it.

It is possible that if you have incoming packets 1, 2, 3, 4, 5 for a
TCP connection, HW_GRO can aggregate packets 1, 2, 3, but cannot
aggregate packets 4 and 5 due to hardware resource limitation.
Software GRO aggregates 4 and 5.  So it works well together.

>>> I'm going to back off on my requirement for you to handle propagation
>>> since after spending a couple hours working on it I did find it was
>>> more complex then I originally thought it would be. With that said
>>> however I would want to see this feature implemented in such a way
>>> that we can deal with propagating the bits in the future if we need to
>>> and that is what I am basing my comments on.
>>
>> Nothing stops anyone from propagating the flag.  Just add
>> NETIF_F_GRO_HW to NETIF_F_UPPER_DISABLES and it will be propagated
>> just like LRO.
>
> Yes, but the problem then is it doesn't solve the secondary issue of
> no way to propagate down the desire to disable GRO as well. That is
> why I am thinking that the new bit could be used to indicate that we
> want GRO to be supported either in the driver or below it instead of
> only in "hardware". We are much better off with a generic solution and
> that is why I think it might be better to use more of a pipeline or
> staged type definition for this. Basically with GRO it occurs in the
> GRO logic just after the driver hands off the packet, while this new
> bit indicates that GRO happens somewhere before then. If we use that
> definition for this then it becomes usable to deal with things like
> the stacked devices problem where the stacked devices normally have
> the GRO flag disabled since we don't want to run GRO multiple times,
> but as a result the stacked devices have no way of saying they don't
> want GRO. If we tweak the definition of this bit it solves that
> problem since it would allow for us disabling GRO, GRO_HW, and LRO on
> any devices below a given device.

I just don't follow your logic.  First of all, GRO on an upper device
doesn't mean that we are doing GRO on the upper device.  The bonding
driver cannot do GRO because it doesn't call napi_gro_receive().  GRO
always happens on the lower device.  Propagation of GRO can only mean
that if GRO is set on the upper device, GRO is propagated and allowed
on lower devices.  Nothing stops you from doing that if you want to do
that.

>>> I still disagree with this bit. I think GRO is a pure software
>>> offload, whereas GRO_HW can represent either a software offload of
>>> some sort occurring in or before the driver, or in the hardware.
>>> Basically the difference between the two as I view it is where the GRO
>>> is occurring. I would like to keep that distinction and make use of
>>> it. As I mentioned before in the case of bonding we currently have no
>>> way to disable GRO on the lower devices partially because GRO is a
>>> pure software feature and always happens at each device along the way.
>>> The nice thing about this new bit is the assumption is that it is
>>> pushing GRO to the lowest possible level and not triggering any side
>>> effects like GRO currently does. I hope to use that logic with stacked
>>> devices so that we could clear the bit and have it disable GRO,
>>> GRO_HW, and LRO on all devices below the device that cleared it.
>>>
>>> I think this linking of GRO and GRO_HW is something that would be
>>> better served by moving it into the driver if you are wanting to
>>> maintain the behavior of how this was previously linked to GRO.
>>
>> If you insist, I can move this to the driver's ndo_fix_features().
>> But I feel it is much better to enforce this dependency system wide.
>> Once again, GRO_HW is hardware accelerated GRO and should depend on
>> it.
>
> The question I would have is why? Where is the dependency? I don't see
> it. It is GRO in one spot and/or GRO in the other. The two don't
> interract directly and I don't believe you can do software GRO on a
> frame that has already been coalesced in hardware,

Right.  But hardware can do a series of frames and software can do a
different series of frames that have not been aggregated.

>> This is a logical feature dependency that Yuval Mintz suggested.  For
>> GRO_HW to work, hardware must verify the checksum of a packet before
>> the packet can be merged.
>>
>> So if the user does not want to do RXCSUM on this device for whatever
>> reason, it logically means that he also doesn't want to do GRO_HW with
>> implied RXCSUM performed on each packet that is merged.
>>
>> So I agree with Yuval that this dependency makes sense.
>
> Okay then, so if we are going to go that route we may want to be
> complete on this and just disable GRO_HW and LRO if RXCSUM is not
> enabled. We might also want to add a comment indicating that we don't
> support anything that might mangle a packet at the driver level if
> RXCSUM is not enabled. Comments explaining all this would be a good
> thing just to keep someone from grabbing GRO and lumping it in at some
> point in the future.
>
> I'm still working on trying to propagate the Rx checksum properly
> since it should probably follow the same UPPER_DISABLES behavior as
> LRO, but I will probably only have a few hours over the next week to
> really work on any code and there end up being a number of stacked
> drivers that have to be updated. I would be good with just flipping
> this logic for now and if RXCSUM is not set, and GRO_HW (just noticed
> the typo in your message) is set, then print your message and clear
> the bit. I can probably come back later and add LRO once I get the
> propagation bits worked out.

Just fix the netdev_dbg() typo, right?  I don't understand what you
mean by flipping the logic.  It's the same whether you check RXCSUM
first or GRO_HW first.

Maybe you meant to put the RXCSUM check in the outer if statement so
that someone could add more inner checks?  OK, I think that's what you
meant.
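
Restructured that way, the hunk in netdev_fix_features() would look roughly as follows (a sketch of the direction for v4, not the final code):

	if (!(features & NETIF_F_RXCSUM)) {
		/* NETIF_F_GRO_HW implies RXCSUM, since every packet that is
		 * merged must first have its checksum verified by hardware.
		 * Further RXCSUM-dependent features (e.g. LRO) could later be
		 * added under this same outer check.
		 */
		if (features & NETIF_F_GRO_HW) {
			netdev_dbg(dev, "Dropping NETIF_F_GRO_HW since no RXCSUM feature.\n");
			features &= ~NETIF_F_GRO_HW;
		}
	}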

>
> As far as patch 2 in the set it would probably be better to either
> drop it and just accept it as an outstanding issue, or you could take
> on the propagation problems with GRO_HW and RXCSUM since we really
> need to get those solved in order for this functionality to fully
> work.

We need patch #2 otherwise generic GRO won't work on these 3 drivers.
I don't think I fully understand your concerns about propagation.  To
me propagation is just a usage model where an upper device will
control the common features of lower devices.  It is more convenient
to have propagation, but requires upper devices to be aware of all
features that propagate (GRO, RXCSUM).  Without propagation, it is
still fine.
Alexander H Duyck Dec. 10, 2017, 5:02 p.m. UTC | #5
On Sat, Dec 9, 2017 at 10:40 PM, Michael Chan <michael.chan@broadcom.com> wrote:
> On Sat, Dec 9, 2017 at 2:04 PM, Alexander Duyck
> <alexander.duyck@gmail.com> wrote:
>> On Sat, Dec 9, 2017 at 1:31 PM, Michael Chan <michael.chan@broadcom.com> wrote:
>>> On Sat, Dec 9, 2017 at 10:50 AM, Alexander Duyck
>>> <alexander.duyck@gmail.com> wrote:
>>>> So I would disagree with it being a subset of NETIF_F_GRO. If anything
>>>> it is an alternative to NETIF_F_GRO. It is performing GRO much earlier
>>>> at the device level in the case of hardware drivers. My concern is
>>>> this is probably going to end up applying to things other than just
>>>> hardware drivers though. For example what is to prevent this from
>>>> being applied to something like a virtio/tap interface? It seems like
>>>> this should be something that would be easy to implement in software.
>>>
>>> If you do it in software, it's called NETIF_F_GRO.  We already have
>>> it.  The whole point of the new flag is that if the device has
>>> software GRO enabled, and if the device supports GRO_HW, then we can
>>> do a subset of GRO in hardware (hopefully faster).
>>
>> I can see what you are getting at. But GRO_HW with GRO stacked on top
>> of it won't necessarily be the fastest form of GRO. If you have a
>> GRO_HW implementation that is complete enough people may want to
>> disable Software GRO in order to avoid the extra overhead involved
>> with using it.
>
> It is possible that if you have incoming packets 1, 2, 3, 4, 5 for a
> TCP connection, HW_GRO can aggregate packets 1, 2, 3, but cannot
> aggregate packets 4 and 5 due to hardware resource limitation.
> Software GRO aggregates 4 and 5.  So it works well together.

Right. But in the case where 1, 2, 3, 4, and 5 were not aggregated by
hardware GRO because the frames could not be aggregated and then GRO
burns cycles coming to the same conclusion you have waste. Same thing
goes for if hardware GRO aggregates 1 through 5 and then SW GRO tries
to see if it can do more.

They are both doing the same thing, but what I see it as is two
passes, not something where they are working together. The hardware
GRO can rely on software GRO for a second pass, but it doesn't need
to. The fact that it doesn't need to tells me that it isn't a hard
requirement to have software GRO in order to make use of hardware GRO.

>>>> I'm going to back off on my requirement for you to handle propagation
>>>> since after spending a couple hours working on it I did find it was
>>>> more complex then I originally thought it would be. With that said
>>>> however I would want to see this feature implemented in such a way
>>>> that we can deal with propagating the bits in the future if we need to
>>>> and that is what I am basing my comments on.
>>>
>>> Nothing stops anyone from propagating the flag.  Just add
>>> NETIF_F_GRO_HW to NETIF_F_UPPER_DISABLES and it will be propagated
>>> just like LRO.
>>
>> Yes, but the problem then is it doesn't solve the secondary issue of
>> no way to propagate down the desire to disable GRO as well. That is
>> why I am thinking that the new bit could be used to indicate that we
>> want GRO to be supported either in the driver or below it instead of
>> only in "hardware". We are much better off with a generic solution and
>> that is why I think it might be better to use more of a pipeline or
>> staged type definition for this. Basically with GRO it occurs in the
>> GRO logic just after the driver hands off the packet, while this new
>> bit indicates that GRO happens somewhere before then. If we use that
>> definition for this then it becomes usable to deal with things like
>> the stacked devices problem where the stacked devices normally have
>> the GRO flag disabled since we don't want to run GRO multiple times,
>> but as a result the stacked devices have no way of saying they don't
>> want GRO. If we tweak the definition of this bit it solves that
>> problem since it would allow for us disabling GRO, GRO_HW, and LRO on
>> any devices below a given device.
>
> I just don't follow your logic.  First of all, GRO on an upper device
> doesn't mean that we are doing GRO on the upper device.  The bonding
> driver cannot do GRO because it doesn't call napi_gro_receive().  GRO
> always happens on the lower device.  Propagation of GRO can only mean
> that if GRO is set on the upper device, GRO is propagated and allowed
> on lower devices.  Nothing stops you from doing that if you want to do
> that.

If my understanding of things is correct it can mean doing GRO on an
upper device if that device does any sort of decapsulation as a result
of something like vxlan, geneve, or either ipsec or macsec encryption
occurring on top of it. It would be a side effect of the gro_cells
logic.

Admittedly I am more familiar with the segmentation side of things
than the reassembly. So my understanding of this could be incorrect.

>>>> I still disagree with this bit. I think GRO is a pure software
>>>> offload, whereas GRO_HW can represent either a software offload of
>>>> some sort occurring in or before the driver, or in the hardware.
>>>> Basically the difference between the two as I view it is where the GRO
>>>> is occurring. I would like to keep that distinction and make use of
>>>> it. As I mentioned before in the case of bonding we currently have no
>>>> way to disable GRO on the lower devices partially because GRO is a
>>>> pure software feature and always happens at each device along the way.
>>>> The nice thing about this new bit is the assumption is that it is
>>>> pushing GRO to the lowest possible level and not triggering any side
>>>> effects like GRO currently does. I hope to use that logic with stacked
>>>> devices so that we could clear the bit and have it disable GRO,
>>>> GRO_HW, and LRO on all devices below the device that cleared it.
>>>>
>>>> I think this linking of GRO and GRO_HW is something that would be
>>>> better served by moving it into the driver if you are wanting to
>>>> maintain the behavior of how this was previously linked to GRO.
>>>
>>> If you insist, I can move this to the driver's ndo_fix_features().
>>> But I feel it is much better to enforce this dependency system wide.
>>> Once again, GRO_HW is hardware accelerated GRO and should depend on
>>> it.
>>
>> The question I would have is why? Where is the dependency? I don't see
>> it. It is GRO in one spot and/or GRO in the other. The two don't
>> interract directly and I don't believe you can do software GRO on a
>> frame that has already been coalesced in hardware,
>
> Right.  But hardware can do a series of frames and software can do a
> different series of frames that have not been aggregated.

Right, but you have yet to define how the hardware offload would be
dependent on the software offload. I would say it makes more sense to
make LRO dependent on GRO_HW than it does to make GRO_HW dependent on
GRO. If I turn off GSO it doesn't turn off TSO and I would argue there
is a much stronger link there since GSO is the fallback for when TSO
fails, whereas for GRO you don't even necessarily need to have a
fallback.

There is a hierarchy to all of these features. GRO is the software
stack doing reversible aggregation, GRO_HW is allowing the hardware to
perform reversible aggregation, and LRO is allowing the hardware to
perform lossy/non-reversible aggregation. In my mind the
differentiation is that the pure software solution is done outside of the
driver/hardware that you directly control. It basically just happens.
For the GRO_HW and LRO it requires the hardware/driver to participate
in it. In addition LRO might produce some frames that look identical
to GRO_HW, but it might also produce some frames that aren't
completely reversible depending on the implementation.

>>> This is a logical feature dependency that Yuval Mintz suggested.  For
>>> GRO_HW to work, hardware must verify the checksum of a packet before
>>> the packet can be merged.
>>>
>>> So if the user does not want to do RXCSUM on this device for whatever
>>> reason, it logically means that he also doesn't want to do GRO_HW with
>>> implied RXCSUM performed on each packet that is merged.
>>>
>>> So I agree with Yuval that this dependency makes sense.
>>
>> Okay then, so if we are going to go that route we may want to be
>> complete on this and just disable GRO_HW and LRO if RXCSUM is not
>> enabled. We might also want to add a comment indicating that we don't
>> support anything that might mangle a packet at the driver level if
>> RXCSUM is not enabled. Comments explaining all this would be a good
>> thing just to keep someone from grabbing GRO and lumping it in at some
>> point in the future.
>>
>> I'm still working on trying to propagate the Rx checksum properly
>> since it should probably follow the same UPPER_DISABLES behavior as
>> LRO, but I will probably only have a few hours over the next week to
>> really work on any code and there end up being a number of stacked
>> drivers that have to be updated. I would be good with just flipping
>> this logic for now and if RXCSUM is not set, and GRO_HW (just noticed
>> the typo in your message) is set, then print your message and clear
>> the bit. I can probably come back later and add LRO once I get the
>> propagation bits worked out.
>
> Just fix the netdev_dbg() typo, right?  I don't understand what you
> mean by flipping the logic.  It's the same whether you check RXCSUM
> first or GRO_HW first.

It just saves me work later when I get the propagation problems solved
and have to add LRO to the list of features dropped if RXCSUM is not
enabled.

> May be you meant put the RXCSUM check in the outer if statement so
> that someone could add more inner checks?  OK, I think that's what you
> meant.

Yes that is what I meant.

>>
>> As far as patch 2 in the set it would probably be better to either
>> drop it and just accept it as an outstanding issue, or you could take
>> on the propagation problems with GRO_HW and RXCSUM since we really
>> need to get those solved in order for this functionality to fully
>> work.
>
> We need patch #2 otherwise generic GRO won't work on these 3 drivers.
> I don't think I fully understand your concerns about propagation.  To
> me propagation is just a usage model where an upper device will
> control the common features of lower devices.  It is more convenient
> to have propagation, but requires upper devices to be aware of all
> features that propagate (GRO, RXCSUM).  Without propagation, it is
> still fine.

I'm not sure if it is. It depends on how much XDP depends on the
frames being non-linear. As-is I am pretty sure this doesn't work for
the stacked case anyway since GRO was still enabled for lower devices
anyway. So you might look at just modifying patch 2 to not worry about
the stacked devices case since I think that was already broken with
GRO anyway.

Feel free to try taking on the propagation setup if you want. I'm
stuck in a number of meetings over the next week so I probably won't
be able to have any patches to try to address the issue until a couple
weeks from now.

- Alex
Michael Chan Dec. 11, 2017, 6:39 a.m. UTC | #6
On Sun, Dec 10, 2017 at 9:02 AM, Alexander Duyck
<alexander.duyck@gmail.com> wrote:
> On Sat, Dec 9, 2017 at 10:40 PM, Michael Chan <michael.chan@broadcom.com> wrote:
>> It is possible that if you have incoming packets 1, 2, 3, 4, 5 for a
>> TCP connection, HW_GRO can aggregate packets 1, 2, 3, but cannot
>> aggregate packets 4 and 5 due to hardware resource limitation.
>> Software GRO aggregates 4 and 5.  So it works well together.
>
> Right. But in the case where 1, 2, 3, 4, and 5 were not aggregated by
> hardware GRO because the frames could not be aggregated and then GRO
> burns cycles coming to the same conclusion you have waste. Same thing
> goes for if hardware GRO aggregates 1 through 5 and then SW GRO tries
> to see if it can do more.
>
> They are both doing the same thing, but what I see it as is two
> passes, not something where they are working together. The hardware
> GRO can rely on software GRO for a second pass, but it doesn't need
> to. The fact that it doesn't need to tells me that it isn't a hard
> requirement to have GRO in order to make use of software GRO.

I guess I look at this as feature propagation from net device to
hardware.  To me, it makes a lot of sense.

We've been doing hardware GRO for a while and I never think of it as
a replacement for software GRO.  It's hardware accelerated GRO for a
subset of the connections that hardware can handle.

As for the additional GRO pass in software, I think it is quite
efficient.  When hardware has aggregated a GRO frame, software GRO
will effectively "flush" it and never hold it for more aggregation.
After this patchset is done, I can look at the code and see if we can
further optimize the "2nd pass" code path when hardware has already
aggregated the packet.

Anyway, I will move the GRO_HW/GRO dependency to the
ndo_fix_features() of the 3 drivers, so we can move on and get these
patches accepted.

>> May be you meant put the RXCSUM check in the outer if statement so
>> that someone could add more inner checks?  OK, I think that's what you
>> meant.
>
> Yes that is what I meant.

OK.  Will change in v4.

>> We need patch #2 otherwise generic GRO won't work on these 3 drivers.
>> I don't think I fully understand your concerns about propagation.  To
>> me propagation is just a usage model where an upper device will
>> control the common features of lower devices.  It is more convenient
>> to have propagation, but requires upper devices to be aware of all
>> features that propagate (GRO, RXCSUM).  Without propagation, it is
>> still fine.
>
> I'm not sure if it is. It depends on how much XDP depends on the
> frames being non-linear. As-is I am pretty sure this doesn't work for
> the stacked case anyway since GRO was still enabled for lower devices
> anyway. So you might look at just modifying patch 2 to not worry about
> the stacked devices case since I think that was already broken with
> GRO anyway.

OK.  I don't think anyone will run generic XDP on an upper device anyway.

Patch

diff --git a/Documentation/networking/netdev-features.txt b/Documentation/networking/netdev-features.txt
index 7413eb0..8f36527 100644
--- a/Documentation/networking/netdev-features.txt
+++ b/Documentation/networking/netdev-features.txt
@@ -163,3 +163,11 @@  This requests that the NIC receive all possible frames, including errored
 frames (such as bad FCS, etc).  This can be helpful when sniffing a link with
 bad packets on it.  Some NICs may receive more packets if also put into normal
 PROMISC mode.
+
+*  rx-gro-hw
+
+This requests that the NIC enables Hardware GRO (generic receive offload).
+Hardware GRO is basically the exact reverse of TSO, and is generally
+stricter than Hardware LRO.  A packet stream merged by Hardware GRO must
+be re-segmentable by GSO or TSO back to the exact original packet stream.
+Hardware GRO is dependent on GRO and RXCSUM.
diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
index b1b0ca7..db84c51 100644
--- a/include/linux/netdev_features.h
+++ b/include/linux/netdev_features.h
@@ -78,6 +78,8 @@  enum {
 	NETIF_F_HW_ESP_TX_CSUM_BIT,	/* ESP with TX checksum offload */
 	NETIF_F_RX_UDP_TUNNEL_PORT_BIT, /* Offload of RX port for UDP tunnels */
 
+	NETIF_F_GRO_HW_BIT,		/* Hardware Generic receive offload */
+
 	/*
 	 * Add your fresh new feature above and remember to update
 	 * netdev_features_strings[] in net/core/ethtool.c and maybe
@@ -97,6 +99,7 @@  enum {
 #define NETIF_F_FRAGLIST	__NETIF_F(FRAGLIST)
 #define NETIF_F_FSO		__NETIF_F(FSO)
 #define NETIF_F_GRO		__NETIF_F(GRO)
+#define NETIF_F_GRO_HW		__NETIF_F(GRO_HW)
 #define NETIF_F_GSO		__NETIF_F(GSO)
 #define NETIF_F_GSO_ROBUST	__NETIF_F(GSO_ROBUST)
 #define NETIF_F_HIGHDMA		__NETIF_F(HIGHDMA)
diff --git a/net/core/dev.c b/net/core/dev.c
index e32cf5c..6ebd0e7 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7424,6 +7424,18 @@  static netdev_features_t netdev_fix_features(struct net_device *dev,
 		features &= ~dev->gso_partial_features;
 	}
 
+	if (features & NETIF_F_GRO_HW) {
+		/* Hardware GRO depends on GRO and RXCSUM. */
+		if (!(features & NETIF_F_GRO)) {
+			netdev_dbg(dev, "Dropping NETIF_F_GSO_HW since no GRO feature.\n");
+			features &= ~NETIF_F_GRO_HW;
+		}
+		if (!(features & NETIF_F_RXCSUM)) {
+			netdev_dbg(dev, "Dropping NETIF_F_GSO_HW since no RXCSUM feature.\n");
+			features &= ~NETIF_F_GRO_HW;
+		}
+	}
+
 	return features;
 }
 
diff --git a/net/core/ethtool.c b/net/core/ethtool.c
index f8fcf45..50a7920 100644
--- a/net/core/ethtool.c
+++ b/net/core/ethtool.c
@@ -73,6 +73,7 @@  int ethtool_op_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info)
 	[NETIF_F_LLTX_BIT] =             "tx-lockless",
 	[NETIF_F_NETNS_LOCAL_BIT] =      "netns-local",
 	[NETIF_F_GRO_BIT] =              "rx-gro",
+	[NETIF_F_GRO_HW_BIT] =           "rx-gro-hw",
 	[NETIF_F_LRO_BIT] =              "rx-lro",
 
 	[NETIF_F_TSO_BIT] =              "tx-tcp-segmentation",