[nf-next,v5,0/4] netfilter: flowtable: add indr-block offload

Message ID 1582521775-25176-1-git-send-email-wenxu@ucloud.cn

Message

wenxu Feb. 24, 2020, 5:22 a.m. UTC
From: wenxu <wenxu@ucloud.cn>

This patch series provides tunnel offload based on route lwtunnel.
The first two patches add support for indirect block callback setup;
the last two add tunnel match and action offload.

This version modifies the second patch: the device can now bind to
different flowtables, and the NF_FLOWTABLE_HW_OFFLOAD flag is checked
in nf_flow_table_indr_block_cb_cmd.
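
Roughly, the check in nf_flow_table_indr_block_cb_cmd looks like this
(simplified sketch only; the exact signature and the call it guards may
differ from the actual patch):

/* Simplified sketch -- indirect block bind/unbind commands are ignored
 * for flowtables that were not created with hardware offload enabled.
 * The call guarded by the check is elided here.
 */
static void nf_flow_table_indr_block_cb_cmd(struct nf_flowtable *flowtable,
                                            struct net_device *dev,
                                            enum flow_block_command cmd)
{
        if (!(flowtable->flags & NF_FLOWTABLE_HW_OFFLOAD))
                return;

        /* ... hand cmd on to the per-device block bind/unbind path ... */
}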

wenxu (4):
  netfilter: flowtable: add nf_flow_table_block_offload_init()
  netfilter: flowtable: add indr block setup support
  netfilter: flowtable: add tunnel match offload support
  netfilter: flowtable: add tunnel encap/decap action offload support

 net/netfilter/nf_flow_table_offload.c | 233 ++++++++++++++++++++++++++++++++--
 1 file changed, 219 insertions(+), 14 deletions(-)

Comments

wenxu March 3, 2020, 12:13 p.m. UTC | #1
Hi Pablo,


How about this series?


BR

wenxu

On 2020/2/24 13:22, wenxu@ucloud.cn wrote:
> From: wenxu <wenxu@ucloud.cn>
>
> This patch series provides tunnel offload based on route lwtunnel.
> The first two patches add support for indirect block callback setup;
> the last two add tunnel match and action offload.
>
> This version modifies the second patch: the device can now bind to
> different flowtables, and the NF_FLOWTABLE_HW_OFFLOAD flag is checked
> in nf_flow_table_indr_block_cb_cmd.
>
> wenxu (4):
>   netfilter: flowtable: add nf_flow_table_block_offload_init()
>   netfilter: flowtable: add indr block setup support
>   netfilter: flowtable: add tunnel match offload support
>   netfilter: flowtable: add tunnel encap/decap action offload support
>
>  net/netfilter/nf_flow_table_offload.c | 233 ++++++++++++++++++++++++++++++++--
>  1 file changed, 219 insertions(+), 14 deletions(-)
>
Pablo Neira Ayuso March 3, 2020, 9:53 p.m. UTC | #2
Hi,

On Mon, Feb 24, 2020 at 01:22:51PM +0800, wenxu@ucloud.cn wrote:
> From: wenxu <wenxu@ucloud.cn>
> 
> This patch series provides tunnel offload based on route lwtunnel.
> The first two patches add support for indirect block callback setup;
> the last two add tunnel match and action offload.
> 
> This version modifies the second patch: the device can now bind to
> different flowtables, and the NF_FLOWTABLE_HW_OFFLOAD flag is checked
> in nf_flow_table_indr_block_cb_cmd.

I found some time to look at this indirect block infrastructure that
you have added to net/core/flow_offload.c

This is _complex_ code, and I don't understand why it needs to be so
complex. A frontend call walks into the driver through one callback,
then comes back into the frontend code through another callback...
this is hard to follow.
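
To illustrate with a heavily simplified, hypothetical sketch (not the
real code; the actual infrastructure in net/core/flow_offload.c adds
per-device tracking, refcounting and locking on top of this):

/* Hypothetical, stripped-down sketch of the round trip. */
typedef void foo_indr_bind_cb_t(struct net_device *dev, void *cb_priv,
                                enum tc_setup_type type, void *type_data);

struct foo_indr_subscriber {            /* one entry per hardware driver */
        struct list_head    list;
        foo_indr_bind_cb_t *cb;         /* driver's indirect setup entry point */
        void               *cb_priv;
};

static LIST_HEAD(foo_indr_subscribers);

/* Called from the frontend (e.g. the flowtable offload code) when a
 * tunnel netdev should be bound to or unbound from the hardware.
 */
static void foo_indr_block_call(struct net_device *dev,
                                struct flow_block_offload *bo,
                                enum flow_block_command cmd)
{
        struct foo_indr_subscriber *sub;

        bo->command = cmd;

        /* Hop 1: into each subscribed driver ... */
        list_for_each_entry(sub, &foo_indr_subscribers, list)
                sub->cb(dev, sub->cb_priv, TC_SETUP_FT, bo);

        /* ... hop 2: the driver has appended its flow_block_cb entries to
         * bo->cb_list, and the frontend walks that list again to commit
         * or tear down the binding.
         */
}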

Then, we still have a problem with the approach that you propose,
since there is a 1:N mapping between the indirect block and the
net_device.

Probably not a requirement in your case, but the same net_device might
be used in several flowtables. Your patch is flawed there and I don't
see an easy way to fix this.

I know there is no way to use ->ndo_setup_tc for tunnel devices, but
you could have made this work in a way that looks consistent with the
->ndo_setup_tc logic.

I'm inclined to apply this patch though, in the hope that this all can
be revisited later to get it in line with the ->ndo_setup_tc approach.
However, probably I'm hoping for too much.

Thank you.
wenxu March 4, 2020, 12:54 p.m. UTC | #3
On 2020/3/4 5:53, Pablo Neira Ayuso wrote:
> Hi,
>
> On Mon, Feb 24, 2020 at 01:22:51PM +0800, wenxu@ucloud.cn wrote:
>> From: wenxu <wenxu@ucloud.cn>
>>
>> This patch series provides tunnel offload based on route lwtunnel.
>> The first two patches add support for indirect block callback setup;
>> the last two add tunnel match and action offload.
>>
>> This version modifies the second patch: the device can now bind to
>> different flowtables, and the NF_FLOWTABLE_HW_OFFLOAD flag is checked
>> in nf_flow_table_indr_block_cb_cmd.
> I found some time to look at this indirect block infrastructure that
> you have added to net/core/flow_offload.c
>
> This is _complex_ code, and I don't understand why it needs to be so
> complex. A frontend call walks into the driver through one callback,
> then comes back into the frontend code through another callback...
> this is hard to follow.
>
> Then, we still have a problem with the approach that you propose,
> since there is a 1:N mapping between the indirect block and the
> net_device.

The indirect block infrastructure was designed by the driver people. The
callbacks are used to build and tear down the relationship between the
tunnel device and the hardware devices, for example when the tunnel
device or the hardware device appears or goes away. The relationship
between the tunnel device and the hardware devices is quite subtle.
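
For example, on the hardware driver side the subscription is typically
driven by netdevice events. A simplified sketch (all foo_* names are
placeholders; foo_indr_block_cb_register/unregister stand in for the
registration helpers exported by net/core/flow_offload.c):

/* Placeholder sketch of how a hardware driver reacts to tunnel devices
 * coming and going.
 */
static int foo_netdev_event(struct notifier_block *nb,
                            unsigned long event, void *ptr)
{
        struct net_device *netdev = netdev_notifier_info_to_dev(ptr);
        struct foo_app *app = container_of(nb, struct foo_app, netdev_nb);

        if (!netif_is_vxlan(netdev))    /* placeholder check for an offloadable tunnel type */
                return NOTIFY_OK;

        switch (event) {
        case NETDEV_REGISTER:
                /* Tunnel device appeared: subscribe so that indirect block
                 * commands for it are steered to this hardware driver.
                 */
                foo_indr_block_cb_register(netdev, app, foo_indr_setup_block, app);
                break;
        case NETDEV_UNREGISTER:
                /* Tunnel device going away: drop the subscription. */
                foo_indr_block_cb_unregister(netdev, foo_indr_setup_block, app);
                break;
        }
        return NOTIFY_OK;
}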

> Probably not a requirement in your case, but the same net_device might
> be used in several flowtables. Your patch is flawed there and I don't
> see an easy way to fix this.

The same tunnel device can only be added to one offloaded flowtable. The
tunnel device builds the relationship with the hardware devices once in
the driver. This is protected by flow_block_cb_is_busy() and the
xxx_indr_block_cb_priv structure in the driver.
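
Roughly, the driver-side guard looks like this (simplified sketch;
flow_block_cb_is_busy(), flow_block_cb_alloc() and flow_block_cb_add()
are the generic helpers from net/core/flow_offload.c, while all foo_*
names are placeholders for the driver's own code):

struct foo_indr_block_cb_priv {
        struct list_head   list;
        struct net_device *netdev;      /* the tunnel device */
        struct foo_app    *app;         /* the hardware driver instance */
};

static LIST_HEAD(foo_block_cb_list);    /* driver_block_list for is_busy() */

/* Placeholders, elided in this sketch. */
static int foo_indr_setup_block_cb(enum tc_setup_type type, void *type_data,
                                   void *cb_priv);
static void foo_indr_block_release(void *cb_priv);
static struct foo_indr_block_cb_priv *
foo_indr_block_cb_priv_lookup(struct foo_app *app, struct net_device *netdev);

static int foo_indr_setup_block(struct net_device *netdev, struct foo_app *app,
                                struct flow_block_offload *f)
{
        struct foo_indr_block_cb_priv *cb_priv;
        struct flow_block_cb *block_cb;

        if (f->command != FLOW_BLOCK_BIND)
                return -EOPNOTSUPP;     /* UNBIND handling elided here */

        /* A second flowtable trying to bind the same tunnel device is
         * rejected: the (cb, cb_ident) pair is already on the driver list.
         */
        cb_priv = foo_indr_block_cb_priv_lookup(app, netdev);
        if (cb_priv &&
            flow_block_cb_is_busy(foo_indr_setup_block_cb, cb_priv,
                                  &foo_block_cb_list))
                return -EBUSY;

        cb_priv = kzalloc(sizeof(*cb_priv), GFP_KERNEL);
        if (!cb_priv)
                return -ENOMEM;
        cb_priv->netdev = netdev;       /* remember the tunnel device ... */
        cb_priv->app = app;             /* ... and the hardware it maps to */

        block_cb = flow_block_cb_alloc(foo_indr_setup_block_cb, cb_priv,
                                       cb_priv, foo_indr_block_release);
        if (IS_ERR(block_cb)) {
                kfree(cb_priv);
                return PTR_ERR(block_cb);
        }

        flow_block_cb_add(block_cb, f);
        list_add_tail(&block_cb->driver_list, &foo_block_cb_list);
        return 0;
}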


>
> I know there is no way to use ->ndo_setup_tc for tunnel devices, but
> you could have made this work in a way that looks consistent with the
> ->ndo_setup_tc logic.

I think the difficulty is how to find the hardware device through which
the tunnel device's rules should be programmed into the hardware.

>
> I'm inclined to apply this patch though, in the hope that this all can
> be revisited later to get it in line with the ->ndo_setup_tc approach.
> However, probably I'm hoping for too much.
>
> Thank you.
>
Pablo Neira Ayuso March 15, 2020, 9:28 p.m. UTC | #4
On Wed, Mar 04, 2020 at 08:54:25PM +0800, wenxu wrote:
> 
> On 2020/3/4 5:53, Pablo Neira Ayuso wrote:
[...]
> The indirect block infrastructure was designed by the driver people. The
> callbacks are used to build and tear down the relationship between the
> tunnel device and the hardware devices, for example when the tunnel
> device or the hardware device appears or goes away. The relationship
> between the tunnel device and the hardware devices is quite subtle.

I understand that this mechanism provides a way for the driver to
subscribe to tunnel devices that might be offloaded.

> > Probably not a requirement in your case, but the same net_device might
> > be used in several flowtables. Your patch is flawed there and I don't
> > see an easy way to fix this.
> 
> The same tunnel device can only be added to one offloaded flowtable.

This is a limitation that needs to be removed. There are requirements
to allow the same tunnel device to be part of more than one flowtable.

> The tunnel device builds the relationship with the hardware devices
> once in the driver. This is protected by flow_block_cb_is_busy() and
> the xxx_indr_block_cb_priv structure in the driver.
>
> > I know there is no way to use ->ndo_setup_tc for tunnel devices, but
> > you could have made this work in a way that looks consistent with the
> > ->ndo_setup_tc logic.
> 
> I think the difficulty is how to find the hardware device through
> which the tunnel device's rules should be programmed into the hardware.

Right, this is the problem that the infrastructure is solving; however,
it's a bit of a twisty way to address it.

> > I'm inclined to apply this patch though, in the hope that this all can
> > be revisited later to get it in line with the ->ndo_setup_tc approach.
> > However, probably I'm hoping for too much.

I have applied this patchset to nf-next.

There might be a chance to revisit this indirect block infrastructure
later on.

Thank you.