[net-next,3/4] nfp: flower: support offloading multiple rules with same cookie

Message ID 20180425041704.26882-4-jakub.kicinski@netronome.com
State Accepted, archived
Delegated to: David Miller
Series nfp: flower tc block support and nfp PCI updates

Commit Message

Jakub Kicinski April 25, 2018, 4:17 a.m. UTC
From: John Hurley <john.hurley@netronome.com>

When multiple netdevs are attached to a tc offload block and register for
callbacks, a rule added to the block will be propagated to all netdevs.
Previously these were detected as duplicates (based on cookie) and
rejected. Modify the nfp rule lookup function to optionally include an
ingress netdev and a host context along with the cookie value when
searching for a rule. When a new rule is passed to the driver, the netdev
the rule is to be attached to is considered when searching for duplicates.
When a stats update is received from HW, the host context is used
alongside the cookie to map to the correct host rule.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/flower/main.h   |  8 +++++--
 .../net/ethernet/netronome/nfp/flower/metadata.c   | 20 +++++++++-------
 .../net/ethernet/netronome/nfp/flower/offload.c    | 27 ++++++++++++++++------
 3 files changed, 38 insertions(+), 17 deletions(-)
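
As a condensed view of the reworked lookup, pulled together from the patch
below (a sketch of the call sites, simplified): the add/delete paths key on
(cookie, ingress netdev), while the stats path has no netdev to hand and
keys on (cookie, host context) instead.

/* Add/delete paths: egress-offloaded flows carry no ingress dev,
 * and the stats context is a don't-care for these lookups.
 */
ingr_dev = egress ? NULL : netdev;
nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev,
				      NFP_FL_STATS_CTX_DONT_CARE);

/* Stats updates from HW: no netdev in the message, so the host
 * context disambiguates rules that share a cookie.
 */
nfp_flow = nfp_flower_search_fl_table(app, flower_cookie, NULL,
				      stats->stats_con_id);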

Comments

Or Gerlitz April 25, 2018, 6:31 a.m. UTC | #1
On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
<jakub.kicinski@netronome.com> wrote:
> From: John Hurley <john.hurley@netronome.com>
>
> When multiple netdevs are attached to a tc offload block and register for
> callbacks, a rule added to the block will be propagated to all netdevs.
> Previously these were detected as duplicates (based on cookie) and
> rejected. Modify the nfp rule lookup function to optionally include an
> ingress netdev and a host context along with the cookie value when
> searching for a rule. When a new rule is passed to the driver, the netdev
> the rule is to be attached to is considered when searching for duplicates.

So if the same rule (cookie) is provided to the driver through multiple ingress
devices you will not reject it -- what is the use case for that? Is it
block sharing?
John Hurley April 25, 2018, 8:51 a.m. UTC | #2
On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
> <jakub.kicinski@netronome.com> wrote:
>> From: John Hurley <john.hurley@netronome.com>
>>
>> When multiple netdevs are attached to a tc offload block and register for
>> callbacks, a rule added to the block will be propagated to all netdevs.
>> Previously these were detected as duplicates (based on cookie) and
>> rejected. Modify the nfp rule lookup function to optionally include an
>> ingress netdev and a host context along with the cookie value when
>> searching for a rule. When a new rule is passed to the driver, the netdev
>> the rule is to be attached to is considered when searching for duplicates.
>
> So if the same rule (cookie) is provided to the driver through multiple ingress
> devices you will not reject it -- what is the use case for that? Is it
> block sharing?

Hi Or,
Yes, block sharing is the current use case.
A simple example for clarity, where we want to offload the filter to both
ingress devs nfp_p0 and nfp_p1:

tc qdisc add dev nfp_p0 ingress_block 22 ingress
tc qdisc add dev nfp_p1 ingress_block 22 ingress
tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop
Or Gerlitz April 25, 2018, 8:56 a.m. UTC | #3
On Wed, Apr 25, 2018 at 11:51 AM, John Hurley <john.hurley@netronome.com> wrote:
> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>> <jakub.kicinski@netronome.com> wrote:
>>> From: John Hurley <john.hurley@netronome.com>
>>>
>>> When multiple netdevs are attached to a tc offload block and register for
>>> callbacks, a rule added to the block will be propagated to all netdevs.
>>> Previously these were detected as duplicates (based on cookie) and
>>> rejected. Modify the nfp rule lookup function to optionally include an
>>> ingress netdev and a host context along with the cookie value when
>>> searching for a rule. When a new rule is passed to the driver, the netdev
>>> the rule is to be attached to is considered when searching for duplicates.
>>
>> So if the same rule (cookie) is provided to the driver through multiple ingress
>> devices you will not reject it -- what is the use case for that? Is it
>> block sharing?
>
> Hi Or,
> Yes, block sharing is the current use case.
> A simple example for clarity, where we want to offload the filter to both
> ingress devs nfp_p0 and nfp_p1:
>
> tc qdisc add dev nfp_p0 ingress_block 22 ingress
> tc qdisc add dev nfp_p1 ingress_block 22 ingress
> tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop

cool!

Just out of curiosity, do you actually share this HW rule or do you duplicate it?
John Hurley April 25, 2018, 9:02 a.m. UTC | #4
On Wed, Apr 25, 2018 at 9:56 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Wed, Apr 25, 2018 at 11:51 AM, John Hurley <john.hurley@netronome.com> wrote:
>> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>>> <jakub.kicinski@netronome.com> wrote:
>>>> From: John Hurley <john.hurley@netronome.com>
>>>>
>>>> When multiple netdevs are attached to a tc offload block and register for
>>>> callbacks, a rule added to the block will be propagated to all netdevs.
>>>> Previously these were detected as duplicates (based on cookie) and
>>>> rejected. Modify the nfp rule lookup function to optionally include an
>>>> ingress netdev and a host context along with the cookie value when
>>>> searching for a rule. When a new rule is passed to the driver, the netdev
>>>> the rule is to be attached to is considered when searching for duplicates.
>>>
>>> So if the same rule (cookie) is provided to the driver through multiple ingress
>>> devices you will not reject it -- what is the use case for that? Is it
>>> block sharing?
>>
>> Hi Or,
>> Yes, block sharing is the current use case.
>> A simple example for clarity, where we want to offload the filter to both
>> ingress devs nfp_p0 and nfp_p1:
>>
>> tc qdisc add dev nfp_p0 ingress_block 22 ingress
>> tc qdisc add dev nfp_p1 ingress_block 22 ingress
>> tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop
>
> cool!
>
> Just out of curiosity, do you actually share this HW rule or do you duplicate it?

It's duplicated.
At the HW level the ingress port is part of the match, so technically it's
a different rule.
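In the patch below this shows up in the add path: the ingress netdev is
recorded on the flow payload and also feeds the HW match compilation (a
sketch lifted from the diff, simplified):

/* Offload add path: record the ingress dev (NULL for egress offload)
 * and compile a match specific to that port.
 */
flow_pay->ingress_dev = egress ? NULL : netdev;
err = nfp_flower_compile_flow_match(flow, key_layer, netdev, flow_pay,
				    tun_type);
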
Or Gerlitz April 25, 2018, 9:13 a.m. UTC | #5
On Wed, Apr 25, 2018 at 12:02 PM, John Hurley <john.hurley@netronome.com> wrote:
> On Wed, Apr 25, 2018 at 9:56 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>> On Wed, Apr 25, 2018 at 11:51 AM, John Hurley <john.hurley@netronome.com> wrote:
>>> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>>>> <jakub.kicinski@netronome.com> wrote:
>>>>> From: John Hurley <john.hurley@netronome.com>
>>>>>
>>>>> When multiple netdevs are attached to a tc offload block and register for
>>>>> callbacks, a rule added to the block will be propagated to all netdevs.
>>>>> Previously these were detected as duplicates (based on cookie) and
>>>>> rejected. Modify the nfp rule lookup function to optionally include an
>>>>> ingress netdev and a host context along with the cookie value when
>>>>> searching for a rule. When a new rule is passed to the driver, the netdev
>>>>> the rule is to be attached to is considered when searching for duplicates.
>>>>
>>>> So if the same rule (cookie) is provided to the driver through multiple ingress
>>>> devices you will not reject it -- what is the use case for that? Is it
>>>> block sharing?
>>>
>>> Hi Or,
>>> Yes, block sharing is the current use case.
>>> A simple example for clarity, where we want to offload the filter to both
>>> ingress devs nfp_p0 and nfp_p1:
>>>
>>> tc qdisc add dev nfp_p0 ingress_block 22 ingress
>>> tc qdisc add dev nfp_p1 ingress_block 22 ingress
>>> tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop
>>
>> cool!
>>
>> Just out of curiosity, do you actually share this HW rule or do you duplicate it?
>
> It's duplicated.
> At the HW level the ingress port is part of the match, so technically it's
> a different rule.

I see, we also have a match on the ingress port as part of the HW API, which
means we will have to apply a similar practice if we want to support
block sharing quickly.

Just to make sure: under tc block sharing, the tc stack calls for hw
offloading of the same rule (same cookie) multiple times, each with a
different ingress device, right?


Or.
John Hurley April 25, 2018, 9:27 a.m. UTC | #6
On Wed, Apr 25, 2018 at 10:13 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Wed, Apr 25, 2018 at 12:02 PM, John Hurley <john.hurley@netronome.com> wrote:
>> On Wed, Apr 25, 2018 at 9:56 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>> On Wed, Apr 25, 2018 at 11:51 AM, John Hurley <john.hurley@netronome.com> wrote:
>>>> On Wed, Apr 25, 2018 at 7:31 AM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>>>> On Wed, Apr 25, 2018 at 7:17 AM, Jakub Kicinski
>>>>> <jakub.kicinski@netronome.com> wrote:
>>>>>> From: John Hurley <john.hurley@netronome.com>
>>>>>>
>>>>>> When multiple netdevs are attached to a tc offload block and register for
>>>>>> callbacks, a rule added to the block will be propagated to all netdevs.
>>>>>> Previously these were detected as duplicates (based on cookie) and
>>>>>> rejected. Modify the nfp rule lookup function to optionally include an
>>>>>> ingress netdev and a host context along with the cookie value when
>>>>>> searching for a rule. When a new rule is passed to the driver, the netdev
>>>>>> the rule is to be attached to is considered when searching for duplicates.
>>>>>
>>>>> So if the same rule (cookie) is provided to the driver through multiple ingress
>>>>> devices you will not reject it -- what is the use case for that? Is it
>>>>> block sharing?
>>>>
>>>> Hi Or,
>>>> Yes, block sharing is the current use case.
>>>> A simple example for clarity, where we want to offload the filter to both
>>>> ingress devs nfp_p0 and nfp_p1:
>>>>
>>>> tc qdisc add dev nfp_p0 ingress_block 22 ingress
>>>> tc qdisc add dev nfp_p1 ingress_block 22 ingress
>>>> tc filter add block 22 protocol ip parent ffff: flower skip_sw ip_proto tcp action drop
>>>
>>> cool!
>>>
>>> Just out of curiosity, do you actually share this HW rule or do you duplicate it?
>>
>> It's duplicated.
>> At the HW level the ingress port is part of the match, so technically it's
>> a different rule.
>
> I see, we also have a match on the ingress port as part of the HW API, which
> means we will have to apply a similar practice if we want to support
> block sharing quickly.
>
> Just to make sure: under tc block sharing, the tc stack calls for hw
> offloading of the same rule (same cookie) multiple times, each with a
> different ingress device, right?
>
>
> Or.

So in the example above, when each qdisc add is called, a callback is
registered with the block.
For each callback, the dev used is passed as priv data (presumably you
do something similar).
When the filter is added, the block code triggers all callbacks with
the same rule data [1].
We differentiate the callbacks by the priv data (ingress dev).

[1] https://elixir.bootlin.com/linux/v4.17-rc2/source/net/sched/cls_api.c#L741
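
A rough sketch of the registration pattern described above, based on the
v4.17-era tcf_block_cb_* API (simplified; nfp_flower_setup_tc_block_cb and
the exact struct layout are assumed from context, not quoted from this
patch):

/* Each netdev bound to the block registers its own callback, passing its
 * repr (per-netdev private struct) as cb_priv. A single filter add then
 * fires once per registered callback, and cb_priv identifies the ingress
 * dev for that invocation.
 */
static int nfp_flower_setup_tc_block(struct net_device *netdev,
				     struct tc_block_offload *f)
{
	struct nfp_repr *repr = netdev_priv(netdev);

	if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
		return -EOPNOTSUPP;

	switch (f->command) {
	case TC_BLOCK_BIND:
		return tcf_block_cb_register(f->block,
					     nfp_flower_setup_tc_block_cb,
					     repr, repr);
	case TC_BLOCK_UNBIND:
		tcf_block_cb_unregister(f->block,
					nfp_flower_setup_tc_block_cb, repr);
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}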

Patch

diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index c67e1b54c614..9e6804bc9b40 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -47,6 +47,7 @@ 
 struct net_device;
 struct nfp_app;
 
+#define NFP_FL_STATS_CTX_DONT_CARE	cpu_to_be32(0xffffffff)
 #define NFP_FL_STATS_ENTRY_RS		BIT(20)
 #define NFP_FL_STATS_ELEM_RS		4
 #define NFP_FL_REPEATED_HASH_MAX	BIT(17)
@@ -189,6 +190,7 @@  struct nfp_fl_payload {
 	spinlock_t lock; /* lock stats */
 	struct nfp_fl_stats stats;
 	__be32 nfp_tun_ipv4_addr;
+	struct net_device *ingress_dev;
 	char *unmasked_data;
 	char *mask_data;
 	char *action_data;
@@ -216,12 +218,14 @@  int nfp_flower_compile_action(struct tc_cls_flower_offload *flow,
 			      struct nfp_fl_payload *nfp_flow);
 int nfp_compile_flow_metadata(struct nfp_app *app,
 			      struct tc_cls_flower_offload *flow,
-			      struct nfp_fl_payload *nfp_flow);
+			      struct nfp_fl_payload *nfp_flow,
+			      struct net_device *netdev);
 int nfp_modify_flow_metadata(struct nfp_app *app,
 			     struct nfp_fl_payload *nfp_flow);
 
 struct nfp_fl_payload *
-nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie);
+nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie,
+			   struct net_device *netdev, __be32 host_ctx);
 struct nfp_fl_payload *
 nfp_flower_remove_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie);
 
diff --git a/drivers/net/ethernet/netronome/nfp/flower/metadata.c b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
index db977cf8e933..21668aa435e8 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
@@ -99,14 +99,18 @@  static int nfp_get_stats_entry(struct nfp_app *app, u32 *stats_context_id)
 
 /* Must be called with either RTNL or rcu_read_lock */
 struct nfp_fl_payload *
-nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie)
+nfp_flower_search_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie,
+			   struct net_device *netdev, __be32 host_ctx)
 {
 	struct nfp_flower_priv *priv = app->priv;
 	struct nfp_fl_payload *flower_entry;
 
 	hash_for_each_possible_rcu(priv->flow_table, flower_entry, link,
 				   tc_flower_cookie)
-		if (flower_entry->tc_flower_cookie == tc_flower_cookie)
+		if (flower_entry->tc_flower_cookie == tc_flower_cookie &&
+		    (!netdev || flower_entry->ingress_dev == netdev) &&
+		    (host_ctx == NFP_FL_STATS_CTX_DONT_CARE ||
+		     flower_entry->meta.host_ctx_id == host_ctx))
 			return flower_entry;
 
 	return NULL;
@@ -121,13 +125,11 @@  nfp_flower_update_stats(struct nfp_app *app, struct nfp_fl_stats_frame *stats)
 	flower_cookie = be64_to_cpu(stats->stats_cookie);
 
 	rcu_read_lock();
-	nfp_flow = nfp_flower_search_fl_table(app, flower_cookie);
+	nfp_flow = nfp_flower_search_fl_table(app, flower_cookie, NULL,
+					      stats->stats_con_id);
 	if (!nfp_flow)
 		goto exit_rcu_unlock;
 
-	if (nfp_flow->meta.host_ctx_id != stats->stats_con_id)
-		goto exit_rcu_unlock;
-
 	spin_lock(&nfp_flow->lock);
 	nfp_flow->stats.pkts += be32_to_cpu(stats->pkt_count);
 	nfp_flow->stats.bytes += be64_to_cpu(stats->byte_count);
@@ -317,7 +319,8 @@  nfp_check_mask_remove(struct nfp_app *app, char *mask_data, u32 mask_len,
 
 int nfp_compile_flow_metadata(struct nfp_app *app,
 			      struct tc_cls_flower_offload *flow,
-			      struct nfp_fl_payload *nfp_flow)
+			      struct nfp_fl_payload *nfp_flow,
+			      struct net_device *netdev)
 {
 	struct nfp_flower_priv *priv = app->priv;
 	struct nfp_fl_payload *check_entry;
@@ -348,7 +351,8 @@  int nfp_compile_flow_metadata(struct nfp_app *app,
 	nfp_flow->stats.bytes = 0;
 	nfp_flow->stats.used = jiffies;
 
-	check_entry = nfp_flower_search_fl_table(app, flow->cookie);
+	check_entry = nfp_flower_search_fl_table(app, flow->cookie, netdev,
+						 NFP_FL_STATS_CTX_DONT_CARE);
 	if (check_entry) {
 		if (nfp_release_stats_entry(app, stats_cxt))
 			return -EINVAL;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 114d2ab02a38..bdc82e11a31e 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -419,6 +419,8 @@  nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
 		goto err_free_key_ls;
 	}
 
+	flow_pay->ingress_dev = egress ? NULL : netdev;
+
 	err = nfp_flower_compile_flow_match(flow, key_layer, netdev, flow_pay,
 					    tun_type);
 	if (err)
@@ -428,7 +430,8 @@  nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
 	if (err)
 		goto err_destroy_flow;
 
-	err = nfp_compile_flow_metadata(app, flow, flow_pay);
+	err = nfp_compile_flow_metadata(app, flow, flow_pay,
+					flow_pay->ingress_dev);
 	if (err)
 		goto err_destroy_flow;
 
@@ -462,6 +465,7 @@  nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
  * @app:	Pointer to the APP handle
  * @netdev:	netdev structure.
  * @flow:	TC flower classifier offload structure
+ * @egress:	Netdev is the egress dev.
  *
  * Removes a flow from the repeated hash structure and clears the
  * action payload.
@@ -470,13 +474,16 @@  nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
  */
 static int
 nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev,
-		       struct tc_cls_flower_offload *flow)
+		       struct tc_cls_flower_offload *flow, bool egress)
 {
 	struct nfp_port *port = nfp_port_from_netdev(netdev);
 	struct nfp_fl_payload *nfp_flow;
+	struct net_device *ingr_dev;
 	int err;
 
-	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie);
+	ingr_dev = egress ? NULL : netdev;
+	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev,
+					      NFP_FL_STATS_CTX_DONT_CARE);
 	if (!nfp_flow)
 		return -ENOENT;
 
@@ -505,7 +512,9 @@  nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev,
 /**
  * nfp_flower_get_stats() - Populates flow stats obtained from hardware.
  * @app:	Pointer to the APP handle
+ * @netdev:	Netdev structure.
  * @flow:	TC flower classifier offload structure
+ * @egress:	Netdev is the egress dev.
  *
  * Populates a flow statistics structure which which corresponds to a
  * specific flow.
@@ -513,11 +522,15 @@  nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev,
  * Return: negative value on error, 0 if stats populated successfully.
  */
 static int
-nfp_flower_get_stats(struct nfp_app *app, struct tc_cls_flower_offload *flow)
+nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
+		     struct tc_cls_flower_offload *flow, bool egress)
 {
 	struct nfp_fl_payload *nfp_flow;
+	struct net_device *ingr_dev;
 
-	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie);
+	ingr_dev = egress ? NULL : netdev;
+	nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, ingr_dev,
+					      NFP_FL_STATS_CTX_DONT_CARE);
 	if (!nfp_flow)
 		return -EINVAL;
 
@@ -543,9 +556,9 @@  nfp_flower_repr_offload(struct nfp_app *app, struct net_device *netdev,
 	case TC_CLSFLOWER_REPLACE:
 		return nfp_flower_add_offload(app, netdev, flower, egress);
 	case TC_CLSFLOWER_DESTROY:
-		return nfp_flower_del_offload(app, netdev, flower);
+		return nfp_flower_del_offload(app, netdev, flower, egress);
 	case TC_CLSFLOWER_STATS:
-		return nfp_flower_get_stats(app, flower);
+		return nfp_flower_get_stats(app, netdev, flower, egress);
 	}
 
 	return -EOPNOTSUPP;