[ovs-dev,V7,10/18] dpif-netdev: Update offloaded flows statistics

Message ID 20200109074655.1104-11-elibr@mellanox.com
State Accepted
Delegated to: Ilya Maximets
Series
  • netdev datapath actions offload

Commit Message

Eli Britstein Jan. 9, 2020, 7:46 a.m. UTC
From: Ophir Munk <ophirmu@mellanox.com>

In case a flow is HW offloaded, packets do not reach the SW and are thus
not counted in the statistics. Use the netdev flow get API to update
flow statistics with the HW statistics.

Co-authored-by: Eli Britstein <elibr@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Oz Shlomo <ozsh@mellanox.com>
Signed-off-by: Eli Britstein <elibr@mellanox.com>
---
 lib/dpif-netdev.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 66 insertions(+), 13 deletions(-)
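In outline, the stats merge this patch performs (see get_dpif_flow_status() in the diff below) can be sketched as a small self-contained C model. The struct here is a trimmed stand-in for OVS's struct dpif_flow_stats, not the real type:

```c
#include <stdint.h>

/* Trimmed stand-in for OVS's struct dpif_flow_stats. */
struct flow_stats {
    uint64_t n_packets;
    uint64_t n_bytes;
    long long used;       /* last-used timestamp, in ms */
    uint16_t tcp_flags;
};

/* Fold HW-offload counters into the SW counters, mirroring what
 * get_dpif_flow_status() does in the patch: packet and byte counts
 * add up, the last-used timestamp is the newer of the two, and the
 * seen TCP flags are ORed together. */
static void
merge_offload_stats(struct flow_stats *stats, const struct flow_stats *hw)
{
    stats->n_packets += hw->n_packets;
    stats->n_bytes += hw->n_bytes;
    stats->used = stats->used > hw->used ? stats->used : hw->used;
    stats->tcp_flags |= hw->tcp_flags;
}
```

For example, a flow with 10 packets counted in SW and 5 more counted by the NIC reports 15 packets after the merge, and its 'used' time is the later of the two timestamps.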

Comments

Ilya Maximets Jan. 14, 2020, 9:53 p.m. UTC | #1
On 09.01.2020 08:46, Eli Britstein wrote:
> From: Ophir Munk <ophirmu@mellanox.com>
> 
> In case a flow is HW offloaded, packets do not reach the SW and are thus
> not counted in the statistics. Use the netdev flow get API to update
> flow statistics with the HW statistics.
> 
> Co-authored-by: Eli Britstein <elibr@mellanox.com>
> Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
> Reviewed-by: Oz Shlomo <ozsh@mellanox.com>
> Signed-off-by: Eli Britstein <elibr@mellanox.com>
> ---
>  lib/dpif-netdev.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 66 insertions(+), 13 deletions(-)
> 

The ofpbuf could be used by the offload provider, so we have to
allocate some memory and initialize it before calling
netdev_flow_get().

Suggesting the following incremental.  I could squash it into this
patch before applying the series if it looks OK to you.
What do you think?

---
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 84194df5d..5b7bc4a83 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -3037,9 +3037,10 @@ dpif_netdev_get_flow_offload_status(const struct dp_netdev *dp,
                                     struct dpif_flow_attrs *attrs)
 {
     struct nlattr *actions;
-    struct ofpbuf wbuffer;
     struct netdev *netdev;
     struct match match;
+    struct ofpbuf buf;
+    uint64_t act_buf[1024 / 8];
 
     int ret = 0;
 
@@ -3051,11 +3052,12 @@ dpif_netdev_get_flow_offload_status(const struct dp_netdev *dp,
     if (!netdev) {
         return false;
     }
+    ofpbuf_use_stack(&buf, &act_buf, sizeof act_buf);
     /* Taking a global 'port_mutex' to fulfill thread safety
      * restrictions for the netdev-offload-dpdk module. */
     ovs_mutex_lock(&dp->port_mutex);
     ret = netdev_flow_get(netdev, &match, &actions, &netdev_flow->mega_ufid,
-                          stats, attrs, &wbuffer);
+                          stats, attrs, &buf);
     ovs_mutex_unlock(&dp->port_mutex);
     netdev_close(netdev);
     if (ret) {
---

Best regards, Ilya Maximets.
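The problem Ilya describes above, an uninitialized struct ofpbuf handed down to netdev_flow_get(), and the stack-backed fix can be modeled in isolation. This is a minimal sketch with a toy buffer type standing in for OVS's struct ofpbuf and ofpbuf_use_stack(); the names are illustrative, not the real API:

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-in for OVS's struct ofpbuf: storage supplied by the
 * caller (e.g. a stack array), plus a fill level. */
struct buf {
    void *base;
    size_t size;
    size_t used;
};

/* Analogue of ofpbuf_use_stack(): point the buffer at caller-owned
 * stack storage so later writes land in valid memory. */
static void
buf_use_stack(struct buf *b, void *storage, size_t size)
{
    b->base = storage;
    b->size = size;
    b->used = 0;
}

/* A callee that may write into the caller's buffer, the way an
 * offload provider may write flow actions into the ofpbuf passed to
 * netdev_flow_get().  Fails instead of overflowing the storage. */
static int
emit_actions(struct buf *b, const void *data, size_t len)
{
    if (b->used + len > b->size) {
        return -1;
    }
    memcpy((char *) b->base + b->used, data, len);
    b->used += len;
    return 0;
}
```

The caller declares storage such as `uint64_t act_buf[1024 / 8];` and calls `buf_use_stack(&buf, act_buf, sizeof act_buf)` before handing `&buf` down, which is exactly the shape of the fix in the incremental above.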
Eli Britstein Jan. 15, 2020, 5:54 a.m. UTC | #2
On 1/14/2020 11:53 PM, Ilya Maximets wrote:
> On 09.01.2020 08:46, Eli Britstein wrote:
>> From: Ophir Munk <ophirmu@mellanox.com>
>>
>> In case a flow is HW offloaded, packets do not reach the SW and are thus
>> not counted in the statistics. Use the netdev flow get API to update
>> flow statistics with the HW statistics.
>>
>> Co-authored-by: Eli Britstein <elibr@mellanox.com>
>> Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
>> Reviewed-by: Oz Shlomo <ozsh@mellanox.com>
>> Signed-off-by: Eli Britstein <elibr@mellanox.com>
>> ---
>>   lib/dpif-netdev.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++---------
>>   1 file changed, 66 insertions(+), 13 deletions(-)
>>
> The ofpbuf could be used by the offload provider, so we have to
> allocate some memory and initialize it before calling
> netdev_flow_get().
>
> Suggesting the following incremental.  I could squash it into this
> patch before applying the series if it looks OK to you.
> What do you think?
Looks good. Thanks. Just a cosmetic comment below.
>
> ---
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index 84194df5d..5b7bc4a83 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -3037,9 +3037,10 @@ dpif_netdev_get_flow_offload_status(const struct dp_netdev *dp,
>                                       struct dpif_flow_attrs *attrs)
>   {
>       struct nlattr *actions;
> -    struct ofpbuf wbuffer;
>       struct netdev *netdev;
>       struct match match;
> +    struct ofpbuf buf;
> +    uint64_t act_buf[1024 / 8];
Move this declaration up to keep the reverse xmas tree ordering.
>   
>       int ret = 0;
>   
> @@ -3051,11 +3052,12 @@ dpif_netdev_get_flow_offload_status(const struct dp_netdev *dp,
>       if (!netdev) {
>           return false;
>       }
> +    ofpbuf_use_stack(&buf, &act_buf, sizeof act_buf);
>       /* Taking a global 'port_mutex' to fulfill thread safety
>        * restrictions for the netdev-offload-dpdk module. */
>       ovs_mutex_lock(&dp->port_mutex);
>       ret = netdev_flow_get(netdev, &match, &actions, &netdev_flow->mega_ufid,
> -                          stats, attrs, &wbuffer);
> +                          stats, attrs, &buf);
>       ovs_mutex_unlock(&dp->port_mutex);
>       netdev_close(netdev);
>       if (ret) {
> ---
>
> Best regards, Ilya Maximets.
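The "reverse xmas tree" convention Eli refers to orders local variable declarations from the longest line down to the shortest, which is why the 'act_buf' array should move up. A minimal compilable illustration (the identifiers are invented for the example):

```c
#include <stdint.h>
#include <string.h>

/* Declarations sorted longest-to-shortest ("reverse Christmas tree"),
 * matching what the review asks for 'act_buf' in the incremental. */
static int
sum_first_words(void)
{
    uint64_t accumulator[1024 / 8];
    unsigned int i;
    int total = 0;

    memset(accumulator, 0, sizeof accumulator);
    for (i = 0; i < 4; i++) {
        accumulator[i] = i;
        total += (int) accumulator[i];
    }
    return total;
}
```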
Ilya Maximets Jan. 15, 2020, 5:32 p.m. UTC | #3
On 09.01.2020 08:46, Eli Britstein wrote:
> From: Ophir Munk <ophirmu@mellanox.com>
> 
> In case a flow is HW offloaded, packets do not reach the SW and are thus
> not counted in the statistics. Use the netdev flow get API to update
> flow statistics with the HW statistics.
> 
> Co-authored-by: Eli Britstein <elibr@mellanox.com>
> Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
> Reviewed-by: Oz Shlomo <ozsh@mellanox.com>
> Signed-off-by: Eli Britstein <elibr@mellanox.com>
> ---
>  lib/dpif-netdev.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 66 insertions(+), 13 deletions(-)
> 
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index 24218210d..7ec217f39 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -3030,10 +3030,49 @@ dp_netdev_pmd_find_flow(const struct dp_netdev_pmd_thread *pmd,
>      return NULL;
>  }
>  
> +static bool
> +dpif_netdev_get_flow_offload_status(const struct dp_netdev *dp,
> +                                    const struct dp_netdev_flow *netdev_flow,
> +                                    struct dpif_flow_stats *stats,
> +                                    struct dpif_flow_attrs *attrs)
> +{
> +    struct nlattr *actions;
> +    struct ofpbuf wbuffer;
> +    struct netdev *netdev;
> +    struct match match;
> +
> +    int ret = 0;
> +
> +    if (!netdev_is_flow_api_enabled()) {
> +        return false;
> +    }
> +
> +    netdev = netdev_ports_get(netdev_flow->flow.in_port.odp_port, dp->class);
> +    if (!netdev) {
> +        return false;
> +    }
> +    /* Taking a global 'port_mutex' to fulfill thread safety
> +     * restrictions for the netdev-offload-dpdk module. */
> +    ovs_mutex_lock(&dp->port_mutex);

Found a bug with taking a mutex here.
Please, take a look at the fix: https://patchwork.ozlabs.org/patch/1223746/

Ian, I'd like to hear your opinion on the fix too, if possible.

Best regards, Ilya Maximets.

Patch

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 24218210d..7ec217f39 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -3030,10 +3030,49 @@  dp_netdev_pmd_find_flow(const struct dp_netdev_pmd_thread *pmd,
     return NULL;
 }
 
+static bool
+dpif_netdev_get_flow_offload_status(const struct dp_netdev *dp,
+                                    const struct dp_netdev_flow *netdev_flow,
+                                    struct dpif_flow_stats *stats,
+                                    struct dpif_flow_attrs *attrs)
+{
+    struct nlattr *actions;
+    struct ofpbuf wbuffer;
+    struct netdev *netdev;
+    struct match match;
+
+    int ret = 0;
+
+    if (!netdev_is_flow_api_enabled()) {
+        return false;
+    }
+
+    netdev = netdev_ports_get(netdev_flow->flow.in_port.odp_port, dp->class);
+    if (!netdev) {
+        return false;
+    }
+    /* Taking a global 'port_mutex' to fulfill thread safety
+     * restrictions for the netdev-offload-dpdk module. */
+    ovs_mutex_lock(&dp->port_mutex);
+    ret = netdev_flow_get(netdev, &match, &actions, &netdev_flow->mega_ufid,
+                          stats, attrs, &wbuffer);
+    ovs_mutex_unlock(&dp->port_mutex);
+    netdev_close(netdev);
+    if (ret) {
+        return false;
+    }
+
+    return true;
+}
+
 static void
-get_dpif_flow_stats(const struct dp_netdev_flow *netdev_flow_,
-                    struct dpif_flow_stats *stats)
+get_dpif_flow_status(const struct dp_netdev *dp,
+                     const struct dp_netdev_flow *netdev_flow_,
+                     struct dpif_flow_stats *stats,
+                     struct dpif_flow_attrs *attrs)
 {
+    struct dpif_flow_stats offload_stats;
+    struct dpif_flow_attrs offload_attrs;
     struct dp_netdev_flow *netdev_flow;
     unsigned long long n;
     long long used;
@@ -3049,6 +3088,21 @@  get_dpif_flow_stats(const struct dp_netdev_flow *netdev_flow_,
     stats->used = used;
     atomic_read_relaxed(&netdev_flow->stats.tcp_flags, &flags);
     stats->tcp_flags = flags;
+
+    if (dpif_netdev_get_flow_offload_status(dp, netdev_flow,
+                                            &offload_stats, &offload_attrs)) {
+        stats->n_packets += offload_stats.n_packets;
+        stats->n_bytes += offload_stats.n_bytes;
+        stats->used = MAX(stats->used, offload_stats.used);
+        stats->tcp_flags |= offload_stats.tcp_flags;
+        if (attrs) {
+            attrs->offloaded = offload_attrs.offloaded;
+            attrs->dp_layer = offload_attrs.dp_layer;
+        }
+    } else if (attrs) {
+        attrs->offloaded = false;
+        attrs->dp_layer = "ovs";
+    }
 }
 
 /* Converts to the dpif_flow format, using 'key_buf' and 'mask_buf' for
@@ -3056,7 +3110,8 @@  get_dpif_flow_stats(const struct dp_netdev_flow *netdev_flow_,
  * 'mask_buf'. Actions will be returned without copying, by relying on RCU to
  * protect them. */
 static void
-dp_netdev_flow_to_dpif_flow(const struct dp_netdev_flow *netdev_flow,
+dp_netdev_flow_to_dpif_flow(const struct dp_netdev *dp,
+                            const struct dp_netdev_flow *netdev_flow,
                             struct ofpbuf *key_buf, struct ofpbuf *mask_buf,
                             struct dpif_flow *flow, bool terse)
 {
@@ -3099,10 +3154,8 @@  dp_netdev_flow_to_dpif_flow(const struct dp_netdev_flow *netdev_flow,
     flow->ufid = netdev_flow->ufid;
     flow->ufid_present = true;
     flow->pmd_id = netdev_flow->pmd_id;
-    get_dpif_flow_stats(netdev_flow, &flow->stats);
 
-    flow->attrs.offloaded = false;
-    flow->attrs.dp_layer = "ovs";
+    get_dpif_flow_status(dp, netdev_flow, &flow->stats, &flow->attrs);
 }
 
 static int
@@ -3205,8 +3258,8 @@  dpif_netdev_flow_get(const struct dpif *dpif, const struct dpif_flow_get *get)
         netdev_flow = dp_netdev_pmd_find_flow(pmd, get->ufid, get->key,
                                               get->key_len);
         if (netdev_flow) {
-            dp_netdev_flow_to_dpif_flow(netdev_flow, get->buffer, get->buffer,
-                                        get->flow, false);
+            dp_netdev_flow_to_dpif_flow(dp, netdev_flow, get->buffer,
+                                        get->buffer, get->flow, false);
             error = 0;
             break;
         } else {
@@ -3381,7 +3434,7 @@  flow_put_on_pmd(struct dp_netdev_pmd_thread *pmd,
                                   put->actions, put->actions_len);
 
             if (stats) {
-                get_dpif_flow_stats(netdev_flow, stats);
+                get_dpif_flow_status(pmd->dp, netdev_flow, stats, NULL);
             }
             if (put->flags & DPIF_FP_ZERO_STATS) {
                 /* XXX: The userspace datapath uses thread local statistics
@@ -3500,7 +3553,7 @@  flow_del_on_pmd(struct dp_netdev_pmd_thread *pmd,
                                           del->key_len);
     if (netdev_flow) {
         if (stats) {
-            get_dpif_flow_stats(netdev_flow, stats);
+            get_dpif_flow_status(pmd->dp, netdev_flow, stats, NULL);
         }
         dp_netdev_pmd_remove_flow(pmd, netdev_flow);
     } else {
@@ -3634,13 +3687,13 @@  dpif_netdev_flow_dump_next(struct dpif_flow_dump_thread *thread_,
         = dpif_netdev_flow_dump_thread_cast(thread_);
     struct dpif_netdev_flow_dump *dump = thread->dump;
     struct dp_netdev_flow *netdev_flows[FLOW_DUMP_MAX_BATCH];
+    struct dpif_netdev *dpif = dpif_netdev_cast(thread->up.dpif);
+    struct dp_netdev *dp = get_dp_netdev(&dpif->dpif);
     int n_flows = 0;
     int i;
 
     ovs_mutex_lock(&dump->mutex);
     if (!dump->status) {
-        struct dpif_netdev *dpif = dpif_netdev_cast(thread->up.dpif);
-        struct dp_netdev *dp = get_dp_netdev(&dpif->dpif);
         struct dp_netdev_pmd_thread *pmd = dump->cur_pmd;
         int flow_limit = MIN(max_flows, FLOW_DUMP_MAX_BATCH);
 
@@ -3697,7 +3750,7 @@  dpif_netdev_flow_dump_next(struct dpif_flow_dump_thread *thread_,
 
         ofpbuf_use_stack(&key, keybuf, sizeof *keybuf);
         ofpbuf_use_stack(&mask, maskbuf, sizeof *maskbuf);
-        dp_netdev_flow_to_dpif_flow(netdev_flow, &key, &mask, f,
+        dp_netdev_flow_to_dpif_flow(dp, netdev_flow, &key, &mask, f,
                                     dump->up.terse);
     }