[ovs-dev,v2] netdev-dpdk: Remove use of rte_mempool_ops_get_count.

Message ID 1527082890-18995-1-git-send-email-ktraynor@redhat.com
State Accepted
Delegated to: Ian Stokes
Series [ovs-dev,v2] netdev-dpdk: Remove use of rte_mempool_ops_get_count.

Commit Message

Kevin Traynor May 23, 2018, 1:41 p.m. UTC
rte_mempool_ops_get_count is not exported by DPDK, so it cannot be
used by OVS when DPDK is used as a shared library.

Remove rte_mempool_ops_get_count but still use rte_mempool_full
and document its behavior.

Fixes: 91fccdad72a2 ("netdev-dpdk: Free mempool only when no in-use mbufs.")
Reported-by: Timothy Redaelli <tredaelli@redhat.com>
Reported-by: Markos Chandras <mchandras@suse.de>
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
---
 lib/netdev-dpdk.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)
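
For background on the rte_mempool_full() behavior the new comment
documents: in the DPDK releases of this era it reduces to comparing the
pool's available count against its size, with the common ring counted
before the per-lcore caches. The sketch below is illustrative only (the
real function is a static inline in rte_mempool.h, and mp_full_sketch
is a made-up name), but it shows why the counting order matters here:

#include <rte_mempool.h>

/* Illustrative sketch, not the DPDK source.  rte_mempool_avail_count()
 * reads the common ring first, then sums the per-lcore caches.  Since
 * OVS is no longer taking mbufs from this pool, an mbuf can only move
 * cache -> ring between those two reads, so it is never counted twice
 * (no false positive); at worst it is missed entirely (a false
 * negative), which only delays the free until a later check. */
static int
mp_full_sketch(const struct rte_mempool *mp)
{
    return rte_mempool_avail_count(mp) == mp->size;
}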

Comments

Stokes, Ian May 23, 2018, 3:56 p.m. UTC | #1
> rte_mempool_ops_get_count is not exported by DPDK, so it cannot be
> used by OVS when DPDK is used as a shared library.
> 
> Remove rte_mempool_ops_get_count but still use rte_mempool_full
> and document its behavior.
> 

Thanks for the v2, I've merged this to dpdk_merge and backported it. It will be part of this weeks pull request.

Ian

Patch

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 87152a7..719140f 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -530,17 +530,18 @@ static int
 dpdk_mp_full(const struct rte_mempool *mp) OVS_REQUIRES(dpdk_mp_mutex)
 {
-    unsigned ring_count;
-    /* This logic is needed because rte_mempool_full() is not guaranteed to
-     * be atomic and mbufs could be moved from mempool cache --> mempool ring
-     * during the call. However, as no mbufs will be taken from the mempool
-     * at this time, we can work around it by also checking the ring entries
-     * separately and ensuring that they have not changed.
+    /* At this point we want to know if all the mbufs are back
+     * in the mempool. rte_mempool_full() is not atomic but it's
+     * the best available and as we are no longer requesting mbufs
+     * from the mempool, it means mbufs will not move from
+     * 'mempool ring' --> 'mempool cache'. In rte_mempool_full()
+     * the ring is counted before caches, so we won't get false
+     * positives in this use case and we handle false negatives.
+     *
+     * If future implementations of rte_mempool_full() were to change,
+     * a false positive could become possible. Even that would likely
+     * be ok, as there are additional checks during mempool freeing,
+     * but it would make things racy.
      */
-    ring_count = rte_mempool_ops_get_count(mp);
-    if (rte_mempool_full(mp) && rte_mempool_ops_get_count(mp) == ring_count) {
-        return 1;
-    }
-
-    return 0;
+    return rte_mempool_full(mp);
 }
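
For completeness, a minimal sketch of how the full-pool check is
consumed: the mempool sweep introduced by commit 91fccdad72a2 re-checks
each candidate pool periodically, so a false negative from
rte_mempool_full() merely postpones the free to a later sweep. The
names below (dpdk_mp_sweep_sketch, dpdk_mp_free_list, struct dpdk_mp)
are simplified stand-ins for the netdev-dpdk internals, and the
dpdk_mp_mutex locking is omitted for brevity; this is not the actual
OVS code.

#include <rte_malloc.h>
#include <rte_mempool.h>
#include "openvswitch/list.h"

/* Simplified stand-in for the real bookkeeping in netdev-dpdk.c. */
struct dpdk_mp {
    struct rte_mempool *mp;
    struct ovs_list list_node;
};

static struct ovs_list dpdk_mp_free_list
    = OVS_LIST_INITIALIZER(&dpdk_mp_free_list);

static void
dpdk_mp_sweep_sketch(void)
{
    struct dpdk_mp *dmp, *next;

    LIST_FOR_EACH_SAFE (dmp, next, list_node, &dpdk_mp_free_list) {
        if (rte_mempool_full(dmp->mp)) {
            /* All mbufs are back in the pool, so no in-flight packet
             * still references it and freeing is safe.  A false
             * negative above simply leaves the pool on the list for
             * the next sweep. */
            ovs_list_remove(&dmp->list_node);
            rte_mempool_free(dmp->mp);
            rte_free(dmp);
        }
    }
}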