[ovs-dev,v5,2/6] netdev-dpdk: Fix mempool names to reflect socket id.

Message ID 1507737656-31627-3-git-send-email-antonio.fischetti@intel.com
State Superseded
Headers show
Series netdev-dpdk: Fix management of pre-existing mempools. | expand

Commit Message

Fischetti, Antonio Oct. 11, 2017, 4 p.m. UTC
Include the NUMA socket id in mempool names, so that a name reflects
the socket the mempool is allocated on.
This change is needed for the NUMA-awareness feature.

CC: Kevin Traynor <ktraynor@redhat.com>
CC: Aaron Conole <aconole@redhat.com>
Reported-by: Ciara Loftus <ciara.loftus@intel.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
Signed-off-by: Antonio Fischetti <antonio.fischetti@intel.com>
---
Mempool names now contain the requested socket id and look like:
"ovs_4adb057e_1_2030_20512".

Tested with DPDK 17.05.2 (from dpdk-stable branch).
NUMA-awareness feature enabled (DPDK/config/common_base).

Created a single dpdkvhostuser port.
OvS pmd-cpu-mask=FF00003     # enable cores on both NUMA nodes
QEMU core mask = 0xFC000     # cores for QEMU on NUMA node 1 only

 Before launching the VM:
 ------------------------
ovs-appctl dpif-netdev/pmd-rxq-show
shows that core #1 is serving the vhu port.

pmd thread numa_id 0 core_id 1:
        isolated : false
        port: dpdkvhostuser0    queue-id: 0

 After launching the VM:
 -----------------------
The vhu port is now managed by core #27:
pmd thread numa_id 1 core_id 27:
        isolated : false
        port: dpdkvhostuser0    queue-id: 0

The log shows that a new mempool is allocated on NUMA node 1, while
the previous one is released:

2017-10-06T14:04:55Z|00105|netdev_dpdk|DBG|Allocated "ovs_4adb057e_1_2030_20512" mempool with 20512 mbufs
2017-10-06T14:04:55Z|00106|netdev_dpdk|DBG|Releasing "ovs_4adb057e_0_2030_20512" mempool
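
A hedged aside (a sketch, not the OVS code path): because the socket id is now
part of the name, a mempool requested for the same port and MTU on a different
NUMA node gets a distinct name, so looking up pre-existing mempools by name can
never hand back the stale pool from the old node. The DPDK calls below are real
17.05 APIs; the helper and its parameters are illustrative.

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Reuse a pool if one with this exact name already exists, otherwise
     * create it on the requested NUMA socket.  Since the name encodes the
     * socket id, a socket change always takes the create path. */
    static struct rte_mempool *
    get_or_create_pool(const char *name, unsigned int n_mbufs, int socket_id)
    {
        struct rte_mempool *mp = rte_mempool_lookup(name);
        if (mp != NULL) {
            return mp;    /* same name => same hash/socket/MTU/size */
        }
        return rte_pktmbuf_pool_create(name, n_mbufs,
                                       256 /* per-lcore cache, illustrative */,
                                       0 /* private area size */,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       socket_id);
    }

For example, get_or_create_pool("ovs_4adb057e_1_2030_20512", 20512, 1) would
allocate the pool from the log line above on socket 1.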
---
 lib/netdev-dpdk.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

Comments

Mark Kavanagh Oct. 13, 2017, 2:50 p.m. UTC | #1
>From: ovs-dev-bounces@openvswitch.org [mailto:ovs-dev-bounces@openvswitch.org]
>On Behalf Of antonio.fischetti@intel.com
>Sent: Wednesday, October 11, 2017 5:01 PM
>To: dev@openvswitch.org
>Subject: [ovs-dev] [PATCH v5 2/6] netdev-dpdk: Fix mempool names to reflect
>socket id.
>
>Include the NUMA socket id in mempool names, so that a name reflects
>the socket the mempool is allocated on.
>This change is needed for the NUMA-awareness feature.

Looks good to me.
-Mark

Patch

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index e6f3ca4..e664931 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -499,8 +499,8 @@  dpdk_mp_name(struct dpdk_mp *dmp)
 {
     uint32_t h = hash_string(dmp->if_name, 0);
     char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
-    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
-                       h, dmp->mtu, dmp->mp_size);
+    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
+                       h, dmp->socket_id, dmp->mtu, dmp->mp_size);
     if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
         return NULL;
     }
@@ -535,9 +535,10 @@  dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool *mp_exists)
         char *mp_name = dpdk_mp_name(dmp);
 
         VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
-                 "with %d Rx and %d Tx queues.",
+                 "with %d Rx and %d Tx queues, socket id:%d.",
                  dmp->mp_size, dev->up.name,
-                 dev->requested_n_rxq, dev->requested_n_txq);
+                 dev->requested_n_rxq, dev->requested_n_txq,
+                 dev->requested_socket_id);
 
         dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
                                           MP_CACHE_SZ,