| Message ID | 1508342491-21949-3-git-send-email-antonio.fischetti@intel.com |
|---|---|
| State | Superseded |
| Series | netdev-dpdk: Fix mempool management and other cleanup. |
On 10/18/2017 05:01 PM, antonio.fischetti@intel.com wrote:
> Create mempool names by considering also the NUMA socket number.
> So a name reflects what socket the mempool is allocated on.
> This change is needed for the NUMA-awareness feature.
>
> CC: Mark B Kavanagh <mark.b.kavanagh@intel.com>
> CC: Kevin Traynor <ktraynor@redhat.com>
> CC: Aaron Conole <aconole@redhat.com>
> Reported-by: Ciara Loftus <ciara.loftus@intel.com>
> Tested-by: Ciara Loftus <ciara.loftus@intel.com>
> Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
> Signed-off-by: Antonio Fischetti <antonio.fischetti@intel.com>

If you have to re-spin anyway I'd make the debug sentence a little more
natural sounding, but it's ok as is and I don't think it's worth
re-spinning just for that.

Otherwise LGTM.
Acked-by: Kevin Traynor <ktraynor@redhat.com>

> ---
> Mempool names now contains the requested socket id and become like:
> "ovs_4adb057e_1_2030_20512".
>
> Tested with DPDK 17.05.2 (from dpdk-stable branch).
> NUMA-awareness feature enabled (DPDK/config/common_base).
>
> Created 1 single dpdkvhostuser port type.
> OvS pmd-cpu-mask=FF00003   # enable cores on both numa nodes
> QEMU core mask = 0xFC000   # cores for qemu on numa node 1 only
>
> Before launching the VM:
> ------------------------
> ovs-appctl dpif-netdev/pmd-rxq-show
> shows core #1 is serving the vhu port.
>
> pmd thread numa_id 0 core_id 1:
>   isolated : false
>   port: dpdkvhostuser0   queue-id: 0
>
> After launching the VM:
> -----------------------
> the vhu port is now managed by core #27
> pmd thread numa_id 1 core_id 27:
>   isolated : false
>   port: dpdkvhostuser0   queue-id: 0
>
> and the log shows a new mempool is allocated on NUMA node 1, while
> the previous one is deleted:
>
> 2017-10-06T14:04:55Z|00105|netdev_dpdk|DBG|Allocated "ovs_4adb057e_1_2030_20512" mempool with 20512 mbufs
> 2017-10-06T14:04:55Z|00106|netdev_dpdk|DBG|Releasing "ovs_4adb057e_0_2030_20512" mempool
> ---
> ---
>  lib/netdev-dpdk.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index d49afd8..3155505 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -499,8 +499,8 @@ dpdk_mp_name(struct dpdk_mp *dmp)
>  {
>      uint32_t h = hash_string(dmp->if_name, 0);
>      char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
> -    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
> -                       h, dmp->mtu, dmp->mp_size);
> +    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
> +                       h, dmp->socket_id, dmp->mtu, dmp->mp_size);
>      if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
>          return NULL;
>      }
> @@ -535,9 +535,10 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool *mp_exists)
>      char *mp_name = dpdk_mp_name(dmp);
>
>      VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
> -             "with %d Rx and %d Tx queues.",
> +             "with %d Rx and %d Tx queues, socket id:%d.",
>              dmp->mp_size, dev->up.name,
> -             dev->requested_n_rxq, dev->requested_n_txq);
> +             dev->requested_n_rxq, dev->requested_n_txq,
> +             dev->requested_socket_id);
>
>      dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
>                                        MP_CACHE_SZ,
>
Thanks Kevin for your review.
I could re-spin a new version to take into account your comments on patch #6.
I was wondering about a better way to rephrase the debug message in this
patch. Would something like this be ok:

    VLOG_DBG("Port %s: Requesting a mempool of %u mbufs "
             "on socket %d for %d Rx and %d Tx queues.",
             dev->up.name, dmp->mp_size,
             dev->requested_socket_id,
             dev->requested_n_rxq, dev->requested_n_txq);

If you have any suggestions, let me know.

-Antonio

> -----Original Message-----
> From: Kevin Traynor [mailto:ktraynor@redhat.com]
> Sent: Thursday, October 19, 2017 11:33 AM
> To: Fischetti, Antonio <antonio.fischetti@intel.com>; dev@openvswitch.org
> Cc: Kavanagh, Mark B <mark.b.kavanagh@intel.com>; Aaron Conole
> <aconole@redhat.com>
> Subject: Re: [PATCH v7 2/6] netdev-dpdk: Fix mempool names to reflect socket
> id.
>
> On 10/18/2017 05:01 PM, antonio.fischetti@intel.com wrote:
> > Create mempool names by considering also the NUMA socket number.
> > So a name reflects what socket the mempool is allocated on.
> > This change is needed for the NUMA-awareness feature.
> >
> > CC: Mark B Kavanagh <mark.b.kavanagh@intel.com>
> > CC: Kevin Traynor <ktraynor@redhat.com>
> > CC: Aaron Conole <aconole@redhat.com>
> > Reported-by: Ciara Loftus <ciara.loftus@intel.com>
> > Tested-by: Ciara Loftus <ciara.loftus@intel.com>
> > Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each
> port.")
> > Signed-off-by: Antonio Fischetti <antonio.fischetti@intel.com>
>
> If you have to re-spin anyway I'd make the debug sentence a little more
> natural sounding, but it's ok as is and I don't think it's worth
> re-spinning just for that.
>
> Otherwise LGTM.
> Acked-by: Kevin Traynor <ktraynor@redhat.com>
>
>
> > ---
> > Mempool names now contains the requested socket id and become like:
> > "ovs_4adb057e_1_2030_20512".
> >
> > Tested with DPDK 17.05.2 (from dpdk-stable branch).
> > NUMA-awareness feature enabled (DPDK/config/common_base).
> >
> > Created 1 single dpdkvhostuser port type.
> > OvS pmd-cpu-mask=FF00003   # enable cores on both numa nodes
> > QEMU core mask = 0xFC000   # cores for qemu on numa node 1 only
> >
> > Before launching the VM:
> > ------------------------
> > ovs-appctl dpif-netdev/pmd-rxq-show
> > shows core #1 is serving the vhu port.
> >
> > pmd thread numa_id 0 core_id 1:
> >   isolated : false
> >   port: dpdkvhostuser0   queue-id: 0
> >
> > After launching the VM:
> > -----------------------
> > the vhu port is now managed by core #27
> > pmd thread numa_id 1 core_id 27:
> >   isolated : false
> >   port: dpdkvhostuser0   queue-id: 0
> >
> > and the log shows a new mempool is allocated on NUMA node 1, while
> > the previous one is deleted:
> >
> > 2017-10-06T14:04:55Z|00105|netdev_dpdk|DBG|Allocated
> "ovs_4adb057e_1_2030_20512" mempool with 20512 mbufs
> > 2017-10-06T14:04:55Z|00106|netdev_dpdk|DBG|Releasing
> "ovs_4adb057e_0_2030_20512" mempool
> > ---
> > ---
> >  lib/netdev-dpdk.c | 9 +++++----
> >  1 file changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> > index d49afd8..3155505 100644
> > --- a/lib/netdev-dpdk.c
> > +++ b/lib/netdev-dpdk.c
> > @@ -499,8 +499,8 @@ dpdk_mp_name(struct dpdk_mp *dmp)
> >  {
> >      uint32_t h = hash_string(dmp->if_name, 0);
> >      char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
> > -    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
> > -                       h, dmp->mtu, dmp->mp_size);
> > +    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
> > +                       h, dmp->socket_id, dmp->mtu, dmp->mp_size);
> >      if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
> >          return NULL;
> >      }
> > @@ -535,9 +535,10 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool
> *mp_exists)
> >      char *mp_name = dpdk_mp_name(dmp);
> >
> >      VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
> > -             "with %d Rx and %d Tx queues.",
> > +             "with %d Rx and %d Tx queues, socket id:%d.",
> >              dmp->mp_size, dev->up.name,
> > -             dev->requested_n_rxq, dev->requested_n_txq);
> > +             dev->requested_n_rxq, dev->requested_n_txq,
> > +             dev->requested_socket_id);
> >
> >      dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
> >                                        MP_CACHE_SZ,
> >
On 10/19/2017 12:45 PM, Fischetti, Antonio wrote:
> Thanks Kevin for your review.
> I could re-spin a new version to take into account your comments on the patch #6.
> I was wondering a better way to rephrase the debug message in this patch.
> Would it be ok something like:
>
>     VLOG_DBG("Port %s: Requesting a mempool of %u mbufs "
>              "on socket %d for %d Rx and %d Tx queues.",
>              dev->up.name, dmp->mp_size,
>              dev->requested_socket_id,
>              dev->requested_n_rxq, dev->requested_n_txq);
>

Yep, that reads better, thanks.

> Any suggestion, let me know.
>
> -Antonio
>
>> -----Original Message-----
>> From: Kevin Traynor [mailto:ktraynor@redhat.com]
>> Sent: Thursday, October 19, 2017 11:33 AM
>> To: Fischetti, Antonio <antonio.fischetti@intel.com>; dev@openvswitch.org
>> Cc: Kavanagh, Mark B <mark.b.kavanagh@intel.com>; Aaron Conole
>> <aconole@redhat.com>
>> Subject: Re: [PATCH v7 2/6] netdev-dpdk: Fix mempool names to reflect socket
>> id.
>>
>> On 10/18/2017 05:01 PM, antonio.fischetti@intel.com wrote:
>>> Create mempool names by considering also the NUMA socket number.
>>> So a name reflects what socket the mempool is allocated on.
>>> This change is needed for the NUMA-awareness feature.
>>>
>>> CC: Mark B Kavanagh <mark.b.kavanagh@intel.com>
>>> CC: Kevin Traynor <ktraynor@redhat.com>
>>> CC: Aaron Conole <aconole@redhat.com>
>>> Reported-by: Ciara Loftus <ciara.loftus@intel.com>
>>> Tested-by: Ciara Loftus <ciara.loftus@intel.com>
>>> Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each
>> port.")
>>> Signed-off-by: Antonio Fischetti <antonio.fischetti@intel.com>
>>
>> If you have to re-spin anyway I'd make the debug sentence a little more
>> natural sounding, but it's ok as is and I don't think it's worth
>> re-spinning just for that.
>>
>> Otherwise LGTM.
>> Acked-by: Kevin Traynor <ktraynor@redhat.com>
>>
>>
>>> ---
>>> Mempool names now contains the requested socket id and become like:
>>> "ovs_4adb057e_1_2030_20512".
>>>
>>> Tested with DPDK 17.05.2 (from dpdk-stable branch).
>>> NUMA-awareness feature enabled (DPDK/config/common_base).
>>>
>>> Created 1 single dpdkvhostuser port type.
>>> OvS pmd-cpu-mask=FF00003   # enable cores on both numa nodes
>>> QEMU core mask = 0xFC000   # cores for qemu on numa node 1 only
>>>
>>> Before launching the VM:
>>> ------------------------
>>> ovs-appctl dpif-netdev/pmd-rxq-show
>>> shows core #1 is serving the vhu port.
>>>
>>> pmd thread numa_id 0 core_id 1:
>>>   isolated : false
>>>   port: dpdkvhostuser0   queue-id: 0
>>>
>>> After launching the VM:
>>> -----------------------
>>> the vhu port is now managed by core #27
>>> pmd thread numa_id 1 core_id 27:
>>>   isolated : false
>>>   port: dpdkvhostuser0   queue-id: 0
>>>
>>> and the log shows a new mempool is allocated on NUMA node 1, while
>>> the previous one is deleted:
>>>
>>> 2017-10-06T14:04:55Z|00105|netdev_dpdk|DBG|Allocated
>> "ovs_4adb057e_1_2030_20512" mempool with 20512 mbufs
>>> 2017-10-06T14:04:55Z|00106|netdev_dpdk|DBG|Releasing
>> "ovs_4adb057e_0_2030_20512" mempool
>>> ---
>>> ---
>>>  lib/netdev-dpdk.c | 9 +++++----
>>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>>> index d49afd8..3155505 100644
>>> --- a/lib/netdev-dpdk.c
>>> +++ b/lib/netdev-dpdk.c
>>> @@ -499,8 +499,8 @@ dpdk_mp_name(struct dpdk_mp *dmp)
>>>  {
>>>      uint32_t h = hash_string(dmp->if_name, 0);
>>>      char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
>>> -    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
>>> -                       h, dmp->mtu, dmp->mp_size);
>>> +    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
>>> +                       h, dmp->socket_id, dmp->mtu, dmp->mp_size);
>>>      if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
>>>          return NULL;
>>>      }
>>> @@ -535,9 +535,10 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool
>> *mp_exists)
>>>      char *mp_name = dpdk_mp_name(dmp);
>>>
>>>      VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
>>> -             "with %d Rx and %d Tx queues.",
>>> +             "with %d Rx and %d Tx queues, socket id:%d.",
>>>              dmp->mp_size, dev->up.name,
>>> -             dev->requested_n_rxq, dev->requested_n_txq);
>>> +             dev->requested_n_rxq, dev->requested_n_txq,
>>> +             dev->requested_socket_id);
>>>
>>>      dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
>>>                                        MP_CACHE_SZ,
>>>
>
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index d49afd8..3155505 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -499,8 +499,8 @@ dpdk_mp_name(struct dpdk_mp *dmp)
 {
     uint32_t h = hash_string(dmp->if_name, 0);
     char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
-    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
-                       h, dmp->mtu, dmp->mp_size);
+    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
+                       h, dmp->socket_id, dmp->mtu, dmp->mp_size);
     if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
         return NULL;
     }
@@ -535,9 +535,10 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool *mp_exists)
     char *mp_name = dpdk_mp_name(dmp);
 
     VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
-             "with %d Rx and %d Tx queues.",
+             "with %d Rx and %d Tx queues, socket id:%d.",
             dmp->mp_size, dev->up.name,
-             dev->requested_n_rxq, dev->requested_n_txq);
+             dev->requested_n_rxq, dev->requested_n_txq,
+             dev->requested_socket_id);
 
     dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
                                       MP_CACHE_SZ,