From patchwork Mon Jul 24 11:58:34 2017
X-Patchwork-Submitter: Ilya Maximets
X-Patchwork-Id: 792754
To: Darrell Ball, "dev@openvswitch.org"
From: Ilya Maximets
Message-id: <6d2ad4c3-584b-efe8-3633-044d0f3790f2@samsung.com>
Date: Mon, 24 Jul 2017 14:58:34 +0300
In-reply-to: <77AA8328-C16D-4EBD-817D-45921325CC13@vmware.com>
References: <1496153546-20532-1-git-send-email-i.maximets@samsung.com>
 <1496153546-20532-3-git-send-email-i.maximets@samsung.com>
 <66DD3FE4-1DD1-4E96-889C-670B2AAE8287@vmware.com>
 <0ac5b953-d6e7-f3e9-4ef0-f11ddd2e4b71@samsung.com>
 <122FD58C-0968-4B3A-8C58-7CEF8A7952E9@vmware.com>
 <77AA8328-C16D-4EBD-817D-45921325CC13@vmware.com>
Cc: Heetae Ahn
Subject: Re: [ovs-dev] [PATCH v2 2/3] dpif-netdev: Avoid port's reconfiguration
 on pmd-cpu-mask changes.

On 23.07.2017 20:20, Darrell Ball wrote:
> Comments inline but this is mostly repetitive.
>
> -----Original Message-----
> From: Ilya Maximets
> Date: Monday, July 10, 2017 at 11:55 PM
> To: Darrell Ball, "dev@openvswitch.org"
> Cc: Heetae Ahn, Daniele Di Proietto, Ben Pfaff, Pravin Shelar, Ciara Loftus,
>     Ian Stokes, Kevin Traynor
> Subject: Re: [PATCH v2 2/3] dpif-netdev: Avoid port's reconfiguration on pmd-cpu-mask changes.
>
> On 11.07.2017 05:10, Darrell Ball wrote:
> >
> > On 7/10/17, 12:41 AM, "Ilya Maximets" wrote:
> >
> > On 07.07.2017 21:09, Darrell Ball wrote:
> > >
> > > On 7/6/17, 11:11 PM, "Ilya Maximets" wrote:
> > >
> > > On 07.07.2017 08:08, Darrell Ball wrote:
> > > >
> > > > On 5/30/17, 7:12 AM, "Ilya Maximets" wrote:
> > > >
> > > > Reconfiguration of HW NICs may lead to packet drops.
> > > > In the current model all physical ports will be reconfigured each
> > > > time the number of PMD threads changes. Since we are not stopping
> > > > threads on pmd-cpu-mask changes, this patch will help to further
> > > > decrease port's downtime by setting the maximum possible number
> > > > of wanted tx queues to avoid unnecessary reconfigurations.
> > > >
> > > > Signed-off-by: Ilya Maximets
> > > > ---
> > > >  lib/dpif-netdev.c | 26 +++++++++++++++++++++-----
> > > >  1 file changed, 21 insertions(+), 5 deletions(-)
> > > >
> > > > diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> > > > index 596d133..79db770 100644
> > > > --- a/lib/dpif-netdev.c
> > > > +++ b/lib/dpif-netdev.c
> > > > @@ -3453,7 +3453,7 @@ reconfigure_datapath(struct dp_netdev *dp)
> > > >  {
> > > >      struct dp_netdev_pmd_thread *pmd;
> > > >      struct dp_netdev_port *port;
> > > > -    int wanted_txqs;
> > > > +    int needed_txqs, wanted_txqs;
> > > >
> > > >      dp->last_reconfigure_seq = seq_read(dp->reconfigure_seq);
> > > >
> > > > @@ -3461,7 +3461,15 @@ reconfigure_datapath(struct dp_netdev *dp)
> > > >       * on the system and the user configuration. */
> > > >      reconfigure_pmd_threads(dp);
> > > >
> > > > -    wanted_txqs = cmap_count(&dp->poll_threads);
> > > > +    /* We need 1 Tx queue for each thread to avoid locking, but we will try
> > > > +     * to allocate the maximum possible value to minimize the number of port
> > > > +     * reconfigurations. */
> > > > +    needed_txqs = cmap_count(&dp->poll_threads);
> > > > +    /* (n_cores + 1) is the maximum that we might need to have.
> > > > +     * Additional queue is for non-PMD threads. */
> > > > +    wanted_txqs = ovs_numa_get_n_cores();
> > > > +    ovs_assert(wanted_txqs != OVS_CORE_UNSPEC);
> > > > +    wanted_txqs++;
> > > >
> > > > I don't think PMD mask changes are common, so this patch is trying to optimize a
> > > > rare disruptive event that can/will be scheduled by the administrator.
> > > >
> > > > Based on the actual number of queues supported and the number of cores present,
> > > > this optimization may or may not work. It is unpredictable whether there will be a benefit
> > > > in a particular case from the user's POV.
> > > > If I were the administrator, I would try to err on the conservative side anyway if I could
> > > > not predict the result.
> > > >
> > > > Did I miss something?
> > >
> > > In an NFV environment, if you want to add one more VM to your hosts you will have to
> > > choose between:
> > >
> > >   * not creating a new PMD thread
> > >     -> performance degradation of networking for other working VMs
> > >
> > >   * adding a new PMD thread
> > >     -> disruption of the networking for the whole host for the time
> > >        of OVS reconfiguration.
> > >
> > > This patch removes the cost of the second option.
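To make the numbers in the hunk above concrete (a purely hypothetical host, not
something taken from the patch itself): with 16 cores and a pmd-cpu-mask covering
2 of them, dp->poll_threads holds the 2 PMD threads plus the non-PMD thread, so:

    /* Hypothetical 16-core host, 2 cores in pmd-cpu-mask. */
    needed_txqs = cmap_count(&dp->poll_threads);   /* = 3: one Tx queue per
                                                    * thread avoids Tx locking. */
    wanted_txqs = ovs_numa_get_n_cores() + 1;      /* = 17: enough even if all
                                                    * 16 cores get PMD threads. */

A NIC that accepts 17 Tx queues is then configured once and is never touched again
by pmd-cpu-mask changes; a NIC that ends up with fewer than 3 Tx queues gets
dynamic_txqs (XPS) exactly as it would on current master.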
> > >
> > > In general, adding a PMD thread per VM may not always (or even usually) make sense.
> >
> > Not per VM. Let's assume that all existing threads are already overloaded.
> >
> > That would be less common; it is not the usual case.
>
> We have different points of view. We need more opinions on that.
>
> > >
> > > There are use cases for sure, but using a PMD core per VM is often viewed as DPDK using too many
> > > CPU resources and limiting the adoption of DPDK.
> > >
> > > Furthermore, for DPDK gateways it is irrelevant.
> >
> > Disagree with that statement.
> >
> > DPDK gateways should not usually require changing pmd-cpu-mask when a VM is added to a hypervisor
> > somewhere. Sometimes you need to deploy more resources for sure, but this should be much less
> > frequent than adding or removing VMs.
> > The point is the user should be rarely changing the pmd-cpu-mask and this can be done during
> > a maintenance window when needed.
> >
> > >
> > > I don't understand what you mean by saying 'unpredictable benefit'. The benefit is clear
> > > and this optimization will work, except for HW NICs with a very low number of HW queues.
> > >
> > > HW NIC interfaces carry aggregated traffic for multiple VMs etc, so these cases are most important.
> >
> > And that is exactly why this patch was implemented. This patch tries to
> > minimize packet drops on the HW NICs "shared" between VMs. It allows
> > adding new threads without breaking networking for others, by not
> > reconfiguring a device that is handling traffic for other VMs at that moment.
> >
> > I understand what this patch wants to do and I understand what the last designs wanted to
> > do as well.
> >
> > >
> > > It is the ratio of the number of cores to HW queues that matters. If queues are less than
> > > cores, then the benefit may not occur.
> >
> > That's right. But are you saying that we should not apply an optimization if
> > it covers only 50% of cases? (The percentage is actually bigger, because
> > modern NICs support quite a lot of Tx queues.)
> >
> > Nobody knows what the number is, but I know that the number of CPUs is increasing
> > and making assumptions about the relative numbers of cores vs queues can only work by luck.
>
> As you mentioned previously, you think that creating a large number of PMD
> threads is not a common case (this follows from the "adding a PMD thread per
> VM may not always (or even usually) make sense."). This means that the optimization
> from this patch will usually work, because the number of PMD threads is much less
> than the number of cores and will usually be comparable to or less than the number of queues.
>
> Again, what I previously said is that I don't expect each VM to have a dedicated PMD thread in the general case.
> The reasoning comes from the cost based on utilization. In general, the high reservation of cores for PMD threads
> is persistently mentioned as a major impediment for DPDK adoption.
>
> For DPDK gateways, I would expect anything is possible, including most cores having PMD threads running.
>
> Again, in general, I am not making any assumptions on the ratio of cores to queues, because I think these assumptions
> will break. The two parameters should be treated as independent parameters, which they are.
> This patch makes assumptions about the ratio of these two independent parameters.
>
> And this will work now, not in some ephemeral future where DPDK will return real
> numbers for available queues. In the future this may work too, because it is not
> only the number of CPU cores that is increasing.
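For readers following the cores-vs-queues argument above, what actually changes
when a port ends up with fewer Tx queues than threads is only the send path.
Roughly (a simplified sketch of the dpif-netdev output path, with argument lists
abbreviated; this is not the literal code):

    if (!port->dynamic_txqs) {
        /* Enough Tx queues: each thread owns its own queue id,
         * so netdev_send() needs no locking. */
        tx_qid = pmd->static_tx_qid;
    } else {
        /* Fewer Tx queues than threads (XPS): a queue id is picked at send
         * time and may be shared, so the send must take the queue lock. */
        tx_qid = dpif_netdev_xps_get_tx_qid(pmd, ...);
    }
    netdev_send(port->netdev, tx_qid, ..., port->dynamic_txqs);

That per-packet locking, not the reconfiguration itself, is the cost of having
too few queues, and it is the same with or without this patch.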
>
> Again, the issue here is not whether it will work in some cases. The issue here is to give
> feedback to the user when it might work. Changing the pmd-cpu-mask is an infrequent administrative
> command used on a 'non-experimental product', so the user should have feedback on what
> the result of the command would be.
> I might be ok with this patch if OVS-DPDK were still labelled as experimental, but I would not
> want to go back to the experimental tag if possible.
>
> > It is inherently unpredictable, and unpredictability for a user-initiated/controlled change is ugly.
> > This is even worse when the user-initiated change is rare, because the user does not remember
> > the full context of the command.
> >
> > >
> > > This patch makes assumptions about the relative number of cores vs the number of queues.
> > > These two parameters should be treated as independent - this part really bothers me.
> >
> > It doesn't make any assumptions, it just optimizes one of 2 possible situations.
> > If the number of queues is less, the behaviour will be exactly the same as in
> > current master.
> >
> > We are repeating the same thing.
> >
> > >
> > > I think we could solve this in a coordinated generic fashion with a user management command
> > > that would also give feedback to the user as to the possible success and maybe provide options.
> > > I wonder if we will eventually go in that direction, in the future, anyway?
> >
> > I don't think so. We don't need a user management command because the
> > only sane values for the number of queues right now are the number of threads
> > or the maximum number of available queues in hardware. We're using the
> > first one only because the HW is not able to say how many queues it is able to
> > successfully configure.
> >
> > The fact that hardware cannot always report the correct number of queues is an issue
> > that will get solved over time and should be part of what a user command reports
> > (e.g. "unknown queue number, therefore it is unknown whether a pmd-cpu-mask change
> > will be hitless; issue the following command 'X' when you would like to make the change").
>
> Reconfiguration will never be hitless if we are not even trying to make
> it more lightweight.
>
> > A wrapper user command(s) would provide visibility and predictability to the user.
> > i.e. we can do better.
>
> *Why do we need a configuration knob which can only decrease performance?*
>
> This will end up in a situation where users will just be forced to configure the
> number of Tx queues for each port equal to the number of cores in the CPU mask.
> Or they will just set it to 1000 for all ports to be sure that the maximum
> possible value was configured.
>
> The key aspect here is about giving feedback to the user about the result of a potential
> pmd-cpu-mask change. I think it can be done with the results being one of three possibilities.
> 1/ Command will be hitless.
> 2/ Command will not be hitless.
> 3/ Command hitless impact is unknown; assume traffic will be affected.
>
> This should be supported by documentation.
>
> With this patch as is, the result is unknown in all cases.

How about the following incremental (the diff itself follows at the end of this
message, after the quoted patch):

--8<----------------------------------------------------------------------->8--
--8<----------------------------------------------------------------------->8--

This will give an administrator all the required information to predict the
performance impact of pmd-cpu-mask changes.
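For example (the port name and queue counts here are hypothetical), a NIC that
exposes only 4 Tx queues on a 16-core host would log something along these lines:

    dpif_netdev|WARN|Port 'dpdk0' has low number of Tx queues. Traffic through
    this port may be affected during reconfiguration if the number of PMD
    threads is changed to greater than 3.

i.e. as long as the administrator keeps the number of PMD threads at 3 or below,
a pmd-cpu-mask change will not touch this port.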
>
> And this option will have to configure the maximum possible value less than or equal
> to the passed one, because we have no way to find the maximum value until DPDK is
> fixed (which is actually the hard problem).
> Otherwise, they will have to bisect it, and that is the ugly thing.
>
> 'IF' there are issues with rte_eth_dev_info_get(), 'in some modes', with returning the real number
> of hardware queues, then this can be characterized (meaning in which cases) and fixed.
> In such a case, the feedback to the user should be conservative and fall into '3' above.
>
> > This patch allows us to use the second value in all cases where possible.
> >
> > But the situation will never be worse than without this patch anyway.
> >
> > >
> > > About PMD mask changes:
> > > Even one change in a week can be disruptive in a highly loaded NFV environment.
> > > Also, these changes can be performed automatically by monitoring tools during
> > > VM migrations or when rebalancing the load between PMD threads to free CPU
> > > cores for other applications/VMs.
> > >
> > > >
> > > >      /* The number of pmd threads might have changed, or a port can be new:
> > > >       * adjust the txqs. */
> > > > @@ -3474,9 +3482,17 @@ reconfigure_datapath(struct dp_netdev *dp)
> > > >
> > > >      /* Check for all the ports that need reconfiguration. We cache this in
> > > >       * 'port->reconfigure', because netdev_is_reconf_required() can change at
> > > > -     * any time. */
> > > > +     * any time.
> > > > +     * Also mark for reconfiguration all ports which will likely change their
> > > > +     * 'dynamic_txqs' parameter. It's required to stop using them before
> > > > +     * changing this setting and it's simpler to mark ports here and allow
> > > > +     * 'pmd_remove_stale_ports' to remove them from threads. There will be
> > > > +     * no actual reconfiguration in 'port_reconfigure' because it's
> > > > +     * unnecessary. */
> > > >      HMAP_FOR_EACH (port, node, &dp->ports) {
> > > > -        if (netdev_is_reconf_required(port->netdev)) {
> > > > +        if (netdev_is_reconf_required(port->netdev)
> > > > +            || (port->dynamic_txqs
> > > > +                != netdev_n_txq(port->netdev) < needed_txqs)) {
> > > >              port->need_reconfigure = true;
> > > >          }
> > > >      }
> > > > @@ -3510,7 +3526,7 @@ reconfigure_datapath(struct dp_netdev *dp)
> > > >              seq_change(dp->port_seq);
> > > >              port_destroy(port);
> > > >          } else {
> > > > -            port->dynamic_txqs = netdev_n_txq(port->netdev) < wanted_txqs;
> > > > +            port->dynamic_txqs = netdev_n_txq(port->netdev) < needed_txqs;
> > > >          }
> > > >      }
> > > >
> > > > --
> > > > 2.7.4
>

diff --git a/Documentation/intro/install/dpdk.rst b/Documentation/intro/install/dpdk.rst
index a05aa1a..0462784 100644
--- a/Documentation/intro/install/dpdk.rst
+++ b/Documentation/intro/install/dpdk.rst
@@ -458,8 +458,19 @@ affinitized accordingly.
     $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x4
 
 .. note::
-  pmd thread on a NUMA node is only created if there is at least one DPDK
-  interface from that NUMA node added to OVS.
+  - pmd thread on a NUMA node is only created if there is at least one DPDK
+    interface from that NUMA node added to OVS.
+
+  - Addition/deletion of PMD threads may be hitless for Rx queues pinned via
+    ``pmd-rxq-affinity`` if the threads being deleted do not handle them
+    and the ports have only one or a big enough number of Tx queues. If a
+    device's performance could be affected during ``pmd-cpu-mask`` changes,
+    a warning appears in the log: ::
+
+      Port 'port' has low number of Tx queues. Traffic through this port
+      may be affected during reconfiguration if the number of PMD threads
+      is changed to { less than or equal to | greater than } N.
+
 - QEMU vCPU thread Affinity
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 74d3535..2a92d86 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -3531,7 +3531,19 @@ reconfigure_datapath(struct dp_netdev *dp)
             seq_change(dp->port_seq);
             port_destroy(port);
         } else {
-            port->dynamic_txqs = netdev_n_txq(port->netdev) < needed_txqs;
+            int n_txq = netdev_n_txq(port->netdev);
+
+            port->dynamic_txqs = n_txq < needed_txqs;
+
+            if (n_txq > 1 && n_txq < wanted_txqs) {
+                VLOG_WARN("Port '%s' has low number of Tx queues. "
+                          "Traffic through this port may be affected during "
+                          "reconfiguration if the number of PMD threads is "
+                          "changed to %s %d.", netdev_get_name(port->netdev),
+                          port->dynamic_txqs ? "less than or equal to"
+                                             : "greater than",
+                          n_txq - 1);
+            }
         }
     }
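With the incremental applied, a cautious workflow could look like the following,
in the same style as the existing dpdk.rst examples (the log path and mask value
are only an example; the exact log location depends on the installation):

    $ grep 'low number of Tx queues' /var/log/openvswitch/ovs-vswitchd.log
    $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3C

If no such warning was logged for a port, the mask change should not reconfigure
that port at all; if one was, the number printed at the end of the warning tells
whether the planned number of PMD threads crosses the threshold.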