[ovs-dev,dpdk-latest,v2] netdev-dpdk: Set Vhost port maximum number of queue pairs.

Message ID 20241107165101.3244471-1-maxime.coquelin@redhat.com
State Changes Requested
Delegated to: Kevin Traynor

Checks

Context Check Description
ovsrobot/apply-robot success apply and check: success
ovsrobot/github-robot-_Build_and_Test success github build: passed

Commit Message

Maxime Coquelin Nov. 7, 2024, 4:51 p.m. UTC
This patch uses the new rte_vhost_driver_set_max_queue_num
API to set the maximum number of queue pairs supported by
the Vhost-user port.

This is required for VDUSE, which needs to specify the
maximum number of queue pairs at creation time. Without it,
metadata for 128 queue pairs would be allocated.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
Changes in v2:
==============
- Address checkpatch warnings.

---
 lib/netdev-dpdk.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
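
For reference, a minimal standalone sketch of where the new call fits in
a vhost driver setup sequence. This is an illustration only, not code
from the patch: the socket path, queue counts and error handling are
made up, and it assumes the DPDK prototype taking a socket path and a
queue-pair count.

    #include <stdio.h>
    #include <rte_vhost.h>

    /* Sketch: register a vhost-user client socket, cap its queue pairs,
     * then start the driver -- the same ordering the patch uses in
     * netdev_dpdk_vhost_client_reconfigure(). For VDUSE the cap bounds
     * the queue metadata allocated at device creation time. */
    static int
    setup_vhost_port(const char *sock_path, uint32_t n_rxq, uint32_t n_txq)
    {
        uint32_t nr_qp = n_rxq > n_txq ? n_rxq : n_txq;

        if (rte_vhost_driver_register(sock_path, RTE_VHOST_USER_CLIENT)) {
            fprintf(stderr, "register failed for %s\n", sock_path);
            return -1;
        }

        /* Must be called before rte_vhost_driver_start(). */
        if (rte_vhost_driver_set_max_queue_num(sock_path, nr_qp)) {
            rte_vhost_driver_unregister(sock_path);
            return -1;
        }

        return rte_vhost_driver_start(sock_path);
    }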

Comments

Eelco Chaudron Nov. 14, 2024, 12:45 p.m. UTC | #1
On 7 Nov 2024, at 17:51, Maxime Coquelin wrote:

> This patch uses the new rte_vhost_driver_set_max_queue_num
> API to set the maximum number of queue pairs supported by
> the Vhost-user port.
>
> This is required for VDUSE, which needs to specify the
> maximum number of queue pairs at creation time. Without it,
> metadata for 128 queue pairs would be allocated.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>

By accident, I replied to the v1, so here we go again.

Acked-by: Eelco Chaudron <echaudro@redhat.com>

//Eelco


> ---
> Changes in v2:
> ==============
> - Address checkpatch warnings.
>
> ---
>  lib/netdev-dpdk.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index e454a4a5d..966063ea7 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -2506,6 +2506,9 @@ netdev_dpdk_vhost_client_set_config(struct netdev *netdev,
>          VLOG_INFO("Max Tx retries for vhost device '%s' set to %d",
>                    netdev_get_name(netdev), max_tx_retries);
>      }
> +
> +    dpdk_set_rxq_config(dev, args);
> +
>      ovs_mutex_unlock(&dev->mutex);
>
>      return 0;
> @@ -6298,6 +6301,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
>          uint64_t virtio_unsup_features = 0;
>          uint64_t vhost_flags = 0;
>          bool enable_tso;
> +        int nr_qp;
>
>          enable_tso = userspace_tso_enabled()
>                       && dev->virtio_features_state & OVS_VIRTIO_F_CLEAN;
> @@ -6371,6 +6375,14 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
>              goto unlock;
>          }
>
> +        nr_qp = MAX(dev->requested_n_rxq, dev->requested_n_txq);
> +        err = rte_vhost_driver_set_max_queue_num(dev->vhost_id, nr_qp);
> +        if (err) {
> +            VLOG_ERR("rte_vhost_driver_set_max_queue_num failed for "
> +                    "vhost-user client port: %s\n", dev->up.name);
> +            goto unlock;
> +        }
> +
>          err = rte_vhost_driver_start(dev->vhost_id);
>          if (err) {
>              VLOG_ERR("rte_vhost_driver_start failed for vhost user "
> -- 
> 2.46.2
Kevin Traynor Nov. 25, 2024, 5:17 p.m. UTC | #2
Hi Maxime,

On 07/11/2024 17:51, Maxime Coquelin wrote:
> This patch uses the new rte_vhost_driver_set_max_queue_num
> API to set the maximum number of queue pairs supported by
> the Vhost-user port.
> 
> This is required for VDUSE, which needs to specify the
> maximum number of queue pairs at creation time. Without it,
> metadata for 128 queue pairs would be allocated.
> 

This reuses options:n_rxq, which is used to set the phy device rxqs.

n_rxq for phy is the actual number of queues that are configured (or at
least attempted); they are then used insofar as they are created and
polled by the PMD cores. So there is some semantic difference in the
usage for VDUSE.

I tested a bit with vhost-user (even though it's a no-op there), just to
see the effect of changing n_rxq; see comments below.
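
To make the difference concrete, a rough sketch (illustrative only; the
first call is the phy path, the second is what this patch adds, and the
variable names are invented):

    /* Phy port: options:n_rxq is the number of rx queues that are
     * actually configured on the device. */
    rte_eth_dev_configure(port_id, n_rxq, n_txq, &dev_conf);

    /* Vhost/VDUSE port with this patch: the same value only caps what
     * the device may advertise; the driver can end up with fewer queue
     * pairs, and for vhost-user the call is effectively a no-op. */
    rte_vhost_driver_set_max_queue_num(dev->vhost_id, MAX(n_rxq, n_txq));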

> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> Changes in v2:
> ==============
> - Address checkpatch warnings.
> 
> ---
>  lib/netdev-dpdk.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index e454a4a5d..966063ea7 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -2506,6 +2506,9 @@ netdev_dpdk_vhost_client_set_config(struct netdev *netdev,
>          VLOG_INFO("Max Tx retries for vhost device '%s' set to %d",
>                    netdev_get_name(netdev), max_tx_retries);
>      }
> +
> +    dpdk_set_rxq_config(dev, args);
> +
>      ovs_mutex_unlock(&dev->mutex);
>  
>      return 0;
> @@ -6298,6 +6301,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
>          uint64_t virtio_unsup_features = 0;
>          uint64_t vhost_flags = 0;
>          bool enable_tso;
> +        int nr_qp;
>  
>          enable_tso = userspace_tso_enabled()
>                       && dev->virtio_features_state & OVS_VIRTIO_F_CLEAN;
> @@ -6371,6 +6375,14 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
>              goto unlock;
>          }
>  
> +        nr_qp = MAX(dev->requested_n_rxq, dev->requested_n_txq);
> +        err = rte_vhost_driver_set_max_queue_num(dev->vhost_id, nr_qp);
> +        if (err) {
> +            VLOG_ERR("rte_vhost_driver_set_max_queue_num failed for "
> +                    "vhost-user client port: %s\n", dev->up.name);
> +            goto unlock;
> +        }
> +

requested_n_rxq defaults to NR_QUEUE (1) and this code won't be
reached after the device is registered (minus one exception), so it
does not seem to pick up the n_rxq value.

Also, n_rxq can be changed on the fly at any time, which I suppose
will not be possible here.

For vhost-user this code has no effect, but we are still calling it and
logging an error on failure. I didn't look closely at VDUSE, but if we
can't differentiate at this point, we can at least add some comments to
explain.

thanks,
Kevin.


>          err = rte_vhost_driver_start(dev->vhost_id);
>          if (err) {
>              VLOG_ERR("rte_vhost_driver_start failed for vhost user "
Maxime Coquelin Dec. 4, 2024, 9:26 a.m. UTC | #3
Hi Kevin,

On 11/25/24 18:17, Kevin Traynor wrote:
> Hi Maxime,
> 
> On 07/11/2024 17:51, Maxime Coquelin wrote:
>> This patch uses the new rte_vhost_driver_set_max_queue_num
>> API to set the maximum number of queue pairs supported by
>> the Vhost-user port.
>>
>> This is required for VDUSE, which needs to specify the
>> maximum number of queue pairs at creation time. Without it,
>> metadata for 128 queue pairs would be allocated.
>>
> 
> This reuses options:n_rxq, which is used to set the phy device rxqs.
> 
> n_rxq for phy is the actual number of queues that are configured (or at
> least attempted); they are then used insofar as they are created and
> polled by the PMD cores. So there is some semantic difference in the
> usage for VDUSE.

Yes, I agree.
In the case of VDUSE, it is the maximum number of queue pairs the device
will support, but the driver side may use fewer.

> I tested a bit with vhost-user (even though it's a no-op there), just to
> see the effect of changing n_rxq; see comments below.

Indeed, this is a no-op in the case of Vhost-user.

> 
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>> Changes in v2:
>> ==============
>> - Address checkpatch warnings.
>>
>> ---
>>   lib/netdev-dpdk.c | 12 ++++++++++++
>>   1 file changed, 12 insertions(+)
>>
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>> index e454a4a5d..966063ea7 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -2506,6 +2506,9 @@ netdev_dpdk_vhost_client_set_config(struct netdev *netdev,
>>           VLOG_INFO("Max Tx retries for vhost device '%s' set to %d",
>>                     netdev_get_name(netdev), max_tx_retries);
>>       }
>> +
>> +    dpdk_set_rxq_config(dev, args);
>> +
>>       ovs_mutex_unlock(&dev->mutex);
>>   
>>       return 0;
>> @@ -6298,6 +6301,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
>>           uint64_t virtio_unsup_features = 0;
>>           uint64_t vhost_flags = 0;
>>           bool enable_tso;
>> +        int nr_qp;
>>   
>>           enable_tso = userspace_tso_enabled()
>>                        && dev->virtio_features_state & OVS_VIRTIO_F_CLEAN;
>> @@ -6371,6 +6375,14 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
>>               goto unlock;
>>           }
>>   
>> +        nr_qp = MAX(dev->requested_n_rxq, dev->requested_n_txq);
>> +        err = rte_vhost_driver_set_max_queue_num(dev->vhost_id, nr_qp);
>> +        if (err) {
>> +            VLOG_ERR("rte_vhost_driver_set_max_queue_num failed for "
>> +                    "vhost-user client port: %s\n", dev->up.name);
>> +            goto unlock;
>> +        }
>> +
> 
> requested_n_rxq defaults to NR_QUEUE (1) and this code won't be
> reached after the device is registered (minus one exception), so it
> does not seem to pick up the n_rxq value.
> 
> Also, n_rxq can be changed on the fly at any time, which I suppose
> will not be possible here.

Right, the implementation is bogus in this version, and I agree reusing
n_rxq might not be the best way. A dedicated option might be more
appropriate.

I'm sending a new version introducing a new option.
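
Roughly something along these lines; the option name and the
requested_max_qp field below are placeholders, not necessarily what the
new version will use:

    /* In netdev_dpdk_vhost_client_set_config(), read a dedicated knob
     * instead of reusing n_rxq (placeholder option name). */
    int max_qp = smap_get_int(args, "vhost-max-queue-pairs", NR_QUEUE);
    if (max_qp != dev->requested_max_qp) {
        dev->requested_max_qp = max_qp;
        netdev_request_reconfigure(netdev);
    }

    /* And in netdev_dpdk_vhost_client_reconfigure(), before
     * rte_vhost_driver_start(): */
    err = rte_vhost_driver_set_max_queue_num(dev->vhost_id,
                                             dev->requested_max_qp);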

Thanks,
Maxime

> 
> For vhost-user this code has no effect, but we are still calling it and
> logging an error on failure. I didn't look closely at VDUSE, but if we
> can't differentiate at this point, we can at least add some comments to
> explain.
> 
> thanks,
> Kevin.
> 
> 
>>           err = rte_vhost_driver_start(dev->vhost_id);
>>           if (err) {
>>               VLOG_ERR("rte_vhost_driver_start failed for vhost user "
>

Patch

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index e454a4a5d..966063ea7 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -2506,6 +2506,9 @@  netdev_dpdk_vhost_client_set_config(struct netdev *netdev,
         VLOG_INFO("Max Tx retries for vhost device '%s' set to %d",
                   netdev_get_name(netdev), max_tx_retries);
     }
+
+    dpdk_set_rxq_config(dev, args);
+
     ovs_mutex_unlock(&dev->mutex);
 
     return 0;
@@ -6298,6 +6301,7 @@  netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
         uint64_t virtio_unsup_features = 0;
         uint64_t vhost_flags = 0;
         bool enable_tso;
+        int nr_qp;
 
         enable_tso = userspace_tso_enabled()
                      && dev->virtio_features_state & OVS_VIRTIO_F_CLEAN;
@@ -6371,6 +6375,14 @@  netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
             goto unlock;
         }
 
+        nr_qp = MAX(dev->requested_n_rxq, dev->requested_n_txq);
+        err = rte_vhost_driver_set_max_queue_num(dev->vhost_id, nr_qp);
+        if (err) {
+            VLOG_ERR("rte_vhost_driver_set_max_queue_num failed for "
+                    "vhost-user client port: %s\n", dev->up.name);
+            goto unlock;
+        }
+
         err = rte_vhost_driver_start(dev->vhost_id);
         if (err) {
             VLOG_ERR("rte_vhost_driver_start failed for vhost user "