
[ovs-dev,5/5] dpif-netdev: Allow pin rxq and non-isolate PMD.

Message ID 20210604211856.915563-6-ktraynor@redhat.com
State New
Series Rxq scheduling updates.

Commit Message

Kevin Traynor June 4, 2021, 9:18 p.m. UTC
Pinning an rxq to a PMD with pmd-rxq-affinity may be done for
various reasons such as reserving a full PMD for an rxq, or to
ensure that multiple rxqs from a port are handled on different PMDs.

Previously pmd-rxq-affinity always isolated the PMD so no other rxqs
could be assigned to it by OVS. There may be cases where there are
unused cycles on those PMDs and the user would like other rxqs to
also be able to be assigned to them by OVS.

Add an option to pin the rxq and non-isolate. The default behaviour is
unchanged, which is pin and isolate.

In order to pin and non-isolate:
ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false

Note this is available only with the group assignment type.
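As an illustration, a full configuration sequence might look like the
following (the port name, queue ID and core ID here are examples, not
part of this patch):

```shell
# Use the group assignment type; pmd-rxq-isolate=false is only
# honoured with pmd-rxq-assign=group.
ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-assign=group

# Pin rxq 0 of port dpdk0 to core 3 (example values).
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:3"

# Do not isolate the PMD on core 3, so OVS may also assign
# other (unpinned) rxqs to it.
ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false
```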

Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
---
 Documentation/topics/dpdk/pmd.rst |  9 ++++++--
 lib/dpif-netdev.c                 | 37 +++++++++++++++++++++++++------
 vswitchd/vswitch.xml              | 19 ++++++++++++++++
 3 files changed, 56 insertions(+), 9 deletions(-)

Comments

David Marchand June 24, 2021, 3:24 p.m. UTC | #1
On Fri, Jun 4, 2021 at 11:19 PM Kevin Traynor <ktraynor@redhat.com> wrote:
>
> Pinning an rxq to a PMD with pmd-rxq-affinity may be done for
> various reasons such as reserving a full PMD for an rxq, or to
> ensure that multiple rxqs from a port are handled on different PMDs.
>
> Previously pmd-rxq-affinity always isolated the PMD so no other rxqs
> could be assigned to it by OVS. There may be cases where there are
> unused cycles on those PMDs and the user would like other rxqs to
> also be able to be assigned to them by OVS.
>
> Add an option to pin the rxq and non-isolate. The default behaviour is
> unchanged, which is pin and isolate.
>
> In order to pin and non-isolate:
> ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false
>
> Note this is available only with the group assignment type.

I am actually wondering what impact it would have if this config were
also considered in the other assignment algorithms.
Is there an issue?


>
> Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
> ---
>  Documentation/topics/dpdk/pmd.rst |  9 ++++++--
>  lib/dpif-netdev.c                 | 37 +++++++++++++++++++++++++------
>  vswitchd/vswitch.xml              | 19 ++++++++++++++++
>  3 files changed, 56 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/topics/dpdk/pmd.rst b/Documentation/topics/dpdk/pmd.rst
> index 29ba53954..a24a59430 100644
> --- a/Documentation/topics/dpdk/pmd.rst
> +++ b/Documentation/topics/dpdk/pmd.rst
> @@ -102,6 +102,11 @@ like so:
>  - Queue #3 pinned to core 8
>
> -PMD threads on cores where Rx queues are *pinned* will become *isolated*. This
> -means that this thread will only poll the *pinned* Rx queues.
> +PMD threads on cores where Rx queues are *pinned* will become *isolated* by
> +default. This means that this thread will only poll the *pinned* Rx queues.
> +
> +If using ``pmd-rxq-assign=group`` PMD threads with *pinned* Rxqs can be
> +*non-isolated* by setting::
> +
> +  $ ovs-vsctl set Open_vSwitch . other_config:pmd-isolate=false

pmd-rxq-isolate


>
>  .. warning::

Patch

diff --git a/Documentation/topics/dpdk/pmd.rst b/Documentation/topics/dpdk/pmd.rst
index 29ba53954..a24a59430 100644
--- a/Documentation/topics/dpdk/pmd.rst
+++ b/Documentation/topics/dpdk/pmd.rst
@@ -102,6 +102,11 @@  like so:
 - Queue #3 pinned to core 8
 
-PMD threads on cores where Rx queues are *pinned* will become *isolated*. This
-means that this thread will only poll the *pinned* Rx queues.
+PMD threads on cores where Rx queues are *pinned* will become *isolated* by
+default. This means that this thread will only poll the *pinned* Rx queues.
+
+If using ``pmd-rxq-assign=group`` PMD threads with *pinned* Rxqs can be
+*non-isolated* by setting::
+
+  $ ovs-vsctl set Open_vSwitch . other_config:pmd-isolate=false
 
 .. warning::
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 377573233..cf592a23e 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -375,4 +375,5 @@  struct dp_netdev {
     /* Use measured cycles for rxq to pmd assignment. */
     enum sched_assignment_type pmd_rxq_assign_cyc;
+    bool pmd_iso;
 
     /* Protects the access of the 'struct dp_netdev_pmd_thread'
@@ -4370,4 +4371,22 @@  dpif_netdev_set_config(struct dpif *dpif, const struct smap *other_config)
     }
 
+    bool pmd_iso = smap_get_bool(other_config, "pmd-rxq-isolate", true);
+
+    if (pmd_rxq_assign_cyc != SCHED_GROUP && pmd_iso == false) {
+        /* Invalid combination. */
+        VLOG_WARN("pmd-rxq-isolate can only be set false "
+                  "when using pmd-rxq-assign=group");
+        pmd_iso = true;
+    }
+    if (dp->pmd_iso != pmd_iso) {
+        dp->pmd_iso = pmd_iso;
+        if (pmd_iso) {
+            VLOG_INFO("pmd-rxq-affinity isolates PMD core");
+        } else {
+            VLOG_INFO("pmd-rxq-affinity does not isolate PMD core");
+        }
+        dp_netdev_request_reconfigure(dp);
+    }
+
     struct pmd_auto_lb *pmd_alb = &dp->pmd_alb;
     bool cur_rebalance_requested = pmd_alb->auto_lb_requested;
@@ -5107,5 +5126,5 @@  sched_numa_list_assignments(struct sched_numa_list *numa_list,
             sched_pmd = find_sched_pmd_by_pmd(numa_list, rxq->pmd);
             if (sched_pmd) {
-                if (rxq->core_id != OVS_CORE_UNSPEC) {
+                if (rxq->core_id != OVS_CORE_UNSPEC && dp->pmd_iso) {
                     sched_pmd->isolated = true;
                 }
@@ -5417,4 +5436,5 @@  sched_numa_list_schedule(struct sched_numa_list *numa_list,
                 struct dp_netdev_pmd_thread *pmd;
                 struct sched_numa *numa;
+                bool iso = dp->pmd_iso;
                 uint64_t proc_cycles;
                 char rxq_cyc_log[MAX_RXQ_CYC_STRLEN];
@@ -5437,10 +5457,13 @@  sched_numa_list_schedule(struct sched_numa_list *numa_list,
                     continue;
                 }
-                /* Mark PMD as isolated if not done already. */
-                if (sched_pmd->isolated == false) {
-                    sched_pmd->isolated = true;
-                    numa = sched_numa_list_find_numa(numa_list,
-                                                     sched_pmd);
-                    numa->n_iso++;
+                /* Check if isolating PMDs with pinned rxqs. */
+                if (iso) {
+                    /* Mark PMD as isolated if not done already. */
+                    if (sched_pmd->isolated == false) {
+                        sched_pmd->isolated = true;
+                        numa = sched_numa_list_find_numa(numa_list,
+                                                         sched_pmd);
+                        numa->n_iso++;
+                    }
                 }
                 proc_cycles = dp_netdev_rxq_get_cycles(rxq,
diff --git a/vswitchd/vswitch.xml b/vswitchd/vswitch.xml
index 14cb8a2c6..dca334961 100644
--- a/vswitchd/vswitch.xml
+++ b/vswitchd/vswitch.xml
@@ -545,4 +545,24 @@ 
      </column>
 
+      <column name="other_config" key="pmd-rxq-isolate"
+              type='{"type": "boolean"}'>
+        <p>
+          Specifies if a CPU core will be isolated after being pinned with
+          an Rx queue.
+        </p>
+        <p>
+          Set this value to <code>false</code> to non-isolate a CPU core after
+          it is pinned with an Rxq using <code>pmd-rxq-affinity</code>. This
+          will allow OVS to assign other Rxqs to that CPU core.
+        </p>
+        <p>
+          The default value is <code>true</code>.
+        </p>
+        <p>
+          This can only be <code>false</code> when <code>pmd-rxq-assign</code>
+          is set to <code>group</code>.
+        </p>
+      </column>
+
       <column name="other_config" key="n-handler-threads"
               type='{"type": "integer", "minInteger": 1}'>