From patchwork Fri Jun 4 21:18:54 2021
X-Patchwork-Submitter: Kevin Traynor <ktraynor@redhat.com>
X-Patchwork-Id: 1488117
From: Kevin Traynor <ktraynor@redhat.com>
To: dev@openvswitch.org
Cc: david.marchand@redhat.com
Date: Fri, 4 Jun 2021 22:18:54 +0100
Message-Id: <20210604211856.915563-4-ktraynor@redhat.com>
In-Reply-To: <20210604211856.915563-1-ktraynor@redhat.com>
References: <20210604211856.915563-1-ktraynor@redhat.com>
Subject: [ovs-dev] [PATCH 3/5] dpif-netdev: Add group rxq scheduling assignment type.

Add an rxq scheduling option that allows rxqs to be grouped on a pmd
based purely on their load.

The current default 'cycles' assignment sorts rxqs by measured
processing load and then assigns them to PMDs selected in round-robin
order. This helps to keep the rxqs that require the most processing on
different cores, but because the PMDs are selected in round-robin
order, rxqs end up distributed equally across PMDs.

'cycles' assignment has the advantage that it keeps the most loaded
rxqs off the same core while still spreading the rxqs across a broad
range of PMDs, which mitigates against changes in traffic pattern.

'cycles' assignment has the disadvantage that, in order to make this
trade-off between optimising for the current traffic load and
mitigating against future changes, it tries to assign an equal number
of rxqs per PMD in a round-robin manner, and this can lead to a less
than optimal balance of the processing load.

Now that PMD auto load balance can help mitigate against future changes
in traffic patterns, a 'group' assignment can be used to assign rxqs
based on their measured cycles and the estimated running total of the
PMDs.

In this case there is no restriction on keeping an equal number of rxqs
per PMD, as the assignment is purely load based.

This means that one PMD may have a group of low-load rxqs assigned to
it, while another PMD has a single high-load rxq assigned to it, if
that is the best balance of their measured loads across the PMDs.

Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
---
 Documentation/topics/dpdk/pmd.rst |  26 ++++++
 lib/dpif-netdev.c                 | 141 +++++++++++++++++++++++++-----
 vswitchd/vswitch.xml              |   5 +-
 3 files changed, 148 insertions(+), 24 deletions(-)

diff --git a/Documentation/topics/dpdk/pmd.rst b/Documentation/topics/dpdk/pmd.rst
index e481e7941..d1c45cdfb 100644
--- a/Documentation/topics/dpdk/pmd.rst
+++ b/Documentation/topics/dpdk/pmd.rst
@@ -137,4 +137,30 @@ The Rx queues will be assigned to the cores in the following order::
     Core 8: Q3 (60%) | Q0 (30%)
 
+``group`` assignment is similar to ``cycles`` in that the Rxqs will be
+ordered by their measured processing cycles before being assigned to PMDs.
+It differs from ``cycles`` in that it uses a running estimate of the cycles
+that will be on each PMD to select the PMD with the lowest load for each Rxq.
+
+This means that there can be a group of low traffic Rxqs on one PMD, while a
+high traffic Rxq may have a PMD to itself. Where ``cycles`` kept as close to
+the same number of Rxqs per PMD as possible, with ``group`` this restriction
+is removed for a better balance of the workload across PMDs.
+
+For example, where there are five Rx queues and three cores - 3, 7, and 8 -
+available and the measured usage of core cycles per Rx queue over the last
+interval is seen to be:
+
+- Queue #0: 10%
+- Queue #1: 80%
+- Queue #3: 50%
+- Queue #4: 70%
+- Queue #5: 10%
+
+The Rx queues will be assigned to the cores in the following order::
+
+      Core 3: Q1 (80%) |
+      Core 7: Q4 (70%) |
+      Core 8: Q3 (50%) | Q0 (10%) | Q5 (10%)
+
 Alternatively, ``roundrobin`` assignment can be used, where the Rxqs are
 assigned to PMDs in a round-robined fashion. This algorithm was used by
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index eaa4e9733..61e0a516f 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -306,4 +306,11 @@ struct pmd_auto_lb {
 };
 
+enum sched_assignment_type {
+    SCHED_ROUNDROBIN,
+    SCHED_CYCLES, /* Default.*/
+    SCHED_GROUP,
+    SCHED_MAX
+};
+
 /* Datapath based on the network device interface from netdev.h.
  *
@@ -367,5 +374,5 @@ struct dp_netdev {
     struct ovs_mutex tx_qid_pool_mutex;
     /* Use measured cycles for rxq to pmd assignment. */
-    bool pmd_rxq_assign_cyc;
+    enum sched_assignment_type pmd_rxq_assign_cyc;
 
     /* Protects the access of the 'struct dp_netdev_pmd_thread'
@@ -1799,5 +1806,5 @@ create_dp_netdev(const char *name, const struct dpif_class *class,
     cmap_init(&dp->poll_threads);
 
-    dp->pmd_rxq_assign_cyc = true;
+    dp->pmd_rxq_assign_cyc = SCHED_CYCLES;
 
     ovs_mutex_init(&dp->tx_qid_pool_mutex);
@@ -4223,5 +4230,5 @@ set_pmd_auto_lb(struct dp_netdev *dp, bool always_log)
     bool enable_alb = false;
     bool multi_rxq = false;
-    bool pmd_rxq_assign_cyc = dp->pmd_rxq_assign_cyc;
+    enum sched_assignment_type pmd_rxq_assign_cyc = dp->pmd_rxq_assign_cyc;
 
     /* Ensure that there is at least 2 non-isolated PMDs and
@@ -4242,6 +4249,6 @@ set_pmd_auto_lb(struct dp_netdev *dp, bool always_log)
     }
 
-    /* Enable auto LB if it is requested and cycle based assignment is true. */
-    enable_alb = enable_alb && pmd_rxq_assign_cyc &&
+    /* Enable auto LB if requested and not using roundrobin assignment. */
+    enable_alb = enable_alb && pmd_rxq_assign_cyc != SCHED_ROUNDROBIN &&
                     pmd_alb->auto_lb_requested;
 
@@ -4284,4 +4291,5 @@ dpif_netdev_set_config(struct dpif *dpif, const struct smap *other_config)
     uint8_t rebalance_improve;
     bool log_autolb = false;
+    enum sched_assignment_type pmd_rxq_assign_cyc;
 
     tx_flush_interval = smap_get_int(other_config, "tx-flush-interval",
@@ -4342,9 +4350,15 @@ dpif_netdev_set_config(struct dpif *dpif, const struct smap *other_config)
     }
 
-    bool pmd_rxq_assign_cyc = !strcmp(pmd_rxq_assign, "cycles");
-    if (!pmd_rxq_assign_cyc && strcmp(pmd_rxq_assign, "roundrobin")) {
-        VLOG_WARN("Unsupported Rxq to PMD assignment mode in pmd-rxq-assign. "
" - "Defaulting to 'cycles'."); - pmd_rxq_assign_cyc = true; + if (!strcmp(pmd_rxq_assign, "roundrobin")) { + pmd_rxq_assign_cyc = SCHED_ROUNDROBIN; + } else if (!strcmp(pmd_rxq_assign, "cycles")) { + pmd_rxq_assign_cyc = SCHED_CYCLES; + } else if (!strcmp(pmd_rxq_assign, "group")) { + pmd_rxq_assign_cyc = SCHED_GROUP; + } else { + /* default */ + VLOG_WARN("Unsupported rx queue to PMD assignment mode in " + "pmd-rxq-assign. Defaulting to 'cycles'."); + pmd_rxq_assign_cyc = SCHED_CYCLES; pmd_rxq_assign = "cycles"; } @@ -5171,4 +5185,61 @@ compare_rxq_cycles(const void *a, const void *b) } +static struct sched_pmd * +get_lowest_num_rxq_pmd(struct sched_numa *numa) +{ + struct sched_pmd *lowest_rxqs_sched_pmd = NULL; + unsigned lowest_rxqs = UINT_MAX; + + /* find the pmd with lowest number of rxqs */ + for (unsigned i = 0; i < numa->n_pmds; i++) { + struct sched_pmd *sched_pmd; + unsigned num_rxqs; + + sched_pmd = &numa->pmds[i]; + num_rxqs = sched_pmd->n_rxq; + if (sched_pmd->isolated) { + continue; + } + + /* If this current load is higher we can go to the next one */ + if (num_rxqs > lowest_rxqs) { + continue; + } + if (num_rxqs < lowest_rxqs) { + lowest_rxqs = num_rxqs; + lowest_rxqs_sched_pmd = sched_pmd; + } + } + return lowest_rxqs_sched_pmd; +} + +static struct sched_pmd * +get_lowest_proc_pmd(struct sched_numa *numa) +{ + struct sched_pmd *lowest_loaded_sched_pmd = NULL; + uint64_t lowest_load = UINT64_MAX; + + /* find the pmd with the lowest load */ + for (unsigned i = 0; i < numa->n_pmds; i++) { + struct sched_pmd *sched_pmd; + uint64_t pmd_load; + + sched_pmd = &numa->pmds[i]; + if (sched_pmd->isolated) { + continue; + } + pmd_load = sched_pmd->pmd_proc_cycles; + /* If this current load is higher we can go to the next one */ + if (pmd_load > lowest_load) { + continue; + } + if (pmd_load < lowest_load) { + lowest_load = pmd_load; + lowest_loaded_sched_pmd = sched_pmd; + } + } + return lowest_loaded_sched_pmd; +} + /* * Returns the next pmd from the numa node. @@ -5229,16 +5300,40 @@ get_available_rr_pmd(struct sched_numa *numa, bool updown) static struct sched_pmd * -get_next_pmd(struct sched_numa *numa, bool algo) +get_next_pmd(struct sched_numa *numa, enum sched_assignment_type algo, + bool has_proc) { - return get_available_rr_pmd(numa, algo); + if (algo == SCHED_GROUP) { + struct sched_pmd *sched_pmd = NULL; + + /* Check if the rxq has associated cycles. This is handled differently + * as adding an zero cycles rxq to a PMD will mean that the lowest + * core would not change on a subsequent call and all zero rxqs would + * be assigned to the same PMD. */ + if (has_proc) { + sched_pmd = get_lowest_proc_pmd(numa); + } else { + sched_pmd = get_lowest_num_rxq_pmd(numa); + } + /* If there is a pmd selected, return it now. */ + if (sched_pmd) { + return sched_pmd; + } + } + + /* By default or as a last resort, just RR the PMDs. */ + return get_available_rr_pmd(numa, algo == SCHED_CYCLES ? 
 }
 
 static const char *
-get_assignment_type_string(bool algo)
+get_assignment_type_string(enum sched_assignment_type algo)
 {
-    if (algo == false) {
-        return "roundrobin";
+    switch (algo) {
+    case SCHED_ROUNDROBIN: return "roundrobin";
+    case SCHED_CYCLES: return "cycles";
+    case SCHED_GROUP: return "group";
+    case SCHED_MAX:
+        /* fall through */
+    default: return "Unknown";
     }
-    return "cycles";
 }
 
@@ -5246,9 +5341,9 @@ get_assignment_type_string(bool algo)
 
 static bool
-get_rxq_cyc_log(char *a, bool algo, uint64_t cycles)
+get_rxq_cyc_log(char *a, enum sched_assignment_type algo, uint64_t cycles)
 {
     int ret = 0;
 
-    if (algo) {
+    if (algo != SCHED_ROUNDROBIN) {
         ret = snprintf(a, MAX_RXQ_CYC_STRLEN,
                        " (measured processing cycles %"PRIu64").",
@@ -5261,5 +5356,5 @@ static void
 sched_numa_list_schedule(struct sched_numa_list *numa_list,
                          struct dp_netdev *dp,
-                         bool algo,
+                         enum sched_assignment_type algo,
                          enum vlog_level level)
     OVS_REQUIRES(dp->port_mutex)
@@ -5285,5 +5380,5 @@ sched_numa_list_schedule(struct sched_numa_list *numa_list,
             rxqs[n_rxqs++] = rxq;
 
-            if (algo == true) {
+            if (algo != SCHED_ROUNDROBIN) {
                 uint64_t cycle_hist = 0;
 
@@ -5341,5 +5436,5 @@ sched_numa_list_schedule(struct sched_numa_list *numa_list,
     }
 
-    if (n_rxqs > 1 && algo) {
+    if (n_rxqs > 1 && algo != SCHED_ROUNDROBIN) {
         /* Sort the queues in order of the processing cycles
          * they consumed during their last pmd interval. */
@@ -5401,5 +5496,5 @@ sched_numa_list_schedule(struct sched_numa_list *numa_list,
         if (numa) {
             /* Select the PMD that should be used for this rxq. */
-            sched_pmd = get_next_pmd(numa, algo);
+            sched_pmd = get_next_pmd(numa, algo, proc_cycles ? true : false);
             if (sched_pmd) {
                 VLOG(level, "Core %2u on numa node %d assigned port \'%s\' "
@@ -5431,5 +5526,5 @@ rxq_scheduling(struct dp_netdev *dp) OVS_REQUIRES(dp->port_mutex)
 {
     struct sched_numa_list *numa_list;
-    bool algo = dp->pmd_rxq_assign_cyc;
+    enum sched_assignment_type algo = dp->pmd_rxq_assign_cyc;
 
     numa_list = xzalloc(sizeof *numa_list);
diff --git a/vswitchd/vswitch.xml b/vswitchd/vswitch.xml
index 4597a215d..14cb8a2c6 100644
--- a/vswitchd/vswitch.xml
+++ b/vswitchd/vswitch.xml
@@ -520,5 +520,5 @@
       <column name="other_config" key="pmd-rxq-assign"
               type='{"type": "string",
-                     "enum": ["set", ["cycles", "roundrobin"]]}'>
+                     "enum": ["set", ["cycles", "roundrobin", "group"]]}'>
         <p>
           Specifies how RX queues will be automatically assigned to CPU cores.
@@ -530,4 +530,7 @@
           <dt><code>roundrobin</code></dt>
           <dd>Rxqs will be round-robined across CPU cores.</dd>
+          <dt><code>group</code></dt>
+          <dd>Rxqs will be sorted by order of measured processing cycles
+          before being assigned to CPU cores with lowest estimated load.</dd>
         </dl>
       </p>
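
[Not part of the patch: an illustrative sketch of the greedy "group"
assignment described in the commit message. It is a standalone toy model,
assuming simplified stand-in types (struct toy_pmd, cmp_desc, the hard-coded
loads) rather than the real struct sched_pmd / struct sched_numa from
lib/dpif-netdev.c: rxqs are sorted by measured cycles in descending order,
then each one is placed on the pmd with the lowest estimated running load,
reproducing the five-queue / three-core example from pmd.rst above.]

    /* Toy model of "group" rxq assignment -- hypothetical stand-in types,
     * not the OVS implementation. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N_PMDS 3
    #define N_RXQS 5

    struct toy_pmd {
        unsigned core_id;
        uint64_t est_cycles;   /* Running estimate of load assigned so far. */
    };

    /* qsort comparator: sort rxq loads in descending order. */
    static int
    cmp_desc(const void *a, const void *b)
    {
        uint64_t ca = *(const uint64_t *) a;
        uint64_t cb = *(const uint64_t *) b;

        return (ca < cb) - (ca > cb);
    }

    int
    main(void)
    {
        struct toy_pmd pmds[N_PMDS] = { {3, 0}, {7, 0}, {8, 0} };
        /* Measured load (%) per rxq, matching the pmd.rst example. */
        uint64_t rxq_cycles[N_RXQS] = { 10, 80, 50, 70, 10 };

        qsort(rxq_cycles, N_RXQS, sizeof rxq_cycles[0], cmp_desc);

        for (int i = 0; i < N_RXQS; i++) {
            /* Pick the pmd with the lowest estimated load so far. */
            struct toy_pmd *lowest = &pmds[0];

            for (int j = 1; j < N_PMDS; j++) {
                if (pmds[j].est_cycles < lowest->est_cycles) {
                    lowest = &pmds[j];
                }
            }
            lowest->est_cycles += rxq_cycles[i];
            printf("rxq load %2"PRIu64"%% -> core %u (estimated total "
                   "%"PRIu64"%%)\n",
                   rxq_cycles[i], lowest->core_id, lowest->est_cycles);
        }
        return 0;
    }

With the patch applied, the new mode would be selected through the existing
pmd-rxq-assign knob, e.g.:

    $ ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-assign=group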