From patchwork Mon Jan  4 16:36:46 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Van Haaren, Harry" <harry.van.haaren@intel.com>
X-Patchwork-Id: 1422168
From: Harry van Haaren <harry.van.haaren@intel.com>
To: ovs-dev@openvswitch.org
Date: Mon,  4 Jan 2021 16:36:46 +0000
Message-Id: <20210104163653.2218575-10-harry.van.haaren@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210104163653.2218575-1-harry.van.haaren@intel.com>
References: <20201216181033.572425-2-harry.van.haaren@intel.com>
 <20210104163653.2218575-1-harry.van.haaren@intel.com>
MIME-Version: 1.0
Cc: i.maximets@ovn.org
Subject: [ovs-dev] [PATCH v8 09/16] dpif-netdev: Move pmd_try_optimize
 function in file.

This commit moves the pmd_try_optimize function to a more appropriate
location in the file. Currently it sits in the DPCLS section, which is
not its correct home.

Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 lib/dpif-netdev.c | 146 +++++++++++++++++++++++-----------------------
 1 file changed, 73 insertions(+), 73 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 4c074995c..eea6c11f0 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -5638,6 +5638,79 @@ reload:
     return NULL;
 }
 
+static inline void
+dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
+                           struct polled_queue *poll_list, int poll_cnt)
+{
+    struct dpcls *cls;
+    uint64_t tot_idle = 0, tot_proc = 0;
+    unsigned int pmd_load = 0;
+
+    if (pmd->ctx.now > pmd->rxq_next_cycle_store) {
+        uint64_t curr_tsc;
+        struct pmd_auto_lb *pmd_alb = &pmd->dp->pmd_alb;
+        if (pmd_alb->is_enabled && !pmd->isolated
+            && (pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE] >=
+                pmd->prev_stats[PMD_CYCLES_ITER_IDLE])
+            && (pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY] >=
+                pmd->prev_stats[PMD_CYCLES_ITER_BUSY]))
+        {
+            tot_idle = pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE] -
+                       pmd->prev_stats[PMD_CYCLES_ITER_IDLE];
+            tot_proc = pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY] -
+                       pmd->prev_stats[PMD_CYCLES_ITER_BUSY];
+
+            if (tot_proc) {
+                pmd_load = ((tot_proc * 100) / (tot_idle + tot_proc));
+            }
+
+            if (pmd_load >= ALB_PMD_LOAD_THRESHOLD) {
+                atomic_count_inc(&pmd->pmd_overloaded);
+            } else {
+                atomic_count_set(&pmd->pmd_overloaded, 0);
+            }
+        }
+
+        pmd->prev_stats[PMD_CYCLES_ITER_IDLE] =
+            pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE];
+        pmd->prev_stats[PMD_CYCLES_ITER_BUSY] =
+            pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY];
+
+        /* Get the cycles that were used to process each queue and store. */
+        for (unsigned i = 0; i < poll_cnt; i++) {
+            uint64_t rxq_cyc_curr = dp_netdev_rxq_get_cycles(poll_list[i].rxq,
+                                                        RXQ_CYCLES_PROC_CURR);
+            dp_netdev_rxq_set_intrvl_cycles(poll_list[i].rxq, rxq_cyc_curr);
+            dp_netdev_rxq_set_cycles(poll_list[i].rxq, RXQ_CYCLES_PROC_CURR,
+                                     0);
+        }
+        curr_tsc = cycles_counter_update(&pmd->perf_stats);
+        if (pmd->intrvl_tsc_prev) {
+            /* There is a prev timestamp, store a new intrvl cycle count. */
+            atomic_store_relaxed(&pmd->intrvl_cycles,
+                                 curr_tsc - pmd->intrvl_tsc_prev);
+        }
+        pmd->intrvl_tsc_prev = curr_tsc;
+        /* Start new measuring interval */
+        pmd->rxq_next_cycle_store = pmd->ctx.now + PMD_RXQ_INTERVAL_LEN;
+    }
+
+    if (pmd->ctx.now > pmd->next_optimization) {
+        /* Try to obtain the flow lock to block out revalidator threads.
+         * If not possible, just try next time. */
+        if (!ovs_mutex_trylock(&pmd->flow_mutex)) {
+            /* Optimize each classifier */
+            CMAP_FOR_EACH (cls, node, &pmd->classifiers) {
+                dpcls_sort_subtable_vector(cls);
+            }
+            ovs_mutex_unlock(&pmd->flow_mutex);
+            /* Start new measuring interval */
+            pmd->next_optimization = pmd->ctx.now
+                                     + DPCLS_OPTIMIZATION_INTERVAL;
+        }
+    }
+}
+
 static void
 dp_netdev_disable_upcall(struct dp_netdev *dp)
     OVS_ACQUIRES(dp->upcall_rwlock)
@@ -8304,79 +8377,6 @@ dpcls_sort_subtable_vector(struct dpcls *cls)
     pvector_publish(pvec);
 }
 
-static inline void
-dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
-                           struct polled_queue *poll_list, int poll_cnt)
-{
-    struct dpcls *cls;
-    uint64_t tot_idle = 0, tot_proc = 0;
-    unsigned int pmd_load = 0;
-
-    if (pmd->ctx.now > pmd->rxq_next_cycle_store) {
-        uint64_t curr_tsc;
-        struct pmd_auto_lb *pmd_alb = &pmd->dp->pmd_alb;
-        if (pmd_alb->is_enabled && !pmd->isolated
-            && (pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE] >=
-                pmd->prev_stats[PMD_CYCLES_ITER_IDLE])
-            && (pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY] >=
-                pmd->prev_stats[PMD_CYCLES_ITER_BUSY]))
-        {
-            tot_idle = pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE] -
-                       pmd->prev_stats[PMD_CYCLES_ITER_IDLE];
-            tot_proc = pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY] -
-                       pmd->prev_stats[PMD_CYCLES_ITER_BUSY];
-
-            if (tot_proc) {
-                pmd_load = ((tot_proc * 100) / (tot_idle + tot_proc));
-            }
-
-            if (pmd_load >= ALB_PMD_LOAD_THRESHOLD) {
-                atomic_count_inc(&pmd->pmd_overloaded);
-            } else {
-                atomic_count_set(&pmd->pmd_overloaded, 0);
-            }
-        }
-
-        pmd->prev_stats[PMD_CYCLES_ITER_IDLE] =
-            pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE];
-        pmd->prev_stats[PMD_CYCLES_ITER_BUSY] =
-            pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY];
-
-        /* Get the cycles that were used to process each queue and store. */
-        for (unsigned i = 0; i < poll_cnt; i++) {
-            uint64_t rxq_cyc_curr = dp_netdev_rxq_get_cycles(poll_list[i].rxq,
-                                                        RXQ_CYCLES_PROC_CURR);
-            dp_netdev_rxq_set_intrvl_cycles(poll_list[i].rxq, rxq_cyc_curr);
-            dp_netdev_rxq_set_cycles(poll_list[i].rxq, RXQ_CYCLES_PROC_CURR,
-                                     0);
-        }
-        curr_tsc = cycles_counter_update(&pmd->perf_stats);
-        if (pmd->intrvl_tsc_prev) {
-            /* There is a prev timestamp, store a new intrvl cycle count. */
-            atomic_store_relaxed(&pmd->intrvl_cycles,
-                                 curr_tsc - pmd->intrvl_tsc_prev);
-        }
-        pmd->intrvl_tsc_prev = curr_tsc;
-        /* Start new measuring interval */
-        pmd->rxq_next_cycle_store = pmd->ctx.now + PMD_RXQ_INTERVAL_LEN;
-    }
-
-    if (pmd->ctx.now > pmd->next_optimization) {
-        /* Try to obtain the flow lock to block out revalidator threads.
-         * If not possible, just try next time. */
-        if (!ovs_mutex_trylock(&pmd->flow_mutex)) {
-            /* Optimize each classifier */
-            CMAP_FOR_EACH (cls, node, &pmd->classifiers) {
-                dpcls_sort_subtable_vector(cls);
-            }
-            ovs_mutex_unlock(&pmd->flow_mutex);
-            /* Start new measuring interval */
-            pmd->next_optimization = pmd->ctx.now
-                                     + DPCLS_OPTIMIZATION_INTERVAL;
-        }
-    }
-}
-
 /* Insert 'rule' into 'cls'. */
 static void
 dpcls_insert(struct dpcls *cls, struct dpcls_rule *rule,
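
A note for readers skimming the diff: the patch is a pure code move, so the
interesting logic above is unchanged. The PMD load metric it computes is
plain integer arithmetic over idle/busy cycle-counter deltas. Below is a
minimal standalone sketch of that calculation only; the counter values are
hypothetical, not taken from the patch.

/* Sketch of the pmd_load computation from dp_netdev_pmd_try_optimize().
 * In dpif-netdev, tot_idle/tot_proc are deltas of the
 * PMD_CYCLES_ITER_IDLE / PMD_CYCLES_ITER_BUSY counters over one
 * measurement interval; the values here are made up for illustration. */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t tot_idle = 400000;   /* hypothetical idle-cycle delta */
    uint64_t tot_proc = 600000;   /* hypothetical busy-cycle delta */
    unsigned int pmd_load = 0;

    if (tot_proc) {
        /* Busy cycles as a percentage of all cycles in the interval. */
        pmd_load = (tot_proc * 100) / (tot_idle + tot_proc);
    }

    printf("pmd load: %u%%\n", pmd_load);   /* prints "pmd load: 60%" */
    return 0;
}

If pmd_load meets ALB_PMD_LOAD_THRESHOLD, the function counts the PMD as
overloaded for that interval, which feeds the auto load-balance logic.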