From patchwork Thu Aug 24 23:38:02 2017
X-Patchwork-Submitter: Kevin Traynor <ktraynor@redhat.com>
X-Patchwork-Id: 805700
X-Patchwork-Delegate: dlu998@gmail.com
From: Kevin Traynor <ktraynor@redhat.com>
To: dev@openvswitch.org, dball@vmware.com, ian.stokes@intel.com,
    jan.scheurich@ericsson.com, bhanuprakash.bodireddy@intel.com,
    mark.b.kavanagh@intel.com, gvrose8192@gmail.com, fbl@redhat.com
Date: Fri, 25 Aug 2017 00:38:02 +0100
Message-Id: <1503617885-32501-4-git-send-email-ktraynor@redhat.com>
In-Reply-To: <1503617885-32501-1-git-send-email-ktraynor@redhat.com>
References: <1503495281-1741-1-git-send-email-ktraynor@redhat.com>
    <1503617885-32501-1-git-send-email-ktraynor@redhat.com>
Subject: [ovs-dev] [PATCH v6 3/6] dpif-netdev: Count the rxq processing cycles for an rxq.

Count the cycles used for processing an rxq during the pmd rxq
interval. As this is an in-flight counter and pmds run independently,
also store the total cycles used during the last full interval.
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
---
 lib/dpif-netdev.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 71 insertions(+), 4 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 8731435..cdf2662 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -182,4 +182,8 @@ struct emc_cache {
 #define DPCLS_OPTIMIZATION_INTERVAL 1000

+/* Time in ms of the interval in which rxq processing cycles used in
+ * rxq to pmd assignments is measured and stored. */
+#define PMD_RXQ_INTERVAL_LEN 10000
+
 /* Number of intervals for which cycles are stored
  * and used during rxq to pmd assignment. */
@@ -570,4 +574,7 @@ struct dp_netdev_pmd_thread {
     /* Periodically sort subtable vectors according to hit frequencies */
     long long int next_optimization;
+    /* End of the next time interval for which processing cycles
+       are stored for each polled rxq. */
+    long long int rxq_interval;

     /* Statistics. */
@@ -696,5 +703,16 @@ static void pmd_load_cached_ports(struct dp_netdev_pmd_thread *pmd)
     OVS_REQUIRES(pmd->port_mutex);
 static inline void
-dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd);
+dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
+                           struct polled_queue *poll_list, int poll_cnt);
+static void
+dp_netdev_rxq_set_cycles(struct dp_netdev_rxq *rx,
+                         enum rxq_cycles_counter_type type,
+                         unsigned long long cycles);
+static uint64_t
+dp_netdev_rxq_get_cycles(struct dp_netdev_rxq *rx,
+                         enum rxq_cycles_counter_type type);
+static void
+dp_netdev_rxq_set_intrvl_cycles(struct dp_netdev_rxq *rx,
+                                unsigned long long cycles);
 static void
 dpif_netdev_xps_revalidate_pmd(const struct dp_netdev_pmd_thread *pmd,
@@ -3126,4 +3144,29 @@ cycles_count_intermediate(struct dp_netdev_pmd_thread *pmd,
 }

+static void
+dp_netdev_rxq_set_cycles(struct dp_netdev_rxq *rx,
+                         enum rxq_cycles_counter_type type,
+                         unsigned long long cycles)
+{
+    atomic_store_relaxed(&rx->cycles[type], cycles);
+}
+
+static uint64_t
+dp_netdev_rxq_get_cycles(struct dp_netdev_rxq *rx,
+                         enum rxq_cycles_counter_type type)
+{
+    unsigned long long processing_cycles;
+    atomic_read_relaxed(&rx->cycles[type], &processing_cycles);
+    return processing_cycles;
+}
+
+static void
+dp_netdev_rxq_set_intrvl_cycles(struct dp_netdev_rxq *rx,
+                                unsigned long long cycles)
+{
+    atomic_store_relaxed(&rx->cycles_intrvl[rx->intrvl_idx++
+                                            % PMD_RXQ_INTERVAL_MAX], cycles);
+}
+
 static int
 dp_netdev_process_rxq_port(struct dp_netdev_pmd_thread *pmd,
@@ -3179,4 +3222,5 @@ port_reconfigure(struct dp_netdev_port *port)
         port->rxqs[i].rx = NULL;
     }
+    unsigned last_nrxq = port->n_rxq;
     port->n_rxq = 0;
@@ -3199,4 +3243,12 @@ port_reconfigure(struct dp_netdev_port *port)
     for (i = 0; i < netdev_n_rxq(netdev); i++) {
         port->rxqs[i].port = port;
+        if (i >= last_nrxq) {
+            /* Only reset cycle stats for new queues */
+            dp_netdev_rxq_set_cycles(&port->rxqs[i], RXQ_CYCLES_PROC_CURR, 0);
+            dp_netdev_rxq_set_cycles(&port->rxqs[i], RXQ_CYCLES_PROC_HIST, 0);
+            for (unsigned j = 0; j < PMD_RXQ_INTERVAL_MAX; j++) {
+                dp_netdev_rxq_set_intrvl_cycles(&port->rxqs[i], 0);
+            }
+        }
         err = netdev_rxq_open(netdev, &port->rxqs[i].rx, i);
         if (err) {
@@ -3879,5 +3931,5 @@ reload:
             dp_netdev_process_rxq_port(pmd, poll_list[i].rxq->rx,
                                        poll_list[i].port_no);
-            cycles_count_intermediate(pmd, NULL,
+            cycles_count_intermediate(pmd, poll_list[i].rxq,
                                       process_packets ? PMD_CYCLES_PROCESSING
                                                       : PMD_CYCLES_IDLE);
@@ -3890,5 +3942,5 @@ reload:
             coverage_try_clear();
-            dp_netdev_pmd_try_optimize(pmd);
+            dp_netdev_pmd_try_optimize(pmd, poll_list, poll_cnt);
             if (!ovsrcu_try_quiesce()) {
                 emc_cache_slow_sweep(&pmd->flow_cache);
@@ -4334,4 +4386,5 @@ dp_netdev_configure_pmd(struct dp_netdev_pmd_thread *pmd, struct dp_netdev *dp,
     cmap_init(&pmd->classifiers);
     pmd->next_optimization = time_msec() + DPCLS_OPTIMIZATION_INTERVAL;
+    pmd->rxq_interval = time_msec() + PMD_RXQ_INTERVAL_LEN;
     hmap_init(&pmd->poll_list);
     hmap_init(&pmd->tx_ports);
@@ -5771,9 +5824,23 @@ dpcls_sort_subtable_vector(struct dpcls *cls)

 static inline void
-dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd)
+dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
+                           struct polled_queue *poll_list, int poll_cnt)
 {
     struct dpcls *cls;
     long long int now = time_msec();

+    if (now > pmd->rxq_interval) {
+        /* Get the cycles that were used to process each queue and store. */
+        for (unsigned i = 0; i < poll_cnt; i++) {
+            uint64_t rxq_cyc_curr = dp_netdev_rxq_get_cycles(poll_list[i].rxq,
+                                                        RXQ_CYCLES_PROC_CURR);
+            dp_netdev_rxq_set_intrvl_cycles(poll_list[i].rxq, rxq_cyc_curr);
+            dp_netdev_rxq_set_cycles(poll_list[i].rxq, RXQ_CYCLES_PROC_CURR,
+                                     0);
+        }
+        /* Start new measuring interval */
+        pmd->rxq_interval = now + PMD_RXQ_INTERVAL_LEN;
+    }
+
     if (now > pmd->next_optimization) {
         /* Try to obtain the flow lock to block out revalidator threads.