From patchwork Sat Dec 3 02:14:07 2016
X-Patchwork-Submitter: Daniele Di Proietto
X-Patchwork-Id: 702229
From: Daniele Di Proietto <diproiettod@vmware.com>
To: dev@openvswitch.org
Cc: Ilya Maximets, Daniele Di Proietto
Date: Fri, 2 Dec 2016 18:14:07 -0800
Message-ID: <20161203021418.103114-9-diproiettod@vmware.com>
In-Reply-To: <20161203021418.103114-1-diproiettod@vmware.com>
References: <20161203021418.103114-1-diproiettod@vmware.com>
Subject: [ovs-dev] [PATCH v2 08/19] dpif-netdev: Block pmd threads if there are no ports.

There's no reason for a pmd thread to run its main loop if there are no
queues in its poll_list.

This commit introduces a seq object on which the pmd thread can block
when it has no queues.

When the main thread wants to reload a pmd thread, it must now change
the seq object (in case the pmd thread is blocked on it) and set
'reload' to true.

This avoids wasting CPU cycles and is also necessary for a future
commit.

Signed-off-by: Daniele Di Proietto <diproiettod@vmware.com>
---
 lib/dpif-netdev.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 5be3acf..4a10956 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -485,6 +485,8 @@ struct dp_netdev_pmd_thread {
     unsigned long long last_cycles;
 
     struct latch exit_latch;        /* For terminating the pmd thread. */
+    struct seq *reload_seq;
+    uint64_t last_reload_seq;
     atomic_bool reload;             /* Do we need to reload ports? */
     pthread_t thread;
     unsigned core_id;               /* CPU core id of this pmd thread. */
@@ -1209,6 +1211,7 @@ dp_netdev_reload_pmd__(struct dp_netdev_pmd_thread *pmd)
     }
 
     ovs_mutex_lock(&pmd->cond_mutex);
+    seq_change(pmd->reload_seq);
     atomic_store_relaxed(&pmd->reload, true);
     ovs_mutex_cond_wait(&pmd->cond, &pmd->cond_mutex);
     ovs_mutex_unlock(&pmd->cond_mutex);
@@ -3145,6 +3148,14 @@ reload:
                  netdev_rxq_get_queue_id(poll_list[i].rx));
     }
 
+    if (!poll_cnt) {
+        while (seq_read(pmd->reload_seq) == pmd->last_reload_seq) {
+            seq_wait(pmd->reload_seq, pmd->last_reload_seq);
+            poll_block();
+        }
+        lc = 1025;
+    }
+
     for (;;) {
         for (i = 0; i < poll_cnt; i++) {
             dp_netdev_process_rxq_port(pmd, poll_list[i].port, poll_list[i].rx);
@@ -3220,6 +3231,7 @@ dp_netdev_pmd_reload_done(struct dp_netdev_pmd_thread *pmd)
 {
     ovs_mutex_lock(&pmd->cond_mutex);
     atomic_store_relaxed(&pmd->reload, false);
+    pmd->last_reload_seq = seq_read(pmd->reload_seq);
     xpthread_cond_signal(&pmd->cond);
     ovs_mutex_unlock(&pmd->cond_mutex);
 }
@@ -3314,6 +3326,8 @@ dp_netdev_configure_pmd(struct dp_netdev_pmd_thread *pmd, struct dp_netdev *dp,
 
     ovs_refcount_init(&pmd->ref_cnt);
     latch_init(&pmd->exit_latch);
+    pmd->reload_seq = seq_create();
+    pmd->last_reload_seq = seq_read(pmd->reload_seq);
     atomic_init(&pmd->reload, false);
     xpthread_cond_init(&pmd->cond, NULL);
     ovs_mutex_init(&pmd->cond_mutex);
@@ -3353,6 +3367,7 @@ dp_netdev_destroy_pmd(struct dp_netdev_pmd_thread *pmd)
     cmap_destroy(&pmd->flow_table);
     ovs_mutex_destroy(&pmd->flow_mutex);
     latch_destroy(&pmd->exit_latch);
+    seq_destroy(pmd->reload_seq);
     xpthread_cond_destroy(&pmd->cond);
     ovs_mutex_destroy(&pmd->cond_mutex);
     ovs_mutex_destroy(&pmd->port_mutex);
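
For readers less familiar with OVS's seq facility: the blocking logic in the
hunks above uses only the generic seq and poll-loop APIs.  Below is a minimal,
self-contained sketch of the same wait/wake pattern, not part of the patch;
it assumes the in-tree headers seq.h and poll-loop.h, and the 'worker_*' names
are illustrative only and do not appear in dpif-netdev.c.

    /* Sketch only -- illustrative names, not part of the patch. */
    #include <stdint.h>
    #include "seq.h"        /* seq_create(), seq_read(), seq_change(), seq_wait() */
    #include "poll-loop.h"  /* poll_block() */

    struct worker {
        struct seq *reload_seq;    /* Changed by the main thread to wake the worker. */
        uint64_t last_reload_seq;  /* Value observed at the last completed reload. */
    };

    static void
    worker_init(struct worker *w)
    {
        w->reload_seq = seq_create();
        w->last_reload_seq = seq_read(w->reload_seq);
    }

    /* Worker side: sleep until the main thread changes 'reload_seq'.
     * seq_wait() arranges for the following poll_block() to wake up once the
     * seq's value differs from 'last_reload_seq'; if it already differs,
     * poll_block() returns immediately, so a wakeup cannot be missed. */
    static void
    worker_wait_for_kick(struct worker *w)
    {
        while (seq_read(w->reload_seq) == w->last_reload_seq) {
            seq_wait(w->reload_seq, w->last_reload_seq);
            poll_block();
        }
    }

    /* Main-thread side: wake a (possibly) blocked worker. */
    static void
    worker_kick(struct worker *w)
    {
        seq_change(w->reload_seq);
    }

    /* Worker side: record the current value once the reload is done, so the
     * next wait blocks until a *new* kick arrives. */
    static void
    worker_ack_kick(struct worker *w)
    {
        w->last_reload_seq = seq_read(w->reload_seq);
    }

    static void
    worker_destroy(struct worker *w)
    {
        seq_destroy(w->reload_seq);
    }

Compared with the existing cond/mutex handshake, the seq lets an idle pmd
thread sleep in the normal poll loop and wake up only when the main thread
actually changes something, which is what makes the "no ports" case cheap.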