From: David Marchand <david.marchand@redhat.com>
To: dev@openvswitch.org
Cc: i.maximets@samsung.com
Date: Tue, 9 Jul 2019 18:19:55 +0200
Message-Id: <1562689198-21481-3-git-send-email-david.marchand@redhat.com>
In-Reply-To: <1562689198-21481-1-git-send-email-david.marchand@redhat.com>
References: <1562689198-21481-1-git-send-email-david.marchand@redhat.com>
Subject: [ovs-dev] [PATCH v4 2/5] dpif-netdev: Trigger parallel pmd reloads.

Pmd reloads are currently serialised in each of the steps that call
reload_affected_pmds(). Any pmd that is busy processing packets, waiting
on a mutex, etc. will make the other pmd threads wait for a delay that
can be nondeterministic as the syscalls add up.

Switch to a small busy loop on the control thread, reusing the existing
per-pmd 'reload' boolean. The memory ordering on this atomic is
release-acquire, to provide an explicit synchronisation point between
the pmd threads and the control thread.
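For reference, a minimal standalone sketch of the release-acquire
handshake described above, written against C11 <stdatomic.h>. The names
(struct pmd_sync, request_reload(), etc.) are illustrative stand-ins,
not the actual OVS types; the patch itself expresses the same pattern
through OVS's atomic wrappers (atomic_store_explicit() /
atomic_read_explicit()).

    /* Sketch only: simplified stand-in for dp_netdev_pmd_thread. */
    #include <stdatomic.h>
    #include <stdbool.h>

    struct pmd_sync {
        atomic_bool reload;     /* Plays the role of pmd->reload. */
    };

    /* Control thread: post a reload request.  The release store pairs
     * with the acquire load in the pmd thread's main loop. */
    void
    request_reload(struct pmd_sync *p)
    {
        atomic_store_explicit(&p->reload, true, memory_order_release);
    }

    /* Pmd thread: signal that the reload is done.  The release store
     * makes everything the pmd wrote during the reload visible to the
     * control thread. */
    void
    reload_done(struct pmd_sync *p)
    {
        atomic_store_explicit(&p->reload, false, memory_order_release);
    }

    /* Control thread: busy-wait until the pmd thread has cleared
     * 'reload'.  The acquire load pairs with the release store in
     * reload_done(). */
    void
    wait_for_reload(struct pmd_sync *p)
    {
        while (atomic_load_explicit(&p->reload, memory_order_acquire)) {
            /* Spin; a reload is expected to be short. */
        }
    }

Because the control thread only starts waiting after it has posted the
request to every affected pmd, the reloads proceed in parallel rather
than one pmd at a time.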
Signed-off-by: David Marchand
Acked-by: Eelco Chaudron
Acked-by: Ilya Maximets
---
Changelog since v3:
- explicitly do not wait for non pmd reload in patch 2 (Ilya)
- added Eelco ack

Changelog since v2:
- remove unneeded synchronisation on pmd thread join (Ilya)
- "inlined" synchronisation loop in reload_affected_pmds

Changelog since v1:
- removed the introduced reloading_pmds atomic and reuse the existing
  pmd->reload boolean as a single synchronisation point (Ilya)

Changelog since RFC v1:
- added memory ordering on 'reloading_pmds' atomic to serve as a
  synchronisation point between pmd threads and control thread

---
 lib/dpif-netdev.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 8eeec63..e0ffa4a 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -649,9 +649,6 @@ struct dp_netdev_pmd_thread {
     struct ovs_refcount ref_cnt;    /* Every reference must be refcount'ed. */
     struct cmap_node node;          /* In 'dp->poll_threads'. */
 
-    pthread_cond_t cond;            /* For synchronizing pmd thread reload. */
-    struct ovs_mutex cond_mutex;    /* Mutex for condition variable. */
-
     /* Per thread exact-match cache.  Note, the instance for cpu core
      * NON_PMD_CORE_ID can be accessed by multiple threads, and thusly
      * need to be protected by 'non_pmd_mutex'.  Every other instance
@@ -1758,11 +1755,8 @@ dp_netdev_reload_pmd__(struct dp_netdev_pmd_thread *pmd)
         return;
     }
 
-    ovs_mutex_lock(&pmd->cond_mutex);
     seq_change(pmd->reload_seq);
     atomic_store_explicit(&pmd->reload, true, memory_order_release);
-    ovs_mutex_cond_wait(&pmd->cond, &pmd->cond_mutex);
-    ovs_mutex_unlock(&pmd->cond_mutex);
 }
 
 static uint32_t
@@ -4655,6 +4649,19 @@ reload_affected_pmds(struct dp_netdev *dp)
         if (pmd->need_reload) {
             flow_mark_flush(pmd);
             dp_netdev_reload_pmd__(pmd);
+        }
+    }
+
+    CMAP_FOR_EACH (pmd, node, &dp->poll_threads) {
+        if (pmd->need_reload) {
+            if (pmd->core_id != NON_PMD_CORE_ID) {
+                bool reload;
+
+                do {
+                    atomic_read_explicit(&pmd->reload, &reload,
+                                         memory_order_acquire);
+                } while (reload);
+            }
             pmd->need_reload = false;
         }
     }
@@ -5842,11 +5849,8 @@ dpif_netdev_enable_upcall(struct dpif *dpif)
 static void
 dp_netdev_pmd_reload_done(struct dp_netdev_pmd_thread *pmd)
 {
-    ovs_mutex_lock(&pmd->cond_mutex);
-    atomic_store_relaxed(&pmd->reload, false);
     pmd->last_reload_seq = seq_read(pmd->reload_seq);
-    xpthread_cond_signal(&pmd->cond);
-    ovs_mutex_unlock(&pmd->cond_mutex);
+    atomic_store_explicit(&pmd->reload, false, memory_order_release);
 }
 
 /* Finds and refs the dp_netdev_pmd_thread on core 'core_id'.  Returns
@@ -5931,8 +5935,6 @@ dp_netdev_configure_pmd(struct dp_netdev_pmd_thread *pmd, struct dp_netdev *dp,
     pmd->reload_seq = seq_create();
     pmd->last_reload_seq = seq_read(pmd->reload_seq);
     atomic_init(&pmd->reload, false);
-    xpthread_cond_init(&pmd->cond, NULL);
-    ovs_mutex_init(&pmd->cond_mutex);
     ovs_mutex_init(&pmd->flow_mutex);
     ovs_mutex_init(&pmd->port_mutex);
     cmap_init(&pmd->flow_table);
@@ -5975,8 +5977,6 @@ dp_netdev_destroy_pmd(struct dp_netdev_pmd_thread *pmd)
     cmap_destroy(&pmd->flow_table);
     ovs_mutex_destroy(&pmd->flow_mutex);
     seq_destroy(pmd->reload_seq);
-    xpthread_cond_destroy(&pmd->cond);
-    ovs_mutex_destroy(&pmd->cond_mutex);
     ovs_mutex_destroy(&pmd->port_mutex);
     free(pmd);
 }