From patchwork Wed Jul 5 12:27:12 2017
X-Patchwork-Submitter: Shachar Beiser
X-Patchwork-Id: 784577
From: Shachar Beiser
To: ovs-dev@openvswitch.org
Date: Wed, 5 Jul 2017 12:27:12 +0000
Subject: [ovs-dev] [PATCH 05/11] ovs/dp-cls: free HW pipeline
Cc: Shachar Beiser

The HW pipeline is made of 3 entities: a dp-cls offload thread, a pool of
flow tags, and a message queue between the pmd context and the dp-cls
offload thread.  This patch frees those 3 entities.

Signed-off-by: Shachar Beiser
---
 lib/dpif-netdev.c |  7 +++++--
 lib/hw-pipeline.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+), 2 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index ef3083b..b02edfc 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -1135,6 +1135,9 @@ dp_netdev_free(struct dp_netdev *dp)
         ovs_mutex_destroy(&dp->meter_locks[i]);
     }
 
+    if (dp->ppl_md.id == HW_OFFLOAD_PIPELINE) {
+        hw_pipeline_uninit(dp);
+    }
     free(dp->pmd_cmask);
     free(CONST_CAST(char *, dp->name));
     free(dp);
@@ -4633,12 +4636,12 @@ dp_netdev_input__(struct dp_netdev_pmd_thread *pmd,
 
     /* All the flow batches need to be reset before any call to
      * packet_batch_per_flow_execute() as it could potentially trigger
-     * recirculation. When a packet matching flow ‘j’ happens to be
+     * recirculation. When a packet matching flow 'j' happens to be
      * recirculated, the nested call to dp_netdev_input__() could potentially
      * classify the packet as matching another flow - say 'k'. It could happen
      * that in the previous call to dp_netdev_input__() that same flow 'k' had
      * already its own batches[k] still waiting to be served. So if its
-     * ‘batch’ member is not reset, the recirculated packet would be wrongly
+     * 'batch' member is not reset, the recirculated packet would be wrongly
      * appended to batches[k] of the 1st call to dp_netdev_input__(). */
     size_t i;
     for (i = 0; i < n_batches; i++) {
diff --git a/lib/hw-pipeline.c b/lib/hw-pipeline.c
index 1720c12..24045ed 100644
--- a/lib/hw-pipeline.c
+++ b/lib/hw-pipeline.c
@@ -39,10 +39,12 @@ VLOG_DEFINE_THIS_MODULE(hw_pipeline);
 
 // Internal functions Flow Tags Pool
 
 uint32_t hw_pipeline_ft_pool_init(flow_tag_pool *p,uint32_t pool_size);
+uint32_t hw_pipeline_ft_pool_uninit(flow_tag_pool *p);
 
 // Internal functions Message Queue
 
 static int hw_pipeline_msg_queue_init(msg_queue *message_queue,
                                       unsigned core_id);
+static int hw_pipeline_msg_queue_clear(msg_queue *message_queue);
 
 void *hw_pipeline_thread(void *pdp);
 
@@ -78,6 +80,24 @@ uint32_t hw_pipeline_ft_pool_init(flow_tag_pool *p,
     return 0;
 }
 
+uint32_t hw_pipeline_ft_pool_uninit(flow_tag_pool *p)
+{
+    uint32_t ii = 0;
+    if (OVS_UNLIKELY(p == NULL || p->ft_data == NULL)) {
+        VLOG_ERR("No pool or no data allocated");
+        return -1;
+    }
+    rte_spinlock_lock(&p->lock);
+    p->head = 0;
+    p->tail = 0;
+    for (ii = 0; ii < p->pool_size; ii++) {
+        p->ft_data[ii].next = 0;
+        p->ft_data[ii].valid = false;
+    }
+    free(p->ft_data);
+    rte_spinlock_unlock(&p->lock);
+    return 0;
+}
 /*************************************************************************/
 // Msg Queue
 // A queue that contains pairs : (flow , key )
@@ -146,6 +166,28 @@ static int hw_pipeline_msg_queue_init(msg_queue *message_queue,
     return 0;
 }
 
+static int hw_pipeline_msg_queue_clear(msg_queue *message_queue)
+{
+    int ret = 0;
+    ret = close(message_queue->readFd);
+    if (OVS_UNLIKELY(ret == -1)) {
+        VLOG_ERR("Error while closing the read file descriptor.");
+        return -1;
+    }
+    ret = close(message_queue->writeFd);
+    if (OVS_UNLIKELY(ret == -1)) {
+        VLOG_ERR("Error while closing the write file descriptor.");
+        return -1;
+    }
+
+    ret = unlink(message_queue->pipeName);
+    if (OVS_UNLIKELY(ret < 0)) {
+        VLOG_ERR("Removing the fifo failed.");
+        return -1;
+    }
+
+    return 0;
+}
 void *hw_pipeline_thread(void *pdp)
 {
     struct dp_netdev *dp= (struct dp_netdev *)pdp;
@@ -181,3 +223,21 @@ int hw_pipeline_init(struct dp_netdev *dp)
     dp->ppl_md.id = HW_OFFLOAD_PIPELINE;
     return 0;
 }
+
+int hw_pipeline_uninit(struct dp_netdev *dp)
+{
+    int ret = 0;
+    ret = hw_pipeline_ft_pool_uninit(&dp->ft_pool);
+    if (OVS_UNLIKELY(ret != 0)) {
+        VLOG_ERR("hw_pipeline_ft_pool_uninit failed");
+        return ret;
+    }
+    ret = hw_pipeline_msg_queue_clear(&dp->message_queue);
+    if (OVS_UNLIKELY(ret != 0)) {
+        VLOG_ERR("hw_pipeline_msg_queue_clear failed");
+        return ret;
+    }
+    xpthread_join(dp->thread_ofload, NULL);
+    dp->ppl_md.id = DEFAULT_SW_PIPELINE;
+    return 0;
+}
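
For readers outside the OVS tree, the flow-tag-pool teardown pattern used by
hw_pipeline_ft_pool_uninit() above can be sketched in isolation as follows.
This is a minimal stand-alone illustration, not the real code: the struct
layout and the function names ft_pool_init/ft_pool_uninit are invented
stand-ins for the flow_tag_pool definitions in lib/hw-pipeline.h, and the
DPDK spinlock and VLOG calls are omitted.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for the patch's flow_tag_pool. */
struct ft_pool {
    uint32_t head;
    uint32_t tail;
    uint32_t pool_size;
    struct { uint32_t next; bool valid; } *ft_data;
};

/* Allocate the tag array; returns 0 on success, -1 on failure,
 * mirroring hw_pipeline_ft_pool_init(). */
int ft_pool_init(struct ft_pool *p, uint32_t pool_size)
{
    p->ft_data = calloc(pool_size, sizeof *p->ft_data);
    if (p->ft_data == NULL) {
        return -1;
    }
    p->pool_size = pool_size;
    p->head = 0;
    p->tail = pool_size ? pool_size - 1 : 0;
    return 0;
}

/* Mirrors hw_pipeline_ft_pool_uninit(): reset head/tail, invalidate every
 * tag, then release the array.  Nulling the pointer afterwards makes a
 * second uninit fail cleanly instead of double-freeing. */
int ft_pool_uninit(struct ft_pool *p)
{
    uint32_t i;

    if (p == NULL || p->ft_data == NULL) {
        return -1;    /* Nothing allocated: report the error. */
    }
    p->head = 0;
    p->tail = 0;
    for (i = 0; i < p->pool_size; i++) {
        p->ft_data[i].next = 0;
        p->ft_data[i].valid = false;
    }
    free(p->ft_data);
    p->ft_data = NULL;
    return 0;
}
```

The same init/uninit pairing is what dp_netdev_free() relies on: uninit is
only reached when dp->ppl_md.id says the HW pipeline was initialized, so
every allocation has exactly one matching release.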