From patchwork Wed Oct 5 01:22:19 2016
X-Patchwork-Submitter: Daniele Di Proietto <diproiettod@vmware.com>
X-Patchwork-Id: 678335
From: Daniele Di Proietto <diproiettod@vmware.com>
To: dev@openvswitch.org
Date: Tue, 4 Oct 2016 18:22:19 -0700
Message-ID: <20161005012224.107729-9-diproiettod@vmware.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20161005012224.107729-1-diproiettod@vmware.com>
References: <20161005012224.107729-1-diproiettod@vmware.com>
Subject: [ovs-dev] [PATCH 08/13] netdev-dpdk: Remove useless nonpmd_mempool_mutex.

Since DPDK commit 30e639989227 ("mempool: support non-EAL thread"),
non-EAL threads can use the mempool API safely.

Moreover, non-pmd threads' access to a netdev is already serialized
with 'non_pmd_mutex' in dpif-netdev.

Signed-off-by: Daniele Di Proietto <diproiettod@vmware.com>
Acked-by: Ben Pfaff
---
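For context, here is a minimal sketch (illustrative only, not part of
the diff below; the function name non_eal_worker is made up) of the
usage that DPDK commit 30e639989227 makes safe.  A thread that was
never registered with the EAL sees rte_lcore_id() == LCORE_ID_ANY, so
the mempool code bypasses the per-lcore cache and operates directly on
the underlying multi-producer/multi-consumer ring:

#include <rte_lcore.h>
#include <rte_mempool.h>

static void *
non_eal_worker(void *mp_)
{
    struct rte_mempool *mp = mp_;
    void *obj;

    /* rte_lcore_id() is LCORE_ID_ANY in a non-EAL thread, so no
     * per-lcore cache is used here: get/put go straight to the MP/MC
     * ring and are safe to run concurrently with pmd threads. */
    if (rte_mempool_get(mp, &obj) == 0) {
        rte_mempool_put(mp, obj);
    }
    return NULL;
}

In OVS the non-pmd path is in any case serialized by 'non_pmd_mutex'
in dpif-netdev, so at most one non-pmd thread is in this code at a
time.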
 lib/netdev-dpdk.c | 25 -------------------------
 1 file changed, 25 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index cbb74cb..15250dc 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -288,10 +288,6 @@ static struct ovs_mutex dpdk_mp_mutex OVS_ACQ_AFTER(dpdk_mutex)
 static struct ovs_list dpdk_mp_list OVS_GUARDED_BY(dpdk_mp_mutex)
     = OVS_LIST_INITIALIZER(&dpdk_mp_list);
 
-/* This mutex must be used by non pmd threads when allocating or freeing
- * mbufs through mempools. */
-static struct ovs_mutex nonpmd_mempool_mutex = OVS_MUTEX_INITIALIZER;
-
 struct dpdk_mp {
     struct rte_mempool *mp;
     int mtu;
@@ -405,8 +401,6 @@ struct netdev_rxq_dpdk {
     int port_id;
 };
 
-static bool dpdk_thread_is_pmd(void);
-
 static int netdev_dpdk_construct(struct netdev *);
 
 int netdev_dpdk_get_vid(const struct netdev_dpdk *dev);
@@ -445,8 +439,6 @@ dpdk_rte_mzalloc(size_t sz)
     return rte_zmalloc(OVS_VPORT_DPDK, sz, OVS_CACHE_LINE_SIZE);
 }
 
-/* XXX this function should be called only by pmd threads (or by non pmd
- * threads holding the nonpmd_mempool_mutex) */
 void
 free_dpdk_buf(struct dp_packet *p)
 {
@@ -1634,13 +1626,6 @@ dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dp_packet_batch *batch)
     int newcnt = 0;
     int i;
 
-    /* If we are on a non pmd thread we have to use the mempool mutex, because
-     * every non pmd thread shares the same mempool cache */
-
-    if (!dpdk_thread_is_pmd()) {
-        ovs_mutex_lock(&nonpmd_mempool_mutex);
-    }
-
     dp_packet_batch_apply_cutlen(batch);
 
     for (i = 0; i < batch->count; i++) {
@@ -1689,10 +1674,6 @@ dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dp_packet_batch *batch)
         dev->stats.tx_dropped += dropped;
         rte_spinlock_unlock(&dev->stats_lock);
     }
-
-    if (!dpdk_thread_is_pmd()) {
-        ovs_mutex_unlock(&nonpmd_mempool_mutex);
-    }
 }
 
 static int
@@ -3609,9 +3590,3 @@ dpdk_set_lcore_id(unsigned cpu)
     ovs_assert(cpu != NON_PMD_CORE_ID);
     RTE_PER_LCORE(_lcore_id) = cpu;
 }
-
-static bool
-dpdk_thread_is_pmd(void)
-{
-    return rte_lcore_id() != NON_PMD_CORE_ID;
-}