From patchwork Tue Mar 15 22:29:54 2016
X-Patchwork-Submitter: Daniele Di Proietto
X-Patchwork-Id: 597875
From: Daniele Di Proietto <diproiettod@vmware.com>
To: dev@openvswitch.org
Cc: Ilya Maximets
Date: Tue, 15 Mar 2016 15:29:54 -0700
Message-Id: <1458081003-82542-3-git-send-email-diproiettod@vmware.com>
In-Reply-To: <1458081003-82542-1-git-send-email-diproiettod@vmware.com>
References: <1458081003-82542-1-git-send-email-diproiettod@vmware.com>
X-Mailer: git-send-email 2.1.4
Subject: [ovs-dev] [PATCH v3 02/11] dpif-netdev: Keep count of elements in port->rxq[].

This will ease deleting a port with no open rxqs.
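
For illustration only (not part of the patch), here is a minimal standalone
sketch of the pattern this change introduces: the port structure keeps its own
count of the rx queues it actually opened, so teardown walks exactly that many
entries instead of re-querying the netdev, whose reported queue count may no
longer match the queues that are open.  All names below (example_port,
example_rxq_close, example_port_close_rxqs) are made up for the sketch.

    #include <stdlib.h>

    struct example_rxq;                 /* Opaque rx queue handle. */

    struct example_port {
        unsigned n_rxq;                 /* Number of elements in 'rxq'. */
        struct example_rxq **rxq;       /* Array of open rx queues. */
    };

    /* Stand-in for a real rx queue close function; a no-op here. */
    static void
    example_rxq_close(struct example_rxq *rxq)
    {
        (void) rxq;
    }

    /* Closes whatever queues the port actually opened, using the stored
     * count rather than asking the device again. */
    static void
    example_port_close_rxqs(struct example_port *port)
    {
        unsigned i;

        for (i = 0; i < port->n_rxq; i++) {
            example_rxq_close(port->rxq[i]);
        }
        port->n_rxq = 0;
        free(port->rxq);
        port->rxq = NULL;
    }

The patch below applies this idea to struct dp_netdev_port and its users.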
Signed-off-by: Daniele Di Proietto <diproiettod@vmware.com>
---
 lib/dpif-netdev.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 9c30dad..a2281b8 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -250,6 +250,7 @@ struct dp_netdev_port {
     struct netdev *netdev;
     struct cmap_node node;      /* Node in dp_netdev's 'ports'. */
     struct netdev_saved_flags *sf;
+    unsigned n_rxq;             /* Number of elements in 'rxq' */
     struct netdev_rxq **rxq;
     struct ovs_refcount ref_cnt;
     char *type;                 /* Port type as requested by user. */
@@ -1151,11 +1152,12 @@ do_add_port(struct dp_netdev *dp, const char *devname, const char *type,
     port = xzalloc(sizeof *port);
     port->port_no = port_no;
     port->netdev = netdev;
-    port->rxq = xmalloc(sizeof *port->rxq * netdev_n_rxq(netdev));
+    port->n_rxq = netdev_n_rxq(netdev);
+    port->rxq = xmalloc(sizeof *port->rxq * port->n_rxq);
     port->type = xstrdup(type);
     port->latest_requested_n_rxq = netdev_requested_n_rxq(netdev);
 
-    for (i = 0; i < netdev_n_rxq(netdev); i++) {
+    for (i = 0; i < port->n_rxq; i++) {
         error = netdev_rxq_open(netdev, &port->rxq[i], i);
         if (error) {
             VLOG_ERR("%s: cannot receive packets on this network device (%s)",
@@ -1288,13 +1290,12 @@ static void
 port_unref(struct dp_netdev_port *port)
 {
     if (port && ovs_refcount_unref_relaxed(&port->ref_cnt) == 1) {
-        int n_rxq = netdev_n_rxq(port->netdev);
         int i;
 
         netdev_close(port->netdev);
         netdev_restore_flags(port->sf);
 
-        for (i = 0; i < n_rxq; i++) {
+        for (i = 0; i < port->n_rxq; i++) {
             netdev_rxq_close(port->rxq[i]);
         }
         free(port->rxq);
@@ -2461,6 +2462,7 @@ dpif_netdev_pmd_set(struct dpif *dpif, const char *cmask)
                 netdev_rxq_close(port->rxq[i]);
                 port->rxq[i] = NULL;
             }
+            port->n_rxq = 0;
 
             /* Sets the new rx queue config. */
             err = netdev_set_multiq(port->netdev,
@@ -2474,9 +2476,9 @@ dpif_netdev_pmd_set(struct dpif *dpif, const char *cmask)
             }
             port->latest_requested_n_rxq = requested_n_rxq;
             /* If the set_multiq() above succeeds, reopens the 'rxq's. */
-            port->rxq = xrealloc(port->rxq, sizeof *port->rxq
-                                 * netdev_n_rxq(port->netdev));
-            for (i = 0; i < netdev_n_rxq(port->netdev); i++) {
+            port->n_rxq = netdev_n_rxq(port->netdev);
+            port->rxq = xrealloc(port->rxq, sizeof *port->rxq * port->n_rxq);
+            for (i = 0; i < port->n_rxq; i++) {
                 netdev_rxq_open(port->netdev, &port->rxq[i], i);
             }
         }
@@ -2604,7 +2606,7 @@ dpif_netdev_run(struct dpif *dpif)
         if (!netdev_is_pmd(port->netdev)) {
             int i;
 
-            for (i = 0; i < netdev_n_rxq(port->netdev); i++) {
+            for (i = 0; i < port->n_rxq; i++) {
                 dp_netdev_process_rxq_port(non_pmd, port, port->rxq[i]);
             }
         }
@@ -2634,7 +2636,7 @@ dpif_netdev_wait(struct dpif *dpif)
         if (!netdev_is_pmd(port->netdev)) {
             int i;
 
-            for (i = 0; i < netdev_n_rxq(port->netdev); i++) {
+            for (i = 0; i < port->n_rxq; i++) {
                 netdev_rxq_wait(port->rxq[i]);
             }
         }
@@ -3099,7 +3101,7 @@ dp_netdev_add_port_to_pmds(struct dp_netdev *dp, struct dp_netdev_port *port)
     /* Cannot create pmd threads for invalid numa node. */
     ovs_assert(ovs_numa_numa_id_is_valid(numa_id));
 
-    for (i = 0; i < netdev_n_rxq(port->netdev); i++) {
+    for (i = 0; i < port->n_rxq; i++) {
         pmd = dp_netdev_less_loaded_pmd_on_numa(dp, numa_id);
         if (!pmd) {
             /* There is no pmd threads on this numa node. */
@@ -3167,7 +3169,7 @@ dp_netdev_set_pmds_on_numa(struct dp_netdev *dp, int numa_id)
     CMAP_FOR_EACH (port, node, &dp->ports) {
         if (netdev_is_pmd(port->netdev)
             && netdev_get_numa_id(port->netdev) == numa_id) {
-            for (i = 0; i < netdev_n_rxq(port->netdev); i++) {
+            for (i = 0; i < port->n_rxq; i++) {
                 /* Make thread-safety analyser happy. */
                 ovs_mutex_lock(&pmds[index]->poll_mutex);
                 dp_netdev_add_rxq_to_pmd(pmds[index], port, port->rxq[i]);