From patchwork Thu Oct 5 15:05:56 2017
X-Patchwork-Submitter: Ilya Maximets
X-Patchwork-Id: 821872
From: Ilya Maximets
To: ovs-dev@openvswitch.org, Bhanuprakash Bodireddy
Cc: Heetae Ahn, Ilya Maximets
Date: Thu, 05 Oct 2017 18:05:56 +0300
Message-id: <1507215962-17692-2-git-send-email-i.maximets@samsung.com>
In-reply-to: <1507215962-17692-1-git-send-email-i.maximets@samsung.com>
References: <1507215962-17692-1-git-send-email-i.maximets@samsung.com>
X-Mailer: git-send-email 2.7.4
Subject: [ovs-dev] [PATCH v4 1/7] dpif-netdev: Keep latest measured time for PMD thread.
In the current implementation, the 'now' variable is updated once per
receive cycle and passed through the whole datapath via function
arguments.  It is better to keep this variable inside the PMD thread
structure so that it can be read at any time.  This saves stack memory
and simplifies future modifications of the current logic.

This patch introduces a new structure, 'dp_netdev_pmd_thread_ctx',
contained by 'struct dp_netdev_pmd_thread', to store the processing
context of the PMD thread.  For now, only the time and cycle counters
are moved to that structure; it can be extended in the future.

Signed-off-by: Ilya Maximets
---
 lib/dpif-netdev.c | 129 ++++++++++++++++++++++++++++++------------------------
 1 file changed, 73 insertions(+), 56 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index d5eb830..1a3d8b4 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -528,6 +528,19 @@ struct tx_port {
     struct hmap_node node;
 };
 
+/* A set of properties for the current processing loop that is not directly
+ * associated with the pmd thread itself, but with the packets being
+ * processed or the short-term system configuration (for example, time).
+ * Contained by struct dp_netdev_pmd_thread's 'ctx' member. */
+struct dp_netdev_pmd_thread_ctx {
+    /* Latest measured time. */
+    long long now;
+    /* Used to count cycles. See 'cycles_count_end()' */
+    unsigned long long last_cycles;
+};
+
+static void pmd_thread_ctx_time_update(struct dp_netdev_pmd_thread *);
+
 /* PMD: Poll modes drivers.  PMD accesses devices via polling to eliminate
  * the performance overhead of interrupt processing.  Therefore netdev can
  * not implement rx-wait for these devices.  dpif-netdev needs to poll
@@ -583,8 +596,8 @@ struct dp_netdev_pmd_thread {
     /* Cycles counters */
     struct dp_netdev_pmd_cycles cycles;
 
-    /* Used to count cicles. See 'cycles_counter_end()' */
-    unsigned long long last_cycles;
+    /* Current context of the PMD thread. */
+    struct dp_netdev_pmd_thread_ctx ctx;
 
     struct latch exit_latch;        /* For terminating the pmd thread. */
     struct seq *reload_seq;
@@ -657,8 +670,7 @@ static void dp_netdev_execute_actions(struct dp_netdev_pmd_thread *pmd,
                                       struct dp_packet_batch *,
                                       bool may_steal, const struct flow *flow,
                                       const struct nlattr *actions,
-                                      size_t actions_len,
-                                      long long now);
+                                      size_t actions_len);
 static void dp_netdev_input(struct dp_netdev_pmd_thread *,
                             struct dp_packet_batch *, odp_port_t port_no);
 static void dp_netdev_recirculate(struct dp_netdev_pmd_thread *,
@@ -718,9 +730,9 @@ static uint64_t dp_netdev_rxq_get_intrvl_cycles(struct dp_netdev_rxq *rx,
                                                 unsigned idx);
 static void
 dpif_netdev_xps_revalidate_pmd(const struct dp_netdev_pmd_thread *pmd,
-                               long long now, bool purge);
+                               bool purge);
 static int dpif_netdev_xps_get_tx_qid(const struct dp_netdev_pmd_thread *pmd,
-                                      struct tx_port *tx, long long now);
+                                      struct tx_port *tx);
 
 static inline bool emc_entry_alive(struct emc_entry *ce);
 static void emc_clear_entry(struct emc_entry *ce);
@@ -764,6 +776,12 @@ emc_cache_slow_sweep(struct emc_cache *flow_cache)
     flow_cache->sweep_idx = (flow_cache->sweep_idx + 1) & EM_FLOW_HASH_MASK;
 }
 
+static inline void
+pmd_thread_ctx_time_update(struct dp_netdev_pmd_thread *pmd)
+{
+    pmd->ctx.now = time_msec();
+}
+
 /* Returns true if 'dpif' is a netdev or dummy dpif, false otherwise. */
 bool
 dpif_is_netdev(const struct dpif *dpif)
@@ -2912,6 +2930,9 @@ dpif_netdev_execute(struct dpif *dpif, struct dpif_execute *execute)
         ovs_mutex_lock(&dp->non_pmd_mutex);
     }
 
+    /* Update current time in PMD context. */
+    pmd_thread_ctx_time_update(pmd);
+
     /* The action processing expects the RSS hash to be valid, because
      * it's always initialized at the beginning of datapath processing.
      * In this case, though, 'execute->packet' may not have gone through
@@ -2924,8 +2945,7 @@ dpif_netdev_execute(struct dpif *dpif, struct dpif_execute *execute)
 
     dp_packet_batch_init_packet(&pp, execute->packet);
     dp_netdev_execute_actions(pmd, &pp, false, execute->flow,
-                              execute->actions, execute->actions_len,
-                              time_msec());
+                              execute->actions, execute->actions_len);
 
     if (pmd->core_id == NON_PMD_CORE_ID) {
         ovs_mutex_unlock(&dp->non_pmd_mutex);
@@ -3146,7 +3166,7 @@ cycles_count_start(struct dp_netdev_pmd_thread *pmd)
     OVS_ACQUIRES(&cycles_counter_fake_mutex)
     OVS_NO_THREAD_SAFETY_ANALYSIS
 {
-    pmd->last_cycles = cycles_counter();
+    pmd->ctx.last_cycles = cycles_counter();
 }
 
 /* Stop counting cycles and add them to the counter 'type' */
@@ -3156,7 +3176,7 @@ cycles_count_end(struct dp_netdev_pmd_thread *pmd,
     OVS_RELEASES(&cycles_counter_fake_mutex)
     OVS_NO_THREAD_SAFETY_ANALYSIS
 {
-    unsigned long long interval = cycles_counter() - pmd->last_cycles;
+    unsigned long long interval = cycles_counter() - pmd->ctx.last_cycles;
 
     non_atomic_ullong_add(&pmd->cycles.n[type], interval);
 }
@@ -3169,8 +3189,8 @@ cycles_count_intermediate(struct dp_netdev_pmd_thread *pmd,
     OVS_NO_THREAD_SAFETY_ANALYSIS
 {
     unsigned long long new_cycles = cycles_counter();
-    unsigned long long interval = new_cycles - pmd->last_cycles;
-    pmd->last_cycles = new_cycles;
+    unsigned long long interval = new_cycles - pmd->ctx.last_cycles;
+    pmd->ctx.last_cycles = new_cycles;
 
     non_atomic_ullong_add(&pmd->cycles.n[type], interval);
     if (rxq && (type == PMD_CYCLES_PROCESSING)) {
@@ -3871,7 +3891,8 @@ dpif_netdev_run(struct dpif *dpif)
             }
         }
         cycles_count_end(non_pmd, PMD_CYCLES_IDLE);
-        dpif_netdev_xps_revalidate_pmd(non_pmd, time_msec(), false);
+        pmd_thread_ctx_time_update(non_pmd);
+        dpif_netdev_xps_revalidate_pmd(non_pmd, false);
 
         ovs_mutex_unlock(&dp->non_pmd_mutex);
         dp_netdev_pmd_unref(non_pmd);
@@ -3922,7 +3943,7 @@ pmd_free_cached_ports(struct dp_netdev_pmd_thread *pmd)
     struct tx_port *tx_port_cached;
 
     /* Free all used tx queue ids. */
-    dpif_netdev_xps_revalidate_pmd(pmd, 0, true);
+    dpif_netdev_xps_revalidate_pmd(pmd, true);
 
     HMAP_FOR_EACH_POP (tx_port_cached, node, &pmd->tnl_port_cache) {
         free(tx_port_cached);
@@ -4515,8 +4536,9 @@ dp_netdev_configure_pmd(struct dp_netdev_pmd_thread *pmd, struct dp_netdev *dp,
     ovs_mutex_init(&pmd->port_mutex);
     cmap_init(&pmd->flow_table);
     cmap_init(&pmd->classifiers);
-    pmd->next_optimization = time_msec() + DPCLS_OPTIMIZATION_INTERVAL;
-    pmd->rxq_interval = time_msec() + PMD_RXQ_INTERVAL_LEN;
+    pmd_thread_ctx_time_update(pmd);
+    pmd->next_optimization = pmd->ctx.now + DPCLS_OPTIMIZATION_INTERVAL;
+    pmd->rxq_interval = pmd->ctx.now + PMD_RXQ_INTERVAL_LEN;
     hmap_init(&pmd->poll_list);
     hmap_init(&pmd->tx_ports);
     hmap_init(&pmd->tnl_port_cache);
@@ -4845,19 +4867,18 @@ packet_batch_per_flow_init(struct packet_batch_per_flow *batch,
 
 static inline void
 packet_batch_per_flow_execute(struct packet_batch_per_flow *batch,
-                              struct dp_netdev_pmd_thread *pmd,
-                              long long now)
+                              struct dp_netdev_pmd_thread *pmd)
 {
     struct dp_netdev_actions *actions;
     struct dp_netdev_flow *flow = batch->flow;
 
     dp_netdev_flow_used(flow, batch->array.count, batch->byte_count,
-                        batch->tcp_flags, now);
+                        batch->tcp_flags, pmd->ctx.now);
 
     actions = dp_netdev_flow_get_actions(flow);
 
     dp_netdev_execute_actions(pmd, &batch->array, true, &flow->flow,
-                              actions->actions, actions->size, now);
+                              actions->actions, actions->size);
 }
 
 static inline void
@@ -4965,7 +4986,7 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
                      struct dp_packet *packet,
                      const struct netdev_flow_key *key,
                      struct ofpbuf *actions, struct ofpbuf *put_actions,
-                     int *lost_cnt, long long now)
+                     int *lost_cnt)
 {
     struct ofpbuf *add_actions;
     struct dp_packet_batch b;
@@ -5004,7 +5025,7 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
          * we'll send the packet up twice. */
         dp_packet_batch_init_packet(&b, packet);
         dp_netdev_execute_actions(pmd, &b, true, &match.flow,
-                                  actions->data, actions->size, now);
+                                  actions->data, actions->size);
 
         add_actions = put_actions->size ? put_actions : actions;
         if (OVS_LIKELY(error != ENOSPC)) {
@@ -5032,9 +5053,9 @@ static inline void
 fast_path_processing(struct dp_netdev_pmd_thread *pmd,
                      struct dp_packet_batch *packets_,
                      struct netdev_flow_key *keys,
-                     struct packet_batch_per_flow batches[], size_t *n_batches,
-                     odp_port_t in_port,
-                     long long now)
+                     struct packet_batch_per_flow batches[],
+                     size_t *n_batches,
+                     odp_port_t in_port)
 {
     const size_t cnt = dp_packet_batch_size(packets_);
 #if !defined(__CHECKER__) && !defined(_WIN32)
@@ -5091,7 +5112,7 @@ fast_path_processing(struct dp_netdev_pmd_thread *pmd,
 
             miss_cnt++;
             handle_packet_upcall(pmd, packet, &keys[i], &actions,
-                                 &put_actions, &lost_cnt, now);
+                                 &put_actions, &lost_cnt);
         }
 
         ofpbuf_uninit(&actions);
@@ -5144,18 +5165,19 @@ dp_netdev_input__(struct dp_netdev_pmd_thread *pmd,
     OVS_ALIGNED_VAR(CACHE_LINE_SIZE)
         struct netdev_flow_key keys[PKT_ARRAY_SIZE];
     struct packet_batch_per_flow batches[PKT_ARRAY_SIZE];
-    long long now = time_msec();
     size_t n_batches;
     odp_port_t in_port;
 
+    pmd_thread_ctx_time_update(pmd);
+
     n_batches = 0;
     emc_processing(pmd, packets, keys, batches, &n_batches,
                    md_is_valid, port_no);
     if (!dp_packet_batch_is_empty(packets)) {
         /* Get ingress port from first packet's metadata. */
         in_port = packets->packets[0]->md.in_port.odp_port;
-        fast_path_processing(pmd, packets, keys, batches, &n_batches,
-                             in_port, now);
+        fast_path_processing(pmd, packets, keys,
+                             batches, &n_batches, in_port);
     }
 
     /* All the flow batches need to be reset before any call to
@@ -5173,7 +5195,7 @@ dp_netdev_input__(struct dp_netdev_pmd_thread *pmd,
     }
 
     for (i = 0; i < n_batches; i++) {
-        packet_batch_per_flow_execute(&batches[i], pmd, now);
+        packet_batch_per_flow_execute(&batches[i], pmd);
     }
 }
 
@@ -5194,7 +5216,6 @@ dp_netdev_recirculate(struct dp_netdev_pmd_thread *pmd,
 
 struct dp_netdev_execute_aux {
     struct dp_netdev_pmd_thread *pmd;
-    long long now;
     const struct flow *flow;
 };
 
@@ -5218,7 +5239,7 @@ dpif_netdev_register_upcall_cb(struct dpif *dpif, upcall_callback *cb,
 
 static void
 dpif_netdev_xps_revalidate_pmd(const struct dp_netdev_pmd_thread *pmd,
-                               long long now, bool purge)
+                               bool purge)
 {
     struct tx_port *tx;
     struct dp_netdev_port *port;
@@ -5228,7 +5249,7 @@ dpif_netdev_xps_revalidate_pmd(const struct dp_netdev_pmd_thread *pmd,
         if (!tx->port->dynamic_txqs) {
             continue;
         }
-        interval = now - tx->last_used;
+        interval = pmd->ctx.now - tx->last_used;
         if (tx->qid >= 0 && (purge || interval >= XPS_TIMEOUT_MS)) {
             port = tx->port;
             ovs_mutex_lock(&port->txq_used_mutex);
@@ -5241,18 +5262,14 @@ dpif_netdev_xps_revalidate_pmd(const struct dp_netdev_pmd_thread *pmd,
 
 static int
 dpif_netdev_xps_get_tx_qid(const struct dp_netdev_pmd_thread *pmd,
-                           struct tx_port *tx, long long now)
+                           struct tx_port *tx)
 {
     struct dp_netdev_port *port;
     long long interval;
     int i, min_cnt, min_qid;
 
-    if (OVS_UNLIKELY(!now)) {
-        now = time_msec();
-    }
-
-    interval = now - tx->last_used;
-    tx->last_used = now;
+    interval = pmd->ctx.now - tx->last_used;
+    tx->last_used = pmd->ctx.now;
 
     if (OVS_LIKELY(tx->qid >= 0 && interval < XPS_TIMEOUT_MS)) {
         return tx->qid;
@@ -5280,7 +5297,7 @@ dpif_netdev_xps_get_tx_qid(const struct dp_netdev_pmd_thread *pmd,
 
     ovs_mutex_unlock(&port->txq_used_mutex);
 
-    dpif_netdev_xps_revalidate_pmd(pmd, now, false);
+    dpif_netdev_xps_revalidate_pmd(pmd, false);
 
     VLOG_DBG("Core %d: New TX queue ID %d for port \'%s\'.",
              pmd->core_id, tx->qid, netdev_get_name(tx->port->netdev));
@@ -5331,7 +5348,7 @@ dp_execute_userspace_action(struct dp_netdev_pmd_thread *pmd,
                             struct dp_packet *packet, bool may_steal,
                             struct flow *flow, ovs_u128 *ufid,
                             struct ofpbuf *actions,
-                            const struct nlattr *userdata, long long now)
+                            const struct nlattr *userdata)
 {
     struct dp_packet_batch b;
     int error;
@@ -5344,7 +5361,7 @@ dp_execute_userspace_action(struct dp_netdev_pmd_thread *pmd,
     if (!error || error == ENOSPC) {
         dp_packet_batch_init_packet(&b, packet);
         dp_netdev_execute_actions(pmd, &b, may_steal, flow,
-                                  actions->data, actions->size, now);
+                                  actions->data, actions->size);
     } else if (may_steal) {
         dp_packet_delete(packet);
     }
@@ -5360,7 +5377,6 @@ dp_execute_cb(void *aux_, struct dp_packet_batch *packets_,
     struct dp_netdev_pmd_thread *pmd = aux->pmd;
     struct dp_netdev *dp = pmd->dp;
     int type = nl_attr_type(a);
-    long long now = aux->now;
     struct tx_port *p;
 
     switch ((enum ovs_action_attr)type) {
@@ -5372,7 +5388,7 @@ dp_execute_cb(void *aux_, struct dp_packet_batch *packets_,
 
             dynamic_txqs = p->port->dynamic_txqs;
             if (dynamic_txqs) {
-                tx_qid = dpif_netdev_xps_get_tx_qid(pmd, p, now);
+                tx_qid = dpif_netdev_xps_get_tx_qid(pmd, p);
             } else {
                 tx_qid = pmd->static_tx_qid;
             }
@@ -5455,7 +5471,7 @@ dp_execute_cb(void *aux_, struct dp_packet_batch *packets_,
                 flow_extract(packet, &flow);
                 dpif_flow_hash(dp->dpif, &flow, sizeof flow, &ufid);
                 dp_execute_userspace_action(pmd, packet, may_steal, &flow,
-                                            &ufid, &actions, userdata, now);
+                                            &ufid, &actions, userdata);
             }
 
             if (clone) {
@@ -5616,13 +5632,13 @@ dp_execute_cb(void *aux_, struct dp_packet_batch *packets_,
 
         conntrack_execute(&dp->conntrack, packets_, aux->flow->dl_type, force,
                           commit, zone, setmark, setlabel, helper,
-                          nat_action_info_ref, now);
+                          nat_action_info_ref, pmd->ctx.now);
         break;
     }
 
     case OVS_ACTION_ATTR_METER:
         dp_netdev_run_meter(pmd->dp, packets_, nl_attr_get_u32(a),
-                            time_msec());
+                            pmd->ctx.now);
         break;
 
     case OVS_ACTION_ATTR_PUSH_VLAN:
@@ -5651,10 +5667,9 @@ static void
 dp_netdev_execute_actions(struct dp_netdev_pmd_thread *pmd,
                           struct dp_packet_batch *packets,
                           bool may_steal, const struct flow *flow,
-                          const struct nlattr *actions, size_t actions_len,
-                          long long now)
+                          const struct nlattr *actions, size_t actions_len)
 {
-    struct dp_netdev_execute_aux aux = { pmd, now, flow };
+    struct dp_netdev_execute_aux aux = { pmd, flow };
 
     odp_execute_actions(&aux, packets, may_steal, actions, actions_len,
                         dp_execute_cb);
@@ -5981,9 +5996,10 @@ dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
                            struct polled_queue *poll_list, int poll_cnt)
 {
     struct dpcls *cls;
-    long long int now = time_msec();
 
-    if (now > pmd->rxq_interval) {
+    pmd_thread_ctx_time_update(pmd);
+
+    if (pmd->ctx.now > pmd->rxq_interval) {
        /* Get the cycles that were used to process each queue and store. */
        for (unsigned i = 0; i < poll_cnt; i++) {
            uint64_t rxq_cyc_curr = dp_netdev_rxq_get_cycles(poll_list[i].rxq,
@@ -5993,10 +6009,10 @@ dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
                                     0);
        }
        /* Start new measuring interval */
-       pmd->rxq_interval = now + PMD_RXQ_INTERVAL_LEN;
+       pmd->rxq_interval = pmd->ctx.now + PMD_RXQ_INTERVAL_LEN;
     }
 
-    if (now > pmd->next_optimization) {
+    if (pmd->ctx.now > pmd->next_optimization) {
         /* Try to obtain the flow lock to block out revalidator threads.
          * If not possible, just try next time. */
         if (!ovs_mutex_trylock(&pmd->flow_mutex)) {
@@ -6006,7 +6022,8 @@ dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
             }
             ovs_mutex_unlock(&pmd->flow_mutex);
             /* Start new measuring interval */
-            pmd->next_optimization = now + DPCLS_OPTIMIZATION_INTERVAL;
+            pmd->next_optimization = pmd->ctx.now
+                                     + DPCLS_OPTIMIZATION_INTERVAL;
         }
     }
 }