From patchwork Wed Jan 5 08:19:22 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 1575534
From: Maxime Coquelin
To: dev@openvswitch.org
Cc: fbl@sysclose.org, i.maximets@ovn.org, david.marchand@redhat.com
Date: Wed, 5 Jan 2022 09:19:22 +0100
Message-Id: <20220105081926.613684-2-maxime.coquelin@redhat.com>
In-Reply-To: <20220105081926.613684-1-maxime.coquelin@redhat.com>
References: <20220105081926.613684-1-maxime.coquelin@redhat.com>
Subject: [ovs-dev] [PATCH v5 1/5] netdev-dpdk: Introduce per rxq/txq Vhost-user statistics.

The hash-based Tx steering feature will enable steering Tx packets across
transmit queues based on their hashes. In order to test that feature, we
need to be able to get per-queue statistics for Vhost-user ports.

This patch introduces "bytes", "packets" and "errors" per-queue custom
statistics for Vhost-user ports.

Suggested-by: David Marchand
Signed-off-by: Maxime Coquelin
Reviewed-by: David Marchand
---
 lib/netdev-dpdk.c | 147 +++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 138 insertions(+), 9 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 6782d3e8f..6d301cd2e 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -192,6 +192,13 @@ static const struct rte_vhost_device_ops virtio_net_device_ops =
     .guest_notified = vhost_guest_notified,
 };
 
+/* Custom software per-queue stats for vhost ports */
+struct netdev_dpdk_vhost_q_stats {
+    uint64_t bytes;
+    uint64_t packets;
+    uint64_t errors;
+};
+
 /* Custom software stats for dpdk ports */
 struct netdev_dpdk_sw_stats {
     /* No. of retries when unable to transmit.
      */
@@ -479,9 +486,10 @@ struct netdev_dpdk {
     PADDED_MEMBERS(CACHE_LINE_SIZE,
         struct netdev_stats stats;
         struct netdev_dpdk_sw_stats *sw_stats;
+        struct netdev_dpdk_vhost_q_stats *vhost_txq_stats;
+        struct netdev_dpdk_vhost_q_stats *vhost_rxq_stats;
         /* Protects stats */
         rte_spinlock_t stats_lock;
-        /* 36 pad bytes here. */
     );
 
     PADDED_MEMBERS(CACHE_LINE_SIZE,
@@ -1276,6 +1284,13 @@ common_construct(struct netdev *netdev, dpdk_port_t port_no,
     dev->sw_stats = xzalloc(sizeof *dev->sw_stats);
     dev->sw_stats->tx_retries = (dev->type == DPDK_DEV_VHOST) ? 0 : UINT64_MAX;
 
+    if (dev->type == DPDK_DEV_VHOST) {
+        dev->vhost_txq_stats = xcalloc(netdev->n_txq,
+                                       sizeof *dev->vhost_txq_stats);
+        dev->vhost_rxq_stats = xcalloc(netdev->n_rxq,
+                                       sizeof *dev->vhost_rxq_stats);
+    }
+
     return 0;
 }
 
@@ -2354,17 +2369,21 @@ netdev_dpdk_vhost_update_rx_size_counters(struct netdev_stats *stats,
 }
 
 static inline void
-netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev,
+netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev, int qid,
                                      struct dp_packet **packets, int count,
                                      int qos_drops)
 {
+    struct netdev_dpdk_vhost_q_stats *q_stats = &dev->vhost_rxq_stats[qid];
     struct netdev_stats *stats = &dev->stats;
     struct dp_packet *packet;
     unsigned int packet_size;
     int i;
 
     stats->rx_packets += count;
+    q_stats->packets += count;
     stats->rx_dropped += qos_drops;
+    q_stats->errors += qos_drops;
+
     for (i = 0; i < count; i++) {
         packet = packets[i];
         packet_size = dp_packet_size(packet);
@@ -2375,6 +2394,7 @@ netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev,
              * further processing.
              */
             stats->rx_errors++;
             stats->rx_length_errors++;
+            q_stats->errors++;
             continue;
         }
 
@@ -2386,6 +2406,7 @@ netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev,
         }
 
         stats->rx_bytes += packet_size;
+        q_stats->bytes += packet_size;
     }
 
     if (OVS_UNLIKELY(qos_drops)) {
@@ -2438,7 +2459,7 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
     }
 
     rte_spinlock_lock(&dev->stats_lock);
-    netdev_dpdk_vhost_update_rx_counters(dev, batch->packets,
+    netdev_dpdk_vhost_update_rx_counters(dev, rxq->queue_id, batch->packets,
                                          nb_rx, qos_drops);
     rte_spinlock_unlock(&dev->stats_lock);
 
@@ -2552,11 +2573,12 @@ netdev_dpdk_filter_packet_len(struct netdev_dpdk *dev, struct rte_mbuf **pkts,
 }
 
 static inline void
-netdev_dpdk_vhost_update_tx_counters(struct netdev_dpdk *dev,
+netdev_dpdk_vhost_update_tx_counters(struct netdev_dpdk *dev, int qid,
                                      struct dp_packet **packets,
                                      int attempted,
                                      struct netdev_dpdk_sw_stats *sw_stats_add)
 {
+    struct netdev_dpdk_vhost_q_stats *q_stats = &dev->vhost_txq_stats[qid];
     int dropped = sw_stats_add->tx_mtu_exceeded_drops +
                   sw_stats_add->tx_qos_drops +
                   sw_stats_add->tx_failure_drops +
@@ -2566,10 +2588,15 @@ netdev_dpdk_vhost_update_tx_counters(struct netdev_dpdk *dev,
     int i;
 
     stats->tx_packets += sent;
+    q_stats->packets += sent;
     stats->tx_dropped += dropped;
+    q_stats->errors += dropped;
 
     for (i = 0; i < sent; i++) {
-        stats->tx_bytes += dp_packet_size(packets[i]);
+        uint64_t bytes = dp_packet_size(packets[i]);
+
+        stats->tx_bytes += bytes;
+        q_stats->bytes += bytes;
     }
 
     if (OVS_UNLIKELY(dropped || sw_stats_add->tx_retries)) {
@@ -2657,7 +2684,7 @@ __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
     sw_stats_add.tx_retries = MIN(retries, max_retries);
 
     rte_spinlock_lock(&dev->stats_lock);
-    netdev_dpdk_vhost_update_tx_counters(dev, pkts, total_packets,
+    netdev_dpdk_vhost_update_tx_counters(dev, qid, pkts, total_packets,
                                          &sw_stats_add);
     rte_spinlock_unlock(&dev->stats_lock);
 
@@ -3287,6 +3314,76 @@ netdev_dpdk_get_sw_custom_stats(const struct netdev *netdev,
     return 0;
 }
 
+static int
+netdev_dpdk_vhost_get_custom_stats(const struct netdev *netdev,
+                                   struct netdev_custom_stats *custom_stats)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    int sw_stats_size, i, j;
+
+    netdev_dpdk_get_sw_custom_stats(netdev, custom_stats);
+
+    ovs_mutex_lock(&dev->mutex);
+
+#define VHOST_Q_STATS \
+    VHOST_Q_STAT(bytes) \
+    VHOST_Q_STAT(packets) \
+    VHOST_Q_STAT(errors)
+
+    sw_stats_size = custom_stats->size;
+#define VHOST_Q_STAT(NAME) + netdev->n_rxq
+    custom_stats->size += VHOST_Q_STATS;
+#undef VHOST_Q_STAT
+#define VHOST_Q_STAT(NAME) + netdev->n_txq
+    custom_stats->size += VHOST_Q_STATS;
+#undef VHOST_Q_STAT
+
+    custom_stats->counters = xrealloc(custom_stats->counters,
+                                      custom_stats->size *
+                                      sizeof *custom_stats->counters);
+
+    j = 0;
+    for (i = 0; i < netdev->n_rxq; i++) {
+#define VHOST_Q_STAT(NAME) \
+        snprintf(custom_stats->counters[sw_stats_size + j++].name, \
+                 NETDEV_CUSTOM_STATS_NAME_SIZE, "rx_q%d_"#NAME, i);
+        VHOST_Q_STATS
+#undef VHOST_Q_STAT
+    }
+
+    for (i = 0; i < netdev->n_txq; i++) {
+#define VHOST_Q_STAT(NAME) \
+        snprintf(custom_stats->counters[sw_stats_size + j++].name, \
+                 NETDEV_CUSTOM_STATS_NAME_SIZE, "tx_q%d_"#NAME, i);
+        VHOST_Q_STATS
+#undef VHOST_Q_STAT
+    }
+
+    rte_spinlock_lock(&dev->stats_lock);
+
+    j = 0;
+    for (i = 0; i < netdev->n_rxq; i++) {
+#define VHOST_Q_STAT(NAME) \
+        custom_stats->counters[sw_stats_size + j++].value = \
+            dev->vhost_rxq_stats[i].NAME;
+        VHOST_Q_STATS
+#undef VHOST_Q_STAT
+    }
+
+    for (i = 0; i < netdev->n_txq; i++) {
+#define VHOST_Q_STAT(NAME) \
+        custom_stats->counters[sw_stats_size + j++].value = \
+            dev->vhost_txq_stats[i].NAME;
+        VHOST_Q_STATS
+#undef VHOST_Q_STAT
+    }
+
+    rte_spinlock_unlock(&dev->stats_lock);
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return 0;
+}
+
 static int
 netdev_dpdk_get_features(const struct netdev *netdev,
                          enum netdev_features *current,
@@ -3556,6 +3653,11 @@ netdev_dpdk_update_flags__(struct netdev_dpdk *dev,
     if (NETDEV_UP & on) {
         rte_spinlock_lock(&dev->stats_lock);
         memset(&dev->stats, 0, sizeof dev->stats);
+        memset(dev->sw_stats, 0, sizeof *dev->sw_stats);
+        memset(dev->vhost_rxq_stats, 0,
+               dev->up.n_rxq * sizeof *dev->vhost_rxq_stats);
+        memset(dev->vhost_txq_stats, 0,
+               dev->up.n_txq * sizeof *dev->vhost_txq_stats);
         rte_spinlock_unlock(&dev->stats_lock);
     }
 }
@@ -5048,9 +5150,12 @@ static int
 dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)
     OVS_REQUIRES(dev->mutex)
 {
+    int old_n_txq = dev->up.n_txq;
+    int old_n_rxq = dev->up.n_rxq;
+    int err;
+
     dev->up.n_txq = dev->requested_n_txq;
     dev->up.n_rxq = dev->requested_n_rxq;
-    int err;
 
     /* Always keep RX queue 0 enabled for implementations that won't
      * report vring states. */
@@ -5068,6 +5173,30 @@ dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)
 
     netdev_dpdk_remap_txqs(dev);
 
+    /* Reset all stats if number of queues changed. */
+    if (dev->up.n_txq != old_n_txq || dev->up.n_rxq != old_n_rxq) {
+        struct netdev_dpdk_vhost_q_stats *old_txq_stats, *new_txq_stats;
+        struct netdev_dpdk_vhost_q_stats *old_rxq_stats, *new_rxq_stats;
+
+        new_txq_stats = xcalloc(dev->up.n_txq, sizeof *dev->vhost_txq_stats);
+        new_rxq_stats = xcalloc(dev->up.n_rxq, sizeof *dev->vhost_rxq_stats);
+
+        rte_spinlock_lock(&dev->stats_lock);
+
+        memset(&dev->stats, 0, sizeof dev->stats);
+        memset(dev->sw_stats, 0, sizeof *dev->sw_stats);
+
+        old_txq_stats = dev->vhost_txq_stats;
+        dev->vhost_txq_stats = new_txq_stats;
+        old_rxq_stats = dev->vhost_rxq_stats;
+        dev->vhost_rxq_stats = new_rxq_stats;
+
+        rte_spinlock_unlock(&dev->stats_lock);
+
+        free(old_txq_stats);
+        free(old_rxq_stats);
+    }
+
     err = netdev_dpdk_mempool_configure(dev);
     if (!err) {
         /* A new mempool was created or re-used.
          */
@@ -5473,7 +5602,7 @@ static const struct netdev_class dpdk_vhost_class = {
     .send = netdev_dpdk_vhost_send,
     .get_carrier = netdev_dpdk_vhost_get_carrier,
     .get_stats = netdev_dpdk_vhost_get_stats,
-    .get_custom_stats = netdev_dpdk_get_sw_custom_stats,
+    .get_custom_stats = netdev_dpdk_vhost_get_custom_stats,
     .get_status = netdev_dpdk_vhost_user_get_status,
     .reconfigure = netdev_dpdk_vhost_reconfigure,
     .rxq_recv = netdev_dpdk_vhost_rxq_recv,
@@ -5489,7 +5618,7 @@ static const struct netdev_class dpdk_vhost_client_class = {
     .send = netdev_dpdk_vhost_send,
     .get_carrier = netdev_dpdk_vhost_get_carrier,
     .get_stats = netdev_dpdk_vhost_get_stats,
-    .get_custom_stats = netdev_dpdk_get_sw_custom_stats,
+    .get_custom_stats = netdev_dpdk_vhost_get_custom_stats,
     .get_status = netdev_dpdk_vhost_user_get_status,
     .reconfigure = netdev_dpdk_vhost_client_reconfigure,
     .rxq_recv = netdev_dpdk_vhost_rxq_recv,