From patchwork Mon Jan 13 15:56:23 2020
X-Patchwork-Submitter: Eelco Chaudron
X-Patchwork-Id: 1222198
From: Eelco Chaudron
To: dev@openvswitch.org
Date: Mon, 13 Jan 2020 10:56:23 -0500
Message-Id: <20200113155623.7981.14378.stgit@netdev64>
In-Reply-To: <20200113155611.7981.1156.stgit@netdev64>
References: <20200113155611.7981.1156.stgit@netdev64>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Subject: [ovs-dev] [PATCH v4 1/2] netdev-dpdk: Add support for multi-queue QoS to the DPDK datapath

This patch adds support for multi-queue QoS to the DPDK datapath.

Most of the code is based on an earlier patch from a patchset sent out
by zhaozhanxu.  That patch was titled "[ovs-dev, v2, 1/4] netdev-dpdk.c:
Support the multi-queue QoS configuration for dpdk datapath".

Co-authored-by: zhaozhanxu
Signed-off-by: Eelco Chaudron
---
 lib/netdev-dpdk.c | 219 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 213 insertions(+), 6 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 8198a0b..128963f 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -219,6 +219,13 @@ struct qos_conf {
     rte_spinlock_t lock;
 };
 
+/* QoS queue information used by the netdev queue dump functions. */
+struct netdev_dpdk_queue_state {
+    uint32_t *queues;
+    size_t cur_queue;
+    size_t n_queues;
+};
+
 /* A particular implementation of dpdk QoS operations.
  *
  * The functions below return 0 if successful or a positive errno value on
@@ -285,6 +292,41 @@ struct dpdk_qos_ops {
      */
     int (*qos_run)(struct qos_conf *qos_conf, struct rte_mbuf **pkts,
                    int pkt_cnt, bool should_steal);
+
+    /* Called to construct a QoS Queue.  The implementation should make
+     * the appropriate calls to configure QoS Queue according to 'details'.
+     *
+     * The contents of 'details' should be documented as valid for 'ovs_name'
+     * in the "other_config" column in the "QoS" table in vswitchd/vswitch.xml
+     * (which is built as ovs-vswitchd.conf.db(8)).
+     *
+     * This function must return 0 if and only if it constructs
+     * QoS queue successfully.
+     */
+    int (*qos_queue_construct)(const struct smap *details,
+                               uint32_t queue_id, struct qos_conf *conf);
+
+    /* Destroys the QoS Queue. */
+    void (*qos_queue_destruct)(struct qos_conf *conf, uint32_t queue_id);
+
+    /* Retrieves details of QoS Queue configuration into 'details'.
+     *
+     * The contents of 'details' should be documented as valid for 'ovs_name'
+     * in the "other_config" column in the "QoS" table in vswitchd/vswitch.xml
+     * (which is built as ovs-vswitchd.conf.db(8)).
+     */
+    int (*qos_queue_get)(struct smap *details, uint32_t queue_id,
+                         const struct qos_conf *conf);
+
+    /* Retrieves statistics of QoS Queue configuration into 'stats'. */
+    int (*qos_queue_get_stats)(const struct qos_conf *conf, uint32_t queue_id,
+                               struct netdev_queue_stats *stats);
+
+    /* Setup the 'netdev_dpdk_queue_state' structure used by the dpdk queue
+     * dump functions.
+     */
+    int (*qos_queue_dump_state_init)(const struct qos_conf *conf,
+                                     struct netdev_dpdk_queue_state *state);
 };
 
 /* dpdk_qos_ops for each type of user space QoS implementation */
@@ -4191,6 +4233,164 @@ netdev_dpdk_set_qos(struct netdev *netdev, const char *type,
     return error;
 }
 
+static int
+netdev_dpdk_get_queue(const struct netdev *netdev, uint32_t queue_id,
+                      struct smap *details)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct qos_conf *qos_conf;
+    int error = 0;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (!qos_conf || !qos_conf->ops || !qos_conf->ops->qos_queue_get) {
+        error = EOPNOTSUPP;
+    } else {
+        error = qos_conf->ops->qos_queue_get(details, queue_id, qos_conf);
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_set_queue(struct netdev *netdev, uint32_t queue_id,
+                      const struct smap *details)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct qos_conf *qos_conf;
+    int error = 0;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (!qos_conf || !qos_conf->ops || !qos_conf->ops->qos_queue_construct) {
+        error = EOPNOTSUPP;
+    } else {
+        error = qos_conf->ops->qos_queue_construct(details, queue_id,
+                                                   qos_conf);
+    }
+
+    if (error && error != EOPNOTSUPP) {
+        VLOG_ERR("Failed to set QoS queue %d on port %s: %s",
+                 queue_id, netdev_get_name(netdev), rte_strerror(error));
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_delete_queue(struct netdev *netdev, uint32_t queue_id)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct qos_conf *qos_conf;
+    int error = 0;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (qos_conf && qos_conf->ops && qos_conf->ops->qos_queue_destruct) {
+        qos_conf->ops->qos_queue_destruct(qos_conf, queue_id);
+    } else {
+        error = EOPNOTSUPP;
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_get_queue_stats(const struct netdev *netdev, uint32_t queue_id,
+                            struct netdev_queue_stats *stats)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct qos_conf *qos_conf;
+    int error = 0;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (qos_conf && qos_conf->ops && qos_conf->ops->qos_queue_get_stats) {
+        qos_conf->ops->qos_queue_get_stats(qos_conf, queue_id, stats);
+    } else {
+        error = EOPNOTSUPP;
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_queue_dump_start(const struct netdev *netdev, void **statep)
+{
+    int error = 0;
+    struct qos_conf *qos_conf;
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (qos_conf && qos_conf->ops
+        && qos_conf->ops->qos_queue_dump_state_init) {
+        struct netdev_dpdk_queue_state *state;
+
+        *statep = state = xmalloc(sizeof *state);
+        error = qos_conf->ops->qos_queue_dump_state_init(qos_conf, state);
+    } else {
+        error = EOPNOTSUPP;
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_queue_dump_next(const struct netdev *netdev, void *state_,
+                            uint32_t *queue_idp, struct smap *details)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct netdev_dpdk_queue_state *state = state_;
+    struct qos_conf *qos_conf;
+    int error = EOF;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    while (state->cur_queue < state->n_queues) {
+        uint32_t queue_id = state->queues[state->cur_queue++];
+
+        qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+        if (qos_conf && qos_conf->ops && qos_conf->ops->qos_queue_get) {
+            *queue_idp = queue_id;
+            error = qos_conf->ops->qos_queue_get(details, queue_id, qos_conf);
+            break;
+        }
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_queue_dump_done(const struct netdev *netdev OVS_UNUSED,
+                            void *state_)
+{
+    struct netdev_dpdk_queue_state *state = state_;
+
+    free(state->queues);
+    free(state);
+    return 0;
+}
+
+
 /* egress-policer details */
 
 struct egress_policer {
@@ -4288,12 +4488,12 @@ egress_policer_run(struct qos_conf *conf, struct rte_mbuf **pkts, int pkt_cnt,
 }
 
 static const struct dpdk_qos_ops egress_policer_ops = {
-    "egress-policer",           /* qos_name */
-    egress_policer_qos_construct,
-    egress_policer_qos_destruct,
-    egress_policer_qos_get,
-    egress_policer_qos_is_equal,
-    egress_policer_run
+    .qos_name = "egress-policer",           /* qos_name */
+    .qos_construct = egress_policer_qos_construct,
+    .qos_destruct = egress_policer_qos_destruct,
+    .qos_get = egress_policer_qos_get,
+    .qos_is_equal = egress_policer_qos_is_equal,
+    .qos_run = egress_policer_run
 };
 
 static int
@@ -4558,6 +4758,13 @@ netdev_dpdk_rte_flow_create(struct netdev *netdev,
     .get_qos_types = netdev_dpdk_get_qos_types,         \
     .get_qos = netdev_dpdk_get_qos,                     \
     .set_qos = netdev_dpdk_set_qos,                     \
+    .get_queue = netdev_dpdk_get_queue,                 \
+    .set_queue = netdev_dpdk_set_queue,                 \
+    .delete_queue = netdev_dpdk_delete_queue,           \
+    .get_queue_stats = netdev_dpdk_get_queue_stats,     \
+    .queue_dump_start = netdev_dpdk_queue_dump_start,   \
+    .queue_dump_next = netdev_dpdk_queue_dump_next,     \
+    .queue_dump_done = netdev_dpdk_queue_dump_done,     \
     .update_flags = netdev_dpdk_update_flags,           \
     .rxq_alloc = netdev_dpdk_rxq_alloc,                 \
     .rxq_construct = netdev_dpdk_rxq_construct,         \