From patchwork Tue Oct 1 14:10:29 2019
X-Patchwork-Submitter: Eelco Chaudron
X-Patchwork-Id: 1169967
From: Eelco Chaudron
To: dev@openvswitch.org
Date: Tue, 1 Oct 2019 10:10:29 -0400
Message-Id: <20191001141028.26768.94594.stgit@netdev64>
In-Reply-To: <20191001141018.26768.18970.stgit@netdev64>
References: <20191001141018.26768.18970.stgit@netdev64>
User-Agent: StGit/0.17.1-dirty
Subject: [ovs-dev] [dpdk-latest PATCH v3 1/2] netdev-dpdk: Add support for
 multi-queue QoS to the DPDK datapath

This patch adds support for multi-queue QoS to the DPDK datapath. Most of
the code is based on an earlier patch from a patchset sent out by
zhaozhanxu.
The patch was titled "[ovs-dev, v2, 1/4] netdev-dpdk.c: Support the
multi-queue QoS configuration for dpdk datapath".

Signed-off-by: Eelco Chaudron
---
 lib/netdev-dpdk.c | 216 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 210 insertions(+), 6 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index ba92e89..072ce96 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -197,6 +197,13 @@ struct qos_conf {
     rte_spinlock_t lock;
 };
 
+/* QoS queue information used by the netdev queue dump functions. */
+struct netdev_dpdk_queue_state {
+    uint32_t *queues;
+    size_t cur_queue;
+    size_t n_queues;
+};
+
 /* A particular implementation of dpdk QoS operations.
  *
  * The functions below return 0 if successful or a positive errno value on
@@ -263,6 +270,41 @@ struct dpdk_qos_ops {
      */
     int (*qos_run)(struct qos_conf *qos_conf, struct rte_mbuf **pkts,
                    int pkt_cnt, bool should_steal);
+
+    /* Called to construct a QoS Queue. The implementation should make
+     * the appropriate calls to configure QoS Queue according to 'details'.
+     *
+     * The contents of 'details' should be documented as valid for 'ovs_name'
+     * in the "other_config" column in the "QoS" table in vswitchd/vswitch.xml
+     * (which is built as ovs-vswitchd.conf.db(8)).
+     *
+     * This function must return 0 if and only if it constructs the
+     * QoS queue successfully.
+     */
+    int (*qos_queue_construct)(const struct smap *details,
+                               uint32_t queue_id, struct qos_conf *conf);
+
+    /* Destroys the QoS Queue. */
+    void (*qos_queue_destruct)(struct qos_conf *conf, uint32_t queue_id);
+
+    /* Retrieves details of QoS Queue configuration into 'details'.
+     *
+     * The contents of 'details' should be documented as valid for 'ovs_name'
+     * in the "other_config" column in the "QoS" table in vswitchd/vswitch.xml
+     * (which is built as ovs-vswitchd.conf.db(8)).
+     */
+    int (*qos_queue_get)(struct smap *details, uint32_t queue_id,
+                         const struct qos_conf *conf);
+
+    /* Retrieves statistics of QoS Queue configuration into 'stats'. */
+    int (*qos_queue_get_stats)(const struct qos_conf *conf, uint32_t queue_id,
+                               struct netdev_queue_stats *stats);
+
+    /* Sets up the 'netdev_dpdk_queue_state' structure used by the dpdk queue
+     * dump functions.
+     */
+    int (*qos_queue_dump_state_init)(const struct qos_conf *conf,
+                                     struct netdev_dpdk_queue_state *state);
 };
 
 /* dpdk_qos_ops for each type of user space QoS implementation */
@@ -4032,6 +4074,161 @@ netdev_dpdk_set_qos(struct netdev *netdev, const char *type,
     return error;
 }
 
+static int
+netdev_dpdk_get_queue(const struct netdev *netdev, uint32_t queue_id,
+                      struct smap *details)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct qos_conf *qos_conf;
+    int error = 0;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (!qos_conf || !qos_conf->ops || !qos_conf->ops->qos_queue_get) {
+        error = EOPNOTSUPP;
+    } else {
+        error = qos_conf->ops->qos_queue_get(details, queue_id, qos_conf);
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_set_queue(struct netdev *netdev, uint32_t queue_id,
+                      const struct smap *details)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct qos_conf *qos_conf;
+    int error = 0;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (!qos_conf || !qos_conf->ops || !qos_conf->ops->qos_queue_construct) {
+        error = EOPNOTSUPP;
+    } else {
+        error = qos_conf->ops->qos_queue_construct(details, queue_id,
+                                                   qos_conf);
+    }
+
+    if (error && error != EOPNOTSUPP) {
+        VLOG_ERR("Failed to set QoS queue %d on port %s: %s",
+                 queue_id, netdev->name, rte_strerror(error));
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_delete_queue(struct netdev *netdev, uint32_t queue_id)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct qos_conf *qos_conf;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (qos_conf && qos_conf->ops && qos_conf->ops->qos_queue_destruct) {
+        qos_conf->ops->qos_queue_destruct(qos_conf, queue_id);
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return 0;
+}
+
+static int
+netdev_dpdk_get_queue_stats(const struct netdev *netdev, uint32_t queue_id,
+                            struct netdev_queue_stats *stats)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct qos_conf *qos_conf;
+    int error = 0;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (qos_conf && qos_conf->ops && qos_conf->ops->qos_queue_get_stats) {
+        qos_conf->ops->qos_queue_get_stats(qos_conf, queue_id, stats);
+    } else {
+        error = EOPNOTSUPP;
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_queue_dump_start(const struct netdev *netdev, void **statep)
+{
+    int error = 0;
+    struct qos_conf *qos_conf;
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+
+    ovs_mutex_lock(&dev->mutex);
+
+    qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+    if (qos_conf && qos_conf->ops
+        && qos_conf->ops->qos_queue_dump_state_init) {
+        struct netdev_dpdk_queue_state *state;
+
+        *statep = state = xmalloc(sizeof *state);
+        error = qos_conf->ops->qos_queue_dump_state_init(qos_conf, state);
+    } else {
+        error = EOPNOTSUPP;
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_queue_dump_next(const struct netdev *netdev, void *state_,
+                            uint32_t *queue_idp, struct smap *details)
+{
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct netdev_dpdk_queue_state *state = state_;
+    struct qos_conf *qos_conf;
+    int error = EOF;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    while (state->cur_queue < state->n_queues) {
+        uint32_t queue_id = state->queues[state->cur_queue++];
+
+        qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);
+        if (qos_conf && qos_conf->ops && qos_conf->ops->qos_queue_get) {
+            *queue_idp = queue_id;
+            error = qos_conf->ops->qos_queue_get(details, queue_id, qos_conf);
+            break;
+        }
+    }
+
+    ovs_mutex_unlock(&dev->mutex);
+
+    return error;
+}
+
+static int
+netdev_dpdk_queue_dump_done(const struct netdev *netdev OVS_UNUSED,
+                            void *state_)
+{
+    struct netdev_dpdk_queue_state *state = state_;
+
+    free(state->queues);
+    free(state);
+    return 0;
+}
+
+
+
 /* egress-policer details */
 
 struct egress_policer {
@@ -4129,12 +4326,12 @@ egress_policer_run(struct qos_conf *conf, struct rte_mbuf **pkts, int pkt_cnt,
 }
 
 static const struct dpdk_qos_ops egress_policer_ops = {
-    "egress-policer",    /* qos_name */
-    egress_policer_qos_construct,
-    egress_policer_qos_destruct,
-    egress_policer_qos_get,
-    egress_policer_qos_is_equal,
-    egress_policer_run
+    .qos_name = "egress-policer",
+    .qos_construct = egress_policer_qos_construct,
+    .qos_destruct = egress_policer_qos_destruct,
+    .qos_get = egress_policer_qos_get,
+    .qos_is_equal = egress_policer_qos_is_equal,
+    .qos_run = egress_policer_run
 };
 
 static int
@@ -4392,6 +4589,13 @@ netdev_dpdk_rte_flow_create(struct netdev *netdev,
     .get_qos_types = netdev_dpdk_get_qos_types,        \
     .get_qos = netdev_dpdk_get_qos,                    \
     .set_qos = netdev_dpdk_set_qos,                    \
+    .get_queue = netdev_dpdk_get_queue,                \
+    .set_queue = netdev_dpdk_set_queue,                \
+    .delete_queue = netdev_dpdk_delete_queue,          \
+    .get_queue_stats = netdev_dpdk_get_queue_stats,    \
+    .queue_dump_start = netdev_dpdk_queue_dump_start,  \
+    .queue_dump_next = netdev_dpdk_queue_dump_next,    \
+    .queue_dump_done = netdev_dpdk_queue_dump_done,    \
     .update_flags = netdev_dpdk_update_flags,          \
     .rxq_alloc = netdev_dpdk_rxq_alloc,                \
     .rxq_construct = netdev_dpdk_rxq_construct,        \
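
For a QoS implementation that wants to plug into these new hooks, the only
contract specific to this patch is qos_queue_dump_state_init(): the wrapper
netdev_dpdk_queue_dump_start() allocates the netdev_dpdk_queue_state and
netdev_dpdk_queue_dump_done() frees both the state and its 'queues' array, so
the callback merely has to fill in the array of queue ids it has configured.
The sketch below is a hypothetical illustration under those assumptions; the
name example_qos_queue_dump_state_init and the single hard-coded queue are
not part of this patch.

    /* Hypothetical example: report a single queue with id 0.  A real
     * implementation would enumerate the queues it actually configured. */
    static int
    example_qos_queue_dump_state_init(const struct qos_conf *conf OVS_UNUSED,
                                      struct netdev_dpdk_queue_state *state)
    {
        state->n_queues = 1;
        state->cur_queue = 0;
        state->queues = xcalloc(state->n_queues, sizeof *state->queues);
        state->queues[0] = 0;   /* The only queue id this sketch exposes. */

        return 0;
    }

Such a callback, together with the implementation's qos_queue_construct,
qos_queue_destruct, qos_queue_get and qos_queue_get_stats handlers, would be
registered in its dpdk_qos_ops table by designated initializer, in the same
style egress_policer_ops now uses; a concrete multi-queue QoS type that does
this follows in patch 2/2 of this series.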