From patchwork Tue Jun 25 13:32:03 2019
X-Patchwork-Submitter: Eelco Chaudron
X-Patchwork-Id: 1122063
From: Eelco Chaudron
To: dev@openvswitch.org
Date: Tue, 25 Jun 2019 13:32:03 +0000
Message-Id: <156146952221.2873.3269873643722554480.stgit@dbuild>
In-Reply-To: <156146949761.2873.11955780115679866432.stgit@dbuild>
References: <156146949761.2873.11955780115679866432.stgit@dbuild>
Subject: [ovs-dev] [dpdk-latest PATCH RFCv2 2/2] netdev-dpdk: Add new DPDK
 RFC 4115 egress policer

This patch adds a new policer to the DPDK datapath based on RFC 4115's
Two-Rate, Three-Color marker. It is a two-level hierarchical policer
which first does a color-blind marking of the traffic at the queue
level, followed by a color-aware marking at the port level. In the end,
traffic marked Green or Yellow is forwarded, and traffic marked Red is
dropped. For details on how traffic is marked, see RFC 4115.
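
In rough terms, the per-packet decision this patch implements looks like
the sketch below (queue lookup, statistics and error handling omitted;
packet_allowed is only an illustrative name, not a function in the
patch):

    static bool
    packet_allowed(struct rte_meter_trtcm_rfc4115 *queue_meter,
                   struct rte_meter_trtcm_rfc4115_profile *queue_profile,
                   struct rte_meter_trtcm_rfc4115 *port_meter,
                   struct rte_meter_trtcm_rfc4115_profile *port_profile,
                   uint64_t time, uint32_t pkt_len)
    {
        /* First level: color-blind marking against the queue's meter. */
        enum rte_color color =
            rte_meter_trtcm_rfc4115_color_blind_check(queue_meter,
                                                      queue_profile,
                                                      time, pkt_len);

        /* Second level: color-aware marking against the port's meter,
         * taking the queue-level color into account. */
        color = rte_meter_trtcm_rfc4115_color_aware_check(port_meter,
                                                          port_profile,
                                                          time, pkt_len,
                                                          color);

        /* Green and Yellow are forwarded, Red is dropped. */
        return color != RTE_COLOR_RED;
    }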

This egress policer can be used to limit traffic at different rates
based on the queue the traffic is in. In addition, it can be used to
prioritize certain traffic over other traffic at the port level. For
example, the following configuration will limit the traffic rate at the
port level to a maximum of 2000 packets per second (64-byte IPv4
packets): 1000pps as CIR (Committed Information Rate) and 1000pps as
EIR (Excess Information Rate). High priority traffic is routed to
queue 10, which marks all of its traffic as CIR, i.e. Green. All low
priority traffic, on queue 20, is marked as EIR, i.e. Yellow.

ovs-vsctl --timeout=5 set port dpdk1 qos=@myqos -- \
  --id=@myqos create qos type=trtcm-policer \
  other-config:cir=52000 other-config:cbs=2048 \
  other-config:eir=52000 other-config:ebs=2048 \
  queues:10=@dpdk1Q10 queues:20=@dpdk1Q20 -- \
  --id=@dpdk1Q10 create queue \
  other-config:cir=41600000 other-config:cbs=2048 \
  other-config:eir=0 other-config:ebs=0 -- \
  --id=@dpdk1Q20 create queue \
  other-config:cir=0 other-config:cbs=0 \
  other-config:eir=41600000 other-config:ebs=2048

With this configuration the high priority traffic has a guaranteed
bandwidth egressing the port at CIR (1000pps), but it can also use the
EIR, so a total of at most 2000pps. The additional 1000pps of EIR is
shared with the low priority traffic, which can use at most 1000pps.
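
As a rough sanity check on the numbers above: the rte_meter rates (cir,
eir) and bucket sizes (cbs, ebs) are given in bytes per second and
bytes, not packets, and the policer below meters the frame length minus
the Ethernet header. Assuming roughly 52 metered bytes per small packet
in this example:

    port CIR: 52000 bytes/s / ~52 bytes/packet ~= 1000 packets/s
    port EIR: 52000 bytes/s / ~52 bytes/packet ~= 1000 packets/s
    total:                                      ~= 2000 packets/s

The queue-level rates (41600000 bytes/s) are far higher, so in this
example they only determine the color of the traffic (queue 10 Green,
queue 20 Yellow), while the port-level meter does the actual rate
limiting.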

Signed-off-by: Eelco Chaudron
---
 lib/netdev-dpdk.c | 321 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 320 insertions(+), 1 deletion(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 37284c708..f9954a3b3 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -328,13 +328,14 @@ struct dpdk_qos_ops {
 
 /* dpdk_qos_ops for each type of user space QoS implementation */
 static const struct dpdk_qos_ops egress_policer_ops;
-
+static const struct dpdk_qos_ops trtcm_policer_ops;
 /*
  * Array of dpdk_qos_ops, contains pointer to all supported QoS
  * operations.
  */
 static const struct dpdk_qos_ops *const qos_confs[] = {
     &egress_policer_ops,
+    &trtcm_policer_ops,
     NULL
 };
 
@@ -4226,6 +4227,324 @@ static const struct dpdk_qos_ops egress_policer_ops = {
     .qos_run = egress_policer_run
 };
 
+/* trtcm-policer details */
+
+struct trtcm_policer {
+    struct qos_conf qos_conf;
+    struct rte_meter_trtcm_rfc4115_params meter_params;
+    struct rte_meter_trtcm_rfc4115_profile meter_profile;
+    struct rte_meter_trtcm_rfc4115 meter;
+    struct netdev_queue_stats stats;
+    struct hmap queues;
+};
+
+struct trtcm_policer_queue {
+    struct hmap_node hmap_node;
+    uint32_t queue_id;
+    struct rte_meter_trtcm_rfc4115_params meter_params;
+    struct rte_meter_trtcm_rfc4115_profile meter_profile;
+    struct rte_meter_trtcm_rfc4115 meter;
+    struct netdev_queue_stats stats;
+};
+
+static void
+trtcm_policer_details_to_param(const struct smap *details,
+                               struct rte_meter_trtcm_rfc4115_params *params)
+{
+    memset(params, 0, sizeof *params);
+    params->cir = smap_get_ullong(details, "cir", 0);
+    params->eir = smap_get_ullong(details, "eir", 0);
+    params->cbs = smap_get_ullong(details, "cbs", 0);
+    params->ebs = smap_get_ullong(details, "ebs", 0);
+}
+
+static void
+trtcm_policer_param_to_detail(
+    const struct rte_meter_trtcm_rfc4115_params *params,
+    struct smap *details)
+{
+    smap_add_format(details, "cir", "%"PRIu64, params->cir);
+    smap_add_format(details, "eir", "%"PRIu64, params->eir);
+    smap_add_format(details, "cbs", "%"PRIu64, params->cbs);
+    smap_add_format(details, "ebs", "%"PRIu64, params->ebs);
+}
+
+
+static int
+trtcm_policer_qos_construct(const struct smap *details,
+                            struct qos_conf **conf)
+{
+    struct trtcm_policer *policer;
+    int err = 0;
+
+    policer = xmalloc(sizeof *policer);
+    qos_conf_init(&policer->qos_conf, &trtcm_policer_ops);
+    trtcm_policer_details_to_param(details, &policer->meter_params);
+    err = rte_meter_trtcm_rfc4115_profile_config(&policer->meter_profile,
+                                                 &policer->meter_params);
+    if (!err) {
+        err = rte_meter_trtcm_rfc4115_config(&policer->meter,
+                                             &policer->meter_profile);
+    }
+    if (!err) {
+        *conf = &policer->qos_conf;
+        memset(&policer->stats, 0, sizeof policer->stats);
+        hmap_init(&policer->queues);
+    } else {
+        free(policer);
+        *conf = NULL;
+        err = -err;
+    }
+    return err;
+}
+
+static void
+trtcm_policer_qos_destruct(struct qos_conf *conf)
+{
+    struct trtcm_policer_queue *queue, *next_queue;
+    struct trtcm_policer *policer = CONTAINER_OF(conf, struct trtcm_policer,
+                                                 qos_conf);
+
+    HMAP_FOR_EACH_SAFE (queue, next_queue, hmap_node, &policer->queues) {
+        hmap_remove(&policer->queues, &queue->hmap_node);
+        free(queue);
+    }
+    hmap_destroy(&policer->queues);
+    free(policer);
+}
+
+static int
+trtcm_policer_qos_get(const struct qos_conf *conf, struct smap *details)
+{
+    struct trtcm_policer *policer = CONTAINER_OF(conf, struct trtcm_policer,
+                                                 qos_conf);
+
+    trtcm_policer_param_to_detail(&policer->meter_params, details);
+    return 0;
+}
+
+static bool
+trtcm_policer_qos_is_equal(const struct qos_conf *conf,
+                           const struct smap *details)
+{
+    struct trtcm_policer *policer = CONTAINER_OF(conf, struct trtcm_policer,
+                                                 qos_conf);
+    struct rte_meter_trtcm_rfc4115_params params;
+
+    trtcm_policer_details_to_param(details, &params);
+
+    return !memcmp(&params, &policer->meter_params, sizeof params);
+}
+
+static struct trtcm_policer_queue *
+trtcm_policer_qos_find_queue(struct trtcm_policer *policer, uint32_t queue_id)
+{
+    struct trtcm_policer_queue *queue;
+    HMAP_FOR_EACH_WITH_HASH (queue, hmap_node, hash_2words(queue_id, 0),
+                             &policer->queues) {
+        if (queue->queue_id == queue_id) {
+            return queue;
+        }
+    }
+    return NULL;
+}
+
+static inline bool
+trtcm_policer_run_single_packet(struct trtcm_policer *policer,
+                                struct rte_mbuf *pkt, uint64_t time)
+{
+    enum rte_color pkt_color;
+    struct trtcm_policer_queue *queue;
+    uint32_t pkt_len = rte_pktmbuf_pkt_len(pkt) - sizeof(struct rte_ether_hdr);
+    struct dp_packet *dpkt = CONTAINER_OF(pkt, struct dp_packet, mbuf);
+
+    queue = trtcm_policer_qos_find_queue(policer, dpkt->md.skb_priority);
+    if (!queue) {
+        /* If no queue is found, use the default queue, which MUST exist */
+        queue = trtcm_policer_qos_find_queue(policer, 0);
+        if (!queue) {
+            return false;
+        }
+    }
+
+    pkt_color = rte_meter_trtcm_rfc4115_color_blind_check(&queue->meter,
+                                                       &queue->meter_profile,
+                                                       time,
+                                                       pkt_len);
+
+    if (pkt_color == RTE_COLOR_RED) {
+        queue->stats.tx_errors++;
+    } else {
+        queue->stats.tx_bytes += pkt_len;
+        queue->stats.tx_packets++;
+    }
+
+    pkt_color = rte_meter_trtcm_rfc4115_color_aware_check(&policer->meter,
+                                                     &policer->meter_profile,
+                                                     time, pkt_len,
+                                                     pkt_color);
+
+    if (pkt_color == RTE_COLOR_RED) {
+        policer->stats.tx_errors++;
+        return false;
+    }
+
+    policer->stats.tx_bytes += pkt_len;
+    policer->stats.tx_packets++;
+    return true;
+}
+
+static int
+trtcm_policer_run(struct qos_conf *conf, struct rte_mbuf **pkts, int pkt_cnt,
+                  bool should_steal)
+{
+    int i = 0;
+    int cnt = 0;
+    struct rte_mbuf *pkt = NULL;
+    uint64_t current_time = rte_rdtsc();
+
+    struct trtcm_policer *policer = CONTAINER_OF(conf, struct trtcm_policer,
+                                                 qos_conf);
+
+    for (i = 0; i < pkt_cnt; i++) {
+        pkt = pkts[i];
+
+        if (trtcm_policer_run_single_packet(policer, pkt, current_time)) {
+            if (cnt != i) {
+                pkts[cnt] = pkt;
+            }
+            cnt++;
+        } else {
+            if (should_steal) {
+                rte_pktmbuf_free(pkt);
+            }
+        }
+    }
+    return cnt;
+}
+
+static int
+trtcm_policer_qos_queue_construct(const struct smap *details,
+                                  uint32_t queue_id, struct qos_conf *conf)
+{
+    int err = 0;
+    struct trtcm_policer_queue *queue;
+    struct trtcm_policer *policer = CONTAINER_OF(conf, struct trtcm_policer,
+                                                 qos_conf);
+
+    queue = trtcm_policer_qos_find_queue(policer, queue_id);
+    if (!queue) {
+        queue = xmalloc(sizeof *queue);
+        queue->queue_id = queue_id;
+        memset(&queue->stats, 0, sizeof queue->stats);
+        queue->stats.created = time_msec();
+        hmap_insert(&policer->queues, &queue->hmap_node,
+                    hash_2words(queue_id, 0));
+    }
+    if (queue_id == 0 && smap_is_empty(details)) {
+        /* No default queue configured, use port values */
+        memcpy(&queue->meter_params, &policer->meter_params,
+               sizeof queue->meter_params);
+    } else {
+        trtcm_policer_details_to_param(details, &queue->meter_params);
+    }
+
+    err = rte_meter_trtcm_rfc4115_profile_config(&queue->meter_profile,
+                                                 &queue->meter_params);
+
+    if (!err) {
+        err = rte_meter_trtcm_rfc4115_config(&queue->meter,
+                                             &queue->meter_profile);
+    }
+    if (err) {
+        hmap_remove(&policer->queues, &queue->hmap_node);
+        free(queue);
+        err = -err;
+    }
+    return err;
+}
+
+static void
+trtcm_policer_qos_queue_destruct(struct qos_conf *conf, uint32_t queue_id)
+{
+    struct trtcm_policer_queue *queue;
+    struct trtcm_policer *policer = CONTAINER_OF(conf, struct trtcm_policer,
+                                                 qos_conf);
+
+    queue = trtcm_policer_qos_find_queue(policer, queue_id);
+    if (queue) {
+        hmap_remove(&policer->queues, &queue->hmap_node);
+        free(queue);
+    }
+}
+
+static int
+trtcm_policer_qos_queue_get(struct smap *details, uint32_t queue_id,
+                            const struct qos_conf *conf)
+{
+    struct trtcm_policer_queue *queue;
+    struct trtcm_policer *policer = CONTAINER_OF(conf, struct trtcm_policer,
+                                                 qos_conf);
+
+    queue = trtcm_policer_qos_find_queue(policer, queue_id);
+    if (!queue) {
+        return EINVAL;
+    }
+
+    trtcm_policer_param_to_detail(&queue->meter_params, details);
+    return 0;
+}
+
+static int
+trtcm_policer_qos_queue_get_stats(const struct qos_conf *conf,
+                                  uint32_t queue_id,
+                                  struct netdev_queue_stats *stats)
+{
+    struct trtcm_policer_queue *queue;
+    struct trtcm_policer *policer = CONTAINER_OF(conf, struct trtcm_policer,
+                                                 qos_conf);
+
+    queue = trtcm_policer_qos_find_queue(policer, queue_id);
+    if (!queue) {
+        return EINVAL;
+    }
+    memcpy(stats, &queue->stats, sizeof *stats);
+    return 0;
+}
+
+static int
+trtcm_policer_qos_queue_dump_state_init(const struct qos_conf *conf,
+                                        struct netdev_dpdk_queue_state *state)
+{
+    uint32_t i = 0;
+    struct trtcm_policer_queue *queue;
+    struct trtcm_policer *policer = CONTAINER_OF(conf, struct trtcm_policer,
+                                                 qos_conf);
+
+    state->n_queues = hmap_count(&policer->queues);
+    state->cur_queue = 0;
+    state->queues = xmalloc(state->n_queues * sizeof *state->queues);
+
+    HMAP_FOR_EACH (queue, hmap_node, &policer->queues) {
+        state->queues[i++] = queue->queue_id;
+    }
+    return 0;
+}
+
+static const struct dpdk_qos_ops trtcm_policer_ops = {
+    .qos_name = "trtcm-policer",
+    .qos_construct = trtcm_policer_qos_construct,
+    .qos_destruct = trtcm_policer_qos_destruct,
+    .qos_get = trtcm_policer_qos_get,
+    .qos_is_equal = trtcm_policer_qos_is_equal,
+    .qos_run = trtcm_policer_run,
+    .qos_queue_construct = trtcm_policer_qos_queue_construct,
+    .qos_queue_destruct = trtcm_policer_qos_queue_destruct,
+    .qos_queue_get = trtcm_policer_qos_queue_get,
+    .qos_queue_get_stats = trtcm_policer_qos_queue_get_stats,
+    .qos_queue_dump_state_init = trtcm_policer_qos_queue_dump_state_init
+};
+
 static int
 netdev_dpdk_reconfigure(struct netdev *netdev)
 {