From patchwork Sun Mar 31 17:40:04 2019
From: Gautam Ramakrishnan
To: jhs@mojatatu.com
Cc: davem@davemloft.net, netdev@vger.kernel.org, "Mohit P. Tahiliani", "Sachin D. Patil", Mohit Bhasi, "V. Saicharan", Leslie Monis, Dave Taht
Subject: [RFC net-next 1/2] net: sched: pie: refactor PIE
Date: Sun, 31 Mar 2019 23:10:04 +0530
Message-Id: <20190331174005.5841-2-gautamramk@gmail.com>
In-Reply-To: <20190331174005.5841-1-gautamramk@gmail.com>
References: <20190331174005.5841-1-gautamramk@gmail.com>

From: Mohit P. Tahiliani

This patch refactors the code of the PIE qdisc by moving the core PIE
data structures and functions out of 'net/sched/sch_pie.c' into a new
header, 'include/net/pie.h'. This enables the addition of qdiscs such
as FQ-PIE that reuse the components of PIE.

Signed-off-by: Mohit P. Tahiliani
Signed-off-by: Sachin D. Patil
Signed-off-by: Mohit Bhasi
Signed-off-by: V. Saicharan
Signed-off-by: Leslie Monis
Signed-off-by: Gautam Ramakrishnan
Cc: Dave Taht
---
 include/net/pie.h   | 330 ++++++++++++++++++++++++++++++++++++++++++++
 net/sched/sch_pie.c | 314 +----------------------------------------
 2 files changed, 336 insertions(+), 308 deletions(-)
 create mode 100644 include/net/pie.h

diff --git a/include/net/pie.h b/include/net/pie.h
new file mode 100644
index 000000000000..1cc295f468b4
--- /dev/null
+++ b/include/net/pie.h
@@ -0,0 +1,330 @@
+/*
+ * include/net/pie.h	Proportional Integral Controller Enhanced
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Original Author: Vijay Subramanian
+ * Original Author: Mythili Prabhu
+ *
+ * ECN support is added by Naeem Khademi
+ * University of Oslo, Norway.
+ *
+ * References:
+ * RFC 8033: https://tools.ietf.org/html/rfc8033
+ */
+
+#ifndef __NET_SCHED_PIE_H
+#define __NET_SCHED_PIE_H
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/skbuff.h>
+#include <net/pkt_sched.h>
+#include <net/inet_ecn.h>
+
+#define QUEUE_THRESHOLD 16384
+#define DQCOUNT_INVALID -1
+#define MAX_PROB 0xffffffffffffffff
+#define PIE_SCALE 8
+
+/* parameters used */
+struct pie_params {
+	psched_time_t target;	/* user specified target delay in pschedtime */
+	u32 tupdate;		/* timer frequency (in jiffies) */
+	u32 limit;		/* number of packets that can be enqueued */
+	u32 alpha;		/* alpha and beta are between 0 and 32 */
+	u32 beta;		/* and are used for shift relative to 1 */
+	bool ecn;		/* true if ecn is enabled */
+	bool bytemode;		/* to scale drop early prob based on pkt size */
+};
+
+/* variables used */
+struct pie_vars {
+	u64 prob;		/* probability but scaled by u64 limit. */
+	psched_time_t burst_time;
+	psched_time_t qdelay;
+	psched_time_t qdelay_old;
+	u64 dq_count;		/* measured in bytes */
+	psched_time_t dq_tstamp;	/* drain rate */
+	u64 accu_prob;		/* accumulated drop probability */
+	u32 avg_dq_rate;	/* bytes per pschedtime tick, scaled */
+	u32 qlen_old;		/* in bytes */
+	u8 accu_prob_overflows;	/* overflows of accu_prob */
+};
+
+/* statistics gathering */
+struct pie_stats {
+	u32 packets_in;		/* total number of packets enqueued */
+	u32 dropped;		/* packets dropped due to pie_action */
+	u32 overlimit;		/* dropped due to lack of space in queue */
+	u32 maxq;		/* maximum queue size */
+	u32 ecn_mark;		/* packets marked with ECN */
+};
+
+static void pie_params_init(struct pie_params *params)
+{
+	params->alpha = 2;
+	params->beta = 20;
+	params->tupdate = usecs_to_jiffies(15 * USEC_PER_MSEC);	/* 15 ms */
+	params->limit = 1000;	/* default of 1000 packets */
+	params->target = PSCHED_NS2TICKS(15 * NSEC_PER_MSEC);	/* 15 ms */
+	params->ecn = false;
+	params->bytemode = false;
+}
+
+static void pie_vars_init(struct pie_vars *vars)
+{
+	vars->dq_count = DQCOUNT_INVALID;
+	vars->accu_prob = 0;
+	vars->avg_dq_rate = 0;
+	/* default of 150 ms in pschedtime */
+	vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC);
+	vars->accu_prob_overflows = 0;
+}
+
+static bool drop_early(struct Qdisc *sch, u32 qlen, u32 packet_size,
+		       struct pie_vars *vars, struct pie_params *params)
+{
+	u64 rnd;
+	u64 local_prob = vars->prob;
+	u32 mtu = psched_mtu(qdisc_dev(sch));
+
+	/* If there is still burst allowance left skip random early drop */
+	if (vars->burst_time > 0)
+		return false;
+
+	/* If current delay is less than half of target, and
+	 * if drop prob is low already, disable early_drop
+	 */
+	if ((vars->qdelay < params->target / 2) &&
+	    (vars->prob < MAX_PROB / 5))
+		return false;
+
+	/* If we have fewer than 2 mtu-sized packets, disable drop_early,
+	 * similar to min_th in RED
+	 */
+	if (qlen < 2 * mtu)
+		return false;
+
+	/* If bytemode is turned on, use packet size to compute new
+	 * probability. Smaller packets will have lower drop prob in this case
+	 */
+	if (params->bytemode && packet_size <= mtu)
+		local_prob = (u64)packet_size * div_u64(local_prob, mtu);
+	else
+		local_prob = vars->prob;
+
+	if (local_prob == 0) {
+		vars->accu_prob = 0;
+		vars->accu_prob_overflows = 0;
+	}
+
+	if (local_prob > MAX_PROB - vars->accu_prob)
+		vars->accu_prob_overflows++;
+
+	vars->accu_prob += local_prob;
+
+	if (vars->accu_prob_overflows == 0 &&
+	    vars->accu_prob < (MAX_PROB / 100) * 85)
+		return false;
+	if (vars->accu_prob_overflows == 8 &&
+	    vars->accu_prob >= MAX_PROB / 2)
+		return true;
+
+	prandom_bytes(&rnd, 8);
+	if (rnd < local_prob) {
+		vars->accu_prob = 0;
+		vars->accu_prob_overflows = 0;
+		return true;
+	}
+
+	return false;
+}
+
+static void pie_process_dequeue(u32 qlen, struct sk_buff *skb,
+				struct pie_vars *vars)
+{
+	/* If current queue is about 10 packets or more and dq_count is unset
+	 * we have enough packets to calculate the drain rate. Save
+	 * current time as dq_tstamp and start measurement cycle.
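[Aside for reviewers, not part of the patch: the bytemode scaling and the final random draw in drop_early() above can be modelled in plain userspace C; the function names here are hypothetical.]

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_PROB 0xffffffffffffffffULL

/* Hypothetical userspace model of drop_early()'s bytemode scaling:
 * packets no larger than the MTU get a proportionally smaller drop
 * probability, local_prob = packet_size * (prob / mtu). */
static uint64_t scale_prob_bytemode(uint64_t prob, uint32_t packet_size,
				    uint32_t mtu)
{
	if (packet_size <= mtu)
		return (uint64_t)packet_size * (prob / mtu);
	return prob;
}

/* The final decision: drop when a uniform 64-bit random draw falls
 * strictly below the (possibly byte-scaled) drop probability. */
static bool should_drop(uint64_t local_prob, uint64_t rnd)
{
	return rnd < local_prob;
}
```

With prob = MAX_PROB the scaled value for a full-MTU packet stays (almost) at MAX_PROB, while a half-MTU packet is dropped with roughly half the probability, which is the intent of bytemode.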
+	 */
+	if (qlen >= QUEUE_THRESHOLD && vars->dq_count == DQCOUNT_INVALID) {
+		vars->dq_tstamp = psched_get_time();
+		vars->dq_count = 0;
+	}
+
+	/* Calculate the average drain rate from this value. If queue length
+	 * has receded to a small value viz., <= QUEUE_THRESHOLD bytes, reset
+	 * the dq_count to -1 as we don't have enough packets to calculate the
+	 * drain rate anymore. The following if block is entered only when we
+	 * have a substantial queue built up (QUEUE_THRESHOLD bytes or more)
+	 * and we calculate the drain rate for the threshold here. dq_count is
+	 * in bytes, time difference in psched_time, hence rate is in
+	 * bytes/psched_time.
+	 */
+	if (vars->dq_count != DQCOUNT_INVALID) {
+		vars->dq_count += skb->len;
+
+		if (vars->dq_count >= QUEUE_THRESHOLD) {
+			psched_time_t now = psched_get_time();
+			u32 dtime = now - vars->dq_tstamp;
+			u32 count = vars->dq_count << PIE_SCALE;
+
+			if (dtime == 0)
+				return;
+
+			count = count / dtime;
+
+			if (vars->avg_dq_rate == 0)
+				vars->avg_dq_rate = count;
+			else
+				vars->avg_dq_rate =
+				    (vars->avg_dq_rate -
+				     (vars->avg_dq_rate >> 3)) + (count >> 3);
+
+			/* If the queue has receded below the threshold, we hold
+			 * on to the last drain rate calculated, else we reset
+			 * dq_count to 0 to re-enter the if block when the next
+			 * packet is dequeued
+			 */
+			if (qlen < QUEUE_THRESHOLD) {
+				vars->dq_count = DQCOUNT_INVALID;
+			} else {
+				vars->dq_count = 0;
+				vars->dq_tstamp = psched_get_time();
+			}
+
+			if (vars->burst_time > 0) {
+				if (vars->burst_time > dtime)
+					vars->burst_time -= dtime;
+				else
+					vars->burst_time = 0;
+			}
+		}
+	}
+}
+
+static void calculate_probability(u32 qlen, struct pie_vars *vars,
+				  struct pie_params *params)
+{
+	psched_time_t qdelay = 0;	/* in pschedtime */
+	psched_time_t qdelay_old = vars->qdelay;	/* in pschedtime */
+	s64 delta = 0;		/* determines the change in probability */
+	u64 oldprob;
+	u64 alpha, beta;
+	u32 power;
+	bool update_prob = true;
+
+	vars->qdelay_old = vars->qdelay;
+
+	if (vars->avg_dq_rate > 0)
+		qdelay = (qlen << PIE_SCALE) / vars->avg_dq_rate;
+	else
+		qdelay = 0;
+
+	/* If qdelay is zero and qlen is not, it means qlen is very small, less
+	 * than dequeue_rate, so we do not update probability in this round
+	 */
+	if (qdelay == 0 && qlen != 0)
+		update_prob = false;
+
+	/* In the algorithm, alpha and beta are between 0 and 2 with typical
+	 * value for alpha as 0.125. In this implementation, we use values 0-32
+	 * passed from user space to represent this. Also, alpha and beta have
+	 * unit of HZ and need to be scaled before they can be used to update
+	 * probability. alpha/beta are updated locally below by scaling down
+	 * by 16 to come to 0-2 range.
+	 */
+	alpha = ((u64)params->alpha * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4;
+	beta = ((u64)params->beta * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4;
+
+	/* We scale alpha and beta differently depending on how heavy the
+	 * congestion is. Please see RFC 8033 for details.
+	 */
+	if (vars->prob < MAX_PROB / 10) {
+		alpha >>= 1;
+		beta >>= 1;
+
+		power = 100;
+		while (vars->prob < div_u64(MAX_PROB, power) &&
+		       power <= 1000000) {
+			alpha >>= 2;
+			beta >>= 2;
+			power *= 10;
+		}
+	}
+
+	/* alpha and beta should be between 0 and 32, in multiples of 1/16 */
+	delta += alpha * (u64)(qdelay - params->target);
+	delta += beta * (u64)(qdelay - qdelay_old);
+
+	oldprob = vars->prob;
+
+	/* to ensure we increase probability in steps of no more than 2% */
+	if (delta > (s64)(MAX_PROB / (100 / 2)) &&
+	    vars->prob >= MAX_PROB / 10)
+		delta = (MAX_PROB / 100) * 2;
+
+	/* Non-linear drop:
+	 * Tune drop probability to increase quickly for high delays (>= 250 ms)
+	 * 250 ms is derived through experiments and provides error protection
+	 */
+
+	if (qdelay > (PSCHED_NS2TICKS(250 * NSEC_PER_MSEC)))
+		delta += MAX_PROB / (100 / 2);
+
+	vars->prob += delta;
+
+	if (delta > 0) {
+		/* prevent overflow */
+		if (vars->prob < oldprob) {
+			vars->prob = MAX_PROB;
+			/* Prevent normalization error.
+			 * If probability is at
+			 * maximum value already, we normalize it here, and
+			 * skip the check to do a non-linear drop in the next
+			 * section.
+			 */
+			update_prob = false;
+		}
+	} else {
+		/* prevent underflow */
+		if (vars->prob > oldprob)
+			vars->prob = 0;
+	}
+
+	/* Non-linear drop in probability: Reduce drop probability quickly if
+	 * delay is 0 for 2 consecutive Tupdate periods.
+	 */
+
+	if (qdelay == 0 && qdelay_old == 0 && update_prob)
+		/* Reduce drop probability to 98.4% */
+		vars->prob -= vars->prob / 64u;
+
+	vars->qdelay = qdelay;
+	vars->qlen_old = qlen;
+
+	/* We restart the measurement cycle if the following conditions are met
+	 * 1. If the delay has been low for 2 consecutive Tupdate periods
+	 * 2. Calculated drop probability is zero
+	 * 3. We have at least one estimate for the avg_dq_rate i.e.,
+	 *    is a non-zero value
+	 */
+	if ((vars->qdelay < params->target / 2) &&
+	    (vars->qdelay_old < params->target / 2) &&
+	    vars->prob == 0 &&
+	    vars->avg_dq_rate > 0)
+		pie_vars_init(vars);
+}
+
+#endif /* __NET_SCHED_PIE_H */
diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c
index 1cc0c7b74aa3..f81a8c145914 100644
--- a/net/sched/sch_pie.c
+++ b/net/sched/sch_pie.c
@@ -20,53 +20,7 @@
  * RFC 8033: https://tools.ietf.org/html/rfc8033
  */
 
-#include <linux/module.h>
-#include <linux/slab.h>
-#include <linux/types.h>
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/skbuff.h>
-#include <net/pkt_sched.h>
-#include <net/inet_ecn.h>
-
-#define QUEUE_THRESHOLD 16384
-#define DQCOUNT_INVALID -1
-#define MAX_PROB 0xffffffffffffffff
-#define PIE_SCALE 8
-
-/* parameters used */
-struct pie_params {
-	psched_time_t target;	/* user specified target delay in pschedtime */
-	u32 tupdate;		/* timer frequency (in jiffies) */
-	u32 limit;		/* number of packets that can be enqueued */
-	u32 alpha;		/* alpha and beta are between 0 and 32 */
-	u32 beta;		/* and are used for shift relative to 1 */
-	bool ecn;		/* true if ecn is enabled */
-	bool bytemode;		/* to scale drop early prob based on pkt size */
-};
-
-/* variables used */
-struct pie_vars {
-	u64 prob;		/* probability but scaled by u64 limit. */
-	psched_time_t burst_time;
-	psched_time_t qdelay;
-	psched_time_t qdelay_old;
-	u64 dq_count;		/* measured in bytes */
-	psched_time_t dq_tstamp;	/* drain rate */
-	u64 accu_prob;		/* accumulated drop probability */
-	u32 avg_dq_rate;	/* bytes per pschedtime tick, scaled */
-	u32 qlen_old;		/* in bytes */
-	u8 accu_prob_overflows;	/* overflows of accu_prob */
-};
-
-/* statistics gathering */
-struct pie_stats {
-	u32 packets_in;		/* total number of packets enqueued */
-	u32 dropped;		/* packets dropped due to pie_action */
-	u32 overlimit;		/* dropped due to lack of space in queue */
-	u32 maxq;		/* maximum queue size */
-	u32 ecn_mark;		/* packets marked with ECN */
-};
+#include <net/pie.h>
 
 /* private data for the Qdisc */
 struct pie_sched_data {
@@ -77,86 +31,6 @@ struct pie_sched_data {
 	struct Qdisc *sch;
 };
 
-static void pie_params_init(struct pie_params *params)
-{
-	params->alpha = 2;
-	params->beta = 20;
-	params->tupdate = usecs_to_jiffies(15 * USEC_PER_MSEC);	/* 15 ms */
-	params->limit = 1000;	/* default of 1000 packets */
-	params->target = PSCHED_NS2TICKS(15 * NSEC_PER_MSEC);	/* 15 ms */
-	params->ecn = false;
-	params->bytemode = false;
-}
-
-static void pie_vars_init(struct pie_vars *vars)
-{
-	vars->dq_count = DQCOUNT_INVALID;
-	vars->accu_prob = 0;
-	vars->avg_dq_rate = 0;
-	/* default of 150 ms in pschedtime */
-	vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC);
-	vars->accu_prob_overflows = 0;
-}
-
-static bool drop_early(struct Qdisc *sch, u32 packet_size)
-{
-	struct pie_sched_data *q = qdisc_priv(sch);
-	u64 rnd;
-	u64 local_prob = q->vars.prob;
-	u32 mtu = psched_mtu(qdisc_dev(sch));
-
-	/* If there is still burst allowance left skip random early drop */
-	if (q->vars.burst_time > 0)
-		return false;
-
-	/* If current delay is less than half of target, and
-	 * if drop prob is low already, disable early_drop
-	 */
-	if ((q->vars.qdelay < q->params.target / 2) &&
-	    (q->vars.prob < MAX_PROB / 5))
-		return false;
-
-	/* If we have fewer than 2
-	 * mtu-sized packets, disable drop_early,
-	 * similar to min_th in RED
-	 */
-	if (sch->qstats.backlog < 2 * mtu)
-		return false;
-
-	/* If bytemode is turned on, use packet size to compute new
-	 * probability. Smaller packets will have lower drop prob in this case
-	 */
-	if (q->params.bytemode && packet_size <= mtu)
-		local_prob = (u64)packet_size * div_u64(local_prob, mtu);
-	else
-		local_prob = q->vars.prob;
-
-	if (local_prob == 0) {
-		q->vars.accu_prob = 0;
-		q->vars.accu_prob_overflows = 0;
-	}
-
-	if (local_prob > MAX_PROB - q->vars.accu_prob)
-		q->vars.accu_prob_overflows++;
-
-	q->vars.accu_prob += local_prob;
-
-	if (q->vars.accu_prob_overflows == 0 &&
-	    q->vars.accu_prob < (MAX_PROB / 100) * 85)
-		return false;
-	if (q->vars.accu_prob_overflows == 8 &&
-	    q->vars.accu_prob >= MAX_PROB / 2)
-		return true;
-
-	prandom_bytes(&rnd, 8);
-	if (rnd < local_prob) {
-		q->vars.accu_prob = 0;
-		q->vars.accu_prob_overflows = 0;
-		return true;
-	}
-
-	return false;
-}
-
 static int pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 			     struct sk_buff **to_free)
 {
@@ -168,7 +42,8 @@ static int pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		goto out;
 	}
 
-	if (!drop_early(sch, skb->len)) {
+	if (!drop_early(sch, sch->qstats.backlog, skb->len, &q->vars,
+			&q->params)) {
 		enqueue = true;
 	} else if (q->params.ecn && (q->vars.prob <= MAX_PROB / 10) &&
 		   INET_ECN_set_ce(skb)) {
@@ -270,184 +145,6 @@ static int pie_change(struct Qdisc *sch, struct nlattr *opt,
 	return 0;
 }
 
-static void pie_process_dequeue(struct Qdisc *sch, struct sk_buff *skb)
-{
-	struct pie_sched_data *q = qdisc_priv(sch);
-	int qlen = sch->qstats.backlog;	/* current queue size in bytes */
-
-	/* If current queue is about 10 packets or more and dq_count is unset
-	 * we have enough packets to calculate the drain rate. Save
-	 * current time as dq_tstamp and start measurement cycle.
-	 */
-	if (qlen >= QUEUE_THRESHOLD && q->vars.dq_count == DQCOUNT_INVALID) {
-		q->vars.dq_tstamp = psched_get_time();
-		q->vars.dq_count = 0;
-	}
-
-	/* Calculate the average drain rate from this value. If queue length
-	 * has receded to a small value viz., <= QUEUE_THRESHOLD bytes, reset
-	 * the dq_count to -1 as we don't have enough packets to calculate the
-	 * drain rate anymore. The following if block is entered only when we
-	 * have a substantial queue built up (QUEUE_THRESHOLD bytes or more)
-	 * and we calculate the drain rate for the threshold here. dq_count is
-	 * in bytes, time difference in psched_time, hence rate is in
-	 * bytes/psched_time.
-	 */
-	if (q->vars.dq_count != DQCOUNT_INVALID) {
-		q->vars.dq_count += skb->len;
-
-		if (q->vars.dq_count >= QUEUE_THRESHOLD) {
-			psched_time_t now = psched_get_time();
-			u32 dtime = now - q->vars.dq_tstamp;
-			u32 count = q->vars.dq_count << PIE_SCALE;
-
-			if (dtime == 0)
-				return;
-
-			count = count / dtime;
-
-			if (q->vars.avg_dq_rate == 0)
-				q->vars.avg_dq_rate = count;
-			else
-				q->vars.avg_dq_rate =
-				    (q->vars.avg_dq_rate -
-				     (q->vars.avg_dq_rate >> 3)) + (count >> 3);
-
-			/* If the queue has receded below the threshold, we hold
-			 * on to the last drain rate calculated, else we reset
-			 * dq_count to 0 to re-enter the if block when the next
-			 * packet is dequeued
-			 */
-			if (qlen < QUEUE_THRESHOLD) {
-				q->vars.dq_count = DQCOUNT_INVALID;
-			} else {
-				q->vars.dq_count = 0;
-				q->vars.dq_tstamp = psched_get_time();
-			}
-
-			if (q->vars.burst_time > 0) {
-				if (q->vars.burst_time > dtime)
-					q->vars.burst_time -= dtime;
-				else
-					q->vars.burst_time = 0;
-			}
-		}
-	}
-}
-
-static void calculate_probability(struct Qdisc *sch)
-{
-	struct pie_sched_data *q = qdisc_priv(sch);
-	u32 qlen = sch->qstats.backlog;	/* queue size in bytes */
-	psched_time_t qdelay = 0;	/* in pschedtime */
-	psched_time_t qdelay_old = q->vars.qdelay;	/* in pschedtime */
-	s64 delta = 0;		/* determines the change in probability */
-	u64 oldprob;
-	u64 alpha, beta;
-	u32 power;
-	bool update_prob = true;
-
-	q->vars.qdelay_old = q->vars.qdelay;
-
-	if (q->vars.avg_dq_rate > 0)
-		qdelay = (qlen << PIE_SCALE) / q->vars.avg_dq_rate;
-	else
-		qdelay = 0;
-
-	/* If qdelay is zero and qlen is not, it means qlen is very small, less
-	 * than dequeue_rate, so we do not update probability in this round
-	 */
-	if (qdelay == 0 && qlen != 0)
-		update_prob = false;
-
-	/* In the algorithm, alpha and beta are between 0 and 2 with typical
-	 * value for alpha as 0.125. In this implementation, we use values 0-32
-	 * passed from user space to represent this. Also, alpha and beta have
-	 * unit of HZ and need to be scaled before they can be used to update
-	 * probability. alpha/beta are updated locally below by scaling down
-	 * by 16 to come to 0-2 range.
-	 */
-	alpha = ((u64)q->params.alpha * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4;
-	beta = ((u64)q->params.beta * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4;
-
-	/* We scale alpha and beta differently depending on how heavy the
-	 * congestion is. Please see RFC 8033 for details.
-	 */
-	if (q->vars.prob < MAX_PROB / 10) {
-		alpha >>= 1;
-		beta >>= 1;
-
-		power = 100;
-		while (q->vars.prob < div_u64(MAX_PROB, power) &&
-		       power <= 1000000) {
-			alpha >>= 2;
-			beta >>= 2;
-			power *= 10;
-		}
-	}
-
-	/* alpha and beta should be between 0 and 32, in multiples of 1/16 */
-	delta += alpha * (u64)(qdelay - q->params.target);
-	delta += beta * (u64)(qdelay - qdelay_old);
-
-	oldprob = q->vars.prob;
-
-	/* to ensure we increase probability in steps of no more than 2% */
-	if (delta > (s64)(MAX_PROB / (100 / 2)) &&
-	    q->vars.prob >= MAX_PROB / 10)
-		delta = (MAX_PROB / 100) * 2;
-
-	/* Non-linear drop:
-	 * Tune drop probability to increase quickly for high delays (>= 250 ms)
-	 * 250 ms is derived through experiments and provides error protection
-	 */
-
-	if (qdelay > (PSCHED_NS2TICKS(250 * NSEC_PER_MSEC)))
-		delta += MAX_PROB / (100 / 2);
-
-	q->vars.prob += delta;
-
-	if (delta > 0) {
-		/* prevent overflow */
-		if (q->vars.prob < oldprob) {
-			q->vars.prob = MAX_PROB;
-			/* Prevent normalization error. If probability is at
-			 * maximum value already, we normalize it here, and
-			 * skip the check to do a non-linear drop in the next
-			 * section.
-			 */
-			update_prob = false;
-		}
-	} else {
-		/* prevent underflow */
-		if (q->vars.prob > oldprob)
-			q->vars.prob = 0;
-	}
-
-	/* Non-linear drop in probability: Reduce drop probability quickly if
-	 * delay is 0 for 2 consecutive Tupdate periods.
-	 */
-
-	if (qdelay == 0 && qdelay_old == 0 && update_prob)
-		/* Reduce drop probability to 98.4% */
-		q->vars.prob -= q->vars.prob / 64u;
-
-	q->vars.qdelay = qdelay;
-	q->vars.qlen_old = qlen;
-
-	/* We restart the measurement cycle if the following conditions are met
-	 * 1. If the delay has been low for 2 consecutive Tupdate periods
-	 * 2. Calculated drop probability is zero
-	 * 3.
-	 *    We have at least one estimate for the avg_dq_rate i.e.,
-	 *    is a non-zero value
-	 */
-	if ((q->vars.qdelay < q->params.target / 2) &&
-	    (q->vars.qdelay_old < q->params.target / 2) &&
-	    q->vars.prob == 0 &&
-	    q->vars.avg_dq_rate > 0)
-		pie_vars_init(&q->vars);
-}
-
 static void pie_timer(struct timer_list *t)
 {
 	struct pie_sched_data *q = from_timer(q, t, adapt_timer);
@@ -455,7 +152,7 @@ static void pie_timer(struct timer_list *t)
 	spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));
 
 	spin_lock(root_lock);
-	calculate_probability(sch);
+	calculate_probability(sch->qstats.backlog, &q->vars, &q->params);
 
 	/* reset the timer to fire after 'tupdate'. tupdate is in jiffies. */
 	if (q->params.tupdate)
@@ -537,12 +234,13 @@ static int pie_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
 
 static struct sk_buff *pie_qdisc_dequeue(struct Qdisc *sch)
 {
+	struct pie_sched_data *q = qdisc_priv(sch);
 	struct sk_buff *skb = qdisc_dequeue_head(sch);
 
 	if (!skb)
 		return NULL;
 
-	pie_process_dequeue(sch, skb);
+	pie_process_dequeue(sch->qstats.backlog, skb, &q->vars);
 
 	return skb;
 }

From patchwork Sun Mar 31 17:40:05 2019
From: Gautam Ramakrishnan
To: jhs@mojatatu.com
Cc: davem@davemloft.net, netdev@vger.kernel.org, "Mohit P. Tahiliani", "Sachin D. Patil", Mohit Bhasi, "V. Saicharan", Leslie Monis, Dave Taht
Subject: [RFC net-next 2/2] net: sched: fq_pie: Flow Queue PIE AQM
Date: Sun, 31 Mar 2019 23:10:05 +0530
Message-Id: <20190331174005.5841-3-gautamramk@gmail.com>
In-Reply-To: <20190331174005.5841-1-gautamramk@gmail.com>
References: <20190331174005.5841-1-gautamramk@gmail.com>

From: Mohit P. Tahiliani

FQ-PIE incorporates fair/flow queuing in which every queue is managed
by an instance of the PIE queueing discipline. The algorithm provides
good control over the queueing delay while, at the same time, ensuring
fairness across the various flows sharing the same link.

Principles:
- Packets are classified stochastically on flows.
- Each flow has a PIE managed queue.
- Flows are linked onto two (Round Robin) lists, so that new flows
  have priority over old ones.
- For a given flow, packets are dropped on arrival according to a
  drop probability.
- Tail drops only.

Usage:
tc qdisc ... fq_pie [ limit PACKETS ] [ flows NUMBER ] [ alpha NUMBER ]
                    [ beta NUMBER ] [ target TIME us ] [ tupdate TIME us ]
                    [ bytemode ] [ quantum BYTES ] [ ecn | ecn_prob PERCENTAGE ]

defaults: 1024 flows, 10240 packets limit, quantum: device MTU
          target: 15ms
          tupdate: 15ms
          alpha: 2 (on a scale of 0 to 16)
          beta: 20 (on a scale of 0 to 16)
          ecn: false
          ecn_prob: 10%

Signed-off-by: Mohit P. Tahiliani
Signed-off-by: Sachin D. Patil
Signed-off-by: Mohit Bhasi
Signed-off-by: V.
Saicharan
Signed-off-by: Leslie Monis
Signed-off-by: Gautam Ramakrishnan
Cc: Dave Taht
---
 include/uapi/linux/pkt_sched.h |  28 ++
 net/sched/Kconfig              |  14 +-
 net/sched/Makefile             |   1 +
 net/sched/sch_fq_pie.c         | 485 +++++++++++++++++++++++++++++++++
 4 files changed, 527 insertions(+), 1 deletion(-)
 create mode 100644 net/sched/sch_fq_pie.c

diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 7ee74c3474bf..005413bd09ee 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -964,6 +964,34 @@ struct tc_pie_xstats {
 	__u32 ecn_mark;		/* packets marked with ecn */
 };
 
+/* FQ PIE */
+enum {
+	TCA_FQ_PIE_UNSPEC,
+	TCA_FQ_PIE_TARGET,
+	TCA_FQ_PIE_LIMIT,
+	TCA_FQ_PIE_TUPDATE,
+	TCA_FQ_PIE_ALPHA,
+	TCA_FQ_PIE_BETA,
+	TCA_FQ_PIE_ECN,
+	TCA_FQ_PIE_QUANTUM,
+	TCA_FQ_PIE_BYTEMODE,
+	TCA_FQ_PIE_FLOWS,
+	TCA_FQ_PIE_ECN_PROB,
+	__TCA_FQ_PIE_MAX
+};
+#define TCA_FQ_PIE_MAX	(__TCA_FQ_PIE_MAX - 1)
+
+struct tc_fq_pie_xstats {
+	__u32 packets_in;	/* total number of packets enqueued */
+	__u32 dropped;		/* packets dropped due to fq_pie_action */
+	__u32 overlimit;	/* dropped due to lack of space in queue */
+	__u32 ecn_mark;		/* packets marked with ecn */
+	__u32 new_flow_count;	/* number of times packets
+				 * created a 'new flow' */
+	__u32 new_flows_len;	/* count of flows in new list */
+	__u32 old_flows_len;	/* count of flows in old list */
+};
+
 /* CBS */
 struct tc_cbs_qopt {
 	__u8 offload;
diff --git a/net/sched/Kconfig b/net/sched/Kconfig
index 5c02ad97ef23..49f4dd9894a0 100644
--- a/net/sched/Kconfig
+++ b/net/sched/Kconfig
@@ -358,13 +358,25 @@ config NET_SCH_PIE
 	help
 	  Say Y here if you want to use the Proportional Integral controller
 	  Enhanced scheduler packet scheduling algorithm.
-	  For more information, please see https://tools.ietf.org/html/rfc8033
+	  For more information, please see
+	  http://tools.ietf.org/html/draft-pan-tsvwg-pie-00
 
 	  To compile this driver as a module, choose M here: the module
 	  will be called sch_pie.
 
 	  If unsure, say N.
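[Aside for reviewers, not part of the patch: the commit message says packets are "classified stochastically on flows"; the enqueue path below does this with the kernel's reciprocal_scale() on skb_get_hash(). The multiply-and-shift can be modelled standalone; the function name here is hypothetical.]

```c
#include <stdint.h>

/* Standalone model of the kernel's reciprocal_scale(): map a 32-bit
 * flow hash uniformly onto [0, flows_cnt) with a 64-bit multiply and
 * a right shift by 32, avoiding a modulo/division on the fast path. */
static uint32_t flow_index(uint32_t hash, uint32_t flows_cnt)
{
	return (uint32_t)(((uint64_t)hash * flows_cnt) >> 32);
}
```

With the default of 1024 flows, a hash of 0 maps to queue 0 and a hash of 0xffffffff maps to queue 1023, so the result is always a valid index into the flow array.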
+config NET_SCH_FQ_PIE + tristate "Flow Queue Proportional Integral controller Enhanced (FQ-PIE) scheduler" + help + Say Y here if you want to use the Flow Queue Proportional Integral controller + Enhanced scheduler packet scheduling algorithm. + + To compile this driver as a module, choose M here: the module + will be called sch_fq_pie. + + If unsure, say N. + config NET_SCH_INGRESS tristate "Ingress/classifier-action Qdisc" depends on NET_CLS_ACT diff --git a/net/sched/Makefile b/net/sched/Makefile index 8a40431d7b5c..fdcd3f7b2fb2 100644 --- a/net/sched/Makefile +++ b/net/sched/Makefile @@ -55,6 +55,7 @@ obj-$(CONFIG_NET_SCH_CAKE) += sch_cake.o obj-$(CONFIG_NET_SCH_FQ) += sch_fq.o obj-$(CONFIG_NET_SCH_HHF) += sch_hhf.o obj-$(CONFIG_NET_SCH_PIE) += sch_pie.o +obj-$(CONFIG_NET_SCH_FQ_PIE) += sch_fq_pie.o obj-$(CONFIG_NET_SCH_CBS) += sch_cbs.o obj-$(CONFIG_NET_SCH_ETF) += sch_etf.o obj-$(CONFIG_NET_SCH_TAPRIO) += sch_taprio.o diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c new file mode 100644 index 000000000000..4ccefa0bc7f0 --- /dev/null +++ b/net/sched/sch_fq_pie.c @@ -0,0 +1,485 @@ +/* + * net/sched/sch_fq_pie.c + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * Author: Mohit P. Tahiliani + * Author: Sachin D. 
Patil
+ * Author: Mohit Bhasi
+ * Author: V Saicharan
+ * Author: Leslie Monis
+ * Author: Gautam Ramakrishnan
+ *
+ * References:
+ * RFC 8033: https://tools.ietf.org/html/rfc8033
+ */
+
+#include <net/pkt_sched.h>
+#include <net/pie.h>
+#include <net/inet_ecn.h>
+
+struct fq_pie_params {
+	struct pie_params p_params;
+	u32 ecn_prob;
+	u32 flows_cnt;
+};
+
+struct fq_pie_stats {
+	u32 packets_in;	/* total number of packets enqueued */
+	u32 dropped;	/* packets dropped due to fq_pie action */
+	u32 overlimit;	/* dropped due to lack of space in queue */
+	u32 ecn_mark;	/* packets marked with ECN */
+	u32 new_flow_count; /* number of times packets created a new flow */
+};
+
+struct fq_pie_flow {
+	s32 deficit;	/* number of credits remaining for the flow */
+	u32 backlog;	/* size of data in the flow */
+	u32 qlen;	/* number of packets in the flow */
+	struct sk_buff *head;
+	struct sk_buff *tail;
+	struct list_head flowchain;
+	struct pie_vars vars;	/* pie vars for the flow */
+	struct pie_stats stats;	/* pie stats for the flow */
+};
+
+struct fq_pie_sched_data {
+	u32 quantum;	/* number of credits in deficit round robin */
+	struct fq_pie_flow *flows;
+	struct Qdisc *sch;
+	struct fq_pie_params params;
+	struct fq_pie_stats stats;
+	struct list_head old_flows;
+	struct list_head new_flows;
+	struct timer_list adapt_timer;
+};
+
+static void fq_pie_params_init(struct fq_pie_params *params)
+{
+	pie_params_init(&params->p_params);
+	params->ecn_prob = 10;
+	params->flows_cnt = 1024;
+}
+
+static inline void flow_queue_add(struct fq_pie_flow *flow,
+				  struct sk_buff *skb)
+{
+	if (!flow->head)
+		flow->head = skb;
+	else
+		flow->tail->next = skb;
+	flow->tail = skb;
+	skb->next = NULL;
+}
+
+static int fq_pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+				struct sk_buff **to_free)
+{
+	struct fq_pie_sched_data *q = qdisc_priv(sch);
+	struct fq_pie_flow *sel_flow;
+	u32 pkt_len;
+	u32 idx;
+	u8 enqueue = false;
+
+	/* Classify packet into corresponding flow */
+	idx = reciprocal_scale(skb_get_hash(skb),
q->params.flows_cnt);
+	sel_flow = &q->flows[idx];
+
+	/* Check if the qdisc is full */
+	if (unlikely(qdisc_qlen(sch) >= sch->limit)) {
+		q->stats.overlimit++;
+		sel_flow->stats.overlimit++;
+		goto out;
+	}
+
+	if (!drop_early(sch, sel_flow->backlog, skb->len, &sel_flow->vars,
+			&q->params.p_params)) {
+		enqueue = true;
+	} else if (q->params.p_params.ecn &&
+		   sel_flow->vars.prob <=
+		   (MAX_PROB / 100) * q->params.ecn_prob &&
+		   INET_ECN_set_ce(skb)) {
+		/* If packet is ecn capable, mark it if drop probability
+		 * is lower than the parameter ecn_prob, else drop it.
+		 */
+		q->stats.ecn_mark++;
+		sel_flow->stats.ecn_mark++;
+		enqueue = true;
+	}
+	if (enqueue) {
+		pkt_len = qdisc_pkt_len(skb);
+		q->stats.packets_in++;
+		sch->qstats.backlog += pkt_len;
+		sch->q.qlen++;
+		flow_queue_add(sel_flow, skb);
+		if (list_empty(&sel_flow->flowchain)) {
+			list_add_tail(&sel_flow->flowchain, &q->new_flows);
+			q->stats.new_flow_count++;
+			sel_flow->deficit = q->quantum;
+			sel_flow->stats.dropped = 0;
+			sel_flow->qlen = 0;
+			sel_flow->backlog = 0;
+		}
+		sel_flow->qlen++;
+		sel_flow->stats.packets_in++;
+		sel_flow->backlog += pkt_len;
+		return NET_XMIT_SUCCESS;
+	}
+out:
+	q->stats.dropped++;
+	sel_flow->stats.dropped++;
+	return qdisc_drop(skb, sch, to_free);
+}
+
+static const struct nla_policy fq_pie_policy[TCA_FQ_PIE_MAX + 1] = {
+	[TCA_FQ_PIE_TARGET] = {.type = NLA_U32},
+	[TCA_FQ_PIE_LIMIT] = {.type = NLA_U32},
+	[TCA_FQ_PIE_TUPDATE] = {.type = NLA_U32},
+	[TCA_FQ_PIE_ALPHA] = {.type = NLA_U32},
+	[TCA_FQ_PIE_BETA] = {.type = NLA_U32},
+	[TCA_FQ_PIE_ECN] = {.type = NLA_U32},
+	[TCA_FQ_PIE_QUANTUM] = {.type = NLA_U32},
+	[TCA_FQ_PIE_BYTEMODE] = {.type = NLA_U32},
+	[TCA_FQ_PIE_FLOWS] = {.type = NLA_U32},
+	[TCA_FQ_PIE_ECN_PROB] = {.type = NLA_U32}
+};
+
+static inline struct sk_buff *dequeue_head(struct fq_pie_flow *flow)
+{
+	struct sk_buff *skb = flow->head;
+
+	flow->head = skb->next;
+	skb->next = NULL;
+	return skb;
+}
+
+static struct sk_buff *fq_pie_qdisc_dequeue(struct
Qdisc *sch)
+{
+	struct fq_pie_sched_data *q = qdisc_priv(sch);
+	struct sk_buff *skb = NULL;
+	struct fq_pie_flow *flow;
+	struct list_head *head;
+	u32 pkt_len;
+
+begin:
+	head = &q->new_flows;
+	if (list_empty(head)) {
+		head = &q->old_flows;
+		if (list_empty(head))
+			return NULL;
+	}
+
+	flow = list_first_entry(head, struct fq_pie_flow, flowchain);
+	/* Flow has exhausted all its credits */
+	if (flow->deficit <= 0) {
+		flow->deficit += q->quantum;
+		list_move_tail(&flow->flowchain, &q->old_flows);
+		goto begin;
+	}
+
+	if (flow->head) {
+		skb = dequeue_head(flow);
+		pkt_len = qdisc_pkt_len(skb);
+		sch->qstats.backlog -= pkt_len;
+		sch->q.qlen--;
+		qdisc_bstats_update(sch, skb);
+	}
+
+	if (!skb) {
+		if (head == &q->new_flows && !list_empty(&q->old_flows))
+			list_move_tail(&flow->flowchain, &q->old_flows);
+		else
+			list_del_init(&flow->flowchain);
+		goto begin;
+	}
+
+	flow->qlen--;
+	flow->deficit -= pkt_len;
+	flow->backlog -= pkt_len;
+	pie_process_dequeue(flow->backlog, skb, &flow->vars);
+	return skb;
+}
+
+static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
+			 struct netlink_ext_ack *extack)
+{
+	struct fq_pie_sched_data *q = qdisc_priv(sch);
+	struct nlattr *tb[TCA_FQ_PIE_MAX + 1];
+	unsigned int len_dropped = 0;
+	unsigned int num_dropped = 0;
+	unsigned int qlen;
+	int err;
+
+	if (!opt)
+		return -EINVAL;
+
+	err = nla_parse_nested(tb, TCA_FQ_PIE_MAX, opt, fq_pie_policy, NULL);
+	if (err < 0)
+		return err;
+
+	if (tb[TCA_FQ_PIE_FLOWS]) {
+		if (q->flows)
+			return -EINVAL;
+		q->params.flows_cnt = nla_get_u32(tb[TCA_FQ_PIE_FLOWS]);
+		if (!q->params.flows_cnt ||
+		    q->params.flows_cnt > 65536)
+			return -EINVAL;
+	}
+
+	sch_tree_lock(sch);
+
+	/* convert from microseconds to pschedtime */
+	if (tb[TCA_FQ_PIE_TARGET]) {
+		/* target is in us */
+		u32 target = nla_get_u32(tb[TCA_FQ_PIE_TARGET]);
+
+		/* convert to pschedtime */
+		q->params.p_params.target =
+			PSCHED_NS2TICKS((u64)target * NSEC_PER_USEC);
+	}
+
+	/* tupdate is in jiffies */
+	if
(tb[TCA_FQ_PIE_TUPDATE])
+		q->params.p_params.tupdate =
+			usecs_to_jiffies(nla_get_u32(tb[TCA_FQ_PIE_TUPDATE]));
+
+	if (tb[TCA_FQ_PIE_LIMIT]) {
+		u32 limit = nla_get_u32(tb[TCA_FQ_PIE_LIMIT]);
+
+		q->params.p_params.limit = limit;
+		sch->limit = limit;
+	}
+
+	if (tb[TCA_FQ_PIE_ECN_PROB])
+		q->params.ecn_prob = nla_get_u32(tb[TCA_FQ_PIE_ECN_PROB]);
+
+	if (tb[TCA_FQ_PIE_ALPHA])
+		q->params.p_params.alpha = nla_get_u32(tb[TCA_FQ_PIE_ALPHA]);
+
+	if (tb[TCA_FQ_PIE_BETA])
+		q->params.p_params.beta = nla_get_u32(tb[TCA_FQ_PIE_BETA]);
+
+	if (tb[TCA_FQ_PIE_ECN])
+		q->params.p_params.ecn = nla_get_u32(tb[TCA_FQ_PIE_ECN]);
+
+	if (tb[TCA_FQ_PIE_QUANTUM])
+		q->quantum = nla_get_u32(tb[TCA_FQ_PIE_QUANTUM]);
+
+	if (tb[TCA_FQ_PIE_BYTEMODE])
+		q->params.p_params.bytemode =
+			nla_get_u32(tb[TCA_FQ_PIE_BYTEMODE]);
+
+	/* Drop excess packets if new limit is lower; read the packet
+	 * length before freeing the skb.
+	 */
+	qlen = sch->q.qlen;
+	while (sch->q.qlen > sch->limit) {
+		struct sk_buff *skb = fq_pie_qdisc_dequeue(sch);
+
+		len_dropped += qdisc_pkt_len(skb);
+		num_dropped += 1;
+		kfree_skb(skb);
+	}
+	qdisc_tree_reduce_backlog(sch, num_dropped, len_dropped);
+
+	sch_tree_unlock(sch);
+	return 0;
+}
+
+static void fq_pie_timer(struct timer_list *t)
+{
+	struct fq_pie_sched_data *q = from_timer(q, t, adapt_timer);
+	struct Qdisc *sch = q->sch;
+	spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));
+	u32 idx;
+
+	spin_lock(root_lock);
+
+	for (idx = 0; idx < q->params.flows_cnt; idx++)
+		calculate_probability(q->flows[idx].backlog,
+				      &q->flows[idx].vars,
+				      &q->params.p_params);
+
+	/* reset the timer to fire after 'tupdate'; tupdate is in jiffies */
+	if (q->params.p_params.tupdate)
+		mod_timer(&q->adapt_timer,
+			  jiffies + q->params.p_params.tupdate);
+	spin_unlock(root_lock);
+}
+
+static int fq_pie_init(struct Qdisc *sch, struct nlattr *opt,
+		       struct netlink_ext_ack *extack)
+{
+	struct fq_pie_sched_data *q = qdisc_priv(sch);
+	int err;
+	int i;
+
+	fq_pie_params_init(&q->params);
+	sch->limit = 10 * 1024;
+	q->params.p_params.limit = sch->limit;
+	q->quantum = psched_mtu(qdisc_dev(sch));
+	q->sch = sch;
+
+	INIT_LIST_HEAD(&q->new_flows);
+	INIT_LIST_HEAD(&q->old_flows);
+
+	timer_setup(&q->adapt_timer, fq_pie_timer, 0);
+	mod_timer(&q->adapt_timer, jiffies + HZ / 2);
+
+	if (opt) {
+		err = fq_pie_change(sch, opt, extack);
+		if (err)
+			return err;
+	}
+
+	if (!q->flows) {
+		q->flows = kvcalloc(q->params.flows_cnt,
+				    sizeof(struct fq_pie_flow),
+				    GFP_KERNEL);
+		if (!q->flows) {
+			err = -ENOMEM;
+			goto init_failure;
+		}
+		for (i = 0; i < q->params.flows_cnt; i++) {
+			struct fq_pie_flow *flow = q->flows + i;
+
+			INIT_LIST_HEAD(&flow->flowchain);
+			pie_vars_init(&flow->vars);
+		}
+	}
+	return 0;
+
+init_failure:
+	q->params.flows_cnt = 0;
+
+	return err;
+}
+
+static int fq_pie_dump(struct Qdisc *sch, struct sk_buff *skb)
+{
+	struct fq_pie_sched_data *q = qdisc_priv(sch);
+	struct nlattr *opts;
+
+	opts = nla_nest_start(skb, TCA_OPTIONS);
+	if (!opts)
+		goto nla_put_failure;
+
+	/* convert target from pschedtime to us */
+	if (nla_put_u32(skb, TCA_FQ_PIE_TARGET,
+			((u32)PSCHED_TICKS2NS(q->params.p_params.target)) /
+			NSEC_PER_USEC) ||
+	    nla_put_u32(skb, TCA_FQ_PIE_LIMIT, sch->limit) ||
+	    nla_put_u32(skb, TCA_FQ_PIE_TUPDATE,
+			jiffies_to_usecs(q->params.p_params.tupdate)) ||
+	    nla_put_u32(skb, TCA_FQ_PIE_ALPHA, q->params.p_params.alpha) ||
+	    nla_put_u32(skb, TCA_FQ_PIE_BETA, q->params.p_params.beta) ||
+	    nla_put_u32(skb, TCA_FQ_PIE_ECN, q->params.p_params.ecn) ||
+	    nla_put_u32(skb, TCA_FQ_PIE_BYTEMODE,
+			q->params.p_params.bytemode) ||
+	    nla_put_u32(skb, TCA_FQ_PIE_QUANTUM, q->quantum) ||
+	    nla_put_u32(skb, TCA_FQ_PIE_FLOWS, q->params.flows_cnt) ||
+	    nla_put_u32(skb, TCA_FQ_PIE_ECN_PROB, q->params.ecn_prob))
+		goto nla_put_failure;
+
+	return nla_nest_end(skb, opts);
+
+nla_put_failure:
+	return -1;
+}
+
+static int fq_pie_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
+{
+	struct fq_pie_sched_data *q = qdisc_priv(sch);
+	struct tc_fq_pie_xstats st = {
+		.packets_in = q->stats.packets_in,
+		.overlimit = q->stats.overlimit,
+		.dropped = q->stats.dropped,
+		.ecn_mark = q->stats.ecn_mark,
+		.new_flow_count = q->stats.new_flow_count,
+	};
+	struct list_head *pos;
+
+	sch_tree_lock(sch);
+	list_for_each(pos, &q->new_flows)
+		st.new_flows_len++;
+
+	list_for_each(pos, &q->old_flows)
+		st.old_flows_len++;
+	sch_tree_unlock(sch);
+
+	return gnet_stats_copy_app(d, &st, sizeof(st));
+}
+
+static void fq_pie_reset(struct Qdisc *sch)
+{
+	struct fq_pie_sched_data *q = qdisc_priv(sch);
+	u32 idx;
+
+	INIT_LIST_HEAD(&q->new_flows);
+	INIT_LIST_HEAD(&q->old_flows);
+	for (idx = 0; idx < q->params.flows_cnt; idx++) {
+		struct fq_pie_flow *flow = q->flows + idx;
+
+		/* Removes all packets from flow */
+		rtnl_kfree_skbs(flow->head, flow->tail);
+		flow->head = NULL;
+
+		INIT_LIST_HEAD(&flow->flowchain);
+		pie_vars_init(&flow->vars);
+	}
+
+	sch->q.qlen = 0;
+	sch->qstats.backlog = 0;
+}
+
+static void fq_pie_destroy(struct Qdisc *sch)
+{
+	struct fq_pie_sched_data *q = qdisc_priv(sch);
+
+	del_timer_sync(&q->adapt_timer);
+	kvfree(q->flows);
+}
+
+static struct Qdisc_ops fq_pie_qdisc_ops __read_mostly = {
+	.id		= "fq_pie",
+	.priv_size	= sizeof(struct fq_pie_sched_data),
+	.enqueue	= fq_pie_qdisc_enqueue,
+	.dequeue	= fq_pie_qdisc_dequeue,
+	.peek		= qdisc_peek_dequeued,
+	.init		= fq_pie_init,
+	.destroy	= fq_pie_destroy,
+	.reset		= fq_pie_reset,
+	.change		= fq_pie_change,
+	.dump		= fq_pie_dump,
+	.dump_stats	= fq_pie_dump_stats,
+	.owner		= THIS_MODULE,
+};
+
+static int __init fq_pie_module_init(void)
+{
+	return register_qdisc(&fq_pie_qdisc_ops);
+}
+
+static void
__exit fq_pie_module_exit(void)
+{
+	unregister_qdisc(&fq_pie_qdisc_ops);
+}
+
+module_init(fq_pie_module_init);
+module_exit(fq_pie_module_exit);
+
+MODULE_DESCRIPTION("Flow Queue Proportional Integral controller Enhanced (FQ-PIE) scheduler");
+MODULE_AUTHOR("Mohit P. Tahiliani");
+MODULE_AUTHOR("Sachin D. Patil");
+MODULE_AUTHOR("Mohit Bhasi");
+MODULE_AUTHOR("V Saicharan");
+MODULE_AUTHOR("Leslie Monis");
+MODULE_AUTHOR("Gautam Ramakrishnan");
+MODULE_LICENSE("GPL");