From patchwork Sun Mar 31 17:40:04 2019
X-Patchwork-Submitter: Gautam Ramakrishnan
X-Patchwork-Id: 1071834
X-Patchwork-Delegate: davem@davemloft.net
From: Gautam Ramakrishnan
To: jhs@mojatatu.com
Cc: davem@davemloft.net, netdev@vger.kernel.org, Gautam Ramakrishnan,
	"Mohit P. Tahiliani", "Sachin D. Patil", Mohit Bhasi,
	"V. Saicharan", Leslie Monis, Dave Taht
Subject: [RFC net-next 1/2] net: sched: pie: refactor PIE
Date: Sun, 31 Mar 2019 23:10:04 +0530
Message-Id: <20190331174005.5841-2-gautamramk@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190331174005.5841-1-gautamramk@gmail.com>
References: <20190331174005.5841-1-gautamramk@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Mohit P. Tahiliani

This patch refactors the code for the PIE qdisc. It breaks sch_pie.c
down into 'sch_pie.c' and 'pie.h'. This enables the addition of qdiscs
such as FQ-PIE that make use of the components of PIE.

Signed-off-by: Mohit P. Tahiliani
Signed-off-by: Sachin D. Patil
Signed-off-by: Mohit Bhasi
Signed-off-by: V. Saicharan
Signed-off-by: Leslie Monis
Signed-off-by: Gautam Ramakrishnan
Cc: Dave Taht
---
 include/net/pie.h   | 330 ++++++++++++++++++++++++++++++++++++++++++++
 net/sched/sch_pie.c | 314 +---------------------------------------
 2 files changed, 336 insertions(+), 308 deletions(-)
 create mode 100644 include/net/pie.h

diff --git a/include/net/pie.h b/include/net/pie.h
new file mode 100644
index 000000000000..1cc295f468b4
--- /dev/null
+++ b/include/net/pie.h
@@ -0,0 +1,330 @@
+/*
+ * include/net/pie.h	Proportional Integral Controller Enhanced
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Original Author: Vijay Subramanian
+ * Original Author: Mythili Prabhu
+ *
+ * ECN support is added by Naeem Khademi
+ * University of Oslo, Norway.
+ *
+ * References:
+ * RFC 8033: https://tools.ietf.org/html/rfc8033
+ */
+
+#ifndef __NET_SCHED_PIE_H
+#define __NET_SCHED_PIE_H
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/skbuff.h>
+#include <net/pkt_sched.h>
+#include <net/inet_ecn.h>
+
+#define QUEUE_THRESHOLD 16384
+#define DQCOUNT_INVALID -1
+#define MAX_PROB 0xffffffffffffffff
+#define PIE_SCALE 8
+
+/* parameters used */
+struct pie_params {
+	psched_time_t target;	/* user specified target delay in pschedtime */
+	u32 tupdate;		/* timer frequency (in jiffies) */
+	u32 limit;		/* number of packets that can be enqueued */
+	u32 alpha;		/* alpha and beta are between 0 and 32 */
+	u32 beta;		/* and are used for shift relative to 1 */
+	bool ecn;		/* true if ecn is enabled */
+	bool bytemode;		/* to scale drop early prob based on pkt size */
+};
+
+/* variables used */
+struct pie_vars {
+	u64 prob;		/* probability but scaled by u64 limit. */
+	psched_time_t burst_time;
+	psched_time_t qdelay;
+	psched_time_t qdelay_old;
+	u64 dq_count;		/* measured in bytes */
+	psched_time_t dq_tstamp;	/* drain rate */
+	u64 accu_prob;		/* accumulated drop probability */
+	u32 avg_dq_rate;	/* bytes per pschedtime tick, scaled */
+	u32 qlen_old;		/* in bytes */
+	u8 accu_prob_overflows;	/* overflows of accu_prob */
+};
+
+/* statistics gathering */
+struct pie_stats {
+	u32 packets_in;		/* total number of packets enqueued */
+	u32 dropped;		/* packets dropped due to pie_action */
+	u32 overlimit;		/* dropped due to lack of space in queue */
+	u32 maxq;		/* maximum queue size */
+	u32 ecn_mark;		/* packets marked with ECN */
+};
+
+static void pie_params_init(struct pie_params *params)
+{
+	params->alpha = 2;
+	params->beta = 20;
+	params->tupdate = usecs_to_jiffies(15 * USEC_PER_MSEC);	/* 15 ms */
+	params->limit = 1000;	/* default of 1000 packets */
+	params->target = PSCHED_NS2TICKS(15 * NSEC_PER_MSEC);	/* 15 ms */
+	params->ecn = false;
+	params->bytemode = false;
+}
+
+static void pie_vars_init(struct pie_vars *vars)
+{
+	vars->dq_count = DQCOUNT_INVALID;
+	vars->accu_prob = 0;
+	vars->avg_dq_rate = 0;
+	/* default of 150 ms in pschedtime */
+	vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC);
+	vars->accu_prob_overflows = 0;
+}
+
+static bool drop_early(struct Qdisc *sch, u32 qlen, u32 packet_size,
+		       struct pie_vars *vars, struct pie_params *params)
+{
+	u64 rnd;
+	u64 local_prob = vars->prob;
+	u32 mtu = psched_mtu(qdisc_dev(sch));
+
+	/* If there is still burst allowance left, skip random early drop */
+	if (vars->burst_time > 0)
+		return false;
+
+	/* If current delay is less than half of target, and
+	 * if drop prob is low already, disable early_drop
+	 */
+	if ((vars->qdelay < params->target / 2) &&
+	    (vars->prob < MAX_PROB / 5))
+		return false;
+
+	/* If we have fewer than 2 mtu-sized packets, disable drop_early,
+	 * similar to min_th in RED
+	 */
+	if (qlen < 2 * mtu)
+		return false;
+
+	/* If bytemode is turned on, use packet size to compute new
+	 * probability. Smaller packets will have lower drop prob in this case
+	 */
+	if (params->bytemode && packet_size <= mtu)
+		local_prob = (u64)packet_size * div_u64(local_prob, mtu);
+	else
+		local_prob = vars->prob;
+
+	if (local_prob == 0) {
+		vars->accu_prob = 0;
+		vars->accu_prob_overflows = 0;
+	}
+
+	if (local_prob > MAX_PROB - vars->accu_prob)
+		vars->accu_prob_overflows++;
+
+	vars->accu_prob += local_prob;
+
+	if (vars->accu_prob_overflows == 0 &&
+	    vars->accu_prob < (MAX_PROB / 100) * 85)
+		return false;
+	if (vars->accu_prob_overflows == 8 &&
+	    vars->accu_prob >= MAX_PROB / 2)
+		return true;
+
+	prandom_bytes(&rnd, 8);
+	if (rnd < local_prob) {
+		vars->accu_prob = 0;
+		vars->accu_prob_overflows = 0;
+		return true;
+	}
+
+	return false;
+}
+
+static void pie_process_dequeue(u32 qlen, struct sk_buff *skb,
+				struct pie_vars *vars)
+{
+	/* If current queue is about 10 packets or more and dq_count is unset
+	 * we have enough packets to calculate the drain rate. Save
+	 * current time as dq_tstamp and start measurement cycle.
+	 */
+	if (qlen >= QUEUE_THRESHOLD && vars->dq_count == DQCOUNT_INVALID) {
+		vars->dq_tstamp = psched_get_time();
+		vars->dq_count = 0;
+	}
+
+	/* Calculate the average drain rate from this value. If queue length
+	 * has receded to a small value viz., <= QUEUE_THRESHOLD bytes, reset
+	 * the dq_count to -1 as we don't have enough packets to calculate the
+	 * drain rate anymore. The following if block is entered only when we
+	 * have a substantial queue built up (QUEUE_THRESHOLD bytes or more)
+	 * and we calculate the drain rate for the threshold here. dq_count is
+	 * in bytes, time difference in psched_time, hence rate is in
+	 * bytes/psched_time.
+	 */
+	if (vars->dq_count != DQCOUNT_INVALID) {
+		vars->dq_count += skb->len;
+
+		if (vars->dq_count >= QUEUE_THRESHOLD) {
+			psched_time_t now = psched_get_time();
+			u32 dtime = now - vars->dq_tstamp;
+			u32 count = vars->dq_count << PIE_SCALE;
+
+			if (dtime == 0)
+				return;
+
+			count = count / dtime;
+
+			if (vars->avg_dq_rate == 0)
+				vars->avg_dq_rate = count;
+			else
+				vars->avg_dq_rate =
+				    (vars->avg_dq_rate -
+				     (vars->avg_dq_rate >> 3)) + (count >> 3);
+
+			/* If the queue has receded below the threshold, we hold
+			 * on to the last drain rate calculated, else we reset
+			 * dq_count to 0 to re-enter the if block when the next
+			 * packet is dequeued
+			 */
+			if (qlen < QUEUE_THRESHOLD) {
+				vars->dq_count = DQCOUNT_INVALID;
+			} else {
+				vars->dq_count = 0;
+				vars->dq_tstamp = psched_get_time();
+			}
+
+			if (vars->burst_time > 0) {
+				if (vars->burst_time > dtime)
+					vars->burst_time -= dtime;
+				else
+					vars->burst_time = 0;
+			}
+		}
+	}
+}
+
+static void calculate_probability(u32 qlen, struct pie_vars *vars,
+				  struct pie_params *params)
+{
+	psched_time_t qdelay = 0;	/* in pschedtime */
+	psched_time_t qdelay_old = vars->qdelay;	/* in pschedtime */
+	s64 delta = 0;		/* determines the change in probability */
+	u64 oldprob;
+	u64 alpha, beta;
+	u32 power;
+	bool update_prob = true;
+
+	vars->qdelay_old = vars->qdelay;
+
+	if (vars->avg_dq_rate > 0)
+		qdelay = (qlen << PIE_SCALE) / vars->avg_dq_rate;
+	else
+		qdelay = 0;
+
+	/* If qdelay is zero and qlen is not, it means qlen is very small, less
+	 * than dequeue_rate, so we do not update probability in this round
+	 */
+	if (qdelay == 0 && qlen != 0)
+		update_prob = false;
+
+	/* In the algorithm, alpha and beta are between 0 and 2 with typical
+	 * value for alpha as 0.125. In this implementation, we use values 0-32
+	 * passed from user space to represent this. Also, alpha and beta have
+	 * unit of HZ and need to be scaled before they can be used to update
+	 * probability. alpha/beta are updated locally below by scaling down
+	 * by 16 to come to 0-2 range.
+	 */
+	alpha = ((u64)params->alpha * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4;
+	beta = ((u64)params->beta * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4;
+
+	/* We scale alpha and beta differently depending on how heavy the
+	 * congestion is. Please see RFC 8033 for details.
+	 */
+	if (vars->prob < MAX_PROB / 10) {
+		alpha >>= 1;
+		beta >>= 1;
+
+		power = 100;
+		while (vars->prob < div_u64(MAX_PROB, power) &&
+		       power <= 1000000) {
+			alpha >>= 2;
+			beta >>= 2;
+			power *= 10;
+		}
+	}
+
+	/* alpha and beta should be between 0 and 32, in multiples of 1/16 */
+	delta += alpha * (u64)(qdelay - params->target);
+	delta += beta * (u64)(qdelay - qdelay_old);
+
+	oldprob = vars->prob;
+
+	/* to ensure we increase probability in steps of no more than 2% */
+	if (delta > (s64)(MAX_PROB / (100 / 2)) &&
+	    vars->prob >= MAX_PROB / 10)
+		delta = (MAX_PROB / 100) * 2;
+
+	/* Non-linear drop:
+	 * Tune drop probability to increase quickly for high delays (>= 250 ms)
+	 * 250 ms is derived through experiments and provides error protection
+	 */
+
+	if (qdelay > (PSCHED_NS2TICKS(250 * NSEC_PER_MSEC)))
+		delta += MAX_PROB / (100 / 2);
+
+	vars->prob += delta;
+
+	if (delta > 0) {
+		/* prevent overflow */
+		if (vars->prob < oldprob) {
+			vars->prob = MAX_PROB;
+			/* Prevent normalization error.
+			 * If probability is at
+			 * maximum value already, we normalize it here, and
+			 * skip the check to do a non-linear drop in the next
+			 * section.
+			 */
+			update_prob = false;
+		}
+	} else {
+		/* prevent underflow */
+		if (vars->prob > oldprob)
+			vars->prob = 0;
+	}
+
+	/* Non-linear drop in probability: Reduce drop probability quickly if
+	 * delay is 0 for 2 consecutive Tupdate periods.
+	 */
+
+	if (qdelay == 0 && qdelay_old == 0 && update_prob)
+		/* Reduce drop probability to 98.4% */
+		vars->prob -= vars->prob / 64u;
+
+	vars->qdelay = qdelay;
+	vars->qlen_old = qlen;
+
+	/* We restart the measurement cycle if the following conditions are met
+	 * 1. If the delay has been low for 2 consecutive Tupdate periods
+	 * 2. Calculated drop probability is zero
+	 * 3. We have at least one estimate for the avg_dq_rate, i.e., it
+	 *    is a non-zero value
+	 */
+	if ((vars->qdelay < params->target / 2) &&
+	    (vars->qdelay_old < params->target / 2) &&
+	    vars->prob == 0 &&
+	    vars->avg_dq_rate > 0)
+		pie_vars_init(vars);
+}
+
+#endif /* __NET_SCHED_PIE_H */
diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c
index 1cc0c7b74aa3..f81a8c145914 100644
--- a/net/sched/sch_pie.c
+++ b/net/sched/sch_pie.c
@@ -20,53 +20,7 @@
  * RFC 8033: https://tools.ietf.org/html/rfc8033
  */
 
-#include <linux/module.h>
-#include <linux/slab.h>
-#include <linux/types.h>
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/skbuff.h>
-#include <net/pkt_sched.h>
-#include <net/inet_ecn.h>
-
-#define QUEUE_THRESHOLD 16384
-#define DQCOUNT_INVALID -1
-#define MAX_PROB 0xffffffffffffffff
-#define PIE_SCALE 8
-
-/* parameters used */
-struct pie_params {
-	psched_time_t target;	/* user specified target delay in pschedtime */
-	u32 tupdate;		/* timer frequency (in jiffies) */
-	u32 limit;		/* number of packets that can be enqueued */
-	u32 alpha;		/* alpha and beta are between 0 and 32 */
-	u32 beta;		/* and are used for shift relative to 1 */
-	bool ecn;		/* true if ecn is enabled */
-	bool bytemode;		/* to scale drop early prob based on pkt size */
-};
-
-/* variables used */
-struct pie_vars {
-	u64 prob;		/* probability but scaled by u64 limit. */
-	psched_time_t burst_time;
-	psched_time_t qdelay;
-	psched_time_t qdelay_old;
-	u64 dq_count;		/* measured in bytes */
-	psched_time_t dq_tstamp;	/* drain rate */
-	u64 accu_prob;		/* accumulated drop probability */
-	u32 avg_dq_rate;	/* bytes per pschedtime tick,scaled */
-	u32 qlen_old;		/* in bytes */
-	u8 accu_prob_overflows;	/* overflows of accu_prob */
-};
-
-/* statistics gathering */
-struct pie_stats {
-	u32 packets_in;		/* total number of packets enqueued */
-	u32 dropped;		/* packets dropped due to pie_action */
-	u32 overlimit;		/* dropped due to lack of space in queue */
-	u32 maxq;		/* maximum queue size */
-	u32 ecn_mark;		/* packets marked with ECN */
-};
+#include <net/pie.h>
 
 /* private data for the Qdisc */
 struct pie_sched_data {
@@ -77,86 +31,6 @@ struct pie_sched_data {
 	struct Qdisc *sch;
 };
 
-static void pie_params_init(struct pie_params *params)
-{
-	params->alpha = 2;
-	params->beta = 20;
-	params->tupdate = usecs_to_jiffies(15 * USEC_PER_MSEC);	/* 15 ms */
-	params->limit = 1000;	/* default of 1000 packets */
-	params->target = PSCHED_NS2TICKS(15 * NSEC_PER_MSEC);	/* 15 ms */
-	params->ecn = false;
-	params->bytemode = false;
-}
-
-static void pie_vars_init(struct pie_vars *vars)
-{
-	vars->dq_count = DQCOUNT_INVALID;
-	vars->accu_prob = 0;
-	vars->avg_dq_rate = 0;
-	/* default of 150 ms in pschedtime */
-	vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC);
-	vars->accu_prob_overflows = 0;
-}
-
-static bool drop_early(struct Qdisc *sch, u32 packet_size)
-{
-	struct pie_sched_data *q = qdisc_priv(sch);
-	u64 rnd;
-	u64 local_prob = q->vars.prob;
-	u32 mtu = psched_mtu(qdisc_dev(sch));
-
-	/* If there is still burst allowance left skip random early drop */
-	if (q->vars.burst_time > 0)
-		return false;
-
-	/* If current delay is less than half of target, and
-	 * if drop prob is low already, disable early_drop
-	 */
-	if ((q->vars.qdelay < q->params.target / 2) &&
-	    (q->vars.prob < MAX_PROB / 5))
-		return false;
-
-	/* If we have fewer than 2 mtu-sized packets, disable drop_early,
-	 * similar to min_th in RED
-	 */
-	if (sch->qstats.backlog < 2 * mtu)
-		return false;
-
-	/* If bytemode is turned on, use packet size to compute new
-	 * probablity. Smaller packets will have lower drop prob in this case
-	 */
-	if (q->params.bytemode && packet_size <= mtu)
-		local_prob = (u64)packet_size * div_u64(local_prob, mtu);
-	else
-		local_prob = q->vars.prob;
-
-	if (local_prob == 0) {
-		q->vars.accu_prob = 0;
-		q->vars.accu_prob_overflows = 0;
-	}
-
-	if (local_prob > MAX_PROB - q->vars.accu_prob)
-		q->vars.accu_prob_overflows++;
-
-	q->vars.accu_prob += local_prob;
-
-	if (q->vars.accu_prob_overflows == 0 &&
-	    q->vars.accu_prob < (MAX_PROB / 100) * 85)
-		return false;
-	if (q->vars.accu_prob_overflows == 8 &&
-	    q->vars.accu_prob >= MAX_PROB / 2)
-		return true;
-
-	prandom_bytes(&rnd, 8);
-	if (rnd < local_prob) {
-		q->vars.accu_prob = 0;
-		q->vars.accu_prob_overflows = 0;
-		return true;
-	}
-
-	return false;
-}
-
 static int pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 			     struct sk_buff **to_free)
 {
@@ -168,7 +42,8 @@ static int pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		goto out;
 	}
 
-	if (!drop_early(sch, skb->len)) {
+	if (!drop_early(sch, sch->qstats.backlog, skb->len, &q->vars,
+			&q->params)) {
 		enqueue = true;
 	} else if (q->params.ecn && (q->vars.prob <= MAX_PROB / 10) &&
 		   INET_ECN_set_ce(skb)) {
@@ -270,184 +145,6 @@ static int pie_change(struct Qdisc *sch, struct nlattr *opt,
 	return 0;
 }
 
-static void pie_process_dequeue(struct Qdisc *sch, struct sk_buff *skb)
-{
-	struct pie_sched_data *q = qdisc_priv(sch);
-	int qlen = sch->qstats.backlog;	/* current queue size in bytes */
-
-	/* If current queue is about 10 packets or more and dq_count is unset
-	 * we have enough packets to calculate the drain rate. Save
-	 * current time as dq_tstamp and start measurement cycle.
-	 */
-	if (qlen >= QUEUE_THRESHOLD && q->vars.dq_count == DQCOUNT_INVALID) {
-		q->vars.dq_tstamp = psched_get_time();
-		q->vars.dq_count = 0;
-	}
-
-	/* Calculate the average drain rate from this value. If queue length
-	 * has receded to a small value viz., <= QUEUE_THRESHOLD bytes,reset
-	 * the dq_count to -1 as we don't have enough packets to calculate the
-	 * drain rate anymore The following if block is entered only when we
-	 * have a substantial queue built up (QUEUE_THRESHOLD bytes or more)
-	 * and we calculate the drain rate for the threshold here. dq_count is
-	 * in bytes, time difference in psched_time, hence rate is in
-	 * bytes/psched_time.
-	 */
-	if (q->vars.dq_count != DQCOUNT_INVALID) {
-		q->vars.dq_count += skb->len;
-
-		if (q->vars.dq_count >= QUEUE_THRESHOLD) {
-			psched_time_t now = psched_get_time();
-			u32 dtime = now - q->vars.dq_tstamp;
-			u32 count = q->vars.dq_count << PIE_SCALE;
-
-			if (dtime == 0)
-				return;
-
-			count = count / dtime;
-
-			if (q->vars.avg_dq_rate == 0)
-				q->vars.avg_dq_rate = count;
-			else
-				q->vars.avg_dq_rate =
-				    (q->vars.avg_dq_rate -
-				     (q->vars.avg_dq_rate >> 3)) + (count >> 3);
-
-			/* If the queue has receded below the threshold, we hold
-			 * on to the last drain rate calculated, else we reset
-			 * dq_count to 0 to re-enter the if block when the next
-			 * packet is dequeued
-			 */
-			if (qlen < QUEUE_THRESHOLD) {
-				q->vars.dq_count = DQCOUNT_INVALID;
-			} else {
-				q->vars.dq_count = 0;
-				q->vars.dq_tstamp = psched_get_time();
-			}
-
-			if (q->vars.burst_time > 0) {
-				if (q->vars.burst_time > dtime)
-					q->vars.burst_time -= dtime;
-				else
-					q->vars.burst_time = 0;
-			}
-		}
-	}
-}
-
-static void calculate_probability(struct Qdisc *sch)
-{
-	struct pie_sched_data *q = qdisc_priv(sch);
-	u32 qlen = sch->qstats.backlog;	/* queue size in bytes */
-	psched_time_t qdelay = 0;	/* in pschedtime */
-	psched_time_t qdelay_old = q->vars.qdelay;	/* in pschedtime */
-	s64 delta = 0;		/* determines the change in probability */
-	u64 oldprob;
-	u64 alpha, beta;
-	u32 power;
-	bool update_prob = true;
-
-	q->vars.qdelay_old = q->vars.qdelay;
-
-	if (q->vars.avg_dq_rate > 0)
-		qdelay = (qlen << PIE_SCALE) / q->vars.avg_dq_rate;
-	else
-		qdelay = 0;
-
-	/* If qdelay is zero and qlen is not, it means qlen is very small, less
-	 * than dequeue_rate, so we do not update probabilty in this round
-	 */
-	if (qdelay == 0 && qlen != 0)
-		update_prob = false;
-
-	/* In the algorithm, alpha and beta are between 0 and 2 with typical
-	 * value for alpha as 0.125. In this implementation, we use values 0-32
-	 * passed from user space to represent this. Also, alpha and beta have
-	 * unit of HZ and need to be scaled before they can used to update
-	 * probability. alpha/beta are updated locally below by scaling down
-	 * by 16 to come to 0-2 range.
-	 */
-	alpha = ((u64)q->params.alpha * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4;
-	beta = ((u64)q->params.beta * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4;
-
-	/* We scale alpha and beta differently depending on how heavy the
-	 * congestion is. Please see RFC 8033 for details.
-	 */
-	if (q->vars.prob < MAX_PROB / 10) {
-		alpha >>= 1;
-		beta >>= 1;
-
-		power = 100;
-		while (q->vars.prob < div_u64(MAX_PROB, power) &&
-		       power <= 1000000) {
-			alpha >>= 2;
-			beta >>= 2;
-			power *= 10;
-		}
-	}
-
-	/* alpha and beta should be between 0 and 32, in multiples of 1/16 */
-	delta += alpha * (u64)(qdelay - q->params.target);
-	delta += beta * (u64)(qdelay - qdelay_old);
-
-	oldprob = q->vars.prob;
-
-	/* to ensure we increase probability in steps of no more than 2% */
-	if (delta > (s64)(MAX_PROB / (100 / 2)) &&
-	    q->vars.prob >= MAX_PROB / 10)
-		delta = (MAX_PROB / 100) * 2;
-
-	/* Non-linear drop:
-	 * Tune drop probability to increase quickly for high delays(>= 250ms)
-	 * 250ms is derived through experiments and provides error protection
-	 */
-
-	if (qdelay > (PSCHED_NS2TICKS(250 * NSEC_PER_MSEC)))
-		delta += MAX_PROB / (100 / 2);
-
-	q->vars.prob += delta;
-
-	if (delta > 0) {
-		/* prevent overflow */
-		if (q->vars.prob < oldprob) {
-			q->vars.prob = MAX_PROB;
-			/* Prevent normalization error. If probability is at
-			 * maximum value already, we normalize it here, and
-			 * skip the check to do a non-linear drop in the next
-			 * section.
-			 */
-			update_prob = false;
-		}
-	} else {
-		/* prevent underflow */
-		if (q->vars.prob > oldprob)
-			q->vars.prob = 0;
-	}
-
-	/* Non-linear drop in probability: Reduce drop probability quickly if
-	 * delay is 0 for 2 consecutive Tupdate periods.
-	 */
-
-	if (qdelay == 0 && qdelay_old == 0 && update_prob)
-		/* Reduce drop probability to 98.4% */
-		q->vars.prob -= q->vars.prob / 64u;
-
-	q->vars.qdelay = qdelay;
-	q->vars.qlen_old = qlen;
-
-	/* We restart the measurement cycle if the following conditions are met
-	 * 1. If the delay has been low for 2 consecutive Tupdate periods
-	 * 2. Calculated drop probability is zero
-	 * 3. We have atleast one estimate for the avg_dq_rate ie.,
-	 *    is a non-zero value
-	 */
-	if ((q->vars.qdelay < q->params.target / 2) &&
-	    (q->vars.qdelay_old < q->params.target / 2) &&
-	    q->vars.prob == 0 &&
-	    q->vars.avg_dq_rate > 0)
-		pie_vars_init(&q->vars);
-}
-
 static void pie_timer(struct timer_list *t)
 {
 	struct pie_sched_data *q = from_timer(q, t, adapt_timer);
@@ -455,7 +152,7 @@ static void pie_timer(struct timer_list *t)
 	spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));
 
 	spin_lock(root_lock);
-	calculate_probability(sch);
+	calculate_probability(sch->qstats.backlog, &q->vars, &q->params);
 
 	/* reset the timer to fire after 'tupdate'. tupdate is in jiffies. */
 	if (q->params.tupdate)
@@ -537,12 +234,13 @@ static int pie_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
 
 static struct sk_buff *pie_qdisc_dequeue(struct Qdisc *sch)
 {
+	struct pie_sched_data *q = qdisc_priv(sch);
 	struct sk_buff *skb = qdisc_dequeue_head(sch);
 
 	if (!skb)
 		return NULL;
 
-	pie_process_dequeue(sch, skb);
+	pie_process_dequeue(sch->qstats.backlog, skb, &q->vars);
 
 	return skb;
 }