From patchwork Wed Jan 28 13:23:53 2009
X-Patchwork-Submitter: Jarek Poplawski
X-Patchwork-Id: 20608
X-Patchwork-Delegate: davem@davemloft.net
Date: Wed, 28 Jan 2009 13:23:53 +0000
From: Jarek Poplawski
To: Patrick McHardy
Cc: David Miller, devik@cdi.cz, netdev@vger.kernel.org
Subject: Re: [PATCH 7/6] Re: [PATCH 2/6] pkt_sched: sch_htb: Consider used jiffies in htb_dequeue()
Message-ID: <20090128132353.GA6443@ff.dom.local>
In-Reply-To: <496B19F7.4060909@trash.net>

On 12-01-2009 11:22, Patrick McHardy wrote:
> Jarek Poplawski wrote:
>> On Mon, Jan 12, 2009 at 07:56:37AM +0100, Patrick McHardy wrote:
>>> Sorry, I dropped the ball on this one. I still think scheduling
>>> a work-queue or something else running in process context to
>>> kick the queue once the scheduler had a chance to run would
>>> be a better solution. But Jarek's patches are an improvement
>>> to the current situation, so no objections from me.
>>>
>> Thanks for the review, Patrick. As I wrote before, I'm not against
>> using a workqueue here: it's logically better, but I still think
>> this place is rather an exception, so I'm not convinced we should
>> care so much about adding the better solution when it also adds
>> some overhead for cancelling this workqueue. But if it really
>> bothers you, please confirm, and I'll do it.
>
> It doesn't bother me :) I just think it's the technically better
> and also most likely code-wise cleaner solution to this problem.
> Cancellation wouldn't be necessary, since an unnecessary
> netif_schedule() doesn't really matter.
>
> If you don't mind adding the workqueue, I certainly would prefer
> it, but I'm also fine with this patch. I don't have an HTB setup
> or a testcase for this specific case, otherwise I'd simply do it
> myself.
Here is an example of this workqueue. I hope I didn't miss your point,
but since I didn't find much difference in testing, I'd prefer not to
sign off on or merge this yet, at least until there are more reports of
the "too many events" problem and somebody finds it useful.

Thanks,
Jarek P.

--- (for example only) ---

diff -Nurp b/net/sched/sch_htb.c c/net/sched/sch_htb.c
--- b/net/sched/sch_htb.c	2009-01-13 20:20:47.000000000 +0100
+++ c/net/sched/sch_htb.c	2009-01-13 21:32:17.000000000 +0100
@@ -35,6 +35,7 @@
 #include <linux/list.h>
 #include <linux/compiler.h>
 #include <linux/rbtree.h>
+#include <linux/workqueue.h>
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
@@ -157,6 +158,7 @@ struct htb_sched {
 #define HTB_WARN_NONCONSERVING	0x1
 #define HTB_WARN_TOOMANYEVENTS	0x2
 	int warned;	/* only one warning about non work conserving etc. */
+	struct work_struct work;
 };

 /* find class in global hash table using given handle */
@@ -660,7 +662,7 @@ static void htb_charge_class(struct htb_
  * htb_do_events - make mode changes to classes at the level
  *
  * Scans event queue for pending events and applies them. Returns time of
- * next pending event (0 for no event in pq).
+ * next pending event (0 for no event in pq, q->now for too many events).
  * Note: Applied are events whose have cl->pq_key <= q->now.
  */
 static psched_time_t htb_do_events(struct htb_sched *q, int level,
@@ -688,12 +690,14 @@ static psched_time_t htb_do_events(struc
 		if (cl->cmode != HTB_CAN_SEND)
 			htb_add_to_wait_tree(q, cl, diff);
 	}
-	/* too much load - let's continue on next jiffie (including above) */
+
+	/* too much load - let's continue after a break for scheduling */
 	if (!(q->warned & HTB_WARN_TOOMANYEVENTS)) {
 		printk(KERN_WARNING "htb: too many events!\n");
 		q->warned |= HTB_WARN_TOOMANYEVENTS;
 	}
-	return q->now + 2 * PSCHED_TICKS_PER_SEC / HZ;
+
+	return q->now;
 }

 /* Returns class->node+prio from id-tree where classe's id is >= id.
NULL
@@ -898,7 +902,10 @@ static struct sk_buff *htb_dequeue(struc
 		}
 	}
 	sch->qstats.overlimits++;
-	qdisc_watchdog_schedule(&q->watchdog, next_event);
+	if (likely(next_event > q->now))
+		qdisc_watchdog_schedule(&q->watchdog, next_event);
+	else
+		schedule_work(&q->work);
 fin:
 	return skb;
 }
@@ -968,6 +975,14 @@ static const struct nla_policy htb_polic
 	[TCA_HTB_RTAB]	= { .type = NLA_BINARY, .len = TC_RTAB_SIZE },
 };

+static void htb_work_func(struct work_struct *work)
+{
+	struct htb_sched *q = container_of(work, struct htb_sched, work);
+	struct Qdisc *sch = q->watchdog.qdisc;
+
+	__netif_schedule(qdisc_root(sch));
+}
+
 static int htb_init(struct Qdisc *sch, struct nlattr *opt)
 {
 	struct htb_sched *q = qdisc_priv(sch);
@@ -1002,6 +1017,7 @@ static int htb_init(struct Qdisc *sch, s
 		INIT_LIST_HEAD(q->drops + i);

 	qdisc_watchdog_init(&q->watchdog, sch);
+	INIT_WORK(&q->work, htb_work_func);
 	skb_queue_head_init(&q->direct_queue);

 	q->direct_qlen = qdisc_dev(sch)->tx_queue_len;
@@ -1194,7 +1210,6 @@ static void htb_destroy_class(struct Qdi
 	kfree(cl);
 }

-/* always caled under BH & queue lock */
 static void htb_destroy(struct Qdisc *sch)
 {
 	struct htb_sched *q = qdisc_priv(sch);
@@ -1202,6 +1217,7 @@ static void htb_destroy(struct Qdisc *sc
 	struct htb_class *cl;
 	unsigned int i;

+	cancel_work_sync(&q->work);
 	qdisc_watchdog_cancel(&q->watchdog);
 	/* This line used to be after htb_destroy_class call below
 	   and surprisingly it worked in 2.4. But it must precede it