From patchwork Thu Oct 30 13:05:48 2008
X-Patchwork-Submitter: Jarek Poplawski
X-Patchwork-Id: 6492
X-Patchwork-Delegate: davem@davemloft.net
Date: Thu, 30 Oct 2008 13:05:48 +0000
From: Jarek Poplawski
To: Patrick McHardy
Cc: David Miller, netdev@vger.kernel.org, Herbert Xu
Subject: [PATCH 2/6 RESEND] pkt_sched: Add ->peek() methods for fifo, prio and SFQ qdiscs.
Message-ID: <20081030130548.GC22853@ff.dom.local>
X-Mailing-List: netdev@vger.kernel.org

From: Patrick McHardy

Just a demonstration of how easy adding a peek operation to the
work-conserving qdiscs actually is. In many cases it doesn't need to keep
or change any internal state, thanks to the guarantee that the packet will
either be dequeued or, if another packet arrives, the upper qdisc will
immediately ->peek() again to reevaluate the state.

(This is Patrick's patch, only slightly modified.)
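For illustration only (not part of the patch below): a minimal sketch of how
an upper, non-work-conserving qdisc might use the new ->peek() operation,
relying on the guarantee described above. The example_* names are made up
for this sketch; only qdisc_priv(), ->peek() and ->dequeue() are existing
interfaces.

/*
 * Illustration only -- not part of this patch.  A hypothetical parent
 * qdisc ("example") peeks at the next packet of its inner child qdisc
 * and dequeues it only when it is actually allowed to send.
 */
struct example_sched_data {
	struct Qdisc	*qdisc;		/* inner (child) qdisc */
};

/* Hypothetical admission test, e.g. a token-bucket or rate check. */
static bool example_can_send(struct Qdisc *sch, const struct sk_buff *skb)
{
	return true;
}

static struct sk_buff *example_dequeue(struct Qdisc *sch)
{
	struct example_sched_data *q = qdisc_priv(sch);
	struct Qdisc *child = q->qdisc;
	struct sk_buff *skb;

	/* Look at the next packet without removing it from the child. */
	skb = child->ops->peek(child);
	if (skb == NULL)
		return NULL;

	/*
	 * Not allowed to send yet: the packet simply stays queued in
	 * the child, no requeue needed.  If another packet arrives
	 * first, the upper qdisc will ->peek() again and reevaluate.
	 */
	if (!example_can_send(sch, skb))
		return NULL;

	/* Nothing ran in between, so this dequeues the packet peeked above. */
	return child->ops->dequeue(child);
}

The point of the pattern is that the parent never has to pull a packet out
and push it back, which is what makes dropping the ->requeue() dance
possible.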
Signed-off-by: Jarek Poplawski
---
 include/net/sch_generic.h |    5 +++++
 net/sched/sch_fifo.c      |    2 ++
 net/sched/sch_prio.c      |   14 ++++++++++++++
 net/sched/sch_sfq.c       |   12 ++++++++++++
 4 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index f81f7c4..da6839a 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -433,6 +433,11 @@ static inline struct sk_buff *qdisc_dequeue_tail(struct Qdisc *sch)
 	return __qdisc_dequeue_tail(sch, &sch->q);
 }
 
+static inline struct sk_buff *qdisc_peek_head(struct Qdisc *sch)
+{
+	return skb_peek(&sch->q);
+}
+
 static inline int __qdisc_requeue(struct sk_buff *skb, struct Qdisc *sch,
 				  struct sk_buff_head *list)
 {
diff --git a/net/sched/sch_fifo.c b/net/sched/sch_fifo.c
index 23d258b..8825e88 100644
--- a/net/sched/sch_fifo.c
+++ b/net/sched/sch_fifo.c
@@ -83,6 +83,7 @@ struct Qdisc_ops pfifo_qdisc_ops __read_mostly = {
 	.priv_size	=	sizeof(struct fifo_sched_data),
 	.enqueue	=	pfifo_enqueue,
 	.dequeue	=	qdisc_dequeue_head,
+	.peek		=	qdisc_peek_head,
 	.requeue	=	qdisc_requeue,
 	.drop		=	qdisc_queue_drop,
 	.init		=	fifo_init,
@@ -98,6 +99,7 @@ struct Qdisc_ops bfifo_qdisc_ops __read_mostly = {
 	.priv_size	=	sizeof(struct fifo_sched_data),
 	.enqueue	=	bfifo_enqueue,
 	.dequeue	=	qdisc_dequeue_head,
+	.peek		=	qdisc_peek_head,
 	.requeue	=	qdisc_requeue,
 	.drop		=	qdisc_queue_drop,
 	.init		=	fifo_init,
diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
index 504a78c..3651da3 100644
--- a/net/sched/sch_prio.c
+++ b/net/sched/sch_prio.c
@@ -120,6 +120,19 @@ prio_requeue(struct sk_buff *skb, struct Qdisc* sch)
 	return ret;
 }
 
+static struct sk_buff *prio_peek(struct Qdisc *sch)
+{
+	struct prio_sched_data *q = qdisc_priv(sch);
+	int prio;
+
+	for (prio = 0; prio < q->bands; prio++) {
+		struct Qdisc *qdisc = q->queues[prio];
+		struct sk_buff *skb = qdisc->ops->peek(qdisc);
+		if (skb)
+			return skb;
+	}
+	return NULL;
+}
 
 static struct sk_buff *prio_dequeue(struct Qdisc* sch)
 {
@@ -421,6 +434,7 @@ static struct Qdisc_ops prio_qdisc_ops __read_mostly = {
 	.priv_size	=	sizeof(struct prio_sched_data),
 	.enqueue	=	prio_enqueue,
 	.dequeue	=	prio_dequeue,
+	.peek		=	prio_peek,
 	.requeue	=	prio_requeue,
 	.drop		=	prio_drop,
 	.init		=	prio_init,
diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index fe1508e..198b83d 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -391,8 +391,19 @@ sfq_requeue(struct sk_buff *skb, struct Qdisc *sch)
 	return NET_XMIT_CN;
 }
 
+static struct sk_buff *
+sfq_peek(struct Qdisc *sch)
+{
+	struct sfq_sched_data *q = qdisc_priv(sch);
+	sfq_index a;
+	/* No active slots */
+	if (q->tail == SFQ_DEPTH)
+		return NULL;
+	a = q->next[q->tail];
+	return skb_peek(&q->qs[a]);
+}
 
 
 static struct sk_buff *
 sfq_dequeue(struct Qdisc *sch)
 {
@@ -624,6 +635,7 @@ static struct Qdisc_ops sfq_qdisc_ops __read_mostly = {
 	.priv_size	=	sizeof(struct sfq_sched_data),
 	.enqueue	=	sfq_enqueue,
 	.dequeue	=	sfq_dequeue,
+	.peek		=	sfq_peek,
 	.requeue	=	sfq_requeue,
 	.drop		=	sfq_drop,
 	.init		=	sfq_init,