From patchwork Thu Dec  7 17:57:20 2017
X-Patchwork-Submitter: John Fastabend <john.fastabend@gmail.com>
X-Patchwork-Id: 845739
X-Patchwork-Delegate: davem@davemloft.net
Subject: [net-next PATCH 11/14] net: sched: add support for TCQ_F_NOLOCK
 subqueues to sch_mq
From: John Fastabend <john.fastabend@gmail.com>
To: willemdebruijn.kernel@gmail.com, daniel@iogearbox.net,
 eric.dumazet@gmail.com, davem@davemloft.net
Cc: netdev@vger.kernel.org, jiri@resnulli.us, xiyou.wangcong@gmail.com
Date: Thu, 07 Dec 2017 09:57:20 -0800
Message-ID: <20171207175719.5771.27754.stgit@john-Precision-Tower-5810>
In-Reply-To: <20171207173500.5771.41198.stgit@john-Precision-Tower-5810>
References: <20171207173500.5771.41198.stgit@john-Precision-Tower-5810>
User-Agent: StGit/0.17.1-dirty
X-Mailing-List: netdev@vger.kernel.org

The sch_mq qdisc creates a sub-qdisc per tx queue; the sub-qdiscs are
then called independently for enqueue and dequeue operations. However,
statistics are aggregated and pushed up to the "master" qdisc.

This patch adds support for any of the sub-qdiscs to be per-cpu
statistics qdiscs. To handle this case, add a check when calculating
stats and aggregate the per-cpu stats if needed. Also export
__gnet_stats_copy_queue() for use as a helper function.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 include/net/gen_stats.h |    3 +++
 net/core/gen_stats.c    |    9 +++++----
 net/sched/sch_mq.c      |   25 ++++++++++++++++++-------
 3 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/include/net/gen_stats.h b/include/net/gen_stats.h
index 304f7aa..0304ba2 100644
--- a/include/net/gen_stats.h
+++ b/include/net/gen_stats.h
@@ -49,6 +49,9 @@ int gnet_stats_copy_rate_est(struct gnet_dump *d,
 int gnet_stats_copy_queue(struct gnet_dump *d,
                           struct gnet_stats_queue __percpu *cpu_q,
                           struct gnet_stats_queue *q, __u32 qlen);
+void __gnet_stats_copy_queue(struct gnet_stats_queue *qstats,
+                             const struct gnet_stats_queue __percpu *cpu_q,
+                             const struct gnet_stats_queue *q, __u32 qlen);
 int gnet_stats_copy_app(struct gnet_dump *d, void *st, int len);
 
 int gnet_stats_finish_copy(struct gnet_dump *d);
diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
index 87f2855..b2b2323b 100644
--- a/net/core/gen_stats.c
+++ b/net/core/gen_stats.c
@@ -252,10 +252,10 @@
         }
 }
 
-static void __gnet_stats_copy_queue(struct gnet_stats_queue *qstats,
-                                    const struct gnet_stats_queue __percpu *cpu,
-                                    const struct gnet_stats_queue *q,
-                                    __u32 qlen)
+void __gnet_stats_copy_queue(struct gnet_stats_queue *qstats,
+                             const struct gnet_stats_queue __percpu *cpu,
+                             const struct gnet_stats_queue *q,
+                             __u32 qlen)
 {
         if (cpu) {
                 __gnet_stats_copy_queue_cpu(qstats, cpu);
@@ -269,6 +269,7 @@ static void __gnet_stats_copy_queue(struct gnet_stats_queue *qstats,
 
         qstats->qlen = qlen;
 }
+EXPORT_SYMBOL(__gnet_stats_copy_queue);
 
 /**
  * gnet_stats_copy_queue - copy queue statistics into statistics TLV
diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
index 213b586..bc59f05 100644
--- a/net/sched/sch_mq.c
+++ b/net/sched/sch_mq.c
@@ -17,6 +17,7 @@
 #include <linux/skbuff.h>
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
+#include <net/sch_generic.h>
 
 struct mq_sched {
         struct Qdisc            **qdiscs;
@@ -103,15 +104,25 @@ static int mq_dump(struct Qdisc *sch, struct sk_buff *skb)
         memset(&sch->qstats, 0, sizeof(sch->qstats));
 
         for (ntx = 0; ntx < dev->num_tx_queues; ntx++) {
+                struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL;
+                struct gnet_stats_queue __percpu *cpu_qstats = NULL;
+                __u32 qlen = 0;
+
                 qdisc = netdev_get_tx_queue(dev, ntx)->qdisc_sleeping;
                 spin_lock_bh(qdisc_lock(qdisc));
-                sch->q.qlen             += qdisc->q.qlen;
-                sch->bstats.bytes       += qdisc->bstats.bytes;
-                sch->bstats.packets     += qdisc->bstats.packets;
-                sch->qstats.backlog     += qdisc->qstats.backlog;
-                sch->qstats.drops       += qdisc->qstats.drops;
-                sch->qstats.requeues    += qdisc->qstats.requeues;
-                sch->qstats.overlimits  += qdisc->qstats.overlimits;
+
+                if (qdisc_is_percpu_stats(qdisc)) {
+                        cpu_bstats = qdisc->cpu_bstats;
+                        cpu_qstats = qdisc->cpu_qstats;
+                }
+
+                qlen = qdisc_qlen_sum(qdisc);
+
+                __gnet_stats_copy_basic(NULL, &sch->bstats,
+                                        cpu_bstats, &qdisc->bstats);
+                __gnet_stats_copy_queue(&sch->qstats,
+                                        cpu_qstats, &qdisc->qstats, qlen);
+
                 spin_unlock_bh(qdisc_lock(qdisc));
         }
         return 0;
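
For readers following along, below is a minimal, self-contained userspace
sketch of the aggregation pattern the patch applies in mq_dump(): lockless
(TCQ_F_NOLOCK) sub-qdiscs update per-cpu counters without taking the qdisc
lock, so a dump has to fold every CPU's copy into the master counters on
demand, while classic sub-qdiscs contribute a single shared counter set.
All names here (stats_queue, copy_queue_stats, NR_CPUS) are illustrative
stand-ins, not kernel API, and unlike the real __gnet_stats_copy_queue()
this model accumulates qlen rather than assigning it, so both calls
contribute to the total.

/* Standalone model of per-cpu vs. shared queue-stats aggregation.
 * Illustrative only: stats_queue, copy_queue_stats and NR_CPUS are
 * local stand-ins, not the kernel's gnet_stats API.
 */
#include <stdio.h>

#define NR_CPUS 4

struct stats_queue {
        unsigned int qlen;
        unsigned int backlog;
        unsigned int drops;
        unsigned int requeues;
        unsigned int overlimits;
};

/* Fold one sub-qdisc's counters into the master copy. If cpu is
 * non-NULL the sub-qdisc keeps per-cpu counters (the TCQ_F_NOLOCK
 * case) and every CPU's copy is summed; otherwise the single shared
 * set q is added. qlen arrives precomputed, mirroring how sch_mq
 * passes the result of qdisc_qlen_sum().
 */
static void copy_queue_stats(struct stats_queue *master,
                             const struct stats_queue *cpu,
                             const struct stats_queue *q,
                             unsigned int qlen)
{
        if (cpu) {
                for (int i = 0; i < NR_CPUS; i++) {
                        master->backlog    += cpu[i].backlog;
                        master->drops      += cpu[i].drops;
                        master->requeues   += cpu[i].requeues;
                        master->overlimits += cpu[i].overlimits;
                }
        } else {
                master->backlog    += q->backlog;
                master->drops      += q->drops;
                master->requeues   += q->requeues;
                master->overlimits += q->overlimits;
        }
        master->qlen += qlen;
}

int main(void)
{
        /* Two "tx queues": one lockless with per-cpu stats, one classic. */
        struct stats_queue percpu[NR_CPUS] = {
                { .drops = 1 }, { .drops = 2 }, { .drops = 0 }, { .drops = 3 },
        };
        struct stats_queue shared = { .backlog = 100, .drops = 5 };
        struct stats_queue master = { 0 };

        copy_queue_stats(&master, percpu, NULL, 10); /* per-cpu sub-qdisc */
        copy_queue_stats(&master, NULL, &shared, 4); /* shared-stats sub-qdisc */

        printf("qlen=%u backlog=%u drops=%u\n",
               master.qlen, master.backlog, master.drops);
        /* prints: qlen=14 backlog=100 drops=11 */
        return 0;
}

The NULL/non-NULL split above is exactly what qdisc_is_percpu_stats()
selects on in the patch: when writers bypass the qdisc lock, there is no
single locked accumulator to read, so the dump path sums the per-cpu
copies at dump time instead.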