From patchwork Sun Jul 7 17:29:16 2019
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 1128702
X-Patchwork-Delegate: davem@davemloft.net
Subject: [RFC PATCH net-next 1/6] Revert "Merge branch 'net-sched-Add-txtime-assist-support-for-taprio'"
From: Vladimir Oltean
To: f.fainelli@gmail.com, vivien.didelot@gmail.com, andrew@lunn.ch, davem@davemloft.net, vinicius.gomes@intel.com, vedang.patel@intel.com, richardcochran@gmail.com
Cc: weifeng.voon@intel.com, jiri@mellanox.com, m-karicheri2@ti.com, Jose.Abreu@synopsys.com, ilias.apalodimas@linaro.org, netdev@vger.kernel.org, Vladimir Oltean
Date: Sun, 7 Jul 2019 20:29:16 +0300
Message-Id: <20190707172921.17731-2-olteanv@gmail.com>
In-Reply-To: <20190707172921.17731-1-olteanv@gmail.com>
List-ID: netdev@vger.kernel.org

This reverts commit 0a7960c7922228ca975ca4c5595e5539fc8f8b79, reversing changes made to 8747d82d3c32df488ea0fe9b86bdb53a8a04a7b8.

This returns the tc-taprio state to where it was when Voon Weifeng resent Vinicius Costa Gomes's patch "taprio: Add support for hardware offloading" (which I am also resending in this series). The two sets of changes conflict in ways that would otherwise propagate all the way up to iproute2. I don't want to define yet a third userspace interface, so I am simply reverting the txtime-assist patches for the purpose of reviewing the taprio offload.

Signed-off-by: Vladimir Oltean
--- drivers/net/ethernet/intel/igb/igb_main.c | 1 - include/uapi/linux/pkt_sched.h | 9 +- net/sched/sch_etf.c | 10 - net/sched/sch_taprio.c | 421 ++-------------------- 4 files changed, 36 insertions(+), 405 deletions(-) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index f66dae72fe37..fc925adbd9fa 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -5688,7 +5688,6 @@ static void igb_tx_ctxtdesc(struct igb_ring *tx_ring, */ if (tx_ring->launchtime_enable) { ts = ns_to_timespec64(first->skb->tstamp); - first->skb->tstamp = 0; context_desc->seqnum_seed = cpu_to_le32(ts.tv_nsec / 32); } else { context_desc->seqnum_seed = 0; diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h index 390efb54b2e0..8b2f993cbb77 100644 --- a/include/uapi/linux/pkt_sched.h +++ b/include/uapi/linux/pkt_sched.h @@ -988,9 +988,8 @@ struct tc_etf_qopt { __s32 delta; __s32 clockid; __u32 flags; -#define TC_ETF_DEADLINE_MODE_ON _BITUL(0) -#define TC_ETF_OFFLOAD_ON _BITUL(1) -#define TC_ETF_SKIP_SOCK_CHECK _BITUL(2) +#define TC_ETF_DEADLINE_MODE_ON BIT(0) +#define TC_ETF_OFFLOAD_ON BIT(1) }; enum { @@ -1159,8 +1158,6 @@ enum { * [TCA_TAPRIO_ATTR_SCHED_ENTRY_INTERVAL] */ -#define TCA_TAPRIO_ATTR_FLAG_TXTIME_ASSIST 0x1 - enum { TCA_TAPRIO_ATTR_UNSPEC, TCA_TAPRIO_ATTR_PRIOMAP, /* struct tc_mqprio_qopt */ @@ -1172,8 +1169,6 @@ enum { TCA_TAPRIO_ATTR_ADMIN_SCHED, /* The admin sched, only used in dump */ TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME, /* s64 */ TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION, /* s64 */ - TCA_TAPRIO_ATTR_FLAGS, /* u32 */ - TCA_TAPRIO_ATTR_TXTIME_DELAY, /* s32 */ __TCA_TAPRIO_ATTR_MAX, }; diff --git a/net/sched/sch_etf.c b/net/sched/sch_etf.c index cebfb65d8556..db0c2ba1d156 100644 --- a/net/sched/sch_etf.c +++ b/net/sched/sch_etf.c @@ -22,12 +22,10 @@ #define DEADLINE_MODE_IS_ON(x) ((x)->flags & TC_ETF_DEADLINE_MODE_ON) #define OFFLOAD_IS_ON(x) ((x)->flags & TC_ETF_OFFLOAD_ON) -#define SKIP_SOCK_CHECK_IS_SET(x) ((x)->flags & TC_ETF_SKIP_SOCK_CHECK) struct etf_sched_data { bool offload; bool deadline_mode; - bool skip_sock_check; int clockid; int queue; s32 delta; /* in ns */ @@ -79,9 +77,6 @@ static bool is_packet_valid(struct Qdisc *sch, struct sk_buff *nskb) struct sock *sk = nskb->sk; ktime_t now; - if (q->skip_sock_check) - goto skip; - if (!sk) return false; @@ -97,7 +92,6 @@ static bool is_packet_valid(struct Qdisc *sch, struct sk_buff *nskb) if (sk->sk_txtime_deadline_mode != q->deadline_mode) return false; -skip: now = q->get_time(); if (ktime_before(txtime, now) || ktime_before(txtime, q->last)) return
false; @@ -391,7 +385,6 @@ static int etf_init(struct Qdisc *sch, struct nlattr *opt, q->clockid = qopt->clockid; q->offload = OFFLOAD_IS_ON(qopt); q->deadline_mode = DEADLINE_MODE_IS_ON(qopt); - q->skip_sock_check = SKIP_SOCK_CHECK_IS_SET(qopt); switch (q->clockid) { case CLOCK_REALTIME: @@ -480,9 +473,6 @@ static int etf_dump(struct Qdisc *sch, struct sk_buff *skb) if (q->deadline_mode) opt.flags |= TC_ETF_DEADLINE_MODE_ON; - if (q->skip_sock_check) - opt.flags |= TC_ETF_SKIP_SOCK_CHECK; - if (nla_put(skb, TCA_ETF_PARMS, sizeof(opt), &opt)) goto nla_put_failure; diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c index 388750ddc57a..9ecfb8f5902a 100644 --- a/net/sched/sch_taprio.c +++ b/net/sched/sch_taprio.c @@ -21,17 +21,12 @@ #include #include #include -#include -#include static LIST_HEAD(taprio_list); static DEFINE_SPINLOCK(taprio_list_lock); #define TAPRIO_ALL_GATES_OPEN -1 -#define FLAGS_VALID(flags) (!((flags) & ~TCA_TAPRIO_ATTR_FLAG_TXTIME_ASSIST)) -#define TXTIME_ASSIST_IS_ENABLED(flags) ((flags) & TCA_TAPRIO_ATTR_FLAG_TXTIME_ASSIST) - struct sched_entry { struct list_head list; @@ -40,7 +35,6 @@ struct sched_entry { * packet leaves after this time. */ ktime_t close_time; - ktime_t next_txtime; atomic_t budget; int index; u32 gate_mask; @@ -61,8 +55,6 @@ struct sched_gate_list { struct taprio_sched { struct Qdisc **qdiscs; struct Qdisc *root; - u32 flags; - enum tk_offsets tk_offset; int clockid; atomic64_t picos_per_byte; /* Using picoseconds because for 10Gbps+ * speeds it's sub-nanoseconds per byte @@ -73,9 +65,9 @@ struct taprio_sched { struct sched_entry __rcu *current_entry; struct sched_gate_list __rcu *oper_sched; struct sched_gate_list __rcu *admin_sched; + ktime_t (*get_time)(void); struct hrtimer advance_timer; struct list_head taprio_list; - int txtime_delay; }; static ktime_t sched_base_time(const struct sched_gate_list *sched) @@ -86,20 +78,6 @@ static ktime_t sched_base_time(const struct sched_gate_list *sched) return ns_to_ktime(sched->base_time); } -static ktime_t taprio_get_time(struct taprio_sched *q) -{ - ktime_t mono = ktime_get(); - - switch (q->tk_offset) { - case TK_OFFS_MAX: - return mono; - default: - return ktime_mono_to_any(mono, q->tk_offset); - } - - return KTIME_MAX; -} - static void taprio_free_sched_cb(struct rcu_head *head) { struct sched_gate_list *sched = container_of(head, struct sched_gate_list, rcu); @@ -130,263 +108,20 @@ static void switch_schedules(struct taprio_sched *q, *admin = NULL; } -/* Get how much time has been already elapsed in the current cycle. 
*/ -static s32 get_cycle_time_elapsed(struct sched_gate_list *sched, ktime_t time) -{ - ktime_t time_since_sched_start; - s32 time_elapsed; - - time_since_sched_start = ktime_sub(time, sched->base_time); - div_s64_rem(time_since_sched_start, sched->cycle_time, &time_elapsed); - - return time_elapsed; -} - -static ktime_t get_interval_end_time(struct sched_gate_list *sched, - struct sched_gate_list *admin, - struct sched_entry *entry, - ktime_t intv_start) -{ - s32 cycle_elapsed = get_cycle_time_elapsed(sched, intv_start); - ktime_t intv_end, cycle_ext_end, cycle_end; - - cycle_end = ktime_add_ns(intv_start, sched->cycle_time - cycle_elapsed); - intv_end = ktime_add_ns(intv_start, entry->interval); - cycle_ext_end = ktime_add(cycle_end, sched->cycle_time_extension); - - if (ktime_before(intv_end, cycle_end)) - return intv_end; - else if (admin && admin != sched && - ktime_after(admin->base_time, cycle_end) && - ktime_before(admin->base_time, cycle_ext_end)) - return admin->base_time; - else - return cycle_end; -} - -static int length_to_duration(struct taprio_sched *q, int len) -{ - return div_u64(len * atomic64_read(&q->picos_per_byte), 1000); -} - -/* Returns the entry corresponding to next available interval. If - * validate_interval is set, it only validates whether the timestamp occurs - * when the gate corresponding to the skb's traffic class is open. - */ -static struct sched_entry *find_entry_to_transmit(struct sk_buff *skb, - struct Qdisc *sch, - struct sched_gate_list *sched, - struct sched_gate_list *admin, - ktime_t time, - ktime_t *interval_start, - ktime_t *interval_end, - bool validate_interval) -{ - ktime_t curr_intv_start, curr_intv_end, cycle_end, packet_transmit_time; - ktime_t earliest_txtime = KTIME_MAX, txtime, cycle, transmit_end_time; - struct sched_entry *entry = NULL, *entry_found = NULL; - struct taprio_sched *q = qdisc_priv(sch); - struct net_device *dev = qdisc_dev(sch); - bool entry_available = false; - s32 cycle_elapsed; - int tc, n; - - tc = netdev_get_prio_tc_map(dev, skb->priority); - packet_transmit_time = length_to_duration(q, qdisc_pkt_len(skb)); - - *interval_start = 0; - *interval_end = 0; - - if (!sched) - return NULL; - - cycle = sched->cycle_time; - cycle_elapsed = get_cycle_time_elapsed(sched, time); - curr_intv_end = ktime_sub_ns(time, cycle_elapsed); - cycle_end = ktime_add_ns(curr_intv_end, cycle); - - list_for_each_entry(entry, &sched->entries, list) { - curr_intv_start = curr_intv_end; - curr_intv_end = get_interval_end_time(sched, admin, entry, - curr_intv_start); - - if (ktime_after(curr_intv_start, cycle_end)) - break; - - if (!(entry->gate_mask & BIT(tc)) || - packet_transmit_time > entry->interval) - continue; - - txtime = entry->next_txtime; - - if (ktime_before(txtime, time) || validate_interval) { - transmit_end_time = ktime_add_ns(time, packet_transmit_time); - if ((ktime_before(curr_intv_start, time) && - ktime_before(transmit_end_time, curr_intv_end)) || - (ktime_after(curr_intv_start, time) && !validate_interval)) { - entry_found = entry; - *interval_start = curr_intv_start; - *interval_end = curr_intv_end; - break; - } else if (!entry_available && !validate_interval) { - /* Here, we are just trying to find out the - * first available interval in the next cycle. 
- */ - entry_available = 1; - entry_found = entry; - *interval_start = ktime_add_ns(curr_intv_start, cycle); - *interval_end = ktime_add_ns(curr_intv_end, cycle); - } - } else if (ktime_before(txtime, earliest_txtime) && - !entry_available) { - earliest_txtime = txtime; - entry_found = entry; - n = div_s64(ktime_sub(txtime, curr_intv_start), cycle); - *interval_start = ktime_add(curr_intv_start, n * cycle); - *interval_end = ktime_add(curr_intv_end, n * cycle); - } - } - - return entry_found; -} - -static bool is_valid_interval(struct sk_buff *skb, struct Qdisc *sch) +static ktime_t get_cycle_time(struct sched_gate_list *sched) { - struct taprio_sched *q = qdisc_priv(sch); - struct sched_gate_list *sched, *admin; - ktime_t interval_start, interval_end; struct sched_entry *entry; + ktime_t cycle = 0; - rcu_read_lock(); - sched = rcu_dereference(q->oper_sched); - admin = rcu_dereference(q->admin_sched); - - entry = find_entry_to_transmit(skb, sch, sched, admin, skb->tstamp, - &interval_start, &interval_end, true); - rcu_read_unlock(); + if (sched->cycle_time != 0) + return sched->cycle_time; - return entry; -} + list_for_each_entry(entry, &sched->entries, list) + cycle = ktime_add_ns(cycle, entry->interval); -/* This returns the tstamp value set by TCP in terms of the set clock. */ -static ktime_t get_tcp_tstamp(struct taprio_sched *q, struct sk_buff *skb) -{ - unsigned int offset = skb_network_offset(skb); - const struct ipv6hdr *ipv6h; - const struct iphdr *iph; - struct ipv6hdr _ipv6h; + sched->cycle_time = cycle; - ipv6h = skb_header_pointer(skb, offset, sizeof(_ipv6h), &_ipv6h); - if (!ipv6h) - return 0; - - if (ipv6h->version == 4) { - iph = (struct iphdr *)ipv6h; - offset += iph->ihl * 4; - - /* special-case 6in4 tunnelling, as that is a common way to get - * v6 connectivity in the home - */ - if (iph->protocol == IPPROTO_IPV6) { - ipv6h = skb_header_pointer(skb, offset, - sizeof(_ipv6h), &_ipv6h); - - if (!ipv6h || ipv6h->nexthdr != IPPROTO_TCP) - return 0; - } else if (iph->protocol != IPPROTO_TCP) { - return 0; - } - } else if (ipv6h->version == 6 && ipv6h->nexthdr != IPPROTO_TCP) { - return 0; - } - - return ktime_mono_to_any(skb->skb_mstamp_ns, q->tk_offset); -} - -/* There are a few scenarios where we will have to modify the txtime from - * what is read from next_txtime in sched_entry. They are: - * 1. If txtime is in the past, - * a. The gate for the traffic class is currently open and packet can be - * transmitted before it closes, schedule the packet right away. - * b. If the gate corresponding to the traffic class is going to open later - * in the cycle, set the txtime of packet to the interval start. - * 2. If txtime is in the future, there are packets corresponding to the - * current traffic class waiting to be transmitted. So, the following - * possibilities exist: - * a. We can transmit the packet before the window containing the txtime - * closes. - * b. The window might close before the transmission can be completed - * successfully. So, schedule the packet in the next open window. 
- */ -static long get_packet_txtime(struct sk_buff *skb, struct Qdisc *sch) -{ - ktime_t transmit_end_time, interval_end, interval_start, tcp_tstamp; - struct taprio_sched *q = qdisc_priv(sch); - struct sched_gate_list *sched, *admin; - ktime_t minimum_time, now, txtime; - int len, packet_transmit_time; - struct sched_entry *entry; - bool sched_changed; - - now = taprio_get_time(q); - minimum_time = ktime_add_ns(now, q->txtime_delay); - - tcp_tstamp = get_tcp_tstamp(q, skb); - minimum_time = max_t(ktime_t, minimum_time, tcp_tstamp); - - rcu_read_lock(); - admin = rcu_dereference(q->admin_sched); - sched = rcu_dereference(q->oper_sched); - if (admin && ktime_after(minimum_time, admin->base_time)) - switch_schedules(q, &admin, &sched); - - /* Until the schedule starts, all the queues are open */ - if (!sched || ktime_before(minimum_time, sched->base_time)) { - txtime = minimum_time; - goto done; - } - - len = qdisc_pkt_len(skb); - packet_transmit_time = length_to_duration(q, len); - - do { - sched_changed = 0; - - entry = find_entry_to_transmit(skb, sch, sched, admin, - minimum_time, - &interval_start, &interval_end, - false); - if (!entry) { - txtime = 0; - goto done; - } - - txtime = entry->next_txtime; - txtime = max_t(ktime_t, txtime, minimum_time); - txtime = max_t(ktime_t, txtime, interval_start); - - if (admin && admin != sched && - ktime_after(txtime, admin->base_time)) { - sched = admin; - sched_changed = 1; - continue; - } - - transmit_end_time = ktime_add(txtime, packet_transmit_time); - minimum_time = transmit_end_time; - - /* Update the txtime of current entry to the next time it's - * interval starts. - */ - if (ktime_after(transmit_end_time, interval_end)) - entry->next_txtime = ktime_add(interval_start, sched->cycle_time); - } while (sched_changed || ktime_after(transmit_end_time, interval_end)); - - entry->next_txtime = transmit_end_time; - -done: - rcu_read_unlock(); - return txtime; + return cycle; } static int taprio_enqueue(struct sk_buff *skb, struct Qdisc *sch, @@ -402,15 +137,6 @@ static int taprio_enqueue(struct sk_buff *skb, struct Qdisc *sch, if (unlikely(!child)) return qdisc_drop(skb, sch, to_free); - if (skb->sk && sock_flag(skb->sk, SOCK_TXTIME)) { - if (!is_valid_interval(skb, sch)) - return qdisc_drop(skb, sch, to_free); - } else if (TXTIME_ASSIST_IS_ENABLED(q->flags)) { - skb->tstamp = get_packet_txtime(skb, sch); - if (!skb->tstamp) - return qdisc_drop(skb, sch, to_free); - } - qdisc_qstats_backlog_inc(sch, skb); sch->q.qlen++; @@ -446,9 +172,6 @@ static struct sk_buff *taprio_peek(struct Qdisc *sch) if (!skb) continue; - if (TXTIME_ASSIST_IS_ENABLED(q->flags)) - return skb; - prio = skb->priority; tc = netdev_get_prio_tc_map(dev, prio); @@ -461,6 +184,11 @@ static struct sk_buff *taprio_peek(struct Qdisc *sch) return NULL; } +static inline int length_to_duration(struct taprio_sched *q, int len) +{ + return div_u64(len * atomic64_read(&q->picos_per_byte), 1000); +} + static void taprio_set_budget(struct taprio_sched *q, struct sched_entry *entry) { atomic_set(&entry->budget, @@ -504,13 +232,6 @@ static struct sk_buff *taprio_dequeue(struct Qdisc *sch) if (unlikely(!child)) continue; - if (TXTIME_ASSIST_IS_ENABLED(q->flags)) { - skb = child->ops->dequeue(child); - if (!skb) - continue; - goto skb_found; - } - skb = child->ops->peek(child); if (!skb) continue; @@ -522,7 +243,7 @@ static struct sk_buff *taprio_dequeue(struct Qdisc *sch) continue; len = qdisc_pkt_len(skb); - guard = ktime_add_ns(taprio_get_time(q), + guard = ktime_add_ns(q->get_time(), 
length_to_duration(q, len)); /* In the case that there's no gate entry, there's no @@ -541,7 +262,6 @@ static struct sk_buff *taprio_dequeue(struct Qdisc *sch) if (unlikely(!skb)) goto done; -skb_found: qdisc_bstats_update(sch, skb); qdisc_qstats_backlog_dec(sch, skb); sch->q.qlen--; @@ -804,22 +524,12 @@ static int parse_taprio_schedule(struct nlattr **tb, if (err < 0) return err; - if (!new->cycle_time) { - struct sched_entry *entry; - ktime_t cycle = 0; - - list_for_each_entry(entry, &new->entries, list) - cycle = ktime_add_ns(cycle, entry->interval); - new->cycle_time = cycle; - } - return 0; } static int taprio_parse_mqprio_opt(struct net_device *dev, struct tc_mqprio_qopt *qopt, - struct netlink_ext_ack *extack, - u32 taprio_flags) + struct netlink_ext_ack *extack) { int i, j; @@ -867,9 +577,6 @@ static int taprio_parse_mqprio_opt(struct net_device *dev, return -EINVAL; } - if (TXTIME_ASSIST_IS_ENABLED(taprio_flags)) - continue; - /* Verify that the offset and counts do not overlap */ for (j = i + 1; j < qopt->num_tc; j++) { if (last > qopt->offset[j]) { @@ -891,14 +598,14 @@ static int taprio_get_start_time(struct Qdisc *sch, s64 n; base = sched_base_time(sched); - now = taprio_get_time(q); + now = q->get_time(); if (ktime_after(base, now)) { *start = base; return 0; } - cycle = sched->cycle_time; + cycle = get_cycle_time(sched); /* The qdisc is expected to have at least one sched_entry. Moreover, * any entry must have 'interval' > 0. Thus if the cycle time is zero, @@ -925,7 +632,7 @@ static void setup_first_close_time(struct taprio_sched *q, first = list_first_entry(&sched->entries, struct sched_entry, list); - cycle = sched->cycle_time; + cycle = get_cycle_time(sched); /* FIXME: find a better place to do this */ sched->cycle_close_time = ktime_add_ns(base, cycle); @@ -1000,18 +707,6 @@ static int taprio_dev_notifier(struct notifier_block *nb, unsigned long event, return NOTIFY_DONE; } -static void setup_txtime(struct taprio_sched *q, - struct sched_gate_list *sched, ktime_t base) -{ - struct sched_entry *entry; - u32 interval = 0; - - list_for_each_entry(entry, &sched->entries, list) { - entry->next_txtime = ktime_add_ns(base, interval); - interval += entry->interval; - } -} - static int taprio_change(struct Qdisc *sch, struct nlattr *opt, struct netlink_ext_ack *extack) { @@ -1020,7 +715,6 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, struct taprio_sched *q = qdisc_priv(sch); struct net_device *dev = qdisc_dev(sch); struct tc_mqprio_qopt *mqprio = NULL; - u32 taprio_flags = 0; int i, err, clockid; unsigned long flags; ktime_t start; @@ -1033,21 +727,7 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, if (tb[TCA_TAPRIO_ATTR_PRIOMAP]) mqprio = nla_data(tb[TCA_TAPRIO_ATTR_PRIOMAP]); - if (tb[TCA_TAPRIO_ATTR_FLAGS]) { - taprio_flags = nla_get_u32(tb[TCA_TAPRIO_ATTR_FLAGS]); - - if (q->flags != 0 && q->flags != taprio_flags) { - NL_SET_ERR_MSG_MOD(extack, "Changing 'flags' of a running schedule is not supported"); - return -EOPNOTSUPP; - } else if (!FLAGS_VALID(taprio_flags)) { - NL_SET_ERR_MSG_MOD(extack, "Specified 'flags' are not valid"); - return -EINVAL; - } - - q->flags = taprio_flags; - } - - err = taprio_parse_mqprio_opt(dev, mqprio, extack, taprio_flags); + err = taprio_parse_mqprio_opt(dev, mqprio, extack); if (err < 0) return err; @@ -1106,18 +786,7 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, /* Protects against enqueue()/dequeue() */ spin_lock_bh(qdisc_lock(sch)); - if (tb[TCA_TAPRIO_ATTR_TXTIME_DELAY]) { - if 
(!TXTIME_ASSIST_IS_ENABLED(q->flags)) { - NL_SET_ERR_MSG_MOD(extack, "txtime-delay can only be set when txtime-assist mode is enabled"); - err = -EINVAL; - goto unlock; - } - - q->txtime_delay = nla_get_s32(tb[TCA_TAPRIO_ATTR_TXTIME_DELAY]); - } - - if (!TXTIME_ASSIST_IS_ENABLED(taprio_flags) && - !hrtimer_active(&q->advance_timer)) { + if (!hrtimer_active(&q->advance_timer)) { hrtimer_init(&q->advance_timer, q->clockid, HRTIMER_MODE_ABS); q->advance_timer.function = advance_sched; } @@ -1137,16 +806,16 @@ switch (q->clockid) { case CLOCK_REALTIME: - q->tk_offset = TK_OFFS_REAL; + q->get_time = ktime_get_real; break; case CLOCK_MONOTONIC: - q->tk_offset = TK_OFFS_MAX; + q->get_time = ktime_get; break; case CLOCK_BOOTTIME: - q->tk_offset = TK_OFFS_BOOT; + q->get_time = ktime_get_boottime; break; case CLOCK_TAI: - q->tk_offset = TK_OFFS_TAI; + q->get_time = ktime_get_clocktai; break; default: NL_SET_ERR_MSG(extack, "Invalid 'clockid'"); @@ -1160,35 +829,20 @@ goto unlock; } - if (TXTIME_ASSIST_IS_ENABLED(taprio_flags)) { - setup_txtime(q, new_admin, start); - - if (!oper) { - rcu_assign_pointer(q->oper_sched, new_admin); - err = 0; - new_admin = NULL; - goto unlock; - } - - rcu_assign_pointer(q->admin_sched, new_admin); - if (admin) - call_rcu(&admin->rcu, taprio_free_sched_cb); - } else { - setup_first_close_time(q, new_admin, start); + setup_first_close_time(q, new_admin, start); - /* Protects against advance_sched() */ - spin_lock_irqsave(&q->current_entry_lock, flags); + /* Protects against advance_sched() */ + spin_lock_irqsave(&q->current_entry_lock, flags); - taprio_start_sched(sch, start, new_admin); + taprio_start_sched(sch, start, new_admin); - rcu_assign_pointer(q->admin_sched, new_admin); - if (admin) - call_rcu(&admin->rcu, taprio_free_sched_cb); + rcu_assign_pointer(q->admin_sched, new_admin); + if (admin) + call_rcu(&admin->rcu, taprio_free_sched_cb); + new_admin = NULL; - spin_unlock_irqrestore(&q->current_entry_lock, flags); - } + spin_unlock_irqrestore(&q->current_entry_lock, flags); - new_admin = NULL; err = 0; unlock: @@ -1426,13 +1080,6 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb) if (nla_put_s32(skb, TCA_TAPRIO_ATTR_SCHED_CLOCKID, q->clockid)) goto options_error; - if (q->flags && nla_put_u32(skb, TCA_TAPRIO_ATTR_FLAGS, q->flags)) - goto options_error; - - if (q->txtime_delay && - nla_put_s32(skb, TCA_TAPRIO_ATTR_TXTIME_DELAY, q->txtime_delay)) - goto options_error; - if (oper && dump_schedule(skb, oper)) goto options_error;

From patchwork Sun Jul 7 17:29:17 2019
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 1128703
X-Patchwork-Delegate: davem@davemloft.net
Subject: [RFC PATCH net-next 2/6] taprio: Add support for hardware offloading
From: Vladimir Oltean
Date: Sun, 7 Jul 2019 20:29:17 +0300
Message-Id: <20190707172921.17731-3-olteanv@gmail.com>
In-Reply-To: <20190707172921.17731-1-olteanv@gmail.com>

From: Vinicius Costa Gomes

This allows taprio to offload the schedule enforcement to capable network cards, resulting in more precise windows and less CPU usage.

The important detail here is the difference between the gate_mask in taprio and gate_mask for the network driver. For the driver, each bit in gate_mask references a transmission queue: bit 0 for queue 0, bit 1 for queue 1, and so on. This is done so the driver doesn't need to know about traffic classes.
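For illustration only (not part of the patch), the translation described above boils down to the following minimal user-space sketch of what the patch implements in tc_mask_to_queue_mask(); the two-traffic-class mqprio layout is invented for the example:

	#include <stdint.h>
	#include <stdio.h>

	/* Contiguous bit range [hi:lo], mirroring the kernel's GENMASK(). */
	#define GENMASK(hi, lo) (((~0u) >> (31 - (hi))) & ~((1u << (lo)) - 1u))

	/* Made-up mqprio layout: TC 0 -> queues 0..1, TC 1 -> queues 2..3. */
	static const uint32_t offset[] = { 0, 2 };
	static const uint32_t count[] = { 2, 2 };

	static uint32_t tc_mask_to_queue_mask(uint32_t tc_mask, int num_tc)
	{
		uint32_t queue_mask = 0;
		int i;

		for (i = 0; i < num_tc; i++) {
			if (!(tc_mask & (1u << i)))
				continue;
			/* Open every queue belonging to this traffic class */
			queue_mask |= GENMASK(offset[i] + count[i] - 1, offset[i]);
		}

		return queue_mask;
	}

	int main(void)
	{
		/* A gate_mask of 0x2 (TC 1 open) becomes 0xc (queues 2 and 3). */
		printf("queue_mask = %#x\n", tc_mask_to_queue_mask(0x2, 2));
		return 0;
	}
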
Signed-off-by: Vinicius Costa Gomes Signed-off-by: Voon Weifeng --- include/linux/netdevice.h | 1 + include/net/pkt_sched.h | 18 +++ include/uapi/linux/pkt_sched.h | 4 + net/sched/sch_taprio.c | 263 ++++++++++++++++++++++++++++++++- 4 files changed, 284 insertions(+), 2 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 88292953aa6f..514eb7e9feee 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -845,6 +845,7 @@ enum tc_setup_type { TC_SETUP_QDISC_ETF, TC_SETUP_ROOT_QDISC, TC_SETUP_QDISC_GRED, + TC_SETUP_QDISC_TAPRIO, }; /* These structures hold the attributes of bpf state that are being passed diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h index a16fbe9a2a67..3333c107f920 100644 --- a/include/net/pkt_sched.h +++ b/include/net/pkt_sched.h @@ -161,4 +161,22 @@ struct tc_etf_qopt_offload { s32 queue; }; +struct tc_taprio_sched_entry { + u8 command; /* TC_TAPRIO_CMD_* */ + + /* The gate_mask in the offloading side refers to HW queues */ + u32 gate_mask; + u32 interval; +}; + +struct tc_taprio_qopt_offload { + u8 enable; + ktime_t base_time; + u64 cycle_time; + u64 cycle_time_extension; + + size_t num_entries; + struct tc_taprio_sched_entry entries[0]; +}; + #endif diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h index 8b2f993cbb77..08a260fd7843 100644 --- a/include/uapi/linux/pkt_sched.h +++ b/include/uapi/linux/pkt_sched.h @@ -1158,6 +1158,9 @@ enum { * [TCA_TAPRIO_ATTR_SCHED_ENTRY_INTERVAL] */ +#define TCA_TAPRIO_ATTR_OFFLOAD_FLAG_FULL_OFFLOAD 0x1 +#define TCA_TAPRIO_ATTR_OFFLOAD_FLAG_TXTIME_OFFLOAD 0x2 + enum { TCA_TAPRIO_ATTR_UNSPEC, TCA_TAPRIO_ATTR_PRIOMAP, /* struct tc_mqprio_qopt */ @@ -1169,6 +1172,7 @@ enum { TCA_TAPRIO_ATTR_ADMIN_SCHED, /* The admin sched, only used in dump */ TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME, /* s64 */ TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION, /* s64 */ + TCA_TAPRIO_ATTR_OFFLOAD_FLAGS, /* u32 */ __TCA_TAPRIO_ATTR_MAX, }; diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c index 9ecfb8f5902a..9e8f066a2474 100644 --- a/net/sched/sch_taprio.c +++ b/net/sched/sch_taprio.c @@ -26,6 +26,9 @@ static LIST_HEAD(taprio_list); static DEFINE_SPINLOCK(taprio_list_lock); #define TAPRIO_ALL_GATES_OPEN -1 +#define FULL_OFFLOAD_IS_ON(flags) ((flags) & TCA_TAPRIO_ATTR_OFFLOAD_FLAG_FULL_OFFLOAD) +#define TXTIME_OFFLOAD_IS_ON(flags) ((flags) & TCA_TAPRIO_ATTR_OFFLOAD_FLAG_TXTIME_OFFLOAD) +#define VALID_OFFLOAD(flags) ((flags) != U32_MAX) struct sched_entry { struct list_head list; @@ -55,6 +58,8 @@ struct sched_gate_list { struct taprio_sched { struct Qdisc **qdiscs; struct Qdisc *root; + struct tc_mqprio_qopt mqprio; + u32 offload_flags; int clockid; atomic64_t picos_per_byte; /* Using picoseconds because for 10Gbps+ * speeds it's sub-nanoseconds per byte @@ -66,6 +71,8 @@ struct taprio_sched { struct sched_gate_list __rcu *oper_sched; struct sched_gate_list __rcu *admin_sched; ktime_t (*get_time)(void); + struct sk_buff *(*dequeue)(struct Qdisc *sch); + struct sk_buff *(*peek)(struct Qdisc *sch); struct hrtimer advance_timer; struct list_head taprio_list; }; @@ -143,7 +150,30 @@ static int taprio_enqueue(struct sk_buff *skb, struct Qdisc *sch, return qdisc_enqueue(skb, child, to_free); } -static struct sk_buff *taprio_peek(struct Qdisc *sch) +static struct sk_buff *taprio_peek_offload(struct Qdisc *sch) +{ + struct taprio_sched *q = qdisc_priv(sch); + struct net_device *dev = qdisc_dev(sch); + struct sk_buff *skb; + int i; + + for (i = 0; i < dev->num_tx_queues; i++) { + 
struct Qdisc *child = q->qdiscs[i]; + + if (unlikely(!child)) + continue; + + skb = child->ops->peek(child); + if (!skb) + continue; + + return skb; + } + + return NULL; +} + +static struct sk_buff *taprio_peek_soft(struct Qdisc *sch) { struct taprio_sched *q = qdisc_priv(sch); struct net_device *dev = qdisc_dev(sch); @@ -184,6 +214,13 @@ static struct sk_buff *taprio_peek(struct Qdisc *sch) return NULL; } +static struct sk_buff *taprio_peek(struct Qdisc *sch) +{ + struct taprio_sched *q = qdisc_priv(sch); + + return q->peek(sch); +} + static inline int length_to_duration(struct taprio_sched *q, int len) { return div_u64(len * atomic64_read(&q->picos_per_byte), 1000); @@ -196,7 +233,7 @@ static void taprio_set_budget(struct taprio_sched *q, struct sched_entry *entry) atomic64_read(&q->picos_per_byte))); } -static struct sk_buff *taprio_dequeue(struct Qdisc *sch) +static struct sk_buff *taprio_dequeue_soft(struct Qdisc *sch) { struct taprio_sched *q = qdisc_priv(sch); struct net_device *dev = qdisc_dev(sch); @@ -275,6 +312,40 @@ static struct sk_buff *taprio_dequeue(struct Qdisc *sch) return skb; } +static struct sk_buff *taprio_dequeue_offload(struct Qdisc *sch) +{ + struct taprio_sched *q = qdisc_priv(sch); + struct net_device *dev = qdisc_dev(sch); + struct sk_buff *skb; + int i; + + for (i = 0; i < dev->num_tx_queues; i++) { + struct Qdisc *child = q->qdiscs[i]; + + if (unlikely(!child)) + continue; + + skb = child->ops->dequeue(child); + if (unlikely(!skb)) + continue; + + qdisc_bstats_update(sch, skb); + qdisc_qstats_backlog_dec(sch, skb); + sch->q.qlen--; + + return skb; + } + + return NULL; +} + +static struct sk_buff *taprio_dequeue(struct Qdisc *sch) +{ + struct taprio_sched *q = qdisc_priv(sch); + + return q->dequeue(sch); +} + static bool should_restart_cycle(const struct sched_gate_list *oper, const struct sched_entry *entry) { @@ -707,6 +778,165 @@ static int taprio_dev_notifier(struct notifier_block *nb, unsigned long event, return NOTIFY_DONE; } +static u32 tc_mask_to_queue_mask(const struct tc_mqprio_qopt *mqprio, + u32 tc_mask) +{ + u32 i, queue_mask = 0; + + for (i = 0; i < mqprio->num_tc; i++) { + u32 offset, count; + + if (!(tc_mask & BIT(i))) + continue; + + offset = mqprio->offset[i]; + count = mqprio->count[i]; + + queue_mask |= GENMASK(offset + count - 1, offset); + } + + return queue_mask; +} + +static void taprio_sched_to_offload(struct taprio_sched *q, + struct sched_gate_list *sched, + struct tc_taprio_qopt_offload *taprio) +{ + struct sched_entry *entry; + int i = 0; + + taprio->base_time = sched->base_time; + + list_for_each_entry(entry, &sched->entries, list) { + struct tc_taprio_sched_entry *e = &taprio->entries[i]; + + e->command = entry->command; + e->interval = entry->interval; + + /* We do this transformation because the NIC + * has no knowledge of traffic classes, but it + * knows about queues. + */ + e->gate_mask = tc_mask_to_queue_mask(&q->mqprio, + entry->gate_mask); + i++; + } + + taprio->num_entries = i; +} + +static void taprio_disable_offload(struct net_device *dev, + struct taprio_sched *q) +{ + const struct net_device_ops *ops = dev->netdev_ops; + struct tc_taprio_qopt_offload taprio = { }; + int err; + + if (!q->offload_flags) + return; + + if (!ops->ndo_setup_tc) + return; + + taprio.enable = 0; + + err = ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TAPRIO, &taprio); + if (err < 0) + return; + + /* Just to be sure to keep the function pointers in a + * consistent state always. 
+ */ + q->dequeue = taprio_dequeue_soft; + q->peek = taprio_peek_soft; + + q->advance_timer.function = advance_sched; + + q->offload_flags = 0; +} + +static enum hrtimer_restart next_sched(struct hrtimer *timer) +{ + struct taprio_sched *q = container_of(timer, struct taprio_sched, + advance_timer); + struct sched_gate_list *oper, *admin; + + spin_lock(&q->current_entry_lock); + oper = rcu_dereference_protected(q->oper_sched, + lockdep_is_held(&q->current_entry_lock)); + admin = rcu_dereference_protected(q->admin_sched, + lockdep_is_held(&q->current_entry_lock)); + + rcu_assign_pointer(q->oper_sched, admin); + rcu_assign_pointer(q->admin_sched, NULL); + + if (oper) + call_rcu(&oper->rcu, taprio_free_sched_cb); + + spin_unlock(&q->current_entry_lock); + + return HRTIMER_NORESTART; +} + +static int taprio_enable_offload(struct net_device *dev, + struct tc_mqprio_qopt *mqprio, + struct taprio_sched *q, + struct sched_gate_list *sched, + struct netlink_ext_ack *extack, + u32 offload_flags) +{ + const struct net_device_ops *ops = dev->netdev_ops; + struct tc_taprio_qopt_offload *taprio; + size_t size; + int err = 0; + + if (!FULL_OFFLOAD_IS_ON(offload_flags)) { + NL_SET_ERR_MSG(extack, "Offload mode is not supported"); + return -EOPNOTSUPP; + } + + if (!ops->ndo_setup_tc) { + NL_SET_ERR_MSG(extack, "Specified device does not support taprio offload"); + return -EOPNOTSUPP; + } + + size = sizeof(*taprio) + + sched->num_entries * sizeof(struct tc_taprio_sched_entry); + + taprio = kzalloc(size, GFP_ATOMIC); + if (!taprio) { + NL_SET_ERR_MSG(extack, "Not enough memory for enabling offload mode"); + return -ENOMEM; + } + + taprio->enable = 1; + taprio_sched_to_offload(q, sched, taprio); + + err = ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TAPRIO, taprio); + if (err < 0) { + NL_SET_ERR_MSG(extack, "Specified device failed to setup taprio hardware offload"); + goto done; + } + + q->dequeue = taprio_dequeue_offload; + q->peek = taprio_peek_offload; + + /* This function will only serve to keep the pointers to the + * "oper" and "admin" schedules valid in relation to their + * base times, so when calling dump() the users looks at the + * right schedules. 
+ */ + q->advance_timer.function = next_sched; + +done: + kfree(taprio); + + if (err == 0) + q->offload_flags = offload_flags; + + return err; +} + static int taprio_change(struct Qdisc *sch, struct nlattr *opt, struct netlink_ext_ack *extack) { @@ -715,6 +945,7 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, struct taprio_sched *q = qdisc_priv(sch); struct net_device *dev = qdisc_dev(sch); struct tc_mqprio_qopt *mqprio = NULL; + u32 offload_flags = U32_MAX; int i, err, clockid; unsigned long flags; ktime_t start; @@ -731,6 +962,9 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, if (err < 0) return err; + if (tb[TCA_TAPRIO_ATTR_OFFLOAD_FLAGS]) + offload_flags = nla_get_u32(tb[TCA_TAPRIO_ATTR_OFFLOAD_FLAGS]); + new_admin = kzalloc(sizeof(*new_admin), GFP_KERNEL); if (!new_admin) { NL_SET_ERR_MSG(extack, "Not enough memory for a new schedule"); @@ -749,6 +983,12 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, goto free_sched; } + if (offload_flags != U32_MAX && (oper || admin)) { + NL_SET_ERR_MSG(extack, "Changing 'offload' of a running schedule is not supported"); + err = -ENOTSUPP; + goto free_sched; + } + err = parse_taprio_schedule(tb, new_admin, extack); if (err < 0) goto free_sched; @@ -802,6 +1042,8 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, for (i = 0; i < TC_BITMASK + 1; i++) netdev_set_prio_tc_map(dev, i, mqprio->prio_tc_map[i]); + + memcpy(&q->mqprio, mqprio, sizeof(q->mqprio)); } switch (q->clockid) { @@ -823,6 +1065,15 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, goto unlock; } + if (!offload_flags) { + taprio_disable_offload(dev, q); + } else if (VALID_OFFLOAD(offload_flags) || q->offload_flags) { + err = taprio_enable_offload(dev, mqprio, q, + new_admin, extack, offload_flags); + if (err) + goto unlock; + } + err = taprio_get_start_time(sch, new_admin, &start); if (err < 0) { NL_SET_ERR_MSG(extack, "Internal error: failed get start time"); @@ -866,6 +1117,8 @@ static void taprio_destroy(struct Qdisc *sch) hrtimer_cancel(&q->advance_timer); + taprio_disable_offload(dev, q); + if (q->qdiscs) { for (i = 0; i < dev->num_tx_queues && q->qdiscs[i]; i++) qdisc_put(q->qdiscs[i]); @@ -895,6 +1148,9 @@ static int taprio_init(struct Qdisc *sch, struct nlattr *opt, hrtimer_init(&q->advance_timer, CLOCK_TAI, HRTIMER_MODE_ABS); q->advance_timer.function = advance_sched; + q->dequeue = taprio_dequeue_soft; + q->peek = taprio_peek_soft; + q->root = sch; /* We only support static clockids. 
Use an invalid value as default @@ -1080,6 +1336,9 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb) if (nla_put_s32(skb, TCA_TAPRIO_ATTR_SCHED_CLOCKID, q->clockid)) goto options_error; + if (nla_put_u32(skb, TCA_TAPRIO_ATTR_OFFLOAD_FLAGS, q->offload_flags)) + goto options_error; + if (oper && dump_schedule(skb, oper)) goto options_error;

From patchwork Sun Jul 7 17:29:18 2019
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 1128704
X-Patchwork-Delegate: davem@davemloft.net
Subject: [RFC PATCH net-next 3/6] net: dsa: Pass tc-taprio offload to drivers
From: Vladimir Oltean
Date: Sun, 7 Jul 2019 20:29:18 +0300
Message-Id: <20190707172921.17731-4-olteanv@gmail.com>
In-Reply-To: <20190707172921.17731-1-olteanv@gmail.com>

tc-taprio is a qdisc based on the enhancements for scheduled traffic specified in IEEE 802.1Qbv (later merged in 802.1Q). This qdisc has a software implementation and an optional offload through which compatible Ethernet ports may configure their egress 802.1Qbv schedulers.

Signed-off-by: Vladimir Oltean
--- include/net/dsa.h | 3 +++ net/dsa/slave.c | 14 ++++++++++++++ 2 files changed, 17 insertions(+) diff --git a/include/net/dsa.h b/include/net/dsa.h index 1e8650fa8acc..e7ee6ac8ce6b 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -152,6 +152,7 @@ struct dsa_mall_tc_entry { }; }; +struct tc_taprio_qopt_offload; struct dsa_port { /* A CPU port is physically connected to a master device. @@ -516,6 +517,8 @@ struct dsa_switch_ops { bool ingress); void (*port_mirror_del)(struct dsa_switch *ds, int port, struct dsa_mall_mirror_tc_entry *mirror); + int (*port_setup_taprio)(struct dsa_switch *ds, int port, + struct tc_taprio_qopt_offload *qopt); /* * Cross-chip operations diff --git a/net/dsa/slave.c b/net/dsa/slave.c index 99673f6b07f6..2bae33788708 100644 --- a/net/dsa/slave.c +++ b/net/dsa/slave.c @@ -965,12 +965,26 @@ static int dsa_slave_setup_tc_block(struct net_device *dev, } } +static int dsa_slave_setup_tc_taprio(struct net_device *dev, + struct tc_taprio_qopt_offload *f) +{ + struct dsa_port *dp = dsa_slave_to_port(dev); + struct dsa_switch *ds = dp->ds; + + if (!ds->ops->port_setup_taprio) + return -EOPNOTSUPP; + + return ds->ops->port_setup_taprio(ds, dp->index, f); +} + static int dsa_slave_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data) { switch (type) { case TC_SETUP_BLOCK: return dsa_slave_setup_tc_block(dev, type_data); + case TC_SETUP_QDISC_TAPRIO: + return dsa_slave_setup_tc_taprio(dev, type_data); default: return -EOPNOTSUPP; }
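The hook is deliberately thin: dsa_slave_setup_tc() only passes the tc_taprio_qopt_offload structure through to the switch driver. As a rough sketch of what a driver implementation could look like (the foo_* names and helpers are placeholders invented for this illustration, assuming the driver keeps its private state in ds->priv; they are not part of this series):

	#include <net/dsa.h>
	#include <net/pkt_sched.h>

	/* Assumed driver-private helpers, not defined here. */
	int foo_port_install_schedule(void *priv, int port,
				      const struct tc_taprio_qopt_offload *qopt);
	int foo_port_delete_schedule(void *priv, int port);

	static int foo_port_setup_taprio(struct dsa_switch *ds, int port,
					 struct tc_taprio_qopt_offload *qopt)
	{
		/* qopt->enable == 0 means the qdisc is being torn down */
		if (!qopt->enable)
			return foo_port_delete_schedule(ds->priv, port);

		/* base_time, cycle_time and entries[] describe the gate schedule */
		return foo_port_install_schedule(ds->priv, port, qopt);
	}

	static const struct dsa_switch_ops foo_switch_ops = {
		/* ... the usual DSA operations ... */
		.port_setup_taprio	= foo_port_setup_taprio,
	};
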
From patchwork Sun Jul 7 17:29:19 2019
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 1128706
X-Patchwork-Delegate: davem@davemloft.net
Subject: [RFC PATCH net-next 4/6] net: dsa: sja1105: Add static config tables for scheduling
From: Vladimir Oltean
Date: Sun, 7 Jul 2019 20:29:19 +0300
Message-Id: <20190707172921.17731-5-olteanv@gmail.com>
In-Reply-To: <20190707172921.17731-1-olteanv@gmail.com>

In order to support tc-taprio offload, the TTEthernet egress scheduling core registers must be made visible through the static interface.
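The packing helpers added below all follow one pattern: each field of the unpacked C structure maps to a fixed [start:end] bit range of the packed table entry (for example, clksrc occupies bits 31:30 of the schedule-entry-points-parameters entry). As a simplified, self-contained illustration of that idea (toy_packing() is a toy invented here; the driver's real sja1105_packing() handles byte buffers of arbitrary size and is not reproduced):

	#include <assert.h>
	#include <stdint.h>

	enum packing_op { PACK, UNPACK };

	/* Toy stand-in for a packing helper: move an integer field to/from
	 * the bit range [start:end] (start >= end) of a single 32-bit word.
	 */
	static void toy_packing(uint32_t *buf, uint64_t *field, int start,
				int end, enum packing_op op)
	{
		int width = start - end + 1;
		uint32_t mask = (width == 32 ? ~0u : (1u << width) - 1u) << end;

		if (op == PACK)
			*buf = (*buf & ~mask) | (((uint32_t)*field << end) & mask);
		else
			*field = (*buf & mask) >> end;
	}

	int main(void)
	{
		uint32_t buf = 0;
		uint64_t clksrc = 1, actsubsch = 5, readback = 0;

		/* Same layout as the entry-points-parameters entry:
		 * clksrc at [31:30], actsubsch at [29:27].
		 */
		toy_packing(&buf, &clksrc, 31, 30, PACK);
		toy_packing(&buf, &actsubsch, 29, 27, PACK);

		toy_packing(&buf, &readback, 29, 27, UNPACK);
		assert(readback == 5);
		return 0;
	}
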
Signed-off-by: Vladimir Oltean --- .../net/dsa/sja1105/sja1105_dynamic_config.c | 8 + .../net/dsa/sja1105/sja1105_static_config.c | 167 ++++++++++++++++++ .../net/dsa/sja1105/sja1105_static_config.h | 48 ++++- 3 files changed, 222 insertions(+), 1 deletion(-) diff --git a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c index 9988c9d18567..91da430045ff 100644 --- a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c +++ b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c @@ -488,6 +488,8 @@ sja1105et_general_params_entry_packing(void *buf, void *entry_ptr, /* SJA1105E/T: First generation */ struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = { + [BLK_IDX_SCHEDULE] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0}, [BLK_IDX_L2_LOOKUP] = { .entry_packing = sja1105et_dyn_l2_lookup_entry_packing, .cmd_packing = sja1105et_l2_lookup_cmd_packing, @@ -529,6 +531,8 @@ struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = { .packed_size = SJA1105ET_SIZE_MAC_CONFIG_DYN_CMD, .addr = 0x36, }, + [BLK_IDX_SCHEDULE_PARAMS] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0}, [BLK_IDX_L2_LOOKUP_PARAMS] = { .entry_packing = sja1105et_l2_lookup_params_entry_packing, .cmd_packing = sja1105et_l2_lookup_params_cmd_packing, @@ -552,6 +556,8 @@ struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = { /* SJA1105P/Q/R/S: Second generation */ struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = { + [BLK_IDX_SCHEDULE] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0}, [BLK_IDX_L2_LOOKUP] = { .entry_packing = sja1105pqrs_dyn_l2_lookup_entry_packing, .cmd_packing = sja1105pqrs_l2_lookup_cmd_packing, @@ -593,6 +599,8 @@ struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = { .packed_size = SJA1105PQRS_SIZE_MAC_CONFIG_DYN_CMD, .addr = 0x4B, }, + [BLK_IDX_SCHEDULE_PARAMS] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0}, [BLK_IDX_L2_LOOKUP_PARAMS] = { .entry_packing = sja1105et_l2_lookup_params_entry_packing, .cmd_packing = sja1105et_l2_lookup_params_cmd_packing, diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.c b/drivers/net/dsa/sja1105/sja1105_static_config.c index b31c737dc560..0d03e13e9909 100644 --- a/drivers/net/dsa/sja1105/sja1105_static_config.c +++ b/drivers/net/dsa/sja1105/sja1105_static_config.c @@ -371,6 +371,63 @@ size_t sja1105pqrs_mac_config_entry_packing(void *buf, void *entry_ptr, return size; } +static size_t +sja1105_schedule_entry_points_params_entry_packing(void *buf, void *entry_ptr, + enum packing_op op) +{ + struct sja1105_schedule_entry_points_params_entry *entry = entry_ptr; + const size_t size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY; + + sja1105_packing(buf, &entry->clksrc, 31, 30, size, op); + sja1105_packing(buf, &entry->actsubsch, 29, 27, size, op); + return size; +} + +static size_t +sja1105_schedule_entry_points_entry_packing(void *buf, void *entry_ptr, + enum packing_op op) +{ + struct sja1105_schedule_entry_points_entry *entry = entry_ptr; + const size_t size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_ENTRY; + + sja1105_packing(buf, &entry->subschindx, 31, 29, size, op); + sja1105_packing(buf, &entry->delta, 28, 11, size, op); + sja1105_packing(buf, &entry->address, 10, 1, size, op); + return size; +} + +static size_t sja1105_schedule_params_entry_packing(void *buf, void *entry_ptr, + enum packing_op op) +{ + const size_t size = SJA1105_SIZE_SCHEDULE_PARAMS_ENTRY; + struct sja1105_schedule_params_entry *entry = entry_ptr; + int offset, i; + + 
for (i = 0, offset = 16; i < 8; i++, offset += 10) + sja1105_packing(buf, &entry->subscheind[i], + offset + 9, offset + 0, size, op); + return size; +} + +static size_t sja1105_schedule_entry_packing(void *buf, void *entry_ptr, + enum packing_op op) +{ + const size_t size = SJA1105_SIZE_SCHEDULE_ENTRY; + struct sja1105_schedule_entry *entry = entry_ptr; + + sja1105_packing(buf, &entry->winstindex, 63, 54, size, op); + sja1105_packing(buf, &entry->winend, 53, 53, size, op); + sja1105_packing(buf, &entry->winst, 52, 52, size, op); + sja1105_packing(buf, &entry->destports, 51, 47, size, op); + sja1105_packing(buf, &entry->setvalid, 46, 46, size, op); + sja1105_packing(buf, &entry->txen, 45, 45, size, op); + sja1105_packing(buf, &entry->resmedia_en, 44, 44, size, op); + sja1105_packing(buf, &entry->resmedia, 43, 36, size, op); + sja1105_packing(buf, &entry->vlindex, 35, 26, size, op); + sja1105_packing(buf, &entry->delta, 25, 8, size, op); + return size; +} + size_t sja1105_vlan_lookup_entry_packing(void *buf, void *entry_ptr, enum packing_op op) { @@ -447,11 +504,15 @@ static void sja1105_table_write_crc(u8 *table_start, u8 *crc_ptr) * before blindly indexing kernel memory with the blk_idx. */ static u64 blk_id_map[BLK_IDX_MAX] = { + [BLK_IDX_SCHEDULE] = BLKID_SCHEDULE, + [BLK_IDX_SCHEDULE_ENTRY_POINTS] = BLKID_SCHEDULE_ENTRY_POINTS, [BLK_IDX_L2_LOOKUP] = BLKID_L2_LOOKUP, [BLK_IDX_L2_POLICING] = BLKID_L2_POLICING, [BLK_IDX_VLAN_LOOKUP] = BLKID_VLAN_LOOKUP, [BLK_IDX_L2_FORWARDING] = BLKID_L2_FORWARDING, [BLK_IDX_MAC_CONFIG] = BLKID_MAC_CONFIG, + [BLK_IDX_SCHEDULE_PARAMS] = BLKID_SCHEDULE_PARAMS, + [BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = BLKID_SCHEDULE_ENTRY_POINTS_PARAMS, [BLK_IDX_L2_LOOKUP_PARAMS] = BLKID_L2_LOOKUP_PARAMS, [BLK_IDX_L2_FORWARDING_PARAMS] = BLKID_L2_FORWARDING_PARAMS, [BLK_IDX_AVB_PARAMS] = BLKID_AVB_PARAMS, @@ -461,6 +522,13 @@ static u64 blk_id_map[BLK_IDX_MAX] = { const char *sja1105_static_config_error_msg[] = { [SJA1105_CONFIG_OK] = "", + [SJA1105_TTETHERNET_NOT_SUPPORTED] = + "schedule-table present, but TTEthernet is " + "only supported on T and Q/S", + [SJA1105_INCORRECT_TTETHERNET_CONFIGURATION] = + "schedule-table present, but one of " + "schedule-entry-points-table, schedule-parameters-table or " + "schedule-entry-points-parameters table is empty", [SJA1105_MISSING_L2_POLICING_TABLE] = "l2-policing-table needs to have at least one entry", [SJA1105_MISSING_L2_FORWARDING_TABLE] = @@ -508,6 +576,21 @@ sja1105_static_config_check_valid(const struct sja1105_static_config *config) #define IS_FULL(blk_idx) \ (tables[blk_idx].entry_count == tables[blk_idx].ops->max_entry_count) + if (tables[BLK_IDX_SCHEDULE].entry_count) { + if (config->device_id != SJA1105T_DEVICE_ID && + config->device_id != SJA1105QS_DEVICE_ID) + return SJA1105_TTETHERNET_NOT_SUPPORTED; + + if (tables[BLK_IDX_SCHEDULE_ENTRY_POINTS].entry_count == 0) + return SJA1105_INCORRECT_TTETHERNET_CONFIGURATION; + + if (!IS_FULL(BLK_IDX_SCHEDULE_PARAMS)) + return SJA1105_INCORRECT_TTETHERNET_CONFIGURATION; + + if (!IS_FULL(BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS)) + return SJA1105_INCORRECT_TTETHERNET_CONFIGURATION; + } + if (tables[BLK_IDX_L2_POLICING].entry_count == 0) return SJA1105_MISSING_L2_POLICING_TABLE; @@ -614,6 +697,8 @@ sja1105_static_config_get_length(const struct sja1105_static_config *config) /* SJA1105E: First generation, no TTEthernet */ struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = { + [BLK_IDX_SCHEDULE] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0}, [BLK_IDX_L2_LOOKUP] = { .packing = 
sja1105et_l2_lookup_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry), @@ -644,6 +729,8 @@ struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = { .packed_entry_size = SJA1105ET_SIZE_MAC_CONFIG_ENTRY, .max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT, }, + [BLK_IDX_SCHEDULE_PARAMS] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0}, [BLK_IDX_L2_LOOKUP_PARAMS] = { .packing = sja1105et_l2_lookup_params_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry), @@ -678,6 +765,18 @@ struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = { /* SJA1105T: First generation, TTEthernet */ struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = { + [BLK_IDX_SCHEDULE] = { + .packing = sja1105_schedule_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_COUNT, + }, + [BLK_IDX_SCHEDULE_ENTRY_POINTS] = { + .packing = sja1105_schedule_entry_points_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_COUNT, + }, [BLK_IDX_L2_LOOKUP] = { .packing = sja1105et_l2_lookup_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry), @@ -708,6 +807,18 @@ struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = { .packed_entry_size = SJA1105ET_SIZE_MAC_CONFIG_ENTRY, .max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT, }, + [BLK_IDX_SCHEDULE_PARAMS] = { + .packing = sja1105_schedule_params_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_params_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_PARAMS_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_PARAMS_COUNT, + }, + [BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = { + .packing = sja1105_schedule_entry_points_params_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_params_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT, + }, [BLK_IDX_L2_LOOKUP_PARAMS] = { .packing = sja1105et_l2_lookup_params_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry), @@ -742,6 +853,8 @@ struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = { /* SJA1105P: Second generation, no TTEthernet, no SGMII */ struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = { + [BLK_IDX_SCHEDULE] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0}, [BLK_IDX_L2_LOOKUP] = { .packing = sja1105pqrs_l2_lookup_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry), @@ -772,6 +885,8 @@ struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = { .packed_entry_size = SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY, .max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT, }, + [BLK_IDX_SCHEDULE_PARAMS] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0}, [BLK_IDX_L2_LOOKUP_PARAMS] = { .packing = sja1105pqrs_l2_lookup_params_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry), @@ -806,6 +921,18 @@ struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = { /* SJA1105Q: Second generation, TTEthernet, no SGMII */ struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = { + [BLK_IDX_SCHEDULE] = { + .packing = sja1105_schedule_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_entry), + .packed_entry_size = 
SJA1105_SIZE_SCHEDULE_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_COUNT, + }, + [BLK_IDX_SCHEDULE_ENTRY_POINTS] = { + .packing = sja1105_schedule_entry_points_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_COUNT, + }, [BLK_IDX_L2_LOOKUP] = { .packing = sja1105pqrs_l2_lookup_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry), @@ -836,6 +963,18 @@ struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = { .packed_entry_size = SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY, .max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT, }, + [BLK_IDX_SCHEDULE_PARAMS] = { + .packing = sja1105_schedule_params_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_params_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_PARAMS_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_PARAMS_COUNT, + }, + [BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = { + .packing = sja1105_schedule_entry_points_params_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_params_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT, + }, [BLK_IDX_L2_LOOKUP_PARAMS] = { .packing = sja1105pqrs_l2_lookup_params_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry), @@ -870,6 +1009,8 @@ struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = { /* SJA1105R: Second generation, no TTEthernet, SGMII */ struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = { + [BLK_IDX_SCHEDULE] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0}, [BLK_IDX_L2_LOOKUP] = { .packing = sja1105pqrs_l2_lookup_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry), @@ -900,6 +1041,8 @@ struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = { .packed_entry_size = SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY, .max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT, }, + [BLK_IDX_SCHEDULE_PARAMS] = {0}, + [BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0}, [BLK_IDX_L2_LOOKUP_PARAMS] = { .packing = sja1105pqrs_l2_lookup_params_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry), @@ -934,6 +1077,18 @@ struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = { /* SJA1105S: Second generation, TTEthernet, SGMII */ struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX] = { + [BLK_IDX_SCHEDULE] = { + .packing = sja1105_schedule_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_COUNT, + }, + [BLK_IDX_SCHEDULE_ENTRY_POINTS] = { + .packing = sja1105_schedule_entry_points_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_COUNT, + }, [BLK_IDX_L2_LOOKUP] = { .packing = sja1105pqrs_l2_lookup_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry), @@ -964,6 +1119,18 @@ struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX] = { .packed_entry_size = SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY, .max_entry_count = SJA1105_MAX_MAC_CONFIG_COUNT, }, + [BLK_IDX_SCHEDULE_PARAMS] = { + .packing = sja1105_schedule_params_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_params_entry), + 
.packed_entry_size = SJA1105_SIZE_SCHEDULE_PARAMS_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_PARAMS_COUNT, + }, + [BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = { + .packing = sja1105_schedule_entry_points_params_entry_packing, + .unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_params_entry), + .packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY, + .max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT, + }, [BLK_IDX_L2_LOOKUP_PARAMS] = { .packing = sja1105pqrs_l2_lookup_params_entry_packing, .unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry), diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.h b/drivers/net/dsa/sja1105/sja1105_static_config.h index 684465fc0882..7f87022a2d61 100644 --- a/drivers/net/dsa/sja1105/sja1105_static_config.h +++ b/drivers/net/dsa/sja1105/sja1105_static_config.h @@ -11,11 +11,15 @@ #define SJA1105_SIZE_DEVICE_ID 4 #define SJA1105_SIZE_TABLE_HEADER 12 +#define SJA1105_SIZE_SCHEDULE_ENTRY 8 +#define SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_ENTRY 4 #define SJA1105_SIZE_L2_POLICING_ENTRY 8 #define SJA1105_SIZE_VLAN_LOOKUP_ENTRY 8 #define SJA1105_SIZE_L2_FORWARDING_ENTRY 8 #define SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY 12 #define SJA1105_SIZE_XMII_PARAMS_ENTRY 4 +#define SJA1105_SIZE_SCHEDULE_PARAMS_ENTRY 12 +#define SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY 4 #define SJA1105ET_SIZE_L2_LOOKUP_ENTRY 12 #define SJA1105ET_SIZE_MAC_CONFIG_ENTRY 28 #define SJA1105ET_SIZE_L2_LOOKUP_PARAMS_ENTRY 4 @@ -29,11 +33,15 @@ /* UM10944.pdf Page 11, Table 2. Configuration Blocks */ enum { + BLKID_SCHEDULE = 0x00, + BLKID_SCHEDULE_ENTRY_POINTS = 0x01, BLKID_L2_LOOKUP = 0x05, BLKID_L2_POLICING = 0x06, BLKID_VLAN_LOOKUP = 0x07, BLKID_L2_FORWARDING = 0x08, BLKID_MAC_CONFIG = 0x09, + BLKID_SCHEDULE_PARAMS = 0x0A, + BLKID_SCHEDULE_ENTRY_POINTS_PARAMS = 0x0B, BLKID_L2_LOOKUP_PARAMS = 0x0D, BLKID_L2_FORWARDING_PARAMS = 0x0E, BLKID_AVB_PARAMS = 0x10, @@ -42,11 +50,15 @@ enum { }; enum sja1105_blk_idx { - BLK_IDX_L2_LOOKUP = 0, + BLK_IDX_SCHEDULE = 0, + BLK_IDX_SCHEDULE_ENTRY_POINTS, + BLK_IDX_L2_LOOKUP, BLK_IDX_L2_POLICING, BLK_IDX_VLAN_LOOKUP, BLK_IDX_L2_FORWARDING, BLK_IDX_MAC_CONFIG, + BLK_IDX_SCHEDULE_PARAMS, + BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS, BLK_IDX_L2_LOOKUP_PARAMS, BLK_IDX_L2_FORWARDING_PARAMS, BLK_IDX_AVB_PARAMS, @@ -59,11 +71,15 @@ enum sja1105_blk_idx { BLK_IDX_INVAL = -1, }; +#define SJA1105_MAX_SCHEDULE_COUNT 1024 +#define SJA1105_MAX_SCHEDULE_ENTRY_POINTS_COUNT 2048 #define SJA1105_MAX_L2_LOOKUP_COUNT 1024 #define SJA1105_MAX_L2_POLICING_COUNT 45 #define SJA1105_MAX_VLAN_LOOKUP_COUNT 4096 #define SJA1105_MAX_L2_FORWARDING_COUNT 13 #define SJA1105_MAX_MAC_CONFIG_COUNT 5 +#define SJA1105_MAX_SCHEDULE_PARAMS_COUNT 1 +#define SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT 1 #define SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT 1 #define SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT 1 #define SJA1105_MAX_GENERAL_PARAMS_COUNT 1 @@ -83,6 +99,23 @@ enum sja1105_blk_idx { #define SJA1105R_PART_NO 0x9A86 #define SJA1105S_PART_NO 0x9A87 +struct sja1105_schedule_entry { + u64 winstindex; + u64 winend; + u64 winst; + u64 destports; + u64 setvalid; + u64 txen; + u64 resmedia_en; + u64 resmedia; + u64 vlindex; + u64 delta; +}; + +struct sja1105_schedule_params_entry { + u64 subscheind[8]; +}; + struct sja1105_general_params_entry { u64 vllupformat; u64 mirr_ptacu; @@ -112,6 +145,17 @@ struct sja1105_general_params_entry { u64 replay_port; }; +struct sja1105_schedule_entry_points_entry { + u64 subschindx; + u64 delta; + u64 address; 
+}; + +struct sja1105_schedule_entry_points_params_entry { + u64 clksrc; + u64 actsubsch; +}; + struct sja1105_vlan_lookup_entry { u64 ving_mirr; u64 vegr_mirr; @@ -256,6 +300,8 @@ sja1105_static_config_get_length(const struct sja1105_static_config *config); typedef enum { SJA1105_CONFIG_OK = 0, + SJA1105_TTETHERNET_NOT_SUPPORTED, + SJA1105_INCORRECT_TTETHERNET_CONFIGURATION, SJA1105_MISSING_L2_POLICING_TABLE, SJA1105_MISSING_L2_FORWARDING_TABLE, SJA1105_MISSING_L2_FORWARDING_PARAMS_TABLE, From patchwork Sun Jul 7 17:29:20 2019 X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1128705 X-Patchwork-Delegate: davem@davemloft.net
From: Vladimir Oltean To: f.fainelli@gmail.com, vivien.didelot@gmail.com, andrew@lunn.ch, davem@davemloft.net, vinicius.gomes@intel.com, vedang.patel@intel.com, richardcochran@gmail.com Cc: weifeng.voon@intel.com, jiri@mellanox.com, m-karicheri2@ti.com, Jose.Abreu@synopsys.com, ilias.apalodimas@linaro.org, netdev@vger.kernel.org, Vladimir Oltean Subject: [RFC PATCH net-next 5/6] net: dsa: sja1105: Advertise the 8 TX queues Date: Sun, 7 Jul 2019 20:29:20 +0300 Message-Id: <20190707172921.17731-6-olteanv@gmail.com> In-Reply-To: <20190707172921.17731-1-olteanv@gmail.com> References: <20190707172921.17731-1-olteanv@gmail.com> This is a preparation patch for the tc-taprio offload (and potentially for other future offloads such as tc-mqprio). Instead of looking directly at skb->priority during xmit, let's get the netdev queue and the queue-to-traffic-class mapping, and put the resulting traffic class into the dsa_8021q PCP field. The switch is configured with a 1-to-1 PCP-to-ingress-queue-to-egress-queue mapping (see vlan_pmap in sja1105_main.c), so the effect is that we can inject into a front-panel port's egress traffic class through VLAN tagging from Linux, completely transparently. Unfortunately the switch doesn't look at the VLAN PCP in the case of management traffic to/from the CPU (link-local frames at 01-80-C2-xx-xx-xx or 01-1B-19-xx-xx-xx), so we can't alter the transmission queue of this type of traffic on a frame-by-frame basis. That queue can only be selected through the "hostprio" setting, which at the moment is hardcoded in the driver to 7. Signed-off-by: Vladimir Oltean --- drivers/net/dsa/sja1105/sja1105_main.c | 7 ++++++- net/dsa/tag_sja1105.c | 3 ++- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c index bc7e4e030b07..30761e6545a3 100644 --- a/drivers/net/dsa/sja1105/sja1105_main.c +++ b/drivers/net/dsa/sja1105/sja1105_main.c @@ -384,7 +384,9 @@ static int sja1105_init_general_params(struct sja1105_private *priv) /* Disallow dynamic changing of the mirror port */ .mirr_ptacu = 0, .switchid = priv->ds->index, - /* Priority queue for link-local frames trapped to CPU */ + /* Priority queue for link-local management frames + * (both ingress to and egress from CPU - PTP, STP etc) + */ .hostprio = 7, .mac_fltres1 = SJA1105_LINKLOCAL_FILTER_A, .mac_flt1 = SJA1105_LINKLOCAL_FILTER_A_MASK, @@ -1704,6 +1706,9 @@ static int sja1105_setup(struct dsa_switch *ds) */ ds->vlan_filtering_is_global = true; + /* Advertise the 8 egress queues */ + ds->num_tx_queues = SJA1105_NUM_TC; + /* The DSA/switchdev model brings up switch ports in standalone mode by * default, and that means vlan_filtering is 0 since they're not under * a bridge, so it's safe to set up switch tagging at this time. diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c index 1d96c9d4a8e9..9ae84990f730 100644 --- a/net/dsa/tag_sja1105.c +++ b/net/dsa/tag_sja1105.c @@ -89,7 +89,8 @@ static struct sk_buff *sja1105_xmit(struct sk_buff *skb, struct dsa_port *dp = dsa_slave_to_port(netdev); struct dsa_switch *ds = dp->ds; u16 tx_vid = dsa_8021q_tx_vid(ds, dp->index); - u8 pcp = skb->priority; + u16 queue_mapping = skb_get_queue_mapping(skb); + u8 pcp = netdev_txq_to_tc(netdev, queue_mapping); /* Transmitting management traffic does not rely upon switch tagging, * but instead SPI-installed management routes.
Part 2 of this From patchwork Sun Jul 7 17:29:21 2019 X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1128707 X-Patchwork-Delegate: davem@davemloft.net From: Vladimir Oltean To: f.fainelli@gmail.com, vivien.didelot@gmail.com, andrew@lunn.ch, davem@davemloft.net, vinicius.gomes@intel.com, vedang.patel@intel.com, richardcochran@gmail.com Cc: weifeng.voon@intel.com, jiri@mellanox.com, m-karicheri2@ti.com, Jose.Abreu@synopsys.com, ilias.apalodimas@linaro.org, netdev@vger.kernel.org, Vladimir Oltean Subject: [RFC PATCH net-next 6/6] net: dsa: sja1105: Configure the Time-Aware Shaper via tc-taprio offload Date: Sun, 7 Jul 2019
20:29:21 +0300 Message-Id: <20190707172921.17731-7-olteanv@gmail.com> In-Reply-To: <20190707172921.17731-1-olteanv@gmail.com> References: <20190707172921.17731-1-olteanv@gmail.com> This qdisc offload is the closest thing to what the SJA1105 supports in hardware for time-based egress shaping. The switch core really is built around SAE AS6802/TTEthernet (a TTTech standard) but can be made to operate similarly to IEEE 802.1Qbv with some constraints: - The gate control list is a global list for all ports. There are 8 execution threads that iterate through this global list in parallel. I don't know why 8, there are only 4 front-panel ports. - Care must be taken by the user to make sure that two execution threads never get to execute a GCL entry simultaneously. I created an O(n^5) checker for this hardware limitation, which runs prior to accepting a taprio offload configuration as valid (a standalone sketch of the core observation behind it follows the Kconfig hunk below). - The spec says that if a GCL entry's interval is shorter than the frame length, you shouldn't send the frame (lest you end up in head-of-line blocking). Well, this switch sends it anyway. - The switch has no concept of ADMIN and OPER configurations. Because it's so simple, the TAS settings are loaded through the static config tables interface, so there isn't even room for any discussion about 'graceful switchover between ADMIN and OPER'. You just reset the switch and upload a new OPER config. - The switch accepts multiple time sources for the gate events. Right now I am using the standalone clock source as opposed to PTP. So the base time parameter doesn't really do much. This is because the PTP part of the driver uses a timecounter/cyclecounter and its PTP clock is free-running (so it's not a valid time source for 802.1Qbv anyway). Signed-off-by: Vladimir Oltean --- drivers/net/dsa/sja1105/Kconfig | 8 + drivers/net/dsa/sja1105/Makefile | 4 + drivers/net/dsa/sja1105/sja1105.h | 6 + drivers/net/dsa/sja1105/sja1105_main.c | 12 +- drivers/net/dsa/sja1105/sja1105_tas.c | 452 +++++++++++++++++++++++++ drivers/net/dsa/sja1105/sja1105_tas.h | 22 ++ 6 files changed, 503 insertions(+), 1 deletion(-) create mode 100644 drivers/net/dsa/sja1105/sja1105_tas.c create mode 100644 drivers/net/dsa/sja1105/sja1105_tas.h diff --git a/drivers/net/dsa/sja1105/Kconfig b/drivers/net/dsa/sja1105/Kconfig index 770134a66e48..55424f39cb0d 100644 --- a/drivers/net/dsa/sja1105/Kconfig +++ b/drivers/net/dsa/sja1105/Kconfig @@ -23,3 +23,11 @@ config NET_DSA_SJA1105_PTP help This enables support for timestamping and PTP clock manipulations in the SJA1105 DSA driver. + +config NET_DSA_SJA1105_TAS + bool "Support for the Time-Aware Scheduler on NXP SJA1105" + depends on NET_DSA_SJA1105 + help + This enables support for the TTEthernet-based egress scheduling + engine in the SJA1105 DSA driver, which is controlled using a + hardware offload of the tc-taprio qdisc.
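As an aside: the core precondition behind the O(n^5) conflict checker introduced in sja1105_tas.c below is that two periodic port schedules can only stay collision-free forever when one cycle time is an integer multiple of the other. A minimal standalone C sketch of just that precondition (illustrative only, not part of the patch; cycle_times_compatible is a made-up name):

#include <stdbool.h>
#include <stdint.h>

/* Two periodic schedules with cycle times a and b (in ns) can only
 * avoid colliding indefinitely if the longer cycle is an integer
 * multiple of the shorter one. Otherwise their relative phase drifts
 * with every repetition and the gate events eventually overlap.
 */
static bool cycle_times_compatible(uint64_t a, uint64_t b)
{
	uint64_t max = (a > b) ? a : b;
	uint64_t min = (a < b) ? a : b;

	return (max % min) == 0;
}

Only when this precondition holds is it enough to enumerate event occurrences within the longest cycle time, which is what sja1105_tas_check_conflicts() in the new sja1105_tas.c does.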
diff --git a/drivers/net/dsa/sja1105/Makefile b/drivers/net/dsa/sja1105/Makefile index 4483113e6259..66161e874344 100644 --- a/drivers/net/dsa/sja1105/Makefile +++ b/drivers/net/dsa/sja1105/Makefile @@ -12,3 +12,7 @@ sja1105-objs := \ ifdef CONFIG_NET_DSA_SJA1105_PTP sja1105-objs += sja1105_ptp.o endif + +ifdef CONFIG_NET_DSA_SJA1105_TAS +sja1105-objs += sja1105_tas.o +endif diff --git a/drivers/net/dsa/sja1105/sja1105.h b/drivers/net/dsa/sja1105/sja1105.h index 12cfcae0dc11..b326345f3f5c 100644 --- a/drivers/net/dsa/sja1105/sja1105.h +++ b/drivers/net/dsa/sja1105/sja1105.h @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include "sja1105_static_config.h" @@ -111,6 +112,8 @@ struct sja1105_private { */ struct mutex mgmt_lock; struct sja1105_tagger_data tagger_data; + struct tc_taprio_qopt_offload *tas_config[SJA1105_NUM_PORTS]; + struct work_struct tas_config_work; }; #include "sja1105_dynamic_config.h" @@ -127,6 +130,9 @@ typedef enum { SPI_WRITE = 1, } sja1105_spi_rw_mode_t; +/* From sja1105_main.c */ +int sja1105_static_config_reload(struct sja1105_private *priv); + /* From sja1105_spi.c */ int sja1105_spi_send_packed_buf(const struct sja1105_private *priv, sja1105_spi_rw_mode_t rw, u64 reg_addr, diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c index 30761e6545a3..0efc4bfa42d9 100644 --- a/drivers/net/dsa/sja1105/sja1105_main.c +++ b/drivers/net/dsa/sja1105/sja1105_main.c @@ -22,6 +22,7 @@ #include #include #include "sja1105.h" +#include "sja1105_tas.h" static void sja1105_hw_reset(struct gpio_desc *gpio, unsigned int pulse_len, unsigned int startup_delay) @@ -1375,7 +1376,7 @@ static void sja1105_bridge_leave(struct dsa_switch *ds, int port, * modify at runtime (currently only MAC) and restore them after uploading, * such that this operation is relatively seamless. 
*/ -static int sja1105_static_config_reload(struct sja1105_private *priv) +int sja1105_static_config_reload(struct sja1105_private *priv) { struct sja1105_mac_config_entry *mac; int speed_mbps[SJA1105_NUM_PORTS]; @@ -1719,9 +1720,16 @@ static int sja1105_setup(struct dsa_switch *ds) static void sja1105_teardown(struct dsa_switch *ds) { struct sja1105_private *priv = ds->priv; + int port; cancel_work_sync(&priv->tagger_data.rxtstamp_work); skb_queue_purge(&priv->tagger_data.skb_rxtstamp_queue); + + cancel_work_sync(&priv->tas_config_work); + + for (port = 0; port < SJA1105_NUM_PORTS; port++) + if (priv->tas_config[port]) + kfree(priv->tas_config[port]); } static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot, @@ -2075,6 +2083,7 @@ static const struct dsa_switch_ops sja1105_switch_ops = { .port_hwtstamp_set = sja1105_hwtstamp_set, .port_rxtstamp = sja1105_port_rxtstamp, .port_txtstamp = sja1105_port_txtstamp, + .port_setup_taprio = sja1105_setup_taprio, }; static int sja1105_check_device_id(struct sja1105_private *priv) @@ -2173,6 +2182,7 @@ static int sja1105_probe(struct spi_device *spi) tagger_data = &priv->tagger_data; skb_queue_head_init(&tagger_data->skb_rxtstamp_queue); INIT_WORK(&tagger_data->rxtstamp_work, sja1105_rxtstamp_work); + INIT_WORK(&priv->tas_config_work, sja1105_tas_config_work); /* Connections between dsa_port and sja1105_port */ for (i = 0; i < SJA1105_NUM_PORTS; i++) { diff --git a/drivers/net/dsa/sja1105/sja1105_tas.c b/drivers/net/dsa/sja1105/sja1105_tas.c new file mode 100644 index 000000000000..7fe7c5cbfbff --- /dev/null +++ b/drivers/net/dsa/sja1105/sja1105_tas.c @@ -0,0 +1,452 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2019, Vladimir Oltean + */ +#include "sja1105.h" + +#define SJA1105_TAS_CLKSRC_DISABLED 0 +#define SJA1105_TAS_CLKSRC_STANDALONE 1 +#define SJA1105_TAS_CLKSRC_AS6802 2 +#define SJA1105_TAS_CLKSRC_PTP 3 +#define SJA1105_GATE_MASK GENMASK_ULL(SJA1105_NUM_TC - 1, 0) +#define SJA1105_TAS_MAX_DELTA BIT(19) + +/* This is not a preprocessor macro because the "ns" argument may or may not be + * u64 at caller side. This ensures it is properly type-cast before div_u64. + */ +static u64 sja1105_tas_cycles(u64 ns) +{ + return div_u64(ns, 200); +} + +/* Lo and behold: the egress scheduler from hell. + * + * At the hardware level, the Time-Aware Shaper holds a global linear array of + * all schedule entries for all ports. These are the Gate Control List (GCL) + * entries, let's call them "timeslots" for short. This linear array of + * timeslots is held in BLK_IDX_SCHEDULE. + * + * Then there are a maximum of 8 "execution threads" inside the switch, which + * iterate cyclically through the "schedule". Each "cycle" has an entry point + * and an exit point, both being timeslot indices in the schedule table. The + * hardware calls each cycle a "subschedule". + * + * Subschedule (cycle) i starts when PTPCLKVAL >= BLK_IDX_SCHEDULE_ENTRY_POINTS[i].delta. + * The hardware scheduler iterates BLK_IDX_SCHEDULE with k ranging from + * k = BLK_IDX_SCHEDULE_ENTRY_POINTS[i].address to + * k = BLK_IDX_SCHEDULE_PARAMS.subscheind[i] + * For each schedule entry (timeslot) k, the engine executes the gate control + * list entry for the duration of BLK_IDX_SCHEDULE[k].delta.
+ * + * +---------+ + * | | BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS + * +---------+ + * | + * | .actsubsch + * +-----------------+ + * | + * | + * | + * BLK_IDX_SCHEDULE_ENTRY_POINTS v + * +---------+---------+ + * | cycle 0 | cycle 1 | + * +---------+---------+ + * | | | | + * +----------------+ | | +-----------------------------------------------+ + * | .subschindx | | .subschindx | + * | | +-------------------+ | + * | .address | .address | | + * | | | | + * | | | | + * | BLK_IDX_SCHEDULE v v | + * | +---------+---------+---------+---------+---------+---------+ | + * | | entry 0 | entry 1 | entry 2 | entry 3 | entry 4 | entry 5 | | + * | +---------+---------+---------+---------+---------+---------+ | + * | ^ ^ ^ ^ | + * | | | | | | + * | +-----------------------------+ | | | | + * | | | | | | + * | | +--------------------------------------+ | | | + * | | | | | | + * | | | +-----------------------+ | | + * | | | | | | + * | | | | BLK_IDX_SCHEDULE_PARAMS | | + * | +----------------------------------------------------------------------------+ | + * | | .subscheind[0] <= .subscheind[1] <= .subscheind[2] <= ... <= subscheind[7] | | + * | +----------------------------------------------------------------------------+ | + * | ^ ^ | + * | | | | + * +---------+ +---------------------------------------------------+ + * + * In the above picture there are two subschedules (cycles): + * + * - cycle 0: iterates the schedule table from 0 to 2 (and back) + * - cycle 1: iterates the schedule table from 3 to 5 (and back) + * + * All other possible execution threads must be marked as unused by making + * their "subschedule end index" (subscheind) equal to the last valid + * subschedule's end index (in this case 5). + */ +static int sja1105_init_scheduling(struct sja1105_private *priv) +{ + struct sja1105_schedule_entry_points_entry *schedule_entry_points; + struct sja1105_schedule_entry_points_params_entry + *schedule_entry_points_params; + struct sja1105_schedule_params_entry *schedule_params; + struct sja1105_schedule_entry *schedule; + struct sja1105_table *table; + int subscheind[8] = {0}; + int schedule_start_idx; + u64 entry_point_delta; + int schedule_end_idx; + int num_entries = 0; + int num_cycles = 0; + int cycle = 0; + int i, k = 0; + int port; + + /* Discard previous Schedule Table */ + table = &priv->static_config.tables[BLK_IDX_SCHEDULE]; + if (table->entry_count) { + kfree(table->entries); + table->entry_count = 0; + } + + /* Discard previous Schedule Entry Points Parameters Table */ + table = &priv->static_config.tables[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS]; + if (table->entry_count) { + kfree(table->entries); + table->entry_count = 0; + } + + /* Discard previous Schedule Parameters Table */ + table = &priv->static_config.tables[BLK_IDX_SCHEDULE_PARAMS]; + if (table->entry_count) { + kfree(table->entries); + table->entry_count = 0; + } + + /* Discard previous Schedule Entry Points Table */ + table = &priv->static_config.tables[BLK_IDX_SCHEDULE_ENTRY_POINTS]; + if (table->entry_count) { + kfree(table->entries); + table->entry_count = 0; + } + + /* Figure out the dimensioning of the problem */ + for (port = 0; port < SJA1105_NUM_PORTS; port++) { + if (priv->tas_config[port]) { + num_entries += priv->tas_config[port]->num_entries; + num_cycles++; + } + } + + /* Nothing to do */ + if (!num_cycles) + return 0; + + /* Pre-allocate space in the static config tables */ + + /* Schedule Table */ + table = &priv->static_config.tables[BLK_IDX_SCHEDULE]; + table->entries = kcalloc(num_entries, 
table->ops->unpacked_entry_size, + GFP_ATOMIC); + if (!table->entries) + return -ENOMEM; + table->entry_count = num_entries; + schedule = table->entries; + + /* Schedule Entry Points Parameters Table */ + table = &priv->static_config.tables[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS]; + table->entries = kcalloc(SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT, + table->ops->unpacked_entry_size, GFP_ATOMIC); + if (!table->entries) + return -ENOMEM; + table->entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT; + schedule_entry_points_params = table->entries; + + /* Schedule Parameters Table */ + table = &priv->static_config.tables[BLK_IDX_SCHEDULE_PARAMS]; + table->entries = kcalloc(SJA1105_MAX_SCHEDULE_PARAMS_COUNT, + table->ops->unpacked_entry_size, GFP_ATOMIC); + if (!table->entries) + return -ENOMEM; + table->entry_count = SJA1105_MAX_SCHEDULE_PARAMS_COUNT; + schedule_params = table->entries; + + /* Schedule Entry Points Table */ + table = &priv->static_config.tables[BLK_IDX_SCHEDULE_ENTRY_POINTS]; + table->entries = kcalloc(num_cycles, table->ops->unpacked_entry_size, + GFP_ATOMIC); + if (!table->entries) + return -ENOMEM; + table->entry_count = num_cycles; + schedule_entry_points = table->entries; + + /* Finally start populating the static config tables */ + schedule_entry_points_params->clksrc = SJA1105_TAS_CLKSRC_STANDALONE; + schedule_entry_points_params->actsubsch = num_cycles - 1; + + for (port = 0; port < SJA1105_NUM_PORTS; port++) { + const struct tc_taprio_qopt_offload *tas_config; + + tas_config = priv->tas_config[port]; + if (!tas_config) + continue; + + schedule_start_idx = k; + schedule_end_idx = k + tas_config->num_entries - 1; + /* TODO this is only a relative base time for the subschedule + * (relative to PTPSCHTM). But as we're using standalone and + * not PTP clock as time reference, leave it like this for now. + * Later we'll have to enforce that all ports' base times are + * within SJA1105_TAS_MAX_DELTA 200ns cycles of one another. + */ + entry_point_delta = sja1105_tas_cycles(tas_config->base_time); + + schedule_entry_points[cycle].subschindx = cycle; + schedule_entry_points[cycle].delta = entry_point_delta; + schedule_entry_points[cycle].address = schedule_start_idx; + + for (i = cycle; i < 8; i++) + subscheind[i] = schedule_end_idx; + + for (i = 0; i < tas_config->num_entries; i++, k++) { + u64 delta_ns = tas_config->entries[i].interval; + + schedule[k].delta = sja1105_tas_cycles(delta_ns); + schedule[k].destports = BIT(port); + schedule[k].resmedia_en = true; + schedule[k].resmedia = SJA1105_GATE_MASK & + ~tas_config->entries[i].gate_mask; + } + cycle++; + } + + for (i = 0; i < 8; i++) + schedule_params->subscheind[i] = subscheind[i]; + + return 0; +} + +static struct tc_taprio_qopt_offload +*tc_taprio_qopt_offload_copy(const struct tc_taprio_qopt_offload *from) +{ + struct tc_taprio_qopt_offload *to; + size_t size; + + size = sizeof(*from) + + from->num_entries * sizeof(struct tc_taprio_sched_entry); + + to = kzalloc(size, GFP_ATOMIC); + if (!to) + return ERR_PTR(-ENOMEM); + + memcpy(to, from, size); + + return to; +} + +/* Suppose there are 2 port subschedules, each executing an arbitrary number of + * gate open/close events cyclically. + * No two of those gate events must ever occur at the exact same time, otherwise + * the switch is known to act in exotically strange ways. + * However the hardware doesn't bother performing these integrity checks - the + * designers probably said "nah, let's leave that to the experts" - oh well, + * now we're the experts.
+ * So here we are with the task of validating whether the new @qopt has any + * conflict with the already established TAS configuration in priv->tas_config. + * We already know the other ports are in harmony with one another, otherwise + * we wouldn't have saved them. + * Each gate event executes periodically, with a period of @cycle_time and a + * phase given by its cycle's @base_time plus its offset within the cycle + * (which in turn is given by the length of the events prior to it). + * There are two aspects to possible collisions: + * - Collisions within one cycle's (actually the longest cycle's) time frame. + * For that, we need to compare the Cartesian product of each possible + * occurrence of each event within one cycle time. + * - Collisions in the future. Events may not collide within one cycle time, + * but if two port schedules don't have the same periodicity (aka the cycle + * times aren't multiples of one another), they surely will some time in the + * future (actually they will collide an infinite number of times). + */ +static bool +sja1105_tas_check_conflicts(struct sja1105_private *priv, + const struct tc_taprio_qopt_offload *qopt) +{ + int port; + + /* No conflicts if we just want to disable this port's TAS config */ + if (!qopt->enable) + return false; + + for (port = 0; port < SJA1105_NUM_PORTS; port++) { + const struct tc_taprio_qopt_offload *tas_config; + u64 max_cycle_time, min_cycle_time; + u64 delta1, delta2; + u64 rbt1, rbt2; + u64 stop_time; + u64 t1, t2; + int i, j; + u32 rem; + + tas_config = priv->tas_config[port]; + + if (!tas_config) + continue; + + /* Check if the two cycle times are multiples of one another. + * If they aren't, then they will surely collide. + */ + max_cycle_time = max(tas_config->cycle_time, qopt->cycle_time); + min_cycle_time = min(tas_config->cycle_time, qopt->cycle_time); + div_u64_rem(max_cycle_time, min_cycle_time, &rem); + if (rem) + return true; + + /* Calculate the "reduced" base time of each of the two cycles + * (transposed back as close to 0 as possible) by taking the + * remainder of the division by the cycle time. + */ + div_u64_rem(tas_config->base_time, tas_config->cycle_time, + &rem); + rbt1 = rem; + + div_u64_rem(qopt->base_time, qopt->cycle_time, &rem); + rbt2 = rem; + + stop_time = max_cycle_time + max(rbt1, rbt2); + + /* delta1 is the relative base time of each GCL entry within + * the established port's TAS config. + */ + for (i = 0, delta1 = 0; + i < tas_config->num_entries; + delta1 += tas_config->entries[i].interval, i++) { + + /* delta2 is the relative base time of each GCL entry + * within the newly added TAS config. + */ + for (j = 0, delta2 = 0; + j < qopt->num_entries; + delta2 += qopt->entries[j].interval, j++) { + + /* t1 follows all possible occurrences of the + * established port's GCL entry i within the + * first cycle time. + */ + for (t1 = rbt1 + delta1; + t1 <= stop_time; + t1 += tas_config->cycle_time) { + + /* t2 follows all possible occurrences + * of the newly added GCL entry j + * within the first cycle time.
+ */ + for (t2 = rbt2 + delta2; + t2 <= stop_time; + t2 += qopt->cycle_time) { + + if (t1 == t2) { + dev_warn(priv->ds->dev, + "GCL entry %d collides with entry %d of port %d\n", + j, i, port); + return true; + } + } + } + } + } + } + + return false; +} + +#define to_sja1105(d) \ + container_of((d), struct sja1105_private, tas_config_work) + +void sja1105_tas_config_work(struct work_struct *work) +{ + struct sja1105_private *priv = to_sja1105(work); + struct dsa_switch *ds = priv->ds; + int rc; + + rc = sja1105_static_config_reload(priv); + if (rc) + dev_err(ds->dev, "Failed to change scheduling settings\n"); +} + +int sja1105_setup_taprio(struct dsa_switch *ds, int port, + const struct tc_taprio_qopt_offload *qopt) +{ + struct tc_taprio_qopt_offload *tas_config; + struct sja1105_private *priv = ds->priv; + int rc; + int i; + + /* Can't change an already configured port (must delete qdisc first). + * Can't delete the qdisc from an unconfigured port. + */ + if (!!priv->tas_config[port] == qopt->enable) + return -EINVAL; + + if (!qopt->enable) { + kfree(priv->tas_config[port]); + priv->tas_config[port] = NULL; + rc = sja1105_init_scheduling(priv); + if (rc < 0) + return rc; + + schedule_work(&priv->tas_config_work); + return 0; + } + + /* What is this? */ + if (qopt->cycle_time_extension) + return -ENOTSUPP; + + if (!sja1105_tas_cycles(qopt->base_time)) { + dev_err(ds->dev, "A base time of zero is not allowed by the hardware\n"); + return -ERANGE; + } + + tas_config = tc_taprio_qopt_offload_copy(qopt); + if (IS_ERR_OR_NULL(tas_config)) + return PTR_ERR(tas_config); + + if (!tas_config->cycle_time) { + for (i = 0; i < tas_config->num_entries; i++) { + u64 delta_ns = tas_config->entries[i].interval; + u64 delta_cycles = sja1105_tas_cycles(delta_ns); + bool too_long, too_short; + + /* The cycle_time may not be provided. In that case it + * will be the sum of all the entries' time intervals + * in the schedule. + */ + tas_config->cycle_time += delta_ns; + + too_long = (delta_cycles >= SJA1105_TAS_MAX_DELTA); + too_short = (delta_cycles == 0); + if (too_long || too_short) { + dev_err(priv->ds->dev, + "Interval %llu too %s for GCL entry %d\n", + delta_ns, too_long ? "long" : "short", i); + kfree(tas_config); + return -ERANGE; + } + } + } + + if (sja1105_tas_check_conflicts(priv, tas_config)) { + kfree(tas_config); + return -ERANGE; + } + + priv->tas_config[port] = tas_config; + + rc = sja1105_init_scheduling(priv); + if (rc < 0) + return rc; + + schedule_work(&priv->tas_config_work); + return 0; +} diff --git a/drivers/net/dsa/sja1105/sja1105_tas.h b/drivers/net/dsa/sja1105/sja1105_tas.h new file mode 100644 index 000000000000..af535b4f5f29 --- /dev/null +++ b/drivers/net/dsa/sja1105/sja1105_tas.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0 + * Copyright (c) 2019, Vladimir Oltean + */ +#ifndef _SJA1105_TAS_H +#define _SJA1105_TAS_H + +#if IS_ENABLED(CONFIG_NET_DSA_SJA1105_TAS) + +void sja1105_tas_config_work(struct work_struct *work); + +int sja1105_setup_taprio(struct dsa_switch *ds, int port, + const struct tc_taprio_qopt_offload *qopt); + +#else + +#define sja1105_tas_config_work NULL + +#define sja1105_setup_taprio NULL + +#endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_TAS) */ + +#endif /* _SJA1105_TAS_H */
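As a closing aside: the collision check above can be exercised entirely in userspace. Below is a small, self-contained harness, an illustrative sketch rather than driver code; struct toy_sched and toy_conflict() are made-up stand-ins for struct tc_taprio_qopt_offload and sja1105_tas_check_conflicts(). It replays the same reduced-base-time enumeration on two toy schedules:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_sched {
	uint64_t base_time;	/* ns */
	uint64_t cycle_time;	/* ns */
	uint64_t interval[8];	/* GCL entry durations, ns */
	int num_entries;
};

static bool toy_conflict(const struct toy_sched *a, const struct toy_sched *b)
{
	uint64_t max_cycle = a->cycle_time > b->cycle_time ?
			     a->cycle_time : b->cycle_time;
	uint64_t min_cycle = a->cycle_time < b->cycle_time ?
			     a->cycle_time : b->cycle_time;
	/* Reduced base times: each schedule's phase, transposed back
	 * as close to 0 as possible.
	 */
	uint64_t rbt1 = a->base_time % a->cycle_time;
	uint64_t rbt2 = b->base_time % b->cycle_time;
	uint64_t stop = max_cycle + (rbt1 > rbt2 ? rbt1 : rbt2);
	uint64_t d1, d2, t1, t2;
	int i, j;

	/* Non-multiple cycle times drift and always collide eventually */
	if (max_cycle % min_cycle)
		return true;

	/* Enumerate all occurrences of every pair of gate events within
	 * the first (longest) cycle time and look for an exact overlap.
	 */
	for (i = 0, d1 = 0; i < a->num_entries; d1 += a->interval[i], i++)
		for (j = 0, d2 = 0; j < b->num_entries; d2 += b->interval[j], j++)
			for (t1 = rbt1 + d1; t1 <= stop; t1 += a->cycle_time)
				for (t2 = rbt2 + d2; t2 <= stop; t2 += b->cycle_time)
					if (t1 == t2)
						return true;
	return false;
}

int main(void)
{
	struct toy_sched a = { .base_time = 0, .cycle_time = 200000,
			       .interval = { 100000, 100000 }, .num_entries = 2 };
	struct toy_sched b = { .base_time = 100, .cycle_time = 400000,
			       .interval = { 400000 }, .num_entries = 1 };

	/* Prints "no": b's events sit 100 ns after a's and never overlap */
	printf("conflict: %s\n", toy_conflict(&a, &b) ? "yes" : "no");
	return 0;
}

Shifting b's base_time to 0 makes its single gate event land exactly on a's first event, and the harness reports a conflict, mirroring the dev_warn() path in the driver.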