From patchwork Thu May 24 02:22:54 2018
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 919525
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, John Hurley
Subject: [PATCH net-next 7/8] nfp: flower: implement host cmsg handler for LAG
Date: Wed, 23 May 2018 19:22:54 -0700
Message-Id: <20180524022255.18548-8-jakub.kicinski@netronome.com>
In-Reply-To: <20180524022255.18548-1-jakub.kicinski@netronome.com>
References: <20180524022255.18548-1-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

From: John Hurley

Adds the control message handler to synchronize offloaded group config with that of the kernel.
Such messages are sent from fw to driver and feature the following 3 flags:

- Data: an attached cmsg could not be processed - store for retransmission
- Xon: FW can accept new messages - retransmit any stored cmsgs
- Sync: full sync requested so retransmit all kernel LAG group info

Signed-off-by: John Hurley
Reviewed-by: Pieter Jansen van Vuuren
Reviewed-by: Jakub Kicinski
---
 .../net/ethernet/netronome/nfp/flower/cmsg.c  |  8 +-
 .../ethernet/netronome/nfp/flower/lag_conf.c  | 95 +++++++++++++++++++
 .../net/ethernet/netronome/nfp/flower/main.h  |  4 +
 3 files changed, 105 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
index 03aae2ed9983..cb8565222621 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
@@ -242,6 +242,7 @@ nfp_flower_cmsg_process_one_rx(struct nfp_app *app, struct sk_buff *skb)
 	struct nfp_flower_priv *app_priv = app->priv;
 	struct nfp_flower_cmsg_hdr *cmsg_hdr;
 	enum nfp_flower_cmsg_type_port type;
+	bool skb_stored = false;
 
 	cmsg_hdr = nfp_flower_cmsg_get_hdr(skb);
 
@@ -260,8 +261,10 @@ nfp_flower_cmsg_process_one_rx(struct nfp_app *app, struct sk_buff *skb)
 		nfp_tunnel_keep_alive(app, skb);
 		break;
 	case NFP_FLOWER_CMSG_TYPE_LAG_CONFIG:
-		if (app_priv->flower_ext_feats & NFP_FL_FEATS_LAG)
+		if (app_priv->flower_ext_feats & NFP_FL_FEATS_LAG) {
+			skb_stored = nfp_flower_lag_unprocessed_msg(app, skb);
 			break;
+		}
 		/* fall through */
 	default:
 		nfp_flower_cmsg_warn(app, "Cannot handle invalid repr control type %u\n",
@@ -269,7 +272,8 @@ nfp_flower_cmsg_process_one_rx(struct nfp_app *app, struct sk_buff *skb)
 		goto out;
 	}
 
-	dev_consume_skb_any(skb);
+	if (!skb_stored)
+		dev_consume_skb_any(skb);
 	return;
 out:
 	dev_kfree_skb_any(skb);
diff --git a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
index 35a700b879d7..a09fe2778250 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
@@ -36,6 +36,9 @@
 /* LAG group config flags. */
 #define NFP_FL_LAG_LAST			BIT(1)
 #define NFP_FL_LAG_FIRST		BIT(2)
+#define NFP_FL_LAG_DATA			BIT(3)
+#define NFP_FL_LAG_XON			BIT(4)
+#define NFP_FL_LAG_SYNC			BIT(5)
 #define NFP_FL_LAG_SWITCH		BIT(6)
 #define NFP_FL_LAG_RESET		BIT(7)
@@ -108,6 +111,8 @@ struct nfp_fl_lag_group {
 /* wait for more config */
 #define NFP_FL_LAG_DELAY		(msecs_to_jiffies(2))
 
+#define NFP_FL_LAG_RETRANS_LIMIT	100 /* max retrans cmsgs to store */
+
 static unsigned int nfp_fl_get_next_pkt_number(struct nfp_fl_lag *lag)
 {
 	lag->pkt_num++;
@@ -360,6 +365,92 @@ static void nfp_fl_lag_do_work(struct work_struct *work)
 	mutex_unlock(&lag->lock);
 }
 
+static int
+nfp_fl_lag_put_unprocessed(struct nfp_fl_lag *lag, struct sk_buff *skb)
+{
+	struct nfp_flower_cmsg_lag_config *cmsg_payload;
+
+	cmsg_payload = nfp_flower_cmsg_get_data(skb);
+	if (be32_to_cpu(cmsg_payload->group_id) >= NFP_FL_LAG_GROUP_MAX)
+		return -EINVAL;
+
+	/* Drop cmsg retrans if storage limit is exceeded to prevent
+	 * overloading. If the fw notices that expected messages have not been
+	 * received in a given time block, it will request a full resync.
+	 */
+	if (skb_queue_len(&lag->retrans_skbs) >= NFP_FL_LAG_RETRANS_LIMIT)
+		return -ENOSPC;
+
+	__skb_queue_tail(&lag->retrans_skbs, skb);
+
+	return 0;
+}
+
+static void nfp_fl_send_unprocessed(struct nfp_fl_lag *lag)
+{
+	struct nfp_flower_priv *priv;
+	struct sk_buff *skb;
+
+	priv = container_of(lag, struct nfp_flower_priv, nfp_lag);
+
+	while ((skb = __skb_dequeue(&lag->retrans_skbs)))
+		nfp_ctrl_tx(priv->app->ctrl, skb);
+}
+
+bool nfp_flower_lag_unprocessed_msg(struct nfp_app *app, struct sk_buff *skb)
+{
+	struct nfp_flower_cmsg_lag_config *cmsg_payload;
+	struct nfp_flower_priv *priv = app->priv;
+	struct nfp_fl_lag_group *group_entry;
+	unsigned long int flags;
+	bool store_skb = false;
+	int err;
+
+	cmsg_payload = nfp_flower_cmsg_get_data(skb);
+	flags = cmsg_payload->ctrl_flags;
+
+	/* Note the intentional fall through below. If DATA and XON are both
+	 * set, the message will be stored and sent again with the rest of the
+	 * unprocessed messages list.
+	 */
+
+	/* Store */
+	if (flags & NFP_FL_LAG_DATA)
+		if (!nfp_fl_lag_put_unprocessed(&priv->nfp_lag, skb))
+			store_skb = true;
+
+	/* Send stored */
+	if (flags & NFP_FL_LAG_XON)
+		nfp_fl_send_unprocessed(&priv->nfp_lag);
+
+	/* Resend all */
+	if (flags & NFP_FL_LAG_SYNC) {
+		/* To resend all config:
+		 * 1) Clear all unprocessed messages
+		 * 2) Mark all groups dirty
+		 * 3) Reset NFP group config
+		 * 4) Schedule a LAG config update
+		 */
+		__skb_queue_purge(&priv->nfp_lag.retrans_skbs);
+
+		mutex_lock(&priv->nfp_lag.lock);
+		list_for_each_entry(group_entry, &priv->nfp_lag.group_list,
+				    list)
+			group_entry->dirty = true;
+
+		err = nfp_flower_lag_reset(&priv->nfp_lag);
+		if (err)
+			nfp_flower_cmsg_warn(priv->app,
+					     "mem err in group reset msg\n");
+		mutex_unlock(&priv->nfp_lag.lock);
+
+		schedule_delayed_work(&priv->nfp_lag.work, 0);
+	}
+
+	return store_skb;
+}
+
 static void
 nfp_fl_lag_schedule_group_remove(struct nfp_fl_lag *lag,
 				 struct nfp_fl_lag_group *group)
@@ -565,6 +656,8 @@ void nfp_flower_lag_init(struct nfp_fl_lag *lag)
 	mutex_init(&lag->lock);
 	ida_init(&lag->ida_handle);
 
+	__skb_queue_head_init(&lag->retrans_skbs);
+
 	/* 0 is a reserved batch version so increment to first valid value. */
 	nfp_fl_increment_version(lag);
@@ -577,6 +670,8 @@ void nfp_flower_lag_cleanup(struct nfp_fl_lag *lag)
 
 	cancel_delayed_work_sync(&lag->work);
 
+	__skb_queue_purge(&lag->retrans_skbs);
+
 	/* Remove all groups. */
 	mutex_lock(&lag->lock);
 	list_for_each_entry_safe(entry, storage, &lag->group_list, list) {
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index e03efb034948..2fd75c155ccb 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -109,6 +109,8 @@ struct nfp_mtu_conf {
  * @batch_ver:		Incremented for each batch of config packets
  * @global_inst:	Instance allocator for groups
  * @rst_cfg:		Marker to reset HW LAG config
+ * @retrans_skbs:	Cmsgs that could not be processed by HW and require
+ *			retransmission
  */
 struct nfp_fl_lag {
 	struct notifier_block lag_nb;
@@ -120,6 +122,7 @@ struct nfp_fl_lag {
 	unsigned int batch_ver;
 	u8 global_inst;
 	bool rst_cfg;
+	struct sk_buff_head retrans_skbs;
 };
 
 /**
@@ -280,5 +283,6 @@ int nfp_flower_setup_tc_egress_cb(enum tc_setup_type type, void *type_data,
 void nfp_flower_lag_init(struct nfp_fl_lag *lag);
 void nfp_flower_lag_cleanup(struct nfp_fl_lag *lag);
 int nfp_flower_lag_reset(struct nfp_fl_lag *lag);
+bool nfp_flower_lag_unprocessed_msg(struct nfp_app *app, struct sk_buff *skb);
 
 #endif
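As a side note for reviewers, the Data/Xon/Sync handling that the commit message describes can be sketched as a tiny standalone model. This is not the driver's sk_buff queue handling; `struct lag_state`, its counters, and `lag_handle_cmsg()` are illustrative stand-ins, with only the flag bits and the retransmission limit taken from the patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Flag bits and storage limit as defined in lag_conf.c. */
#define LAG_DATA	(1u << 3)
#define LAG_XON		(1u << 4)
#define LAG_SYNC	(1u << 5)
#define RETRANS_LIMIT	100

/* Hypothetical stand-in for the driver's retransmission state. */
struct lag_state {
	int queued;	/* stored cmsgs awaiting retransmission */
	int sent;	/* cmsgs retransmitted so far */
	bool resync;	/* full group resync requested */
};

/* Returns true when the caller must keep ownership of the message
 * because it was stored, mirroring the return value contract of
 * nfp_flower_lag_unprocessed_msg(). Note the intentional fall
 * through: DATA and XON together store the message, then flush it
 * with the rest of the queue.
 */
static bool lag_handle_cmsg(struct lag_state *st, unsigned int flags)
{
	bool stored = false;

	/* Data: store for retransmission unless the limit is hit */
	if ((flags & LAG_DATA) && st->queued < RETRANS_LIMIT) {
		st->queued++;
		stored = true;
	}

	/* Xon: fw can accept messages again - flush the stored queue */
	if (flags & LAG_XON) {
		st->sent += st->queued;
		st->queued = 0;
	}

	/* Sync: drop stored cmsgs and resend all kernel group config */
	if (flags & LAG_SYNC) {
		st->queued = 0;
		st->resync = true;
	}

	return stored;
}
```

Dropping (rather than storing) once `RETRANS_LIMIT` is reached is safe in this scheme because the fw notices missing messages and falls back to requesting a full Sync.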