Patch Detail

GET: Show a patch.
PATCH: Partially update a patch.
PUT: Update a patch.
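The PATCH and PUT methods above require an authenticated request. As a minimal sketch using only Python's standard library, this builds (but does not send) a PATCH request against the endpoint shown on this page; the API token is a placeholder and the `"accepted"` state value is illustrative, not taken from this response:

```python
import json
import urllib.request

# Build a PATCH request that would update a patch's state.
# The token below is a placeholder; a real Patchwork API token is required.
url = "http://patchwork.ozlabs.org/api/patches/816694/"
body = json.dumps({"state": "accepted"}).encode()
req = urllib.request.Request(
    url,
    data=body,
    method="PATCH",
    headers={
        "Authorization": "Token 0123456789abcdef",  # placeholder token
        "Content-Type": "application/json",
    },
)
print(req.get_method())    # the HTTP verb carried by the request
print(req.get_full_url())  # the endpoint being targeted
```

Sending it would then be a single `urllib.request.urlopen(req)` call; the request is left unsent here so the sketch works without credentials.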
GET /api/patches/816694/?format=api
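The JSON document returned by this request is large; scripts usually pull out only a few fields. A minimal sketch of summarizing such a response, using a trimmed sample dict that mirrors a handful of fields from the full response below (the `summarize` helper is hypothetical, not part of the Patchwork API):

```python
import json

# A trimmed sample of the response shown below.
sample = json.loads("""
{
  "id": 816694,
  "name": "[net-next,07/12] mlxsw: spectrum: Add the multicast routing offloading logic",
  "state": "changes-requested",
  "submitter": {"name": "Jiri Pirko", "email": "jiri@resnulli.us"},
  "series": [{"id": 4309, "version": 1}]
}
""")

def summarize(patch):
    """Return a one-line summary of a Patchwork patch object."""
    series = patch["series"][0] if patch["series"] else {}
    return "#%d [%s] %s (series %s v%s)" % (
        patch["id"], patch["state"], patch["name"],
        series.get("id", "?"), series.get("version", "?"),
    )

print(summarize(sample))
```

The same helper works unchanged on the full response, since it only reads the fields present in the sample.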
{ "id": 816694, "url": "http://patchwork.ozlabs.org/api/patches/816694/?format=api", "web_url": "http://patchwork.ozlabs.org/project/netdev/patch/20170921064338.1282-8-jiri@resnulli.us/", "project": { "id": 7, "url": "http://patchwork.ozlabs.org/api/projects/7/?format=api", "name": "Linux network development", "link_name": "netdev", "list_id": "netdev.vger.kernel.org", "list_email": "netdev@vger.kernel.org", "web_url": null, "scm_url": null, "webscm_url": null, "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20170921064338.1282-8-jiri@resnulli.us>", "list_archive_url": null, "date": "2017-09-21T06:43:33", "name": "[net-next,07/12] mlxsw: spectrum: Add the multicast routing offloading logic", "commit_ref": null, "pull_url": null, "state": "changes-requested", "archived": true, "hash": "e666321082975a312a4ec1d59359f05b2c2d5879", "submitter": { "id": 15321, "url": "http://patchwork.ozlabs.org/api/people/15321/?format=api", "name": "Jiri Pirko", "email": "jiri@resnulli.us" }, "delegate": { "id": 34, "url": "http://patchwork.ozlabs.org/api/users/34/?format=api", "username": "davem", "first_name": "David", "last_name": "Miller", "email": "davem@davemloft.net" }, "mbox": "http://patchwork.ozlabs.org/project/netdev/patch/20170921064338.1282-8-jiri@resnulli.us/mbox/", "series": [ { "id": 4309, "url": "http://patchwork.ozlabs.org/api/series/4309/?format=api", "web_url": "http://patchwork.ozlabs.org/project/netdev/list/?series=4309", "date": "2017-09-21T06:43:26", "name": "mlxsw: Add support for offloading IPv4 multicast routes", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/4309/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/816694/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/816694/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<netdev-owner@vger.kernel.org>", "X-Original-To": "patchwork-incoming@ozlabs.org", "Delivered-To": 
"patchwork-incoming@ozlabs.org", "Authentication-Results": [ "ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=netdev-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)", "ozlabs.org; dkim=pass (2048-bit key;\n\tunprotected) header.d=resnulli-us.20150623.gappssmtp.com\n\theader.i=@resnulli-us.20150623.gappssmtp.com\n\theader.b=\"aYH2hytv\"; dkim-atps=neutral" ], "Received": [ "from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xyRsN5vs1z9s7g\n\tfor <patchwork-incoming@ozlabs.org>;\n\tThu, 21 Sep 2017 16:43:52 +1000 (AEST)", "(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751943AbdIUGnu (ORCPT <rfc822;patchwork-incoming@ozlabs.org>);\n\tThu, 21 Sep 2017 02:43:50 -0400", "from mail-wr0-f195.google.com ([209.85.128.195]:33143 \"EHLO\n\tmail-wr0-f195.google.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1751925AbdIUGnr (ORCPT\n\t<rfc822;netdev@vger.kernel.org>); Thu, 21 Sep 2017 02:43:47 -0400", "by mail-wr0-f195.google.com with SMTP id b9so2601305wra.0\n\tfor <netdev@vger.kernel.org>; Wed, 20 Sep 2017 23:43:46 -0700 (PDT)", "from localhost (ip-89-177-125-82.net.upcbroadband.cz.\n\t[89.177.125.82]) by smtp.gmail.com with ESMTPSA id\n\t109sm341931wrc.25.2017.09.20.23.43.44\n\t(version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256);\n\tWed, 20 Sep 2017 23:43:44 -0700 (PDT)" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=resnulli-us.20150623.gappssmtp.com; 
s=20150623;\n\th=from:to:cc:subject:date:message-id:in-reply-to:references;\n\tbh=89WVn+viTbF6qPGXpQkuvFTKYtFF5tGQ4c4GIsjPFMM=;\n\tb=aYH2hytvkQjzihVn5UWSOJuVa1TvyXnkJskBGkWfQ5IT85wWhiwTMrEDaPXd1h+k9/\n\tNYaD5P/k3NWZuU9ud1ZNWXTkDjbkMc8KoE8g7HJq/WEhGkHOBx07IImLzZII4qxlX6ZA\n\t7yt3MKbJFbHO9OGlBhPgIRXPhK+SnHJ7IbQZ4IOBU+Bf7ySGI7RaEGLwVSBN9NgkIONH\n\tUGRkrKUHy8XmeMxnSIP05eUuhZwn/09sdrEPEwQ+xrhSHKXBP9JypFLyS9PCiBCyb6zd\n\t2chrj/EcKoSywFQ3mMQYZM2Dwi6o5Xekj3L9CirJdYQcH9/0zokWvbiJkoAJarYDGlx3\n\tLHwA==", "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to\n\t:references;\n\tbh=89WVn+viTbF6qPGXpQkuvFTKYtFF5tGQ4c4GIsjPFMM=;\n\tb=EvjzCc0o6tmcVoB+/qjSu6qy/MbCqvmkfVPbH/JHZi6DAtHzCf4rEhFzJTaUKk37LX\n\tAMCpE4WC4IvbS0FoitUO/A5S/5y2k/SJJQRY52ahE19T4fLWURQxhTsR6RWqcQ6hlLvb\n\t6o/SOTB3vtT9dl3aYi0dWvnw9HtPmZd5jMj7jk8E183ld2Ux4CiiA6qxoUrGwI0e5jxv\n\tBEwZ1N9ye6MbYgsr81DpL3Vp5UGntf975YUiOpPFy553jiAAZZQCv9FHqpb1UK6aqd3w\n\tLrZVgXqugySTPXqr0zy1jtu3BPeI4Gi4AtPQqASMhiaR93c4IMarvtv4outKVEeDITpL\n\t2GnA==", "X-Gm-Message-State": "AHPjjUhHX+2cQcaai65iE5gtOVdK1LeLbZIgEKVp1AHnb90QnC84DNDK\n\ttGfU7cGZUpab0d0X+BuNaWYP5n6o", "X-Google-Smtp-Source": "AOwi7QCh0/99UQsKv8ijugG1GPTH19xXO+t7ne6+JVP6kdwI8WzJ7k3loxcIZaPTj8b/7S4bFZUHVQ==", "X-Received": "by 10.223.184.251 with SMTP id c56mr971151wrg.145.1505976225009; \n\tWed, 20 Sep 2017 23:43:45 -0700 (PDT)", "From": "Jiri Pirko <jiri@resnulli.us>", "To": "netdev@vger.kernel.org", "Cc": "davem@davemloft.net, yotamg@mellanox.com, idosch@mellanox.com,\n\tmlxsw@mellanox.com", "Subject": "[patch net-next 07/12] mlxsw: spectrum: Add the multicast routing\n\toffloading logic", "Date": "Thu, 21 Sep 2017 08:43:33 +0200", "Message-Id": "<20170921064338.1282-8-jiri@resnulli.us>", "X-Mailer": "git-send-email 2.9.5", "In-Reply-To": "<20170921064338.1282-1-jiri@resnulli.us>", "References": "<20170921064338.1282-1-jiri@resnulli.us>", "Sender": 
"netdev-owner@vger.kernel.org", "Precedence": "bulk", "List-ID": "<netdev.vger.kernel.org>", "X-Mailing-List": "netdev@vger.kernel.org" }, "content": "From: Yotam Gigi <yotamg@mellanox.com>\n\nAdd the multicast router offloading logic, which is in charge of handling\nthe VIF and MFC notifications and translating it to the hardware logic API.\n\nThe offloading logic has to overcome several obstacles in order to safely\ncomply with the kernel multicast router user API:\n - It must keep track of the mapping between VIFs to netdevices. The user\n can add an MFC cache entry pointing to a VIF, delete the VIF and add\n re-add it with a different netdevice. The offloading logic has to handle\n this in order to be compatible with the kernel logic.\n - It must keep track of the mapping between netdevices to spectrum RIFs,\n as the current hardware implementation assume having a RIF for every\n port in a multicast router.\n - It must handle routes pointing to pimreg device to be trapped to the\n kernel, as the packet should be delivered to userspace.\n - It must handle routes pointing tunnel VIFs. The current implementation\n does not support multicast forwarding to tunnels, thus routes that point\n to a tunnel should be trapped to the kernel.\n - It must be aware of proxy multicast routes, which include both (*,*)\n routes and duplicate routes. 
Currently proxy routes are not offloaded\n and trigger the abort mechanism: removal of all routes from hardware and\n triggering the traffic to go through the kernel.\n\nThe multicast routing offloading logic also updates the counters of the\noffloaded MFC routes in a periodic work.\n\nSigned-off-by: Yotam Gigi <yotamg@mellanox.com>\nReviewed-by: Ido Schimmel <idosch@mellanox.com>\nSigned-off-by: Jiri Pirko <jiri@mellanox.com>\n---\n drivers/net/ethernet/mellanox/mlxsw/Makefile | 3 +-\n drivers/net/ethernet/mellanox/mlxsw/spectrum.h | 1 +\n drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c | 1012 +++++++++++++++++++++\n drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.h | 133 +++\n 4 files changed, 1148 insertions(+), 1 deletion(-)\n create mode 100644 drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c\n create mode 100644 drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.h", "diff": "diff --git a/drivers/net/ethernet/mellanox/mlxsw/Makefile b/drivers/net/ethernet/mellanox/mlxsw/Makefile\nindex 4b88158..9b29764 100644\n--- a/drivers/net/ethernet/mellanox/mlxsw/Makefile\n+++ b/drivers/net/ethernet/mellanox/mlxsw/Makefile\n@@ -17,7 +17,8 @@ mlxsw_spectrum-objs\t\t:= spectrum.o spectrum_buffers.o \\\n \t\t\t\t spectrum_kvdl.o spectrum_acl_tcam.o \\\n \t\t\t\t spectrum_acl.o spectrum_flower.o \\\n \t\t\t\t spectrum_cnt.o spectrum_fid.o \\\n-\t\t\t\t spectrum_ipip.o spectrum_acl_flex_actions.o\n+\t\t\t\t spectrum_ipip.o spectrum_acl_flex_actions.o \\\n+\t\t\t\t spectrum_mr.o\n mlxsw_spectrum-$(CONFIG_MLXSW_SPECTRUM_DCB)\t+= spectrum_dcb.o\n mlxsw_spectrum-$(CONFIG_NET_DEVLINK) += spectrum_dpipe.o\n obj-$(CONFIG_MLXSW_MINIMAL)\t+= mlxsw_minimal.o\ndiff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h\nindex e907ec4..51d8b9f 100644\n--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h\n+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h\n@@ -153,6 +153,7 @@ struct mlxsw_sp {\n \tstruct mlxsw_sp_sb *sb;\n \tstruct 
mlxsw_sp_bridge *bridge;\n \tstruct mlxsw_sp_router *router;\n+\tstruct mlxsw_sp_mr *mr;\n \tstruct mlxsw_afa *afa;\n \tstruct mlxsw_sp_acl *acl;\n \tstruct mlxsw_sp_fid_core *fid_core;\ndiff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c\nnew file mode 100644\nindex 0000000..c77febd\n--- /dev/null\n+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c\n@@ -0,0 +1,1012 @@\n+/*\n+ * drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c\n+ * Copyright (c) 2017 Mellanox Technologies. All rights reserved.\n+ * Copyright (c) 2017 Yotam Gigi <yotamg@mellanox.com>\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions are met:\n+ *\n+ * 1. Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * 2. Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in the\n+ * documentation and/or other materials provided with the distribution.\n+ * 3. Neither the names of the copyright holders nor the names of its\n+ * contributors may be used to endorse or promote products derived from\n+ * this software without specific prior written permission.\n+ *\n+ * Alternatively, this software may be distributed under the terms of the\n+ * GNU General Public License (\"GPL\") version 2 as published by the Free\n+ * Software Foundation.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n+ * POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include <linux/rhashtable.h>\n+\n+#include \"spectrum_mr.h\"\n+#include \"spectrum_router.h\"\n+\n+struct mlxsw_sp_mr {\n+\tconst struct mlxsw_sp_mr_ops *mr_ops;\n+\tvoid *catchall_route_priv;\n+\tstruct delayed_work stats_update_dw;\n+\tstruct list_head table_list;\n+#define MLXSW_SP_MR_ROUTES_COUNTER_UPDATE_INTERVAL 5000 /* ms */\n+\tunsigned long priv[0];\n+\t/* priv has to be always the last item */\n+};\n+\n+struct mlxsw_sp_mr_vif {\n+\tstruct net_device *dev;\n+\tconst struct mlxsw_sp_rif *rif;\n+\tunsigned long vif_flags;\n+\n+\t/* A list of route_vif_entry structs that point to routes that the VIF\n+\t * instance is used as one of the egress VIFs\n+\t */\n+\tstruct list_head route_evif_list;\n+\n+\t/* A list of route_vif_entry structs that point to routes that the VIF\n+\t * instance is used as an ingress VIF\n+\t */\n+\tstruct list_head route_ivif_list;\n+};\n+\n+struct mlxsw_sp_mr_route_vif_entry {\n+\tstruct list_head vif_node;\n+\tstruct list_head route_node;\n+\tstruct mlxsw_sp_mr_vif *mr_vif;\n+\tstruct mlxsw_sp_mr_route *mr_route;\n+};\n+\n+struct mlxsw_sp_mr_table {\n+\tstruct list_head node;\n+\tenum mlxsw_sp_l3proto proto;\n+\tstruct mlxsw_sp *mlxsw_sp;\n+\tu32 vr_id;\n+\tstruct mlxsw_sp_mr_vif vifs[MAXVIFS];\n+\tstruct list_head route_list;\n+\tstruct rhashtable route_ht;\n+\tchar catchall_route_priv[0];\n+\t/* catchall_route_priv has to be always the last item */\n+};\n+\n+struct 
mlxsw_sp_mr_route {\n+\tstruct list_head node;\n+\tstruct rhash_head ht_node;\n+\tstruct mlxsw_sp_mr_route_key key;\n+\tenum mlxsw_sp_mr_route_action route_action;\n+\tu16 min_mtu;\n+\tstruct mfc_cache *mfc4;\n+\tvoid *route_priv;\n+\tconst struct mlxsw_sp_mr_table *mr_table;\n+\t/* A list of route_vif_entry structs that point to the egress VIFs */\n+\tstruct list_head evif_list;\n+\t/* A route_vif_entry struct that point to the ingress VIF */\n+\tstruct mlxsw_sp_mr_route_vif_entry ivif;\n+};\n+\n+static const struct rhashtable_params mlxsw_sp_mr_route_ht_params = {\n+\t.key_len = sizeof(struct mlxsw_sp_mr_route_key),\n+\t.key_offset = offsetof(struct mlxsw_sp_mr_route, key),\n+\t.head_offset = offsetof(struct mlxsw_sp_mr_route, ht_node),\n+\t.automatic_shrinking = true,\n+};\n+\n+static bool mlxsw_sp_mr_vif_regular(const struct mlxsw_sp_mr_vif *vif)\n+{\n+\treturn !(vif->vif_flags & (VIFF_TUNNEL | VIFF_REGISTER));\n+}\n+\n+static bool mlxsw_sp_mr_vif_valid(const struct mlxsw_sp_mr_vif *vif)\n+{\n+\treturn mlxsw_sp_mr_vif_regular(vif) && vif->dev && vif->rif;\n+}\n+\n+static bool mlxsw_sp_mr_vif_rif_invalid(const struct mlxsw_sp_mr_vif *vif)\n+{\n+\treturn mlxsw_sp_mr_vif_regular(vif) && vif->dev && !vif->rif;\n+}\n+\n+static bool\n+mlxsw_sp_mr_route_ivif_in_evifs(const struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tvifi_t ivif;\n+\n+\tswitch (mr_route->mr_table->proto) {\n+\tcase MLXSW_SP_L3_PROTO_IPV4:\n+\t\tivif = mr_route->mfc4->mfc_parent;\n+\t\treturn mr_route->mfc4->mfc_un.res.ttls[ivif] != 255;\n+\tcase MLXSW_SP_L3_PROTO_IPV6:\n+\t\t/* fall through */\n+\tdefault:\n+\t\tWARN_ON_ONCE(1);\n+\t}\n+\treturn false;\n+}\n+\n+static int\n+mlxsw_sp_mr_route_valid_evifs_num(const struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tstruct mlxsw_sp_mr_route_vif_entry *rve;\n+\tint valid_evifs = 0;\n+\n+\tvalid_evifs = 0;\n+\tlist_for_each_entry(rve, &mr_route->evif_list, route_node)\n+\t\tif (mlxsw_sp_mr_vif_valid(rve->mr_vif))\n+\t\t\tvalid_evifs++;\n+\treturn 
valid_evifs;\n+}\n+\n+static bool mlxsw_sp_mr_route_starg(const struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tswitch (mr_route->mr_table->proto) {\n+\tcase MLXSW_SP_L3_PROTO_IPV4:\n+\t\treturn mr_route->key.source_mask.addr4 == INADDR_ANY;\n+\tcase MLXSW_SP_L3_PROTO_IPV6:\n+\t\t/* fall through */\n+\tdefault:\n+\t\tWARN_ON_ONCE(1);\n+\t}\n+\treturn false;\n+}\n+\n+static enum mlxsw_sp_mr_route_action\n+mlxsw_sp_mr_route_action(const struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tstruct mlxsw_sp_mr_route_vif_entry *rve;\n+\n+\t/* If the ingress port is not regular and resolved, trap the route */\n+\tif (!mlxsw_sp_mr_vif_valid(mr_route->ivif.mr_vif))\n+\t\treturn MLXSW_SP_MR_ROUTE_ACTION_TRAP;\n+\n+\t/* The kernel does not match a (*,G) route that the ingress interface is\n+\t * not one of the egress interfaces, so trap these kind of routes.\n+\t */\n+\tif (mlxsw_sp_mr_route_starg(mr_route) &&\n+\t !mlxsw_sp_mr_route_ivif_in_evifs(mr_route))\n+\t\treturn MLXSW_SP_MR_ROUTE_ACTION_TRAP;\n+\n+\t/* If the route has no valid eVIFs, trap it. 
*/\n+\tif (!mlxsw_sp_mr_route_valid_evifs_num(mr_route))\n+\t\treturn MLXSW_SP_MR_ROUTE_ACTION_TRAP;\n+\n+\t/* If either one of the eVIFs is not regular (VIF of type pimreg or\n+\t * tunnel) or one of the VIFs has no matching RIF, trap the packet.\n+\t */\n+\tlist_for_each_entry(rve, &mr_route->evif_list, route_node) {\n+\t\tif (!mlxsw_sp_mr_vif_regular(rve->mr_vif) ||\n+\t\t mlxsw_sp_mr_vif_rif_invalid(rve->mr_vif))\n+\t\t\treturn MLXSW_SP_MR_ROUTE_ACTION_TRAP;\n+\t}\n+\treturn MLXSW_SP_MR_ROUTE_ACTION_FORWARD;\n+}\n+\n+static enum mlxsw_sp_mr_route_prio\n+mlxsw_sp_mr_route_prio(const struct mlxsw_sp_mr_route *mr_route)\n+{\n+\treturn mlxsw_sp_mr_route_starg(mr_route) ?\n+\t\tMLXSW_SP_MR_ROUTE_PRIO_STARG : MLXSW_SP_MR_ROUTE_PRIO_SG;\n+}\n+\n+static void mlxsw_sp_mr_route4_key(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct mlxsw_sp_mr_route_key *key,\n+\t\t\t\t const struct mfc_cache *mfc)\n+{\n+\tbool starg = (mfc->mfc_origin == INADDR_ANY);\n+\n+\tmemset(key, 0, sizeof(*key));\n+\tkey->vrid = mr_table->vr_id;\n+\tkey->proto = mr_table->proto;\n+\tkey->group.addr4 = mfc->mfc_mcastgrp;\n+\tkey->group_mask.addr4 = 0xffffffff;\n+\tkey->source.addr4 = mfc->mfc_origin;\n+\tkey->source_mask.addr4 = starg ? 
0 : 0xffffffff;\n+}\n+\n+static int mlxsw_sp_mr_route_evif_link(struct mlxsw_sp_mr_route *mr_route,\n+\t\t\t\t struct mlxsw_sp_mr_vif *mr_vif)\n+{\n+\tstruct mlxsw_sp_mr_route_vif_entry *rve;\n+\n+\trve = kzalloc(sizeof(*rve), GFP_KERNEL);\n+\tif (!rve)\n+\t\treturn -ENOMEM;\n+\trve->mr_route = mr_route;\n+\trve->mr_vif = mr_vif;\n+\tlist_add_tail(&rve->route_node, &mr_route->evif_list);\n+\tlist_add_tail(&rve->vif_node, &mr_vif->route_evif_list);\n+\treturn 0;\n+}\n+\n+static void\n+mlxsw_sp_mr_route_evif_unlink(struct mlxsw_sp_mr_route_vif_entry *rve)\n+{\n+\tlist_del(&rve->route_node);\n+\tlist_del(&rve->vif_node);\n+\tkfree(rve);\n+}\n+\n+static void mlxsw_sp_mr_route_ivif_link(struct mlxsw_sp_mr_route *mr_route,\n+\t\t\t\t\tstruct mlxsw_sp_mr_vif *mr_vif)\n+{\n+\tmr_route->ivif.mr_route = mr_route;\n+\tmr_route->ivif.mr_vif = mr_vif;\n+\tlist_add_tail(&mr_route->ivif.vif_node, &mr_vif->route_ivif_list);\n+}\n+\n+static void mlxsw_sp_mr_route_ivif_unlink(struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tlist_del(&mr_route->ivif.vif_node);\n+}\n+\n+static int\n+mlxsw_sp_mr_route_info_create(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t struct mlxsw_sp_mr_route *mr_route,\n+\t\t\t struct mlxsw_sp_mr_route_info *route_info)\n+{\n+\tstruct mlxsw_sp_mr_route_vif_entry *rve;\n+\tu16 *erif_indices;\n+\tu16 irif_index;\n+\tu16 erif = 0;\n+\n+\terif_indices = kmalloc_array(MAXVIFS, sizeof(*erif_indices),\n+\t\t\t\t GFP_KERNEL);\n+\tif (!erif_indices)\n+\t\treturn -ENOMEM;\n+\n+\tlist_for_each_entry(rve, &mr_route->evif_list, route_node) {\n+\t\tif (mlxsw_sp_mr_vif_valid(rve->mr_vif)) {\n+\t\t\tu16 rifi = mlxsw_sp_rif_index(rve->mr_vif->rif);\n+\n+\t\t\terif_indices[erif++] = rifi;\n+\t\t}\n+\t}\n+\n+\tif (mlxsw_sp_mr_vif_valid(mr_route->ivif.mr_vif))\n+\t\tirif_index = mlxsw_sp_rif_index(mr_route->ivif.mr_vif->rif);\n+\telse\n+\t\tirif_index = 0;\n+\n+\troute_info->irif_index = irif_index;\n+\troute_info->erif_indices = erif_indices;\n+\troute_info->min_mtu = 
mr_route->min_mtu;\n+\troute_info->route_action = mr_route->route_action;\n+\troute_info->erif_num = erif;\n+\treturn 0;\n+}\n+\n+static void\n+mlxsw_sp_mr_route_info_destroy(struct mlxsw_sp_mr_route_info *route_info)\n+{\n+\tkfree(route_info->erif_indices);\n+}\n+\n+static int mlxsw_sp_mr_route_write(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct mlxsw_sp_mr_route *mr_route,\n+\t\t\t\t bool replace)\n+{\n+\tstruct mlxsw_sp *mlxsw_sp = mr_table->mlxsw_sp;\n+\tstruct mlxsw_sp_mr_route_info route_info;\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\tint err;\n+\n+\terr = mlxsw_sp_mr_route_info_create(mr_table, mr_route, &route_info);\n+\tif (err)\n+\t\treturn err;\n+\n+\tif (!replace) {\n+\t\tstruct mlxsw_sp_mr_route_params route_params;\n+\n+\t\tmr_route->route_priv = kzalloc(mr->mr_ops->route_priv_size,\n+\t\t\t\t\t GFP_KERNEL);\n+\t\tif (!mr_route->route_priv) {\n+\t\t\terr = -ENOMEM;\n+\t\t\tgoto out;\n+\t\t}\n+\n+\t\troute_params.key = mr_route->key;\n+\t\troute_params.value = route_info;\n+\t\troute_params.prio = mlxsw_sp_mr_route_prio(mr_route);\n+\t\terr = mr->mr_ops->route_create(mlxsw_sp, mr->priv,\n+\t\t\t\t\t mr_route->route_priv,\n+\t\t\t\t\t &route_params);\n+\t\tif (err)\n+\t\t\tkfree(mr_route->route_priv);\n+\t} else {\n+\t\terr = mr->mr_ops->route_update(mlxsw_sp, mr_route->route_priv,\n+\t\t\t\t\t &route_info);\n+\t}\n+out:\n+\tmlxsw_sp_mr_route_info_destroy(&route_info);\n+\treturn err;\n+}\n+\n+static void mlxsw_sp_mr_route_erase(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tstruct mlxsw_sp *mlxsw_sp = mr_table->mlxsw_sp;\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\n+\tmr->mr_ops->route_destroy(mlxsw_sp, mr->priv, mr_route->route_priv);\n+\tkfree(mr_route->route_priv);\n+}\n+\n+static struct mlxsw_sp_mr_route *\n+mlxsw_sp_mr_route4_create(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t struct mfc_cache *mfc)\n+{\n+\tstruct mlxsw_sp_mr_route_vif_entry *rve, *tmp;\n+\tstruct mlxsw_sp_mr_route 
*mr_route;\n+\tint err;\n+\tint i;\n+\n+\t/* Allocate and init a new route and fill it with parameters */\n+\tmr_route = kzalloc(sizeof(*mr_table), GFP_KERNEL);\n+\tif (!mr_route)\n+\t\treturn ERR_PTR(-ENOMEM);\n+\tINIT_LIST_HEAD(&mr_route->evif_list);\n+\tmlxsw_sp_mr_route4_key(mr_table, &mr_route->key, mfc);\n+\n+\t/* Find min_mtu and link iVIF and eVIFs */\n+\tmr_route->min_mtu = ETH_MAX_MTU;\n+\tipmr_cache_hold(mfc);\n+\tmr_route->mfc4 = mfc;\n+\tmr_route->mr_table = mr_table;\n+\tfor (i = 0; i < MAXVIFS; i++) {\n+\t\tif (mfc->mfc_un.res.ttls[i] != 255) {\n+\t\t\terr = mlxsw_sp_mr_route_evif_link(mr_route,\n+\t\t\t\t\t\t\t &mr_table->vifs[i]);\n+\t\t\tif (err)\n+\t\t\t\tgoto err;\n+\t\t\tif (mr_table->vifs[i].dev &&\n+\t\t\t mr_table->vifs[i].dev->mtu < mr_route->min_mtu)\n+\t\t\t\tmr_route->min_mtu = mr_table->vifs[i].dev->mtu;\n+\t\t}\n+\t}\n+\tmlxsw_sp_mr_route_ivif_link(mr_route, &mr_table->vifs[mfc->mfc_parent]);\n+\tif (err)\n+\t\tgoto err;\n+\n+\tmr_route->route_action = mlxsw_sp_mr_route_action(mr_route);\n+\treturn mr_route;\n+err:\n+\tipmr_cache_put(mfc);\n+\tlist_for_each_entry_safe(rve, tmp, &mr_route->evif_list, route_node)\n+\t\tmlxsw_sp_mr_route_evif_unlink(rve);\n+\tkfree(mr_route);\n+\treturn ERR_PTR(err);\n+}\n+\n+static void mlxsw_sp_mr_route4_destroy(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tstruct mlxsw_sp_mr_route_vif_entry *rve, *tmp;\n+\n+\tmlxsw_sp_mr_route_ivif_unlink(mr_route);\n+\tipmr_cache_put(mr_route->mfc4);\n+\tlist_for_each_entry_safe(rve, tmp, &mr_route->evif_list, route_node)\n+\t\tmlxsw_sp_mr_route_evif_unlink(rve);\n+\tkfree(mr_route);\n+}\n+\n+static void mlxsw_sp_mr_route_destroy(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tswitch (mr_table->proto) {\n+\tcase MLXSW_SP_L3_PROTO_IPV4:\n+\t\tmlxsw_sp_mr_route4_destroy(mr_table, mr_route);\n+\t\tbreak;\n+\tcase MLXSW_SP_L3_PROTO_IPV6:\n+\t\t/* fall through 
*/\n+\tdefault:\n+\t\tWARN_ON_ONCE(1);\n+\t}\n+}\n+\n+static void mlxsw_sp_mr_mfc_offload_set(struct mlxsw_sp_mr_route *mr_route,\n+\t\t\t\t\tbool offload)\n+{\n+\tswitch (mr_route->mr_table->proto) {\n+\tcase MLXSW_SP_L3_PROTO_IPV4:\n+\t\tif (offload)\n+\t\t\tmr_route->mfc4->mfc_flags |= MFC_OFFLOAD;\n+\t\telse\n+\t\t\tmr_route->mfc4->mfc_flags &= ~MFC_OFFLOAD;\n+\t\tbreak;\n+\tcase MLXSW_SP_L3_PROTO_IPV6:\n+\t\t/* fall through */\n+\tdefault:\n+\t\tWARN_ON_ONCE(1);\n+\t}\n+}\n+\n+static void mlxsw_sp_mr_mfc_offload_update(struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tbool offload;\n+\n+\toffload = mr_route->route_action != MLXSW_SP_MR_ROUTE_ACTION_TRAP;\n+\tmlxsw_sp_mr_mfc_offload_set(mr_route, offload);\n+}\n+\n+static void __mlxsw_sp_mr_route_del(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tmlxsw_sp_mr_mfc_offload_set(mr_route, false);\n+\tmlxsw_sp_mr_route_erase(mr_table, mr_route);\n+\trhashtable_remove_fast(&mr_table->route_ht, &mr_route->ht_node,\n+\t\t\t mlxsw_sp_mr_route_ht_params);\n+\tlist_del(&mr_route->node);\n+\tmlxsw_sp_mr_route_destroy(mr_table, mr_route);\n+}\n+\n+int mlxsw_sp_mr_route4_add(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t struct mfc_cache *mfc, bool replace)\n+{\n+\tstruct mlxsw_sp_mr_route *mr_orig_route = NULL;\n+\tstruct mlxsw_sp_mr_route *mr_route;\n+\tint err;\n+\n+\t/* If the route is a (*,*) route, abort, as these kind of routes are\n+\t * used for proxy routes.\n+\t */\n+\tif (mfc->mfc_origin == INADDR_ANY && mfc->mfc_mcastgrp == INADDR_ANY) {\n+\t\tdev_warn(mr_table->mlxsw_sp->bus_info->dev,\n+\t\t\t \"Offloading proxy routes is not supported.\\n\");\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/* Create a new route */\n+\tmr_route = mlxsw_sp_mr_route4_create(mr_table, mfc);\n+\tif (IS_ERR(mr_route))\n+\t\treturn PTR_ERR(mr_route);\n+\n+\t/* Find any route with a matching key */\n+\tmr_orig_route = rhashtable_lookup_fast(&mr_table->route_ht,\n+\t\t\t\t\t &mr_route->key,\n+\t\t\t\t\t 
mlxsw_sp_mr_route_ht_params);\n+\tif (replace) {\n+\t\t/* On replace case, make the route point to the new route_priv.\n+\t\t */\n+\t\tif (WARN_ON(!mr_orig_route)) {\n+\t\t\terr = -ENOENT;\n+\t\t\tgoto err_no_orig_route;\n+\t\t}\n+\t\tmr_route->route_priv = mr_orig_route->route_priv;\n+\t} else if (mr_orig_route) {\n+\t\t/* On non replace case, if another route with the same key was\n+\t\t * found, abort, as duplicate routes are used for proxy routes.\n+\t\t */\n+\t\tdev_warn(mr_table->mlxsw_sp->bus_info->dev,\n+\t\t\t \"Offloading proxy routes is not supported.\\n\");\n+\t\terr = -EINVAL;\n+\t\tgoto err_duplicate_route;\n+\t}\n+\n+\t/* Put it in the table data-structures */\n+\tlist_add_tail(&mr_route->node, &mr_table->route_list);\n+\terr = rhashtable_insert_fast(&mr_table->route_ht,\n+\t\t\t\t &mr_route->ht_node,\n+\t\t\t\t mlxsw_sp_mr_route_ht_params);\n+\tif (err)\n+\t\tgoto err_rhashtable_insert;\n+\n+\t/* Write the route to the hardware */\n+\terr = mlxsw_sp_mr_route_write(mr_table, mr_route, replace);\n+\tif (err)\n+\t\tgoto err_mr_route_write;\n+\n+\t/* Destroy the original route */\n+\tif (replace) {\n+\t\trhashtable_remove_fast(&mr_table->route_ht,\n+\t\t\t\t &mr_orig_route->ht_node,\n+\t\t\t\t mlxsw_sp_mr_route_ht_params);\n+\t\tlist_del(&mr_orig_route->node);\n+\t\tmlxsw_sp_mr_route4_destroy(mr_table, mr_orig_route);\n+\t}\n+\n+\tmlxsw_sp_mr_mfc_offload_update(mr_route);\n+\treturn 0;\n+\n+err_mr_route_write:\n+\trhashtable_remove_fast(&mr_table->route_ht, &mr_route->ht_node,\n+\t\t\t mlxsw_sp_mr_route_ht_params);\n+err_rhashtable_insert:\n+\tlist_del(&mr_route->node);\n+err_no_orig_route:\n+err_duplicate_route:\n+\tmlxsw_sp_mr_route4_destroy(mr_table, mr_route);\n+\treturn err;\n+}\n+\n+void mlxsw_sp_mr_route4_del(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t struct mfc_cache *mfc)\n+{\n+\tstruct mlxsw_sp_mr_route *mr_route;\n+\tstruct mlxsw_sp_mr_route_key key;\n+\n+\tmlxsw_sp_mr_route4_key(mr_table, &key, mfc);\n+\tmr_route = 
rhashtable_lookup_fast(&mr_table->route_ht, &key,\n+\t\t\t\t\t mlxsw_sp_mr_route_ht_params);\n+\tif (mr_route)\n+\t\t__mlxsw_sp_mr_route_del(mr_table, mr_route);\n+}\n+\n+/* Should be called after the VIF struct is updated */\n+static int\n+mlxsw_sp_mr_route_ivif_resolve(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t struct mlxsw_sp_mr_route_vif_entry *rve)\n+{\n+\tstruct mlxsw_sp *mlxsw_sp = mr_table->mlxsw_sp;\n+\tenum mlxsw_sp_mr_route_action route_action;\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\tu16 irif_index;\n+\tint err;\n+\n+\troute_action = mlxsw_sp_mr_route_action(rve->mr_route);\n+\tif (route_action == MLXSW_SP_MR_ROUTE_ACTION_TRAP)\n+\t\treturn 0;\n+\n+\t/* rve->mr_vif->rif is guaranteed to be valid at this stage */\n+\tirif_index = mlxsw_sp_rif_index(rve->mr_vif->rif);\n+\terr = mr->mr_ops->route_irif_update(mlxsw_sp, rve->mr_route->route_priv,\n+\t\t\t\t\t irif_index);\n+\tif (err)\n+\t\treturn err;\n+\n+\terr = mr->mr_ops->route_action_update(mlxsw_sp,\n+\t\t\t\t\t rve->mr_route->route_priv,\n+\t\t\t\t\t route_action);\n+\tif (err)\n+\t\t/* No need to rollback here because the iRIF change only takes\n+\t\t * place after the action has been updated.\n+\t\t */\n+\t\treturn err;\n+\n+\trve->mr_route->route_action = route_action;\n+\tmlxsw_sp_mr_mfc_offload_update(rve->mr_route);\n+\treturn 0;\n+}\n+\n+static void\n+mlxsw_sp_mr_route_ivif_unresolve(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct mlxsw_sp_mr_route_vif_entry *rve)\n+{\n+\tstruct mlxsw_sp *mlxsw_sp = mr_table->mlxsw_sp;\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\n+\tmr->mr_ops->route_action_update(mlxsw_sp, rve->mr_route->route_priv,\n+\t\t\t\t\tMLXSW_SP_MR_ROUTE_ACTION_TRAP);\n+\trve->mr_route->route_action = MLXSW_SP_MR_ROUTE_ACTION_TRAP;\n+\tmlxsw_sp_mr_mfc_offload_update(rve->mr_route);\n+}\n+\n+/* Should be called after the RIF struct is updated */\n+static int\n+mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t struct mlxsw_sp_mr_route_vif_entry 
*rve)\n+{\n+\tstruct mlxsw_sp *mlxsw_sp = mr_table->mlxsw_sp;\n+\tenum mlxsw_sp_mr_route_action route_action;\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\tu16 erif_index = 0;\n+\tint err;\n+\n+\t/* Update the route action, as the new eVIF can be a tunnel or a pimreg\n+\t * device which will require updating the action.\n+\t */\n+\troute_action = mlxsw_sp_mr_route_action(rve->mr_route);\n+\tif (route_action != rve->mr_route->route_action) {\n+\t\terr = mr->mr_ops->route_action_update(mlxsw_sp,\n+\t\t\t\t\t\t rve->mr_route->route_priv,\n+\t\t\t\t\t\t route_action);\n+\t\tif (err)\n+\t\t\treturn err;\n+\t}\n+\n+\t/* Add the eRIF */\n+\tif (mlxsw_sp_mr_vif_valid(rve->mr_vif)) {\n+\t\terif_index = mlxsw_sp_rif_index(rve->mr_vif->rif);\n+\t\terr = mr->mr_ops->route_erif_add(mlxsw_sp,\n+\t\t\t\t\t\t rve->mr_route->route_priv,\n+\t\t\t\t\t\t erif_index);\n+\t\tif (err)\n+\t\t\tgoto err_route_erif_add;\n+\t}\n+\n+\t/* Update the minimum MTU */\n+\tif (rve->mr_vif->dev->mtu < rve->mr_route->min_mtu) {\n+\t\trve->mr_route->min_mtu = rve->mr_vif->dev->mtu;\n+\t\terr = mr->mr_ops->route_min_mtu_update(mlxsw_sp,\n+\t\t\t\t\t\t rve->mr_route->route_priv,\n+\t\t\t\t\t\t rve->mr_route->min_mtu);\n+\t\tif (err)\n+\t\t\tgoto err_route_min_mtu_update;\n+\t}\n+\n+\trve->mr_route->route_action = route_action;\n+\tmlxsw_sp_mr_mfc_offload_update(rve->mr_route);\n+\treturn 0;\n+\n+err_route_min_mtu_update:\n+\tif (mlxsw_sp_mr_vif_valid(rve->mr_vif))\n+\t\tmr->mr_ops->route_erif_del(mlxsw_sp, rve->mr_route->route_priv,\n+\t\t\t\t\t erif_index);\n+err_route_erif_add:\n+\tif (route_action != rve->mr_route->route_action)\n+\t\tmr->mr_ops->route_action_update(mlxsw_sp,\n+\t\t\t\t\t\trve->mr_route->route_priv,\n+\t\t\t\t\t\trve->mr_route->route_action);\n+\treturn err;\n+}\n+\n+/* Should be called before the RIF struct is updated */\n+static void\n+mlxsw_sp_mr_route_evif_unresolve(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct mlxsw_sp_mr_route_vif_entry *rve)\n+{\n+\tstruct mlxsw_sp 
*mlxsw_sp = mr_table->mlxsw_sp;\n+\tenum mlxsw_sp_mr_route_action route_action;\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\tu16 rifi;\n+\n+\t/* If the unresolved RIF was not valid, no need to delete it */\n+\tif (!mlxsw_sp_mr_vif_valid(rve->mr_vif))\n+\t\treturn;\n+\n+\t/* Update the route action: if there is only one valid eVIF in the\n+\t * route, set the action to trap as the VIF deletion will lead to zero\n+\t * valid eVIFs. On any other case, use the mlxsw_sp_mr_route_action to\n+\t * determine the route action.\n+\t */\n+\tif (mlxsw_sp_mr_route_valid_evifs_num(rve->mr_route) == 1)\n+\t\troute_action = MLXSW_SP_MR_ROUTE_ACTION_TRAP;\n+\telse\n+\t\troute_action = mlxsw_sp_mr_route_action(rve->mr_route);\n+\tif (route_action != rve->mr_route->route_action)\n+\t\tmr->mr_ops->route_action_update(mlxsw_sp,\n+\t\t\t\t\t\trve->mr_route->route_priv,\n+\t\t\t\t\t\troute_action);\n+\n+\t/* Delete the erif from the route */\n+\trifi = mlxsw_sp_rif_index(rve->mr_vif->rif);\n+\tmr->mr_ops->route_erif_del(mlxsw_sp, rve->mr_route->route_priv, rifi);\n+\trve->mr_route->route_action = route_action;\n+\tmlxsw_sp_mr_mfc_offload_update(rve->mr_route);\n+}\n+\n+static int mlxsw_sp_mr_vif_resolve(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct net_device *dev,\n+\t\t\t\t struct mlxsw_sp_mr_vif *mr_vif,\n+\t\t\t\t unsigned long vif_flags,\n+\t\t\t\t const struct mlxsw_sp_rif *rif)\n+{\n+\tstruct mlxsw_sp_mr_route_vif_entry *irve, *erve;\n+\tint err;\n+\n+\t/* Update the VIF */\n+\tmr_vif->dev = dev;\n+\tmr_vif->rif = rif;\n+\tmr_vif->vif_flags = vif_flags;\n+\n+\t/* Update all routes where this VIF is used as an unresolved iRIF */\n+\tlist_for_each_entry(irve, &mr_vif->route_ivif_list, vif_node) {\n+\t\terr = mlxsw_sp_mr_route_ivif_resolve(mr_table, irve);\n+\t\tif (err)\n+\t\t\tgoto err_irif_unresolve;\n+\t}\n+\n+\t/* Update all routes where this VIF is used as an unresolved eRIF */\n+\tlist_for_each_entry(erve, &mr_vif->route_evif_list, vif_node) {\n+\t\terr = 
mlxsw_sp_mr_route_evif_resolve(mr_table, erve);\n+\t\tif (err)\n+\t\t\tgoto err_erif_unresolve;\n+\t}\n+\treturn 0;\n+\n+err_erif_unresolve:\n+\tlist_for_each_entry_from_reverse(erve, &mr_vif->route_evif_list,\n+\t\t\t\t\t vif_node)\n+\t\tmlxsw_sp_mr_route_evif_unresolve(mr_table, erve);\n+err_irif_unresolve:\n+\tlist_for_each_entry_from_reverse(irve, &mr_vif->route_ivif_list,\n+\t\t\t\t\t vif_node)\n+\t\tmlxsw_sp_mr_route_ivif_unresolve(mr_table, irve);\n+\tmr_vif->rif = NULL;\n+\treturn err;\n+}\n+\n+static void mlxsw_sp_mr_vif_unresolve(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\t struct net_device *dev,\n+\t\t\t\t struct mlxsw_sp_mr_vif *mr_vif)\n+{\n+\tstruct mlxsw_sp_mr_route_vif_entry *rve;\n+\n+\t/* Update all routes where this VIF is used as an unresolved eRIF */\n+\tlist_for_each_entry(rve, &mr_vif->route_evif_list, vif_node)\n+\t\tmlxsw_sp_mr_route_evif_unresolve(mr_table, rve);\n+\n+\t/* Update all routes where this VIF is used as an unresolved iRIF */\n+\tlist_for_each_entry(rve, &mr_vif->route_ivif_list, vif_node)\n+\t\tmlxsw_sp_mr_route_ivif_unresolve(mr_table, rve);\n+\n+\t/* Update the VIF */\n+\tmr_vif->dev = dev;\n+\tmr_vif->rif = NULL;\n+}\n+\n+int mlxsw_sp_mr_vif_add(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\tstruct net_device *dev, vifi_t vif_index,\n+\t\t\tunsigned long vif_flags, const struct mlxsw_sp_rif *rif)\n+{\n+\tstruct mlxsw_sp_mr_vif *mr_vif;\n+\n+\tif (WARN_ON(vif_index >= MAXVIFS))\n+\t\treturn -EINVAL;\n+\tmr_vif = &mr_table->vifs[vif_index];\n+\tif (mr_vif->dev)\n+\t\treturn -EEXIST;\n+\treturn mlxsw_sp_mr_vif_resolve(mr_table, dev, mr_vif, vif_flags, rif);\n+}\n+\n+void mlxsw_sp_mr_vif_del(struct mlxsw_sp_mr_table *mr_table, vifi_t vif_index)\n+{\n+\tstruct mlxsw_sp_mr_vif *mr_vif;\n+\n+\tif (WARN_ON(vif_index >= MAXVIFS))\n+\t\treturn;\n+\tmr_vif = &mr_table->vifs[vif_index];\n+\tif (WARN_ON(!mr_vif->dev))\n+\t\treturn;\n+\tmlxsw_sp_mr_vif_unresolve(mr_table, NULL, mr_vif);\n+}\n+\n+struct mlxsw_sp_mr_vif *\n+mlxsw_sp_mr_dev_vif_lookup(struct 
mlxsw_sp_mr_table *mr_table,\n+\t\t\t const struct net_device *dev)\n+{\n+\tvifi_t vif_index;\n+\n+\tfor (vif_index = 0; vif_index < MAXVIFS; vif_index++)\n+\t\tif (mr_table->vifs[vif_index].dev == dev)\n+\t\t\treturn &mr_table->vifs[vif_index];\n+\treturn NULL;\n+}\n+\n+int mlxsw_sp_mr_rif_add(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\tconst struct mlxsw_sp_rif *rif)\n+{\n+\tconst struct net_device *rif_dev = mlxsw_sp_rif_dev(rif);\n+\tstruct mlxsw_sp_mr_vif *mr_vif;\n+\n+\tif (!rif_dev)\n+\t\treturn 0;\n+\n+\tmr_vif = mlxsw_sp_mr_dev_vif_lookup(mr_table, rif_dev);\n+\tif (!mr_vif)\n+\t\treturn 0;\n+\treturn mlxsw_sp_mr_vif_resolve(mr_table, mr_vif->dev, mr_vif,\n+\t\t\t\t mr_vif->vif_flags, rif);\n+}\n+\n+void mlxsw_sp_mr_rif_del(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t const struct mlxsw_sp_rif *rif)\n+{\n+\tconst struct net_device *rif_dev = mlxsw_sp_rif_dev(rif);\n+\tstruct mlxsw_sp_mr_vif *mr_vif;\n+\n+\tif (!rif_dev)\n+\t\treturn;\n+\n+\tmr_vif = mlxsw_sp_mr_dev_vif_lookup(mr_table, rif_dev);\n+\tif (!mr_vif)\n+\t\treturn;\n+\tmlxsw_sp_mr_vif_unresolve(mr_table, mr_vif->dev, mr_vif);\n+}\n+\n+void mlxsw_sp_mr_rif_mtu_update(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\tconst struct mlxsw_sp_rif *rif, int mtu)\n+{\n+\tconst struct net_device *rif_dev = mlxsw_sp_rif_dev(rif);\n+\tstruct mlxsw_sp *mlxsw_sp = mr_table->mlxsw_sp;\n+\tstruct mlxsw_sp_mr_route_vif_entry *rve;\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\tstruct mlxsw_sp_mr_vif *mr_vif;\n+\n+\tif (!rif_dev)\n+\t\treturn;\n+\n+\t/* Search for a VIF that uses that RIF */\n+\tmr_vif = mlxsw_sp_mr_dev_vif_lookup(mr_table, rif_dev);\n+\tif (!mr_vif)\n+\t\treturn;\n+\n+\t/* Update all the routes that use that VIF as an eVIF */\n+\tlist_for_each_entry(rve, &mr_vif->route_evif_list, vif_node) {\n+\t\tif (mtu < rve->mr_route->min_mtu) {\n+\t\t\trve->mr_route->min_mtu = mtu;\n+\t\t\tmr->mr_ops->route_min_mtu_update(mlxsw_sp,\n+\t\t\t\t\t\t\t rve->mr_route->route_priv,\n+\t\t\t\t\t\t\t 
mtu);\n+\t\t}\n+\t}\n+}\n+\n+struct mlxsw_sp_mr_table *mlxsw_sp_mr_table_create(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\t\t u32 vr_id,\n+\t\t\t\t\t\t enum mlxsw_sp_l3proto proto)\n+{\n+\tstruct mlxsw_sp_mr_route_params catchall_route_params = {\n+\t\t.prio = MLXSW_SP_MR_ROUTE_PRIO_CATCHALL,\n+\t\t.key = {\n+\t\t\t.vrid = vr_id,\n+\t\t},\n+\t\t.value = {\n+\t\t\t.route_action = MLXSW_SP_MR_ROUTE_ACTION_TRAP,\n+\t\t}\n+\t};\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\tstruct mlxsw_sp_mr_table *mr_table;\n+\tint err;\n+\tint i;\n+\n+\tmr_table = kzalloc(sizeof(*mr_table) + mr->mr_ops->route_priv_size,\n+\t\t\t GFP_KERNEL);\n+\tif (!mr_table)\n+\t\treturn ERR_PTR(-ENOMEM);\n+\n+\tmr_table->vr_id = vr_id;\n+\tmr_table->mlxsw_sp = mlxsw_sp;\n+\tmr_table->proto = proto;\n+\tINIT_LIST_HEAD(&mr_table->route_list);\n+\n+\terr = rhashtable_init(&mr_table->route_ht,\n+\t\t\t &mlxsw_sp_mr_route_ht_params);\n+\tif (err)\n+\t\tgoto err_route_rhashtable_init;\n+\n+\tfor (i = 0; i < MAXVIFS; i++) {\n+\t\tINIT_LIST_HEAD(&mr_table->vifs[i].route_evif_list);\n+\t\tINIT_LIST_HEAD(&mr_table->vifs[i].route_ivif_list);\n+\t}\n+\n+\terr = mr->mr_ops->route_create(mlxsw_sp, mr->priv,\n+\t\t\t\t mr_table->catchall_route_priv,\n+\t\t\t\t &catchall_route_params);\n+\tif (err)\n+\t\tgoto err_ops_route_create;\n+\tlist_add_tail(&mr_table->node, &mr->table_list);\n+\treturn mr_table;\n+\n+err_ops_route_create:\n+\trhashtable_destroy(&mr_table->route_ht);\n+err_route_rhashtable_init:\n+\tkfree(mr_table);\n+\treturn ERR_PTR(err);\n+}\n+\n+void mlxsw_sp_mr_table_destroy(struct mlxsw_sp_mr_table *mr_table)\n+{\n+\tstruct mlxsw_sp *mlxsw_sp = mr_table->mlxsw_sp;\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\n+\tWARN_ON(!mlxsw_sp_mr_table_empty(mr_table));\n+\tlist_del(&mr_table->node);\n+\tmr->mr_ops->route_destroy(mlxsw_sp, mr->priv,\n+\t\t\t\t &mr_table->catchall_route_priv);\n+\trhashtable_destroy(&mr_table->route_ht);\n+\tkfree(mr_table);\n+}\n+\n+void mlxsw_sp_mr_table_flush(struct 
mlxsw_sp_mr_table *mr_table)\n+{\n+\tstruct mlxsw_sp_mr_route *mr_route, *tmp;\n+\tint i;\n+\n+\tlist_for_each_entry_safe(mr_route, tmp, &mr_table->route_list, node)\n+\t\t__mlxsw_sp_mr_route_del(mr_table, mr_route);\n+\n+\tfor (i = 0; i < MAXVIFS; i++) {\n+\t\tmr_table->vifs[i].dev = NULL;\n+\t\tmr_table->vifs[i].rif = NULL;\n+\t}\n+}\n+\n+bool mlxsw_sp_mr_table_empty(const struct mlxsw_sp_mr_table *mr_table)\n+{\n+\tint i;\n+\n+\tfor (i = 0; i < MAXVIFS; i++)\n+\t\tif (mr_table->vifs[i].dev)\n+\t\t\treturn false;\n+\treturn list_empty(&mr_table->route_list);\n+}\n+\n+static void mlxsw_sp_mr_route_stats_update(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\t struct mlxsw_sp_mr_route *mr_route)\n+{\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\tu64 packets, bytes;\n+\n+\tif (mr_route->route_action == MLXSW_SP_MR_ROUTE_ACTION_TRAP)\n+\t\treturn;\n+\n+\tmr->mr_ops->route_stats(mlxsw_sp, mr_route->route_priv, &packets,\n+\t\t\t\t&bytes);\n+\n+\tswitch (mr_route->mr_table->proto) {\n+\tcase MLXSW_SP_L3_PROTO_IPV4:\n+\t\tmr_route->mfc4->mfc_un.res.pkt = packets;\n+\t\tmr_route->mfc4->mfc_un.res.bytes = bytes;\n+\t\tbreak;\n+\tcase MLXSW_SP_L3_PROTO_IPV6:\n+\t\t/* fall through */\n+\tdefault:\n+\t\tWARN_ON_ONCE(1);\n+\t}\n+}\n+\n+static void mlxsw_sp_mr_stats_update(struct work_struct *work)\n+{\n+\tstruct mlxsw_sp_mr *mr = container_of(work, struct mlxsw_sp_mr,\n+\t\t\t\t\t stats_update_dw.work);\n+\tstruct mlxsw_sp_mr_table *mr_table;\n+\tstruct mlxsw_sp_mr_route *mr_route;\n+\tunsigned long interval;\n+\n+\trtnl_lock();\n+\tlist_for_each_entry(mr_table, &mr->table_list, node)\n+\t\tlist_for_each_entry(mr_route, &mr_table->route_list, node)\n+\t\t\tmlxsw_sp_mr_route_stats_update(mr_table->mlxsw_sp,\n+\t\t\t\t\t\t mr_route);\n+\trtnl_unlock();\n+\n+\tinterval = msecs_to_jiffies(MLXSW_SP_MR_ROUTES_COUNTER_UPDATE_INTERVAL);\n+\tmlxsw_core_schedule_dw(&mr->stats_update_dw, interval);\n+}\n+\n+int mlxsw_sp_mr_init(struct mlxsw_sp *mlxsw_sp,\n+\t\t const struct mlxsw_sp_mr_ops 
*mr_ops)\n+{\n+\tstruct mlxsw_sp_mr *mr;\n+\tunsigned long interval;\n+\tint err;\n+\n+\tmr = kzalloc(sizeof(*mr) + mr_ops->priv_size, GFP_KERNEL);\n+\tif (!mr)\n+\t\treturn -ENOMEM;\n+\tmr->mr_ops = mr_ops;\n+\tmlxsw_sp->mr = mr;\n+\tINIT_LIST_HEAD(&mr->table_list);\n+\n+\terr = mr_ops->init(mlxsw_sp, mr->priv);\n+\tif (err)\n+\t\tgoto err;\n+\n+\t/* Create the delayed work for counter updates */\n+\tINIT_DELAYED_WORK(&mr->stats_update_dw, mlxsw_sp_mr_stats_update);\n+\tinterval = msecs_to_jiffies(MLXSW_SP_MR_ROUTES_COUNTER_UPDATE_INTERVAL);\n+\tmlxsw_core_schedule_dw(&mr->stats_update_dw, interval);\n+\treturn 0;\n+err:\n+\tkfree(mr);\n+\treturn err;\n+}\n+\n+void mlxsw_sp_mr_fini(struct mlxsw_sp *mlxsw_sp)\n+{\n+\tstruct mlxsw_sp_mr *mr = mlxsw_sp->mr;\n+\n+\tcancel_delayed_work_sync(&mr->stats_update_dw);\n+\tmr->mr_ops->fini(mr->priv);\n+\tkfree(mr);\n+}\ndiff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.h\nnew file mode 100644\nindex 0000000..c851b23\n--- /dev/null\n+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.h\n@@ -0,0 +1,133 @@\n+/*\n+ * drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.h\n+ * Copyright (c) 2017 Mellanox Technologies. All rights reserved.\n+ * Copyright (c) 2017 Yotam Gigi <yotamg@mellanox.com>\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions are met:\n+ *\n+ * 1. Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * 2. Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in the\n+ * documentation and/or other materials provided with the distribution.\n+ * 3. 
Neither the names of the copyright holders nor the names of its\n+ * contributors may be used to endorse or promote products derived from\n+ * this software without specific prior written permission.\n+ *\n+ * Alternatively, this software may be distributed under the terms of the\n+ * GNU General Public License (\"GPL\") version 2 as published by the Free\n+ * Software Foundation.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n+ * POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#ifndef _MLXSW_SPECTRUM_MCROUTER_H\n+#define _MLXSW_SPECTRUM_MCROUTER_H\n+\n+#include <linux/mroute.h>\n+#include \"spectrum_router.h\"\n+#include \"spectrum.h\"\n+\n+enum mlxsw_sp_mr_route_action {\n+\tMLXSW_SP_MR_ROUTE_ACTION_FORWARD,\n+\tMLXSW_SP_MR_ROUTE_ACTION_TRAP,\n+};\n+\n+enum mlxsw_sp_mr_route_prio {\n+\tMLXSW_SP_MR_ROUTE_PRIO_SG,\n+\tMLXSW_SP_MR_ROUTE_PRIO_STARG,\n+\tMLXSW_SP_MR_ROUTE_PRIO_CATCHALL,\n+\t__MLXSW_SP_MR_ROUTE_PRIO_MAX\n+};\n+\n+#define MLXSW_SP_MR_ROUTE_PRIO_MAX (__MLXSW_SP_MR_ROUTE_PRIO_MAX - 1)\n+\n+struct mlxsw_sp_mr_route_key {\n+\tint vrid;\n+\tenum mlxsw_sp_l3proto proto;\n+\tunion mlxsw_sp_l3addr group;\n+\tunion mlxsw_sp_l3addr group_mask;\n+\tunion mlxsw_sp_l3addr source;\n+\tunion mlxsw_sp_l3addr source_mask;\n+};\n+\n+struct 
mlxsw_sp_mr_route_info {\n+\tenum mlxsw_sp_mr_route_action route_action;\n+\tu16 irif_index;\n+\tu16 *erif_indices;\n+\tsize_t erif_num;\n+\tu16 min_mtu;\n+};\n+\n+struct mlxsw_sp_mr_route_params {\n+\tstruct mlxsw_sp_mr_route_key key;\n+\tstruct mlxsw_sp_mr_route_info value;\n+\tenum mlxsw_sp_mr_route_prio prio;\n+};\n+\n+struct mlxsw_sp_mr_ops {\n+\tint priv_size;\n+\tint route_priv_size;\n+\tint (*init)(struct mlxsw_sp *mlxsw_sp, void *priv);\n+\tint (*route_create)(struct mlxsw_sp *mlxsw_sp, void *priv,\n+\t\t\t void *route_priv,\n+\t\t\t struct mlxsw_sp_mr_route_params *route_params);\n+\tint (*route_update)(struct mlxsw_sp *mlxsw_sp, void *route_priv,\n+\t\t\t struct mlxsw_sp_mr_route_info *route_info);\n+\tint (*route_stats)(struct mlxsw_sp *mlxsw_sp, void *route_priv,\n+\t\t\t u64 *packets, u64 *bytes);\n+\tint (*route_action_update)(struct mlxsw_sp *mlxsw_sp, void *route_priv,\n+\t\t\t\t enum mlxsw_sp_mr_route_action route_action);\n+\tint (*route_min_mtu_update)(struct mlxsw_sp *mlxsw_sp, void *route_priv,\n+\t\t\t\t u16 min_mtu);\n+\tint (*route_irif_update)(struct mlxsw_sp *mlxsw_sp, void *route_priv,\n+\t\t\t\t u16 irif_index);\n+\tint (*route_erif_add)(struct mlxsw_sp *mlxsw_sp, void *route_priv,\n+\t\t\t u16 erif_index);\n+\tint (*route_erif_del)(struct mlxsw_sp *mlxsw_sp, void *route_priv,\n+\t\t\t u16 erif_index);\n+\tvoid (*route_destroy)(struct mlxsw_sp *mlxsw_sp, void *priv,\n+\t\t\t void *route_priv);\n+\tvoid (*fini)(void *priv);\n+};\n+\n+struct mlxsw_sp_mr;\n+struct mlxsw_sp_mr_table;\n+\n+int mlxsw_sp_mr_init(struct mlxsw_sp *mlxsw_sp,\n+\t\t const struct mlxsw_sp_mr_ops *mr_ops);\n+void mlxsw_sp_mr_fini(struct mlxsw_sp *mlxsw_sp);\n+int mlxsw_sp_mr_route4_add(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t struct mfc_cache *mfc, bool replace);\n+void mlxsw_sp_mr_route4_del(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t struct mfc_cache *mfc);\n+int mlxsw_sp_mr_vif_add(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\tstruct net_device *dev, vifi_t 
vif_index,\n+\t\t\tunsigned long vif_flags,\n+\t\t\tconst struct mlxsw_sp_rif *rif);\n+void mlxsw_sp_mr_vif_del(struct mlxsw_sp_mr_table *mr_table, vifi_t vif_index);\n+int mlxsw_sp_mr_rif_add(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\tconst struct mlxsw_sp_rif *rif);\n+void mlxsw_sp_mr_rif_del(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t const struct mlxsw_sp_rif *rif);\n+void mlxsw_sp_mr_rif_mtu_update(struct mlxsw_sp_mr_table *mr_table,\n+\t\t\t\tconst struct mlxsw_sp_rif *rif, int mtu);\n+struct mlxsw_sp_mr_table *mlxsw_sp_mr_table_create(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\t\t u32 tb_id,\n+\t\t\t\t\t\t enum mlxsw_sp_l3proto proto);\n+void mlxsw_sp_mr_table_destroy(struct mlxsw_sp_mr_table *mr_table);\n+void mlxsw_sp_mr_table_flush(struct mlxsw_sp_mr_table *mr_table);\n+bool mlxsw_sp_mr_table_empty(const struct mlxsw_sp_mr_table *mr_table);\n+\n+#endif\n", "prefixes": [ "net-next", "07/12" ] }