Patch Detail

GET:
Show a patch.
PATCH:
Partially update a patch.
PUT:
Update a patch.
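Requests against this endpoint can be built with Python's standard library alone. The sketch below only constructs a GET and an authenticated PATCH without sending them; the token value is a placeholder (write operations require a real Patchwork API token), and the `"accepted"` state slug is an assumed example, not taken from this page.

```python
import json
import urllib.request

BASE = "http://patchwork.ozlabs.org/api"
PATCH_ID = 816698

# GET: show a patch (no authentication required).
get_req = urllib.request.Request(f"{BASE}/patches/{PATCH_ID}/")

# PATCH: partially update a patch. Mutating requests need an API token;
# "<your-api-token>" is a placeholder, and "accepted" is an example state slug.
body = json.dumps({"state": "accepted"}).encode()
patch_req = urllib.request.Request(
    f"{BASE}/patches/{PATCH_ID}/",
    data=body,
    method="PATCH",
    headers={
        "Authorization": "Token <your-api-token>",
        "Content-Type": "application/json",
    },
)
```

Sending either request is then a matter of `urllib.request.urlopen(req)`; building them separately keeps the example runnable offline.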
GET /api/patches/816698/?format=api
{ "id": 816698, "url": "http://patchwork.ozlabs.org/api/patches/816698/?format=api", "web_url": "http://patchwork.ozlabs.org/project/netdev/patch/20170921064338.1282-9-jiri@resnulli.us/", "project": { "id": 7, "url": "http://patchwork.ozlabs.org/api/projects/7/?format=api", "name": "Linux network development", "link_name": "netdev", "list_id": "netdev.vger.kernel.org", "list_email": "netdev@vger.kernel.org", "web_url": null, "scm_url": null, "webscm_url": null, "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20170921064338.1282-9-jiri@resnulli.us>", "list_archive_url": null, "date": "2017-09-21T06:43:34", "name": "[net-next,08/12] mlxsw: spectrum: Add the multicast routing hardware logic", "commit_ref": null, "pull_url": null, "state": "changes-requested", "archived": true, "hash": "f0e5b771b6e13673b960ef69e9f3cc50af98e346", "submitter": { "id": 15321, "url": "http://patchwork.ozlabs.org/api/people/15321/?format=api", "name": "Jiri Pirko", "email": "jiri@resnulli.us" }, "delegate": { "id": 34, "url": "http://patchwork.ozlabs.org/api/users/34/?format=api", "username": "davem", "first_name": "David", "last_name": "Miller", "email": "davem@davemloft.net" }, "mbox": "http://patchwork.ozlabs.org/project/netdev/patch/20170921064338.1282-9-jiri@resnulli.us/mbox/", "series": [ { "id": 4309, "url": "http://patchwork.ozlabs.org/api/series/4309/?format=api", "web_url": "http://patchwork.ozlabs.org/project/netdev/list/?series=4309", "date": "2017-09-21T06:43:26", "name": "mlxsw: Add support for offloading IPv4 multicast routes", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/4309/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/816698/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/816698/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<netdev-owner@vger.kernel.org>", "X-Original-To": "patchwork-incoming@ozlabs.org", "Delivered-To": 
"patchwork-incoming@ozlabs.org", "Authentication-Results": [ "ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=netdev-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)", "ozlabs.org; dkim=pass (2048-bit key;\n\tunprotected) header.d=resnulli-us.20150623.gappssmtp.com\n\theader.i=@resnulli-us.20150623.gappssmtp.com\n\theader.b=\"BunWLI9I\"; dkim-atps=neutral" ], "Received": [ "from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3xyRsn5n5Zz9s7g\n\tfor <patchwork-incoming@ozlabs.org>;\n\tThu, 21 Sep 2017 16:44:13 +1000 (AEST)", "(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751980AbdIUGoL (ORCPT <rfc822;patchwork-incoming@ozlabs.org>);\n\tThu, 21 Sep 2017 02:44:11 -0400", "from mail-wm0-f65.google.com ([74.125.82.65]:38572 \"EHLO\n\tmail-wm0-f65.google.com\" rhost-flags-OK-OK-OK-OK) by vger.kernel.org\n\twith ESMTP id S1751732AbdIUGnr (ORCPT\n\t<rfc822;netdev@vger.kernel.org>); Thu, 21 Sep 2017 02:43:47 -0400", "by mail-wm0-f65.google.com with SMTP id x17so4263379wmd.5\n\tfor <netdev@vger.kernel.org>; Wed, 20 Sep 2017 23:43:46 -0700 (PDT)", "from localhost (ip-89-177-125-82.net.upcbroadband.cz.\n\t[89.177.125.82]) by smtp.gmail.com with ESMTPSA id\n\t204sm1338179wms.1.2017.09.20.23.43.45\n\t(version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256);\n\tWed, 20 Sep 2017 23:43:45 -0700 (PDT)" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=resnulli-us.20150623.gappssmtp.com; 
s=20150623;\n\th=from:to:cc:subject:date:message-id:in-reply-to:references;\n\tbh=kDEj1hIy+x40PPZfO87sRiggvNuK76yq7+EaF0FB/ac=;\n\tb=BunWLI9IMxHAYw64xdwRbr/AmvrUCmXVJq2Mf/0vWvIduS8HfYerA+vWfFj21+79DR\n\tvkj20pF6sMG4NjMlxJfEH15AooNseURxN78r9LfKR/8XUyIpCvzgDDrJH0VQ1jvjMI91\n\tMx6x+yflfnNaUs6tVB25yFg4P/KcjkSdOO1ZOoWzRSl1h4xvkh4fI0ucERvxAznvg1B2\n\tK59tb4HvhOxPNSejYmXByyw2/sW4vKi5t907kGcOds+CYHO2IKHTnBAFN3P4qHkvFAGR\n\tJk06swO4BH1k6H8GC3DYkxRRLVGEoXw2baGVhIfZ0Jcsd484FdOz9tPEDtGRPUnhIiKu\n\tE2Tg==", "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to\n\t:references;\n\tbh=kDEj1hIy+x40PPZfO87sRiggvNuK76yq7+EaF0FB/ac=;\n\tb=UOnadXQos58lCUEPJyVPWc05K7sLobnsO5cvYWRnmGLMKguS0f7GgboE1RwLMhIJLd\n\tE9/4u9zCNwS5ED5WqG1+SvT9kvLfoXo+o2XCpk8a7X/g2rw7fo2fO2mY9oIUy+LYSQR3\n\tffegMcgIQx73L46wryhuHnyags9G0EYex8rEykpufxw295alNPJDqovo4h6+h4twJ6Pp\n\t/nLce/+zvEZLdiVTCCai7LC86tqnYdPrheIOhrkSHf0mr3rzOygCOagYaK15u1Go9tsG\n\tgtbv8VTgcG6FzwJXSUjs0bwlXe4NNiajcIYOwcwHHjE9qVsxq4gjvqMXTTRdbfo4YcR5\n\tzyUQ==", "X-Gm-Message-State": "AHPjjUgnoNnlpfsDX0uw47SE4/yvymmGgVNevr81n1VyhvvXsA2AClVB\n\tlBSsBwBCKGa/VT+9ZAUMvfeXV+L1", "X-Google-Smtp-Source": "AOwi7QAhH88s9Tl6lg+Dxpdb+9ul3Ni8TwFBpYENbXz/AgmA7g7U28Y3PaVu5i0jau36PZ9t6+KwyQ==", "X-Received": "by 10.28.22.82 with SMTP id 79mr6908wmw.70.1505976225940;\n\tWed, 20 Sep 2017 23:43:45 -0700 (PDT)", "From": "Jiri Pirko <jiri@resnulli.us>", "To": "netdev@vger.kernel.org", "Cc": "davem@davemloft.net, yotamg@mellanox.com, idosch@mellanox.com,\n\tmlxsw@mellanox.com", "Subject": "[patch net-next 08/12] mlxsw: spectrum: Add the multicast routing\n\thardware logic", "Date": "Thu, 21 Sep 2017 08:43:34 +0200", "Message-Id": "<20170921064338.1282-9-jiri@resnulli.us>", "X-Mailer": "git-send-email 2.9.5", "In-Reply-To": "<20170921064338.1282-1-jiri@resnulli.us>", "References": "<20170921064338.1282-1-jiri@resnulli.us>", "Sender": 
"netdev-owner@vger.kernel.org", "Precedence": "bulk", "List-ID": "<netdev.vger.kernel.org>", "X-Mailing-List": "netdev@vger.kernel.org" }, "content": "From: Yotam Gigi <yotamg@mellanox.com>\n\nImplement the multicast routing hardware API introduced in previous patch\nfor the specific spectrum hardware.\n\nThe spectrum hardware multicast routes are written using the RMFT2 register\nand point to an ACL flexible action set. The actions used for multicast\nroutes are:\n - Counter action, which allows counting bytes and packets on multicast\n routes.\n - Multicast route action, which provide RPF check and do the actual packet\n duplication to a list of RIFs.\n - Trap action, in the case the route action specified by the called is\n trap.\n\nSigned-off-by: Yotam Gigi <yotamg@mellanox.com>\nReviewed-by: Ido Schimmel <idosch@mellanox.com>\nSigned-off-by: Jiri Pirko <jiri@mellanox.com>\n---\n drivers/net/ethernet/mellanox/mlxsw/Makefile | 2 +-\n drivers/net/ethernet/mellanox/mlxsw/spectrum.h | 1 +\n .../net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c | 828 +++++++++++++++++++++\n .../net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.h | 43 ++\n 4 files changed, 873 insertions(+), 1 deletion(-)\n create mode 100644 drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c\n create mode 100644 drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.h", "diff": "diff --git a/drivers/net/ethernet/mellanox/mlxsw/Makefile b/drivers/net/ethernet/mellanox/mlxsw/Makefile\nindex 9b29764..4816504 100644\n--- a/drivers/net/ethernet/mellanox/mlxsw/Makefile\n+++ b/drivers/net/ethernet/mellanox/mlxsw/Makefile\n@@ -18,7 +18,7 @@ mlxsw_spectrum-objs\t\t:= spectrum.o spectrum_buffers.o \\\n \t\t\t\t spectrum_acl.o spectrum_flower.o \\\n \t\t\t\t spectrum_cnt.o spectrum_fid.o \\\n \t\t\t\t spectrum_ipip.o spectrum_acl_flex_actions.o \\\n-\t\t\t\t spectrum_mr.o\n+\t\t\t\t spectrum_mr.o spectrum_mr_tcam.o\n mlxsw_spectrum-$(CONFIG_MLXSW_SPECTRUM_DCB)\t+= spectrum_dcb.o\n 
mlxsw_spectrum-$(CONFIG_NET_DEVLINK) += spectrum_dpipe.o\n obj-$(CONFIG_MLXSW_MINIMAL)\t+= mlxsw_minimal.o\ndiff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h\nindex 51d8b9f..d06f7fe 100644\n--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h\n+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h\n@@ -139,6 +139,7 @@ struct mlxsw_sp_port_mall_tc_entry {\n struct mlxsw_sp_sb;\n struct mlxsw_sp_bridge;\n struct mlxsw_sp_router;\n+struct mlxsw_sp_mr;\n struct mlxsw_sp_acl;\n struct mlxsw_sp_counter_pool;\n struct mlxsw_sp_fid_core;\ndiff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c\nnew file mode 100644\nindex 0000000..cda9e9a\n--- /dev/null\n+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c\n@@ -0,0 +1,828 @@\n+/*\n+ * drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c\n+ * Copyright (c) 2017 Mellanox Technologies. All rights reserved.\n+ * Copyright (c) 2017 Yotam Gigi <yotamg@mellanox.com>\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions are met:\n+ *\n+ * 1. Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * 2. Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in the\n+ * documentation and/or other materials provided with the distribution.\n+ * 3. 
Neither the names of the copyright holders nor the names of its\n+ * contributors may be used to endorse or promote products derived from\n+ * this software without specific prior written permission.\n+ *\n+ * Alternatively, this software may be distributed under the terms of the\n+ * GNU General Public License (\"GPL\") version 2 as published by the Free\n+ * Software Foundation.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n+ * POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include <linux/kernel.h>\n+#include <linux/list.h>\n+#include <linux/netdevice.h>\n+#include <linux/parman.h>\n+\n+#include \"reg.h\"\n+#include \"spectrum.h\"\n+#include \"core_acl_flex_actions.h\"\n+#include \"spectrum_mr.h\"\n+\n+struct mlxsw_sp_mr_tcam_region {\n+\tstruct mlxsw_sp *mlxsw_sp;\n+\tenum mlxsw_reg_rtar_key_type rtar_key_type;\n+\tstruct parman *parman;\n+\tstruct parman_prio *parman_prios;\n+};\n+\n+struct mlxsw_sp_mr_tcam {\n+\tstruct mlxsw_sp_mr_tcam_region ipv4_tcam_region;\n+};\n+\n+/* This struct maps to one RIGR2 register entry */\n+struct mlxsw_sp_mr_erif_sublist {\n+\tstruct list_head list;\n+\tu32 rigr2_kvdl_index;\n+\tint num_erifs;\n+\tu16 erif_indices[MLXSW_REG_RIGR2_MAX_ERIFS];\n+\tbool synced;\n+};\n+\n+struct mlxsw_sp_mr_tcam_erif_list 
{\n+\tstruct list_head erif_sublists;\n+\tu32 kvdl_index;\n+};\n+\n+static bool\n+mlxsw_sp_mr_erif_sublist_full(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t struct mlxsw_sp_mr_erif_sublist *erif_sublist)\n+{\n+\tint erif_list_entries = MLXSW_CORE_RES_GET(mlxsw_sp->core,\n+\t\t\t\t\t\t MC_ERIF_LIST_ENTRIES);\n+\n+\treturn erif_sublist->num_erifs == erif_list_entries;\n+}\n+\n+static void\n+mlxsw_sp_mr_erif_list_init(struct mlxsw_sp_mr_tcam_erif_list *erif_list)\n+{\n+\tINIT_LIST_HEAD(&erif_list->erif_sublists);\n+}\n+\n+#define MLXSW_SP_KVDL_RIGR2_SIZE 1\n+\n+static struct mlxsw_sp_mr_erif_sublist *\n+mlxsw_sp_mr_erif_sublist_create(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\tstruct mlxsw_sp_mr_tcam_erif_list *erif_list)\n+{\n+\tstruct mlxsw_sp_mr_erif_sublist *erif_sublist;\n+\tint err;\n+\n+\terif_sublist = kzalloc(sizeof(*erif_sublist), GFP_KERNEL);\n+\tif (!erif_sublist)\n+\t\treturn ERR_PTR(-ENOMEM);\n+\terr = mlxsw_sp_kvdl_alloc(mlxsw_sp, MLXSW_SP_KVDL_RIGR2_SIZE,\n+\t\t\t\t &erif_sublist->rigr2_kvdl_index);\n+\tif (err) {\n+\t\tkfree(erif_sublist);\n+\t\treturn ERR_PTR(err);\n+\t}\n+\n+\tlist_add_tail(&erif_sublist->list, &erif_list->erif_sublists);\n+\treturn erif_sublist;\n+}\n+\n+static void\n+mlxsw_sp_mr_erif_sublist_destroy(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t struct mlxsw_sp_mr_erif_sublist *erif_sublist)\n+{\n+\tlist_del(&erif_sublist->list);\n+\tmlxsw_sp_kvdl_free(mlxsw_sp, erif_sublist->rigr2_kvdl_index);\n+\tkfree(erif_sublist);\n+}\n+\n+static int\n+mlxsw_sp_mr_erif_list_add(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t struct mlxsw_sp_mr_tcam_erif_list *erif_list,\n+\t\t\t u16 erif_index)\n+{\n+\tstruct mlxsw_sp_mr_erif_sublist *sublist;\n+\n+\t/* If either there is no erif_entry or the last one is full, allocate a\n+\t * new one.\n+\t */\n+\tif (list_empty(&erif_list->erif_sublists)) {\n+\t\tsublist = mlxsw_sp_mr_erif_sublist_create(mlxsw_sp, erif_list);\n+\t\tif (IS_ERR(sublist))\n+\t\t\treturn PTR_ERR(sublist);\n+\t\terif_list->kvdl_index = 
sublist->rigr2_kvdl_index;\n+\t} else {\n+\t\tsublist = list_last_entry(&erif_list->erif_sublists,\n+\t\t\t\t\t struct mlxsw_sp_mr_erif_sublist,\n+\t\t\t\t\t list);\n+\t\tsublist->synced = false;\n+\t\tif (mlxsw_sp_mr_erif_sublist_full(mlxsw_sp, sublist)) {\n+\t\t\tsublist = mlxsw_sp_mr_erif_sublist_create(mlxsw_sp,\n+\t\t\t\t\t\t\t\t erif_list);\n+\t\t\tif (IS_ERR(sublist))\n+\t\t\t\treturn PTR_ERR(sublist);\n+\t\t}\n+\t}\n+\n+\t/* Add the eRIF to the last entry's last index */\n+\tsublist->erif_indices[sublist->num_erifs++] = erif_index;\n+\treturn 0;\n+}\n+\n+static void\n+mlxsw_sp_mr_erif_list_flush(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t struct mlxsw_sp_mr_tcam_erif_list *erif_list)\n+{\n+\tstruct mlxsw_sp_mr_erif_sublist *erif_sublist, *tmp;\n+\n+\tlist_for_each_entry_safe(erif_sublist, tmp, &erif_list->erif_sublists,\n+\t\t\t\t list)\n+\t\tmlxsw_sp_mr_erif_sublist_destroy(mlxsw_sp, erif_sublist);\n+}\n+\n+static int\n+mlxsw_sp_mr_erif_list_commit(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t struct mlxsw_sp_mr_tcam_erif_list *erif_list)\n+{\n+\tstruct mlxsw_sp_mr_erif_sublist *curr_sublist;\n+\tchar rigr2_pl[MLXSW_REG_RIGR2_LEN];\n+\tint err;\n+\tint i;\n+\n+\tlist_for_each_entry(curr_sublist, &erif_list->erif_sublists, list) {\n+\t\tif (curr_sublist->synced)\n+\t\t\tcontinue;\n+\n+\t\t/* If the sublist is not the last one, pack the next index */\n+\t\tif (list_is_last(&curr_sublist->list,\n+\t\t\t\t &erif_list->erif_sublists)) {\n+\t\t\tmlxsw_reg_rigr2_pack(rigr2_pl,\n+\t\t\t\t\t curr_sublist->rigr2_kvdl_index,\n+\t\t\t\t\t false, 0);\n+\t\t} else {\n+\t\t\tstruct mlxsw_sp_mr_erif_sublist *next_sublist;\n+\n+\t\t\tnext_sublist = list_next_entry(curr_sublist, list);\n+\t\t\tmlxsw_reg_rigr2_pack(rigr2_pl,\n+\t\t\t\t\t curr_sublist->rigr2_kvdl_index,\n+\t\t\t\t\t true,\n+\t\t\t\t\t next_sublist->rigr2_kvdl_index);\n+\t\t}\n+\n+\t\t/* Pack all the erifs */\n+\t\tfor (i = 0; i < curr_sublist->num_erifs; i++) {\n+\t\t\tu16 erif_index = 
curr_sublist->erif_indices[i];\n+\n+\t\t\tmlxsw_reg_rigr2_erif_entry_pack(rigr2_pl, i, true,\n+\t\t\t\t\t\t\terif_index);\n+\t\t}\n+\n+\t\t/* Write the entry */\n+\t\terr = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rigr2),\n+\t\t\t\t rigr2_pl);\n+\t\tif (err)\n+\t\t\t/* No need of a rollback here because this\n+\t\t\t * hardware entry should not be pointed yet.\n+\t\t\t */\n+\t\t\treturn err;\n+\t\tcurr_sublist->synced = true;\n+\t}\n+\treturn 0;\n+}\n+\n+static void mlxsw_sp_mr_erif_list_move(struct mlxsw_sp_mr_tcam_erif_list *to,\n+\t\t\t\t struct mlxsw_sp_mr_tcam_erif_list *from)\n+{\n+\tlist_splice(&from->erif_sublists, &to->erif_sublists);\n+\tto->kvdl_index = from->kvdl_index;\n+}\n+\n+struct mlxsw_sp_mr_tcam_route {\n+\tstruct mlxsw_sp_mr_tcam_erif_list erif_list;\n+\tstruct mlxsw_afa_block *afa_block;\n+\tu32 counter_index;\n+\tstruct parman_item parman_item;\n+\tstruct parman_prio *parman_prio;\n+\tenum mlxsw_sp_mr_route_action action;\n+\tstruct mlxsw_sp_mr_route_key key;\n+\tu16 irif_index;\n+\tu16 min_mtu;\n+};\n+\n+static struct mlxsw_afa_block *\n+mlxsw_sp_mr_tcam_afa_block_create(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t enum mlxsw_sp_mr_route_action route_action,\n+\t\t\t\t u16 irif_index, u32 counter_index,\n+\t\t\t\t u16 min_mtu,\n+\t\t\t\t struct mlxsw_sp_mr_tcam_erif_list *erif_list)\n+{\n+\tstruct mlxsw_afa_block *afa_block;\n+\tint err;\n+\n+\tafa_block = mlxsw_afa_block_create(mlxsw_sp->afa);\n+\tif (IS_ERR(afa_block))\n+\t\treturn afa_block;\n+\n+\terr = mlxsw_afa_block_append_counter(afa_block, counter_index);\n+\tif (err)\n+\t\tgoto err;\n+\n+\tswitch (route_action) {\n+\tcase MLXSW_SP_MR_ROUTE_ACTION_TRAP:\n+\t\terr = mlxsw_afa_block_append_trap(afa_block,\n+\t\t\t\t\t\t MLXSW_TRAP_ID_ACL1);\n+\t\tif (err)\n+\t\t\tgoto err;\n+\t\tbreak;\n+\tcase MLXSW_SP_MR_ROUTE_ACTION_FORWARD:\n+\t\t/* If we are about to append a multicast router action, commit\n+\t\t * the erif_list.\n+\t\t */\n+\t\terr = mlxsw_sp_mr_erif_list_commit(mlxsw_sp, 
erif_list);\n+\t\tif (err)\n+\t\t\tgoto err;\n+\n+\t\terr = mlxsw_afa_block_append_mcrouter(afa_block, irif_index,\n+\t\t\t\t\t\t min_mtu, false,\n+\t\t\t\t\t\t erif_list->kvdl_index);\n+\t\tif (err)\n+\t\t\tgoto err;\n+\t\tbreak;\n+\tdefault:\n+\t\terr = -EINVAL;\n+\t\tgoto err;\n+\t}\n+\n+\terr = mlxsw_afa_block_commit(afa_block);\n+\tif (err)\n+\t\tgoto err;\n+\treturn afa_block;\n+err:\n+\tmlxsw_afa_block_destroy(afa_block);\n+\treturn ERR_PTR(err);\n+}\n+\n+static void\n+mlxsw_sp_mr_tcam_afa_block_destroy(struct mlxsw_afa_block *afa_block)\n+{\n+\tmlxsw_afa_block_destroy(afa_block);\n+}\n+\n+static int mlxsw_sp_mr_tcam_route_replace(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\t struct parman_item *parman_item,\n+\t\t\t\t\t struct mlxsw_sp_mr_route_key *key,\n+\t\t\t\t\t struct mlxsw_afa_block *afa_block)\n+{\n+\tchar rmft2_pl[MLXSW_REG_RMFT2_LEN];\n+\n+\tswitch (key->proto) {\n+\tcase MLXSW_SP_L3_PROTO_IPV4:\n+\t\tmlxsw_reg_rmft2_ipv4_pack(rmft2_pl, true, parman_item->index,\n+\t\t\t\t\t key->vrid,\n+\t\t\t\t\t MLXSW_REG_RMFT2_IRIF_MASK_IGNORE, 0,\n+\t\t\t\t\t ntohl(key->group.addr4),\n+\t\t\t\t\t ntohl(key->group_mask.addr4),\n+\t\t\t\t\t ntohl(key->source.addr4),\n+\t\t\t\t\t ntohl(key->source_mask.addr4),\n+\t\t\t\t\t mlxsw_afa_block_first_set(afa_block));\n+\t\tbreak;\n+\tcase MLXSW_SP_L3_PROTO_IPV6:\n+\tdefault:\n+\t\tWARN_ON_ONCE(1);\n+\t}\n+\n+\treturn mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rmft2), rmft2_pl);\n+}\n+\n+static int mlxsw_sp_mr_tcam_route_remove(struct mlxsw_sp *mlxsw_sp, int vrid,\n+\t\t\t\t\t struct parman_item *parman_item)\n+{\n+\tchar rmft2_pl[MLXSW_REG_RMFT2_LEN];\n+\n+\tmlxsw_reg_rmft2_ipv4_pack(rmft2_pl, false, parman_item->index, vrid,\n+\t\t\t\t 0, 0, 0, 0, 0, 0, NULL);\n+\n+\treturn mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rmft2), rmft2_pl);\n+}\n+\n+static int\n+mlxsw_sp_mr_tcam_erif_populate(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t struct mlxsw_sp_mr_tcam_erif_list *erif_list,\n+\t\t\t struct mlxsw_sp_mr_route_info 
*route_info)\n+{\n+\tint err;\n+\tint i;\n+\n+\tfor (i = 0; i < route_info->erif_num; i++) {\n+\t\tu16 erif_index = route_info->erif_indices[i];\n+\n+\t\terr = mlxsw_sp_mr_erif_list_add(mlxsw_sp, erif_list,\n+\t\t\t\t\t\terif_index);\n+\t\tif (err)\n+\t\t\treturn err;\n+\t}\n+\treturn 0;\n+}\n+\n+static int\n+mlxsw_sp_mr_tcam_route_parman_item_add(struct mlxsw_sp_mr_tcam *mr_tcam,\n+\t\t\t\t struct mlxsw_sp_mr_tcam_route *route,\n+\t\t\t\t enum mlxsw_sp_mr_route_prio prio)\n+{\n+\tstruct parman_prio *parman_prio = NULL;\n+\tint err;\n+\n+\tswitch (route->key.proto) {\n+\tcase MLXSW_SP_L3_PROTO_IPV4:\n+\t\tparman_prio = &mr_tcam->ipv4_tcam_region.parman_prios[prio];\n+\t\terr = parman_item_add(mr_tcam->ipv4_tcam_region.parman,\n+\t\t\t\t parman_prio, &route->parman_item);\n+\t\tif (err)\n+\t\t\treturn err;\n+\t\tbreak;\n+\tcase MLXSW_SP_L3_PROTO_IPV6:\n+\tdefault:\n+\t\tWARN_ON_ONCE(1);\n+\t}\n+\troute->parman_prio = parman_prio;\n+\treturn 0;\n+}\n+\n+static void\n+mlxsw_sp_mr_tcam_route_parman_item_remove(struct mlxsw_sp_mr_tcam *mr_tcam,\n+\t\t\t\t\t struct mlxsw_sp_mr_tcam_route *route)\n+{\n+\tswitch (route->key.proto) {\n+\tcase MLXSW_SP_L3_PROTO_IPV4:\n+\t\tparman_item_remove(mr_tcam->ipv4_tcam_region.parman,\n+\t\t\t\t route->parman_prio, &route->parman_item);\n+\t\tbreak;\n+\tcase MLXSW_SP_L3_PROTO_IPV6:\n+\tdefault:\n+\t\tWARN_ON_ONCE(1);\n+\t}\n+}\n+\n+static int\n+mlxsw_sp_mr_tcam_route_create(struct mlxsw_sp *mlxsw_sp, void *priv,\n+\t\t\t void *route_priv,\n+\t\t\t struct mlxsw_sp_mr_route_params *route_params)\n+{\n+\tstruct mlxsw_sp_mr_tcam_route *route = route_priv;\n+\tstruct mlxsw_sp_mr_tcam *mr_tcam = priv;\n+\tint err;\n+\n+\troute->key = route_params->key;\n+\troute->irif_index = route_params->value.irif_index;\n+\troute->min_mtu = route_params->value.min_mtu;\n+\troute->action = route_params->value.route_action;\n+\n+\t/* Create the egress RIFs list */\n+\tmlxsw_sp_mr_erif_list_init(&route->erif_list);\n+\terr = 
mlxsw_sp_mr_tcam_erif_populate(mlxsw_sp, &route->erif_list,\n+\t\t\t\t\t &route_params->value);\n+\tif (err)\n+\t\tgoto err_erif_populate;\n+\n+\t/* Create the flow counter */\n+\terr = mlxsw_sp_flow_counter_alloc(mlxsw_sp, &route->counter_index);\n+\tif (err)\n+\t\tgoto err_counter_alloc;\n+\n+\t/* Create the flexible action block */\n+\troute->afa_block = mlxsw_sp_mr_tcam_afa_block_create(mlxsw_sp,\n+\t\t\t\t\t\t\t route->action,\n+\t\t\t\t\t\t\t route->irif_index,\n+\t\t\t\t\t\t\t route->counter_index,\n+\t\t\t\t\t\t\t route->min_mtu,\n+\t\t\t\t\t\t\t &route->erif_list);\n+\tif (IS_ERR(route->afa_block)) {\n+\t\terr = PTR_ERR(route->afa_block);\n+\t\tgoto err_afa_block_create;\n+\t}\n+\n+\t/* Allocate place in the TCAM */\n+\terr = mlxsw_sp_mr_tcam_route_parman_item_add(mr_tcam, route,\n+\t\t\t\t\t\t route_params->prio);\n+\tif (err)\n+\t\tgoto err_parman_item_add;\n+\n+\t/* Write the route to the TCAM */\n+\terr = mlxsw_sp_mr_tcam_route_replace(mlxsw_sp, &route->parman_item,\n+\t\t\t\t\t &route->key, route->afa_block);\n+\tif (err)\n+\t\tgoto err_route_replace;\n+\treturn 0;\n+\n+err_route_replace:\n+\tmlxsw_sp_mr_tcam_route_parman_item_remove(mr_tcam, route);\n+err_parman_item_add:\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(route->afa_block);\n+err_afa_block_create:\n+\tmlxsw_sp_flow_counter_free(mlxsw_sp, route->counter_index);\n+err_erif_populate:\n+err_counter_alloc:\n+\tmlxsw_sp_mr_erif_list_flush(mlxsw_sp, &route->erif_list);\n+\treturn err;\n+}\n+\n+static void mlxsw_sp_mr_tcam_route_destroy(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\t void *priv, void *route_priv)\n+{\n+\tstruct mlxsw_sp_mr_tcam_route *route = route_priv;\n+\tstruct mlxsw_sp_mr_tcam *mr_tcam = priv;\n+\n+\tmlxsw_sp_mr_tcam_route_remove(mlxsw_sp, route->key.vrid,\n+\t\t\t\t &route->parman_item);\n+\tmlxsw_sp_mr_tcam_route_parman_item_remove(mr_tcam, route);\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(route->afa_block);\n+\tmlxsw_sp_flow_counter_free(mlxsw_sp, 
route->counter_index);\n+\tmlxsw_sp_mr_erif_list_flush(mlxsw_sp, &route->erif_list);\n+}\n+\n+static int mlxsw_sp_mr_tcam_route_stats(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\tvoid *route_priv, u64 *packets,\n+\t\t\t\t\tu64 *bytes)\n+{\n+\tstruct mlxsw_sp_mr_tcam_route *route = route_priv;\n+\n+\treturn mlxsw_sp_flow_counter_get(mlxsw_sp, route->counter_index,\n+\t\t\t\t\t packets, bytes);\n+}\n+\n+static int\n+mlxsw_sp_mr_tcam_route_action_update(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t void *route_priv,\n+\t\t\t\t enum mlxsw_sp_mr_route_action route_action)\n+{\n+\tstruct mlxsw_sp_mr_tcam_route *route = route_priv;\n+\tstruct mlxsw_afa_block *afa_block;\n+\tint err;\n+\n+\t/* Create a new flexible action block */\n+\tafa_block = mlxsw_sp_mr_tcam_afa_block_create(mlxsw_sp, route_action,\n+\t\t\t\t\t\t route->irif_index,\n+\t\t\t\t\t\t route->counter_index,\n+\t\t\t\t\t\t route->min_mtu,\n+\t\t\t\t\t\t &route->erif_list);\n+\tif (IS_ERR(afa_block))\n+\t\treturn PTR_ERR(afa_block);\n+\n+\t/* Update the TCAM route entry */\n+\terr = mlxsw_sp_mr_tcam_route_replace(mlxsw_sp, &route->parman_item,\n+\t\t\t\t\t &route->key, afa_block);\n+\tif (err)\n+\t\tgoto err;\n+\n+\t/* Delete the old one */\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(route->afa_block);\n+\troute->afa_block = afa_block;\n+\troute->action = route_action;\n+\treturn 0;\n+err:\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(afa_block);\n+\treturn err;\n+}\n+\n+static int mlxsw_sp_mr_tcam_route_min_mtu_update(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\t\t void *route_priv, u16 min_mtu)\n+{\n+\tstruct mlxsw_sp_mr_tcam_route *route = route_priv;\n+\tstruct mlxsw_afa_block *afa_block;\n+\tint err;\n+\n+\t/* Create a new flexible action block */\n+\tafa_block = mlxsw_sp_mr_tcam_afa_block_create(mlxsw_sp,\n+\t\t\t\t\t\t route->action,\n+\t\t\t\t\t\t route->irif_index,\n+\t\t\t\t\t\t route->counter_index,\n+\t\t\t\t\t\t min_mtu,\n+\t\t\t\t\t\t &route->erif_list);\n+\tif (IS_ERR(afa_block))\n+\t\treturn PTR_ERR(afa_block);\n+\n+\t/* 
Update the TCAM route entry */\n+\terr = mlxsw_sp_mr_tcam_route_replace(mlxsw_sp, &route->parman_item,\n+\t\t\t\t\t &route->key, afa_block);\n+\tif (err)\n+\t\tgoto err;\n+\n+\t/* Delete the old one */\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(route->afa_block);\n+\troute->afa_block = afa_block;\n+\troute->min_mtu = min_mtu;\n+\treturn 0;\n+err:\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(afa_block);\n+\treturn err;\n+}\n+\n+static int mlxsw_sp_mr_tcam_route_irif_update(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\t void *route_priv, u16 irif_index)\n+{\n+\tstruct mlxsw_sp_mr_tcam_route *route = route_priv;\n+\n+\tif (route->action != MLXSW_SP_MR_ROUTE_ACTION_TRAP)\n+\t\treturn -EINVAL;\n+\troute->irif_index = irif_index;\n+\treturn 0;\n+}\n+\n+static int mlxsw_sp_mr_tcam_route_erif_add(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\t void *route_priv, u16 erif_index)\n+{\n+\tstruct mlxsw_sp_mr_tcam_route *route = route_priv;\n+\tint err;\n+\n+\terr = mlxsw_sp_mr_erif_list_add(mlxsw_sp, &route->erif_list,\n+\t\t\t\t\terif_index);\n+\tif (err)\n+\t\treturn err;\n+\n+\t/* Commit the action only if the route action is not TRAP */\n+\tif (route->action != MLXSW_SP_MR_ROUTE_ACTION_TRAP)\n+\t\treturn mlxsw_sp_mr_erif_list_commit(mlxsw_sp,\n+\t\t\t\t\t\t &route->erif_list);\n+\treturn 0;\n+}\n+\n+static int mlxsw_sp_mr_tcam_route_erif_del(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t\t\t void *route_priv, u16 erif_index)\n+{\n+\tstruct mlxsw_sp_mr_tcam_route *route = route_priv;\n+\tstruct mlxsw_sp_mr_erif_sublist *erif_sublist;\n+\tstruct mlxsw_sp_mr_tcam_erif_list erif_list;\n+\tstruct mlxsw_afa_block *afa_block;\n+\tint err;\n+\tint i;\n+\n+\t/* Create a copy of the original erif_list without the deleted entry */\n+\tmlxsw_sp_mr_erif_list_init(&erif_list);\n+\tlist_for_each_entry(erif_sublist, &route->erif_list.erif_sublists, list) {\n+\t\tfor (i = 0; i < erif_sublist->num_erifs; i++) {\n+\t\t\tu16 curr_erif = erif_sublist->erif_indices[i];\n+\n+\t\t\tif (curr_erif == 
erif_index)\n+\t\t\t\tcontinue;\n+\t\t\terr = mlxsw_sp_mr_erif_list_add(mlxsw_sp, &erif_list,\n+\t\t\t\t\t\t\tcurr_erif);\n+\t\t\tif (err)\n+\t\t\t\tgoto err_erif_list_add;\n+\t\t}\n+\t}\n+\n+\t/* Create the flexible action block pointing to the new erif_list */\n+\tafa_block = mlxsw_sp_mr_tcam_afa_block_create(mlxsw_sp, route->action,\n+\t\t\t\t\t\t route->irif_index,\n+\t\t\t\t\t\t route->counter_index,\n+\t\t\t\t\t\t route->min_mtu,\n+\t\t\t\t\t\t &erif_list);\n+\tif (IS_ERR(afa_block)) {\n+\t\terr = PTR_ERR(afa_block);\n+\t\tgoto err_afa_block_create;\n+\t}\n+\n+\t/* Update the TCAM route entry */\n+\terr = mlxsw_sp_mr_tcam_route_replace(mlxsw_sp, &route->parman_item,\n+\t\t\t\t\t &route->key, afa_block);\n+\tif (err)\n+\t\tgoto err_route_write;\n+\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(route->afa_block);\n+\tmlxsw_sp_mr_erif_list_flush(mlxsw_sp, &route->erif_list);\n+\troute->afa_block = afa_block;\n+\tmlxsw_sp_mr_erif_list_move(&route->erif_list, &erif_list);\n+\treturn 0;\n+\n+err_route_write:\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(afa_block);\n+err_afa_block_create:\n+err_erif_list_add:\n+\tmlxsw_sp_mr_erif_list_flush(mlxsw_sp, &erif_list);\n+\treturn err;\n+}\n+\n+static int\n+mlxsw_sp_mr_tcam_route_update(struct mlxsw_sp *mlxsw_sp, void *route_priv,\n+\t\t\t struct mlxsw_sp_mr_route_info *route_info)\n+{\n+\tstruct mlxsw_sp_mr_tcam_route *route = route_priv;\n+\tstruct mlxsw_sp_mr_tcam_erif_list erif_list;\n+\tstruct mlxsw_afa_block *afa_block;\n+\tint err;\n+\n+\t/* Create a new erif_list */\n+\tmlxsw_sp_mr_erif_list_init(&erif_list);\n+\terr = mlxsw_sp_mr_tcam_erif_populate(mlxsw_sp, &erif_list, route_info);\n+\tif (err)\n+\t\tgoto err_erif_populate;\n+\n+\t/* Create the flexible action block pointing to the new erif_list */\n+\tafa_block = mlxsw_sp_mr_tcam_afa_block_create(mlxsw_sp,\n+\t\t\t\t\t\t route_info->route_action,\n+\t\t\t\t\t\t route_info->irif_index,\n+\t\t\t\t\t\t route->counter_index,\n+\t\t\t\t\t\t route_info->min_mtu,\n+\t\t\t\t\t\t 
&erif_list);\n+\tif (IS_ERR(afa_block)) {\n+\t\terr = PTR_ERR(afa_block);\n+\t\tgoto err_afa_block_create;\n+\t}\n+\n+\t/* Update the TCAM route entry */\n+\terr = mlxsw_sp_mr_tcam_route_replace(mlxsw_sp, &route->parman_item,\n+\t\t\t\t\t &route->key, afa_block);\n+\tif (err)\n+\t\tgoto err_route_write;\n+\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(route->afa_block);\n+\tmlxsw_sp_mr_erif_list_flush(mlxsw_sp, &route->erif_list);\n+\troute->afa_block = afa_block;\n+\tmlxsw_sp_mr_erif_list_move(&route->erif_list, &erif_list);\n+\troute->action = route_info->route_action;\n+\troute->irif_index = route_info->irif_index;\n+\troute->min_mtu = route_info->min_mtu;\n+\treturn 0;\n+\n+err_route_write:\n+\tmlxsw_sp_mr_tcam_afa_block_destroy(afa_block);\n+err_afa_block_create:\n+err_erif_populate:\n+\tmlxsw_sp_mr_erif_list_flush(mlxsw_sp, &erif_list);\n+\treturn err;\n+}\n+\n+#define MLXSW_SP_MR_TCAM_REGION_BASE_COUNT 16\n+#define MLXSW_SP_MR_TCAM_REGION_RESIZE_STEP 16\n+\n+static int\n+mlxsw_sp_mr_tcam_region_alloc(struct mlxsw_sp_mr_tcam_region *mr_tcam_region)\n+{\n+\tstruct mlxsw_sp *mlxsw_sp = mr_tcam_region->mlxsw_sp;\n+\tchar rtar_pl[MLXSW_REG_RTAR_LEN];\n+\n+\tmlxsw_reg_rtar_pack(rtar_pl, MLXSW_REG_RTAR_OP_ALLOCATE,\n+\t\t\t mr_tcam_region->rtar_key_type,\n+\t\t\t MLXSW_SP_MR_TCAM_REGION_BASE_COUNT);\n+\treturn mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rtar), rtar_pl);\n+}\n+\n+static void\n+mlxsw_sp_mr_tcam_region_free(struct mlxsw_sp_mr_tcam_region *mr_tcam_region)\n+{\n+\tstruct mlxsw_sp *mlxsw_sp = mr_tcam_region->mlxsw_sp;\n+\tchar rtar_pl[MLXSW_REG_RTAR_LEN];\n+\n+\tmlxsw_reg_rtar_pack(rtar_pl, MLXSW_REG_RTAR_OP_DEALLOCATE,\n+\t\t\t mr_tcam_region->rtar_key_type, 0);\n+\tmlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rtar), rtar_pl);\n+}\n+\n+static int mlxsw_sp_mr_tcam_region_parman_resize(void *priv,\n+\t\t\t\t\t\t unsigned long new_count)\n+{\n+\tstruct mlxsw_sp_mr_tcam_region *mr_tcam_region = priv;\n+\tstruct mlxsw_sp *mlxsw_sp = mr_tcam_region->mlxsw_sp;\n+\tchar 
rtar_pl[MLXSW_REG_RTAR_LEN];\n+\tu64 max_tcam_rules;\n+\n+\tmax_tcam_rules = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_TCAM_RULES);\n+\tif (new_count > max_tcam_rules)\n+\t\treturn -EINVAL;\n+\tmlxsw_reg_rtar_pack(rtar_pl, MLXSW_REG_RTAR_OP_RESIZE,\n+\t\t\t mr_tcam_region->rtar_key_type, new_count);\n+\treturn mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rtar), rtar_pl);\n+}\n+\n+static void mlxsw_sp_mr_tcam_region_parman_move(void *priv,\n+\t\t\t\t\t\tunsigned long from_index,\n+\t\t\t\t\t\tunsigned long to_index,\n+\t\t\t\t\t\tunsigned long count)\n+{\n+\tstruct mlxsw_sp_mr_tcam_region *mr_tcam_region = priv;\n+\tstruct mlxsw_sp *mlxsw_sp = mr_tcam_region->mlxsw_sp;\n+\tchar rrcr_pl[MLXSW_REG_RRCR_LEN];\n+\n+\tmlxsw_reg_rrcr_pack(rrcr_pl, MLXSW_REG_RRCR_OP_MOVE,\n+\t\t\t from_index, count,\n+\t\t\t mr_tcam_region->rtar_key_type, to_index);\n+\tmlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rrcr), rrcr_pl);\n+}\n+\n+static const struct parman_ops mlxsw_sp_mr_tcam_region_parman_ops = {\n+\t.base_count\t= MLXSW_SP_MR_TCAM_REGION_BASE_COUNT,\n+\t.resize_step\t= MLXSW_SP_MR_TCAM_REGION_RESIZE_STEP,\n+\t.resize\t\t= mlxsw_sp_mr_tcam_region_parman_resize,\n+\t.move\t\t= mlxsw_sp_mr_tcam_region_parman_move,\n+\t.algo\t\t= PARMAN_ALGO_TYPE_LSORT,\n+};\n+\n+static int\n+mlxsw_sp_mr_tcam_region_init(struct mlxsw_sp *mlxsw_sp,\n+\t\t\t struct mlxsw_sp_mr_tcam_region *mr_tcam_region,\n+\t\t\t enum mlxsw_reg_rtar_key_type rtar_key_type)\n+{\n+\tstruct parman_prio *parman_prios;\n+\tstruct parman *parman;\n+\tint err;\n+\tint i;\n+\n+\tmr_tcam_region->rtar_key_type = rtar_key_type;\n+\tmr_tcam_region->mlxsw_sp = mlxsw_sp;\n+\n+\terr = mlxsw_sp_mr_tcam_region_alloc(mr_tcam_region);\n+\tif (err)\n+\t\treturn err;\n+\n+\tparman = parman_create(&mlxsw_sp_mr_tcam_region_parman_ops,\n+\t\t\t mr_tcam_region);\n+\tif (!parman) {\n+\t\terr = -ENOMEM;\n+\t\tgoto err_parman_create;\n+\t}\n+\tmr_tcam_region->parman = parman;\n+\n+\tparman_prios = kmalloc_array(MLXSW_SP_MR_ROUTE_PRIO_MAX + 
1,\n+\t\t\t\t sizeof(*parman_prios), GFP_KERNEL);\n+\tif (!parman_prios) {\n+\t\terr = -ENOMEM;\n+\t\tgoto err_parman_prios_alloc;\n+\t}\n+\tmr_tcam_region->parman_prios = parman_prios;\n+\n+\tfor (i = 0; i < MLXSW_SP_MR_ROUTE_PRIO_MAX + 1; i++)\n+\t\tparman_prio_init(mr_tcam_region->parman,\n+\t\t\t\t &mr_tcam_region->parman_prios[i], i);\n+\treturn 0;\n+\n+err_parman_prios_alloc:\n+\tparman_destroy(parman);\n+err_parman_create:\n+\tmlxsw_sp_mr_tcam_region_free(mr_tcam_region);\n+\treturn err;\n+}\n+\n+static void\n+mlxsw_sp_mr_tcam_region_fini(struct mlxsw_sp_mr_tcam_region *mr_tcam_region)\n+{\n+\tint i;\n+\n+\tfor (i = 0; i < MLXSW_SP_MR_ROUTE_PRIO_MAX + 1; i++)\n+\t\tparman_prio_fini(&mr_tcam_region->parman_prios[i]);\n+\tkfree(mr_tcam_region->parman_prios);\n+\tparman_destroy(mr_tcam_region->parman);\n+\tmlxsw_sp_mr_tcam_region_free(mr_tcam_region);\n+}\n+\n+static int mlxsw_sp_mr_tcam_init(struct mlxsw_sp *mlxsw_sp, void *priv)\n+{\n+\tstruct mlxsw_sp_mr_tcam *mr_tcam = priv;\n+\n+\tif (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, MC_ERIF_LIST_ENTRIES) ||\n+\t !MLXSW_CORE_RES_VALID(mlxsw_sp->core, ACL_MAX_TCAM_RULES))\n+\t\treturn -EIO;\n+\n+\treturn mlxsw_sp_mr_tcam_region_init(mlxsw_sp,\n+\t\t\t\t\t &mr_tcam->ipv4_tcam_region,\n+\t\t\t\t\t MLXSW_REG_RTAR_KEY_TYPE_IPV4_MULTICAST);\n+}\n+\n+static void mlxsw_sp_mr_tcam_fini(void *priv)\n+{\n+\tstruct mlxsw_sp_mr_tcam *mr_tcam = priv;\n+\n+\tmlxsw_sp_mr_tcam_region_fini(&mr_tcam->ipv4_tcam_region);\n+}\n+\n+const struct mlxsw_sp_mr_ops mlxsw_sp_mr_tcam_ops = {\n+\t.priv_size = sizeof(struct mlxsw_sp_mr_tcam),\n+\t.route_priv_size = sizeof(struct mlxsw_sp_mr_tcam_route),\n+\t.init = mlxsw_sp_mr_tcam_init,\n+\t.route_create = mlxsw_sp_mr_tcam_route_create,\n+\t.route_update = mlxsw_sp_mr_tcam_route_update,\n+\t.route_stats = mlxsw_sp_mr_tcam_route_stats,\n+\t.route_action_update = mlxsw_sp_mr_tcam_route_action_update,\n+\t.route_min_mtu_update = mlxsw_sp_mr_tcam_route_min_mtu_update,\n+\t.route_irif_update = 
mlxsw_sp_mr_tcam_route_irif_update,\n+\t.route_erif_add = mlxsw_sp_mr_tcam_route_erif_add,\n+\t.route_erif_del = mlxsw_sp_mr_tcam_route_erif_del,\n+\t.route_destroy = mlxsw_sp_mr_tcam_route_destroy,\n+\t.fini = mlxsw_sp_mr_tcam_fini,\n+};\ndiff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.h\nnew file mode 100644\nindex 0000000..f9b59ee\n--- /dev/null\n+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.h\n@@ -0,0 +1,43 @@\n+/*\n+ * drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.h\n+ * Copyright (c) 2017 Mellanox Technologies. All rights reserved.\n+ * Copyright (c) 2017 Yotam Gigi <yotamg@mellanox.com>\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions are met:\n+ *\n+ * 1. Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * 2. Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in the\n+ * documentation and/or other materials provided with the distribution.\n+ * 3. Neither the names of the copyright holders nor the names of its\n+ * contributors may be used to endorse or promote products derived from\n+ * this software without specific prior written permission.\n+ *\n+ * Alternatively, this software may be distributed under the terms of the\n+ * GNU General Public License (\"GPL\") version 2 as published by the Free\n+ * Software Foundation.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n+ * POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#ifndef _MLXSW_SPECTRUM_MCROUTER_TCAM_H\n+#define _MLXSW_SPECTRUM_MCROUTER_TCAM_H\n+\n+#include \"spectrum.h\"\n+#include \"spectrum_mr.h\"\n+\n+extern const struct mlxsw_sp_mr_ops mlxsw_sp_mr_tcam_ops;\n+\n+#endif\n", "prefixes": [ "net-next", "08/12" ] }