From patchwork Thu May 21 21:10:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295705 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Tt+K23IO; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj2l6726z9sRW for ; Fri, 22 May 2020 07:10:55 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730586AbgEUVKy (ORCPT ); Thu, 21 May 2020 17:10:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33332 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726814AbgEUVKy (ORCPT ); Thu, 21 May 2020 17:10:54 -0400 Received: from mail-ej1-x642.google.com (mail-ej1-x642.google.com [IPv6:2a00:1450:4864:20::642]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AA608C061A0E for ; Thu, 21 May 2020 14:10:53 -0700 (PDT) Received: by mail-ej1-x642.google.com with SMTP id s21so10597975ejd.2 for ; Thu, 21 May 2020 14:10:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ZonQOyncl1AtkOCcxLI36jYh3pCrGXGOAlprW87oCJk=; b=Tt+K23IOr1yopG5GgYPNr6BrLVrbtwxyDn1lf9Wljz9ge49Fl8Xnold1V0GA+Ih4FL R+hgcxZhinRbnRPyfKYuWFNAeOI5BCY6RMWoQzbahUeCpCEoJK7U/8thrFTlcYzNwzKx b6VrwNktOdlHGgY9Dz5XV21TXdukX9O160DWhiiaBCb3TY90+te8YjurwcFtSJosAfyF l0v88AULr5JrBN/4h4WRd8tMdKwpuMam5Od190qpZ7Xga0SeXRYRvR/gDD2qMydahKhL wnF/yDX3x2bKAA7CpVdDtPJfsSqyClEAKmfYmlQkQyNxoEuUUzmU3EIuuZBnXHzRuRv/ sJSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ZonQOyncl1AtkOCcxLI36jYh3pCrGXGOAlprW87oCJk=; b=ea8EqgX/kqjTf+e7NCbU83zycJWA4dPSA1NGXH2aFoGf1TWr9csXypzNN5Xbqdv3ej D2HM2D0mZLwKH0Im01B/7bRdCi4UAcCY70CTQsohx+iBIV4YbGVbUtlxMw0xGczEyTNa txHahX9aO0Wj665XOCcVE/uVSIqoqXJ7igtFM/PfdDQhqZ34ttn4d77aNHLyHxdvZroa Lh4XCHnCRNbUBR1JxecimlN249/k4PMOtH8OwH0fkgyudE1gJplLJUwahcfAbK8xUf54 EBeMx/IN/giC4HgmIT4noZdyxXaYvsdSacrtktifCySt9Y1Z+EC8tRvRtXii4vsZeLX3 hezA== X-Gm-Message-State: AOAM53070GDu0X2lpT6OxzSGg/UNjN0gjB4m8nTdcy1MaFA5D4y+w3fL u+oLbOA1gQ4y4V7/IYok8kM= X-Google-Smtp-Source: ABdhPJxi5ZlR5eJOzEssB/VDGZOvtIKOspvoagW0I1K5BlrVni7iEg/TR2v4Oqm23BtRKEhBTDRoVw== X-Received: by 2002:a17:906:2e0e:: with SMTP id n14mr5183079eji.545.1590095452268; Thu, 21 May 2020 14:10:52 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.10.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:10:51 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, 
kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 01/13] net: core: dev_addr_lists: add VID to device address Date: Fri, 22 May 2020 00:10:24 +0300 Message-Id: <20200521211036.668624-2-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Ivan Khoronzhuk Although this is primarily intended for Ethernet VLANs, any address type with spare room for a VID can reuse it, so the VID is treated as a generic virtual ID extension rather than something belonging strictly to Ethernet VLAN VIDs; the overall change can be called individual virtual device filtering (IVDF). This patch adds a VID tag at the end of each address. The reserved address size is 32 bytes, so for 6-byte Ethernet addresses the tag can be added without increasing the address size: each such address has 32 - 6 = 26 bytes left to hold additional information, such as the VID of a virtual device address. When addresses are synced to the address list of a parent device, that list can therefore contain separate entries for each virtual device. This makes it possible to track separate address tables for virtual devices, if present, with the virtual device placed anywhere in the device tree, because the address is propagated to the end real device through the *_sync()/ndo_set_rx_mode() APIs. It also simplifies the handling of VID-tagged addresses on a real device that supports IVDF. If a parent device does not want virtual addresses in its address space, its vid_len has to be 0, which shrinks its address space back to the pre-patch state; for now vid_len is 0 for every device. This also allows two devices, one with and one without IVDF, to be part of the same bond device, for instance. An end real device that supports IVDF can retrieve the VID tag from an address and install the address for the given virtual device only. By default, VID 0 is used for real device addresses, to distinguish them from virtual device addresses. See the next patches for how this is used. Note that adding the vid_len member to struct net_device is not intended to change the structure layout (see the pahole output below).
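For illustration only (this snippet is not part of the patch): a minimal user-space sketch of the tagged address layout described above, assuming the 6-byte Ethernet address, a 2-byte VID tag and the 32-byte reserved address size. The helper name and constants are invented for the example; the 12-bit VID packing mirrors what the VLAN patches later in this series use.

#include <stdio.h>
#include <string.h>

#define MAX_ADDR_LEN	32	/* reserved size of netdev_hw_addr::addr */
#define ETH_ALEN	6
#define VID_TSIZE	2	/* hypothetical vid_len; 0 would mean "no IVDF" */

/* Build the lookup key: the MAC first, then the VID tag right after the
 * addr_len bytes.  VIDs are 12 bits; pack them low byte first, as the
 * VLAN patches later in the series do.
 */
static void build_ivdf_key(unsigned char *key, const unsigned char *mac,
			   unsigned int vid)
{
	memset(key, 0, MAX_ADDR_LEN);
	memcpy(key, mac, ETH_ALEN);
	key[ETH_ALEN] = vid & 0xff;
	key[ETH_ALEN + 1] = (vid >> 8) & 0xf;
}

int main(void)
{
	const unsigned char mac[ETH_ALEN] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };
	unsigned char real_dev[MAX_ADDR_LEN], vlan100[MAX_ADDR_LEN];

	build_ivdf_key(real_dev, mac, 0);	/* VID 0: the real device itself */
	build_ivdf_key(vlan100, mac, 100);	/* same MAC, but on VLAN 100 */

	/* Comparing addr_len + vid_len bytes keeps the two entries distinct. */
	printf("same entry? %s\n",
	       memcmp(real_dev, vlan100, ETH_ALEN + VID_TSIZE) ? "no" : "yes");
	return 0;
}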
Here is the output of pahole: For ARM 32, on 1 hole less: --------------------------- before (https://pastebin.com/DG1SVpFR): /* size: 1344, cachelines: 21, members: 123 */ /* sum members: 1304, holes: 5, sum holes: 28 */ /* padding: 12 */ /* bit_padding: 31 bits */ after (https://pastebin.com/ZUMhxGkA): /* size: 1344, cachelines: 21, members: 124 */ /* sum members: 1305, holes: 5, sum holes: 27 */ /* padding: 12 */ /* bit_padding: 31 bits */ For ARM 64, on 1 hole less: --------------------------- before (https://pastebin.com/5CdTQWkc): /* size: 2048, cachelines: 32, members: 120 */ /* sum members: 1972, holes: 7, sum holes: 48 */ /* padding: 28 */ /* bit_padding: 31 bits */ after (https://pastebin.com/32ktb1iV): /* size: 2048, cachelines: 32, members: 121 */ /* sum members: 1973, holes: 7, sum holes: 47 */ /* padding: 28 */ /* bit_padding: 31 bits */ Signed-off-by: Ivan Khoronzhuk Signed-off-by: Vladimir Oltean --- include/linux/netdevice.h | 4 ++ net/core/dev_addr_lists.c | 127 ++++++++++++++++++++++++++++++++------ 2 files changed, 111 insertions(+), 20 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index a18f8fdf4260..2d11b93f3af4 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -1698,6 +1698,7 @@ enum netdev_priv_flags { * @perm_addr: Permanent hw address * @addr_assign_type: Hw address assignment type * @addr_len: Hardware address length + * @vid_len: Virtual ID length, set in case of IVDF * @upper_level: Maximum depth level of upper devices. * @lower_level: Maximum depth level of lower devices. * @neigh_priv_len: Used in neigh_alloc() @@ -1950,6 +1951,7 @@ struct net_device { unsigned char perm_addr[MAX_ADDR_LEN]; unsigned char addr_assign_type; unsigned char addr_len; + unsigned char vid_len; unsigned char upper_level; unsigned char lower_level; unsigned short neigh_priv_len; @@ -4316,8 +4318,10 @@ int dev_addr_init(struct net_device *dev); /* Functions used for unicast addresses handling */ int dev_uc_add(struct net_device *dev, const unsigned char *addr); +int dev_vid_uc_add(struct net_device *dev, const unsigned char *addr); int dev_uc_add_excl(struct net_device *dev, const unsigned char *addr); int dev_uc_del(struct net_device *dev, const unsigned char *addr); +int dev_vid_uc_del(struct net_device *dev, const unsigned char *addr); int dev_uc_sync(struct net_device *to, struct net_device *from); int dev_uc_sync_multiple(struct net_device *to, struct net_device *from); void dev_uc_unsync(struct net_device *to, struct net_device *from); diff --git a/net/core/dev_addr_lists.c b/net/core/dev_addr_lists.c index 2f949b5a1eb9..90eaa99b19e5 100644 --- a/net/core/dev_addr_lists.c +++ b/net/core/dev_addr_lists.c @@ -541,6 +541,35 @@ int dev_addr_del(struct net_device *dev, const unsigned char *addr, } EXPORT_SYMBOL(dev_addr_del); +static int get_addr_len(struct net_device *dev) +{ + return dev->addr_len + dev->vid_len; +} + +/** + * set_vid_addr - Copy a device address into a new address with IVDF. + * @dev: device + * @addr: address to copy + * @naddr: location of new address + * + * Transform a regular device address into one with IVDF (Individual + * Virtual Device Filtering). If the device does not support IVDF, the + * original device address length is returned and no copying is done. + * Otherwise, the length of the IVDF address is returned. + * The VID is set to zero which denotes the address of a real device. 
+ */ +static int set_vid_addr(struct net_device *dev, const unsigned char *addr, + unsigned char *naddr) +{ + if (!dev->vid_len) + return dev->addr_len; + + memcpy(naddr, addr, dev->addr_len); + memset(naddr + dev->addr_len, 0, dev->vid_len); + + return get_addr_len(dev); +} + /* * Unicast list handling functions */ @@ -552,18 +581,22 @@ EXPORT_SYMBOL(dev_addr_del); */ int dev_uc_add_excl(struct net_device *dev, const unsigned char *addr) { + unsigned char naddr[MAX_ADDR_LEN]; struct netdev_hw_addr *ha; - int err; + int addr_len, err; + + addr_len = set_vid_addr(dev, addr, naddr); + addr = dev->vid_len ? naddr : addr; netif_addr_lock_bh(dev); list_for_each_entry(ha, &dev->uc.list, list) { - if (!memcmp(ha->addr, addr, dev->addr_len) && + if (!memcmp(ha->addr, addr, addr_len) && ha->type == NETDEV_HW_ADDR_T_UNICAST) { err = -EEXIST; goto out; } } - err = __hw_addr_create_ex(&dev->uc, addr, dev->addr_len, + err = __hw_addr_create_ex(&dev->uc, addr, addr_len, NETDEV_HW_ADDR_T_UNICAST, true, false); if (!err) __dev_set_rx_mode(dev); @@ -574,47 +607,89 @@ int dev_uc_add_excl(struct net_device *dev, const unsigned char *addr) EXPORT_SYMBOL(dev_uc_add_excl); /** - * dev_uc_add - Add a secondary unicast address + * dev_vid_uc_add - Add a secondary unicast address with tag * @dev: device - * @addr: address to add + * @addr: address to add, includes vid tag already * * Add a secondary unicast address to the device or increase * the reference count if it already exists. */ -int dev_uc_add(struct net_device *dev, const unsigned char *addr) +int dev_vid_uc_add(struct net_device *dev, const unsigned char *addr) { int err; netif_addr_lock_bh(dev); - err = __hw_addr_add(&dev->uc, addr, dev->addr_len, + err = __hw_addr_add(&dev->uc, addr, get_addr_len(dev), NETDEV_HW_ADDR_T_UNICAST); if (!err) __dev_set_rx_mode(dev); netif_addr_unlock_bh(dev); return err; } +EXPORT_SYMBOL(dev_vid_uc_add); + +/** + * dev_uc_add - Add a secondary unicast address + * @dev: device + * @addr: address to add + * + * Add a secondary unicast address to the device or increase + * the reference count if it already exists. + */ +int dev_uc_add(struct net_device *dev, const unsigned char *addr) +{ + unsigned char naddr[MAX_ADDR_LEN]; + int err; + + set_vid_addr(dev, addr, naddr); + addr = dev->vid_len ? naddr : addr; + + err = dev_vid_uc_add(dev, addr); + return err; +} EXPORT_SYMBOL(dev_uc_add); /** * dev_uc_del - Release secondary unicast address. * @dev: device - * @addr: address to delete + * @addr: address to delete, includes vid tag already * * Release reference to a secondary unicast address and remove it * from the device if the reference count drops to zero. */ -int dev_uc_del(struct net_device *dev, const unsigned char *addr) +int dev_vid_uc_del(struct net_device *dev, const unsigned char *addr) { int err; netif_addr_lock_bh(dev); - err = __hw_addr_del(&dev->uc, addr, dev->addr_len, + err = __hw_addr_del(&dev->uc, addr, get_addr_len(dev), NETDEV_HW_ADDR_T_UNICAST); if (!err) __dev_set_rx_mode(dev); netif_addr_unlock_bh(dev); return err; } +EXPORT_SYMBOL(dev_vid_uc_del); + +/** + * dev_uc_del - Release secondary unicast address. + * @dev: device + * @addr: address to delete + * + * Release reference to a secondary unicast address and remove it + * from the device if the reference count drops to zero. + */ +int dev_uc_del(struct net_device *dev, const unsigned char *addr) +{ + unsigned char naddr[MAX_ADDR_LEN]; + int err; + + set_vid_addr(dev, addr, naddr); + addr = dev->vid_len ? 
naddr : addr; + + err = dev_vid_uc_del(dev, addr); + return err; +} EXPORT_SYMBOL(dev_uc_del); /** @@ -638,7 +713,7 @@ int dev_uc_sync(struct net_device *to, struct net_device *from) return -EINVAL; netif_addr_lock(to); - err = __hw_addr_sync(&to->uc, &from->uc, to->addr_len); + err = __hw_addr_sync(&to->uc, &from->uc, get_addr_len(to)); if (!err) __dev_set_rx_mode(to); netif_addr_unlock(to); @@ -668,7 +743,7 @@ int dev_uc_sync_multiple(struct net_device *to, struct net_device *from) return -EINVAL; netif_addr_lock(to); - err = __hw_addr_sync_multiple(&to->uc, &from->uc, to->addr_len); + err = __hw_addr_sync_multiple(&to->uc, &from->uc, get_addr_len(to)); if (!err) __dev_set_rx_mode(to); netif_addr_unlock(to); @@ -692,7 +767,7 @@ void dev_uc_unsync(struct net_device *to, struct net_device *from) netif_addr_lock_bh(from); netif_addr_lock(to); - __hw_addr_unsync(&to->uc, &from->uc, to->addr_len); + __hw_addr_unsync(&to->uc, &from->uc, get_addr_len(to)); __dev_set_rx_mode(to); netif_addr_unlock(to); netif_addr_unlock_bh(from); @@ -736,18 +811,22 @@ EXPORT_SYMBOL(dev_uc_init); */ int dev_mc_add_excl(struct net_device *dev, const unsigned char *addr) { + unsigned char naddr[MAX_ADDR_LEN]; struct netdev_hw_addr *ha; - int err; + int addr_len, err; + + addr_len = set_vid_addr(dev, addr, naddr); + addr = dev->vid_len ? naddr : addr; netif_addr_lock_bh(dev); list_for_each_entry(ha, &dev->mc.list, list) { - if (!memcmp(ha->addr, addr, dev->addr_len) && + if (!memcmp(ha->addr, addr, addr_len) && ha->type == NETDEV_HW_ADDR_T_MULTICAST) { err = -EEXIST; goto out; } } - err = __hw_addr_create_ex(&dev->mc, addr, dev->addr_len, + err = __hw_addr_create_ex(&dev->mc, addr, addr_len, NETDEV_HW_ADDR_T_MULTICAST, true, false); if (!err) __dev_set_rx_mode(dev); @@ -760,10 +839,14 @@ EXPORT_SYMBOL(dev_mc_add_excl); static int __dev_mc_add(struct net_device *dev, const unsigned char *addr, bool global) { - int err; + unsigned char naddr[MAX_ADDR_LEN]; + int addr_len, err; + + addr_len = set_vid_addr(dev, addr, naddr); + addr = dev->vid_len ? naddr : addr; netif_addr_lock_bh(dev); - err = __hw_addr_add_ex(&dev->mc, addr, dev->addr_len, + err = __hw_addr_add_ex(&dev->mc, addr, addr_len, NETDEV_HW_ADDR_T_MULTICAST, global, false, 0); if (!err) __dev_set_rx_mode(dev); @@ -800,10 +883,14 @@ EXPORT_SYMBOL(dev_mc_add_global); static int __dev_mc_del(struct net_device *dev, const unsigned char *addr, bool global) { - int err; + unsigned char naddr[MAX_ADDR_LEN]; + int addr_len, err; + + addr_len = set_vid_addr(dev, addr, naddr); + addr = dev->vid_len ?
naddr : addr; netif_addr_lock_bh(dev); - err = __hw_addr_del_ex(&dev->mc, addr, dev->addr_len, + err = __hw_addr_del_ex(&dev->mc, addr, addr_len, NETDEV_HW_ADDR_T_MULTICAST, global, false); if (!err) __dev_set_rx_mode(dev); From patchwork Thu May 21 21:10:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295706 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=rD4HQ7K2; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj2n4M9Kz9sRW for ; Fri, 22 May 2020 07:10:57 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730592AbgEUVK4 (ORCPT ); Thu, 21 May 2020 17:10:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33336 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730588AbgEUVKz (ORCPT ); Thu, 21 May 2020 17:10:55 -0400 Received: from mail-ed1-x544.google.com (mail-ed1-x544.google.com [IPv6:2a00:1450:4864:20::544]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B403AC061A0E for ; Thu, 21 May 2020 14:10:54 -0700 (PDT) Received: by mail-ed1-x544.google.com with SMTP id l5so7718946edn.7 for ; Thu, 21 May 2020 14:10:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=7EJrLMpzY+C3z3PmCt7FDRoPiUWJj/HjfBBfF1vmR1U=; b=rD4HQ7K201xuPOQn9RFBl4hVm+Z5SZAumze+80WRgwEM8QraSYdTZNoEmN8O4s3lUx 84aTXX16IwlOOdznEemOani896FegtlumwlxoEQE3PbKl3P8fT87VAR3b80399QYmZMh UhhdHNyJm+BKnEuSa9Qn7hwS8fnlUz2cIcVe+sWBi7sZEidmCnNNwX1WFUN6LVVg3g6H p2t/AsuHUhAL1zbCoGHdN675tTYNnSiawIS+Mp7iYCMyIVU49D95AGdBFC6EoPXemgKM Lp+/XImcIaM1fjFFFyHorztZYj2Pya0rgUjfAqLpgL7OIk67Z0TIe1sf4+I/hN3iVPhF 0Ryg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=7EJrLMpzY+C3z3PmCt7FDRoPiUWJj/HjfBBfF1vmR1U=; b=GHhEact4ZMsyraYgjzTQoSPfwcFPYeJ4pkrHNZR8cw5kSIaoLxSGud7w06ZOVE5yjv Octe1K9rdTdqtbcIHWfpMzIsY61P7iN9Ao191yb08etOpjLkcDq2XVucCTgvCyF+4I2M vxnxjh6MdvatnpJ+eeLysOlxkXHMU8QzH9xVOYhEIkG0i3lGzsStbT7i/ccjYp18z6is ylbwLfnvFy5FzEgD1rjoKrGqygy4Woyon5ulrrN4cGL1BEnAQ9PEpeIslzUQANFGmryV FPzlqzqpSq7C4QEBggDn/DmZ+LybMS4IG4xFkiFSG0YHIJK+i6IIBRVOAqevNPqlau5Y kTBQ== X-Gm-Message-State: AOAM530g/Pp/icd4LJ9bJgSSfw6fWcTYJ+DG/XK2TVj7meIptjAA42Tp th5/iSdjvhM0rSfYOVhl094= X-Google-Smtp-Source: ABdhPJyFJq1ZKCCu5no0kld4rDHa5vEzCC8Y/A4mQqDVAllguFNJ3knVYhbYWIryGEH+T3VzOs9j2g== X-Received: by 2002:a05:6402:783:: with SMTP id d3mr544488edy.295.1590095453449; Thu, 21 May 2020 14:10:53 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.10.52 (version=TLS1_3 
cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:10:53 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 02/13] net: 8021q: vlan_dev: add vid tag to addresses of uc and mc lists Date: Fri, 22 May 2020 00:10:25 +0300 Message-Id: <20200521211036.668624-3-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Ivan Khoronzhuk Update vlan mc and uc addresses with VID tag while propagating addresses to lower devices, do this only if address is not synced. It allows at end driver level to distinguish addresses belonging to vlan devices. Signed-off-by: Ivan Khoronzhuk Signed-off-by: Vladimir Oltean --- include/linux/if_vlan.h | 1 + net/8021q/vlan.h | 2 ++ net/8021q/vlan_core.c | 13 +++++++++++++ net/8021q/vlan_dev.c | 26 ++++++++++++++++++++++++++ 4 files changed, 42 insertions(+) diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h index b05e855f1ddd..20407f73cfee 100644 --- a/include/linux/if_vlan.h +++ b/include/linux/if_vlan.h @@ -131,6 +131,7 @@ extern struct net_device *__vlan_find_dev_deep_rcu(struct net_device *real_dev, extern int vlan_for_each(struct net_device *dev, int (*action)(struct net_device *dev, int vid, void *arg), void *arg); +extern u16 vlan_dev_get_addr_vid(struct net_device *dev, const u8 *addr); extern struct net_device *vlan_dev_real_dev(const struct net_device *dev); extern u16 vlan_dev_vlan_id(const struct net_device *dev); extern __be16 vlan_dev_vlan_proto(const struct net_device *dev); diff --git a/net/8021q/vlan.h b/net/8021q/vlan.h index bb7ec1a3915d..e7f43d7fcc9a 100644 --- a/net/8021q/vlan.h +++ b/net/8021q/vlan.h @@ -6,6 +6,8 @@ #include #include +#define NET_8021Q_VID_TSIZE 2 + /* if this changes, algorithm will have to be reworked because this * depends on completely exhausting the VLAN identifier space. Thus * it gives constant time look-up, but in many cases it wastes memory. 
diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c index 78ec2e1b14d1..b528f09be9a3 100644 --- a/net/8021q/vlan_core.c +++ b/net/8021q/vlan_core.c @@ -453,6 +453,19 @@ bool vlan_uses_dev(const struct net_device *dev) } EXPORT_SYMBOL(vlan_uses_dev); +u16 vlan_dev_get_addr_vid(struct net_device *dev, const u8 *addr) +{ + u16 vid = 0; + + if (dev->vid_len != NET_8021Q_VID_TSIZE) + return vid; + + vid = addr[dev->addr_len]; + vid |= (addr[dev->addr_len + 1] & 0xf) << 8; + return vid; +} +EXPORT_SYMBOL(vlan_dev_get_addr_vid); + static struct sk_buff *vlan_gro_receive(struct list_head *head, struct sk_buff *skb) { diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c index f00bb57f0f60..c2c3e5ae535c 100644 --- a/net/8021q/vlan_dev.c +++ b/net/8021q/vlan_dev.c @@ -244,6 +244,14 @@ void vlan_dev_get_realdev_name(const struct net_device *dev, char *result) strncpy(result, vlan_dev_priv(dev)->real_dev->name, 23); } +static void vlan_dev_set_addr_vid(struct net_device *vlan_dev, u8 *addr) +{ + u16 vid = vlan_dev_vlan_id(vlan_dev); + + addr[vlan_dev->addr_len] = vid & 0xff; + addr[vlan_dev->addr_len + 1] = (vid >> 8) & 0xf; +} + bool vlan_dev_inherit_address(struct net_device *dev, struct net_device *real_dev) { @@ -482,8 +490,26 @@ static void vlan_dev_change_rx_flags(struct net_device *dev, int change) } } +static void vlan_dev_align_addr_vid(struct net_device *vlan_dev) +{ + struct net_device *real_dev = vlan_dev_real_dev(vlan_dev); + struct netdev_hw_addr *ha; + + if (!real_dev->vid_len) + return; + + netdev_for_each_mc_addr(ha, vlan_dev) + if (!ha->sync_cnt) + vlan_dev_set_addr_vid(vlan_dev, ha->addr); + + netdev_for_each_uc_addr(ha, vlan_dev) + if (!ha->sync_cnt) + vlan_dev_set_addr_vid(vlan_dev, ha->addr); +} + static void vlan_dev_set_rx_mode(struct net_device *vlan_dev) { + vlan_dev_align_addr_vid(vlan_dev); dev_mc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev); dev_uc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev); } From patchwork Thu May 21 21:10:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295707 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=P1aVlaj7; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj2q22Zkz9sRW for ; Fri, 22 May 2020 07:10:59 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730599AbgEUVK6 (ORCPT ); Thu, 21 May 2020 17:10:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726814AbgEUVK5 (ORCPT ); Thu, 21 May 2020 17:10:57 -0400 Received: from mail-ej1-x641.google.com (mail-ej1-x641.google.com [IPv6:2a00:1450:4864:20::641]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EBE64C061A0E for ; Thu, 21 May 2020 14:10:55 -0700 (PDT) Received: by mail-ej1-x641.google.com 
with SMTP id d7so10554012eja.7 for ; Thu, 21 May 2020 14:10:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=XHvVDx0bn4cAErYabGuB0Tm/Q0t8MyOtAfbMRm6tWGc=; b=P1aVlaj7P/tjZ2lwJwJvr+7f43sjMVXItZQMs/Vl+yXY4hC+UQ2OHUZRy65Wd6VRAt aeVbcwtuO7yLm3HQZ7QvkeJ7foHdU3RHmtwacKZlDm7rZxsyWebljKB6oI1xlClMA8pd 929mmmX9TBhNX+PhucfdwyT6mljiG1sCeZE2fE1hnuHlNpftNDfAuJhcP0gYC5Mg4/mD JyGzrJympvGEysd+jnyf+3MJ1MApDCx6h4JP6+J3Iad+uBrC2SPF7qAi1jbJsjGj/1tr PWHjrjBtR4rr537uJL+GdAs0ra+sloWhSjHEsbHEsSOAzcQve4qGmuhRIppdVdCGrAgV 1BWA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=XHvVDx0bn4cAErYabGuB0Tm/Q0t8MyOtAfbMRm6tWGc=; b=jrqenADgtQrvwB0LFvjDFXTwbLWRxucZHnQukbs117TRC9uxI4MPPX3rwVmTbA2AOd ftsydGKRAohyUX+Cojx98RmJmPDNTd0N47ZPNSyml2tbbtpDW+En6Aakrp7otbNiWiPC 43Z3V/p2V3X+4MOrL9/oNdaY4OlBN7/kHZAHeYwjDl/Vb+k5xWOOMjMUhRveaxKRENow SD6ghZRc852SxB126TBcGNmh/mBUQw+TJNZzRj0uVosqi7jtReuDXjvhyOHtjz5U1BSJ nFRdeAceEnRMD+9eQkRQQO1U7h+Q0h7f4TA22UPv0zazuPLZYkBcp5gWgaRclsFPIOl2 Hx2A== X-Gm-Message-State: AOAM531l1UtAxxMzgoQep7kS9Y8kaPkrWa5rZH8tkp+LHLm5SuBBGHo4 NxPwIMuC/8813PIk3+hIfHU= X-Google-Smtp-Source: ABdhPJzfu6PLzY56K18O5JU7AN3ZJlRdFpN7ahVYFSIWotzlVt8BFsSuTRWcWShkGmQJkZ/hqE0+ZQ== X-Received: by 2002:a17:906:4088:: with SMTP id u8mr5506778ejj.444.1590095454668; Thu, 21 May 2020 14:10:54 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.10.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:10:54 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 03/13] net: 8021q: vlan_dev: add vid tag for vlan device own address Date: Fri, 22 May 2020 00:10:26 +0300 Message-Id: <20200521211036.668624-4-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Ivan Khoronzhuk The vlan device address is held separately from uc/mc lists and handled differently. The vlan dev address is bound with real device address only if it's inherited from init, in all other cases it's separate address entry in uc list. With vid set, the address becomes not inherited from real device after it's set manually as before, but is part of uc list any way, with appropriate vid tag set. If vid_len for real device is 0, the behaviour is the same as before this change, so shouldn't be any impact on systems w/o individual virtual device filtering (IVDF) enabled. This allows to control and sync vlan device address and disable concrete vlan packet ingress when vlan interface is down. 
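To illustrate how such entries could be consumed on the real device (this sketch is not part of the patch): a driver that opted into IVDF might walk its unicast list in .ndo_set_rx_mode and recover the VID with vlan_dev_get_addr_vid() from patch 02. The foo_* names and the hardware hook are made up for the sketch.

#include <linux/netdevice.h>
#include <linux/if_vlan.h>

/* Made-up hardware hook, stands in for whatever programs the filter table. */
static void foo_hw_add_rx_filter(struct net_device *ndev, const u8 *mac, u16 vid)
{
	netdev_dbg(ndev, "RX filter: %pM vid %u\n", mac, vid);
}

/* Sketch of an IVDF-aware .ndo_set_rx_mode: every unicast entry carries the
 * VID tag after addr_len bytes; vlan_dev_get_addr_vid() (patch 02) recovers
 * it, with VID 0 meaning "the real device's own address space".  This runs
 * with the address list lock held, so plain iteration is fine.
 */
static void foo_ndo_set_rx_mode(struct net_device *ndev)
{
	struct netdev_hw_addr *ha;

	netdev_for_each_uc_addr(ha, ndev)
		foo_hw_add_rx_filter(ndev, ha->addr,
				     vlan_dev_get_addr_vid(ndev, ha->addr));
}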
Signed-off-by: Ivan Khoronzhuk Signed-off-by: Vladimir Oltean --- net/8021q/vlan.c | 3 ++ net/8021q/vlan_dev.c | 75 +++++++++++++++++++++++++++++++++----------- 2 files changed, 60 insertions(+), 18 deletions(-) diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c index d4bcfd8f95bf..4cc341c191a4 100644 --- a/net/8021q/vlan.c +++ b/net/8021q/vlan.c @@ -298,6 +298,9 @@ static void vlan_sync_address(struct net_device *dev, if (vlan_dev_inherit_address(vlandev, dev)) goto out; + if (dev->vid_len) + goto out; + /* vlan address was different from the old address and is equal to * the new address */ if (!ether_addr_equal(vlandev->dev_addr, vlan->real_dev_addr) && diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c index c2c3e5ae535c..f3f570a12ffd 100644 --- a/net/8021q/vlan_dev.c +++ b/net/8021q/vlan_dev.c @@ -252,12 +252,61 @@ static void vlan_dev_set_addr_vid(struct net_device *vlan_dev, u8 *addr) addr[vlan_dev->addr_len + 1] = (vid >> 8) & 0xf; } +static int vlan_dev_add_addr(struct net_device *dev, u8 *addr) +{ + struct net_device *real_dev = vlan_dev_real_dev(dev); + unsigned char naddr[ETH_ALEN + NET_8021Q_VID_TSIZE]; + + if (real_dev->vid_len) { + memcpy(naddr, addr, dev->addr_len); + vlan_dev_set_addr_vid(dev, naddr); + return dev_vid_uc_add(real_dev, naddr); + } + + if (ether_addr_equal(addr, real_dev->dev_addr)) + return 0; + + return dev_uc_add(real_dev, addr); +} + +static void vlan_dev_del_addr(struct net_device *dev, u8 *addr) +{ + struct net_device *real_dev = vlan_dev_real_dev(dev); + unsigned char naddr[ETH_ALEN + NET_8021Q_VID_TSIZE]; + + if (real_dev->vid_len) { + memcpy(naddr, addr, dev->addr_len); + vlan_dev_set_addr_vid(dev, naddr); + dev_vid_uc_del(real_dev, naddr); + return; + } + + if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr)) + dev_uc_del(real_dev, addr); +} + +static int vlan_dev_subs_addr(struct net_device *dev, u8 *addr) +{ + int err; + + err = vlan_dev_add_addr(dev, addr); + if (err < 0) + return err; + + vlan_dev_del_addr(dev, dev->dev_addr); + return err; +} + bool vlan_dev_inherit_address(struct net_device *dev, struct net_device *real_dev) { if (dev->addr_assign_type != NET_ADDR_STOLEN) return false; + if (real_dev->vid_len) + if (vlan_dev_subs_addr(dev, real_dev->dev_addr)) + return false; + ether_addr_copy(dev->dev_addr, real_dev->dev_addr); call_netdevice_notifiers(NETDEV_CHANGEADDR, dev); return true; @@ -273,9 +322,10 @@ static int vlan_dev_open(struct net_device *dev) !(vlan->flags & VLAN_FLAG_LOOSE_BINDING)) return -ENETDOWN; - if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr) && - !vlan_dev_inherit_address(dev, real_dev)) { - err = dev_uc_add(real_dev, dev->dev_addr); + if (ether_addr_equal(dev->dev_addr, real_dev->dev_addr) || + (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr) && + !vlan_dev_inherit_address(dev, real_dev))) { + err = vlan_dev_add_addr(dev, dev->dev_addr); if (err < 0) goto out; } @@ -308,8 +358,7 @@ static int vlan_dev_open(struct net_device *dev) if (dev->flags & IFF_ALLMULTI) dev_set_allmulti(real_dev, -1); del_unicast: - if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr)) - dev_uc_del(real_dev, dev->dev_addr); + vlan_dev_del_addr(dev, dev->dev_addr); out: netif_carrier_off(dev); return err; @@ -327,8 +376,7 @@ static int vlan_dev_stop(struct net_device *dev) if (dev->flags & IFF_PROMISC) dev_set_promiscuity(real_dev, -1); - if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr)) - dev_uc_del(real_dev, dev->dev_addr); + vlan_dev_del_addr(dev, dev->dev_addr); if (!(vlan->flags & 
VLAN_FLAG_BRIDGE_BINDING)) netif_carrier_off(dev); @@ -337,9 +385,7 @@ static int vlan_dev_stop(struct net_device *dev) static int vlan_dev_set_mac_address(struct net_device *dev, void *p) { - struct net_device *real_dev = vlan_dev_priv(dev)->real_dev; struct sockaddr *addr = p; - int err; if (!is_valid_ether_addr(addr->sa_data)) return -EADDRNOTAVAIL; @@ -347,15 +393,8 @@ static int vlan_dev_set_mac_address(struct net_device *dev, void *p) if (!(dev->flags & IFF_UP)) goto out; - if (!ether_addr_equal(addr->sa_data, real_dev->dev_addr)) { - err = dev_uc_add(real_dev, addr->sa_data); - if (err < 0) - return err; - } - - if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr)) - dev_uc_del(real_dev, dev->dev_addr); - + if (vlan_dev_subs_addr(dev, addr->sa_data)) + return true; out: ether_addr_copy(dev->dev_addr, addr->sa_data); return 0; From patchwork Thu May 21 21:10:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295708 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=JLn60lLa; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj2r0nZmz9sSn for ; Fri, 22 May 2020 07:11:00 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730606AbgEUVK7 (ORCPT ); Thu, 21 May 2020 17:10:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33348 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730588AbgEUVK5 (ORCPT ); Thu, 21 May 2020 17:10:57 -0400 Received: from mail-ej1-x644.google.com (mail-ej1-x644.google.com [IPv6:2a00:1450:4864:20::644]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 60D5AC05BD43 for ; Thu, 21 May 2020 14:10:57 -0700 (PDT) Received: by mail-ej1-x644.google.com with SMTP id s3so10572872eji.6 for ; Thu, 21 May 2020 14:10:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=eIKCBg6+dx920ZtMunbCi2uj3FNIMlWi2s3bJKgHEls=; b=JLn60lLaKl8EL3oMmZOY/B82gBHQxP6SJHir3WL1WkjsR7SWzCQJup+tbrWxWfTDsb V4AOm0NJZ0gaq4DHay34xQAw/HFCKan8qChka4MVFd7YJU15WdVXnACy362+dnVA3UU5 E7MYU4XKpZljfiWI8gmn2Iosagf6M9moYxlZIjHzlkTfeqcVyp/oeafuxEEI0j6S+fXa K+4sTmwf3fRQREGjKR/EvdE5Z679lroUXhMd9GwZeETEqBDAt/+RWzOuSL+/wc1lO4Ei DiD7JUhfnzpwczIXtHukmM/yjYyVwbPxSwGyZH9xULyszUr8r47788dZykL6YLyDNJDR AIcQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=eIKCBg6+dx920ZtMunbCi2uj3FNIMlWi2s3bJKgHEls=; b=JUcs5JnicAQ2A1zIfEYlWZhYxdg0QoKJI4Yo0cB792CJLdnvwScBmTkesVXCpdcReT PnqkDIxcNOdQ1vRQy+mxccHnBK63BzaZDZMWpY/hj9DgejWJhZq2fdmhhsOqx/A7+vT5 wBTZe6FK5dVQpiEXluGPvOqDpQ0i9/1xAHJB6IyIzpPItM+sDZqEC5qT2nMerNuEpLTb 
BWzr6LX3MljoE0dO8lcT8+0n1Wymz22ob39Xirffg0a9FkWs41JqRLBCaVQ8gZjATDg5 FP1c9qG50bHxkqffFryCegLFj4e2YgZVpgqrtSStgcjwkQlvYP1L0V0/YN03tCuy41f1 GW/Q== X-Gm-Message-State: AOAM531r3KHL440xTzzoO3V9P/GkVkW3/J4SscBkwPBBDTSTH4K6FhwZ ObysATuFs/ipgoSdP6i4SCs= X-Google-Smtp-Source: ABdhPJySfZv9RMkEUIL5740XSIcEDmec/LW9M8wfF2DIx7VuqQk3KgeoSGvPQgsBuM/VhRuUUeJSWg== X-Received: by 2002:a17:906:55c3:: with SMTP id z3mr5214478ejp.180.1590095456058; Thu, 21 May 2020 14:10:56 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.10.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:10:55 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 04/13] ethernet: eth: add default vid len for all ethernet kind devices Date: Fri, 22 May 2020 00:10:27 +0300 Message-Id: <20200521211036.668624-5-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Ivan Khoronzhuk IVDF - individual virtual device filtering - allows per-VLAN L2 address filters (for unicast and for multicast) to be installed on the end real network device, so that redundant, unexpected packets are dropped at ingress. If CONFIG_VLAN_8021Q_IVDF is enabled, the following changes are applied, and only for Ethernet network devices. Conceptually, every Ethernet netdev needs vid_len = 2 bytes to be able to hold up to 4096 VIDs, so it is set for every Ethernet device, except VLAN devices. In order to keep the addresses of devices stacked above a VLAN compact, the vid_len for a VLAN device is 0; as a result, all upper devices sync their addresses to a common base without taking the VID part into account (only the vid_len of the "to" device matters). Only the VLAN device is the source of addresses with its actual VID set, propagating them to parent devices during rx_mode(). Also, Ethernet devices that have not been moved to the VLAN addressing scheme should not be disturbed, so when an end Ethernet device is created its vid_len is set to 0; while syncing, its address space is then flattened to one dimension as usual, and a device that needs IVDF sets vid_len to NET_8021Q_VID_TSIZE. An alternative design decision would be to inherit vid_len, or some feature flag, from the end root device, so that upper devices get the VLAN-extended address space only if the end real device actually has this capability. That was not done here because it requires more changes and I am not familiar with all the places where it would have to be inherited; guidance on where it is applicable would be appreciated, since it could make the change somewhat more limited.
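For illustration only (not part of the patch): a driver whose hardware can filter per {MAC, VLAN ID} pair might opt into IVDF right after allocating its netdev, using the vlan_dev_ivdf_set() helper added below. The foo_* names are invented for this sketch.

#include <linux/etherdevice.h>
#include <linux/if_vlan.h>

struct foo_priv {
	int dummy;	/* placeholder private data for the sketch */
};

/* Sketch: opt into IVDF after allocating the netdev.  alloc_etherdev()
 * leaves vid_len at 0 (no IVDF); vlan_dev_ivdf_set(ndev, true) grows the
 * address keys to addr_len + NET_8021Q_VID_TSIZE bytes so that addresses
 * synced from VLAN uppers stay distinguishable per VID.
 */
static struct net_device *foo_alloc_netdev(void)
{
	struct net_device *ndev;

	ndev = alloc_etherdev(sizeof(struct foo_priv));
	if (!ndev)
		return NULL;

	vlan_dev_ivdf_set(ndev, true);

	return ndev;
}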
Signed-off-by: Ivan Khoronzhuk Signed-off-by: Vladimir Oltean --- include/linux/if_vlan.h | 1 + net/8021q/Kconfig | 12 ++++++++++++ net/8021q/vlan_core.c | 12 ++++++++++++ net/8021q/vlan_dev.c | 1 + net/ethernet/eth.c | 12 ++++++++++-- 5 files changed, 36 insertions(+), 2 deletions(-) diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h index 20407f73cfee..b3f7e92cd645 100644 --- a/include/linux/if_vlan.h +++ b/include/linux/if_vlan.h @@ -132,6 +132,7 @@ extern int vlan_for_each(struct net_device *dev, int (*action)(struct net_device *dev, int vid, void *arg), void *arg); extern u16 vlan_dev_get_addr_vid(struct net_device *dev, const u8 *addr); +extern void vlan_dev_ivdf_set(struct net_device *dev, bool enable); extern struct net_device *vlan_dev_real_dev(const struct net_device *dev); extern u16 vlan_dev_vlan_id(const struct net_device *dev); extern __be16 vlan_dev_vlan_proto(const struct net_device *dev); diff --git a/net/8021q/Kconfig b/net/8021q/Kconfig index 5510b4b90ff0..aaae09068ab8 100644 --- a/net/8021q/Kconfig +++ b/net/8021q/Kconfig @@ -39,3 +39,15 @@ config VLAN_8021Q_MVRP supersedes GVRP and is not backwards-compatible. If unsure, say N. + +config VLAN_8021Q_IVDF + bool "IVDF (Individual Virtual Device Filtering) support" + depends on VLAN_8021Q + help + Select this to enable IVDF addressing scheme support. IVDF is used + for automatic propagation of registered VLANs addresses to real end + devices. If no device supporting IVDF then disable this as it can + consume some memory in configuration with complex network device + structures to hold vlan addresses. + + If unsure, say N. diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c index b528f09be9a3..d21492f7f557 100644 --- a/net/8021q/vlan_core.c +++ b/net/8021q/vlan_core.c @@ -453,6 +453,18 @@ bool vlan_uses_dev(const struct net_device *dev) } EXPORT_SYMBOL(vlan_uses_dev); +void vlan_dev_ivdf_set(struct net_device *dev, bool enable) +{ +#ifdef CONFIG_VLAN_8021Q_IVDF + if (enable) { + dev->vid_len = NET_8021Q_VID_TSIZE; + return; + } +#endif + dev->vid_len = 0; +} +EXPORT_SYMBOL(vlan_dev_ivdf_set); + u16 vlan_dev_get_addr_vid(struct net_device *dev, const u8 *addr) { u16 vid = 0; diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c index f3f570a12ffd..22ce9f9f666d 100644 --- a/net/8021q/vlan_dev.c +++ b/net/8021q/vlan_dev.c @@ -894,5 +894,6 @@ void vlan_setup(struct net_device *dev) dev->min_mtu = 0; dev->max_mtu = ETH_MAX_MTU; + vlan_dev_ivdf_set(dev, true); eth_zero_addr(dev->broadcast); } diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c index c8b903302ff2..c40fae6df46b 100644 --- a/net/ethernet/eth.c +++ b/net/ethernet/eth.c @@ -372,6 +372,7 @@ void ether_setup(struct net_device *dev) dev->flags = IFF_BROADCAST|IFF_MULTICAST; dev->priv_flags |= IFF_TX_SKB_SHARING; + vlan_dev_ivdf_set(dev, false); eth_broadcast_addr(dev->broadcast); } @@ -395,8 +396,15 @@ EXPORT_SYMBOL(ether_setup); struct net_device *alloc_etherdev_mqs(int sizeof_priv, unsigned int txqs, unsigned int rxqs) { - return alloc_netdev_mqs(sizeof_priv, "eth%d", NET_NAME_UNKNOWN, - ether_setup, txqs, rxqs); + struct net_device *dev; + + dev = alloc_netdev_mqs(sizeof_priv, "eth%d", NET_NAME_UNKNOWN, + ether_setup, txqs, rxqs); + if (!dev) + return NULL; + + vlan_dev_ivdf_set(dev, false); + return dev; } EXPORT_SYMBOL(alloc_etherdev_mqs); From patchwork Thu May 21 21:10:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295709 
X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=li9Xr5ia; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj2t4zCtz9sRW for ; Fri, 22 May 2020 07:11:02 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730615AbgEUVLB (ORCPT ); Thu, 21 May 2020 17:11:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33352 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730602AbgEUVK6 (ORCPT ); Thu, 21 May 2020 17:10:58 -0400 Received: from mail-ej1-x643.google.com (mail-ej1-x643.google.com [IPv6:2a00:1450:4864:20::643]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 80CEBC061A0E for ; Thu, 21 May 2020 14:10:58 -0700 (PDT) Received: by mail-ej1-x643.google.com with SMTP id x1so10539243ejd.8 for ; Thu, 21 May 2020 14:10:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=PTgQbBvJIIShG3QEmW73X3/Ipq2Ev776Y3DL4ft32sw=; b=li9Xr5iaHNp3pFaKKAu54IVx9n+Nz7cxtIB+wF56BXbN/zlWqfnUvpHAKjLt8NNU0s g/BChiatT9rWPiherEHgzEVnEN09ZOXZSzfhnp72o4Rx2TwMXKlOT//uT1JS7JReMXUf 38SvXt4E1WbSsXIMexy3s2IZAgDDirQMnG0wvnDD/QsbTnB6LJPFjPh5NrURFVt5Mf6H w/AEaAC2TpcETxNBy2rW3Tv1SP0FC2unqUhRWslo/t9SpIaXUtse1LkkaYfmX5HmiVvP Mv3tiNFmkzvw7NAOdHjGWDDlu24id36mil5VH4150nwv6FkRUVNQH6bpXkEjMDqEVO3c Mx6g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=PTgQbBvJIIShG3QEmW73X3/Ipq2Ev776Y3DL4ft32sw=; b=l9lH4WfRnIPPj3Oj5clDbnmeEqv9G5izfIPIxlKC7H6FgMznM6G2EAmt0NU73ARF9c sTOKlz17Qe8Z8RLZ/lnCZlx3VIBF5bvcYurQX1B6PMaLPZMHiVAIj/lP489QziuBAAHS nfE3cvayVxkHiDAAVL2p6TvvsZZAwZbgoM0GC9SZ6Gsl0KRd8ZoItIc1MIGEOmoBhlkN cLkcDmBEX7UowwS/ad1hq1k4ervCDRCGtpDQM1npSEW90lJcsZRE0Ed48ZY0uuTOclPh x1A/bn9b8eZKno3zuCgG/H9kegvqZAUl+vZ6QzJwaEwnn8KfrHQBk8+tn3vqpjfkiGtp mB0Q== X-Gm-Message-State: AOAM532qsmBEBANKPrYLMa81q3NXKKY0I2C7kQjEcRNsEVvNbxUIhv+E Ck+B8+crzk8+PIP4qPUoueM= X-Google-Smtp-Source: ABdhPJytKUeSsqkoAfWiOO1nddJSz48dxCK6g4XFe0pRezN4qtspIP55qsl1BRDRICl/gdb/wbtRwQ== X-Received: by 2002:a17:906:ae93:: with SMTP id md19mr5410426ejb.4.1590095457256; Thu, 21 May 2020 14:10:57 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.10.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:10:56 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC 
net-next 05/13] net: bridge: multicast: propagate br_mc_disabled_update() return Date: Fri, 22 May 2020 00:10:28 +0300 Message-Id: <20200521211036.668624-6-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Florian Fainelli Some Ethernet switches might not be able to support disabling multicast flooding globally when e.g: several bridges span the same physical device, propagate the return value of br_mc_disabled_update() such that this propagates correctly to user-space. Signed-off-by: Florian Fainelli Signed-off-by: Vladimir Oltean --- net/bridge/br_multicast.c | 21 +++++++++++++++------ 1 file changed, 15 insertions(+), 6 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index ad12fe3fca8c..9e93035b1483 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -809,7 +809,7 @@ static void br_ip6_multicast_port_query_expired(struct timer_list *t) } #endif -static void br_mc_disabled_update(struct net_device *dev, bool value) +static int br_mc_disabled_update(struct net_device *dev, bool value) { struct switchdev_attr attr = { .orig_dev = dev, @@ -818,11 +818,13 @@ static void br_mc_disabled_update(struct net_device *dev, bool value) .u.mc_disabled = !value, }; - switchdev_port_attr_set(dev, &attr); + return switchdev_port_attr_set(dev, &attr); } int br_multicast_add_port(struct net_bridge_port *port) { + int ret; + port->multicast_router = MDB_RTR_TYPE_TEMP_QUERY; timer_setup(&port->multicast_router_timer, @@ -833,8 +835,11 @@ int br_multicast_add_port(struct net_bridge_port *port) timer_setup(&port->ip6_own_query.timer, br_ip6_multicast_port_query_expired, 0); #endif - br_mc_disabled_update(port->dev, - br_opt_get(port->br, BROPT_MULTICAST_ENABLED)); + ret = br_mc_disabled_update(port->dev, + br_opt_get(port->br, + BROPT_MULTICAST_ENABLED)); + if (ret) + return ret; port->mcast_stats = netdev_alloc_pcpu_stats(struct bridge_mcast_stats); if (!port->mcast_stats) @@ -2049,12 +2054,16 @@ static void br_multicast_start_querier(struct net_bridge *br, int br_multicast_toggle(struct net_bridge *br, unsigned long val) { struct net_bridge_port *port; + int err = 0; spin_lock_bh(&br->multicast_lock); if (!!br_opt_get(br, BROPT_MULTICAST_ENABLED) == !!val) goto unlock; - br_mc_disabled_update(br->dev, val); + err = br_mc_disabled_update(br->dev, val); + if (err && err != -EOPNOTSUPP) + goto unlock; + br_opt_toggle(br, BROPT_MULTICAST_ENABLED, !!val); if (!br_opt_get(br, BROPT_MULTICAST_ENABLED)) { br_multicast_leave_snoopers(br); @@ -2071,7 +2080,7 @@ int br_multicast_toggle(struct net_bridge *br, unsigned long val) unlock: spin_unlock_bh(&br->multicast_lock); - return 0; + return err; } bool br_multicast_enabled(const struct net_device *dev) From patchwork Thu May 21 21:10:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295711 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: 
ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=lwZP0zxw; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj2z72stz9sSn for ; Fri, 22 May 2020 07:11:07 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730633AbgEUVLH (ORCPT ); Thu, 21 May 2020 17:11:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33356 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730588AbgEUVLA (ORCPT ); Thu, 21 May 2020 17:11:00 -0400 Received: from mail-ej1-x642.google.com (mail-ej1-x642.google.com [IPv6:2a00:1450:4864:20::642]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CD842C061A0E for ; Thu, 21 May 2020 14:10:59 -0700 (PDT) Received: by mail-ej1-x642.google.com with SMTP id e2so10496214eje.13 for ; Thu, 21 May 2020 14:10:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=B0s/Er0BpZNr3kmWpzh0ZD+YrPYRB0SL5OAK3lR0aHY=; b=lwZP0zxwUYuJOCQ5z6sKMEG2uGAlk1aEceuipwcfEkfXk8jBUhItXbGLO8l4ZGhinP ABU/6aHiXWKvQimgHzRs290Hl+eC/IGKwU/liZ9vf5Nw1+lw6LOoys0w2w7i6zGYSkwz 3qmaOpOTwxnRkGzgGL2Px4MZnzv9ZSg/MzY8u3ZghWVoNKqpyahrv3GOZRiuxGXWyRFb 3V2gWqfuCBnYj3RY9IOi9VYZBgA+mpXZp1OnDq/SVyWvdsuQBOwAnx8T1b57ps0grQVz z6M00EHIlVNhS8DD/d5FAyuwdP8dHw2GO42ei9e99wnedjnJZ7JrIEuFgi0lFgGvaf4W U74g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=B0s/Er0BpZNr3kmWpzh0ZD+YrPYRB0SL5OAK3lR0aHY=; b=Lw/tz7C/SDpU50Vd9iujmTLPeCmB2vQsE7l+VaXFhmuY4uLEMjL0eKKIwqjazJ880L HEsgcoE4LqPGrm6h9qYoX6ZNeIqba8GdbcIEs+fynPhtlSHoxOKC9Ma0Kb9p1qMDMQ9I sGz6NNzvloZ8b3Tz5deEx3PPmGXggp82l80EJ81iko4aY65ONDl3h5PW/C/5D4rCBGOo DqeRKxVtTbM0p02/hKDPQ8V700SXIOvXjSLbDKgCuzWYmDHoOKIn/6ihy/SRpuywMxxS KNz/bLb+Wgv0mYiE4lVonOggkn192ERR+wZliwQzq/41POVEx0gr8Qr+3mCRYerfzKRp 1V6g== X-Gm-Message-State: AOAM531SeFgWlda4BZii6cEjfSasiyNPlXjlVeGGwgQdm//6tiSdSHq+ XwN1NbajB4wlYqdSlNg4QPc= X-Google-Smtp-Source: ABdhPJxKDaaRncieSb0EpEsR79D/rnNni533eP97uVxozQLX4Bf7WI59Ng+BhUWsk1FoUnjtHe/YCQ== X-Received: by 2002:a17:906:2c03:: with SMTP id e3mr5273586ejh.206.1590095458522; Thu, 21 May 2020 14:10:58 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.10.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:10:58 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 06/13] net: core: dev_addr_lists: export some raw __hw_addr helpers Date: Fri, 22 May 2020 00:10:29 +0300 Message-Id: <20200521211036.668624-7-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: 
bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Vladimir Oltean DSA switches need to keep the list of addresses which are filtered towards the CPU port. One DSA switch can have 1 CPU port and many front-panel (user) ports, each user port having its own MAC address (they can potentially be all the same MAC address). Filtering towards the CPU port means adding a FDB address for each user port MAC address that sends that address to the CPU. There is no net_device associated with the CPU port. So the DSA switches need to keep their own reference counting of MAC addresses for which a FDB entry is installed or removed on the CPU port. Permit that by exporting the raw helpers instead of operating on a struct net_device. Signed-off-by: Vladimir Oltean --- include/linux/netdevice.h | 7 +++++++ net/core/dev_addr_lists.c | 17 ++++++++++------- 2 files changed, 17 insertions(+), 7 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 2d11b93f3af4..239efd209c33 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -4307,6 +4307,13 @@ void __hw_addr_unsync_dev(struct netdev_hw_addr_list *list, int (*unsync)(struct net_device *, const unsigned char *)); void __hw_addr_init(struct netdev_hw_addr_list *list); +void __hw_addr_flush(struct netdev_hw_addr_list *list); +int __hw_addr_add(struct netdev_hw_addr_list *list, + const unsigned char *addr, int addr_len, + unsigned char addr_type); +int __hw_addr_del(struct netdev_hw_addr_list *list, + const unsigned char *addr, int addr_len, + unsigned char addr_type); /* Functions used for device addresses handling */ int dev_addr_add(struct net_device *dev, const unsigned char *addr, diff --git a/net/core/dev_addr_lists.c b/net/core/dev_addr_lists.c index 90eaa99b19e5..e307ae7d2a44 100644 --- a/net/core/dev_addr_lists.c +++ b/net/core/dev_addr_lists.c @@ -77,13 +77,14 @@ static int __hw_addr_add_ex(struct netdev_hw_addr_list *list, sync); } -static int __hw_addr_add(struct netdev_hw_addr_list *list, - const unsigned char *addr, int addr_len, - unsigned char addr_type) +int __hw_addr_add(struct netdev_hw_addr_list *list, + const unsigned char *addr, int addr_len, + unsigned char addr_type) { return __hw_addr_add_ex(list, addr, addr_len, addr_type, false, false, 0); } +EXPORT_SYMBOL(__hw_addr_add); static int __hw_addr_del_entry(struct netdev_hw_addr_list *list, struct netdev_hw_addr *ha, bool global, @@ -123,12 +124,13 @@ static int __hw_addr_del_ex(struct netdev_hw_addr_list *list, return -ENOENT; } -static int __hw_addr_del(struct netdev_hw_addr_list *list, - const unsigned char *addr, int addr_len, - unsigned char addr_type) +int __hw_addr_del(struct netdev_hw_addr_list *list, + const unsigned char *addr, int addr_len, + unsigned char addr_type) { return __hw_addr_del_ex(list, addr, addr_len, addr_type, false, false); } +EXPORT_SYMBOL(__hw_addr_del); static int __hw_addr_sync_one(struct netdev_hw_addr_list *to_list, struct netdev_hw_addr *ha, @@ -403,7 +405,7 @@ void __hw_addr_unsync_dev(struct netdev_hw_addr_list *list, } EXPORT_SYMBOL(__hw_addr_unsync_dev); -static void __hw_addr_flush(struct netdev_hw_addr_list *list) +void __hw_addr_flush(struct netdev_hw_addr_list *list) { struct netdev_hw_addr *ha, *tmp; @@ -413,6 +415,7 @@ static void __hw_addr_flush(struct netdev_hw_addr_list *list) } list->count = 0; } +EXPORT_SYMBOL(__hw_addr_flush); void __hw_addr_init(struct netdev_hw_addr_list *list) { From patchwork Thu May 21 21:10:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295717 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=OLaWPySr; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj3B0lnTz9sRW for ; Fri, 22 May 2020 07:11:18 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730623AbgEUVLE (ORCPT ); Thu, 21 May 2020 17:11:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730614AbgEUVLB (ORCPT ); Thu, 21 May 2020 17:11:01 -0400 Received: from mail-ej1-x641.google.com (mail-ej1-x641.google.com [IPv6:2a00:1450:4864:20::641]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 61C4FC05BD43 for ; Thu, 21 May 2020 14:11:01 -0700 (PDT) Received: by mail-ej1-x641.google.com with SMTP id x1so10539357ejd.8 for ; Thu, 21 May 2020 14:11:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=UxPaIBDRafmiQ0UKafviws3l8/Ted7xW1QOTsLZHkDg=; b=OLaWPySrBqnIg6Pf6BjB5H4W1LcV9h1gTbn5DGM5SU1a/bePMwWbfV9pgSfLk+7V3I 9MwQan3iuGVF0dwlEdza1OVrdj1Qlb75zl0et6wKm/iMCyY3slPffDw6mruS2zSuSQlC GDYINZRMKVsUKfBa2jA2AQpJJ6rXbVjaVkLFYh05R8TPVgZyaimo4dc90YEwIjkeGuGu EY6/KVthlNL/9pFdKUbRs9tlFSR2/4yHkF8X4KRFuKaQYNx4K8hD2hEyP55gCQiiPfEL LIR6zqdSa2hEWfT/JJZx03q4AlPxTT3IBxApnKf3IQi2rCXWpUW5bv1qbQ+e0yUSiuIQ 6ERw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=UxPaIBDRafmiQ0UKafviws3l8/Ted7xW1QOTsLZHkDg=; b=ruJ6uAFE3ic3+0U2dDfB0pSxsPtH1HErSd4XQZOtR4+qAjQ1+vKOl3fRLyzkiE6RQK rc44O+/tzYcEvCtadL59pEB3LRkIbaHxf5An+x+OmvyWdiV4vGkjSNZZw+7w6jTmpDLM 4IDNznFw5WEidgobpLLO+tyBX3/g3JtqMPc8g+T12RYTbVJ1vIqP3Ba7g3ainlwbs362 Atzx2UgTHUi1/DlrfKHDJYf06YD+0LbBwTG9xeLm0tk0swXltGqDqVQDktJCcBuxp3l6 wj/tok9CwpfCJrBpmcwQUD4vh5u1AzDLDkbofthfGAQI8VXbQOKMQf7zd4Sr89+NQuZz TkBQ== X-Gm-Message-State: AOAM5326lvtZ9adQjisYIkMtNbg0LPaycPbsSWY6AzfSC4LNHUuznzq1 Wo9ZcdPKFelKmge56y6wa5I= X-Google-Smtp-Source: ABdhPJy+zJV7Rq+419DJvC+EYHH5Q6ELRBZ/K/HcH5/Uu37UyXnvDoTScDUEJUiJIGyTjhzTTs2Ohg== X-Received: by 2002:a17:906:6990:: with SMTP id i16mr5653026ejr.175.1590095459940; Thu, 21 May 2020 14:10:59 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.10.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:10:59 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, 
allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 07/13] net: dsa: don't use switchdev_notifier_fdb_info in dsa_switchdev_event_work Date: Fri, 22 May 2020 00:10:30 +0300 Message-Id: <20200521211036.668624-8-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Vladimir Oltean Currently DSA doesn't add FDB entries on the CPU port, because it only does so through switchdev, which is associated with a net_device, and there are none of those for the CPU port. But actually FDB addresses on the CPU port can be associated with RX filtering, so we can initiate switchdev operations from within the DSA layer. We need the deferred work because .ndo_set_rx_mode runs in atomic context. There is just one problem with the existing code: it passes a structure in dsa_switchdev_event_work which was retrieved directly from switchdev, so it contains a net_device. We need to generalize the contents to something that covers the CPU port as well: the "ds, port" tuple is fine for that. Note that the new procedure for notifying the successful FDB offload is inspired from the rocker model. Also, nothing was being done if added_by_user was false. Let's check for that a lot earlier, and don't actually bother to schedule the whole workqueue for nothing. Signed-off-by: Vladimir Oltean --- net/dsa/dsa_priv.h | 12 ++++++ net/dsa/slave.c | 98 +++++++++++++++++++++++----------------------- 2 files changed, 60 insertions(+), 50 deletions(-) diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h index adecf73bd608..001668007efd 100644 --- a/net/dsa/dsa_priv.h +++ b/net/dsa/dsa_priv.h @@ -72,6 +72,18 @@ struct dsa_notifier_mtu_info { int mtu; }; +struct dsa_switchdev_event_work { + struct dsa_switch *ds; + int port; + struct work_struct work; + unsigned long event; + /* Specific for SWITCHDEV_FDB_ADD_TO_DEVICE and + * SWITCHDEV_FDB_DEL_TO_DEVICE + */ + unsigned char addr[ETH_ALEN]; + u16 vid; +}; + struct dsa_slave_priv { /* Copy of CPU port xmit for faster access in slave transmit hot path */ struct sk_buff * (*xmit)(struct sk_buff *skb, diff --git a/net/dsa/slave.c b/net/dsa/slave.c index 886490fb203d..d2072fbd22fe 100644 --- a/net/dsa/slave.c +++ b/net/dsa/slave.c @@ -1914,72 +1914,60 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb, return NOTIFY_DONE; } -struct dsa_switchdev_event_work { - struct work_struct work; - struct switchdev_notifier_fdb_info fdb_info; - struct net_device *dev; - unsigned long event; -}; +static void +dsa_fdb_offload_notify(struct dsa_switchdev_event_work *switchdev_work) +{ + struct dsa_switch *ds = switchdev_work->ds; + struct dsa_port *dp = dsa_to_port(ds, switchdev_work->port); + struct switchdev_notifier_fdb_info info; + + if (!dsa_is_user_port(ds, dp->index)) + return; + + info.addr = switchdev_work->addr; + info.vid = switchdev_work->vid; + info.offloaded = true; + call_switchdev_notifiers(SWITCHDEV_FDB_OFFLOADED, + dp->slave, &info.info, NULL); +} static void dsa_slave_switchdev_event_work(struct work_struct *work) { struct dsa_switchdev_event_work *switchdev_work = container_of(work, struct dsa_switchdev_event_work, work); - struct net_device *dev = switchdev_work->dev; - struct switchdev_notifier_fdb_info *fdb_info; - struct dsa_port *dp = dsa_slave_to_port(dev); + struct 
dsa_switch *ds = switchdev_work->ds; + struct dsa_port *dp = dsa_to_port(ds, switchdev_work->port); int err; rtnl_lock(); switch (switchdev_work->event) { case SWITCHDEV_FDB_ADD_TO_DEVICE: - fdb_info = &switchdev_work->fdb_info; - if (!fdb_info->added_by_user) - break; - - err = dsa_port_fdb_add(dp, fdb_info->addr, fdb_info->vid); + err = dsa_port_fdb_add(dp, switchdev_work->addr, + switchdev_work->vid); if (err) { - netdev_dbg(dev, "fdb add failed err=%d\n", err); + dev_dbg(ds->dev, "port %d fdb add failed err=%d\n", + dp->index, err); break; } - fdb_info->offloaded = true; - call_switchdev_notifiers(SWITCHDEV_FDB_OFFLOADED, dev, - &fdb_info->info, NULL); + dsa_fdb_offload_notify(switchdev_work); break; case SWITCHDEV_FDB_DEL_TO_DEVICE: - fdb_info = &switchdev_work->fdb_info; - if (!fdb_info->added_by_user) - break; - - err = dsa_port_fdb_del(dp, fdb_info->addr, fdb_info->vid); + err = dsa_port_fdb_del(dp, switchdev_work->addr, + switchdev_work->vid); if (err) { - netdev_dbg(dev, "fdb del failed err=%d\n", err); - dev_close(dev); + dev_dbg(ds->dev, "port %d fdb del failed err=%d\n", + dp->index, err); + if (dsa_is_user_port(ds, dp->index)) + dev_close(dp->slave); } break; } rtnl_unlock(); - kfree(switchdev_work->fdb_info.addr); kfree(switchdev_work); - dev_put(dev); -} - -static int -dsa_slave_switchdev_fdb_work_init(struct dsa_switchdev_event_work * - switchdev_work, - const struct switchdev_notifier_fdb_info * - fdb_info) -{ - memcpy(&switchdev_work->fdb_info, fdb_info, - sizeof(switchdev_work->fdb_info)); - switchdev_work->fdb_info.addr = kzalloc(ETH_ALEN, GFP_ATOMIC); - if (!switchdev_work->fdb_info.addr) - return -ENOMEM; - ether_addr_copy((u8 *)switchdev_work->fdb_info.addr, - fdb_info->addr); - return 0; + if (dsa_is_user_port(ds, dp->index)) + dev_put(dp->slave); } /* Called under rcu_read_lock() */ @@ -1987,7 +1975,9 @@ static int dsa_slave_switchdev_event(struct notifier_block *unused, unsigned long event, void *ptr) { struct net_device *dev = switchdev_notifier_info_to_dev(ptr); + const struct switchdev_notifier_fdb_info *fdb_info; struct dsa_switchdev_event_work *switchdev_work; + struct dsa_port *dp; int err; if (event == SWITCHDEV_PORT_ATTR_SET) { @@ -2000,20 +1990,32 @@ static int dsa_slave_switchdev_event(struct notifier_block *unused, if (!dsa_slave_dev_check(dev)) return NOTIFY_DONE; + dp = dsa_slave_to_port(dev); + switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC); if (!switchdev_work) return NOTIFY_BAD; INIT_WORK(&switchdev_work->work, dsa_slave_switchdev_event_work); - switchdev_work->dev = dev; + switchdev_work->ds = dp->ds; + switchdev_work->port = dp->index; switchdev_work->event = event; switch (event) { case SWITCHDEV_FDB_ADD_TO_DEVICE: /* fall through */ case SWITCHDEV_FDB_DEL_TO_DEVICE: - if (dsa_slave_switchdev_fdb_work_init(switchdev_work, ptr)) - goto err_fdb_work_init; + fdb_info = ptr; + + if (!fdb_info->added_by_user) { + kfree(switchdev_work); + return NOTIFY_OK; + } + + ether_addr_copy(switchdev_work->addr, + fdb_info->addr); + switchdev_work->vid = fdb_info->vid; + dev_hold(dev); break; default: @@ -2023,10 +2025,6 @@ static int dsa_slave_switchdev_event(struct notifier_block *unused, dsa_schedule_work(&switchdev_work->work); return NOTIFY_OK; - -err_fdb_work_init: - kfree(switchdev_work); - return NOTIFY_BAD; } static int dsa_slave_switchdev_blocking_event(struct notifier_block *unused, From patchwork Thu May 21 21:10:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Vladimir Oltean X-Patchwork-Id: 1295710 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=oIt2LPFS; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj2z239kz9sRW for ; Fri, 22 May 2020 07:11:07 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730628AbgEUVLG (ORCPT ); Thu, 21 May 2020 17:11:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730602AbgEUVLD (ORCPT ); Thu, 21 May 2020 17:11:03 -0400 Received: from mail-ed1-x542.google.com (mail-ed1-x542.google.com [IPv6:2a00:1450:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B416CC08C5C0 for ; Thu, 21 May 2020 14:11:02 -0700 (PDT) Received: by mail-ed1-x542.google.com with SMTP id h16so7726234eds.5 for ; Thu, 21 May 2020 14:11:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=GeNsWMw9QCbJxJekQNWDhdnyPJFizHZoxDKdXKPyLJg=; b=oIt2LPFSBWz91MxVqUdAKbX8Fxh4twQ7EnUh4mQtwEB8P/cJNfjef2/kTaaDy0H2cA I2A9E+w1wTIFKwYYVjeOq6PgVFA4VLhziQo/aEpOHaox3HJnQuN8x9eidmNZSJGS8o5N kwKYJcBf3FVLtt6+9NFBdhHd2O97j6XuP0ogkuO10MLHh/rlahKqTCKtPHw5YUSaeIwR QVYsTtTRBra1tV/pnM1XCiBUpJECQ1wY2T4vG97ngEHsRsHDZkiuK9f4OO6pTTCqyzMw 8ybOMnaEZFvqm4szBVIgMP1dwSQSCYII60JduIwDtdfvkQiitb4eT+nj4S0GdKzzKwwv o48w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=GeNsWMw9QCbJxJekQNWDhdnyPJFizHZoxDKdXKPyLJg=; b=O3RY9Tqt+3YJIpGdRX+ZfHH5pjR1wwfA6RzncnJ3bOrWDHDQfjn6XIx2f9wVJe3epT UppYQVJIKZYPluFd+fBIst0HcJOv++mqcJ/CfVsvJ5QbQvfgLuTlxnI0YopMZdNrNf2S CkC4qrQVOySuByi1UjO5nx4A2qgmP3JYXKj0H43pP7bzGhma4HAQ/YG1t4vqHSvOVTuQ PHzJ9pIc6h/ZWc7tvJucHuMox/IISzB60c1gbcG7iYyhlSBAmDgK3uuYrDwDGdDZx+x/ U/v0Q5OWAfB39RhtGfSX3KfITu0PQIZxBYUZ/CSkrmvBDnZEF2zGEvs7mw8/lBGtlbJZ ypcQ== X-Gm-Message-State: AOAM532X3BuQz92lwiyxk2LdGulGTxsN+jqBpfqfuAa4V5yF/G32Da8/ UE00edgXQdl8djkp0MYOxic= X-Google-Smtp-Source: ABdhPJxxOnkgXB6Wa8AgVQ9zL5fg3O6ZXIdn562OfTYLesEZHDR5oE2hM63NMjsuHBpwrSc3gtKBiA== X-Received: by 2002:a50:d50f:: with SMTP id u15mr573035edi.244.1590095461189; Thu, 21 May 2020 14:11:01 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.11.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:11:00 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, 
roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 08/13] net: dsa: add ability to program unicast and multicast filters for CPU port Date: Fri, 22 May 2020 00:10:31 +0300 Message-Id: <20200521211036.668624-9-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Florian Fainelli When the switch ports operate as individual network devices, the switch driver might have configured the switch to flood multicast all the way to the CPU port. This is really undesirable as it can lead to receiving a lot of unwanted traffic that the network stack needs to filter in software. For each valid multicast address, program it into the switch's MDB only when the host is interested in receiving such traffic, e.g: running a multicast application. For unicast filtering, consider that termination can only be done through the primary MAC address of each net device virtually corresponding to a switch port, as well as through upper interfaces (VLAN, bridge) that add their MAC address to the list of secondary unicast addresses of the switch net devices. For each such unicast address, install a reference-counted FDB entry towards the CPU port. Signed-off-by: Florian Fainelli Signed-off-by: Vladimir Oltean --- include/net/dsa.h | 6 ++ net/dsa/Kconfig | 1 + net/dsa/dsa2.c | 6 ++ net/dsa/slave.c | 182 ++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 195 insertions(+) diff --git a/include/net/dsa.h b/include/net/dsa.h index 50389772c597..7aa78884a5f2 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -261,6 +261,12 @@ struct dsa_switch { */ const struct dsa_switch_ops *ops; + /* + * {MAC, VLAN} addresses that are copied to the CPU. + */ + struct netdev_hw_addr_list uc; + struct netdev_hw_addr_list mc; + /* * Slave mii_bus and devices for the individual ports. */ diff --git a/net/dsa/Kconfig b/net/dsa/Kconfig index 739613070d07..d4644afdbdd7 100644 --- a/net/dsa/Kconfig +++ b/net/dsa/Kconfig @@ -9,6 +9,7 @@ menuconfig NET_DSA tristate "Distributed Switch Architecture" depends on HAVE_NET_DSA depends on BRIDGE || BRIDGE=n + depends on VLAN_8021Q_IVDF || VLAN_8021Q_IVDF=n select GRO_CELLS select NET_SWITCHDEV select PHYLINK diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c index 076908fdd29b..cd17554a912b 100644 --- a/net/dsa/dsa2.c +++ b/net/dsa/dsa2.c @@ -429,6 +429,9 @@ static int dsa_switch_setup(struct dsa_switch *ds) goto unregister_notifier; } + __hw_addr_init(&ds->mc); + __hw_addr_init(&ds->uc); + ds->setup = true; return 0; @@ -449,6 +452,9 @@ static void dsa_switch_teardown(struct dsa_switch *ds) if (!ds->setup) return; + __hw_addr_flush(&ds->mc); + __hw_addr_flush(&ds->uc); + if (ds->slave_mii_bus && ds->ops->phy_read) mdiobus_unregister(ds->slave_mii_bus); diff --git a/net/dsa/slave.c b/net/dsa/slave.c index d2072fbd22fe..2743d689f6b1 100644 --- a/net/dsa/slave.c +++ b/net/dsa/slave.c @@ -62,6 +62,158 @@ static int dsa_slave_get_iflink(const struct net_device *dev) return dsa_slave_to_master(dev)->ifindex; } +/* Add a static host MDB entry, corresponding to a slave multicast MAC address, + * to the CPU port. The MDB entry is reference-counted (4 slave ports listening + * on the same multicast MAC address will only call this function once). 
+ */ +static int dsa_upstream_sync_mdb_addr(struct net_device *dev, + const unsigned char *addr) +{ + struct switchdev_obj_port_mdb mdb; + + memset(&mdb, 0, sizeof(mdb)); + mdb.obj.id = SWITCHDEV_OBJ_ID_HOST_MDB; + mdb.obj.flags = SWITCHDEV_F_DEFER; + mdb.vid = vlan_dev_get_addr_vid(dev, addr); + ether_addr_copy(mdb.addr, addr); + + return switchdev_port_obj_add(dev, &mdb.obj, NULL); +} + +/* Delete a static host MDB entry, corresponding to a slave multicast MAC + * address, to the CPU port. The MDB entry is reference-counted (4 slave ports + * listening on the same multicast MAC address will only call this function + * once). + */ +static int dsa_upstream_unsync_mdb_addr(struct net_device *dev, + const unsigned char *addr) +{ + struct switchdev_obj_port_mdb mdb; + + memset(&mdb, 0, sizeof(mdb)); + mdb.obj.id = SWITCHDEV_OBJ_ID_HOST_MDB; + mdb.obj.flags = SWITCHDEV_F_DEFER; + mdb.vid = vlan_dev_get_addr_vid(dev, addr); + ether_addr_copy(mdb.addr, addr); + + return switchdev_port_obj_del(dev, &mdb.obj); +} + +static int dsa_slave_sync_mdb_addr(struct net_device *dev, + const unsigned char *addr) +{ + struct dsa_port *dp = dsa_slave_to_port(dev); + struct dsa_switch *ds = dp->ds; + int err; + + err = __hw_addr_add(&ds->mc, addr, dev->addr_len + dev->vid_len, + NETDEV_HW_ADDR_T_MULTICAST); + if (err) + return err; + + return __hw_addr_sync_dev(&ds->mc, dev, dsa_upstream_sync_mdb_addr, + dsa_upstream_unsync_mdb_addr); +} + +static int dsa_slave_unsync_mdb_addr(struct net_device *dev, + const unsigned char *addr) +{ + struct dsa_port *dp = dsa_slave_to_port(dev); + struct dsa_switch *ds = dp->ds; + int err; + + err = __hw_addr_del(&ds->mc, addr, dev->addr_len + dev->vid_len, + NETDEV_HW_ADDR_T_MULTICAST); + if (err) + return err; + + return __hw_addr_sync_dev(&ds->mc, dev, dsa_upstream_sync_mdb_addr, + dsa_upstream_unsync_mdb_addr); +} + +static void dsa_slave_switchdev_event_work(struct work_struct *work); + +static int dsa_upstream_fdb_addr(struct net_device *slave_dev, + const unsigned char *addr, + unsigned long event) +{ + int addr_len = slave_dev->addr_len + slave_dev->vid_len; + struct dsa_port *dp = dsa_slave_to_port(slave_dev); + u16 vid = vlan_dev_get_addr_vid(slave_dev, addr); + struct dsa_switchdev_event_work *switchdev_work; + + switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC); + if (!switchdev_work) + return -ENOMEM; + + INIT_WORK(&switchdev_work->work, dsa_slave_switchdev_event_work); + switchdev_work->ds = dp->ds; + switchdev_work->port = dsa_upstream_port(dp->ds, dp->index); + switchdev_work->event = event; + + memcpy(switchdev_work->addr, addr, addr_len); + switchdev_work->vid = vid; + + dev_hold(slave_dev); + dsa_schedule_work(&switchdev_work->work); + + return 0; +} + +/* Add a static FDB entry, corresponding to a slave unicast MAC address, + * to the CPU port. The FDB entry is reference-counted (4 slave ports having + * the same MAC address will only call this function once). + */ +static int dsa_upstream_sync_fdb_addr(struct net_device *slave_dev, + const unsigned char *addr) +{ + return dsa_upstream_fdb_addr(slave_dev, addr, + SWITCHDEV_FDB_ADD_TO_DEVICE); +} + +/* Remove a static FDB entry, corresponding to a slave unicast MAC address, + * from the CPU port. The FDB entry is reference-counted (the MAC address is + * only removed when there is no remaining slave port that uses it). 
+ */ +static int dsa_upstream_unsync_fdb_addr(struct net_device *slave_dev, + const unsigned char *addr) +{ + return dsa_upstream_fdb_addr(slave_dev, addr, + SWITCHDEV_FDB_DEL_TO_DEVICE); +} + +static int dsa_slave_sync_fdb_addr(struct net_device *dev, + const unsigned char *addr) +{ + struct dsa_port *dp = dsa_slave_to_port(dev); + struct dsa_switch *ds = dp->ds; + int err; + + err = __hw_addr_add(&ds->uc, addr, dev->addr_len + dev->vid_len, + NETDEV_HW_ADDR_T_UNICAST); + if (err) + return err; + + return __hw_addr_sync_dev(&ds->uc, dev, dsa_upstream_sync_fdb_addr, + dsa_upstream_unsync_fdb_addr); +} + +static int dsa_slave_unsync_fdb_addr(struct net_device *dev, + const unsigned char *addr) +{ + struct dsa_port *dp = dsa_slave_to_port(dev); + struct dsa_switch *ds = dp->ds; + int err; + + err = __hw_addr_del(&ds->uc, addr, dev->addr_len + dev->vid_len, + NETDEV_HW_ADDR_T_UNICAST); + if (err) + return err; + + return __hw_addr_sync_dev(&ds->uc, dev, dsa_upstream_sync_fdb_addr, + dsa_upstream_unsync_fdb_addr); +} + static int dsa_slave_open(struct net_device *dev) { struct net_device *master = dsa_slave_to_master(dev); @@ -76,6 +228,9 @@ static int dsa_slave_open(struct net_device *dev) if (err < 0) goto out; } + err = dsa_slave_sync_fdb_addr(dev, dev->dev_addr); + if (err < 0) + goto out; if (dev->flags & IFF_ALLMULTI) { err = dev_set_allmulti(master, 1); @@ -103,6 +258,7 @@ static int dsa_slave_open(struct net_device *dev) del_unicast: if (!ether_addr_equal(dev->dev_addr, master->dev_addr)) dev_uc_del(master, dev->dev_addr); + dsa_slave_unsync_fdb_addr(dev, dev->dev_addr); out: return err; } @@ -116,6 +272,9 @@ static int dsa_slave_close(struct net_device *dev) dev_mc_unsync(master, dev); dev_uc_unsync(master, dev); + __dev_mc_unsync(dev, dsa_slave_unsync_mdb_addr); + __dev_uc_unsync(dev, dsa_slave_unsync_fdb_addr); + if (dev->flags & IFF_ALLMULTI) dev_set_allmulti(master, -1); if (dev->flags & IFF_PROMISC) @@ -143,7 +302,17 @@ static void dsa_slave_change_rx_flags(struct net_device *dev, int change) static void dsa_slave_set_rx_mode(struct net_device *dev) { struct net_device *master = dsa_slave_to_master(dev); + struct dsa_port *dp = dsa_slave_to_port(dev); + + /* If the port is bridged, the bridge takes care of sending + * SWITCHDEV_OBJ_ID_HOST_MDB to program the host's MC filter + */ + if (netdev_mc_empty(dev) || dp->bridge_dev) + goto out; + __dev_mc_sync(dev, dsa_slave_sync_mdb_addr, dsa_slave_unsync_mdb_addr); +out: + __dev_uc_sync(dev, dsa_slave_sync_fdb_addr, dsa_slave_unsync_fdb_addr); dev_mc_sync(master, dev); dev_uc_sync(master, dev); } @@ -165,9 +334,15 @@ static int dsa_slave_set_mac_address(struct net_device *dev, void *a) if (err < 0) return err; } + err = dsa_slave_sync_fdb_addr(dev, addr->sa_data); + if (err < 0) + goto out; if (!ether_addr_equal(dev->dev_addr, master->dev_addr)) dev_uc_del(master, dev->dev_addr); + err = dsa_slave_unsync_fdb_addr(dev, dev->dev_addr); + if (err < 0) + goto out; out: ether_addr_copy(dev->dev_addr, addr->sa_data); @@ -1752,6 +1927,8 @@ int dsa_slave_create(struct dsa_port *port) else eth_hw_addr_inherit(slave_dev, master); slave_dev->priv_flags |= IFF_NO_QUEUE; + if (ds->ops->port_fdb_add && ds->ops->port_egress_floods) + slave_dev->priv_flags |= IFF_UNICAST_FLT; slave_dev->netdev_ops = &dsa_slave_netdev_ops; slave_dev->min_mtu = 0; if (ds->ops->port_max_mtu) @@ -1759,6 +1936,7 @@ int dsa_slave_create(struct dsa_port *port) else slave_dev->max_mtu = ETH_MAX_MTU; SET_NETDEV_DEVTYPE(slave_dev, &dsa_type); + vlan_dev_ivdf_set(slave_dev, 
true); netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one, NULL); @@ -1854,6 +2032,10 @@ static int dsa_slave_changeupper(struct net_device *dev, if (netif_is_bridge_master(info->upper_dev)) { if (info->linking) { + /* Remove existing MC addresses that might have been + * programmed + */ + __dev_mc_unsync(dev, dsa_slave_unsync_mdb_addr); err = dsa_port_bridge_join(dp, info->upper_dev); if (!err) dsa_bridge_mtu_normalization(dp); From patchwork Thu May 21 21:10:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295712 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=UCc4UpSl; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj3223Mvz9sRW for ; Fri, 22 May 2020 07:11:10 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730637AbgEUVLJ (ORCPT ); Thu, 21 May 2020 17:11:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33372 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730616AbgEUVLE (ORCPT ); Thu, 21 May 2020 17:11:04 -0400 Received: from mail-ej1-x644.google.com (mail-ej1-x644.google.com [IPv6:2a00:1450:4864:20::644]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D83A3C08C5C1 for ; Thu, 21 May 2020 14:11:03 -0700 (PDT) Received: by mail-ej1-x644.google.com with SMTP id e2so10496490eje.13 for ; Thu, 21 May 2020 14:11:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=mGbdBbKPkt+TWwR/Wiwv/J0av95DOAjSPkVX8TLkwrE=; b=UCc4UpSlotEsJ970oSQuDuFIMibtzsrnNh5A4CCYSe/eRiBP9Z2xPtqz87maHfqTdi 9Se6oMXGNSCkqLw+KA/X1v6RBSepMs4Cuh4C+c/jDDRwspAKJnF+RrF0dtZobxiIt5tQ t2kKdJ97DlOxyhJ6E8bePNx+MHzwAJSh3X2U1rRz1rgbb6whfe37S/G1O+iLBHyMVyCT /RT0VKxWkYIX7pnpuUfsgLgeG9ypbomtsnk7+DLZzMJtmf9qCD+0e75kMFsEIhWmBthu DqkRlv3gfFRv5aRFrtRpuAtjGhF1QXbKTammIHpa18gJLXL9VmAysEpEh4lp+PQjfaEf /B9w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=mGbdBbKPkt+TWwR/Wiwv/J0av95DOAjSPkVX8TLkwrE=; b=SPK+klDNC4jHy1Leih4ZOqCHQgEmT0LzbMm4D7KKT9fk2rdOr2+7ynNE2nL5xF1F27 Lo4Ar6uRZMtE92ShrkG9iuoUrtYEjgCmAoAWj0sFBydZJt7Dp38m5L8Gr6E3fArDuFSw 7Nnp4zzmk6ZECaHGfQZz7Jy1kdnAG00/zPbH+AdppnqJt0ntAhRkCml08BHBDyPk2CQR WbyGKuykpHnTHyrvEBjRbu7IRDK+G7XDICTKVn+VGkC534nDBOMcHvXpVfdBxaxlUVWD twWvMIQs1IMxJcO16eeHUnxofMyNXX8cIww2vyMKIAzqLhgFhpU1YcRbLCVjyqPY8bWf o7Hw== X-Gm-Message-State: AOAM530QZ1J6WKfc4sjuBCqRLwm2Nu5s/fh/nA9Gvz5mE+S1/7rccbIu ojCl7zIpYtRmuGxjRsIuDaM= X-Google-Smtp-Source: ABdhPJyIP1pvkqlcYsHHQhHdpNyh9j6Z35wmH7WNEI8gku/i6m1aoJ6miFm80wQs5d49ZH2CaDQF/w== X-Received: by 2002:a17:906:c838:: with SMTP id 
dd24mr5282027ejb.28.1590095462435; Thu, 21 May 2020 14:11:02 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.11.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:11:02 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 09/13] net: dsa: mroute: don't panic the kernel if called without the prepare phase Date: Fri, 22 May 2020 00:10:32 +0300 Message-Id: <20200521211036.668624-10-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Vladimir Oltean Currently, this function would check the port_egress_floods pointer only in the preparation phase, the assumption being that the caller wouldn't proceed to a second call since it returned -EOPNOTSUPP. If the function were to be called a second time though, the port_egress_floods pointer would not be checked and the driver would proceed to dereference it. Signed-off-by: Vladimir Oltean --- net/dsa/port.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/net/dsa/port.c b/net/dsa/port.c index e23ece229c7e..c4032f79225a 100644 --- a/net/dsa/port.c +++ b/net/dsa/port.c @@ -324,8 +324,11 @@ int dsa_port_mrouter(struct dsa_port *dp, bool mrouter, struct dsa_switch *ds = dp->ds; int port = dp->index; + if (!ds->ops->port_egress_floods) + return -EOPNOTSUPP; + if (switchdev_trans_ph_prepare(trans)) - return ds->ops->port_egress_floods ? 
0 : -EOPNOTSUPP; + return 0; return ds->ops->port_egress_floods(ds, port, true, mrouter); } From patchwork Thu May 21 21:10:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295714 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=UcROnpdM; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj3461Pjz9sSn for ; Fri, 22 May 2020 07:11:12 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730642AbgEUVLK (ORCPT ); Thu, 21 May 2020 17:11:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33378 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730625AbgEUVLF (ORCPT ); Thu, 21 May 2020 17:11:05 -0400 Received: from mail-ej1-x643.google.com (mail-ej1-x643.google.com [IPv6:2a00:1450:4864:20::643]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 035A1C061A0E for ; Thu, 21 May 2020 14:11:05 -0700 (PDT) Received: by mail-ej1-x643.google.com with SMTP id x1so10539622ejd.8 for ; Thu, 21 May 2020 14:11:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cvWXsQ7jwqJ+5wGrazkK/1Z+2D6k5GNrtr2FbBBt+yY=; b=UcROnpdM69lTWxOg2pEgMOJfybnFA7ITC28/tav5guwGrivEwg9wPSvirbwyi7tgLF 5wbcEgYyUJQS8SalyQYNAeV4473pNs9A7gkhJgWaNmi6sGbLfjWyziE3d/bU9xI06kMi wG6MgE2e4M1nnTtBYcVKwwgh3tlNIcqjgKhPQ6dGRW750oIxegriQuImSZGfhNQf9dL/ DWSWMIsKGBK70MCCPnGop3fw7haYGXHB22zx0j/IwjFg2pqxMNkPC3SxVmmJ5RfEjlUI mGofjlZubiMl72bKFHUCDjHwvayFtfDpcoPMfOaBeCib0y4/VPjE53ax2Tq7Q5r4mdeP yMCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cvWXsQ7jwqJ+5wGrazkK/1Z+2D6k5GNrtr2FbBBt+yY=; b=PjVwgsH1VMh3WH5hXQbwT78PtvlGjIDBswOmu7lHUP6N1o7SlXVBILxLC0XdjUoWnJ jGukHm1LxEEbL3qISYalpIOBWn8mJQpLQH172XEB3q1sMYyDYPtbf75fwX/F7Chch4gw gPeI893Z59P8cMKhHHPwp6DEylPRLtCaUTjfwPzcoFjjrPO9gCXpnGlgKunG4vwpzJbz 68Yw7NAZt1B27rv5+/a1eplbaTvcUyEK6Tp+PAUehoMS/p0GGvSarOfRvPofadx8rRnZ ey8HpnXOqyvgeIBqw0Eo7wtphhFFdYvWUbQqouT/jNMxRK8AXSBao1hbH9Rvjk2cfSFS fX6Q== X-Gm-Message-State: AOAM5305PnVULpcIYTO8AUmJD/dFXzC5o0aDVP5JO7bHvme37vwy0sVM 51vdruhwTOQGl6a7SAVhLbA= X-Google-Smtp-Source: ABdhPJzWwfT8ZW3r2p8XjEbFxYtJKKQ97sKfUZkkyj50whp8Dg7QC7QXP2oz4BngEUgh9rYxfTiFIg== X-Received: by 2002:a17:906:2b4f:: with SMTP id b15mr5140491ejg.64.1590095463687; Thu, 21 May 2020 14:11:03 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.11.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:11:03 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, 
f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 10/13] net: bridge: add port flags for host flooding Date: Fri, 22 May 2020 00:10:33 +0300 Message-Id: <20200521211036.668624-11-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Vladimir Oltean In cases where the bridge is offloaded by a switchdev, there are situations where we can optimize RX filtering towards the host. To be precise, the host only needs to do termination, which it can do by responding at the MAC addresses of the slave ports and of the bridge interface itself. But most notably, it doesn't need to do forwarding, so there is no need to see packets with unknown destination address. But there are, however, cases when a switchdev does need to flood to the CPU. Such an example is when the switchdev is bridged with a foreign interface, and since there is no offloaded datapath, packets need to pass through the CPU. Currently this is the only identified case, but it can be extended at any time. So far, switchdev implementers made driver-level assumptions, such as: this chip is never integrated in SoCs where it can be bridged with a foreign interface, so I'll just disable host flooding and save some CPU cycles. Or: I can never know what else can be bridged with this switchdev port, so I must leave host flooding enabled in any case. Let the bridge drive the host flooding decision, and pass it to switchdev via the same mechanism as the external flooding flags. 
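As an illustration (not part of this patch), a switchdev driver that already offloads bridge port flags could pick up the new host flood bits from the same SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS attribute it handles today. The sketch below is hypothetical: the foo_* type and helpers are made up for this example, only the BR_HOST_FLOOD, BR_HOST_MCAST_FLOOD and BR_HOST_BCAST_FLOOD bits come from this patch.

static int foo_port_bridge_flags(struct foo_switch *priv, int port,
				 unsigned long flags)
{
	int err;

	/* Existing behaviour: program egress flooding on this port */
	err = foo_set_egress_flood(priv, port,
				   !!(flags & BR_FLOOD),
				   !!(flags & BR_MCAST_FLOOD));
	if (err)
		return err;

	/* New behaviour: the bridge also says whether unknown unicast,
	 * multicast and broadcast traffic must keep reaching the host
	 * (CPU port), e.g. because this port is bridged with a foreign
	 * interface and forwarding happens in software.
	 */
	return foo_set_cpu_flood(priv,
				 !!(flags & BR_HOST_FLOOD),
				 !!(flags & BR_HOST_MCAST_FLOOD),
				 !!(flags & BR_HOST_BCAST_FLOOD));
}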
Signed-off-by: Vladimir Oltean --- include/linux/if_bridge.h | 3 +++ net/bridge/br_if.c | 40 +++++++++++++++++++++++++++++++++++++++ net/bridge/br_switchdev.c | 4 +++- 3 files changed, 46 insertions(+), 1 deletion(-) diff --git a/include/linux/if_bridge.h b/include/linux/if_bridge.h index b3a8d3054af0..6891a432862d 100644 --- a/include/linux/if_bridge.h +++ b/include/linux/if_bridge.h @@ -49,6 +49,9 @@ struct br_ip_list { #define BR_ISOLATED BIT(16) #define BR_MRP_AWARE BIT(17) #define BR_MRP_LOST_CONT BIT(18) +#define BR_HOST_FLOOD BIT(19) +#define BR_HOST_MCAST_FLOOD BIT(20) +#define BR_HOST_BCAST_FLOOD BIT(21) #define BR_DEFAULT_AGEING_TIME (300 * HZ) diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c index a0e9a7937412..aae59d1e619b 100644 --- a/net/bridge/br_if.c +++ b/net/bridge/br_if.c @@ -166,6 +166,45 @@ void br_manage_promisc(struct net_bridge *br) } } +static int br_manage_host_flood(struct net_bridge *br) +{ + const unsigned long mask = BR_HOST_FLOOD | BR_HOST_MCAST_FLOOD | + BR_HOST_BCAST_FLOOD; + struct net_bridge_port *p, *q; + + list_for_each_entry(p, &br->port_list, list) { + unsigned long flags = p->flags; + bool sw_bridging = false; + int err; + + list_for_each_entry(q, &br->port_list, list) { + if (p == q) + continue; + + if (!netdev_port_same_parent_id(p->dev, q->dev)) { + sw_bridging = true; + break; + } + } + + if (sw_bridging) + flags |= mask; + else + flags &= ~mask; + + if (flags == p->flags) + continue; + + err = br_switchdev_set_port_flag(p, flags, mask); + if (err) + return err; + + p->flags = flags; + } + + return 0; +} + int nbp_backup_change(struct net_bridge_port *p, struct net_device *backup_dev) { @@ -231,6 +270,7 @@ static void nbp_update_port_count(struct net_bridge *br) br->auto_cnt = cnt; br_manage_promisc(br); } + br_manage_host_flood(br); } static void nbp_delete_promisc(struct net_bridge_port *p) diff --git a/net/bridge/br_switchdev.c b/net/bridge/br_switchdev.c index 015209bf44aa..360806ac7463 100644 --- a/net/bridge/br_switchdev.c +++ b/net/bridge/br_switchdev.c @@ -56,7 +56,9 @@ bool nbp_switchdev_allowed_egress(const struct net_bridge_port *p, /* Flags that can be offloaded to hardware */ #define BR_PORT_FLAGS_HW_OFFLOAD (BR_LEARNING | BR_FLOOD | \ - BR_MCAST_FLOOD | BR_BCAST_FLOOD) + BR_MCAST_FLOOD | BR_BCAST_FLOOD | \ + BR_HOST_FLOOD | BR_HOST_MCAST_FLOOD | \ + BR_HOST_BCAST_FLOOD) int br_switchdev_set_port_flag(struct net_bridge_port *p, unsigned long flags, From patchwork Thu May 21 21:10:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295713 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=aADDdK9Z; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj341SGFz9sRW for ; Fri, 22 May 2020 07:11:12 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730645AbgEUVLL (ORCPT ); Thu, 21 
May 2020 17:11:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730629AbgEUVLG (ORCPT ); Thu, 21 May 2020 17:11:06 -0400 Received: from mail-ej1-x644.google.com (mail-ej1-x644.google.com [IPv6:2a00:1450:4864:20::644]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 56898C061A0E for ; Thu, 21 May 2020 14:11:06 -0700 (PDT) Received: by mail-ej1-x644.google.com with SMTP id yc10so10496607ejb.12 for ; Thu, 21 May 2020 14:11:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=XfeIlbPMvfl48ZJdzb2qJI/UmnPjkgbWlWaOp9Lgfqw=; b=aADDdK9Zq8YEivkVtecVd8Y9yD3lVGG0KuVm5pHnKZOp+sVJgeen7kAPLz5SNkXDgo xFJ6nvDqbcCecDvBn5LKD0FDCk8wbH0aF2DT077EKisI4FEhlYUDGanVFKBLlF8WkNVt LBtLiMBnWEIvaGn4rJZlvAq1F16cqOxPrsj/M88X7+d9N3bHWHbUH/EYdEU6nDrDMz4l fLIbIIWxolwQ1gZG/cDNHpek+I5KT5TnRxyAAhtCazOw13TNmB407fH9gr/g0ECwAGlM pAdmzI7YoZtC1gmSzHmZtOvBnUtIwRMpuKIkTDEGOcQR7OkutNmQ1ysyIAO5yW8f4bC7 hQNA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=XfeIlbPMvfl48ZJdzb2qJI/UmnPjkgbWlWaOp9Lgfqw=; b=YZS6xYc5izf626AdGr8eRel0DUpXyTLePwQybqWGTVyx5joMyH07IH9j4WEfJWjSQq nvWUZSi38TtdUm231rl4G+VcNcu1P1r2G3DGoc0VrsfwjHLCFcCt2XnXp8swaWWk//IX tCWxCsVs8Md0bAc6Szuhwk31PUFwVPvOCXAbOZf8iAOyo9HjZBiWOf6392oMfKiNEp50 oS+1Xa6OCXOZ8ciZF+aBihN/YBoQrdGLztu+I2zrravM7okK75N7a4cfscErTdzuNxgy fYXt6u92sW6Te6FAS/FmS2YLR1R7AQ5mmVbuHlh9xSdt8ro3jPlVFlQkDH6m/NMi2uzu cl/A== X-Gm-Message-State: AOAM531b/V7F7bUuySMS626RpiRYwMq4yjUlMRhqhXUxoCgunF6RNSDW XNfFk9taOSHuZcBbsBXl/n0= X-Google-Smtp-Source: ABdhPJzCRDkTx2q7yucZfiYL8/zNCHtT4usKaHLIaXgaUXQgEWF0OtQCZcQjwLW9k9wMYMz6P7kMsQ== X-Received: by 2002:a17:906:70c2:: with SMTP id g2mr5215796ejk.207.1590095465033; Thu, 21 May 2020 14:11:05 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.11.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:11:04 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 11/13] net: dsa: deal with new flooding port attributes from bridge Date: Fri, 22 May 2020 00:10:34 +0300 Message-Id: <20200521211036.668624-12-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Vladimir Oltean This refactors the DSA core handling of flooding attributes, since 3 more have been introduced (related to host flooding). In DSA, actually host flooding is the same as egress flooding of the CPU port. Note that there are some switches where flooding is a decision taken per {source port, destination port}. In DSA, it is only per egress port. 
For now, let's keep it that way, which means that we need to implement a "flood count" for the CPU port (keep it in flooding while there is at least one user port with the BR_HOST_FLOOD flag set). With this patch, RX filtering can be done for switch ports operating in standalone mode and in bridge mode with no foreign interfaces. When bridging with other net devices in the system, all unknown destinations are allowed to go to the CPU, where they continue to be forwarded in software. Signed-off-by: Vladimir Oltean --- include/net/dsa.h | 8 ++++ net/dsa/dsa_priv.h | 2 +- net/dsa/port.c | 113 +++++++++++++++++++++++++++++++++------------ 3 files changed, 93 insertions(+), 30 deletions(-) diff --git a/include/net/dsa.h b/include/net/dsa.h index 7aa78884a5f2..c256467f1f4a 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -198,6 +198,14 @@ struct dsa_port { struct devlink_port devlink_port; struct phylink *pl; struct phylink_config pl_config; + /* Operational state of flooding */ + int uc_flood_count; + int mc_flood_count; + bool uc_flood; + bool mc_flood; + /* Knobs from bridge */ + unsigned long br_flags; + bool mrouter; struct list_head list; diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h index 001668007efd..91cbaefc56b3 100644 --- a/net/dsa/dsa_priv.h +++ b/net/dsa/dsa_priv.h @@ -167,7 +167,7 @@ int dsa_port_mdb_del(const struct dsa_port *dp, const struct switchdev_obj_port_mdb *mdb); int dsa_port_pre_bridge_flags(const struct dsa_port *dp, unsigned long flags, struct switchdev_trans *trans); -int dsa_port_bridge_flags(const struct dsa_port *dp, unsigned long flags, +int dsa_port_bridge_flags(struct dsa_port *dp, unsigned long flags, struct switchdev_trans *trans); int dsa_port_mrouter(struct dsa_port *dp, bool mrouter, struct switchdev_trans *trans); diff --git a/net/dsa/port.c b/net/dsa/port.c index c4032f79225a..b527740d03a8 100644 --- a/net/dsa/port.c +++ b/net/dsa/port.c @@ -144,10 +144,7 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br) }; int err; - /* Set the flooding mode before joining the port in the switch */ - err = dsa_port_bridge_flags(dp, BR_FLOOD | BR_MCAST_FLOOD, NULL); - if (err) - return err; + dp->cpu_dp->mrouter = br_multicast_router(br); /* Here the interface is already bridged. Reflect the current * configuration so that drivers can program their chips accordingly. 
@@ -156,12 +153,6 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br) err = dsa_broadcast(DSA_NOTIFIER_BRIDGE_JOIN, &info); - /* The bridging is rolled back on error */ - if (err) { - dsa_port_bridge_flags(dp, 0, NULL); - dp->bridge_dev = NULL; - } - return err; } @@ -184,8 +175,12 @@ void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br) if (err) pr_err("DSA: failed to notify DSA_NOTIFIER_BRIDGE_LEAVE\n"); - /* Port is leaving the bridge, disable flooding */ - dsa_port_bridge_flags(dp, 0, NULL); + dp->cpu_dp->mrouter = false; + + /* Port is leaving the bridge, disable host flooding and enable + * egress flooding + */ + dsa_port_bridge_flags(dp, BR_FLOOD | BR_MCAST_FLOOD, NULL); /* Port left the bridge, put in BR_STATE_DISABLED by the bridge layer, * so allow it to be in BR_STATE_FORWARDING to be kept functional @@ -289,48 +284,108 @@ int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock, return dsa_port_notify(dp, DSA_NOTIFIER_AGEING_TIME, &info); } +static int dsa_port_update_flooding(struct dsa_port *dp, int uc_flood_count, + int mc_flood_count) +{ + struct dsa_switch *ds = dp->ds; + bool uc_flood_changed; + bool mc_flood_changed; + int port = dp->index; + bool uc_flood; + bool mc_flood; + int err; + + if (!ds->ops->port_egress_floods) + return 0; + + uc_flood = !!uc_flood_count; + mc_flood = dp->mrouter; + + uc_flood_changed = dp->uc_flood ^ uc_flood; + mc_flood_changed = dp->mc_flood ^ mc_flood; + + if (uc_flood_changed || mc_flood_changed) { + err = ds->ops->port_egress_floods(ds, port, uc_flood, mc_flood); + if (err) + return err; + } + + dp->uc_flood_count = uc_flood_count; + dp->mc_flood_count = mc_flood_count; + dp->uc_flood = uc_flood; + dp->mc_flood = mc_flood; + + return 0; +} + int dsa_port_pre_bridge_flags(const struct dsa_port *dp, unsigned long flags, struct switchdev_trans *trans) { + const unsigned long mask = BR_FLOOD | BR_MCAST_FLOOD | BR_BCAST_FLOOD | + BR_HOST_FLOOD | BR_HOST_MCAST_FLOOD | + BR_HOST_BCAST_FLOOD; struct dsa_switch *ds = dp->ds; - if (!ds->ops->port_egress_floods || - (flags & ~(BR_FLOOD | BR_MCAST_FLOOD))) - return -EINVAL; + if (!ds->ops->port_egress_floods || (flags & ~mask)) + return -EOPNOTSUPP; return 0; } -int dsa_port_bridge_flags(const struct dsa_port *dp, unsigned long flags, +int dsa_port_bridge_flags(struct dsa_port *dp, unsigned long flags, struct switchdev_trans *trans) { - struct dsa_switch *ds = dp->ds; - int port = dp->index; + struct dsa_port *cpu_dp = dp->cpu_dp; + int cpu_uc_flood_count; + int cpu_mc_flood_count; + unsigned long changed; + int uc_flood_count; + int mc_flood_count; int err = 0; if (switchdev_trans_ph_prepare(trans)) return 0; - if (ds->ops->port_egress_floods) - err = ds->ops->port_egress_floods(ds, port, flags & BR_FLOOD, - flags & BR_MCAST_FLOOD); + uc_flood_count = dp->uc_flood_count; + mc_flood_count = dp->mc_flood_count; + cpu_uc_flood_count = cpu_dp->uc_flood_count; + cpu_mc_flood_count = cpu_dp->mc_flood_count; - return err; + changed = dp->br_flags ^ flags; + + if (changed & BR_FLOOD) + uc_flood_count += (flags & BR_FLOOD) ? 1 : -1; + if (changed & BR_MCAST_FLOOD) + mc_flood_count += (flags & BR_MCAST_FLOOD) ? 1 : -1; + if (changed & BR_HOST_FLOOD) + cpu_uc_flood_count += (flags & BR_HOST_FLOOD) ? 1 : -1; + if (changed & BR_HOST_MCAST_FLOOD) + cpu_mc_flood_count += (flags & BR_HOST_MCAST_FLOOD) ? 
1 : -1; + + err = dsa_port_update_flooding(dp, uc_flood_count, mc_flood_count); + if (err && err != -EOPNOTSUPP) + return err; + + err = dsa_port_update_flooding(cpu_dp, cpu_uc_flood_count, + cpu_mc_flood_count); + if (err && err != -EOPNOTSUPP) + return err; + + dp->br_flags = flags; + + return 0; } int dsa_port_mrouter(struct dsa_port *dp, bool mrouter, struct switchdev_trans *trans) { - struct dsa_switch *ds = dp->ds; - int port = dp->index; - - if (!ds->ops->port_egress_floods) - return -EOPNOTSUPP; - if (switchdev_trans_ph_prepare(trans)) return 0; - return ds->ops->port_egress_floods(ds, port, true, mrouter); + dp->mrouter = mrouter; + + return dsa_port_update_flooding(dp, dp->uc_flood_count, + dp->mc_flood_count); } int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu, From patchwork Thu May 21 21:10:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295715 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=QFXa7pJ3; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj382HnSz9sRW for ; Fri, 22 May 2020 07:11:16 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730651AbgEUVLP (ORCPT ); Thu, 21 May 2020 17:11:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33388 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730566AbgEUVLI (ORCPT ); Thu, 21 May 2020 17:11:08 -0400 Received: from mail-ed1-x542.google.com (mail-ed1-x542.google.com [IPv6:2a00:1450:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7C774C061A0E for ; Thu, 21 May 2020 14:11:07 -0700 (PDT) Received: by mail-ed1-x542.google.com with SMTP id l25so7726910edj.4 for ; Thu, 21 May 2020 14:11:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=RbB6CXL0qgQWxXTRExMkrz+2z7uh3PUfZ9RjInw+Vf8=; b=QFXa7pJ3teMwRJt7Zx0QraCNc5oPPIJmEA0fviwzRl3B+8vNVVKhUBrt51N4KUbN9H pz+RkmSck8H0NI2k/SlfIxYdbXx4ohEkTh8dSiaOfAzMctOSwn2LkGrWMHP5uCf/qxgZ vJoHzcym/jmOkzXr/GXrXXaUWu15a++I5lbilwt9T4p5d7R+dZ2PSe9BlrU9XCaKUqLs kQxkWKxnU2kp/twFqDViJ2k5KMFnqYa9ICY5y0VsnltUiIo+UbYj+rON16gatr0uolvA psoXzL4TKBu7IfcDPy67hKfqXdHeSgxwqRvi/6q6wgwwSipm1KOI4sNOOXq9bBzb3F7G Ndhw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=RbB6CXL0qgQWxXTRExMkrz+2z7uh3PUfZ9RjInw+Vf8=; b=MMKIKzyUTQRG8WSvbLKOfN1LyZlTGOeDhX47zOXD/vKgxkBm3jGAl2Eba+hwd1DGZP CSIsNXVlt5YqLGMoPlX4HqmrdshW/oeIOH7JbL26ntDS4DSMZ+QyOHoCl0kE1+3vQVqP lILmQhV57Nfn7mEyBJkkOG/ELgTxhHDuUz8U2H/nBWJ6omb16WEXdDGwWIoe69/onjUB 
cwlkBqO89seYKHvU6Ze221faY1P85LoU5m+wc2huhfsh3b0fvESaWEMIVpURacXLCD5t TYD7/7gOY3SKB7HHvWfCf8KuWZyoKaN3t1C+boOm2naGjPgREk5QNUFk/4gNz+8USnlg /W4w== X-Gm-Message-State: AOAM532DjXpo2xcCsZWJ1C6qYcAszeQ1Bq4UsuK8AGj0y/i87QHiYcQj TIR0KU5nLSw2APEu7dFINPg= X-Google-Smtp-Source: ABdhPJy1cRHIiCJ3jatnT9eOB9gzd+WRTzSMXxdaoyMRUsJQ6lYoBNi0At4MXbENZZZpm7I45E3dMg== X-Received: by 2002:a50:9f66:: with SMTP id b93mr547968edf.376.1590095466236; Thu, 21 May 2020 14:11:06 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.11.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:11:05 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 12/13] net: dsa: treat switchdev notifications for multicast router connected to port Date: Fri, 22 May 2020 00:10:35 +0300 Message-Id: <20200521211036.668624-13-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Vladimir Oltean Similar to the "bridge is multicast router" case, unknown multicast should be flooded by this bridge to the ports where a multicast router is connected. Signed-off-by: Vladimir Oltean --- net/dsa/slave.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/net/dsa/slave.c b/net/dsa/slave.c index 2743d689f6b1..c023f1120736 100644 --- a/net/dsa/slave.c +++ b/net/dsa/slave.c @@ -467,7 +467,12 @@ static int dsa_slave_port_attr_set(struct net_device *dev, case SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS: ret = dsa_port_bridge_flags(dp, attr->u.brport_flags, trans); break; + case SWITCHDEV_ATTR_ID_PORT_MROUTER: + /* A multicast router is connected to this external port */ + ret = dsa_port_mrouter(dp, attr->u.mrouter, trans); + break; case SWITCHDEV_ATTR_ID_BRIDGE_MROUTER: + /* The local bridge is a multicast router */ ret = dsa_port_mrouter(dp->cpu_dp, attr->u.mrouter, trans); break; default: From patchwork Thu May 21 21:10:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vladimir Oltean X-Patchwork-Id: 1295716 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ezp1c2qE; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 49Sj386cFyz9sSn for ; Fri, 22 May 2020 07:11:16 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730649AbgEUVLO (ORCPT ); Thu, 21 May 2020 17:11:14 -0400 Received: from 
lindbergh.monkeyblade.net ([23.128.96.19]:33394 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730638AbgEUVLJ (ORCPT ); Thu, 21 May 2020 17:11:09 -0400 Received: from mail-ed1-x542.google.com (mail-ed1-x542.google.com [IPv6:2a00:1450:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB119C05BD43 for ; Thu, 21 May 2020 14:11:08 -0700 (PDT) Received: by mail-ed1-x542.google.com with SMTP id f13so7112301edr.13 for ; Thu, 21 May 2020 14:11:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=gfAmpoAK+70saBDUCv/DNe6j3cbIVcrSOVCUJ8rFE8Q=; b=ezp1c2qEW36qM305ql5VYIgBb7KpnhPWc50s8Zon6CdLFkaR+0HXojW4c0bCAvuhwd vklx8PLEviyHAEQyTX+yVuqC0/eYcmez1UMfyW0wKc3wjM2X2Tz+Gru7RcIPoeFhAU4y RQnDWT+Tz5YBtDQ5Ajq1XxW0iBqyWYIXw3hgQiRkKhU7IozWGrcmjLnCitrmlg596/xu gYfmRDNfzZkmhvvFfSNDA2PG4XgOkugUr609HUT0HZsPe/pF7N3+nc5jOPVQzLW8OOn+ bDHrT6S6blMIUXBQG3+tX7l/QMl/lsX3gKvwKdpxmQdOrxpet6L7KpjYG6Jg9ApsM4Ic WUZQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=gfAmpoAK+70saBDUCv/DNe6j3cbIVcrSOVCUJ8rFE8Q=; b=ekaXiH0oc//XAnqI6t8+bAjy6QDHn7iqALbxAa0vBItb+t5ZiHj7YLm922am3Xbwh+ QJRJi3csTIank8kzGevuSB+Hj8mAzfVrCueT+BZvcu+5ienU1NJI1JnPE1ZnalWHLFJF HA5ddaGonT257eQDpSYYVyk3Y05Sjs+Zx5o77gEVuOqJDfXwVtz0KSwWwgDCXvlP+eCp b0Y6tAuEhvnRqRtsdjd5yUIhocCMNMeUFwjBF2bXqdnURadsmkY6HTIEMhCwkqDCokwL 00ZeVuiSYUrA2wcMJNLHTcrkpU1b2jIMCuVWFrxvOtM8Ip/Gkm4OAwVDc2M5M20jpuDC uytQ== X-Gm-Message-State: AOAM5321dgotKegQ+WnulS0ApHRkFfNzV4hXXm79WpdgxRGSISsajgf3 W7OF4tmzpCh0iOQEsf0bLok= X-Google-Smtp-Source: ABdhPJw8TpUcklt+Bb4wHhMc3/uHgLT6slAFpzxxVjpRszHeKBOz/cK+DsasWi10/eqDPx56N248Fg== X-Received: by 2002:aa7:d850:: with SMTP id f16mr515502eds.365.1590095467453; Thu, 21 May 2020 14:11:07 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.11.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:11:07 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 13/13] net: dsa: wire up multicast IGMP snooping attribute notification Date: Fri, 22 May 2020 00:10:36 +0300 Message-Id: <20200521211036.668624-14-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Florian Fainelli The bridge can at runtime be configured with or without IGMP snooping enabled but we were not processing the switchdev attribute that notifies about that toggle, do this now. Drivers that support frame parsing up to IGMP/MLD should enable trapping of those frames towards the CPU, while pure L2 switches should trap the entire range of 01:00:5E:00:00:01 to 01:00:5E:00:00:FF. 
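As an illustration (not part of this patch), a pure L2 switch without IGMP/MLD frame parsing could implement the snooping toggle by trapping the reserved multicast block to the CPU. The sketch below is hypothetical: the foo_* type and the foo_trap_add()/foo_trap_del() helpers are made up for this example, only the .port_igmp_mld_snoop signature comes from this series.

static int foo_port_igmp_mld_snoop(struct dsa_switch *ds, int port,
				   bool enable)
{
	struct foo_switch *priv = ds->priv;
	u8 addr[ETH_ALEN] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x00 };
	int i, err;

	/* Trap 01:00:5e:00:00:01 - 01:00:5e:00:00:ff towards the CPU while
	 * snooping is enabled, and remove the traps again when the bridge
	 * disables snooping.
	 */
	for (i = 1; i <= 0xff; i++) {
		addr[5] = i;
		err = enable ? foo_trap_add(priv, port, addr) :
			       foo_trap_del(priv, port, addr);
		if (err)
			return err;
	}

	return 0;
}

A switch that can parse IGMP/MLD up to the protocol level would instead toggle a single protocol trap here rather than walking the whole address range.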
Signed-off-by: Florian Fainelli Signed-off-by: Vladimir Oltean --- include/net/dsa.h | 3 +++ net/dsa/dsa_priv.h | 13 +++++++++++++ net/dsa/port.c | 47 +++++++++++++++++++++++++++++++++++++++++++++- net/dsa/slave.c | 3 +++ net/dsa/switch.c | 36 +++++++++++++++++++++++++++++++++++ 5 files changed, 101 insertions(+), 1 deletion(-) diff --git a/include/net/dsa.h b/include/net/dsa.h index c256467f1f4a..3f7c1f56908c 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -205,6 +205,7 @@ struct dsa_port { bool mc_flood; /* Knobs from bridge */ unsigned long br_flags; + bool mc_disabled; bool mrouter; struct list_head list; @@ -564,6 +565,8 @@ struct dsa_switch_ops { const struct switchdev_obj_port_mdb *mdb); int (*port_mdb_del)(struct dsa_switch *ds, int port, const struct switchdev_obj_port_mdb *mdb); + int (*port_igmp_mld_snoop)(struct dsa_switch *ds, int port, + bool enable); /* * RXNFC */ diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h index 91cbaefc56b3..0761f2fff994 100644 --- a/net/dsa/dsa_priv.h +++ b/net/dsa/dsa_priv.h @@ -24,6 +24,7 @@ enum { DSA_NOTIFIER_VLAN_ADD, DSA_NOTIFIER_VLAN_DEL, DSA_NOTIFIER_MTU, + DSA_NOTIFIER_MC_DISABLED, }; /* DSA_NOTIFIER_AGEING_TIME */ @@ -72,6 +73,14 @@ struct dsa_notifier_mtu_info { int mtu; }; +/* DSA_NOTIFIER_MC_DISABLED */ +struct dsa_notifier_mc_disabled_info { + int tree_index; + int sw_index; + struct net_device *br; + bool mc_disabled; +}; + struct dsa_switchdev_event_work { struct dsa_switch *ds; int port; @@ -150,6 +159,10 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br); void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br); int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering, struct switchdev_trans *trans); +int dsa_port_multicast_toggle(struct dsa_switch *ds, int port, + bool mc_disabled); +int dsa_port_mc_disabled(struct dsa_port *dp, bool mc_disabled, + struct switchdev_trans *trans); bool dsa_port_skip_vlan_configuration(struct dsa_port *dp); int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock, struct switchdev_trans *trans); diff --git a/net/dsa/port.c b/net/dsa/port.c index b527740d03a8..962f25ee8cf2 100644 --- a/net/dsa/port.c +++ b/net/dsa/port.c @@ -144,6 +144,7 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br) }; int err; + dp->cpu_dp->mc_disabled = !br_multicast_enabled(br); dp->cpu_dp->mrouter = br_multicast_router(br); /* Here the interface is already bridged. Reflect the current @@ -175,6 +176,7 @@ void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br) if (err) pr_err("DSA: failed to notify DSA_NOTIFIER_BRIDGE_LEAVE\n"); + dp->cpu_dp->mc_disabled = true; dp->cpu_dp->mrouter = false; /* Port is leaving the bridge, disable host flooding and enable @@ -299,7 +301,17 @@ static int dsa_port_update_flooding(struct dsa_port *dp, int uc_flood_count, return 0; uc_flood = !!uc_flood_count; - mc_flood = dp->mrouter; + /* As explained in commit 8ecd4591e761 ("mlxsw: spectrum: Add an option + * to flood mc by mc_router_port"), the decision whether to flood a + * multicast packet to a port depends on 3 flags: mc_disabled, + * mc_router_port, mc_flood. + * If mc_disabled is on, the port will be flooded according to + * mc_flood, otherwise, according to mc_router_port. 
+ */ + if (dp->mc_disabled) + mc_flood = !!mc_flood_count; + else + mc_flood = dp->mrouter; uc_flood_changed = dp->uc_flood ^ uc_flood; mc_flood_changed = dp->mc_flood ^ mc_flood; @@ -388,6 +400,39 @@ int dsa_port_mrouter(struct dsa_port *dp, bool mrouter, dp->mc_flood_count); } +int dsa_port_multicast_toggle(struct dsa_switch *ds, int port, bool mc_disabled) +{ + struct dsa_port *dp = dsa_to_port(ds, port); + int err; + + if (ds->ops->port_igmp_mld_snoop) { + err = ds->ops->port_igmp_mld_snoop(ds, port, !mc_disabled); + if (err) + return err; + } + + dp->mc_disabled = mc_disabled; + + return dsa_port_update_flooding(dp, dp->uc_flood_count, + dp->mc_flood_count); +} + +int dsa_port_mc_disabled(struct dsa_port *dp, bool mc_disabled, + struct switchdev_trans *trans) +{ + struct dsa_notifier_mc_disabled_info info = { + .tree_index = dp->ds->dst->index, + .sw_index = dp->ds->index, + .br = dp->bridge_dev, + .mc_disabled = mc_disabled, + }; + + if (switchdev_trans_ph_prepare(trans)) + return 0; + + return dsa_broadcast(DSA_NOTIFIER_MC_DISABLED, &info); +} + int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu, bool propagate_upstream) { diff --git a/net/dsa/slave.c b/net/dsa/slave.c index c023f1120736..c0929613f1b4 100644 --- a/net/dsa/slave.c +++ b/net/dsa/slave.c @@ -475,6 +475,9 @@ static int dsa_slave_port_attr_set(struct net_device *dev, /* The local bridge is a multicast router */ ret = dsa_port_mrouter(dp->cpu_dp, attr->u.mrouter, trans); break; + case SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED: + ret = dsa_port_mc_disabled(dp, attr->u.mc_disabled, trans); + break; default: ret = -EOPNOTSUPP; break; diff --git a/net/dsa/switch.c b/net/dsa/switch.c index 86c8dc5c32a0..9d4f8fd9cf10 100644 --- a/net/dsa/switch.c +++ b/net/dsa/switch.c @@ -337,6 +337,39 @@ static int dsa_switch_vlan_del(struct dsa_switch *ds, return 0; } +static bool +dsa_switch_mc_disabled_match(struct dsa_switch *ds, int port, + struct dsa_notifier_mc_disabled_info *info) +{ + struct dsa_port *dp = dsa_to_port(ds, port); + struct dsa_switch_tree *dst = ds->dst; + + if (dp->bridge_dev == info->br) + return true; + + if (dst->index == info->tree_index && ds->index == info->sw_index) + return dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port); + + return false; +} + +static int dsa_switch_mc_disabled(struct dsa_switch *ds, + struct dsa_notifier_mc_disabled_info *info) +{ + bool mc_disabled = info->mc_disabled; + int port, err; + + for (port = 0; port < ds->num_ports; port++) { + if (dsa_switch_mc_disabled_match(ds, port, info)) { + err = dsa_port_multicast_toggle(ds, port, mc_disabled); + if (err) + return err; + } + } + + return 0; +} + static int dsa_switch_event(struct notifier_block *nb, unsigned long event, void *info) { @@ -374,6 +407,9 @@ static int dsa_switch_event(struct notifier_block *nb, case DSA_NOTIFIER_MTU: err = dsa_switch_mtu(ds, info); break; + case DSA_NOTIFIER_MC_DISABLED: + err = dsa_switch_mc_disabled(ds, info); + break; default: err = -EOPNOTSUPP; break;