From patchwork Thu Sep 29 11:03:32 2016
X-Patchwork-Submitter: Shmulik Ladkani
X-Patchwork-Id: 676605
X-Patchwork-Delegate: davem@davemloft.net
From: Shmulik Ladkani
To: David Miller
Cc: Jamal Hadi Salim, WANG Cong, Eric Dumazet, Daniel Borkmann,
    netdev@vger.kernel.org, Shmulik Ladkani, Eric Dumazet
Subject: [PATCH v3 net-next 4/4] net/sched: act_mirred: Implement ingress actions
Date: Thu, 29 Sep 2016 14:03:32 +0300
Message-Id: <1475147012-15538-5-git-send-email-shmulik.ladkani@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1475147012-15538-1-git-send-email-shmulik.ladkani@gmail.com>
References: <1475147012-15538-1-git-send-email-shmulik.ladkani@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Up until now, 'action mirred' supported only egress actions (either
TCA_EGRESS_REDIR or TCA_EGRESS_MIRROR).

This patch implements the corresponding ingress actions
TCA_INGRESS_REDIR and TCA_INGRESS_MIRROR.

This allows attaching filters whose target is to hand matching skbs
into the rx processing of a specified device.
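[Editor's note, not part of the patch: for reference, these are the four
eaction values involved, as defined in include/uapi/linux/tc_act/tc_mirred.h;
the ingress pair was already present in the uapi header, just unimplemented
until this patch:]

    /* quoted from include/uapi/linux/tc_act/tc_mirred.h */
    #define TCA_EGRESS_REDIR    1  /* packet redirect to EGRESS */
    #define TCA_EGRESS_MIRROR   2  /* mirror packet to EGRESS */
    #define TCA_INGRESS_REDIR   3  /* packet redirect to INGRESS */
    #define TCA_INGRESS_MIRROR  4  /* mirror packet to INGRESS */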
Signed-off-by: Shmulik Ladkani
Cc: Jamal Hadi Salim
Cc: Eric Dumazet
---
v3: Addressed non-coherency caused by reading m->tcfm_eaction multiple
    times, as spotted by Eric Dumazet

 net/sched/act_mirred.c | 51 ++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 45 insertions(+), 6 deletions(-)

diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
index 69dcce8c75..22dcfd68e6 100644
--- a/net/sched/act_mirred.c
+++ b/net/sched/act_mirred.c
@@ -33,6 +33,25 @@
 static LIST_HEAD(mirred_list);
 static DEFINE_SPINLOCK(mirred_list_lock);
 
+static bool tcf_mirred_is_act_redirect(int action)
+{
+	return action == TCA_EGRESS_REDIR || action == TCA_INGRESS_REDIR;
+}
+
+static u32 tcf_mirred_act_direction(int action)
+{
+	switch (action) {
+	case TCA_EGRESS_REDIR:
+	case TCA_EGRESS_MIRROR:
+		return AT_EGRESS;
+	case TCA_INGRESS_REDIR:
+	case TCA_INGRESS_MIRROR:
+		return AT_INGRESS;
+	default:
+		BUG();
+	}
+}
+
 static void tcf_mirred_release(struct tc_action *a, int bind)
 {
 	struct tcf_mirred *m = to_mirred(a);
@@ -97,6 +116,8 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
 	switch (parm->eaction) {
 	case TCA_EGRESS_MIRROR:
 	case TCA_EGRESS_REDIR:
+	case TCA_INGRESS_REDIR:
+	case TCA_INGRESS_MIRROR:
 		break;
 	default:
 		if (exists)
@@ -156,15 +177,20 @@ static int tcf_mirred(struct sk_buff *skb, const struct tc_action *a,
 		      struct tcf_result *res)
 {
 	struct tcf_mirred *m = to_mirred(a);
+	bool m_mac_header_xmit;
 	struct net_device *dev;
 	struct sk_buff *skb2;
-	int retval, err;
+	int retval, err = 0;
+	int m_eaction;
+	int mac_len;
 	u32 at;
 
 	tcf_lastuse_update(&m->tcf_tm);
 	bstats_cpu_update(this_cpu_ptr(m->common.cpu_bstats), skb);
 
 	rcu_read_lock();
+	m_mac_header_xmit = READ_ONCE(m->tcfm_mac_header_xmit);
+	m_eaction = READ_ONCE(m->tcfm_eaction);
 	retval = READ_ONCE(m->tcf_action);
 	dev = rcu_dereference(m->tcfm_dev);
 	if (unlikely(!dev)) {
@@ -183,23 +209,36 @@ static int tcf_mirred(struct sk_buff *skb, const struct tc_action *a,
 	if (!skb2)
 		goto out;
 
-	if (!(at & AT_EGRESS)) {
-		if (m->tcfm_mac_header_xmit)
+	/* If the action's target direction differs from the filter's
+	 * direction, and the device expects a mac header on xmit, then
+	 * a mac push/pull is needed.
+	 */
+	if (at != tcf_mirred_act_direction(m_eaction) && m_mac_header_xmit) {
+		if (at & AT_EGRESS) {
+			/* caught at egress, act ingress: pull mac */
+			mac_len = skb_network_header(skb) - skb_mac_header(skb);
+			skb_pull_rcsum(skb2, mac_len);
+		} else {
+			/* caught at ingress, act egress: push mac */
 			skb_push_rcsum(skb2, skb->mac_len);
+		}
 	}
 
 	/* mirror is always swallowed */
-	if (m->tcfm_eaction != TCA_EGRESS_MIRROR)
+	if (tcf_mirred_is_act_redirect(m_eaction))
 		skb2->tc_verd = SET_TC_FROM(skb2->tc_verd, at);
 
 	skb2->skb_iif = skb->dev->ifindex;
 	skb2->dev = dev;
-	err = dev_queue_xmit(skb2);
+	if (tcf_mirred_act_direction(m_eaction) & AT_EGRESS)
+		err = dev_queue_xmit(skb2);
+	else
+		netif_receive_skb(skb2);
 
 	if (err) {
 out:
 		qstats_overlimit_inc(this_cpu_ptr(m->common.cpu_qstats));
-		if (m->tcfm_eaction != TCA_EGRESS_MIRROR)
+		if (tcf_mirred_is_act_redirect(m_eaction))
 			retval = TC_ACT_SHOT;
 	}
 	rcu_read_unlock();
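[Editor's note, not part of the patch: below is a minimal standalone C
sketch of the decision table implemented by the mac push/pull hunk above.
The names mac_adjust and caught_at are hypothetical, introduced only for
this illustration; in the kernel the same logic is expressed through
tcf_mirred_act_direction() and the cached m_mac_header_xmit flag:]

    #include <stdbool.h>
    #include <stdio.h>

    enum direction { EGRESS, INGRESS };

    /* Mirrors the decision in tcf_mirred(): the clone's mac header is
     * adjusted only when the packet crosses directions AND the target
     * device expects a mac header on xmit. 'caught_at' is where the
     * filter ran; 'act' is the mirred action's target direction.
     */
    static const char *mac_adjust(enum direction caught_at, enum direction act,
                                  bool mac_header_xmit)
    {
            if (caught_at == act || !mac_header_xmit)
                    return "none";
            /* caught at egress, act ingress: pull mac; the reverse pushes */
            return caught_at == EGRESS ? "pull" : "push";
    }

    int main(void)
    {
            /* all four combinations, with a mac-header-expecting device */
            printf("egress  -> egress : %s\n", mac_adjust(EGRESS, EGRESS, true));
            printf("egress  -> ingress: %s\n", mac_adjust(EGRESS, INGRESS, true));
            printf("ingress -> egress : %s\n", mac_adjust(INGRESS, EGRESS, true));
            printf("ingress -> ingress: %s\n", mac_adjust(INGRESS, INGRESS, true));
            return 0;
    }

[Compiled and run, this prints none/pull/push/none for the four
combinations, matching the "caught at ..., act ..." comments in the hunk.]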