From patchwork Mon Mar 9 17:14:32 2015
X-Patchwork-Submitter: Pablo Neira Ayuso
X-Patchwork-Id: 448115
X-Patchwork-Delegate: pablo@netfilter.org
From: Pablo Neira Ayuso <pablo@netfilter.org>
To: netfilter-devel@vger.kernel.org
Cc: davem@davemloft.net, netdev@vger.kernel.org
Subject: [PATCH 09/12] bridge: move mac header copying into br_netfilter
Date: Mon, 9 Mar 2015 18:14:32 +0100
Message-Id: <1425921275-9171-10-git-send-email-pablo@netfilter.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1425921275-9171-1-git-send-email-pablo@netfilter.org>
References: <1425921275-9171-1-git-send-email-pablo@netfilter.org>

From: Florian Westphal

The MAC header only has to be copied back into the skb for fragments
generated by ip_fragment(), and that only happens for bridge-forwarded
packets with nf-call-iptables=1 and nf_defrag active.
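To make the reasoning concrete, here is a small userspace sketch, not kernel
code: all names in it (frag_buf, saved_mac, push_frag_xmit, ...) are made up
for illustration. ip_fragment() hands its output callback fragments that start
at the IP header, so the Ethernet header that br_netfilter saved earlier has to
be copied back in front of each fragment before the bridge can transmit it; an
ordinarily forwarded frame still carries its MAC header and needs no copy.

/*
 * Illustrative model only (assumed names, not kernel APIs): shows why
 * only the fragmentation path needs the saved MAC header copied back.
 */
#include <stdio.h>
#include <string.h>

#define ETH_HLEN 14

/* Stand-in for skb->nf_bridge->data: the MAC header saved by
 * br_netfilter before the packet entered the IPv4 stack. */
struct saved_mac {
	unsigned char data[ETH_HLEN];
};

/* Stand-in for a fragment as ip_fragment() hands it to the output
 * callback: data starts at the IP header, with headroom in front. */
struct frag_buf {
	unsigned char buf[1500];
	size_t headroom;	/* bytes free before the IP header */
	size_t len;		/* bytes of IP header + payload    */
};

/* Model of the fragment path: restore the Ethernet header in the
 * headroom so the frame can go out on the wire. */
static int push_frag_xmit(struct frag_buf *f, const struct saved_mac *mac)
{
	if (f->headroom < ETH_HLEN)
		return -1;		/* real code would skb_cow_head() */

	f->headroom -= ETH_HLEN;	/* analogous to __skb_push() */
	memcpy(f->buf + f->headroom, mac->data, ETH_HLEN);
	f->len += ETH_HLEN;
	printf("xmit fragment: %zu bytes (MAC header restored)\n", f->len);
	return 0;
}

/* Model of the plain bridge forward path: the MAC header was never
 * stripped, so nothing has to be copied back. */
static void queue_push_xmit(const struct frag_buf *f)
{
	printf("xmit frame: %zu bytes (no MAC copy needed)\n", f->len);
}

int main(void)
{
	struct saved_mac mac = { .data = { 0xff, 0xff } };
	struct frag_buf frag = { .headroom = 64, .len = 1280 };
	struct frag_buf frame = { .headroom = 0, .len = 1514 };

	push_frag_xmit(&frag, &mac);	/* fragment from ip_fragment()   */
	queue_push_xmit(&frame);	/* ordinary bridge-forwarded frame */
	return 0;
}

Since only the fragmentation path needs the copy-back, the patch moves it out
of br_dev_queue_push_xmit() and into a br_netfilter-local wrapper that is used
solely as the ip_fragment() output callback.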
Signed-off-by: Florian Westphal
Signed-off-by: Pablo Neira Ayuso
---
 include/linux/netfilter_bridge.h | 31 -------------------------------
 net/bridge/br_forward.c          |  4 +---
 net/bridge/br_netfilter.c        | 29 ++++++++++++++++++++++++++++-
 3 files changed, 29 insertions(+), 35 deletions(-)

diff --git a/include/linux/netfilter_bridge.h b/include/linux/netfilter_bridge.h
index c755e49..332ef8a 100644
--- a/include/linux/netfilter_bridge.h
+++ b/include/linux/netfilter_bridge.h
@@ -44,36 +44,6 @@ static inline void nf_bridge_update_protocol(struct sk_buff *skb)
 		skb->protocol = htons(ETH_P_PPP_SES);
 }
 
-/* Fill in the header for fragmented IP packets handled by
- * the IPv4 connection tracking code.
- *
- * Only used in br_forward.c
- */
-static inline int nf_bridge_copy_header(struct sk_buff *skb)
-{
-	int err;
-	unsigned int header_size;
-
-	nf_bridge_update_protocol(skb);
-	header_size = ETH_HLEN + nf_bridge_encap_header_len(skb);
-	err = skb_cow_head(skb, header_size);
-	if (err)
-		return err;
-
-	skb_copy_to_linear_data_offset(skb, -header_size,
-				       skb->nf_bridge->data, header_size);
-	__skb_push(skb, nf_bridge_encap_header_len(skb));
-	return 0;
-}
-
-static inline int nf_bridge_maybe_copy_header(struct sk_buff *skb)
-{
-	if (skb->nf_bridge &&
-	    skb->nf_bridge->mask & (BRNF_BRIDGED | BRNF_BRIDGED_DNAT))
-		return nf_bridge_copy_header(skb);
-	return 0;
-}
-
 static inline unsigned int nf_bridge_mtu_reduction(const struct sk_buff *skb)
 {
 	if (unlikely(skb->nf_bridge->mask & BRNF_PPPoE))
@@ -119,7 +89,6 @@ static inline void br_drop_fake_rtable(struct sk_buff *skb)
 }
 
 #else
-#define nf_bridge_maybe_copy_header(skb)	(0)
 #define nf_bridge_pad(skb)			(0)
 #define br_drop_fake_rtable(skb)		do { } while (0)
 #endif /* CONFIG_BRIDGE_NETFILTER */
diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
index f96933a..32541d4 100644
--- a/net/bridge/br_forward.c
+++ b/net/bridge/br_forward.c
@@ -37,9 +37,7 @@ static inline int should_deliver(const struct net_bridge_port *p,
 
 int br_dev_queue_push_xmit(struct sk_buff *skb)
 {
-	/* ip_fragment doesn't copy the MAC header */
-	if (nf_bridge_maybe_copy_header(skb) ||
-	    !is_skb_forwardable(skb->dev, skb)) {
+	if (!is_skb_forwardable(skb->dev, skb)) {
 		kfree_skb(skb);
 	} else {
 		skb_push(skb, ETH_HLEN);
diff --git a/net/bridge/br_netfilter.c b/net/bridge/br_netfilter.c
index 0ee453f..e547911 100644
--- a/net/bridge/br_netfilter.c
+++ b/net/bridge/br_netfilter.c
@@ -764,6 +764,33 @@ static unsigned int br_nf_forward_arp(const struct nf_hook_ops *ops,
 }
 
 #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4)
+static bool nf_bridge_copy_header(struct sk_buff *skb)
+{
+	int err;
+	unsigned int header_size;
+
+	nf_bridge_update_protocol(skb);
+	header_size = ETH_HLEN + nf_bridge_encap_header_len(skb);
+	err = skb_cow_head(skb, header_size);
+	if (err)
+		return false;
+
+	skb_copy_to_linear_data_offset(skb, -header_size,
+				       skb->nf_bridge->data, header_size);
+	__skb_push(skb, nf_bridge_encap_header_len(skb));
+	return true;
+}
+
+static int br_nf_push_frag_xmit(struct sk_buff *skb)
+{
+	if (!nf_bridge_copy_header(skb)) {
+		kfree_skb(skb);
+		return 0;
+	}
+
+	return br_dev_queue_push_xmit(skb);
+}
+
 static int br_nf_dev_queue_xmit(struct sk_buff *skb)
 {
 	int ret;
@@ -780,7 +807,7 @@ static int br_nf_dev_queue_xmit(struct sk_buff *skb)
 			/* Drop invalid packet */
 			return NF_DROP;
 		IPCB(skb)->frag_max_size = frag_max_size;
-		ret = ip_fragment(skb, br_dev_queue_push_xmit);
+		ret = ip_fragment(skb, br_nf_push_frag_xmit);
 	} else
 		ret = br_dev_queue_push_xmit(skb);