diff mbox series

[v3] net: bpf: permit redirect from ingress L3 to egress L2 devices at near max mtu

Message ID 20200507023606.111650-1-zenczykowski@gmail.com
State Changes Requested
Delegated to: BPF Maintainers
Headers show
Series [v3] net: bpf: permit redirect from ingress L3 to egress L2 devices at near max mtu | expand

Commit Message

Maciej Żenczykowski May 7, 2020, 2:36 a.m. UTC
From: Maciej Żenczykowski <maze@google.com>

__bpf_skb_max_len(skb) is used from:
  bpf_skb_adjust_room
  __bpf_skb_change_tail
  __bpf_skb_change_head

but in the case of forwarding we're likely calling these functions
during receive processing on ingress and bpf_redirect()'ing at
a later point in time to egress on another interface, thus these
mtu checks are for the wrong device (input instead of output).

This is particularly problematic if we're receiving on an L3 mtu 1500
cellular interface, trying to add an L2 header and forwarding to
an L3 mtu 1500 wifi/ethernet device (which is thus L2 mtu 1514).

The mtu check prevents us from adding the 14-byte ethernet header prior
to forwarding the packet.

After the packet has already been redirected, we'd need to add
a second eBPF program on the target device's egress tc hook,
but then we'd also see non-redirected traffic and have no easy
way to tell normal egress packets (which carry an ethernet header)
apart from forwarded, ethernet-headerless packets.

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Maciej Żenczykowski <maze@google.com>
---
 net/core/filter.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Comments

Daniel Borkmann May 7, 2020, 3:54 p.m. UTC | #1
On 5/7/20 4:36 AM, Maciej Żenczykowski wrote:
> From: Maciej Żenczykowski <maze@google.com>
> 
> __bpf_skb_max_len(skb) is used from:
>    bpf_skb_adjust_room
>    __bpf_skb_change_tail
>    __bpf_skb_change_head
> 
> but in the case of forwarding we're likely calling these functions
> during receive processing on ingress and bpf_redirect()'ing at
> a later point in time to egress on another interface, thus these
> mtu checks are for the wrong device (input instead of output).
> 
> This is particularly problematic if we're receiving on an L3 mtu 1500
> cellular interface, trying to add an L2 header and forwarding to
> an L3 mtu 1500 wifi/ethernet device (which is thus L2 mtu 1514).
> 
> The mtu check prevents us from adding the 14-byte ethernet header prior
> to forwarding the packet.
> 
> After the packet has already been redirected, we'd need to add
> a second eBPF program on the target device's egress tc hook,
> but then we'd also see non-redirected traffic and have no easy
> way to tell normal egress packets (which carry an ethernet header)
> apart from forwarded, ethernet-headerless packets.
> 
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Maciej Żenczykowski <maze@google.com>
> ---
>   net/core/filter.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 7d6ceaa54d21..5c8243930462 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -3159,8 +3159,9 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 off, u32 len_diff,
>   
>   static u32 __bpf_skb_max_len(const struct sk_buff *skb)
>   {
> -	return skb->dev ? skb->dev->mtu + skb->dev->hard_header_len :
> -			  SKB_MAX_ALLOC;
> +	if (skb_at_tc_ingress(skb) || !skb->dev)
> +		return SKB_MAX_ALLOC;
> +	return skb->dev->mtu + skb->dev->hard_header_len;
>   }

But then why even have any MTU check in the first place? The above would
basically break the case where I'd have a one-legged load-balancer: the skb
comes in at tc ingress, we adjust its size and are allowed to do so up to
SKB_MAX_ALLOC, then we redirect it out through bpf via the same device it
came in on.

I suppose we are the ones responsible for asserting here that it doesn't exceed
the MTU. There are 3 cases when an skb exits the prog on tc ingress or egress:
i) we redirect via ingress, ii) we redirect via egress, and iii) we just do
TC_ACT_OK. Case i) is asserted already via ____dev_forward_skb() today. If we
fix/relax __bpf_skb_max_len(), we would also need to assert at the other two
locations, something hacked up like the below. And on top of this it probably
makes sense to expose the current MTU, but that can be optional.

Thoughts?

Thanks,
Daniel

 From 95464f75ed8d520b9ff068b72687a422465686cd Mon Sep 17 00:00:00 2001
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Thu, 7 May 2020 16:46:30 +0200
Subject: [PATCH] bpf: xxx

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
  include/linux/netdevice.h | 25 +++++++++++++++++++++++--
  include/uapi/linux/bpf.h  |  1 +
  net/core/dev.c            | 24 +++---------------------
  net/core/filter.c         | 22 +++++++++++++++++-----
  4 files changed, 44 insertions(+), 28 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 5a8d40f1ffe2..19770744d5b5 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3787,8 +3787,29 @@ int xdp_umem_query(struct net_device *dev, u16 queue_id);

  int __dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
  int dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
-bool is_skb_forwardable(const struct net_device *dev,
-			const struct sk_buff *skb);
+
+static __always_inline bool is_skb_size_ok(const struct net_device *dev,
+					   const struct sk_buff *skb)
+{
+	static const u32 vlan_header_len = 4;
+
+	if (skb->len <= dev->mtu + dev->hard_header_len + vlan_header_len)
+		return true;
+
+	/* If TSO is enabled, we don't care about the length as the packet
+	 * could be forwarded without being segmented before.
+	 */
+	return skb_is_gso(skb);
+}
+
+static __always_inline bool is_skb_forwardable(const struct net_device *dev,
+					       const struct sk_buff *skb)
+{
+	if (unlikely(!(dev->flags & IFF_UP)))
+		return false;
+
+	return is_skb_size_ok(dev, skb);
+}

  static __always_inline int ____dev_forward_skb(struct net_device *dev,
  					       struct sk_buff *skb)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index b3643e27e264..0239e415a469 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3370,6 +3370,7 @@ struct __sk_buff {
  	__u32 gso_segs;
  	__bpf_md_ptr(struct bpf_sock *, sk);
  	__u32 gso_size;
+	__u32 mtu;
  };

  struct bpf_tunnel_key {
diff --git a/net/core/dev.c b/net/core/dev.c
index afff16849c26..b3bf738fc36f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2100,27 +2100,6 @@ static inline void net_timestamp_set(struct sk_buff *skb)
  			__net_timestamp(SKB);			\
  	}							\

-bool is_skb_forwardable(const struct net_device *dev, const struct sk_buff *skb)
-{
-	unsigned int len;
-
-	if (!(dev->flags & IFF_UP))
-		return false;
-
-	len = dev->mtu + dev->hard_header_len + VLAN_HLEN;
-	if (skb->len <= len)
-		return true;
-
-	/* if TSO is enabled, we don't care about the length as the packet
-	 * could be forwarded without being segmented before
-	 */
-	if (skb_is_gso(skb))
-		return true;
-
-	return false;
-}
-EXPORT_SYMBOL_GPL(is_skb_forwardable);
-
  int __dev_forward_skb(struct net_device *dev, struct sk_buff *skb)
  {
  	int ret = ____dev_forward_skb(dev, skb);
@@ -3786,8 +3765,11 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
  	case TC_ACT_OK:
  	case TC_ACT_RECLASSIFY:
  		skb->tc_index = TC_H_MIN(cl_res.classid);
+		if (unlikely(!is_skb_size_ok(dev, skb)))
+			goto drop;
  		break;
  	case TC_ACT_SHOT:
+drop:
  		mini_qdisc_qstats_cpu_drop(miniq);
  		*ret = NET_XMIT_DROP;
  		kfree_skb(skb);
diff --git a/net/core/filter.c b/net/core/filter.c
index dfaf5df13722..54db75bf15c5 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2037,10 +2037,11 @@ static inline int __bpf_tx_skb(struct net_device *dev, struct sk_buff *skb)
  {
  	int ret;

-	if (dev_xmit_recursion()) {
+	if (unlikely(!is_skb_forwardable(dev, skb)))
+		goto drop;
+	if (unlikely(dev_xmit_recursion())) {
  		net_crit_ratelimited("bpf: recursion limit reached on datapath, buggy bpf program?\n");
-		kfree_skb(skb);
-		return -ENETDOWN;
+		goto drop;
  	}

  	skb->dev = dev;
@@ -2051,6 +2052,10 @@ static inline int __bpf_tx_skb(struct net_device *dev, struct sk_buff *skb)
  	dev_xmit_recursion_dec();

  	return ret;
+drop:
+	atomic_long_inc(&dev->rx_dropped);
+	kfree_skb(skb);
+	return -EIO;
  }

  static int __bpf_redirect_no_mac(struct sk_buff *skb, struct net_device *dev,
@@ -3148,8 +3153,7 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 off, u32 len_diff,

  static u32 __bpf_skb_max_len(const struct sk_buff *skb)
  {
-	return skb->dev ? skb->dev->mtu + skb->dev->hard_header_len :
-			  SKB_MAX_ALLOC;
+	return SKB_MAX_ALLOC;
  }

  BPF_CALL_4(bpf_skb_adjust_room, struct sk_buff *, skb, s32, len_diff,
@@ -7831,6 +7835,14 @@ static u32 tc_cls_act_convert_ctx_access(enum bpf_access_type type,
  				      bpf_target_off(struct net_device, ifindex, 4,
  						     target_size));
  		break;
+	case offsetof(struct __sk_buff, mtu):
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, dev),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, dev));
+		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg,
+				      bpf_target_off(struct net_device, mtu, 4,
+						     target_size));
+		break;
  	default:
  		return bpf_convert_ctx_access(type, si, insn_buf, prog,
  					      target_size);
Maciej Żenczykowski May 7, 2020, 4:46 p.m. UTC | #2
(a) not clear why the max is SKB_MAX_ALLOC in the first place (this is
PAGE_SIZE << 2, ie. 16K on x86), while lo mtu is 64k

(b) hmm, if we're not redirecting, then exceeding the ingress device's
mtu doesn't seem to be a problem.

Indeed, AFAIK this can already happen: some devices will round the mtu up
when they configure the device's mru buffers.
(i.e. you configure an L3 mtu of 1500 and they treat that as L2 1536 or
1532 [minus 4 bytes of fcs], simply because 3 * 512 is a convenient
buffer size; or they'll accept not only 1514 bytes at L2, but also 1518
or even 1522 for VLAN and Q-in-Q -- even if the packets aren't actually
VLAN'ed, so your non-VLAN mru might be 1504 or 1508)

Indeed, my corp Dell workstation, with some standard 1-gigabit
motherboard nic and the standard default mtu of 1500, has received
L3 mtu 1520 packets (apparently due to a misconfiguration in our
hardware [cisco? juniper?] ipv4->ipv6 translator, which can take 1500
mtu ipv4 packets and convert them to 1520 mtu ipv6 packets without
fragmenting or generating icmp too-big errors).  While it's obviously
wrong, it does just work (the network paths themselves are also
obviously 1520-clean).

(c) If we are redirecting we'll eventually (after bpf program returns)
hit dev_queue_xmit(), and shouldn't that be what returns an error?

btw. is_skb_forwardable() actually tests
- device is up && (packet is gso || skb->len <= dev->mtu +
dev->hard_header_len + VLAN_HLEN);

which is wrong in 2 ways: VLAN_HLEN makes no sense on non-ethernet
devices, and the __bpf_skb_max_len() function doesn't account for
VLAN...  (which possibly has implications if you try to redirect to a
vlan interface)

---

I think having an mtu check is useful, but I think the mtu should be
selectable by the bpf program, because it might not even be the device
mtu at all: it might be the path mtu that we should be testing against.
The check should also apply to gso frames, since the max
post-segmentation size should be enforced.

---

I agree we should expose dev->mtu (and dev->hard_header_len and hatype)

I'll mull this over a bit more, but I'm not convinced this patch isn't ok as-is.
There's just probably more we should do.
Daniel Borkmann May 7, 2020, 9:05 p.m. UTC | #3
On 5/7/20 6:46 PM, Maciej Żenczykowski wrote:
> (a) not clear why the max is SKB_MAX_ALLOC in the first place (this is
> PAGE_SIZE << 2, ie. 16K on x86), while lo mtu is 64k

Agreed, tbh, it's not clear to me either atm. :) The SKB_MAX_ALLOC constant itself
should be replaced with something more appropriate. Also as a small side note,
the !skb->dev check should be removed since in tc ingress/egress the skb->dev
is never NULL. (See also tc_cls_act_convert_ctx_access() on struct __sk_buff,
ifindex conversion.)

> (b) hmm, if we're not redirecting, then exceeding the ingress device's
> mtu doesn't seem to be a problem.
> 
> Indeed, AFAIK this can already happen: some devices will round the mtu up
> when they configure the device's mru buffers.
> (i.e. you configure an L3 mtu of 1500 and they treat that as L2 1536 or
> 1532 [minus 4 bytes of fcs], simply because 3 * 512 is a convenient
> buffer size; or they'll accept not only 1514 bytes at L2, but also 1518
> or even 1522 for VLAN and Q-in-Q -- even if the packets aren't actually
> VLAN'ed, so your non-VLAN mru might be 1504 or 1508)
> 
> Indeed, my corp Dell workstation, with some standard 1-gigabit
> motherboard nic and the standard default mtu of 1500, has received
> L3 mtu 1520 packets (apparently due to a misconfiguration in our
> hardware [cisco? juniper?] ipv4->ipv6 translator, which can take 1500
> mtu ipv4 packets and convert them to 1520 mtu ipv6 packets without
> fragmenting or generating icmp too-big errors).  While it's obviously
> wrong, it does just work (the network paths themselves are also
> obviously 1520-clean).

Right, agree on tc ingress side when skb goes further up the stack.

> (c) If we are redirecting we'll eventually (after bpf program returns)
> hit dev_queue_xmit(), and shouldn't that be what returns an error?

You mean whether the check should be inside __dev_queue_xmit() itself
instead? Maybe. From a cursory glance, the MTU checks are spread across the
upper layer protos. As mentioned, we would need coverage for BPF progs
attached to the tc ingress and egress hooks (sch_handle_{ingress,egress}()),
where in each case an skb can be just TC_ACT_OK'ed (up to the stack or
down to the driver), redirected via ____dev_forward_skb(), or sent via
dev_queue_xmit(). ____dev_forward_skb() has us covered, and for TC_ACT_OK
on tc ingress we'd only need a generic upper cap. So for the rest of the
cases, we'd need some sort of sanity check somewhere.

> btw. is_skb_forwardable() actually tests
> - device is up && (packet is gso || skb->len <= dev->mtu +
> dev->hard_header_len + VLAN_HLEN);
> 
> which is wrong in 2 ways: VLAN_HLEN makes no sense on non-ethernet
> devices, and the __bpf_skb_max_len() function doesn't account for
> VLAN...  (which possibly has implications if you try to redirect to a
> vlan interface)

Yeah, it probably would for QinQ, which is probably why no one has run
into it so far. At least skb_vlan_push() would first store the tag in
the skb itself via __vlan_hwaccel_put_tag().

> I think having an mtu check is useful, but I think the mtu should be
> selectable by the bpf program, because it might not even be the device
> mtu at all: it might be the path mtu that we should be testing against.
> The check should also apply to gso frames, since the max
> post-segmentation size should be enforced.
> 
> I agree we should expose dev->mtu (and dev->hard_header_len and hatype)
> 
> I'll mull this over a bit more, but I'm not convinced this patch isn't ok as-is.
> There's just probably more we should do.

Ok, makes sense. Thanks for looking into it!

Patch

diff --git a/net/core/filter.c b/net/core/filter.c
index 7d6ceaa54d21..5c8243930462 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3159,8 +3159,9 @@  static int bpf_skb_net_shrink(struct sk_buff *skb, u32 off, u32 len_diff,
 
 static u32 __bpf_skb_max_len(const struct sk_buff *skb)
 {
-	return skb->dev ? skb->dev->mtu + skb->dev->hard_header_len :
-			  SKB_MAX_ALLOC;
+	if (skb_at_tc_ingress(skb) || !skb->dev)
+		return SKB_MAX_ALLOC;
+	return skb->dev->mtu + skb->dev->hard_header_len;
 }
 
 BPF_CALL_4(bpf_skb_adjust_room, struct sk_buff *, skb, s32, len_diff,