From patchwork Fri Feb 14 17:34:07 2020
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 1238243
X-Patchwork-Delegate: davem@davemloft.net
From: "Jason A. Donenfeld"
To: davem@davemloft.net, netdev@vger.kernel.org
Cc: "Jason A. Donenfeld", Eric Dumazet
Subject: [PATCH v2 net 3/3] wireguard: send: account for mtu=0 devices
Date: Fri, 14 Feb 2020 18:34:07 +0100
Message-Id: <20200214173407.52521-4-Jason@zx2c4.com>
In-Reply-To: <20200214173407.52521-1-Jason@zx2c4.com>
References: <20200214173407.52521-1-Jason@zx2c4.com>
X-Mailing-List: netdev@vger.kernel.org

It turns out there's an easy way to get packets queued up while still
having an MTU of zero, and that's via persistent keepalive. This commit
makes sure that, whatever the conditions, we never wind up dividing by
zero. Note that an MTU of zero for a wireguard interface is something
quasi-valid, so I don't think the correct fix is to limit it via
min_mtu. This can be reproduced easily with:

  ip link add wg0 type wireguard
  ip link add wg1 type wireguard
  ip link set wg0 up mtu 0
  ip link set wg1 up
  wg set wg0 private-key <(wg genkey)
  wg set wg1 listen-port 1 private-key <(wg genkey) peer $(wg show wg0 public-key)
  wg set wg0 peer $(wg show wg1 public-key) persistent-keepalive 1 endpoint 127.0.0.1:1
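For reference, the division by zero happens in the pre-patch
calculate_skb_padding() in send.c (the lines removed in the diff
below), where the modulo is taken on the device MTU, which the
reproducer above leaves at zero:

	unsigned int last_unit = skb->len % PACKET_CB(skb)->mtu;
	unsigned int padded_size = ALIGN(last_unit, MESSAGE_PADDING_MULTIPLE);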
However, while min_mtu=0 seems fine, it makes sense to restrict the
max_mtu. This commit also restricts the maximum MTU to the greatest
number for which rounding up to the padding multiple won't overflow a
signed integer. Packets this large were always eventually rejected
anyway, due to checks deeper in, but it seems more sound not to even
let the administrator configure something that won't work anyway.

We use this opportunity to clean up this function a bit so that it's
clear which paths we're expecting.

Signed-off-by: Jason A. Donenfeld
Cc: Eric Dumazet
---
 drivers/net/wireguard/device.c |  7 ++++---
 drivers/net/wireguard/send.c   | 16 +++++++++++-----
 2 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
index 43db442b1373..cdc96968b0f4 100644
--- a/drivers/net/wireguard/device.c
+++ b/drivers/net/wireguard/device.c
@@ -258,6 +258,8 @@ static void wg_setup(struct net_device *dev)
 	enum { WG_NETDEV_FEATURES = NETIF_F_HW_CSUM | NETIF_F_RXCSUM |
 				    NETIF_F_SG | NETIF_F_GSO |
 				    NETIF_F_GSO_SOFTWARE | NETIF_F_HIGHDMA };
+	const int overhead = MESSAGE_MINIMUM_LENGTH + sizeof(struct udphdr) +
+			     max(sizeof(struct ipv6hdr), sizeof(struct iphdr));
 
 	dev->netdev_ops = &netdev_ops;
 	dev->hard_header_len = 0;
@@ -271,9 +273,8 @@ static void wg_setup(struct net_device *dev)
 	dev->features |= WG_NETDEV_FEATURES;
 	dev->hw_features |= WG_NETDEV_FEATURES;
 	dev->hw_enc_features |= WG_NETDEV_FEATURES;
-	dev->mtu = ETH_DATA_LEN - MESSAGE_MINIMUM_LENGTH -
-		   sizeof(struct udphdr) -
-		   max(sizeof(struct ipv6hdr), sizeof(struct iphdr));
+	dev->mtu = ETH_DATA_LEN - overhead;
+	dev->max_mtu = round_down(INT_MAX, MESSAGE_PADDING_MULTIPLE) - overhead;
 
 	SET_NETDEV_DEVTYPE(dev, &device_type);
 
diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
index c13260563446..2a9990ab66cd 100644
--- a/drivers/net/wireguard/send.c
+++ b/drivers/net/wireguard/send.c
@@ -143,16 +143,22 @@ static void keep_key_fresh(struct wg_peer *peer)
 
 static unsigned int calculate_skb_padding(struct sk_buff *skb)
 {
+	unsigned int padded_size, last_unit = skb->len;
+
+	if (unlikely(!PACKET_CB(skb)->mtu))
+		return -last_unit % MESSAGE_PADDING_MULTIPLE;
+
 	/* We do this modulo business with the MTU, just in case the networking
 	 * layer gives us a packet that's bigger than the MTU. In that case, we
 	 * wouldn't want the final subtraction to overflow in the case of the
-	 * padded_size being clamped.
+	 * padded_size being clamped. Fortunately, that's very rarely the case,
+	 * so we optimize for that not happening.
 	 */
-	unsigned int last_unit = skb->len % PACKET_CB(skb)->mtu;
-	unsigned int padded_size = ALIGN(last_unit, MESSAGE_PADDING_MULTIPLE);
+	if (unlikely(last_unit > PACKET_CB(skb)->mtu))
+		last_unit %= PACKET_CB(skb)->mtu;
 
-	if (padded_size > PACKET_CB(skb)->mtu)
-		padded_size = PACKET_CB(skb)->mtu;
+	padded_size = min(PACKET_CB(skb)->mtu,
+			  ALIGN(last_unit, MESSAGE_PADDING_MULTIPLE));
 
 	return padded_size - last_unit;
 }
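
In case it helps review, below is a small stand-alone user-space sketch
(not the kernel code; the helper names align_up() and skb_padding() are
made up for illustration) of the padding arithmetic as it looks after
this patch, assuming MESSAGE_PADDING_MULTIPLE is 16 as in messages.h:

	/* Stand-alone model of calculate_skb_padding() after this patch.
	 * Build with: cc -Wall -o padtest padtest.c
	 */
	#include <stdio.h>

	#define MESSAGE_PADDING_MULTIPLE 16U

	/* Equivalent of the kernel's ALIGN() for a power-of-two multiple. */
	static unsigned int align_up(unsigned int x, unsigned int a)
	{
		return (x + a - 1) & ~(a - 1);
	}

	static unsigned int skb_padding(unsigned int len, unsigned int mtu)
	{
		unsigned int padded_size, last_unit = len;

		/* mtu == 0 (e.g. keepalives on an mtu=0 device): round the
		 * whole packet up to the next padding multiple, no clamping.
		 * In unsigned arithmetic, -last_unit % 16 == (16 - len % 16) % 16.
		 */
		if (!mtu)
			return -last_unit % MESSAGE_PADDING_MULTIPLE;

		/* Oversized packets: only pad what spills into the last
		 * MTU-sized unit, so the final subtraction can't underflow.
		 */
		if (last_unit > mtu)
			last_unit %= mtu;

		padded_size = align_up(last_unit, MESSAGE_PADDING_MULTIPLE);
		if (padded_size > mtu)
			padded_size = mtu;
		return padded_size - last_unit;
	}

	int main(void)
	{
		printf("%u\n", skb_padding(100, 0));     /* 12: 100 padded to 112 */
		printf("%u\n", skb_padding(100, 1420));  /* 12 */
		printf("%u\n", skb_padding(1419, 1420)); /* 1: clamped at the MTU */
		return 0;
	}

With an MTU of zero, the pre-patch code would have computed 100 % 0
before ever reaching the clamp; the early return above avoids that.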