From patchwork Mon Aug 27 16:55:16 2012
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 180251
Message-ID: <503BA674.10508@canonical.com>
Date: Mon, 27 Aug 2012 10:55:16 -0600
From: Tim Gardner
To: Herton Ronaldo Krzesinski
Cc: kernel-team
Subject: Re: Lucid CVE-2012-3412
In-Reply-To: <20120824195052.GC3072@herton-Z68MA-D2H-B3>
List-Id: Kernel team discussions
OK, fixed some compile issues with his patch and re-pushed.

git://kernel.ubuntu.com/rtg/ubuntu-lucid.git CVE-2012-3412

rtg

From db6790cc6f1ad7257e6a6f44708162fcb1bb3013 Mon Sep 17 00:00:00 2001
From: Ben Hutchings
Date: Mon, 27 Aug 2012 08:54:04 -0600
Subject: [PATCH] net: Allow driver to limit number of GSO segments per skb

CVE-2012-3412

BugLink: http://bugs.launchpad.net/bugs/1037456

A peer (or local user) may cause TCP to use a nominal MSS of as little
as 88 (actual MSS of 76 with timestamps).  Given that we have a
sufficiently prodigious local sender and the peer ACKs quickly enough,
it is nevertheless possible to grow the window for such a connection
to the point that we will try to send just under 64K at once.  This
results in a single skb that expands to 861 segments.

In some drivers with TSO support, such an skb will require hundreds of
DMA descriptors; a substantial fraction of a TX ring or even more than
a full ring.  The TX queue selected for the skb may stall and trigger
the TX watchdog repeatedly (since the problem skb will be retried
after the TX reset).  This particularly affects sfc, for which the
issue is designated as CVE-2012-3412.

Therefore:

1. Add the field net_device::gso_max_segs holding the device-specific
   limit.
2. In netif_skb_features(), if the number of segments is too high then
   mask out GSO features to force fall back to software GSO.

Signed-off-by: Ben Hutchings
Signed-off-by: David S. Miller
(back ported from commit 30b678d844af3305cda5953467005cebb5d7b687)
Signed-off-by: Tim Gardner
Acked-by: Herton Krzesinski
---
 drivers/net/xen-netfront.c |    2 +-
 include/linux/netdevice.h  |   16 ++++++++++++++--
 net/core/dev.c             |   16 ++++++++++------
 3 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 1a11d95..422001f 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -486,7 +486,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (frags > 1 && !xennet_can_sg(dev)) ||
-		     netif_needs_gso(dev, skb))) {
+		     netif_needs_gso(skb, netif_skb_features(dev, skb)))) {
 		spin_unlock_irq(&np->tx_lock);
 		goto drop;
 	}
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index ea6187c..41e689e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -907,6 +907,8 @@ struct net_device
 	/* for setting kernel sock attribute on TCP connection setup */
 #define GSO_MAX_SIZE		65536
 	unsigned int		gso_max_size;
+#define GSO_MAX_SEGS		65535
+	u16			gso_max_segs;
 
 #ifdef CONFIG_DCB
 	/* Data Center Bridging netlink ops */
@@ -1933,10 +1935,10 @@ static inline int skb_gso_ok(struct sk_buff *skb, int features)
 	       (!skb_has_frags(skb) || (features & NETIF_F_FRAGLIST));
 }
 
-static inline int netif_needs_gso(struct net_device *dev, struct sk_buff *skb)
+static inline int netif_needs_gso(struct sk_buff *skb, int features)
 {
 	return skb_is_gso(skb) &&
-	       (!skb_gso_ok(skb, dev->features) ||
+	       (!skb_gso_ok(skb, features) ||
 		unlikely(skb->ip_summed != CHECKSUM_PARTIAL));
 }
 
@@ -1956,6 +1958,16 @@ static inline void skb_bond_set_mac_by_master(struct sk_buff *skb,
 	}
 }
 
+static inline int netif_skb_features(struct net_device *dev, struct sk_buff *skb)
+{
+	int features = dev->features;
+
+	if (skb_shinfo(skb)->gso_segs > skb->dev->gso_max_segs)
+		features &= ~NETIF_F_GSO_MASK;
+
+	return features;
+}
+
 /* On bonding slaves other than the currently active slave, suppress
  * duplicates except for 802.3ad ETH_P_SLOW, alb non-mcast/bcast, and
  * ARP on active-backup slaves with arp_validate enabled.
diff --git a/net/core/dev.c b/net/core/dev.c
index f32f98a..b250f81 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1682,13 +1682,12 @@ static void dev_gso_skb_destructor(struct sk_buff *skb)
  *	This function segments the given skb and stores the list of segments
  *	in skb->next.
  */
-static int dev_gso_segment(struct sk_buff *skb)
+static int dev_gso_segment(struct sk_buff *skb, int features)
 {
 	struct net_device *dev = skb->dev;
 	struct sk_buff *segs;
-	int features = dev->features & ~(illegal_highdma(dev, skb) ?
-					 NETIF_F_SG : 0);
+
+	features &= ~(illegal_highdma(dev, skb) ? NETIF_F_SG : 0);
 
 	segs = skb_gso_segment(skb, features);
 
 	/* Verifying header integrity only. */
@@ -1712,11 +1711,15 @@ int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
 	int rc;
 
 	if (likely(!skb->next)) {
+		int features;
+
 		if (!list_empty(&ptype_all))
 			dev_queue_xmit_nit(skb, dev);
 
-		if (netif_needs_gso(dev, skb)) {
-			if (unlikely(dev_gso_segment(skb)))
+		features = netif_skb_features(dev, skb);
+
+		if (netif_needs_gso(skb, features)) {
+			if (unlikely(dev_gso_segment(skb, features)))
 				goto out_kfree_skb;
 			if (skb->next)
 				goto gso;
@@ -1887,7 +1890,7 @@ int dev_queue_xmit(struct sk_buff *skb)
 	int rc = -ENOMEM;
 
 	/* GSO will handle the following emulations directly. */
-	if (netif_needs_gso(dev, skb))
+	if (netif_needs_gso(skb, netif_skb_features(dev, skb)))
 		goto gso;
 
 	if (skb_has_frags(skb) &&
@@ -5195,6 +5198,7 @@ struct net_device *alloc_netdev_mq(int sizeof_priv, const char *name,
 	dev->real_num_tx_queues = queue_count;
 
 	dev->gso_max_size = GSO_MAX_SIZE;
+	dev->gso_max_segs = GSO_MAX_SEGS;
 
 	netdev_init_queues(dev);
-- 
1.7.9.5