From patchwork Fri Sep 21 16:27:21 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 185825
Return-Path:
X-Original-To: incoming@patchwork.ozlabs.org
Delivered-To: patchwork-incoming@bilbo.ozlabs.org
Received: from chlorine.canonical.com (chlorine.canonical.com [91.189.94.204])
	by ozlabs.org (Postfix) with ESMTP id 804492C007A
	for ; Sat, 22 Sep 2012 02:27:38 +1000 (EST)
Received: from localhost ([127.0.0.1] helo=chlorine.canonical.com)
	by chlorine.canonical.com with esmtp (Exim 4.71)
	(envelope-from ) id 1TF62R-0000DZ-DA; Fri, 21 Sep 2012 16:25:11 +0000
Received: from mail.tpi.com ([70.99.223.143])
	by chlorine.canonical.com with esmtp (Exim 4.71)
	(envelope-from ) id 1TF62P-0000DT-7Q
	for kernel-team@lists.ubuntu.com; Fri, 21 Sep 2012 16:25:09 +0000
Received: from salmon.rtg.net (mail.tpi.com [70.99.223.143])
	by mail.tpi.com (Postfix) with ESMTP id DCFE732C2AC
	for ; Fri, 21 Sep 2012 09:26:35 -0700 (PDT)
Received: by salmon.rtg.net (Postfix, from userid 1000)
	id D9D2620BC3; Fri, 21 Sep 2012 10:27:21 -0600 (MDT)
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH Lucid CVE-2012-3412] sfc: Fix maximum number of TSO segments
 and minimum TX queue size
Date: Fri, 21 Sep 2012 10:27:21 -0600
Message-Id: <1348244841-53636-1-git-send-email-tim.gardner@canonical.com>
X-Mailer: git-send-email 1.7.9.5
X-BeenThere: kernel-team@lists.ubuntu.com
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Kernel team discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
MIME-Version: 1.0
Sender: kernel-team-bounces@lists.ubuntu.com
Errors-To: kernel-team-bounces@lists.ubuntu.com

From: Ben Hutchings

CVE-2012-3412

BugLink: http://bugs.launchpad.net/bugs/1037456

This is related to commit 7e6d06f0de3f74ca929441add094518ae332257c
upstream, but looks very different because:

- TX queue size was constant before 2.6.37, so we don't need to check it
- The upstream fix relies on changes to the TCP stack and networking
  core, which are not appropriate for stable updates.  Instead we limit
  the number of segments in efx_enqueue_skb_tso().  This effectively
  drops all the extra packets and seriously hurts TCP throughput if the
  limit is ever hit, but this shouldn't affect any legitimate traffic.

The original commit message is:

Currently an skb requiring TSO may not fit within a minimum-size TX
queue.  The TX queue selected for the skb may stall and trigger the TX
watchdog repeatedly (since the problem skb will be retried after the
TX reset).  This issue is designated as CVE-2012-3412.

Set the maximum number of TSO segments for our devices to 100.  This
should make no difference to behaviour unless the actual MSS is less
than about 700.  Increase the minimum TX queue size accordingly to
allow for 2 worst-case skbs, so that there will definitely be space
to add an skb after we wake a queue.

To avoid invalidating existing configurations, change
efx_ethtool_set_ringparam() to fix up values that are too small
rather than returning -EINVAL.
Signed-off-by: Ben Hutchings
Signed-off-by: Tim Gardner
---
 drivers/net/sfc/efx.h |  3 +++
 drivers/net/sfc/tx.c  | 15 +++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/net/sfc/efx.h b/drivers/net/sfc/efx.h
index 8141c76..dc0bd22 100644
--- a/drivers/net/sfc/efx.h
+++ b/drivers/net/sfc/efx.h
@@ -45,6 +45,9 @@ extern void efx_wake_queue(struct efx_nic *efx);
 #define EFX_TXQ_SIZE EFX_DEFAULT_DMAQ_SIZE
 #define EFX_TXQ_MASK (EFX_TXQ_SIZE - 1)
 
+/* Maximum number of TCP segments we support for soft-TSO */
+#define EFX_TSO_MAX_SEGS 100
+
 /* RX */
 extern int efx_probe_rx_queue(struct efx_rx_queue *rx_queue);
 extern void efx_remove_rx_queue(struct efx_rx_queue *rx_queue);
diff --git a/drivers/net/sfc/tx.c b/drivers/net/sfc/tx.c
index 585772b..7fdbf78 100644
--- a/drivers/net/sfc/tx.c
+++ b/drivers/net/sfc/tx.c
@@ -1072,6 +1072,21 @@ static int efx_enqueue_skb_tso(struct efx_tx_queue *tx_queue,
 	int frag_i, rc, rc2 = NETDEV_TX_OK;
 	struct tso_state state;
 
+	/* Since the stack does not limit the number of segments per
+	 * skb, we must do so.  Otherwise an attacker may be able to
+	 * make the TCP produce skbs that will never fit in our TX
+	 * queue, causing repeated resets.
+	 */
+	if (unlikely(skb_shinfo(skb)->gso_segs > EFX_TSO_MAX_SEGS)) {
+		unsigned int excess =
+			(skb_shinfo(skb)->gso_segs - EFX_TSO_MAX_SEGS) *
+			skb_shinfo(skb)->gso_size;
+		if (__pskb_trim(skb, skb->len - excess)) {
+			dev_kfree_skb_any(skb);
+			return NETDEV_TX_OK;
+		}
+	}
+
 	/* Find the packet protocol and sanity-check it */
 	state.protocol = efx_tso_check_protocol(skb);
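
For reference, the small user-space sketch below (not part of the patch) walks
through the arithmetic behind the "MSS less than about 700" remark and the
excess computation in the tx.c hunk above: assuming the usual ~64 KiB ceiling
on a GSO skb payload, only an MSS below roughly 65536 / 100 ~= 655 bytes can
push gso_segs over EFX_TSO_MAX_SEGS.  The payload ceiling and all names other
than EFX_TSO_MAX_SEGS are assumptions made for illustration, not kernel
identifiers.

/*
 * Illustrative sketch of the segment cap and trim arithmetic.
 * Assumes an skb payload ceiling of ~64 KiB; not kernel code.
 */
#include <stdio.h>

#define EFX_TSO_MAX_SEGS 100
#define MAX_TSO_PAYLOAD  65536	/* assumed ~64 KiB GSO payload limit */

int main(void)
{
	unsigned int mss_examples[] = { 1448, 700, 500 };
	unsigned int i;

	for (i = 0; i < 3; i++) {
		unsigned int mss = mss_examples[i];
		/* gso_segs as the stack would compute it: ceil(payload / mss) */
		unsigned int segs = (MAX_TSO_PAYLOAD + mss - 1) / mss;

		if (segs > EFX_TSO_MAX_SEGS) {
			/* bytes the patch would trim off the tail of the skb */
			unsigned int excess = (segs - EFX_TSO_MAX_SEGS) * mss;
			printf("mss=%u: %u segments, trim %u bytes\n",
			       mss, segs, excess);
		} else {
			printf("mss=%u: %u segments, within the %d-segment cap\n",
			       mss, segs, EFX_TSO_MAX_SEGS);
		}
	}
	return 0;
}

With these example values only the 500-byte MSS exceeds the cap (132 segments,
trimming 16000 bytes), which matches the commit message's claim that behaviour
is unchanged unless the actual MSS is well below typical Ethernet sizes.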