From patchwork Thu Sep 17 18:46:02 2015
X-Patchwork-Submitter: Dmitry Vyukov
X-Patchwork-Id: 518994
X-Patchwork-Delegate: davem@davemloft.net
From: Dmitry Vyukov <dvyukov@google.com>
To: davem@davemloft.net, edumazet@google.com, alexander.h.duyck@redhat.com,
    jiri@resnulli.us, hannes@stressinduktion.org, linus.luessing@c0d3.blue,
    willemb@google.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: andreyknvl@google.com, kcc@google.com, glider@google.com,
    paulmck@linux.vnet.ibm.com, hboehm@google.com, ktsan@googlegroups.com,
    Dmitry Vyukov <dvyukov@google.com>
Subject: [RFC] net: fix data race on sk_buff after re-cloning
Date: Thu, 17 Sep 2015 20:46:02 +0200
Message-Id: <1442515562-12672-1-git-send-email-dvyukov@google.com>
X-Mailer: git-send-email 2.6.0.rc0.131.gf624c3d
In-Reply-To: <1442515456-12390-1-git-send-email-dvyukov@google.com>
References: <1442515456-12390-1-git-send-email-dvyukov@google.com>
KernelThreadSanitizer (KTSAN) reported the following race (on 4.2-rc2):

ThreadSanitizer: data-race in __copy_skb_header

Write at 0xffff8800bb158f48 of size 8 by thread 3146 on CPU 5:
 [] __copy_skb_header+0xee/0x1d0 net/core/skbuff.c:765
 [] __skb_clone+0x5c/0x320 net/core/skbuff.c:820
 [] skb_clone+0xd0/0x130 net/core/skbuff.c:962
 [] tcp_transmit_skb+0xb5/0x1750 net/ipv4/tcp_output.c:932
 [] __tcp_retransmit_skb+0x244/0xb10 net/ipv4/tcp_output.c:2638
 [] tcp_retransmit_skb+0x2b/0x240 net/ipv4/tcp_output.c:2655
 [] tcp_retransmit_timer+0x579/0xb70 net/ipv4/tcp_timer.c:433
 [] tcp_write_timer_handler+0x109/0x320 net/ipv4/tcp_timer.c:514
 [] tcp_write_timer+0xc0/0xe0 net/ipv4/tcp_timer.c:532
 [] call_timer_fn+0x4c/0x1b0 kernel/time/timer.c:1155
 [< inline >] __run_timers kernel/time/timer.c:1231
 [] run_timer_softirq+0x313/0x500 kernel/time/timer.c:1414
 [] __do_softirq+0xbe/0x2f0 kernel/softirq.c:273
 [] apic_timer_interrupt+0x8a/0xa0 arch/x86/entry/entry_64.S:790

Previous read at 0xffff8800bb158f48 of size 8 by thread 3168 on CPU 0:
 [] skb_release_head_state+0x4b/0x120 net/core/skbuff.c:640
 [] skb_release_all+0x1d/0x50 net/core/skbuff.c:657
 [< inline >] __kfree_skb net/core/skbuff.c:673
 [] consume_skb+0x60/0x100 net/core/skbuff.c:746
 [] __dev_kfree_skb_any+0x4d/0x60 net/core/dev.c:2312
 [< inline >] dev_kfree_skb_any include/linux/netdevice.h:2933
 [] e1000_unmap_and_free_tx_resource.isra.42+0xd3/0x120 drivers/net/ethernet/intel/e1000/e1000_main.c:1973
 [< inline >] e1000_clean_tx_irq drivers/net/ethernet/intel/e1000/e1000_main.c:3881
 [] e1000_clean+0x24d/0x11e0 drivers/net/ethernet/intel/e1000/e1000_main.c:3818
 [< inline >] napi_poll net/core/dev.c:4744
 [] net_rx_action+0x489/0x690 net/core/dev.c:4809
 [] __do_softirq+0xbe/0x2f0 kernel/softirq.c:273
 [] apic_timer_interrupt+0x8a/0xa0 arch/x86/entry/entry_64.S:790

Mutexes locked by thread 3146:
Mutex 436586 is locked here:
 [< inline >] __raw_spin_lock include/linux/spinlock_api_smp.h:158
 [] _raw_spin_lock+0x50/0x70 kernel/locking/spinlock.c:151
 [< inline >] spin_lock include/linux/spinlock.h:312
 [] tcp_write_timer+0x25/0xe0 net/ipv4/tcp_timer.c:530
 [] call_timer_fn+0x4c/0x1b0 kernel/time/timer.c:1155
 [< inline >] __run_timers kernel/time/timer.c:1231
 [] run_timer_softirq+0x313/0x500 kernel/time/timer.c:1414
 [] __do_softirq+0xbe/0x2f0 kernel/softirq.c:273
 [] apic_timer_interrupt+0x8a/0xa0 arch/x86/entry/entry_64.S:790

The only way I can see this happening is as follows:
- an sk_buff_fclones is allocated
- it is then cloned, which returns fclones->skb2
- fclones->skb2 is then freed, which drops fclones->fclone_ref to 1
- the original skb is then cloned again
- at this point skb_clone() sees fclones->fclone_ref == 1 and returns
  fclones->skb2 again

Now the re-initialization of fclones->skb2 races with its previous use,
because the refcounting lacks the necessary memory barriers.

I am looking at the skb code for the first time, so I cannot conclude
whether such a scenario is actually possible. But the refcounting, at
least in kfree_skbmem(), looks broken. For comparison, kfree_skb()
properly inserts an rmb after its fast-path check:

	if (likely(atomic_read(&skb->users) == 1))
		smp_rmb();

The patch below contains a proposed fix. If it looks good to you and the
scenario looks sane, I will update the description and resend it.
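For illustration only (this is not part of the patch): below is a minimal
userspace C11 sketch of the release/acquire pairing the fix relies on. The
struct and function names are made up and merely stand in for the fclone
pair, kfree_skbmem() and skb_clone(); in the kernel the release side is
already provided by the barriers implied by atomic_dec_and_test(), so the
patch only adds the acquire on the fast-path read.

/* Hypothetical userspace model of the fclone refcount fast path (C11). */
#include <stdatomic.h>
#include <stdbool.h>

struct object {
	atomic_int refcnt;	/* stands in for fclones->fclone_ref */
	int state;		/* stands in for the clone's header fields */
};

/* Freeing side (models freeing the clone): the last accesses to obj->state
 * must become visible before the refcount drops, hence the release ordering
 * on the decrement.
 */
void release_object(struct object *obj)
{
	obj->state = 0;		/* last use of the clone */
	atomic_fetch_sub_explicit(&obj->refcnt, 1, memory_order_release);
}

/* Re-cloning side (models skb_clone() reusing fclones->skb2): the fast-path
 * check needs acquire ordering so the re-initialization below cannot race
 * with the freeing side's last accesses; a plain read gives no such
 * guarantee, which is what KTSAN reports.
 */
bool try_reuse_object(struct object *obj)
{
	if (atomic_load_explicit(&obj->refcnt, memory_order_acquire) == 1) {
		obj->state = 42;	/* re-initialize for reuse */
		atomic_store_explicit(&obj->refcnt, 2, memory_order_relaxed);
		return true;
	}
	return false;
}

int main(void)
{
	struct object obj;

	atomic_init(&obj.refcnt, 2);	/* original + clone each hold a reference */
	obj.state = 1;

	release_object(&obj);		/* the clone is freed: refcnt drops to 1 */
	return try_reuse_object(&obj) ? 0 : 1;	/* fast-path reuse */
}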
---
 net/core/skbuff.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index dad4dd3..4c89bac 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -618,8 +618,9 @@ static void kfree_skbmem(struct sk_buff *skb)
 		/* We usually free the clone (TX completion) before original skb
 		 * This test would have no chance to be true for the clone,
 		 * while here, branch prediction will be good.
+		 * Paired with atomic_dec_and_test() below.
 		 */
-		if (atomic_read(&fclones->fclone_ref) == 1)
+		if (atomic_read_acquire(&fclones->fclone_ref) == 1)
 			goto fastpath;
 		break;
 
@@ -944,7 +945,8 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
 		return NULL;
 
 	if (skb->fclone == SKB_FCLONE_ORIG &&
-	    atomic_read(&fclones->fclone_ref) == 1) {
+	    /* Paired with atomic_dec_and_test() in kfree_skbmem(). */
+	    atomic_read_acquire(&fclones->fclone_ref) == 1) {
 		n = &fclones->skb2;
 		atomic_set(&fclones->fclone_ref, 2);
 	} else {