From patchwork Fri Oct 9 01:44:07 2015
X-Patchwork-Submitter: Kosuke Tatsukawa
X-Patchwork-Id: 528062
X-Patchwork-Delegate: davem@davemloft.net
From: Kosuke Tatsukawa
To: Trond Myklebust, Anna Schumaker, "J. Bruce Fields", "Jeff Layton",
	"David S. Miller"
Cc: "linux-nfs@vger.kernel.org", "netdev@vger.kernel.org",
	"linux-kernel@vger.kernel.org"
Subject: [PATCH v2] sunrpc: fix waitqueue_active without memory barrier in sunrpc
Date: Fri, 9 Oct 2015 01:44:07 +0000
Message-ID: <17EC94B0A072C34B8DCF0D30AD16044A028748C0@BPXM09GP.gisp.nec.co.jp>

There are several places in net/sunrpc/svcsock.c which call
waitqueue_active() without a preceding memory barrier.  Add a memory
barrier just as in wq_has_sleeper().

I found this issue while looking through the Linux source code for
places that call waitqueue_active() before wake_up*() without a
preceding memory barrier, after sending a patch to fix a similar issue
in drivers/tty/n_tty.c (details about the original issue can be found
here: https://lkml.org/lkml/2015/9/28/849).
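For reference, wq_has_sleeper() in include/net/sock.h does roughly the
following (paraphrased sketch, not copied verbatim from the kernel source;
the comment wording is mine):

	/* Paraphrased sketch of wq_has_sleeper() from include/net/sock.h */
	static inline bool wq_has_sleeper(struct socket_wq *wq)
	{
		/*
		 * The barrier ensures the waker sees any entry the sleeping
		 * side has already added to the wait queue; it pairs with a
		 * barrier taken on the sleeping side (e.g. sock_poll_wait()).
		 */
		smp_mb();
		return wq && waitqueue_active(&wq->wait);
	}

The patch below applies the same ordering in the svcsock.c callbacks:
make the state change first, then smp_mb(), then the waitqueue_active()
check before the wake_up*() call.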
Signed-off-by: Kosuke Tatsukawa
---
v2:
- Fixed compiler warnings caused by type mismatch
v1:
- https://lkml.org/lkml/2015/10/8/993
---
 net/sunrpc/svcsock.c | 6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 0c81202..ec19444 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -414,6 +414,7 @@ static void svc_udp_data_ready(struct sock *sk)
 		set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
 		svc_xprt_enqueue(&svsk->sk_xprt);
 	}
+	smp_mb();
 	if (wq && waitqueue_active(wq))
 		wake_up_interruptible(wq);
 }
@@ -432,6 +433,7 @@ static void svc_write_space(struct sock *sk)
 		svc_xprt_enqueue(&svsk->sk_xprt);
 	}
 
+	smp_mb();
 	if (wq && waitqueue_active(wq)) {
 		dprintk("RPC svc_write_space: someone sleeping on %p\n",
 		       svsk);
@@ -787,6 +789,7 @@ static void svc_tcp_listen_data_ready(struct sock *sk)
 	}
 
 	wq = sk_sleep(sk);
+	smp_mb();
 	if (wq && waitqueue_active(wq))
 		wake_up_interruptible_all(wq);
 }
@@ -808,6 +811,7 @@ static void svc_tcp_state_change(struct sock *sk)
 		set_bit(XPT_CLOSE, &svsk->sk_xprt.xpt_flags);
 		svc_xprt_enqueue(&svsk->sk_xprt);
 	}
+	smp_mb();
 	if (wq && waitqueue_active(wq))
 		wake_up_interruptible_all(wq);
 }
@@ -823,6 +827,7 @@ static void svc_tcp_data_ready(struct sock *sk)
 		set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
 		svc_xprt_enqueue(&svsk->sk_xprt);
 	}
+	smp_mb();
 	if (wq && waitqueue_active(wq))
 		wake_up_interruptible(wq);
 }
@@ -1594,6 +1599,7 @@ static void svc_sock_detach(struct svc_xprt *xprt)
 	sk->sk_write_space = svsk->sk_owspace;
 
 	wq = sk_sleep(sk);
+	smp_mb();
 	if (wq && waitqueue_active(wq))
 		wake_up_interruptible(wq);
 }
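For completeness, a rough sketch of the waiter side these barriers pair with
(illustrative only, not part of this patch; the XPT_DATA check stands in for
whatever condition the sleeping task actually waits on):

	/*
	 * Illustrative waiter (not taken from svcsock.c): prepare_to_wait()
	 * adds the task to the wait queue and contains a full barrier via
	 * set_current_state(), so either the waker's smp_mb() makes the new
	 * wait queue entry visible to waitqueue_active(), or the waiter sees
	 * the condition already set and does not sleep.
	 */
	DEFINE_WAIT(wait);

	prepare_to_wait(wq, &wait, TASK_INTERRUPTIBLE);
	if (!test_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags))
		schedule();
	finish_wait(wq, &wait);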