From patchwork Tue Apr 1 16:07:19 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kamal Mostafa
X-Patchwork-Id: 335978
From: Kamal Mostafa
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org,
	kernel-team@lists.ubuntu.com
Subject: [PATCH 3.8 06/86] tcp: tsq: fix nonagle handling
Date: Tue, 1 Apr 2014 09:07:19 -0700
Message-Id: <1396368519-12728-7-git-send-email-kamal@canonical.com>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1396368519-12728-1-git-send-email-kamal@canonical.com>
References: <1396368519-12728-1-git-send-email-kamal@canonical.com>
X-Extended-Stable: 3.8
Cc: Kamal Mostafa, Eric Dumazet, "David S. Miller", John Ogness

3.8.13.21 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: John Ogness

[ Upstream commit bf06200e732de613a1277984bf34d1a21c2de03d ]

Commit 46d3ceabd8d9 ("tcp: TCP Small Queues") introduced a possible
regression for applications using TCP_NODELAY.

If a TCP session is throttled because of TSQ, we should consult
tp->nonagle when TX completion is done, so that we are allowed to send
an additional segment, especially if that segment is not a full MSS.
Otherwise the segment is only sent after an RTO.

[edumazet] : Cooked the changelog, and added another fix about testing
sk_wmem_alloc twice, because TX completion can happen right before
setting the TSQ_THROTTLED bit.

This problem is particularly visible with the recent auto corking, but
might also be triggered with low tcp_limit_output_bytes values, NIC
drivers delaying TX completion by hundreds of usec, or a very low RTT.

Thomas Glanzmann, for example, reported an iSCSI regression in which
TCP auto corking made this bug quite visible.

Fixes: 46d3ceabd8d9 ("tcp: TCP Small Queues")
Signed-off-by: John Ogness
Signed-off-by: Eric Dumazet
Reported-by: Thomas Glanzmann
Signed-off-by: David S. Miller
Signed-off-by: Kamal Mostafa
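For illustration only (not part of the patch): the affected workload described
above is an application that disables Nagle with TCP_NODELAY and issues many
small, sub-MSS writes; once the socket is TSQ-throttled, the last partial
segment could previously sit until the RTO fired. The sketch below is a
hypothetical userspace client of that shape -- the address, port and write
size are made up, and error handling is minimal.

/* Hypothetical TCP_NODELAY client issuing small, sub-MSS writes.
 * Illustration only; not taken from the patch or the bug report.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr;
	char req[64];
	int fd, one = 1;

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* Disable Nagle: each small write should go out immediately. */
	if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
		perror("setsockopt(TCP_NODELAY)");

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(3260);			/* placeholder port */
	inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);/* placeholder address */

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		close(fd);
		return 1;
	}

	/* Many small (sub-MSS) writes: without the fix, the last partial
	 * segment could be delayed until the retransmission timer once the
	 * socket had been TSQ-throttled.
	 */
	memset(req, 'x', sizeof(req));
	for (int i = 0; i < 1000; i++)
		write(fd, req, sizeof(req));

	close(fd);
	return 0;
}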
---
 net/ipv4/tcp_output.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index aa215b8..df953b3 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -851,7 +851,8 @@ static void tcp_tsq_handler(struct sock *sk)
 	if ((1 << sk->sk_state) &
 	    (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_CLOSING |
 	     TCPF_CLOSE_WAIT  | TCPF_LAST_ACK))
-		tcp_write_xmit(sk, tcp_current_mss(sk), 0, 0, GFP_ATOMIC);
+		tcp_write_xmit(sk, tcp_current_mss(sk), tcp_sk(sk)->nonagle,
+			       0, GFP_ATOMIC);
 }
 /*
  * One tasklet per cpu tries to send more skbs.
@@ -2028,7 +2029,15 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
 
 		if (atomic_read(&sk->sk_wmem_alloc) > limit) {
 			set_bit(TSQ_THROTTLED, &tp->tsq_flags);
-			break;
+			/* It is possible TX completion already happened
+			 * before we set TSQ_THROTTLED, so we must
+			 * test again the condition.
+			 * We abuse smp_mb__after_clear_bit() because
+			 * there is no smp_mb__after_set_bit() yet
+			 */
+			smp_mb__after_clear_bit();
+			if (atomic_read(&sk->sk_wmem_alloc) > limit)
+				break;
 		}
 
 		limit = mss_now;
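For illustration only (not part of the patch): the second hunk follows a
common lock-free pattern -- set the "throttled" flag, issue a full memory
barrier, then re-read the counter, because the completion path may have
drained the queue before the flag became visible. Below is a minimal
userspace sketch of that pattern using C11 atomics; the names tx_completion()
and should_stop_xmit() are hypothetical stand-ins for the kernel's
tcp_wfree() and tcp_write_xmit() paths.

/* Sketch of the "set flag, full barrier, re-check" pattern, assuming a
 * concurrent completion path that frees bytes and reschedules only when
 * it sees the throttled flag.  Not kernel code.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool throttled;		/* stands in for TSQ_THROTTLED     */
static atomic_long wmem_alloc;		/* stands in for sk->sk_wmem_alloc */
static long limit = 1024;

/* Completion path (in the kernel, runs from softirq/IRQ context):
 * releases queued bytes and would reschedule transmission only if the
 * throttled flag is already visible to it.
 */
static void tx_completion(long bytes)
{
	atomic_fetch_sub(&wmem_alloc, bytes);
	if (atomic_load(&throttled))
		printf("completion: would reschedule xmit\n");
}

/* Transmit path, mirroring the patched logic: the flag store and the
 * second read must be ordered, hence the full fence here (the kernel
 * reuses smp_mb__after_clear_bit() for lack of an _after_set_bit()).
 */
static bool should_stop_xmit(void)
{
	if (atomic_load(&wmem_alloc) > limit) {
		atomic_store(&throttled, true);
		atomic_thread_fence(memory_order_seq_cst);
		/* Completion may have run between the two reads:
		 * only stop if we are still over the limit.
		 */
		if (atomic_load(&wmem_alloc) > limit)
			return true;
	}
	return false;
}

int main(void)
{
	atomic_store(&wmem_alloc, 4096);
	printf("stop=%d\n", should_stop_xmit());	/* over limit: stop=1  */
	tx_completion(4000);	/* sees the flag, so it would reschedule xmit */
	return 0;
}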