From patchwork Mon Feb 13 04:37:10 2012
X-Patchwork-Submitter: Neal Cardwell
X-Patchwork-Id: 140850
X-Patchwork-Delegate: davem@davemloft.net
X-Mailing-List: netdev@vger.kernel.org
From: Neal Cardwell
To: David Miller
Cc: netdev@vger.kernel.org, ilpo.jarvinen@helsinki.fi, Nandita Dukkipati,
    Yuchung Cheng, Tom Herbert, Vijay Subramanian, Neal Cardwell
Subject: [PATCH 2/2] tcp: fix range tcp_shifted_skb() passes to tcp_sacktag_one()
Date: Sun, 12 Feb 2012 23:37:10 -0500
Message-Id: <1329107830-3351-2-git-send-email-ncardwell@google.com>
X-Mailer: git-send-email 1.7.7.3
In-Reply-To: <1329107830-3351-1-git-send-email-ncardwell@google.com>
References: <1329107830-3351-1-git-send-email-ncardwell@google.com>

Fix the newly-SACKed range to be the range of newly-shifted bytes.

Previously - since 832d11c5cd076abc0aa1eaf7be96c81d1a59ce41 -
tcp_shifted_skb() incorrectly called tcp_sacktag_one() with the start
and end sequence numbers of the skb it passes in set to the range just
beyond the range that is newly-SACKed.

This commit also removes a special-case adjustment to lost_cnt_hint in
tcp_shifted_skb(), since the pre-existing adjustment of lost_cnt_hint
in tcp_sacktag_one() properly handles this case now that the correct
start sequence number is passed in.

Signed-off-by: Neal Cardwell
---
 net/ipv4/tcp_input.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 4e8a81f..8116d06 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1388,6 +1388,9 @@ static u8 tcp_sacktag_one(struct sock *sk,
 	return sacked;
 }
 
+/* Shift newly-SACKed bytes from this skb to the immediately previous
+ * already-SACKed sk_buff. Mark the newly-SACKed bytes as such.
+ */
 static int tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
			    struct tcp_sacktag_state *state,
			    unsigned int pcount, int shifted, int mss,
@@ -1395,12 +1398,11 @@ static int tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct sk_buff *prev = tcp_write_queue_prev(sk, skb);
+	u32 start_seq = TCP_SKB_CB(skb)->seq;	/* start of newly-SACKed */
+	u32 end_seq = start_seq + shifted;	/* end of newly-SACKed */
 
 	BUG_ON(!pcount);
 
-	if (skb == tp->lost_skb_hint)
-		tp->lost_cnt_hint += pcount;
-
 	TCP_SKB_CB(prev)->end_seq += shifted;
 	TCP_SKB_CB(skb)->seq += shifted;
 
@@ -1424,12 +1426,11 @@ static int tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
 		skb_shinfo(skb)->gso_type = 0;
 	}
 
-	/* We discard results */
-	tcp_sacktag_one(sk, state,
-			TCP_SKB_CB(skb)->sacked,
-			TCP_SKB_CB(skb)->seq,
-			TCP_SKB_CB(skb)->end_seq,
-			dup_sack, pcount);
+	/* Adjust counters and hints for the newly sacked sequence range but
+	 * discard the return value since prev is already marked.
+	 */
+	tcp_sacktag_one(sk, state, TCP_SKB_CB(skb)->sacked,
+			start_seq, end_seq, dup_sack, pcount);
 
 	/* Difference in this won't matter, both ACKed by the same cumul. ACK */
 	TCP_SKB_CB(prev)->sacked |= (TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS);