[net-next,1/2] tcp: remove per-destination timestamp cache

Message ID 20170315203046.158791-1-soheil.kdev@gmail.com
State Accepted, archived
Delegated to: David Miller

Commit Message

Soheil Hassas Yeganeh March 15, 2017, 8:30 p.m. UTC
From: Soheil Hassas Yeganeh <soheil@google.com>

Commit 8a5bd45f6616 ("tcp: randomize tcp timestamp offsets for each connection")
randomizes TCP timestamps per connection. After this commit,
there is no guarantee that the timestamps received from the
same destination are monotonically increasing. As a result,
the per-destination timestamp cache in TCP metrics (i.e., tcpm_ts
in struct tcp_metrics_block) is broken and cannot be relied upon.

Remove the per-destination timestamp cache and all related code
paths.

Note that this cache was already broken for caching timestamps of
multiple machines behind a NAT sharing the same address.

Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Cc: Lutz Vieweg <lvml@5t9.de>
Cc: Florian Westphal <fw@strlen.de>
---
 include/net/tcp.h        |   6 +-
 net/ipv4/tcp_input.c     |   6 +-
 net/ipv4/tcp_ipv4.c      |   4 --
 net/ipv4/tcp_metrics.c   | 147 ++---------------------------------------------
 net/ipv4/tcp_minisocks.c |  22 ++-----
 net/ipv6/tcp_ipv6.c      |   5 --
 6 files changed, 11 insertions(+), 179 deletions(-)
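For context, the randomization that broke the cache gives every connection a constant random offset added to its timestamp clock. Below is a minimal user-space sketch of the effect; the names (struct conn, conn_tsval) are illustrative, not the kernel's:

    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    /* Each connection carries a fixed random offset added to its timestamp
     * clock, so tsvals from two connections originating at the same host
     * are not comparable, and a per-destination "last tsval seen" cache
     * such as tcpm_ts loses its meaning.
     */
    struct conn {
            uint32_t ts_offset;     /* chosen once, at connection setup */
    };

    static uint32_t tcp_clock_ms(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint32_t)(ts.tv_sec * 1000 + ts.tv_nsec / 1000000);
    }

    static void conn_init(struct conn *c)
    {
            c->ts_offset = (uint32_t)random();
    }

    static uint32_t conn_tsval(const struct conn *c)
    {
            return tcp_clock_ms() + c->ts_offset; /* wraps mod 2^32, like TCP tsvals */
    }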

Comments

David Miller March 15, 2017, 10:40 p.m. UTC | #1
From: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
Date: Wed, 15 Mar 2017 16:30:45 -0400

> Note that this cache was already broken for caching timestamps of
> multiple machines behind a NAT sharing the same address.

That's the documented, well-established limitation of time-wait
recycling.

People who enable it need to consider this issue.

This limitation of the feature does not give us a reason to break the
feature even further as a matter of convenience, or to remove it
altogether for the same reason.

Please, instead, fix the bug that was introduced.

Thank you.
Willy Tarreau March 15, 2017, 10:55 p.m. UTC | #2
Hi David,

On Wed, Mar 15, 2017 at 03:40:44PM -0700, David Miller wrote:
> From: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
> Date: Wed, 15 Mar 2017 16:30:45 -0400
> 
> > Note that this cache was already broken for caching timestamps of
> > multiple machines behind a NAT sharing the same address.
> 
> That's the documented, well-established limitation of time-wait
> recycling.
> 
> People who enable it need to consider this issue.
> 
> This limitation of the feature does not give us a reason to break the
> feature even further as a matter of convenience, or to remove it
> altogether for the same reason.
> 
> Please, instead, fix the bug that was introduced.

At least I can say I've seen many people enable it without understanding
its impact, confusing it with tcp_tw_reuse, and copy-pasting it from
random blogs and complaining about issues in production.

I agree that it's hard to arbitrate between stupidity and flexibility
though :-/

Regards,
Willy
Florian Westphal March 15, 2017, 10:57 p.m. UTC | #3
David Miller <davem@davemloft.net> wrote:
> From: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
> Date: Wed, 15 Mar 2017 16:30:45 -0400
> 
> > Note that this cache was already broken for caching timestamps of
> > multiple machines behind a NAT sharing the same address.
> 
> That's the documented, well-established limitation of time-wait
> recycling.

Sigh.

"don't enable this if you connect your machine to the internet".
We're not in the 1990s anymore.  Even I am behind ipv4 CG-NAT nowadays.

So I disagree and would remove this thing.

> This limitation of the feature does not give us a reason to break the
> feature even further as a matter of convenience, or to remove it
> altogether for the same reason.
> 
> Please, instead, fix the bug that was introduced.

AFAIU we only have two alternatives: removal of the randomization feature
or switching to an offset computed via hash(saddr, daddr, secret).

Unless there are more comments I'll look into doing the latter tomorrow.
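That second alternative could look roughly like the sketch below. Hedged heavily: the mixing function is a stand-in for illustration only, not a real keyed hash; a production version would use a proper one (e.g. siphash) keyed with a boot-time secret:

    #include <stdint.h>

    /* Derive a stable per-(saddr, daddr) timestamp offset from a secret:
     * unpredictable to third parties, yet identical for every connection
     * between the same pair of hosts, so a given peer keeps seeing
     * monotonically increasing tsvals. The avalanche steps below are
     * illustrative only; they are not a cryptographic keyed hash.
     */
    static uint32_t ts_offset(uint32_t saddr, uint32_t daddr, uint64_t secret)
    {
            uint64_t h = secret ^ (((uint64_t)saddr << 32) | daddr);

            h ^= h >> 30; h *= 0xbf58476d1ce4e5b9ULL;
            h ^= h >> 27; h *= 0x94d049bb133111ebULL;
            h ^= h >> 31;
            return (uint32_t)h;
    }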
Eric Dumazet March 15, 2017, 10:59 p.m. UTC | #4
On Wed, 2017-03-15 at 15:40 -0700, David Miller wrote:
> From: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
> Date: Wed, 15 Mar 2017 16:30:45 -0400
> 
> > Note that this cache was already broken for caching timestamps of
> > multiple machines behind a NAT sharing the same address.
> 
> That's the documented, well-established limitation of time-wait
> recycling.
> 
> People who enable it need to consider this issue.
> 
> This limitation of the feature does not give us a reason to break the
> feature even further as a matter of convenience, or to remove it
> altogether for the same reason.
> 
> Please, instead, fix the bug that was introduced.
> 
> Thank you.

You mean revert Florian's nice patches?

This would kill timestamp randomization, and thus prevent some
organizations from turning TCP timestamps on.

TCP timestamps are more useful than this obscure tw_recycle thing that
is hurting innocent users.
David Miller March 15, 2017, 11:45 p.m. UTC | #5
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Wed, 15 Mar 2017 15:59:01 -0700

> On Wed, 2017-03-15 at 15:40 -0700, David Miller wrote:
>> From: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
>> Date: Wed, 15 Mar 2017 16:30:45 -0400
>> 
>> > Note that this cache was already broken for caching timestamps of
>> > multiple machines behind a NAT sharing the same address.
>> 
>> That's the documented, well-established limitation of time-wait
>> recycling.
>> 
>> People who enable it need to consider this issue.
>> 
>> This limitation of the feature does not give us a reason to break the
>> feature even further as a matter of convenience, or to remove it
>> altogether for the same reason.
>> 
>> Please, instead, fix the bug that was introduced.
>> 
>> Thank you.
> 
> You mean revert Florian's nice patches?
> 
> This would kill timestamp randomization, and thus prevent some
> organizations from turning TCP timestamps on.
> 
> TCP timestamps are more useful than this obscure tw_recycle thing that
> is hurting innocent users.

Ok, I guess we can remove it in that case.  I'm still a bit disappointed
as I was always hoping someone would find a way to make this work even
in the presence of NAT.

I must be too optimistic.
David Miller March 15, 2017, 11:45 p.m. UTC | #6
From: Florian Westphal <fw@strlen.de>
Date: Wed, 15 Mar 2017 23:57:26 +0100

> AFAIU we only have two alternatives: removal of the randomization feature
> or switching to an offset computed via hash(saddr, daddr, secret).
> 
> Unless there are more comments I'll look into doing the latter tomorrow.

No, I'll apply the removal.
Eric Dumazet March 16, 2017, 12:06 a.m. UTC | #7
On Wed, 2017-03-15 at 16:45 -0700, David Miller wrote:

> Ok, I guess we can remove it in that case.  I'm still a bit disappointed
> as I was always hoping someone would find a way to make this work even
> in the presence of NAT.

Nat are good, Nat are good.

I can't find this hilarious video we watched in Copenhagen ;)
Lutz Vieweg March 16, 2017, 11:31 a.m. UTC | #8
On 03/15/2017 11:55 PM, Willy Tarreau wrote:
> At least I can say I've seen many people enable it without understanding its impact, confusing it
> with tcp_tw_reuse, and copy-pasting it from random blogs and complaining about issues in
> production.

I currently wonder: what is the correct advice to an operator who needs
to run one server instance that is meant to accept thousands of new,
short-lived TCP connections per minute?

The explanation text at
https://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux
seems to provide comprehensive advice, but its summary is, after all,
somewhat disappointing:

> The universal solution is to increase the number of possible quadruplets by using, for example,
> more server ports. This will allow you to not exhaust the possible connections with TIME-WAIT
> entries.

Assuming an operator has to deal with a given server executable, which does not
provide a feature to "open many listening ports for the same purpose in parallel",
this is hardly an option. (Of course, if you can simply start N server
instances instead of 1, this becomes an option, but not everything is a
simple, stateless web server.)
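For reference, a hypothetical sketch of the blog's "more server ports" workaround follows; listen_on_port and the port numbers are illustrative, and it only helps when clients can spread their connections across the range:

    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Open one listening socket per port, so each client IP gets N times
     * as many usable 4-tuples before TIME_WAIT exhaustion bites.
     */
    static int listen_on_port(uint16_t port)
    {
            struct sockaddr_in addr;
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            if (fd < 0)
                    return -1;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
                listen(fd, 128) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }

    /* e.g.: for (p = 8000; p < 8000 + 16; p++) listen_on_port(p); */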

> On the server side, do not enable net.ipv4.tcp_tw_recycle unless you are pretty sure you will
> never have NAT devices in the mix. Enabling net.ipv4.tcp_tw_reuse is useless for incoming
> connections.

So basically neither option will help the server operator.

> On the client side, enabling net.ipv4.tcp_tw_reuse is another almost-safe solution. Enabling
> net.ipv4.tcp_tw_recycle in addition to net.ipv4.tcp_tw_reuse is mostly useless.

If you just operate the server, but not the (remote) clients, this is not relevant.


Is the final verdict that unless the server software itself offers to open
N listening ports for the same purpose, there is no solution?

Regards,

Lutz Vieweg
Neal Cardwell March 16, 2017, 3:40 p.m. UTC | #9
On Thu, Mar 16, 2017 at 7:31 AM, Lutz Vieweg <lvml@5t9.de> wrote:
>
> On 03/15/2017 11:55 PM, Willy Tarreau wrote:
>>
>> At least I can say I've seen many people enable it without understanding its impact, confusing it
>> with tcp_tw_reuse, and copy-pasting it from random blogs and complaining about issues in
>> production.
>
>
> I currently wonder: what is the correct advice to an operator who needs
> to run one server instance that is meant to accept thousands of new,
> short-lived TCP connections per minute?

Note that for this to be a problem there would have to be thousands of
new, short-lived TCP connections per minute from a single source IP
address to a single destination IP address. Normal client software
should not be doing this. AFAIK this is pretty rare, unless someone is
running a load test or has an overly-aggressive monitoring system. NAT
boxes or proxies with that kind of traffic should be running with
multiple public source IPs.

But if/when the problem occurs, then the feasible solutions I'm aware
of, in approximate descending order of preference, are:

(1) use longer connections from the client side (browsers and RPC libraries are
    usually pretty good about keeping connections open for a long time, so this
    is usually sufficient)

(2) have the client do the close(), so the client is the side to carry the
    TIME_WAIT state

(3) have the server use SO_LINGER with a timeout of 0, so that
    the connection is closed with a RST and the server carries no
    TIME_WAIT state

neal
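A minimal sketch of option (3), using only the standard sockets API; note the caveat raised just below in the thread that an RST-style close can destroy data still in flight:

    #include <sys/socket.h>
    #include <unistd.h>

    /* Close with an immediate RST instead of the normal FIN handshake, so
     * this side leaves no TIME_WAIT state behind. Unsent or unacknowledged
     * data may be lost, which is the "tail data" problem discussed below.
     */
    static int close_with_rst(int fd)
    {
            struct linger lg = { .l_onoff = 1, .l_linger = 0 };

            if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg)) < 0)
                    return -1;
            return close(fd);       /* sends RST, skips FIN/TIME_WAIT */
    }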
Willy Tarreau March 16, 2017, 4:05 p.m. UTC | #10
Hi Neal,

On Thu, Mar 16, 2017 at 11:40:52AM -0400, Neal Cardwell wrote:
> On Thu, Mar 16, 2017 at 7:31 AM, Lutz Vieweg <lvml@5t9.de> wrote:
> >
> > On 03/15/2017 11:55 PM, Willy Tarreau wrote:
> >>
> >> At least I can say I've seen many people enable it without understanding its impact, confusing it
> >> with tcp_tw_reuse, and copy-pasting it from random blogs and complaining about issues in
> >> production.
> >
> >
> > I currently wonder: what is the correct advice to an operator who needs
> > to run one server instance that is meant to accept thousands of new,
> > short-lived TCP connections per minute?
> 
> Note that for this to be a problem there would have to be thousands of
> new, short-lived TCP connections per minute from a single source IP
> address to a single destination IP address. Normal client software
> should not be doing this. AFAIK this is pretty rare, unless someone is
> running a load test or has an overly-aggressive monitoring system. NAT
> boxes or proxies with that kind of traffic should be running with
> multiple public source IPs.

In fact it's the regular situation with reverse-proxies: you can run through
the whole source port range every second. But with timestamps enabled, you
benefit from PAWS and you don't have any problem anymore; everything
works pretty well.

> But if/when the problem occurs, then the feasible solutions I'm aware
> of, in approximate descending order of preference, are:
> 
> (1) use longer connections from the client side (browsers and RPC libraries are
>     usually pretty good about keeping connections open for a long time, so this
>     is usually sufficient)
> 
> (2) have the client do the close(), so the client is the side to carry the
>     TIME_WAIT state

That's impossible for proxies, as you can't connect again from the same
source port, causing performance to be divided by more than 100. What
proxies have to do when they're forced to close an outgoing connection first
is to set SO_LINGER to (0,0) so that an RST is used and the source port can
be reused. But as you can guess, if that RST gets lost, the next opening is
not that pretty: either [SYN, ACK, RST, pause, SYN, SYN-ACK, ACK] or
[SYN, RST, pause, SYN, SYN-ACK, ACK] depending on whether the SYN appears
in the previous window or not.

> (3) have the server use SO_LINGER with a timeout of 0, so that
>     the connection is closed with a RST and the server carries no
>     TIME_WAIT state

The problem is that it also kills the tail data.

Quite frankly, the only issues I'm used to seeing are with clients closing
first and with source ports being reused. As soon as timestamps are
enabled on both sides and people don't blindly play with tcp_tw_recycle,
I really never face any connection issues.

Willy
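The PAWS protection Willy refers to boils down to a timestamp comparison on incoming segments. Here is a sketch of the core test from RFC 7323; the kernel's actual check involves more conditions, so this is an illustration, not the implementation:

    #include <stdint.h>

    /* Core PAWS idea (RFC 7323): a segment whose timestamp is older than
     * the last one recorded for this connection is treated as a stale
     * duplicate from an earlier incarnation, which is what makes rapid
     * source-port reuse safe. Signed 32-bit subtraction handles tsval
     * wraparound.
     */
    static int paws_reject(uint32_t seg_tsval, uint32_t ts_recent)
    {
            return (int32_t)(seg_tsval - ts_recent) < 0;
    }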
Lutz Vieweg March 16, 2017, 5:30 p.m. UTC | #11
On 03/16/2017 04:40 PM, Neal Cardwell wrote:
>> I currently wonder: what is the correct advice to an operator who needs
>> to run one server instance that is meant to accept thousands of new,
>> short-lived TCP connections per minute?
>
> Note that for this to be a problem there would have to be thousands of
> new, short-lived TCP connections per minute from a single source IP
> address to a single destination IP address. Normal client software
> should not be doing this. AFAIK this is pretty rare, unless someone is
> running a load test or has an overly-aggressive monitoring system.

Indeed, I have meanwhile found that a load/regression-test scenario had
been the rationale for the tcp_tw_recycle = 1 setting: when a
recorded log of hundreds of thousands of connections (each placing
one or a few requests) was replayed, it failed due to an excessive
number of connections in TIME_WAIT state.

Do I understand correctly that "tcp_tw_recycle = 1" is fine
in such a scenario, as one can be sure both client and server
are at fixed, non-NATed IP addresses?

I wonder whether there might be a possibility to limit the use
of "tcp_tw_recycle = 1" to a certain address or listen-port range.

If not, I guess our best option at this time is to advise
enabling "tcp_tw_recycle = 1" only while explicitely performing
local load/regression tests, and to disable it otherwise.
(This however means that running both automated continuous integration
tests and any services for remote clients on the same system
would not mix well, as the setting could be "right" for only one
of them.)


> (1) use longer connections from the client side

Sure, in cases where that is under our control, we do exactly that.

> (2) have the client do the close(), so the client is the side to carry the
>      TIME_WAIT state

In the load/regression test scenario, we are both server and client,
so I guess this would not help.

> (3) have the server use SO_LINGER with a timeout of 0, so that
>      the connection is closed with a RST and the server carries no
>      TIME_WAIT state

Potentially losing the end of some conversation is not really
an option for most protocols / use cases.

Regards,

Lutz Vieweg
Alexander Alemayhu March 19, 2017, 7:53 a.m. UTC | #12
On Wed, Mar 15, 2017 at 05:06:58PM -0700, Eric Dumazet wrote:
> 
> Nat are good, Nat are good.
> 
> I can't find this hilarious video we watched in Copenhagen ;)
> 
Maybe 'Oops, I did it: IPv6 NAT by Patrick McHardy'[0]. Starts around
19:10.

[0]: http://video.dkuug.dk/media/oops-i-did-it-ipv6-nat-by-patrick-mchardy

Patch

diff --git a/include/net/tcp.h b/include/net/tcp.h
index bede8f7fa742..c81f3b958d44 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -406,11 +406,7 @@  void tcp_clear_retrans(struct tcp_sock *tp);
 void tcp_update_metrics(struct sock *sk);
 void tcp_init_metrics(struct sock *sk);
 void tcp_metrics_init(void);
-bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst,
-			bool paws_check, bool timestamps);
-bool tcp_remember_stamp(struct sock *sk);
-bool tcp_tw_remember_stamp(struct inet_timewait_sock *tw);
-void tcp_fetch_timewait_stamp(struct sock *sk, struct dst_entry *dst);
+bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst);
 void tcp_disable_fack(struct tcp_sock *tp);
 void tcp_close(struct sock *sk, long timeout);
 void tcp_init_sock(struct sock *sk);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 96b67a8b18c3..aafec0676d3e 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -6342,8 +6342,7 @@  int tcp_conn_request(struct request_sock_ops *rsk_ops,
 			dst = af_ops->route_req(sk, &fl, req, &strict);
 
 			if (dst && strict &&
-			    !tcp_peer_is_proven(req, dst, true,
-						tmp_opt.saw_tstamp)) {
+			    !tcp_peer_is_proven(req, dst)) {
 				NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSPASSIVEREJECTED);
 				goto drop_and_release;
 			}
@@ -6352,8 +6351,7 @@  int tcp_conn_request(struct request_sock_ops *rsk_ops,
 		else if (!net->ipv4.sysctl_tcp_syncookies &&
 			 (net->ipv4.sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) <
 			  (net->ipv4.sysctl_max_syn_backlog >> 2)) &&
-			 !tcp_peer_is_proven(req, dst, false,
-					     tmp_opt.saw_tstamp)) {
+			 !tcp_peer_is_proven(req, dst)) {
 			/* Without syncookies last quarter of
 			 * backlog is filled with destinations,
 			 * proven to be alive.
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 08d870e45658..d8b401fff9fe 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -198,10 +198,6 @@  int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
 			tp->write_seq	   = 0;
 	}
 
-	if (tcp_death_row->sysctl_tw_recycle &&
-	    !tp->rx_opt.ts_recent_stamp && fl4->daddr == daddr)
-		tcp_fetch_timewait_stamp(sk, &rt->dst);
-
 	inet->inet_dport = usin->sin_port;
 	sk_daddr_set(sk, daddr);
 
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
index 0f46e5fe31ad..9d0d4f39e42b 100644
--- a/net/ipv4/tcp_metrics.c
+++ b/net/ipv4/tcp_metrics.c
@@ -45,8 +45,6 @@  struct tcp_metrics_block {
 	struct inetpeer_addr		tcpm_saddr;
 	struct inetpeer_addr		tcpm_daddr;
 	unsigned long			tcpm_stamp;
-	u32				tcpm_ts;
-	u32				tcpm_ts_stamp;
 	u32				tcpm_lock;
 	u32				tcpm_vals[TCP_METRIC_MAX_KERNEL + 1];
 	struct tcp_fastopen_metrics	tcpm_fastopen;
@@ -123,8 +121,6 @@  static void tcpm_suck_dst(struct tcp_metrics_block *tm,
 	tm->tcpm_vals[TCP_METRIC_SSTHRESH] = dst_metric_raw(dst, RTAX_SSTHRESH);
 	tm->tcpm_vals[TCP_METRIC_CWND] = dst_metric_raw(dst, RTAX_CWND);
 	tm->tcpm_vals[TCP_METRIC_REORDERING] = dst_metric_raw(dst, RTAX_REORDERING);
-	tm->tcpm_ts = 0;
-	tm->tcpm_ts_stamp = 0;
 	if (fastopen_clear) {
 		tm->tcpm_fastopen.mss = 0;
 		tm->tcpm_fastopen.syn_loss = 0;
@@ -273,48 +269,6 @@  static struct tcp_metrics_block *__tcp_get_metrics_req(struct request_sock *req,
 	return tm;
 }
 
-static struct tcp_metrics_block *__tcp_get_metrics_tw(struct inet_timewait_sock *tw)
-{
-	struct tcp_metrics_block *tm;
-	struct inetpeer_addr saddr, daddr;
-	unsigned int hash;
-	struct net *net;
-
-	if (tw->tw_family == AF_INET) {
-		inetpeer_set_addr_v4(&saddr, tw->tw_rcv_saddr);
-		inetpeer_set_addr_v4(&daddr, tw->tw_daddr);
-		hash = ipv4_addr_hash(tw->tw_daddr);
-	}
-#if IS_ENABLED(CONFIG_IPV6)
-	else if (tw->tw_family == AF_INET6) {
-		if (ipv6_addr_v4mapped(&tw->tw_v6_daddr)) {
-			inetpeer_set_addr_v4(&saddr, tw->tw_rcv_saddr);
-			inetpeer_set_addr_v4(&daddr, tw->tw_daddr);
-			hash = ipv4_addr_hash(tw->tw_daddr);
-		} else {
-			inetpeer_set_addr_v6(&saddr, &tw->tw_v6_rcv_saddr);
-			inetpeer_set_addr_v6(&daddr, &tw->tw_v6_daddr);
-			hash = ipv6_addr_hash(&tw->tw_v6_daddr);
-		}
-	}
-#endif
-	else
-		return NULL;
-
-	net = twsk_net(tw);
-	hash ^= net_hash_mix(net);
-	hash = hash_32(hash, tcp_metrics_hash_log);
-
-	for (tm = rcu_dereference(tcp_metrics_hash[hash].chain); tm;
-	     tm = rcu_dereference(tm->tcpm_next)) {
-		if (addr_same(&tm->tcpm_saddr, &saddr) &&
-		    addr_same(&tm->tcpm_daddr, &daddr) &&
-		    net_eq(tm_net(tm), net))
-			break;
-	}
-	return tm;
-}
-
 static struct tcp_metrics_block *tcp_get_metrics(struct sock *sk,
 						 struct dst_entry *dst,
 						 bool create)
@@ -573,8 +527,7 @@  void tcp_init_metrics(struct sock *sk)
 	tp->snd_cwnd_stamp = tcp_time_stamp;
 }
 
-bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst,
-			bool paws_check, bool timestamps)
+bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst)
 {
 	struct tcp_metrics_block *tm;
 	bool ret;
@@ -584,94 +537,10 @@  bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst,
 
 	rcu_read_lock();
 	tm = __tcp_get_metrics_req(req, dst);
-	if (paws_check) {
-		if (tm &&
-		    (u32)get_seconds() - tm->tcpm_ts_stamp < TCP_PAWS_MSL &&
-		    ((s32)(tm->tcpm_ts - req->ts_recent) > TCP_PAWS_WINDOW ||
-		     !timestamps))
-			ret = false;
-		else
-			ret = true;
-	} else {
-		if (tm && tcp_metric_get(tm, TCP_METRIC_RTT) && tm->tcpm_ts_stamp)
-			ret = true;
-		else
-			ret = false;
-	}
-	rcu_read_unlock();
-
-	return ret;
-}
-
-void tcp_fetch_timewait_stamp(struct sock *sk, struct dst_entry *dst)
-{
-	struct tcp_metrics_block *tm;
-
-	rcu_read_lock();
-	tm = tcp_get_metrics(sk, dst, true);
-	if (tm) {
-		struct tcp_sock *tp = tcp_sk(sk);
-
-		if ((u32)get_seconds() - tm->tcpm_ts_stamp <= TCP_PAWS_MSL) {
-			tp->rx_opt.ts_recent_stamp = tm->tcpm_ts_stamp;
-			tp->rx_opt.ts_recent = tm->tcpm_ts;
-		}
-	}
-	rcu_read_unlock();
-}
-EXPORT_SYMBOL_GPL(tcp_fetch_timewait_stamp);
-
-/* VJ's idea. Save last timestamp seen from this destination and hold
- * it at least for normal timewait interval to use for duplicate
- * segment detection in subsequent connections, before they enter
- * synchronized state.
- */
-bool tcp_remember_stamp(struct sock *sk)
-{
-	struct dst_entry *dst = __sk_dst_get(sk);
-	bool ret = false;
-
-	if (dst) {
-		struct tcp_metrics_block *tm;
-
-		rcu_read_lock();
-		tm = tcp_get_metrics(sk, dst, true);
-		if (tm) {
-			struct tcp_sock *tp = tcp_sk(sk);
-
-			if ((s32)(tm->tcpm_ts - tp->rx_opt.ts_recent) <= 0 ||
-			    ((u32)get_seconds() - tm->tcpm_ts_stamp > TCP_PAWS_MSL &&
-			     tm->tcpm_ts_stamp <= (u32)tp->rx_opt.ts_recent_stamp)) {
-				tm->tcpm_ts_stamp = (u32)tp->rx_opt.ts_recent_stamp;
-				tm->tcpm_ts = tp->rx_opt.ts_recent;
-			}
-			ret = true;
-		}
-		rcu_read_unlock();
-	}
-	return ret;
-}
-
-bool tcp_tw_remember_stamp(struct inet_timewait_sock *tw)
-{
-	struct tcp_metrics_block *tm;
-	bool ret = false;
-
-	rcu_read_lock();
-	tm = __tcp_get_metrics_tw(tw);
-	if (tm) {
-		const struct tcp_timewait_sock *tcptw;
-		struct sock *sk = (struct sock *) tw;
-
-		tcptw = tcp_twsk(sk);
-		if ((s32)(tm->tcpm_ts - tcptw->tw_ts_recent) <= 0 ||
-		    ((u32)get_seconds() - tm->tcpm_ts_stamp > TCP_PAWS_MSL &&
-		     tm->tcpm_ts_stamp <= (u32)tcptw->tw_ts_recent_stamp)) {
-			tm->tcpm_ts_stamp = (u32)tcptw->tw_ts_recent_stamp;
-			tm->tcpm_ts	   = tcptw->tw_ts_recent;
-		}
+	if (tm && tcp_metric_get(tm, TCP_METRIC_RTT))
 		ret = true;
-	}
+	else
+		ret = false;
 	rcu_read_unlock();
 
 	return ret;
@@ -791,14 +660,6 @@  static int tcp_metrics_fill_info(struct sk_buff *msg,
 			  jiffies - tm->tcpm_stamp,
 			  TCP_METRICS_ATTR_PAD) < 0)
 		goto nla_put_failure;
-	if (tm->tcpm_ts_stamp) {
-		if (nla_put_s32(msg, TCP_METRICS_ATTR_TW_TS_STAMP,
-				(s32) (get_seconds() - tm->tcpm_ts_stamp)) < 0)
-			goto nla_put_failure;
-		if (nla_put_u32(msg, TCP_METRICS_ATTR_TW_TSVAL,
-				tm->tcpm_ts) < 0)
-			goto nla_put_failure;
-	}
 
 	{
 		int n = 0;
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 7e16243cdb58..692f974e5abe 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -94,7 +94,6 @@  tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb,
 	struct tcp_options_received tmp_opt;
 	struct tcp_timewait_sock *tcptw = tcp_twsk((struct sock *)tw);
 	bool paws_reject = false;
-	struct inet_timewait_death_row *tcp_death_row = &sock_net((struct sock*)tw)->ipv4.tcp_death_row;
 
 	tmp_opt.saw_tstamp = 0;
 	if (th->doff > (sizeof(*th) >> 2) && tcptw->tw_ts_recent_stamp) {
@@ -149,12 +148,7 @@  tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb,
 			tcptw->tw_ts_recent	  = tmp_opt.rcv_tsval;
 		}
 
-		if (tcp_death_row->sysctl_tw_recycle &&
-		    tcptw->tw_ts_recent_stamp &&
-		    tcp_tw_remember_stamp(tw))
-			inet_twsk_reschedule(tw, tw->tw_timeout);
-		else
-			inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
+		inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
 		return TCP_TW_ACK;
 	}
 
@@ -259,12 +253,8 @@  void tcp_time_wait(struct sock *sk, int state, int timeo)
 	const struct inet_connection_sock *icsk = inet_csk(sk);
 	const struct tcp_sock *tp = tcp_sk(sk);
 	struct inet_timewait_sock *tw;
-	bool recycle_ok = false;
 	struct inet_timewait_death_row *tcp_death_row = &sock_net(sk)->ipv4.tcp_death_row;
 
-	if (tcp_death_row->sysctl_tw_recycle && tp->rx_opt.ts_recent_stamp)
-		recycle_ok = tcp_remember_stamp(sk);
-
 	tw = inet_twsk_alloc(sk, tcp_death_row, state);
 
 	if (tw) {
@@ -317,13 +307,9 @@  void tcp_time_wait(struct sock *sk, int state, int timeo)
 		if (timeo < rto)
 			timeo = rto;
 
-		if (recycle_ok) {
-			tw->tw_timeout = rto;
-		} else {
-			tw->tw_timeout = TCP_TIMEWAIT_LEN;
-			if (state == TCP_TIME_WAIT)
-				timeo = TCP_TIMEWAIT_LEN;
-		}
+		tw->tw_timeout = TCP_TIMEWAIT_LEN;
+		if (state == TCP_TIME_WAIT)
+			timeo = TCP_TIMEWAIT_LEN;
 
 		inet_twsk_schedule(tw, timeo);
 		/* Linkage updates. */
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index c73a431fd06f..853cb43e3e3c 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -265,11 +265,6 @@  static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
 	sk->sk_gso_type = SKB_GSO_TCPV6;
 	ip6_dst_store(sk, dst, NULL, NULL);
 
-	if (tcp_death_row->sysctl_tw_recycle &&
-	    !tp->rx_opt.ts_recent_stamp &&
-	    ipv6_addr_equal(&fl6.daddr, &sk->sk_v6_daddr))
-		tcp_fetch_timewait_stamp(sk, dst);
-
 	icsk->icsk_ext_hdr_len = 0;
 	if (opt)
 		icsk->icsk_ext_hdr_len = opt->opt_flen +