
[net-next,4/9] rxrpc: Randomise epoch and starting client conn ID values

Message ID 147302297165.28597.8317497494192917329.stgit@warthog.procyon.org.uk
State Accepted, archived
Delegated to: David Miller

Commit Message

David Howells Sept. 4, 2016, 9:02 p.m. UTC
Create a random epoch value rather than a time-based one on startup and set
the top bit to indicate that this is the case.

Also create a random starting client connection ID value.  This will be
incremented from here as new client connections are created.

Signed-off-by: David Howells <dhowells@redhat.com>
---

 include/rxrpc/packet.h |    1 +
 net/rxrpc/af_rxrpc.c   |    9 ++++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

Comments

David Laight Sept. 5, 2016, 3:01 p.m. UTC | #1
From: David Howells
> Sent: 04 September 2016 22:03
>
> Create a random epoch value rather than a time-based one on startup and set
> the top bit to indicate that this is the case.

Why set the top bit?
There is nothing to stop the time (in seconds) from having the top bit set.
Nothing else can care - otherwise this wouldn't work.

> Also create a random starting client connection ID value.  This will be
> incremented from here as new client connections are created.

I'm guessing this is to make duplicates less likely after a restart?
You may want to worry about duplicate allocations (after 2^32 connects).
There are id allocation algorithms that guarantee not to generate duplicates
and not to reuse values quickly while still being fixed cost.
Look at the code NetBSD uses to allocate process ids for an example.

	David
David Howells Sept. 5, 2016, 4:24 p.m. UTC | #2
[cc'ing Jeff Altman for comment]

David Laight <David.Laight@ACULAB.COM> wrote:

> > Create a random epoch value rather than a time-based one on startup and set
> > the top bit to indicate that this is the case.
> 
> Why set the top bit?
> There is nothing to stop the time (in seconds) from having the top bit set.
> Nothing else can care - otherwise this wouldn't work.

This is what I'm told I should do by purveyors of other RxRPC solutions.

> > Also create a random starting client connection ID value.  This will be
> > incremented from here as new client connections are created.
> 
> I'm guessing this is to make duplicates less likely after a restart?

Again, it's been suggested that I do this, but I would guess so.

> You may want to worry about duplicate allocations (after 2^32 connects).

It's actually a quarter of that, but connection != call, so a connection may
be used for up to ~16 billion RPC operations before it *has* to be flushed.

> There are id allocation algorithms that guarantee not to generate duplicates
> and not to reuse values quickly while still being fixed cost.
> Look at the code NetBSD uses to allocate process ids for an example.

I'm using idr_alloc_cyclic()[*] with a fixed size "window" on the active conn
ID values.  Client connections with IDs outside of that window are discarded
as soon as possible to keep the memory consumption of the tree down (and to
force security renegotiation occasionally).  However, given that there are a
billion IDs to cycle through, it will take quite a while for reuse to become
an issue.

I like the idea of incrementing the epoch every time we cycle through the ID
space, but I'm told that a change in the epoch value is an indication that the
client rebooted - with what consequences I cannot say.

[*] which is what Linux uses to allocate process IDs.

David
Jeffrey E Altman Sept. 6, 2016, 2:12 a.m. UTC | #3
Reply inline ....

On 9/5/2016 12:24 PM, David Howells wrote:
> [cc'ing Jeff Altman for comment]
> 
> David Laight <David.Laight@ACULAB.COM> wrote:
> 
>>> Create a random epoch value rather than a time-based one on startup and set
>>> the top bit to indicate that this is the case.
>>
>> Why set the top bit?
>> There is nothing to stop the time (in seconds) from having the top bit set.
>> Nothing else can care - otherwise this wouldn't work.
> 
> This is what I'm told I should do by purveyors of other RxRPC solutions.

The protocol specification requires that the top bit be 1 for a random
epoch and 0 for a time derived epoch.
> 
>>> Also create a random starting client connection ID value.  This will be
>>> incremented from here as new client connections are created.
>>
>> I'm guessing this is to make duplicates less likely after a restart?

It's to reduce the possibility of duplicates on multiple machines that
might at some point exchange an endpoint address, either due to mobility
or NAT/PAT.
> 
> Again, it's been suggested that I do this, but I would guess so.
> 
>> You may want to worry about duplicate allocations (after 2^32 connects).
> 
> It's actually a quarter of that, but connection != call, so a connection may
> be used for up to ~16 billion RPC operations before it *has* to be flushed.
> 
>> There are id allocation algorithms that guarantee not to generate duplicates
>> and not to reuse values quickly while still being fixed cost.
>> Look at the code NetBSD uses to allocate process ids for an example.
> 
> I'm using idr_alloc_cyclic()[*] with a fixed size "window" on the active conn
> ID values.  Client connections with IDs outside of that window are discarded
> as soon as possible to keep the memory consumption of the tree down (and to
> force security renegotiation occasionally).  However, given that there are a
> billion IDs to cycle through, it will take quite a while for reuse to become
> an issue.
> 
> I like the idea of incrementing the epoch every time we cycle through the ID
> space, but I'm told that a change in the epoch value is an indication that the
> client rebooted - with what consequences I cannot say.

State information might be recorded about an Rx peer on the assumption
that state will be reset when the epoch changes.  The most frequent use
of this technique is Rx RPC statistics monitoring.


> 
> [*] which is what Linux uses to allocate process IDs.
> 
> David
>

Patch

diff --git a/include/rxrpc/packet.h b/include/rxrpc/packet.h
index b2017440b765..3c6128e1fdbe 100644
--- a/include/rxrpc/packet.h
+++ b/include/rxrpc/packet.h
@@ -24,6 +24,7 @@  typedef __be32	rxrpc_serial_net_t; /* on-the-wire Rx message serial number */
  */
 struct rxrpc_wire_header {
 	__be32		epoch;		/* client boot timestamp */
+#define RXRPC_RANDOM_EPOCH	0x80000000	/* Random if set, date-based if not */
 
 	__be32		cid;		/* connection and channel ID */
 #define RXRPC_MAXCALLS		4			/* max active calls per conn */
diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index 32d544995dda..b66a9e6f8d04 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -16,6 +16,7 @@ 
 #include <linux/net.h>
 #include <linux/slab.h>
 #include <linux/skbuff.h>
+#include <linux/random.h>
 #include <linux/poll.h>
 #include <linux/proc_fs.h>
 #include <linux/key-type.h>
@@ -700,7 +701,13 @@  static int __init af_rxrpc_init(void)
 
 	BUILD_BUG_ON(sizeof(struct rxrpc_skb_priv) > FIELD_SIZEOF(struct sk_buff, cb));
 
-	rxrpc_epoch = get_seconds();
+	get_random_bytes(&rxrpc_epoch, sizeof(rxrpc_epoch));
+	rxrpc_epoch |= RXRPC_RANDOM_EPOCH;
+	get_random_bytes(&rxrpc_client_conn_ids.cur,
+			 sizeof(rxrpc_client_conn_ids.cur));
+	rxrpc_client_conn_ids.cur &= 0x3fffffff;
+	if (rxrpc_client_conn_ids.cur == 0)
+		rxrpc_client_conn_ids.cur = 1;
 
 	ret = -ENOMEM;
 	rxrpc_call_jar = kmem_cache_create(