
[net-next,01/12] net/tls: simplify seq calculation in handle_device_resync()

Message ID 20190611044010.29161-2-jakub.kicinski@netronome.com
State Accepted
Delegated to: David Miller
Series tls: add support for kernel-driven resync and nfp RX offload

Commit Message

Jakub Kicinski June 11, 2019, 4:39 a.m. UTC
We subtract "TLS_HEADER_SIZE - 1" from req_seq and then, if the two
match, add the same constant to seq.  Just add it to seq up front
instead, and req_seq does not have to be touched at all.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 net/tls/tls_device.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
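
The equivalence the change relies on is plain u32 modular arithmetic:
req_seq - (TLS_HEADER_SIZE - 1) == seq holds exactly when
req_seq == seq + TLS_HEADER_SIZE - 1, so the constant can be added to
seq once, before the comparison, instead of being subtracted from
req_seq and then added back inside the branch.  A minimal user-space
sketch of that equivalence follows; it is an illustration only, not
kernel code, and it redefines TLS_HEADER_SIZE locally (the 5-byte TLS
record header).

/*
 * Standalone illustration (not kernel code) that, with u32
 * wrap-around arithmetic, the old and new comparisons in
 * handle_device_resync() agree for all inputs.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define TLS_HEADER_SIZE 5	/* 5-byte TLS record header */

static int match_old(uint32_t req_seq, uint32_t seq)
{
	/* old code: shift req_seq down, compare against seq */
	return req_seq - ((uint32_t)TLS_HEADER_SIZE - 1) == seq;
}

static int match_new(uint32_t req_seq, uint32_t seq)
{
	/* new code: bump seq up front and compare directly */
	return req_seq == seq + TLS_HEADER_SIZE - 1;
}

int main(void)
{
	/* spot-check a few sequence numbers, including wrap-around */
	uint32_t seqs[] = { 0, 1, 1000, UINT32_MAX - 2, UINT32_MAX };
	unsigned int i;

	for (i = 0; i < sizeof(seqs) / sizeof(seqs[0]); i++) {
		uint32_t seq = seqs[i];
		uint32_t req = seq + TLS_HEADER_SIZE - 1;

		assert(match_old(req, seq) == match_new(req, seq));
		assert(match_old(req + 1, seq) == match_new(req + 1, seq));
	}
	printf("old and new comparisons agree\n");
	return 0;
}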

Patch

diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 43f2deb57078..59f0c8dacbcc 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -576,14 +576,13 @@  void handle_device_resync(struct sock *sk, u32 seq, u64 rcd_sn)
 
 	rx_ctx = tls_offload_ctx_rx(tls_ctx);
 	resync_req = atomic64_read(&rx_ctx->resync_req);
-	req_seq = (resync_req >> 32) - ((u32)TLS_HEADER_SIZE - 1);
+	req_seq = resync_req >> 32;
+	seq += TLS_HEADER_SIZE - 1;
 	is_req_pending = resync_req;
 
 	if (unlikely(is_req_pending) && req_seq == seq &&
-	    atomic64_try_cmpxchg(&rx_ctx->resync_req, &resync_req, 0)) {
-		seq += TLS_HEADER_SIZE - 1;
+	    atomic64_try_cmpxchg(&rx_ctx->resync_req, &resync_req, 0))
 		tls_device_resync_rx(tls_ctx, sk, seq, rcd_sn);
-	}
 }
 
 static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)