| Message ID | 1338544881.2760.1502.camel@edumazet-glaptop |
|---|---|
| State | Superseded, archived |
| Delegated to | David Miller |
On Fri, 2012-06-01 at 12:01 +0200, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@google.com>
>
> ---
>  net/ipv4/tcp_ipv4.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)

I'll send a v2 with the missing IPv6 part ;)
On Friday 01 June 2012 12:01:21 Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@google.com>
>
> While testing how linux behaves on SYNFLOOD attack on multiqueue device
> (ixgbe), I found that SYNACK messages were dropped at Qdisc level
> because we send them all on a single queue.
>
> Obvious choice is to reflect incoming SYN packet @queue_mapping to
> SYNACK packet.
>
> Under stress, my machine could only send 25.000 SYNACK per second (for
> 200.000 incoming SYN per second). NIC : ixgbe with 16 rx/tx queues.
>
> After patch, not a single SYNACK is dropped.

Great, that was just what I was looking for. Will try to get this into test today. IPv6, do you add that as well?
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index a43b87d..c8d28c4 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -824,7 +824,8 @@ static void tcp_v4_reqsk_send_ack(struct sock *sk, struct sk_buff *skb,
  */
 static int tcp_v4_send_synack(struct sock *sk, struct dst_entry *dst,
 			      struct request_sock *req,
-			      struct request_values *rvp)
+			      struct request_values *rvp,
+			      u16 queue_mapping)
 {
 	const struct inet_request_sock *ireq = inet_rsk(req);
 	struct flowi4 fl4;
@@ -840,6 +841,7 @@ static int tcp_v4_send_synack(struct sock *sk, struct dst_entry *dst,

 	if (skb) {
 		__tcp_v4_send_check(skb, ireq->loc_addr, ireq->rmt_addr);

+		skb_set_queue_mapping(skb, queue_mapping);
 		err = ip_build_and_send_pkt(skb, sk, ireq->loc_addr,
 					    ireq->rmt_addr,
 					    ireq->opt);
@@ -854,7 +856,7 @@ static int tcp_v4_rtx_synack(struct sock *sk, struct request_sock *req,
 			     struct request_values *rvp)
 {
 	TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_RETRANSSEGS);
-	return tcp_v4_send_synack(sk, NULL, req, rvp);
+	return tcp_v4_send_synack(sk, NULL, req, rvp, 0);
 }

 /*
@@ -1422,7 +1424,8 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
 	tcp_rsk(req)->snt_synack = tcp_time_stamp;

 	if (tcp_v4_send_synack(sk, dst, req,
-			       (struct request_values *)&tmp_ext) ||
+			       (struct request_values *)&tmp_ext,
+			       skb_get_queue_mapping(skb)) ||
 	    want_cookie)
 		goto drop_and_free;