From patchwork Fri Jun  1 10:01:21 2012
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 162296
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH] tcp: reflect SYN queue_mapping into SYNACK packets
From: Eric Dumazet
To: Hans Schillstrom
Cc: netdev, Neal Cardwell, Jesper Dangaard Brouer, Tom Herbert
Date: Fri, 01 Jun 2012 12:01:21 +0200
Message-ID: <1338544881.2760.1502.camel@edumazet-glaptop>
X-Mailing-List: netdev@vger.kernel.org

From: Eric Dumazet

While testing how Linux behaves under a SYN flood attack on a multiqueue
device (ixgbe), I found that SYNACK messages were dropped at the Qdisc
level because we send them all on a single queue.

The obvious choice is to reflect the incoming SYN packet's @queue_mapping
into the SYNACK packet.

Under stress, my machine could only send 25,000 SYNACK per second (for
200,000 incoming SYN per second). NIC: ixgbe with 16 rx/tx queues.

After this patch, not a single SYNACK is dropped.

Signed-off-by: Eric Dumazet
Cc: Hans Schillstrom
Cc: Jesper Dangaard Brouer
Cc: Neal Cardwell
Cc: Tom Herbert
---
 net/ipv4/tcp_ipv4.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index a43b87d..c8d28c4 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -824,7 +824,8 @@ static void tcp_v4_reqsk_send_ack(struct sock *sk, struct sk_buff *skb,
  */
 static int tcp_v4_send_synack(struct sock *sk, struct dst_entry *dst,
 			      struct request_sock *req,
-			      struct request_values *rvp)
+			      struct request_values *rvp,
+			      u16 queue_mapping)
 {
 	const struct inet_request_sock *ireq = inet_rsk(req);
 	struct flowi4 fl4;
@@ -840,6 +841,7 @@ static int tcp_v4_send_synack(struct sock *sk, struct dst_entry *dst,
 	if (skb) {
 		__tcp_v4_send_check(skb, ireq->loc_addr, ireq->rmt_addr);
 
+		skb_set_queue_mapping(skb, queue_mapping);
 		err = ip_build_and_send_pkt(skb, sk, ireq->loc_addr,
 					    ireq->rmt_addr,
 					    ireq->opt);
@@ -854,7 +856,7 @@ static int tcp_v4_rtx_synack(struct sock *sk, struct request_sock *req,
 			     struct request_values *rvp)
 {
 	TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_RETRANSSEGS);
-	return tcp_v4_send_synack(sk, NULL, req, rvp);
+	return tcp_v4_send_synack(sk, NULL, req, rvp, 0);
 }
 
 /*
@@ -1422,7 +1424,8 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
 	tcp_rsk(req)->snt_synack = tcp_time_stamp;
 
 	if (tcp_v4_send_synack(sk, dst, req,
-			       (struct request_values *)&tmp_ext) ||
+			       (struct request_values *)&tmp_ext,
+			       skb_get_queue_mapping(skb)) ||
 	    want_cookie)
 		goto drop_and_free;