From patchwork Wed Mar 6 22:58:01 2013
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 225674
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <1362610681.15793.205.camel@edumazet-glaptop>
Subject: [PATCH net-next] tcp: uninline tcp_prequeue()
From: Eric Dumazet
To: David Miller
Cc: netdev
Date: Wed, 06 Mar 2013 14:58:01 -0800
X-Mailing-List: netdev@vger.kernel.org

From: Eric Dumazet

tcp_prequeue() became too big to be inlined.

Signed-off-by: Eric Dumazet
---
 include/net/tcp.h   | 45 ---------------------------------------------
 net/ipv4/tcp_ipv4.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+), 44 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index cf0694d..a2baa5e 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1030,50 +1030,7 @@ static inline void tcp_prequeue_init(struct tcp_sock *tp)
 #endif
 }
 
-/* Packet is added to VJ-style prequeue for processing in process
- * context, if a reader task is waiting. Apparently, this exciting
- * idea (VJ's mail "Re: query about TCP header on tcp-ip" of 07 Sep 93)
- * failed somewhere. Latency? Burstiness? Well, at least now we will
- * see, why it failed. 8)8) --ANK
- *
- * NOTE: is this not too big to inline?
- */
-static inline bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
-{
-	struct tcp_sock *tp = tcp_sk(sk);
-
-	if (sysctl_tcp_low_latency || !tp->ucopy.task)
-		return false;
-
-	if (skb->len <= tcp_hdrlen(skb) &&
-	    skb_queue_len(&tp->ucopy.prequeue) == 0)
-		return false;
-
-	__skb_queue_tail(&tp->ucopy.prequeue, skb);
-	tp->ucopy.memory += skb->truesize;
-	if (tp->ucopy.memory > sk->sk_rcvbuf) {
-		struct sk_buff *skb1;
-
-		BUG_ON(sock_owned_by_user(sk));
-
-		while ((skb1 = __skb_dequeue(&tp->ucopy.prequeue)) != NULL) {
-			sk_backlog_rcv(sk, skb1);
-			NET_INC_STATS_BH(sock_net(sk),
-					 LINUX_MIB_TCPPREQUEUEDROPPED);
-		}
-
-		tp->ucopy.memory = 0;
-	} else if (skb_queue_len(&tp->ucopy.prequeue) == 1) {
-		wake_up_interruptible_sync_poll(sk_sleep(sk),
-					   POLLIN | POLLRDNORM | POLLRDBAND);
-		if (!inet_csk_ack_scheduled(sk))
-			inet_csk_reset_xmit_timer(sk, ICSK_TIME_DACK,
-						  (3 * tcp_rto_min(sk)) / 4,
-						  TCP_RTO_MAX);
-	}
-	return true;
-}
-
+extern bool tcp_prequeue(struct sock *sk, struct sk_buff *skb);
 
 #undef STATE_TRACE
 
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 4a8ec45..8cdee12 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1950,6 +1950,50 @@ void tcp_v4_early_demux(struct sk_buff *skb)
 	}
 }
 
+/* Packet is added to VJ-style prequeue for processing in process
+ * context, if a reader task is waiting. Apparently, this exciting
+ * idea (VJ's mail "Re: query about TCP header on tcp-ip" of 07 Sep 93)
+ * failed somewhere. Latency? Burstiness? Well, at least now we will
+ * see, why it failed. 8)8) --ANK
+ *
+ */
+bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	if (sysctl_tcp_low_latency || !tp->ucopy.task)
+		return false;
+
+	if (skb->len <= tcp_hdrlen(skb) &&
+	    skb_queue_len(&tp->ucopy.prequeue) == 0)
+		return false;
+
+	__skb_queue_tail(&tp->ucopy.prequeue, skb);
+	tp->ucopy.memory += skb->truesize;
+	if (tp->ucopy.memory > sk->sk_rcvbuf) {
+		struct sk_buff *skb1;
+
+		BUG_ON(sock_owned_by_user(sk));
+
+		while ((skb1 = __skb_dequeue(&tp->ucopy.prequeue)) != NULL) {
+			sk_backlog_rcv(sk, skb1);
+			NET_INC_STATS_BH(sock_net(sk),
+					 LINUX_MIB_TCPPREQUEUEDROPPED);
+		}
+
+		tp->ucopy.memory = 0;
+	} else if (skb_queue_len(&tp->ucopy.prequeue) == 1) {
+		wake_up_interruptible_sync_poll(sk_sleep(sk),
+					   POLLIN | POLLRDNORM | POLLRDBAND);
+		if (!inet_csk_ack_scheduled(sk))
+			inet_csk_reset_xmit_timer(sk, ICSK_TIME_DACK,
+						  (3 * tcp_rto_min(sk)) / 4,
+						  TCP_RTO_MAX);
+	}
+	return true;
+}
+EXPORT_SYMBOL(tcp_prequeue);
+
 /*
  *	From tcp_input.c
  */
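
For context on the mechanics (not part of the patch itself): "uninlining" here means the static inline body leaves include/net/tcp.h, the header keeps only an extern declaration, and the single out-of-line definition moves into net/ipv4/tcp_ipv4.c, where EXPORT_SYMBOL(tcp_prequeue) keeps the symbol reachable from callers that can be built as modules, such as the IPv6 TCP receive path. Below is a minimal sketch of that general pattern, using hypothetical names (widget.h, widget.c, widget_process) rather than the kernel code:

/* widget.h -- before the change, the whole body would sit here as
 * "static inline", so every file including the header carried its
 * own copy; after uninlining only the declaration remains. */
#ifndef WIDGET_H
#define WIDGET_H
#include <stdbool.h>

extern bool widget_process(int value);

#endif

/* widget.c -- the single out-of-line definition. In kernel code this
 * would be followed by EXPORT_SYMBOL(widget_process); so modular
 * callers can still link against it. */
#include "widget.h"

bool widget_process(int value)
{
	/* the body that previously lived in the header */
	return value > 0;
}

The trade-off is one extra function call on the receive path in exchange for a single copy of a body that had grown well beyond a sensible inlining size.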