From patchwork Fri May 9 13:13:22 2014
X-Patchwork-Submitter: Jon Maloy
X-Patchwork-Id: 347411
X-Patchwork-Delegate: davem@davemloft.net
From: Jon Maloy <jon.maloy@ericsson.com>
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, Paul Gortmaker, erik.hugne@ericsson.com,
 ying.xue@windriver.com, maloy@donjonn.com,
 tipc-discussion@lists.sourceforge.net, Jon Maloy
Subject: [PATCH net-next 1/8] tipc: decrease connection flow control window
Date: Fri, 9 May 2014 09:13:22 -0400
Message-Id: <1399641209-26112-2-git-send-email-jon.maloy@ericsson.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1399641209-26112-1-git-send-email-jon.maloy@ericsson.com>
References: <1399641209-26112-1-git-send-email-jon.maloy@ericsson.com>
X-Mailing-List: netdev@vger.kernel.org

Memory overhead when allocating big buffers for data transfer may be
quite significant: the truesize of a 64 KB buffer turns out to be
132 KB, roughly twice the requested size.

This invalidates the "worst case" calculation we have been using to
determine the default socket receive buffer limit, which is based on
the assumption that at most 1024 x 64 KB = 67 MB of buffers may be
queued up on a socket.

Since TIPC connections cannot survive hitting the buffer limit, we
have to compensate for this overhead. We do that in this commit by
halving the fixed connection flow control window, from 1024 (2*512)
messages to 512 (2*256).
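For illustration only (not part of the patch), a minimal userspace
sketch of the arithmetic, assuming the observed ~132 KB truesize per
64 KB message quoted above; the window values mirror the old and new
defines:

#include <stdio.h>

int main(void)
{
	/* Observed truesize of a 64 KB data message, roughly 2x the payload */
	const unsigned long truesize = 132 * 1024;

	/* Old window: TIPC_FLOW_CONTROL_WIN * 2 = 1024 messages in flight */
	const unsigned long old_win = 2 * 512;

	/* New window: TIPC_FLOWCTRL_WIN = TIPC_CONNACK_INTV * 2 = 512 messages */
	const unsigned long new_win = 2 * 256;

	printf("old worst case: %lu MB\n", old_win * truesize >> 20);
	printf("new worst case: %lu MB\n", new_win * truesize >> 20);
	return 0;
}

With these numbers the old window allows about 132 MB to be queued up
in the worst case, while the halved window brings this back to about
66 MB, in line with the 67 MB the old calculation assumed.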
Since older version nodes send out acks at 512 message intervals,
compatibility with such nodes is guaranteed, although performance may
be non-optimal in such cases.

Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Reviewed-by: Ying Xue <ying.xue@windriver.com>
---
 net/tipc/core.c   | 7 ++++---
 net/tipc/port.h   | 9 +++++----
 net/tipc/socket.c | 4 ++--
 3 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/net/tipc/core.c b/net/tipc/core.c
index 57f8ae9..676d180 100644
--- a/net/tipc/core.c
+++ b/net/tipc/core.c
@@ -154,10 +154,11 @@ static int __init tipc_init(void)
 	tipc_max_ports = CONFIG_TIPC_PORTS;
 	tipc_net_id = 4711;
 
-	sysctl_tipc_rmem[0] = CONN_OVERLOAD_LIMIT >> 4 << TIPC_LOW_IMPORTANCE;
-	sysctl_tipc_rmem[1] = CONN_OVERLOAD_LIMIT >> 4 <<
+	sysctl_tipc_rmem[0] = TIPC_CONN_OVERLOAD_LIMIT >> 4 <<
+			      TIPC_LOW_IMPORTANCE;
+	sysctl_tipc_rmem[1] = TIPC_CONN_OVERLOAD_LIMIT >> 4 <<
 			      TIPC_CRITICAL_IMPORTANCE;
-	sysctl_tipc_rmem[2] = CONN_OVERLOAD_LIMIT;
+	sysctl_tipc_rmem[2] = TIPC_CONN_OVERLOAD_LIMIT;
 
 	res = tipc_core_start();
 	if (res)
diff --git a/net/tipc/port.h b/net/tipc/port.h
index a003973..5dfd165 100644
--- a/net/tipc/port.h
+++ b/net/tipc/port.h
@@ -42,9 +42,10 @@
 #include "msg.h"
 #include "node_subscr.h"
 
-#define TIPC_FLOW_CONTROL_WIN 512
-#define CONN_OVERLOAD_LIMIT	((TIPC_FLOW_CONTROL_WIN * 2 + 1) * \
-				SKB_TRUESIZE(TIPC_MAX_USER_MSG_SIZE))
+#define TIPC_CONNACK_INTV	256
+#define TIPC_FLOWCTRL_WIN	(TIPC_CONNACK_INTV * 2)
+#define TIPC_CONN_OVERLOAD_LIMIT	((TIPC_FLOWCTRL_WIN * 2 + 1) * \
+				SKB_TRUESIZE(TIPC_MAX_USER_MSG_SIZE))
 
 /**
  * struct tipc_port - TIPC port structure
@@ -187,7 +188,7 @@ static inline void tipc_port_unlock(struct tipc_port *p_ptr)
 
 static inline int tipc_port_congested(struct tipc_port *p_ptr)
 {
-	return (p_ptr->sent - p_ptr->acked) >= (TIPC_FLOW_CONTROL_WIN * 2);
+	return ((p_ptr->sent - p_ptr->acked) >= TIPC_FLOWCTRL_WIN);
 }
 
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index 3f9912f..8685daf 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1101,7 +1101,7 @@ restart:
 	/* Consume received message (optional) */
 	if (likely(!(flags & MSG_PEEK))) {
 		if ((sock->state != SS_READY) &&
-		    (++port->conn_unacked >= TIPC_FLOW_CONTROL_WIN))
+		    (++port->conn_unacked >= TIPC_CONNACK_INTV))
 			tipc_acknowledge(port->ref, port->conn_unacked);
 		advance_rx_queue(sk);
 	}
@@ -1210,7 +1210,7 @@ restart:
 
 	/* Consume received message (optional) */
 	if (likely(!(flags & MSG_PEEK))) {
-		if (unlikely(++port->conn_unacked >= TIPC_FLOW_CONTROL_WIN))
+		if (unlikely(++port->conn_unacked >= TIPC_CONNACK_INTV))
 			tipc_acknowledge(port->ref, port->conn_unacked);
 		advance_rx_queue(sk);
 	}
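For a rough feel for the resulting defaults (again not part of the
patch), a small userspace sketch of the new overload limit and the
sysctl_tipc_rmem values set in tipc_init(). It assumes
TIPC_MAX_USER_MSG_SIZE = 66000, importance levels 0 (low) and
3 (critical), and an approximate, config-dependent per-skb overhead
standing in for SKB_TRUESIZE():

#include <stdio.h>

/* Mirrors the new defines from net/tipc/port.h */
#define TIPC_CONNACK_INTV	256
#define TIPC_FLOWCTRL_WIN	(TIPC_CONNACK_INTV * 2)

/* The skb/struct overhead added by SKB_TRUESIZE() is config dependent;
 * the 600 bytes used here is only an assumption for illustration. */
#define MAX_MSG_SIZE		66000UL
#define APPROX_SKB_OVERHEAD	600UL
#define APPROX_TRUESIZE		(MAX_MSG_SIZE + APPROX_SKB_OVERHEAD)

#define APPROX_OVERLOAD_LIMIT	((TIPC_FLOWCTRL_WIN * 2 + 1) * APPROX_TRUESIZE)

int main(void)
{
	/* Same shifts as in tipc_init(): TIPC_LOW_IMPORTANCE = 0,
	 * TIPC_CRITICAL_IMPORTANCE = 3 */
	unsigned long rmem0 = APPROX_OVERLOAD_LIMIT >> 4 << 0;
	unsigned long rmem1 = APPROX_OVERLOAD_LIMIT >> 4 << 3;
	unsigned long rmem2 = APPROX_OVERLOAD_LIMIT;

	printf("sysctl_tipc_rmem ~= %lu %lu %lu bytes\n", rmem0, rmem1, rmem2);
	return 0;
}

The exact figures depend on the kernel's skb struct sizes; the point is
only that the defaults scale from roughly 1/16 of the overload limit
for low importance, through half of it for critical importance, up to
the full limit.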