From patchwork Wed Oct 30 03:26:57 2013
X-Patchwork-Submitter: Ying Xue
X-Patchwork-Id: 287125
X-Patchwork-Delegate: davem@davemloft.net
From: Ying Xue
Subject: [PATCH net-next v2] tipc: remove two indentation levels in tipc_recv_msg routine
Date: Wed, 30 Oct 2013 11:26:57 +0800
Message-ID: <1383103617-28813-1-git-send-email-ying.xue@windriver.com>
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: netdev@vger.kernel.org

The message dispatching part of tipc_recv_msg() is wrapped in layers of
while/if/if/switch, causing out-of-control indentation that makes the code
hard to read. We reduce the indentation by two levels by separating the
message dispatching from the blocks that check link state and sequence
numbers, allowing longer function and argument names to be consistently
indented without wrapping. Additionally, we rename the "cont" label to
"discard" and add a new label called "unlock_discard" to make the code
clearer. In all, these are cosmetic changes that do not alter the operation
of TIPC in any way.
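As an aside, the structural change described above boils down to a common
early-exit pattern: invert the "everything is fine" check, bail out through
shared cleanup labels, and keep the normal dispatch path at a single
indentation level. The sketch below is illustrative only and is not part of
the patch; every identifier in it (rx_one, rx_buf, rx_node, node_find,
node_lock, node_unlock, link_ok, deliver, drop) is hypothetical.

/* Minimal sketch of the unlock_discard/discard label pattern
 * (all names hypothetical, for illustration only).
 */
#include <stdbool.h>

struct rx_node;
struct rx_buf;

extern struct rx_node *node_find(struct rx_buf *buf);
extern void node_lock(struct rx_node *n);
extern void node_unlock(struct rx_node *n);
extern bool link_ok(struct rx_node *n);
extern void deliver(struct rx_buf *buf);
extern void drop(struct rx_buf *buf);

static void rx_one(struct rx_buf *buf)
{
	struct rx_node *n = node_find(buf);

	if (!n)
		goto discard;		/* nothing locked yet, just drop */
	node_lock(n);

	if (!link_ok(n))
		goto unlock_discard;	/* early exit keeps the main path flat */

	/* Normal dispatch path stays at one indentation level */
	node_unlock(n);
	deliver(buf);
	return;

unlock_discard:
	node_unlock(n);
discard:
	drop(buf);
}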
Signed-off-by: Ying Xue
Reviewed-by: Erik Hugne
Cc: David Laight
Cc: Andreas Bofjäll
---
v2: Incorporated comments from David Laight and Andreas Bofjäll

 net/tipc/link.c | 173 +++++++++++++++++++++++++++----------------------------
 1 file changed, 84 insertions(+), 89 deletions(-)

diff --git a/net/tipc/link.c b/net/tipc/link.c
index e8153f6..54163f9 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -1507,15 +1507,15 @@ void tipc_recv_msg(struct sk_buff *head, struct tipc_bearer *b_ptr)
 
 		/* Ensure bearer is still enabled */
 		if (unlikely(!b_ptr->active))
-			goto cont;
+			goto discard;
 
 		/* Ensure message is well-formed */
 		if (unlikely(!link_recv_buf_validate(buf)))
-			goto cont;
+			goto discard;
 
 		/* Ensure message data is a single contiguous unit */
 		if (unlikely(skb_linearize(buf)))
-			goto cont;
+			goto discard;
 
 		/* Handle arrival of a non-unicast link message */
 		msg = buf_msg(buf);
@@ -1531,20 +1531,18 @@ void tipc_recv_msg(struct sk_buff *head, struct tipc_bearer *b_ptr)
 		/* Discard unicast link messages destined for another node */
 		if (unlikely(!msg_short(msg) &&
 			     (msg_destnode(msg) != tipc_own_addr)))
-			goto cont;
+			goto discard;
 
 		/* Locate neighboring node that sent message */
 		n_ptr = tipc_node_find(msg_prevnode(msg));
 		if (unlikely(!n_ptr))
-			goto cont;
+			goto discard;
 		tipc_node_lock(n_ptr);
 
 		/* Locate unicast link endpoint that should handle message */
 		l_ptr = n_ptr->links[b_ptr->identity];
-		if (unlikely(!l_ptr)) {
-			tipc_node_unlock(n_ptr);
-			goto cont;
-		}
+		if (unlikely(!l_ptr))
+			goto unlock_discard;
 
 		/* Verify that communication with node is currently allowed */
 		if ((n_ptr->block_setup & WAIT_PEER_DOWN) &&
@@ -1554,10 +1552,8 @@ void tipc_recv_msg(struct sk_buff *head, struct tipc_bearer *b_ptr)
 		    !msg_redundant_link(msg))
 			n_ptr->block_setup &= ~WAIT_PEER_DOWN;
 
-		if (n_ptr->block_setup) {
-			tipc_node_unlock(n_ptr);
-			goto cont;
-		}
+		if (n_ptr->block_setup)
+			goto unlock_discard;
 
 		/* Validate message sequence number info */
 		seq_no = msg_seqno(msg);
@@ -1593,98 +1589,97 @@ void tipc_recv_msg(struct sk_buff *head, struct tipc_bearer *b_ptr)
 
 		/* Now (finally!) process the incoming message */
 protocol_check:
-		if (likely(link_working_working(l_ptr))) {
-			if (likely(seq_no == mod(l_ptr->next_in_no))) {
-				l_ptr->next_in_no++;
-				if (unlikely(l_ptr->oldest_deferred_in))
-					head = link_insert_deferred_queue(l_ptr,
-									  head);
-deliver:
-				if (likely(msg_isdata(msg))) {
-					tipc_node_unlock(n_ptr);
-					tipc_port_recv_msg(buf);
-					continue;
-				}
-				switch (msg_user(msg)) {
-					int ret;
-				case MSG_BUNDLER:
-					l_ptr->stats.recv_bundles++;
-					l_ptr->stats.recv_bundled +=
-						msg_msgcnt(msg);
-					tipc_node_unlock(n_ptr);
-					tipc_link_recv_bundle(buf);
-					continue;
-				case NAME_DISTRIBUTOR:
-					n_ptr->bclink.recv_permitted = true;
-					tipc_node_unlock(n_ptr);
-					tipc_named_recv(buf);
-					continue;
-				case BCAST_PROTOCOL:
-					tipc_link_recv_sync(n_ptr, buf);
-					tipc_node_unlock(n_ptr);
-					continue;
-				case CONN_MANAGER:
-					tipc_node_unlock(n_ptr);
-					tipc_port_recv_proto_msg(buf);
-					continue;
-				case MSG_FRAGMENTER:
-					l_ptr->stats.recv_fragments++;
-					ret = tipc_link_recv_fragment(
-						&l_ptr->defragm_buf,
-						&buf, &msg);
-					if (ret == 1) {
-						l_ptr->stats.recv_fragmented++;
-						goto deliver;
-					}
-					if (ret == -1)
-						l_ptr->next_in_no--;
-					break;
-				case CHANGEOVER_PROTOCOL:
-					type = msg_type(msg);
-					if (link_recv_changeover_msg(&l_ptr,
-								     &buf)) {
-						msg = buf_msg(buf);
-						seq_no = msg_seqno(msg);
-						if (type == ORIGINAL_MSG)
-							goto deliver;
-						goto protocol_check;
-					}
-					break;
-				default:
-					kfree_skb(buf);
-					buf = NULL;
-					break;
-				}
+		if (unlikely(!link_working_working(l_ptr))) {
+			if (msg_user(msg) == LINK_PROTOCOL) {
+				link_recv_proto_msg(l_ptr, buf);
+				head = link_insert_deferred_queue(l_ptr, head);
+				tipc_node_unlock(n_ptr);
+				continue;
+			}
+
+			/* Traffic message. Conditionally activate link */
+			link_state_event(l_ptr, TRAFFIC_MSG_EVT);
+
+			if (link_working_working(l_ptr)) {
+				/* Re-insert buffer in front of queue */
+				buf->next = head;
+				head = buf;
 				tipc_node_unlock(n_ptr);
-				tipc_net_route_msg(buf);
 				continue;
 			}
+			goto unlock_discard;
+		}
+
+		/* Link is now in state WORKING_WORKING */
+		if (unlikely(seq_no != mod(l_ptr->next_in_no))) {
 			link_handle_out_of_seq_msg(l_ptr, buf);
 			head = link_insert_deferred_queue(l_ptr, head);
 			tipc_node_unlock(n_ptr);
 			continue;
 		}
-
-		/* Link is not in state WORKING_WORKING */
-		if (msg_user(msg) == LINK_PROTOCOL) {
-			link_recv_proto_msg(l_ptr, buf);
+		l_ptr->next_in_no++;
+		if (unlikely(l_ptr->oldest_deferred_in))
 			head = link_insert_deferred_queue(l_ptr, head);
+deliver:
+		if (likely(msg_isdata(msg))) {
 			tipc_node_unlock(n_ptr);
+			tipc_port_recv_msg(buf);
 			continue;
 		}
-
-		/* Traffic message. Conditionally activate link */
-		link_state_event(l_ptr, TRAFFIC_MSG_EVT);
-
-		if (link_working_working(l_ptr)) {
-			/* Re-insert buffer in front of queue */
-			buf->next = head;
-			head = buf;
+		switch (msg_user(msg)) {
+			int ret;
+		case MSG_BUNDLER:
+			l_ptr->stats.recv_bundles++;
+			l_ptr->stats.recv_bundled += msg_msgcnt(msg);
+			tipc_node_unlock(n_ptr);
+			tipc_link_recv_bundle(buf);
+			continue;
+		case NAME_DISTRIBUTOR:
+			n_ptr->bclink.recv_permitted = true;
+			tipc_node_unlock(n_ptr);
+			tipc_named_recv(buf);
+			continue;
+		case BCAST_PROTOCOL:
+			tipc_link_recv_sync(n_ptr, buf);
 			tipc_node_unlock(n_ptr);
 			continue;
+		case CONN_MANAGER:
+			tipc_node_unlock(n_ptr);
+			tipc_port_recv_proto_msg(buf);
+			continue;
+		case MSG_FRAGMENTER:
+			l_ptr->stats.recv_fragments++;
+			ret = tipc_link_recv_fragment(&l_ptr->defragm_buf,
+						      &buf, &msg);
+			if (ret == 1) {
+				l_ptr->stats.recv_fragmented++;
+				goto deliver;
+			}
+			if (ret == -1)
+				l_ptr->next_in_no--;
+			break;
+		case CHANGEOVER_PROTOCOL:
+			type = msg_type(msg);
+			if (link_recv_changeover_msg(&l_ptr, &buf)) {
+				msg = buf_msg(buf);
+				seq_no = msg_seqno(msg);
+				if (type == ORIGINAL_MSG)
+					goto deliver;
+				goto protocol_check;
+			}
+			break;
+		default:
+			kfree_skb(buf);
+			buf = NULL;
+			break;
 		}
 		tipc_node_unlock(n_ptr);
-cont:
+		tipc_net_route_msg(buf);
+		continue;
+unlock_discard:
+
+		tipc_node_unlock(n_ptr);
+discard:
 		kfree_skb(buf);
 	}
 	read_unlock_bh(&tipc_net_lock);