From patchwork Thu Oct 22 12:51:37 2015
X-Patchwork-Submitter: Jon Maloy
X-Patchwork-Id: 534402
X-Patchwork-Delegate: davem@davemloft.net
From: Jon Maloy
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, Paul Gortmaker,
	parthasarathy.xx.bhuvaragan@ericsson.com, richard.alpe@ericsson.com,
	ying.xue@windriver.com, maloy@donjonn.com,
	tipc-discussion@lists.sourceforge.net, Jon Maloy
Subject: [PATCH net-next 05/16] tipc: use explicit allocation of broadcast
	send link
Date: Thu, 22 Oct 2015 08:51:37 -0400
Message-Id: <1445518308-26200-6-git-send-email-jon.maloy@ericsson.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1445518308-26200-1-git-send-email-jon.maloy@ericsson.com>
References: <1445518308-26200-1-git-send-email-jon.maloy@ericsson.com>
X-Mailing-List: netdev@vger.kernel.org

The broadcast link instance (struct tipc_link) used for sending is
currently aggregated into struct tipc_bclink. This means that we cannot
use the regular tipc_link_create() function to initialize the link, but
instead have to initialize numerous fields directly from the
bcast_init() function.

We want to reduce the dependencies between the broadcast functionality
and the inner workings of tipc_link. In this commit, we introduce a new
function tipc_link_bc_create() in link.c, and allocate the link instance
separately using this function.

Signed-off-by: Jon Maloy
Reviewed-by: Ying Xue
---
 net/tipc/bcast.c | 89 +++++++++++++++++++++++++++++---------------------------
 net/tipc/bcast.h |  2 --
 net/tipc/link.c  | 29 ++++++++++++++++++
 net/tipc/link.h  |  4 +++
 4 files changed, 79 insertions(+), 45 deletions(-)

diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
index c6f0d1d..3b7bd21 100644
--- a/net/tipc/bcast.c
+++ b/net/tipc/bcast.c
@@ -98,10 +98,11 @@ struct tipc_bcbearer {
  * Handles sequence numbering, fragmentation, bundling, etc.
  */
 struct tipc_bc_base {
-	struct tipc_link link;
+	struct tipc_link *link;
 	struct tipc_node node;
 	struct sk_buff_head arrvq;
 	struct sk_buff_head inputq;
+	struct sk_buff_head namedq;
 	struct tipc_node_map bcast_nodes;
 	struct tipc_node *retransmit_to;
 };
@@ -180,7 +181,7 @@ void tipc_bclink_remove_node(struct net *net, u32 addr)
 
 	/* Last node? => reset backlog queue */
 	if (!tn->bcbase->bcast_nodes.count)
-		tipc_link_purge_backlog(&tn->bcbase->link);
+		tipc_link_purge_backlog(tn->bcbase->link);
 
 	tipc_bclink_unlock(net);
 }
@@ -1010,55 +1011,56 @@ int tipc_nl_bc_link_set(struct net *net, struct nlattr *attrs[])
 
 int tipc_bcast_init(struct net *net)
 {
-	struct tipc_net *tn = net_generic(net, tipc_net_id);
-	struct tipc_bcbearer *bcbearer;
-	struct tipc_bc_base *bclink;
-	struct tipc_link *bcl;
-
-	bcbearer = kzalloc(sizeof(*bcbearer), GFP_ATOMIC);
-	if (!bcbearer)
-		return -ENOMEM;
-
-	bclink = kzalloc(sizeof(*bclink), GFP_ATOMIC);
-	if (!bclink) {
-		kfree(bcbearer);
-		return -ENOMEM;
-	}
-
-	bcl = &bclink->link;
-	bcbearer->bearer.media = &bcbearer->media;
-	bcbearer->media.send_msg = tipc_bcbearer_send;
-	sprintf(bcbearer->media.name, "tipc-broadcast");
-
+	struct tipc_net *tn = tipc_net(net);
+	struct tipc_bcbearer *bcb = NULL;
+	struct tipc_bc_base *bb = NULL;
+	struct tipc_link *l = NULL;
+
+	bcb = kzalloc(sizeof(*bcb), GFP_ATOMIC);
+	if (!bcb)
+		goto enomem;
+	tn->bcbearer = bcb;
+
+	bcb->bearer.window = BCLINK_WIN_DEFAULT;
+	bcb->bearer.mtu = MAX_PKT_DEFAULT_MCAST;
+	bcb->bearer.identity = MAX_BEARERS;
+
+	bcb->bearer.media = &bcb->media;
+	bcb->media.send_msg = tipc_bcbearer_send;
+	sprintf(bcb->media.name, "tipc-broadcast");
+	strcpy(bcb->bearer.name, bcb->media.name);
+
+	bb = kzalloc(sizeof(*bb), GFP_ATOMIC);
+	if (!bb)
+		goto enomem;
+	tn->bcbase = bb;
+	__skb_queue_head_init(&bb->arrvq);
 	spin_lock_init(&tipc_net(net)->bclock);
-	__skb_queue_head_init(&bcl->transmq);
-	__skb_queue_head_init(&bcl->backlogq);
-	__skb_queue_head_init(&bcl->deferdq);
-	skb_queue_head_init(&bcl->wakeupq);
-	bcl->snd_nxt = 1;
-	spin_lock_init(&bclink->node.lock);
-	__skb_queue_head_init(&bclink->arrvq);
-	skb_queue_head_init(&bclink->inputq);
-	bcl->owner = &bclink->node;
-	bcl->owner->net = net;
-	bcl->mtu = MAX_PKT_DEFAULT_MCAST;
-	tipc_link_set_queue_limits(bcl, BCLINK_WIN_DEFAULT);
-	bcl->bearer_id = MAX_BEARERS;
-	rcu_assign_pointer(tn->bearer_list[MAX_BEARERS], &bcbearer->bearer);
-	bcl->pmsg = (struct tipc_msg *)&bcl->proto_msg;
-
-	strlcpy(bcl->name, tipc_bclink_name, TIPC_MAX_LINK_NAME);
-	tn->bcbearer = bcbearer;
-	tn->bcbase = bclink;
-	tn->bcl = bcl;
+	bb->node.net = net;
+
+	if (!tipc_link_bc_create(&bb->node,
+				 MAX_PKT_DEFAULT_MCAST,
+				 BCLINK_WIN_DEFAULT,
+				 &bb->inputq,
+				 &bb->namedq,
+				 &l))
+		goto enomem;
+	bb->link = l;
+	tn->bcl = l;
+	rcu_assign_pointer(tn->bearer_list[MAX_BEARERS], &bcb->bearer);
 	return 0;
+enomem:
+	kfree(bcb);
+	kfree(bb);
+	kfree(l);
+	return -ENOMEM;
 }
 
 void tipc_bcast_reinit(struct net *net)
 {
 	struct tipc_bc_base *b = tipc_bc_base(net);
 
-	msg_set_prevnode(b->link.pmsg, tipc_own_addr(net));
+	msg_set_prevnode(b->link->pmsg, tipc_own_addr(net));
 }
 
 void tipc_bcast_stop(struct net *net)
@@ -1072,6 +1074,7 @@ void tipc_bcast_stop(struct net *net)
 	synchronize_net();
 	kfree(tn->bcbearer);
 	kfree(tn->bcbase);
+	kfree(tn->bcl);
 }
 
 /**
diff --git a/net/tipc/bcast.h b/net/tipc/bcast.h
index 041935d..a378fdd 100644
--- a/net/tipc/bcast.h
+++ b/net/tipc/bcast.h
@@ -44,8 +44,6 @@ struct tipc_msg;
 struct tipc_nl_msg;
 struct tipc_node_map;
 
-extern const char tipc_bclink_name[];
-
 int tipc_bcast_init(struct net *net);
 void tipc_bcast_reinit(struct net *net);
 void tipc_bcast_stop(struct net *net);
diff --git a/net/tipc/link.c b/net/tipc/link.c
index 0d8fdc8..f0cf768 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -50,6 +50,7 @@
  */
 static const char *link_co_err = "Link tunneling error, ";
 static const char *link_rst_msg = "Resetting link ";
+static const char tipc_bclink_name[] = "broadcast-link";
 
 static const struct nla_policy tipc_nl_link_policy[TIPC_NLA_LINK_MAX + 1] = {
 	[TIPC_NLA_LINK_UNSPEC]		= { .type = NLA_UNSPEC },
@@ -231,6 +232,34 @@ bool tipc_link_create(struct tipc_node *n, char *if_name, int bearer_id,
 	return true;
 }
 
+/**
+ * tipc_link_bc_create - create new link to be used for broadcast
+ * @n: pointer to associated node
+ * @mtu: mtu to be used
+ * @window: send window to be used
+ * @inputq: queue to put messages ready for delivery
+ * @namedq: queue to put binding table update messages ready for delivery
+ * @link: return value, pointer to put the created link
+ *
+ * Returns true if link was created, otherwise false
+ */
+bool tipc_link_bc_create(struct tipc_node *n, int mtu, int window,
+			 struct sk_buff_head *inputq,
+			 struct sk_buff_head *namedq,
+			 struct tipc_link **link)
+{
+	struct tipc_link *l;
+
+	if (!tipc_link_create(n, "", MAX_BEARERS, 0, 'Z', mtu, 0, window,
+			      0, 0, 0, NULL, inputq, namedq, link))
+		return false;
+
+	l = *link;
+	strcpy(l->name, tipc_bclink_name);
+	tipc_link_reset(l);
+	return true;
+}
+
 /* tipc_link_build_bcast_sync_msg() - synchronize broadcast link endpoints.
  *
  * Give a newly added peer node the sequence number where it should
diff --git a/net/tipc/link.h b/net/tipc/link.h
index 06bf66d..9e4e367 100644
--- a/net/tipc/link.h
+++ b/net/tipc/link.h
@@ -211,6 +211,10 @@ bool tipc_link_create(struct tipc_node *n, char *if_name, int bearer_id,
 		      struct tipc_media_addr *maddr,
 		      struct sk_buff_head *inputq,
 		      struct sk_buff_head *namedq,
 		      struct tipc_link **link);
+bool tipc_link_bc_create(struct tipc_node *n, int mtu, int window,
+			 struct sk_buff_head *inputq,
+			 struct sk_buff_head *namedq,
+			 struct tipc_link **link);
 void tipc_link_tnl_prepare(struct tipc_link *l, struct tipc_link *tnl,
 			   int mtyp, struct sk_buff_head *xmitq);
 void tipc_link_build_bcast_sync_msg(struct tipc_link *l,