From patchwork Thu Jul 28 22:17:41 2016
From: Ryan Moats <rmoats@us.ibm.com>
To: dev@openvswitch.org
Date: Thu, 28 Jul 2016 22:17:41 +0000
Message-Id: <1469744261-30421-1-git-send-email-rmoats@us.ibm.com>
Subject: [ovs-dev] [PATCH v3] ovn-controller: Persist desired conntrack groups.
With incremental processing of logical flows, desired conntrack groups
are not being persisted.  This patch adds this capability, with the
side effect of adding a ds_clone method that this capability leverages.

Signed-off-by: Ryan Moats
Reported-by: Guru Shetty
Reported-at: http://openvswitch.org/pipermail/dev/2016-July/076320.html
Fixes: 70c7cfe ("ovn-controller: Add incremental processing to lflow_run and physical_run")
Acked-by: Flavio Fernandes
---
 include/openvswitch/dynamic-string.h |  1 +
 include/ovn/actions.h                |  6 +++++
 lib/dynamic-string.c                 |  9 ++++++++
 ovn/controller/lflow.c               |  2 ++
 ovn/controller/ofctrl.c              | 43 ++++++++++++++++++++++++------------
 ovn/controller/ofctrl.h              |  5 ++++-
 ovn/controller/ovn-controller.c      |  2 +-
 ovn/lib/actions.c                    |  1 +
 8 files changed, 53 insertions(+), 16 deletions(-)

diff --git a/include/openvswitch/dynamic-string.h b/include/openvswitch/dynamic-string.h
index dfe2688..bf1f64a 100644
--- a/include/openvswitch/dynamic-string.h
+++ b/include/openvswitch/dynamic-string.h
@@ -73,6 +73,7 @@ void ds_swap(struct ds *, struct ds *);
 int ds_last(const struct ds *);
 bool ds_chomp(struct ds *, int c);
+void ds_clone(struct ds *, struct ds *);
 
 /* Inline functions. */
diff --git a/include/ovn/actions.h b/include/ovn/actions.h
index 114c71e..55720ce 100644
--- a/include/ovn/actions.h
+++ b/include/ovn/actions.h
@@ -22,7 +22,9 @@
 #include "compiler.h"
 #include "openvswitch/hmap.h"
 #include "openvswitch/dynamic-string.h"
+#include "openvswitch/uuid.h"
 #include "util.h"
+#include "uuid.h"
 
 struct expr;
 struct lexer;
@@ -43,6 +45,7 @@ struct group_table {
 struct group_info {
     struct hmap_node hmap_node;
     struct ds group;
+    struct uuid lflow_uuid;
     uint32_t group_id;
 };
@@ -107,6 +110,9 @@ struct action_params {
     /* A struct to figure out the group_id for group actions. */
     struct group_table *group_table;
 
+    /* The logical flow uuid that drove this action. */
+    struct uuid lflow_uuid;
+
     /* OVN maps each logical flow table (ltable), one-to-one, onto a physical
      * OpenFlow flow table (ptable).  A number of parameters describe this
      * mapping and data related to flow tables:
diff --git a/lib/dynamic-string.c b/lib/dynamic-string.c
index 1f17a9f..6f7b610 100644
--- a/lib/dynamic-string.c
+++ b/lib/dynamic-string.c
@@ -456,3 +456,12 @@ ds_chomp(struct ds *ds, int c)
         return false;
     }
 }
+
+void
+ds_clone(struct ds *dst, struct ds *source)
+{
+    dst->length = source->length;
+    dst->allocated = dst->length;
+    dst->string = xmalloc(dst->allocated + 1);
+    memcpy(dst->string, source->string, dst->allocated + 1);
+}
diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
index a4f3322..e38c32a 100644
--- a/ovn/controller/lflow.c
+++ b/ovn/controller/lflow.c
@@ -383,6 +383,7 @@ add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports,
     if (full_flow_processing) {
         ovn_flow_table_clear();
+        ovn_group_table_clear(group_table, false);
         full_logical_flow_processing = true;
         full_neighbor_flow_processing = true;
         full_flow_processing = false;
@@ -515,6 +516,7 @@ consider_logical_flow(const struct lport_index *lports,
         .aux = &aux,
         .ct_zones = ct_zones,
         .group_table = group_table,
+        .lflow_uuid = lflow->header_.uuid,
         .n_tables = LOG_PIPELINE_LEN,
         .first_ptable = first_ptable,
diff --git a/ovn/controller/ofctrl.c b/ovn/controller/ofctrl.c
index dd9f5ec..54bea99 100644
--- a/ovn/controller/ofctrl.c
+++ b/ovn/controller/ofctrl.c
@@ -140,8 +140,6 @@ static ovs_be32 queue_msg(struct ofpbuf *);
 static void ovn_flow_table_destroy(void);
 static struct ofpbuf *encode_flow_mod(struct ofputil_flow_mod *);
-static void ovn_group_table_clear(struct group_table *group_table,
-                                  bool existing);
 static struct ofpbuf *encode_group_mod(const struct ofputil_group_mod *);
 static void ofctrl_recv(const struct ofp_header *, enum ofptype);
@@ -150,12 +148,15 @@ static struct hmap match_flow_table = HMAP_INITIALIZER(&match_flow_table);
 static struct hindex uuid_flow_table = HINDEX_INITIALIZER(&uuid_flow_table);
 
 void
-ofctrl_init(void)
+ofctrl_init(struct group_table *group_table)
 {
     swconn = rconn_create(5, 0, DSCP_DEFAULT, 1 << OFP13_VERSION);
     tx_counter = rconn_packet_counter_create();
     hmap_init(&installed_flows);
     ovs_list_init(&flow_updates);
+    if (!groups) {
+        groups = group_table;
+    }
 }
 
 /* S_NEW, for a new connection.
@@ -680,6 +681,16 @@ ofctrl_remove_flows(const struct uuid *uuid)
             ovn_flow_destroy(f);
         }
     }
+
+    /* Remove any group_info information created by this logical flow. */
+    struct group_info *g, *next_g;
+    HMAP_FOR_EACH_SAFE (g, next_g, hmap_node, &groups->desired_groups) {
+        if (uuid_equals(&g->lflow_uuid, uuid)) {
+            hmap_remove(&groups->desired_groups, &g->hmap_node);
+            ds_destroy(&g->group);
+            free(g);
+        }
+    }
 }
 
 /* Shortcut to remove all flows matching the supplied UUID and add this
@@ -833,6 +844,17 @@ add_flow_mod(struct ofputil_flow_mod *fm, struct ovs_list *msgs)
 
 /* group_table. */
+static struct group_info *
+group_info_clone(struct group_info *source)
+{
+    struct group_info *clone = xmalloc(sizeof *clone);
+    ds_clone(&clone->group, &source->group);
+    clone->group_id = source->group_id;
+    clone->lflow_uuid = source->lflow_uuid;
+    clone->hmap_node.hash = source->hmap_node.hash;
+    return clone;
+}
+
 /* Finds and returns a group_info in 'existing_groups' whose key is identical
  * to 'target''s key, or NULL if there is none. */
 static struct group_info *
@@ -851,7 +873,7 @@ ovn_group_lookup(struct hmap *exisiting_groups,
 }
 
 /* Clear either desired_groups or existing_groups in group_table. */
-static void
+void
 ovn_group_table_clear(struct group_table *group_table, bool existing)
 {
     struct group_info *g, *next;
@@ -894,10 +916,6 @@ add_group_mod(const struct ofputil_group_mod *gm, struct ovs_list *msgs)
 void
 ofctrl_put(struct group_table *group_table, int64_t nb_cfg)
 {
-    if (!groups) {
-        groups = group_table;
-    }
-
     /* The flow table can be updated if the connection to the switch is up and
      * in the correct state and not backlogged with existing flow_mods.  (Our
      * criteria for being backlogged appear very conservative, but the socket
@@ -1066,13 +1084,10 @@ ofctrl_put(struct group_table *group_table, int64_t nb_cfg)
     /* Move the contents of desired_groups to existing_groups. */
     HMAP_FOR_EACH_SAFE(desired, next_group, hmap_node,
                        &group_table->desired_groups) {
-        hmap_remove(&group_table->desired_groups, &desired->hmap_node);
         if (!ovn_group_lookup(&group_table->existing_groups, desired)) {
-            hmap_insert(&group_table->existing_groups, &desired->hmap_node,
-                        desired->hmap_node.hash);
-        } else {
-            ds_destroy(&desired->group);
-            free(desired);
+            struct group_info *clone = group_info_clone(desired);
+            hmap_insert(&group_table->existing_groups, &clone->hmap_node,
+                        clone->hmap_node.hash);
        }
     }
diff --git a/ovn/controller/ofctrl.h b/ovn/controller/ofctrl.h
index befae01..da4ebb4 100644
--- a/ovn/controller/ofctrl.h
+++ b/ovn/controller/ofctrl.h
@@ -30,7 +30,7 @@ struct ovsrec_bridge;
 struct group_table;
 
 /* Interface for OVN main loop. */
-void ofctrl_init(void);
+void ofctrl_init(struct group_table *group_table);
 enum mf_field_id ofctrl_run(const struct ovsrec_bridge *br_int);
 void ofctrl_put(struct group_table *group_table, int64_t nb_cfg);
 void ofctrl_wait(void);
@@ -54,4 +54,7 @@ void ofctrl_flow_table_clear(void);
 
 void ovn_flow_table_clear(void);
 
+void ovn_group_table_clear(struct group_table *group_table,
+                           bool existing);
+
 #endif /* ovn/ofctrl.h */
diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
index 5c74186..3cdfcdf 100644
--- a/ovn/controller/ovn-controller.c
+++ b/ovn/controller/ovn-controller.c
@@ -347,7 +347,7 @@ main(int argc, char *argv[])
     ovsrec_init();
     sbrec_init();
 
-    ofctrl_init();
+    ofctrl_init(&group_table);
     pinctrl_init();
     lflow_init();
diff --git a/ovn/lib/actions.c b/ovn/lib/actions.c
index fd5a867..aef5c75 100644
--- a/ovn/lib/actions.c
+++ b/ovn/lib/actions.c
@@ -761,6 +761,7 @@ parse_ct_lb_action(struct action_context *ctx)
         group_info = xmalloc(sizeof *group_info);
         group_info->group = ds;
         group_info->group_id = group_id;
+        group_info->lflow_uuid = ctx->ap->lflow_uuid;
         group_info->hmap_node.hash = hash;
         hmap_insert(&ctx->ap->group_table->desired_groups,