From patchwork Sat Jul 2 17:35:33 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Moats
X-Patchwork-Id: 643535
From: Ryan Moats
To: dev@openvswitch.org
Date: Sat, 2 Jul 2016 12:35:33 -0500
Message-Id: <1467480933-5637-8-git-send-email-rmoats@us.ibm.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1467480933-5637-1-git-send-email-rmoats@us.ibm.com>
References: <1467480933-5637-1-git-send-email-rmoats@us.ibm.com>
Subject: [ovs-dev] [PATCH v20 7/7] Add incremental processing to lflow_run and physical_run

This changes the code to allow incremental processing of the logical
flow and physical binding tables whenever possible.

Note: flows created by physical_run for multicast_groups are *NOT*
handled incrementally, due to as-yet-unsolved issues with gateways and
local routers.

Signed-off-by: Ryan Moats <rmoats@us.ibm.com>
---
 ovn/controller/binding.c        |  6 +++
 ovn/controller/encaps.c         |  5 +++
 ovn/controller/lflow.c          | 60 +++++++++++++++++++++++++----
 ovn/controller/lflow.h          |  1 +
 ovn/controller/lport.c          | 14 ++++++-
 ovn/controller/lport.h          |  4 +-
 ovn/controller/ovn-controller.c |  1 -
 ovn/controller/patch.c          |  8 ++++
 ovn/controller/physical.c       | 84 ++++++++++++++++++++++++++++++++++++----
 ovn/controller/physical.h      |  1 +
 10 files changed, 165 insertions(+), 19 deletions(-)

diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
index e10c1f0..ebeceac 100644
--- a/ovn/controller/binding.c
+++ b/ovn/controller/binding.c
@@ -15,6 +15,8 @@
 #include <config.h>
 #include "binding.h"
+#include "lflow.h"
+#include "lport.h"
 #include "lib/bitmap.h"
 #include "lib/hmap.h"
@@ -136,6 +138,7 @@ remove_local_datapath(struct hmap *local_datapaths, struct local_datapath *ld)
     hmap_remove(local_datapaths, &ld->hmap_node);
     hmap_remove(&local_datapaths_by_uuid, &ld->uuid_hmap_node);
     free(ld);
+    lflow_reset_processing();
 }
 
 static void
@@ -174,6 +177,9 @@ add_local_datapath(struct hmap *local_datapaths,
                    binding_rec->datapath->tunnel_key);
     hmap_insert(&local_datapaths_by_uuid, &ld->uuid_hmap_node,
                 uuid_hash(uuid));
+    lport_index_reset();
+    mcgroup_index_reset();
+    lflow_reset_processing();
 }
 
 static void
diff --git a/ovn/controller/encaps.c b/ovn/controller/encaps.c
index 6cf60ff..03ec732 100644
--- a/ovn/controller/encaps.c
+++ b/ovn/controller/encaps.c
@@ -17,6 +17,7 @@
 #include "encaps.h"
 #include "binding.h"
 #include "lflow.h"
+#include "lport.h"
 
 #include "lib/hash.h"
 #include "lib/sset.h"
@@ -237,6 +238,9 @@ tunnel_add(const struct sbrec_chassis *chassis_rec,
     free(port_name);
     free(ports);
     binding_reset_processing();
+    lport_index_reset();
+    mcgroup_index_reset();
+    lflow_reset_processing();
     process_full_encaps = true;
 }
@@ -421,6 +425,7 @@ encaps_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
                        &port_hash->uuid_node);
             free(port_hash);
             binding_reset_processing();
+            lflow_reset_processing();
         }
     } else if (sbrec_chassis_is_new(chassis_rec)) {
         check_and_add_tunnel(chassis_rec, chassis_id);
diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
index 25895fa..88b3205 100644
--- a/ovn/controller/lflow.c
+++ b/ovn/controller/lflow.c
@@ -27,6 +27,7 @@
 #include "ovn/lib/ovn-dhcp.h"
 #include "ovn/lib/ovn-sb-idl.h"
 #include "packets.h"
+#include "physical.h"
 #include "simap.h"
 
 VLOG_DEFINE_THIS_MODULE(lflow);
@@ -36,6 +37,17 @@ VLOG_DEFINE_THIS_MODULE(lflow);
 
 /* Contains "struct expr_symbol"s for fields supported by OVN lflows. */
 static struct shash symtab;
 
+static bool full_flow_processing = false;
+static bool full_logical_flow_processing = false;
+static bool full_neighbor_flow_processing = false;
+
+void
+lflow_reset_processing(void)
+{
+    full_flow_processing = true;
+    physical_reset_processing();
+}
+
 static void
 add_logical_register(struct shash *symtab, enum mf_field_id id)
 {
@@ -213,6 +225,7 @@ add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports,
                   const struct simap *ct_zones)
 {
     uint32_t conj_id_ofs = 1;
+    const struct sbrec_logical_flow *lflow;
 
     struct hmap dhcp_opts = HMAP_INITIALIZER(&dhcp_opts);
     const struct sbrec_dhcp_options *dhcp_opt_row;
@@ -221,11 +234,24 @@ add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports,
                       dhcp_opt_row->type);
     }
 
-    const struct sbrec_logical_flow *lflow;
-    SBREC_LOGICAL_FLOW_FOR_EACH (lflow, ctx->ovnsb_idl) {
-        consider_logical_flow(lports, mcgroups, lflow, local_datapaths,
-                              patched_datapaths, ct_zones,
-                              &dhcp_opts, &conj_id_ofs, true);
+    if (full_logical_flow_processing) {
+        SBREC_LOGICAL_FLOW_FOR_EACH (lflow, ctx->ovnsb_idl) {
+            consider_logical_flow(lports, mcgroups, lflow, local_datapaths,
+                                  patched_datapaths, ct_zones,
+                                  &dhcp_opts, &conj_id_ofs, true);
+        }
+        full_logical_flow_processing = false;
+    } else {
+        SBREC_LOGICAL_FLOW_FOR_EACH_TRACKED (lflow, ctx->ovnsb_idl) {
+            if (sbrec_logical_flow_is_deleted(lflow)) {
+                ofctrl_remove_flows(&lflow->header_.uuid);
+            } else {
+                consider_logical_flow(lports, mcgroups, lflow, local_datapaths,
+                                      patched_datapaths, ct_zones,
+                                      &dhcp_opts, &conj_id_ofs,
+                                      sbrec_logical_flow_is_new(lflow));
+            }
+        }
     }
 
     dhcp_opts_destroy(&dhcp_opts);
@@ -460,9 +486,22 @@ add_neighbor_flows(struct controller_ctx *ctx,
     ofpbuf_init(&ofpacts, 0);
 
     const struct sbrec_mac_binding *b;
-    SBREC_MAC_BINDING_FOR_EACH (b, ctx->ovnsb_idl) {
-        consider_neighbor_flow(lports, b, &ofpacts, &match, true);
+    if (full_neighbor_flow_processing) {
+        SBREC_MAC_BINDING_FOR_EACH (b, ctx->ovnsb_idl) {
+            consider_neighbor_flow(lports, b, &ofpacts, &match, true);
+        }
+        full_neighbor_flow_processing = false;
+    } else {
+        SBREC_MAC_BINDING_FOR_EACH_TRACKED (b, ctx->ovnsb_idl) {
+            if (sbrec_mac_binding_is_deleted(b)) {
+                ofctrl_remove_flows(&b->header_.uuid);
+            } else {
+                consider_neighbor_flow(lports, b, &ofpacts, &match,
+                                       sbrec_mac_binding_is_new(b));
+            }
+        }
     }
+
     ofpbuf_uninit(&ofpacts);
 }
@@ -475,6 +514,13 @@ lflow_run(struct controller_ctx *ctx, const struct lport_index *lports,
           const struct hmap *patched_datapaths,
           const struct simap *ct_zones)
 {
+    if (full_flow_processing) {
+        ovn_flow_table_clear();
+        full_logical_flow_processing = true;
+        full_neighbor_flow_processing = true;
+        full_flow_processing = false;
+        physical_reset_processing();
+    }
     add_logical_flows(ctx, lports, mcgroups, local_datapaths,
                       patched_datapaths, ct_zones);
     add_neighbor_flows(ctx, lports);
diff --git a/ovn/controller/lflow.h b/ovn/controller/lflow.h
index 8f8f81a..abbb10a 100644
--- a/ovn/controller/lflow.h
+++ b/ovn/controller/lflow.h
@@ -65,5 +65,6 @@ void lflow_run(struct controller_ctx *, const struct lport_index *,
                const struct hmap *patched_datapaths,
                const struct simap *ct_zones);
 void lflow_destroy(void);
+void lflow_reset_processing(void);
 
 #endif /* ovn/lflow.h */
diff --git a/ovn/controller/lport.c b/ovn/controller/lport.c
index 080b27f..5d8d0d0 100644
--- a/ovn/controller/lport.c
+++ b/ovn/controller/lport.c
@@ -48,7 +48,7 @@ lport_index_init(struct lport_index *lports)
     hmap_init(&lports->by_uuid);
 }
 
-void
+bool
 lport_index_remove(struct lport_index *lports, const struct uuid *uuid)
 {
     const struct lport *port_ = lport_lookup_by_uuid(lports, uuid);
@@ -58,7 +58,9 @@ lport_index_remove(struct lport_index *lports, const struct uuid *uuid)
         hmap_remove(&lports->by_key, &port->key_node);
         hmap_remove(&lports->by_uuid, &port->uuid_node);
         free(port);
+        return true;
     }
+    return false;
 }
 
 void
@@ -74,6 +76,7 @@ lport_index_clear(struct lport_index *lports)
         hmap_remove(&lports->by_uuid, &port->uuid_node);
         free(port);
     }
+    lflow_reset_processing();
 }
 
 static void
@@ -93,6 +96,7 @@ consider_lport_index(struct lport_index *lports,
                 uuid_hash(&pb->header_.uuid));
     memcpy(&p->uuid, &pb->header_.uuid, sizeof p->uuid);
     p->pb = pb;
+    lflow_reset_processing();
 }
 
 void
@@ -108,7 +112,10 @@ lport_index_fill(struct lport_index *lports, struct ovsdb_idl *ovnsb_idl)
     } else {
         SBREC_PORT_BINDING_FOR_EACH_TRACKED (pb, ovnsb_idl) {
             if (sbrec_port_binding_is_deleted(pb)) {
-                lport_index_remove(lports, &pb->header_.uuid);
+                while (lport_index_remove(lports, &pb->header_.uuid)) {
+                    ;
+                }
+                lflow_reset_processing();
             } else {
                 consider_lport_index(lports, pb);
             }
@@ -202,6 +209,7 @@ mcgroup_index_remove(struct mcgroup_index *mcgroups, const struct uuid *uuid)
         hmap_remove(&mcgroups->by_uuid, &mcgroup->uuid_node);
         free(mcgroup);
     }
+    lflow_reset_processing();
 }
 
 void
@@ -231,6 +239,7 @@ consider_mcgroup_index(struct mcgroup_index *mcgroups,
                 uuid_hash(&mg->header_.uuid));
     memcpy(&m->uuid, &mg->header_.uuid, sizeof m->uuid);
     m->mg = mg;
+    lflow_reset_processing();
 }
 
 void
@@ -247,6 +256,7 @@ mcgroup_index_fill(struct mcgroup_index *mcgroups, struct ovsdb_idl *ovnsb_idl)
         SBREC_MULTICAST_GROUP_FOR_EACH_TRACKED (mg, ovnsb_idl) {
             if (sbrec_multicast_group_is_deleted(mg)) {
                 mcgroup_index_remove(mcgroups, &mg->header_.uuid);
+                lflow_reset_processing();
             } else {
                 consider_mcgroup_index(mcgroups, mg);
             }
diff --git a/ovn/controller/lport.h b/ovn/controller/lport.h
index 33f81d5..bb8d8cd 100644
--- a/ovn/controller/lport.h
+++ b/ovn/controller/lport.h
@@ -39,9 +39,10 @@ struct lport_index {
 void lport_index_reset(void);
 void lport_index_init(struct lport_index *);
 void lport_index_fill(struct lport_index *, struct ovsdb_idl *);
-void lport_index_remove(struct lport_index *, const struct uuid *);
+bool lport_index_remove(struct lport_index *, const struct uuid *);
 void lport_index_clear(struct lport_index *);
 void lport_index_destroy(struct lport_index *);
+void lport_index_rebuild(void);
 
 const struct sbrec_port_binding *lport_lookup_by_name(
     const struct lport_index *, const char *name);
@@ -73,6 +74,7 @@ void mcgroup_index_fill(struct mcgroup_index *, struct ovsdb_idl *);
 void mcgroup_index_remove(struct mcgroup_index *, const struct uuid *);
 void mcgroup_index_clear(struct mcgroup_index *);
 void mcgroup_index_destroy(struct mcgroup_index *);
+void mcgroup_index_rebuild(void);
 
 const struct sbrec_multicast_group *mcgroup_lookup_by_dp_name(
     const struct mcgroup_index *,
diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
index 6a8ff9b..c8e8146 100644
--- a/ovn/controller/ovn-controller.c
+++ b/ovn/controller/ovn-controller.c
@@ -433,7 +433,6 @@ main(int argc, char *argv[])
             update_ct_zones(&all_lports, &patched_datapaths, &ct_zones,
                             ct_zone_bitmap);
 
-            ovn_flow_table_clear();
             lflow_run(&ctx, &lports, &mcgroups, &local_datapaths,
                       &patched_datapaths, &ct_zones);
 
             if (chassis_id) {
diff --git a/ovn/controller/patch.c b/ovn/controller/patch.c
index 589529e..8218796 100644
--- a/ovn/controller/patch.c
+++ b/ovn/controller/patch.c
@@ -18,8 +18,10 @@
 #include "patch.h"
 
 #include "hash.h"
+#include "lflow.h"
 #include "lib/hmap.h"
 #include "lib/vswitch-idl.h"
+#include "lport.h"
 #include "openvswitch/vlog.h"
 #include "ovn-controller.h"
@@ -93,6 +95,9 @@ create_patch_port(struct controller_ctx *ctx,
     ovsrec_bridge_verify_ports(src);
     ovsrec_bridge_set_ports(src, ports, src->n_ports + 1);
+    lport_index_reset();
+    mcgroup_index_reset();
+    lflow_reset_processing();
 
     free(ports);
 }
@@ -125,6 +130,9 @@ remove_port(struct controller_ctx *ctx,
             return;
         }
     }
+    lport_index_reset();
+    mcgroup_index_reset();
+    lflow_reset_processing();
 }
 
 /* Obtains external-ids:ovn-bridge-mappings from OVSDB and adds patch ports for
diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
index 7b567d1..cf5c867 100644
--- a/ovn/controller/physical.c
+++ b/ovn/controller/physical.c
@@ -58,6 +58,14 @@ static struct simap localvif_to_ofport =
     SIMAP_INITIALIZER(&localvif_to_ofport);
 static struct hmap tunnels = HMAP_INITIALIZER(&tunnels);
 
+static bool full_binding_processing = false;
+
+void
+physical_reset_processing(void)
+{
+    full_binding_processing = true;
+}
+
 /* Maps from a chassis to the OpenFlow port number of the tunnel that can be
  * used to reach that chassis. */
 struct chassis_tunnel {
@@ -628,15 +636,51 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
             bool is_patch = !strcmp(iface_rec->type, "patch");
             if (is_patch && localnet) {
                 /* localnet patch ports can be handled just like VIFs. */
-                simap_put(&localvif_to_ofport, localnet, ofport);
+                if (simap_find(&localvif_to_ofport, localnet)) {
+                    unsigned int old_port = simap_get(&localvif_to_ofport,
+                                                      localnet);
+                    if (old_port != ofport) {
+                        simap_put(&localvif_to_ofport, localnet, ofport);
+                        full_binding_processing = true;
+                        lflow_reset_processing();
+                    }
+                } else {
+                    simap_put(&localvif_to_ofport, localnet, ofport);
+                    full_binding_processing = true;
+                    lflow_reset_processing();
+                }
                 break;
             } else if (is_patch && l2gateway) {
                 /* L2 gateway patch ports can be handled just like VIFs. */
-                simap_put(&localvif_to_ofport, l2gateway, ofport);
+                if (simap_find(&localvif_to_ofport, l2gateway)) {
+                    unsigned int old_port = simap_get(&localvif_to_ofport,
+                                                      l2gateway);
+                    if (old_port != ofport) {
+                        simap_put(&localvif_to_ofport, l2gateway, ofport);
+                        full_binding_processing = true;
+                        lflow_reset_processing();
+                    }
+                } else {
+                    simap_put(&localvif_to_ofport, l2gateway, ofport);
+                    full_binding_processing = true;
+                    lflow_reset_processing();
+                }
                 break;
             } else if (is_patch && logpatch) {
                 /* Logical patch ports can be handled just like VIFs. */
-                simap_put(&localvif_to_ofport, logpatch, ofport);
+                if (simap_find(&localvif_to_ofport, logpatch)) {
+                    unsigned int old_port = simap_get(&localvif_to_ofport,
+                                                      logpatch);
+                    if (old_port != ofport) {
+                        simap_put(&localvif_to_ofport, logpatch, ofport);
+                        full_binding_processing = true;
+                        lflow_reset_processing();
+                    }
+                } else {
+                    simap_put(&localvif_to_ofport, logpatch, ofport);
+                    full_binding_processing = true;
+                    lflow_reset_processing();
+                }
                 break;
             } else if (chassis_id) {
                 enum chassis_tunnel_type tunnel_type;
@@ -664,7 +708,19 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
                 const char *iface_id = smap_get(&iface_rec->external_ids,
                                                 "iface-id");
                 if (iface_id) {
-                    simap_put(&localvif_to_ofport, iface_id, ofport);
+                    if (simap_find(&localvif_to_ofport, iface_id)) {
+                        unsigned int old_port = simap_get(&localvif_to_ofport,
+                                                          iface_id);
+                        if (old_port != ofport) {
+                            simap_put(&localvif_to_ofport, iface_id, ofport);
+                            full_binding_processing = true;
+                            lflow_reset_processing();
+                        }
+                    } else {
+                        simap_put(&localvif_to_ofport, iface_id, ofport);
+                        full_binding_processing = true;
+                        lflow_reset_processing();
+                    }
                 }
             }
         }
@@ -676,9 +732,22 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
     /* Set up flows in table 0 for physical-to-logical translation and in table
      * 64 for logical-to-physical translation. */
     const struct sbrec_port_binding *binding;
-    SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
-        consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
-                              patched_datapaths, binding, &ofpacts, true);
+    if (full_binding_processing) {
+        SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
+            consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
+                                  patched_datapaths, binding, &ofpacts, true);
+        }
+        full_binding_processing = false;
+    } else {
+        SBREC_PORT_BINDING_FOR_EACH_TRACKED (binding, ctx->ovnsb_idl) {
+            if (sbrec_port_binding_is_deleted(binding)) {
+                ofctrl_remove_flows(&binding->header_.uuid);
+            } else {
+                consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
+                                      patched_datapaths, binding, &ofpacts,
+                                      sbrec_port_binding_is_new(binding));
+            }
+        }
     }
 
     /* Handle output to multicast groups, in tables 32 and 33. */
@@ -795,7 +864,6 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
                   hc_uuid, true);
 
     ofpbuf_uninit(&ofpacts);
-    simap_clear(&localvif_to_ofport);
     HMAP_FOR_EACH_POP (tun, hmap_node, &tunnels) {
         free(tun);
     }
diff --git a/ovn/controller/physical.h b/ovn/controller/physical.h
index 1f98f71..92680dc 100644
--- a/ovn/controller/physical.h
+++ b/ovn/controller/physical.h
@@ -45,5 +45,6 @@ void physical_run(struct controller_ctx *, enum mf_field_id mff_ovn_geneve,
                   const struct ovsrec_bridge *br_int, const char *chassis_id,
                   const struct simap *ct_zones, struct hmap *local_datapaths,
                   struct hmap *patched_datapaths);
+void physical_reset_processing(void);
 
 #endif /* ovn/physical.h */