From patchwork Thu Aug 20 08:39:12 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Numan Siddique
X-Patchwork-Id: 1348239
From: numans@ovn.org
To: dev@openvswitch.org
Date: Thu, 20 Aug 2020 14:09:12 +0530
Message-Id: <20200820083912.3240721-1-numans@ovn.org>
X-Mailer: git-send-email 2.26.2
Subject: [ovs-dev] [PATCH ovn] ovn-controller: Persist the conjunction ids
 allocated for conjunctive matches.

From: Numan Siddique

For a logical flow which results in conjunctive OF matches, we are not
persisting the conjunction ids allocated for it.  This has a few side
effects:
  - When a port group or address set is modified, the logical flows which
    reference these port groups/address sets are reprocessed, and the
    resulting Open vSwitch flows with conjunctive matches are modified in
    vswitchd if the conjunction ids change.

  - Because of this, there is a small probability of a packet getting
    dropped while the OF flows are updated with different conjunction ids.

This patch fixes the issue by persisting the conjunction ids.  Earlier,
logical flow caching support was added to ovn-controller [1] and later
reverted [2] due to some issues.  This patch takes the same lflow caching
approach to persist the conjunction ids, but it only creates cache entries
for logical flows which result in conjunctive matches, and it does not
cache the expr tree.  An upcoming patch series will attempt to add back
the expr caching, addressing those issues.

[1] 8795bec737b ("ovn-controller: Cache logical flow expr tree for each lflow.")
[2] 065bcf46218 ("ovn-controller: Revert lflow expr caching")

Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1858878
Signed-off-by: Numan Siddique
---
 controller/lflow.c          | 114 ++++++++++++++++++++++++++++++++++--
 controller/lflow.h          |   7 ++-
 controller/ovn-controller.c |  54 ++++++++++++++---
 tests/ovn.at                | 112 +++++++++++++++++++++++++++++++++++
 4 files changed, 271 insertions(+), 16 deletions(-)

diff --git a/controller/lflow.c b/controller/lflow.c
index 151561210a..519e6b2bce 100644
--- a/controller/lflow.c
+++ b/controller/lflow.c
@@ -269,8 +269,72 @@ lflow_resource_destroy_lflow(struct lflow_resource_ref *lfrr,
     free(lfrn);
 }
 
-/* Adds the logical flows from the Logical_Flow table to flow tables. */
+/* Represents an lflow cache which
+ *   - stores the conjunction id offset if the lflow's matches
+ *     result in conjunctive Open vSwitch flows.
+ */
+struct lflow_cache {
+    struct hmap_node node;
+    struct uuid lflow_uuid; /* key */
+    uint32_t conj_id_ofs;
+};
+
+static struct lflow_cache *
+lflow_cache_add(struct hmap *lflow_cache_map,
+                const struct sbrec_logical_flow *lflow)
+{
+    struct lflow_cache *lc = xmalloc(sizeof *lc);
+    lc->lflow_uuid = lflow->header_.uuid;
+    lc->conj_id_ofs = 0;
+    hmap_insert(lflow_cache_map, &lc->node, uuid_hash(&lc->lflow_uuid));
+    return lc;
+}
+
+static struct lflow_cache *
+lflow_cache_get(struct hmap *lflow_cache_map,
+                const struct sbrec_logical_flow *lflow)
+{
+    struct lflow_cache *lc;
+    size_t hash = uuid_hash(&lflow->header_.uuid);
+    HMAP_FOR_EACH_WITH_HASH (lc, node, hash, lflow_cache_map) {
+        if (uuid_equals(&lc->lflow_uuid, &lflow->header_.uuid)) {
+            return lc;
+        }
+    }
+
+    return NULL;
+}
+
 static void
+lflow_cache_delete(struct hmap *lflow_cache_map,
+                   const struct sbrec_logical_flow *lflow)
+{
+    struct lflow_cache *lc = lflow_cache_get(lflow_cache_map, lflow);
+    if (lc) {
+        hmap_remove(lflow_cache_map, &lc->node);
+        free(lc);
+    }
+}
+
+void
+lflow_cache_init(struct hmap *lflow_cache_map)
+{
+    hmap_init(lflow_cache_map);
+}
+
+void
+lflow_cache_destroy(struct hmap *lflow_cache_map)
+{
+    struct lflow_cache *lc;
+    HMAP_FOR_EACH_POP (lc, node, lflow_cache_map) {
+        free(lc);
+    }
+
+    hmap_destroy(lflow_cache_map);
+}
+
+/* Adds the logical flows from the Logical_Flow table to flow tables.
+ */
+static bool
 add_logical_flows(struct lflow_ctx_in *l_ctx_in,
                   struct lflow_ctx_out *l_ctx_out)
 {
@@ -299,6 +363,7 @@ add_logical_flows(struct lflow_ctx_in *l_ctx_in,
     struct controller_event_options controller_event_opts;
     controller_event_opts_init(&controller_event_opts);
 
+    bool success = true;
     SBREC_LOGICAL_FLOW_TABLE_FOR_EACH (lflow, l_ctx_in->logical_flow_table) {
         if (!consider_logical_flow(lflow, &dhcp_opts, &dhcpv6_opts,
                                    &nd_ra_opts, &controller_event_opts,
@@ -306,6 +371,7 @@ add_logical_flows(struct lflow_ctx_in *l_ctx_in,
             static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5);
             VLOG_ERR_RL(&rl, "Conjunction id overflow when processing lflow "
                         UUID_FMT, UUID_ARGS(&lflow->header_.uuid));
+            success = false;
         }
     }
 
@@ -313,6 +379,7 @@ add_logical_flows(struct lflow_ctx_in *l_ctx_in,
     dhcp_opts_destroy(&dhcpv6_opts);
     nd_ra_opts_destroy(&nd_ra_opts);
     controller_event_opts_destroy(&controller_event_opts);
+    return success;
 }
 
 bool
@@ -355,6 +422,7 @@ lflow_handle_changed_flows(struct lflow_ctx_in *l_ctx_in,
 
             /* Delete entries from lflow resource reference. */
             lflow_resource_destroy_lflow(l_ctx_out->lfrr, &lflow->header_.uuid);
+            lflow_cache_delete(l_ctx_out->lflow_cache_map, lflow);
         }
     }
 
@@ -652,13 +720,33 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow,
     ovnacts_free(ovnacts.data, ovnacts.size);
     ofpbuf_uninit(&ovnacts);
 
+    uint32_t conj_id_ofs = *l_ctx_out->conj_id_ofs;
+    if (n_conjs) {
+        struct lflow_cache *lc =
+            lflow_cache_get(l_ctx_out->lflow_cache_map, lflow);
+        if (!lc) {
+            lc = lflow_cache_add(l_ctx_out->lflow_cache_map, lflow);
+        }
+
+        if (!lc->conj_id_ofs) {
+            lc->conj_id_ofs = *l_ctx_out->conj_id_ofs;
+            if (!update_conj_id_ofs(l_ctx_out->conj_id_ofs, n_conjs)) {
+                lc->conj_id_ofs = 0;
+                expr_matches_destroy(&matches);
+                return false;
+            }
+        }
+
+        conj_id_ofs = lc->conj_id_ofs;
+    }
+
     /* Prepare the OpenFlow matches for adding to the flow table. */
     struct expr_match *m;
     HMAP_FOR_EACH (m, hmap_node, &matches) {
         match_set_metadata(&m->match,
                            htonll(lflow->logical_datapath->tunnel_key));
         if (m->match.wc.masks.conj_id) {
-            m->match.flow.conj_id += *l_ctx_out->conj_id_ofs;
+            m->match.flow.conj_id += conj_id_ofs;
         }
         if (datapath_is_switch(ldp)) {
             unsigned int reg_index
@@ -693,7 +781,7 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow,
                 struct ofpact_conjunction *dst;
                 dst = ofpact_put_CONJUNCTION(&conj);
-                dst->id = src->id + *l_ctx_out->conj_id_ofs;
+                dst->id = src->id + conj_id_ofs;
                 dst->clause = src->clause;
                 dst->n_clauses = src->n_clauses;
             }
@@ -708,7 +796,7 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow,
     /* Clean up. */
     expr_matches_destroy(&matches);
     ofpbuf_uninit(&ofpacts);
-    return update_conj_id_ofs(l_ctx_out->conj_id_ofs, n_conjs);
+    return true;
 }
 
 static void
@@ -853,15 +941,29 @@ lflow_handle_changed_neighbors(
 
 /* Translates logical flows in the Logical_Flow table in the OVN_SB database
  * into OpenFlow flows.  See ovn-architecture(7) for more information. */
-void
+bool
 lflow_run(struct lflow_ctx_in *l_ctx_in, struct lflow_ctx_out *l_ctx_out)
 {
     COVERAGE_INC(lflow_run);
 
-    add_logical_flows(l_ctx_in, l_ctx_out);
+    /* When lflow_run() is called, it is possible that some of the logical
+     * flows have been deleted.  We need to delete the lflow cache entries
+     * for these lflows (if present); otherwise, they will not be deleted
+     * at all.
+     */
+    const struct sbrec_logical_flow *lflow;
+    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_TRACKED (lflow,
+                                               l_ctx_in->logical_flow_table) {
+        if (sbrec_logical_flow_is_deleted(lflow)) {
+            lflow_cache_delete(l_ctx_out->lflow_cache_map, lflow);
+        }
+    }
+
+    if (!add_logical_flows(l_ctx_in, l_ctx_out)) {
+        return false;
+    }
 
     add_neighbor_flows(l_ctx_in->sbrec_port_binding_by_name,
                        l_ctx_in->mac_binding_table, l_ctx_in->local_datapaths,
                        l_ctx_out->flow_table);
+    return true;
 }
 
 void
diff --git a/controller/lflow.h b/controller/lflow.h
index ae02eaf5e8..0c1b4fb9dc 100644
--- a/controller/lflow.h
+++ b/controller/lflow.h
@@ -141,12 +141,12 @@ struct lflow_ctx_out {
     struct ovn_extend_table *group_table;
     struct ovn_extend_table *meter_table;
     struct lflow_resource_ref *lfrr;
-    struct hmap *lflow_expr_cache;
+    struct hmap *lflow_cache_map;
     uint32_t *conj_id_ofs;
 };
 
 void lflow_init(void);
-void lflow_run(struct lflow_ctx_in *, struct lflow_ctx_out *);
+bool lflow_run(struct lflow_ctx_in *, struct lflow_ctx_out *);
 bool lflow_handle_changed_flows(struct lflow_ctx_in *, struct lflow_ctx_out *);
 bool lflow_handle_changed_ref(enum ref_type, const char *ref_name,
                               struct lflow_ctx_in *, struct lflow_ctx_out *,
@@ -159,7 +159,8 @@ void lflow_handle_changed_neighbors(
 
 void lflow_destroy(void);
 
-void lflow_expr_destroy(struct hmap *lflow_expr_cache);
+void lflow_cache_init(struct hmap *);
+void lflow_cache_destroy(struct hmap *);
 
 bool lflow_add_flows_for_datapath(const struct sbrec_datapath_binding *,
                                   struct lflow_ctx_in *,
diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c
index ea6a436c01..98ef5d056a 100644
--- a/controller/ovn-controller.c
+++ b/controller/ovn-controller.c
@@ -73,6 +73,7 @@ static unixctl_cb_func extend_table_list;
 static unixctl_cb_func inject_pkt;
 static unixctl_cb_func engine_recompute_cmd;
 static unixctl_cb_func cluster_state_reset_cmd;
+static unixctl_cb_func flush_lflow_cache;
 
 #define DEFAULT_BRIDGE_NAME "br-int"
 #define DEFAULT_PROBE_INTERVAL_MSEC 5000
@@ -1539,6 +1540,11 @@ physical_flow_changes_ovs_iface_handler(struct engine_node *node, void *data)
     return true;
 }
 
+struct flow_output_persistent_data {
+    uint32_t conj_id_ofs;
+    struct hmap lflow_cache_map;
+};
+
 struct ed_type_flow_output {
     /* desired flows */
     struct ovn_desired_flow_table flow_table;
@@ -1546,10 +1552,12 @@ struct ed_type_flow_output {
     struct ovn_extend_table group_table;
     /* meter ids for QoS */
     struct ovn_extend_table meter_table;
-    /* conjunction id offset */
-    uint32_t conj_id_ofs;
     /* lflow resource cross reference */
     struct lflow_resource_ref lflow_resource_ref;
+
+    /* Data which is persistent and not cleared during
+     * full recompute.
+     */
+    struct flow_output_persistent_data pd;
 };
 
 static void init_physical_ctx(struct engine_node *node,
@@ -1700,7 +1708,8 @@ static void init_lflow_ctx(struct engine_node *node,
     l_ctx_out->group_table = &fo->group_table;
     l_ctx_out->meter_table = &fo->meter_table;
     l_ctx_out->lfrr = &fo->lflow_resource_ref;
-    l_ctx_out->conj_id_ofs = &fo->conj_id_ofs;
+    l_ctx_out->conj_id_ofs = &fo->pd.conj_id_ofs;
+    l_ctx_out->lflow_cache_map = &fo->pd.lflow_cache_map;
 }
 
 static void *
@@ -1712,8 +1721,9 @@ en_flow_output_init(struct engine_node *node OVS_UNUSED,
     ovn_desired_flow_table_init(&data->flow_table);
     ovn_extend_table_init(&data->group_table);
     ovn_extend_table_init(&data->meter_table);
-    data->conj_id_ofs = 1;
+    data->pd.conj_id_ofs = 1;
     lflow_resource_init(&data->lflow_resource_ref);
+    lflow_cache_init(&data->pd.lflow_cache_map);
     return data;
 }
 
@@ -1725,6 +1735,7 @@ en_flow_output_cleanup(void *data)
     ovn_extend_table_destroy(&flow_output_data->group_table);
     ovn_extend_table_destroy(&flow_output_data->meter_table);
     lflow_resource_destroy(&flow_output_data->lflow_resource_ref);
+    lflow_cache_destroy(&flow_output_data->pd.lflow_cache_map);
 }
 
 static void
@@ -1758,7 +1769,6 @@ en_flow_output_run(struct engine_node *node, void *data)
     struct ovn_desired_flow_table *flow_table = &fo->flow_table;
     struct ovn_extend_table *group_table = &fo->group_table;
     struct ovn_extend_table *meter_table = &fo->meter_table;
-    uint32_t *conj_id_ofs = &fo->conj_id_ofs;
     struct lflow_resource_ref *lfrr = &fo->lflow_resource_ref;
 
     static bool first_run = true;
@@ -1771,11 +1781,25 @@ en_flow_output_run(struct engine_node *node, void *data)
         lflow_resource_clear(lfrr);
     }
 
-    *conj_id_ofs = 1;
     struct lflow_ctx_in l_ctx_in;
     struct lflow_ctx_out l_ctx_out;
     init_lflow_ctx(node, rt_data, fo, &l_ctx_in, &l_ctx_out);
-    lflow_run(&l_ctx_in, &l_ctx_out);
+    if (!lflow_run(&l_ctx_in, &l_ctx_out)) {
+        /* lflow_run() failed because of a conjunction id overflow.
+         * There can be many holes in the conjunction id space by now.
+         * Destroy the lflow cache and call lflow_run() again.
+         */
+        ovn_desired_flow_table_clear(flow_table);
+        ovn_extend_table_clear(group_table, false /* desired */);
+        ovn_extend_table_clear(meter_table, false /* desired */);
+        lflow_resource_clear(lfrr);
+        fo->pd.conj_id_ofs = 1;
+        lflow_cache_destroy(&fo->pd.lflow_cache_map);
+        lflow_cache_init(&fo->pd.lflow_cache_map);
+        if (!lflow_run(&l_ctx_in, &l_ctx_out)) {
+            VLOG_WARN("Flow translation failed due to conjunction id "
+                      "overflow.");
+        }
+    }
 
     struct physical_ctx p_ctx;
     init_physical_ctx(node, rt_data, &p_ctx);
@@ -2336,6 +2360,8 @@ main(int argc, char *argv[])
     unixctl_command_register("recompute", "", 0, 0, engine_recompute_cmd,
                              NULL);
+    unixctl_command_register("flush-lflow-cache", "", 0, 0, flush_lflow_cache,
+                             &flow_output_data->pd);
 
     bool reset_ovnsb_idl_min_index = false;
     unixctl_command_register("sb-cluster-state-reset", "", 0, 0,
@@ -2847,6 +2873,20 @@ engine_recompute_cmd(struct unixctl_conn *conn OVS_UNUSED, int argc OVS_UNUSED,
     unixctl_command_reply(conn, NULL);
 }
 
+static void
+flush_lflow_cache(struct unixctl_conn *conn OVS_UNUSED, int argc OVS_UNUSED,
+                  const char *argv[] OVS_UNUSED, void *arg_)
+{
+    VLOG_INFO("User triggered lflow cache flush.");
+    struct flow_output_persistent_data *fo_pd = arg_;
+    lflow_cache_destroy(&fo_pd->lflow_cache_map);
+    lflow_cache_init(&fo_pd->lflow_cache_map);
+    fo_pd->conj_id_ofs = 1;
+    engine_set_force_recompute(true);
+    poll_immediate_wake();
+    unixctl_command_reply(conn, NULL);
+}
+
 static void
 cluster_state_reset_cmd(struct unixctl_conn *conn, int argc OVS_UNUSED,
                         const char *argv[] OVS_UNUSED, void *idl_reset_)
diff --git a/tests/ovn.at b/tests/ovn.at
index 8aabdf307a..a34acf11b0 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -21360,3 +21360,115 @@ AT_CHECK([ovn-sbctl find mac ip=10.0.0.2 mac='"00:00:00:00:03:02"' logical_port=
 
 OVN_CLEANUP([hv1],[hv2])
 AT_CLEANUP
+
+AT_SETUP([ovn -- lflow cache for conjunctions])
+ovn_start
+
+net_add n1
+sim_add hv1
+
+as hv1
+ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.1
+
+ovn-nbctl ls-add sw0
+ovn-nbctl lsp-add sw0 sw0-p1
+ovn-nbctl lsp-set-addresses sw0-p1 "10:14:00:00:00:03 10.0.0.3"
+ovn-nbctl lsp-set-port-security sw0-p1 "10:14:00:00:00:03 10.0.0.3"
+
+ovn-nbctl lsp-add sw0 sw0-p2
+ovn-nbctl lsp-set-addresses sw0-p2 "10:14:00:00:00:04 10.0.0.4"
+ovn-nbctl lsp-set-port-security sw0-p2 "10:14:00:00:00:04 10.0.0.4"
+
+ovn-nbctl lsp-add sw0 sw0-p3
+ovn-nbctl lsp-set-addresses sw0-p3 "10:14:00:00:00:05 10.0.0.5"
+ovn-nbctl lsp-set-port-security sw0-p3 "10:14:00:00:00:05 10.0.0.5"
+
+ovn-nbctl lsp-add sw0 sw0-p4
+ovn-nbctl lsp-set-addresses sw0-p4 "10:14:00:00:00:06 10.0.0.6"
+ovn-nbctl lsp-set-port-security sw0-p4 "10:14:00:00:00:06 10.0.0.6"
+
+as hv1
+ovs-vsctl -- add-port br-int hv1-vif1 -- \
+    set interface hv1-vif1 external-ids:iface-id=sw0-p1 \
+    options:tx_pcap=hv1/vif1-tx.pcap \
+    options:rxq_pcap=hv1/vif1-rx.pcap \
+    ofport-request=1
+ovs-vsctl -- add-port br-int hv1-vif2 -- \
+    set interface hv1-vif2 external-ids:iface-id=sw0-p2 \
+    options:tx_pcap=hv1/vif2-tx.pcap \
+    options:rxq_pcap=hv1/vif2-rx.pcap \
+    ofport-request=2
+ovs-vsctl -- add-port br-int hv1-vif3 -- \
+    set interface hv1-vif3 external-ids:iface-id=sw0-p3 \
+    options:tx_pcap=hv1/vif3-tx.pcap \
+    options:rxq_pcap=hv1/vif3-rx.pcap \
+    ofport-request=3
+ovs-vsctl -- add-port br-int hv1-vif4 -- \
+    set interface hv1-vif4 external-ids:iface-id=sw0-p4 \
+    options:tx_pcap=hv1/vif4-tx.pcap \
+    options:rxq_pcap=hv1/vif4-rx.pcap \
+    ofport-request=4
+
+OVS_WAIT_UNTIL([test x$(ovn-nbctl lsp-get-up sw0-p1) = xup])
+OVS_WAIT_UNTIL([test x$(ovn-nbctl lsp-get-up sw0-p2) = xup])
+OVS_WAIT_UNTIL([test x$(ovn-nbctl lsp-get-up sw0-p3) = xup])
+OVS_WAIT_UNTIL([test x$(ovn-nbctl lsp-get-up sw0-p4) = xup])
+
+ovn-nbctl pg-add pg0 sw0-p1 sw0-p2
+ovn-nbctl acl-add pg0 to-lport 1002 "outport == @pg0 && ip4 && tcp.dst >= 80 && tcp.dst <= 82" allow
+ovn-nbctl --wait=hv sync
+
+OVS_WAIT_UNTIL([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id=2")])
+
+# Add sw0-p3 to the port group pg0. The conj_id should remain 2.
+ovn-nbctl pg-set-ports pg0 sw0-p1 sw0-p2 sw0-p3
+OVS_WAIT_UNTIL([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id=2")])
+
+# Add sw0-p4 to the port group pg0. The conj_id should remain 2.
+ovn-nbctl pg-set-ports pg0 sw0-p1 sw0-p2 sw0-p3 sw0-p4
+OVS_WAIT_UNTIL([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id=2")])
+
+# Add another ACL with a conjunction.
+ovn-nbctl acl-add pg0 to-lport 1002 "outport == @pg0 && ip4 && udp.dst >= 80 && udp.dst <= 82" allow
+OVS_WAIT_UNTIL([test 2 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep tcp | grep -c "conj_id=2")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep udp | grep -c "conj_id=3")])
+
+# Delete the tcp ACL.
+ovn-nbctl acl-del pg0 to-lport 1002 "outport == @pg0 && ip4 && tcp.dst >= 80 && tcp.dst <= 82"
+OVS_WAIT_UNTIL([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep udp | grep -c "conj_id=3")])
+
+# Add back the tcp ACL.
+ovn-nbctl acl-add pg0 to-lport 1002 "outport == @pg0 && ip4 && tcp.dst >= 80 && tcp.dst <= 82" allow
+OVS_WAIT_UNTIL([test 2 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep udp | grep -c "conj_id=3")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep tcp | grep -c "conj_id=4")])
+
+ovn-nbctl acl-add pg0 to-lport 1002 "outport == @pg0 && inport == @pg0 && ip4 && tcp.dst >= 84 && tcp.dst <= 86" allow
+OVS_WAIT_UNTIL([test 3 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep udp | grep -c "conj_id=3")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep tcp | grep -c "conj_id=4")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep tcp | grep -c "conj_id=5")])
+
+ovn-nbctl clear port_group pg0 acls
+OVS_WAIT_UNTIL([test 0 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id")])
+
+ovn-nbctl --wait=hv acl-add pg0 to-lport 1002 "outport == @pg0 && ip4 && tcp.dst >= 80 && tcp.dst <= 82" allow
+ovn-nbctl --wait=hv acl-add pg0 to-lport 1002 "outport == @pg0 && ip4 && udp.dst >= 80 && udp.dst <= 82" allow
+OVS_WAIT_UNTIL([test 2 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep tcp | grep -c "conj_id=6")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep udp | grep -c "conj_id=7")])
+
+# Flush the lflow cache.
+as hv1 ovn-appctl -t ovn-controller flush-lflow-cache
+OVS_WAIT_UNTIL([test 2 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id=2")])
+AT_CHECK([test 1 = $(as hv1 ovs-ofctl dump-flows br-int table=44 | grep -c "conj_id=3")])
+
+OVN_CLEANUP([hv1])
+
+AT_CLEANUP
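
To summarize the caching scheme described in the commit message above, here
is a minimal, self-contained sketch of the idea: a table keyed by the
logical flow's UUID remembers the conjunction-id offset handed out the
first time the lflow was translated, so later recomputes reuse the same
offset and the conj_id values programmed into OpenFlow stay stable.  This
sketch is illustrative only and is not part of the patch; it uses a plain
fixed-size array and hypothetical names (toy_cache, toy_get_conj_id_ofs,
...) instead of OVS's hmap and the sbrec types.

/* Illustrative sketch only -- NOT part of the patch. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct toy_uuid { uint32_t parts[4]; };      /* stand-in for struct uuid */

struct toy_cache_entry {
    bool used;
    struct toy_uuid lflow_uuid;              /* key */
    uint32_t conj_id_ofs;                    /* persisted offset */
};

#define TOY_CACHE_SIZE 64
static struct toy_cache_entry toy_cache[TOY_CACHE_SIZE];
static uint32_t next_conj_id_ofs = 1;        /* like fo->pd.conj_id_ofs */

static bool
toy_uuid_equals(const struct toy_uuid *a, const struct toy_uuid *b)
{
    return !memcmp(a, b, sizeof *a);
}

/* Returns the conjunction-id offset for 'uuid', allocating and caching a
 * fresh one the first time this lflow is seen.  A later recompute of the
 * same lflow gets the same offset back, so the conj_id values in the
 * resulting OpenFlow flows do not change. */
static uint32_t
toy_get_conj_id_ofs(const struct toy_uuid *uuid, uint32_t n_conjs)
{
    struct toy_cache_entry *free_slot = NULL;
    for (int i = 0; i < TOY_CACHE_SIZE; i++) {
        if (toy_cache[i].used
            && toy_uuid_equals(&toy_cache[i].lflow_uuid, uuid)) {
            return toy_cache[i].conj_id_ofs; /* cache hit: stable offset */
        }
        if (!toy_cache[i].used && !free_slot) {
            free_slot = &toy_cache[i];
        }
    }

    /* Cache miss: allocate a new offset and remember it if there is room. */
    uint32_t ofs = next_conj_id_ofs;
    next_conj_id_ofs += n_conjs;
    if (free_slot) {
        free_slot->used = true;
        free_slot->lflow_uuid = *uuid;
        free_slot->conj_id_ofs = ofs;
    }
    return ofs;
}

int
main(void)
{
    struct toy_uuid acl_lflow = { { 0xdeadbeef, 1, 2, 3 } };

    /* The first translation allocates an offset; the "recompute" reuses it. */
    printf("first pass: ofs=%" PRIu32 "\n", toy_get_conj_id_ofs(&acl_lflow, 2));
    printf("recompute:  ofs=%" PRIu32 "\n", toy_get_conj_id_ofs(&acl_lflow, 2));
    return 0;
}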