From patchwork Mon Dec  4 08:36:31 2017
X-Patchwork-Submitter: Guoshuai Li <ligs@dtdream.com>
X-Patchwork-Id: 844150
From: Guoshuai Li <ligs@dtdream.com>
To: dev@openvswitch.org
Date: Mon, 4 Dec 2017 16:36:31 +0800
Message-Id: <20171204083631.2020-2-ligs@dtdream.com>
In-Reply-To: <20171204083631.2020-1-ligs@dtdream.com>
References: <20171204083631.2020-1-ligs@dtdream.com>
Subject: [ovs-dev] [PATCH V4 2/2] ovn: OVN Support QoS meter

This feature is used to limit the bandwidth of flows, such as floating IPs.

ovn-northd changes:
1. Add a bandwidth column to the northbound QoS table.
2. Add QOS_METER stages to the logical switch ingress and egress pipelines.
3. Add a set_meter() action to the southbound Logical_Flow table.

ovn-controller changes:
Add a meter_table so that set_meter actions can be programmed into the
OpenFlow meter table.

Currently this feature is only supported with the userspace (DPDK) datapath.

Signed-off-by: Guoshuai Li <ligs@dtdream.com>
---
V4:
1. Fix checkpatch errors.
2. Use uint64_t for the local rate variable so that very large rates are
   supported.
3. Fix formatting mismatches.
4. Fix the problem that the meter stage could not be the last table.
5. Add a note and description of the rate unit.
6. Add test cases.
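For orientation, a minimal usage sketch (it closely mirrors the test case
added to tests/ovn.at below; the switch "lsw0" and port "lp1" names come from
that test, and a userspace/DPDK datapath is assumed):

    # Create a QoS rule on logical switch lsw0 matching traffic from lp1,
    # then give it a bandwidth limit of 100 kbps with a 1000 kbps burst.
    qos_id=$(ovn-nbctl --wait=hv -- --id=@lp1-qos create QoS priority=100 \
        action=dscp=48 match="inport\=\=\"lp1\"" direction="from-lport" \
        -- set Logical_Switch lsw0 qos_rules=@lp1-qos)
    ovn-nbctl --wait=hv set QoS $qos_id bandwidth=rate=100,burst=1000

    # ovn-controller then installs an OpenFlow meter and a flow pointing
    # at it on the hypervisor:
    ovs-ofctl -O OpenFlow13 dump-meters br-int
    ovs-ofctl -O OpenFlow13 dump-flows br-int | grep meter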
--- NEWS | 1 + include/ovn/actions.h | 13 ++++- ovn/controller/lflow.c | 10 +++- ovn/controller/lflow.h | 1 + ovn/controller/ofctrl.c | 101 +++++++++++++++++++++++++++++++--- ovn/controller/ofctrl.h | 3 +- ovn/controller/ovn-controller.c | 11 ++-- ovn/lib/actions.c | 83 ++++++++++++++++++++++++++++ ovn/northd/ovn-northd.8.xml | 54 ++++++++++++++----- ovn/northd/ovn-northd.c | 116 ++++++++++++++++++++++++++-------------- ovn/ovn-nb.ovsschema | 14 +++-- ovn/ovn-nb.xml | 16 ++++++ ovn/ovn-sb.xml | 15 ++++++ ovn/utilities/ovn-trace.c | 4 ++ tests/ovn.at | 52 +++++++++++++++--- tests/test-ovn.c | 5 ++ 16 files changed, 424 insertions(+), 75 deletions(-) diff --git a/NEWS b/NEWS index 427c8f83d..cc1f31159 100644 --- a/NEWS +++ b/NEWS @@ -69,6 +69,7 @@ v2.8.0 - 31 Aug 2017 gateway. * Add support for ACL logging. * ovn-northd now has native support for active-standby high availability. + * Add support for QoS bandwidth limt with DPDK. - Tracing with ofproto/trace now traces through recirculation. - OVSDB: * New support for role-based access control (see ovsdb-server(1)). diff --git a/include/ovn/actions.h b/include/ovn/actions.h index ea90dbb2a..9554a395d 100644 --- a/include/ovn/actions.h +++ b/include/ovn/actions.h @@ -75,7 +75,8 @@ struct ovn_extend_table; OVNACT(DNS_LOOKUP, ovnact_dns_lookup) \ OVNACT(LOG, ovnact_log) \ OVNACT(PUT_ND_RA_OPTS, ovnact_put_opts) \ - OVNACT(ND_NS, ovnact_nest) + OVNACT(ND_NS, ovnact_nest) \ + OVNACT(SET_METER, ovnact_set_meter) /* enum ovnact_type, with a member OVNACT_ for each action. */ enum OVS_PACKED_ENUM ovnact_type { @@ -281,6 +282,13 @@ struct ovnact_log { char *name; }; +/* OVNACT_SET_METER. */ +struct ovnact_set_meter { + struct ovnact ovnact; + uint64_t rate; /* rate field, in kbps. */ + uint64_t burst; /* burst rate field, in kbps. */ +}; + /* Internal use by the helpers below. */ void ovnact_init(struct ovnact *, enum ovnact_type, size_t len); void *ovnact_put(struct ofpbuf *, enum ovnact_type, size_t len); @@ -490,6 +498,9 @@ struct ovnact_encode_params { /* A struct to figure out the group_id for group actions. */ struct ovn_extend_table *group_table; + /* A struct to figure out the meter_id for meter actions. */ + struct ovn_extend_table *meter_table; + /* OVN maps each logical flow table (ltable), one-to-one, onto a physical * OpenFlow flow table (ptable). 
A number of parameters describe this * mapping and data related to flow tables: diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c index 3d990c49c..1e79a5355 100644 --- a/ovn/controller/lflow.c +++ b/ovn/controller/lflow.c @@ -27,6 +27,7 @@ #include "ovn/expr.h" #include "ovn/lib/ovn-l7.h" #include "ovn/lib/ovn-sb-idl.h" +#include "ovn/lib/extend-table.h" #include "packets.h" #include "physical.h" #include "simap.h" @@ -62,6 +63,7 @@ static void consider_logical_flow(struct controller_ctx *ctx, const struct sbrec_logical_flow *lflow, const struct hmap *local_datapaths, struct ovn_extend_table *group_table, + struct ovn_extend_table *meter_table, const struct sbrec_chassis *chassis, struct hmap *dhcp_opts, struct hmap *dhcpv6_opts, @@ -144,6 +146,7 @@ add_logical_flows(struct controller_ctx *ctx, const struct chassis_index *chassis_index, const struct hmap *local_datapaths, struct ovn_extend_table *group_table, + struct ovn_extend_table *meter_table, const struct sbrec_chassis *chassis, const struct shash *addr_sets, struct hmap *flow_table, @@ -174,7 +177,7 @@ add_logical_flows(struct controller_ctx *ctx, SBREC_LOGICAL_FLOW_FOR_EACH (lflow, ctx->ovnsb_idl) { consider_logical_flow(ctx, chassis_index, lflow, local_datapaths, - group_table, chassis, + group_table, meter_table, chassis, &dhcp_opts, &dhcpv6_opts, &nd_ra_opts, &conj_id_ofs, addr_sets, flow_table, active_tunnels, local_lport_ids); @@ -191,6 +194,7 @@ consider_logical_flow(struct controller_ctx *ctx, const struct sbrec_logical_flow *lflow, const struct hmap *local_datapaths, struct ovn_extend_table *group_table, + struct ovn_extend_table *meter_table, const struct sbrec_chassis *chassis, struct hmap *dhcp_opts, struct hmap *dhcpv6_opts, @@ -263,6 +267,7 @@ consider_logical_flow(struct controller_ctx *ctx, .is_switch = is_switch(ldp), .is_gateway_router = is_gateway_router(ldp, local_datapaths), .group_table = group_table, + .meter_table = meter_table, .pipeline = ingress ? 
OVNACT_P_INGRESS : OVNACT_P_EGRESS, .ingress_ptable = OFTABLE_LOG_INGRESS_PIPELINE, @@ -435,13 +440,14 @@ lflow_run(struct controller_ctx *ctx, const struct chassis_index *chassis_index, const struct hmap *local_datapaths, struct ovn_extend_table *group_table, + struct ovn_extend_table *meter_table, const struct shash *addr_sets, struct hmap *flow_table, struct sset *active_tunnels, struct sset *local_lport_ids) { add_logical_flows(ctx, chassis_index, local_datapaths, - group_table, chassis, addr_sets, flow_table, + group_table, meter_table, chassis, addr_sets, flow_table, active_tunnels, local_lport_ids); add_neighbor_flows(ctx, flow_table); } diff --git a/ovn/controller/lflow.h b/ovn/controller/lflow.h index 087b0ed8d..22bf5341a 100644 --- a/ovn/controller/lflow.h +++ b/ovn/controller/lflow.h @@ -67,6 +67,7 @@ void lflow_run(struct controller_ctx *, const struct chassis_index *, const struct hmap *local_datapaths, struct ovn_extend_table *group_table, + struct ovn_extend_table *meter_table, const struct shash *addr_sets, struct hmap *flow_table, struct sset *active_tunnels, diff --git a/ovn/controller/ofctrl.c b/ovn/controller/ofctrl.c index 24894cb64..f4ac3cc7b 100644 --- a/ovn/controller/ofctrl.c +++ b/ovn/controller/ofctrl.c @@ -71,6 +71,8 @@ static void ovn_flow_destroy(struct ovn_flow *); static void add_group(uint32_t table_id, struct ds *ds, struct ovs_list *msgs); static void delete_group(uint32_t table_id, struct ovs_list *msgs); +static void add_meter(uint32_t table_id, struct ds *ds, struct ovs_list *msgs); +static void delete_meter(uint32_t table_id, struct ovs_list *msgs); /* OpenFlow connection to the switch. */ static struct rconn *swconn; @@ -137,6 +139,9 @@ static struct hmap installed_flows; /* A reference to the group_table. */ static struct ovn_extend_table *groups; +/* A reference to the meter_table. */ +static struct ovn_extend_table *meters; + /* MFF_* field ID for our Geneve option. In S_TLV_TABLE_MOD_SENT, this is * the option we requested (we don't know whether we obtained it yet). In * S_CLEAR_FLOWS or S_UPDATE_FLOWS, this is really the option we have. */ @@ -148,13 +153,16 @@ static struct ofpbuf *encode_flow_mod(struct ofputil_flow_mod *); static struct ofpbuf *encode_group_mod(const struct ofputil_group_mod *); +static struct ofpbuf *encode_meter_mod(const struct ofputil_meter_mod *); + static void ovn_flow_table_clear(struct hmap *flow_table); static void ovn_flow_table_destroy(struct hmap *flow_table); static void ofctrl_recv(const struct ofp_header *, enum ofptype); void -ofctrl_init(struct ovn_extend_table *group_table) +ofctrl_init(struct ovn_extend_table *group_table, + struct ovn_extend_table *meter_table) { swconn = rconn_create(5, 0, DSCP_DEFAULT, 1 << OFP13_VERSION); tx_counter = rconn_packet_counter_create(); @@ -164,6 +172,9 @@ ofctrl_init(struct ovn_extend_table *group_table) groups = group_table; groups->create = add_group; groups->remove = delete_group; + meters = meter_table; + meters->create = add_meter; + meters->remove = delete_meter; } /* S_NEW, for a new connection. @@ -394,6 +405,18 @@ run_S_CLEAR_FLOWS(void) ovn_extend_table_clear(groups, true); } + /* Send a meter_mod to delete all meters. */ + struct ofputil_meter_mod mm; + memset(&mm, 0, sizeof mm); + mm.command = OFPMC13_DELETE; + mm.meter.meter_id = OFPM13_ALL; + queue_msg(encode_meter_mod(&mm)); + + /* Clear existing meters, to match the state of the switch. */ + if (meters) { + ovn_extend_table_clear(meters, true); + } + /* All flow updates are irrelevant now. 
*/ struct ofctrl_flow_update *fup, *next; LIST_FOR_EACH_SAFE (fup, next, list_node, &flow_updates) { @@ -813,6 +836,65 @@ delete_group(uint32_t table_id, struct ovs_list *msgs) { ds_destroy(&group_string); ofputil_uninit_group_mod(&gm); } + +static struct ofpbuf * +encode_meter_mod(const struct ofputil_meter_mod *mm) +{ + return ofputil_encode_meter_mod(OFP13_VERSION, mm); +} + +static void +add_meter_mod(const struct ofputil_meter_mod *mm, struct ovs_list *msgs) +{ + struct ofpbuf *msg = encode_meter_mod(mm); + ovs_list_push_back(msgs, &msg->list_node); +} + +static void +add_meter(uint32_t table_id, struct ds *ds, struct ovs_list *msgs) +{ + /* Create and install new meter. */ + struct ofputil_meter_mod mm; + enum ofputil_protocol usable_protocols; + char *error; + struct ds meter_string = DS_EMPTY_INITIALIZER; + ds_put_format(&meter_string, "meter=%"PRIu32",%s", table_id, ds_cstr(ds)); + + error = parse_ofp_meter_mod_str(&mm, ds_cstr(&meter_string), + OFPMC13_ADD, &usable_protocols); + if (!error) { + add_meter_mod(&mm, msgs); + } else { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); + VLOG_ERR_RL(&rl, "new meter %s %s", error, + ds_cstr(&meter_string)); + free(error); + } + ds_destroy(&meter_string); +} + +static void +delete_meter(uint32_t table_id, struct ovs_list *msgs) { + /* Delete the meter. */ + struct ofputil_meter_mod mm; + enum ofputil_protocol usable_protocols; + char *error; + struct ds meter_string = DS_EMPTY_INITIALIZER; + ds_put_format(&meter_string, "meter=%"PRIu32"", table_id); + + error = parse_ofp_meter_mod_str(&mm, ds_cstr(&meter_string), + OFPMC13_DELETE, + &usable_protocols); + if (!error) { + add_meter_mod(&mm, msgs); + } else { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); + VLOG_ERR_RL(&rl, "Error deleting meter %"PRIu32": %s", + table_id, error); + free(error); + } + ds_destroy(&meter_string); +} static void add_ct_flush_zone(uint16_t zone_id, struct ovs_list *msgs) @@ -843,11 +925,10 @@ ofctrl_can_put(void) /* Replaces the flow table on the switch, if possible, by the flows added * with ofctrl_add_flow(). * - * Replaces the group table on the switch, if possible, by the contents of - * 'groups->desired_groups'. Regardless of whether the group table - * is updated, this deletes all the groups from the - * 'groups->desired_groups' and frees them. (The hmap itself isn't - * destroyed.) + * Replaces the group table and meter table on the switch, if possible, by the + * contents of 'groups->desired'. Regardless of whether the group table + * is updated, this deletes all the groups from the 'groups->desired' and frees + * them. (The hmap itself isn't destroyed.) * * Sends conntrack flush messages to each zone in 'pending_ct_zones' that * is in the CT_ZONE_OF_QUEUED state and then moves the zone into the @@ -861,6 +942,7 @@ ofctrl_put(struct hmap *flow_table, struct shash *pending_ct_zones, if (!ofctrl_can_put()) { ovn_flow_table_clear(flow_table); ovn_extend_table_clear(groups, false); + ovn_extend_table_clear(meters, false); return; } @@ -880,6 +962,8 @@ ofctrl_put(struct hmap *flow_table, struct shash *pending_ct_zones, ovn_extend_table_install_desired(groups, &msgs); + ovn_extend_table_install_desired(meters, &msgs); + /* Iterate through all of the installed flows. If any of them are no * longer desired, delete them; if any of them should have different * actions, update them. 
*/ @@ -956,6 +1040,11 @@ ofctrl_put(struct hmap *flow_table, struct shash *pending_ct_zones, /* Move the contents of groups->desired to groups->existing. */ ovn_extend_table_move(groups); + ovn_extend_table_remove_existing(meters, &msgs); + + /* Move the contents of meters->desired to meters->existing. */ + ovn_extend_table_move(meters); + if (!ovs_list_is_empty(&msgs)) { /* Add a barrier to the list of messages. */ struct ofpbuf *barrier = ofputil_encode_barrier_request(OFP13_VERSION); diff --git a/ovn/controller/ofctrl.h b/ovn/controller/ofctrl.h index 9b5eab1f4..125f9a4c2 100644 --- a/ovn/controller/ofctrl.h +++ b/ovn/controller/ofctrl.h @@ -31,7 +31,8 @@ struct ovsrec_bridge; struct shash; /* Interface for OVN main loop. */ -void ofctrl_init(struct ovn_extend_table *group_table); +void ofctrl_init(struct ovn_extend_table *group_table, + struct ovn_extend_table *meter_table); enum mf_field_id ofctrl_run(const struct ovsrec_bridge *br_int, struct shash *pending_ct_zones); bool ofctrl_can_put(void); diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c index c486887a5..7592bda25 100644 --- a/ovn/controller/ovn-controller.c +++ b/ovn/controller/ovn-controller.c @@ -597,9 +597,13 @@ main(int argc, char *argv[]) struct ovn_extend_table group_table; ovn_extend_table_init(&group_table); + /* Initialize meter ids for QoS. */ + struct ovn_extend_table meter_table; + ovn_extend_table_init(&meter_table); + daemonize_complete(); - ofctrl_init(&group_table); + ofctrl_init(&group_table, &meter_table); pinctrl_init(); lflow_init(); @@ -709,8 +713,8 @@ main(int argc, char *argv[]) struct hmap flow_table = HMAP_INITIALIZER(&flow_table); lflow_run(&ctx, chassis, &chassis_index, &local_datapaths, &group_table, - &addr_sets, &flow_table, &active_tunnels, - &local_lport_ids); + &meter_table, &addr_sets, &flow_table, + &active_tunnels, &local_lport_ids); if (chassis_id) { bfd_run(&ctx, br_int, chassis, &local_datapaths, @@ -844,6 +848,7 @@ main(int argc, char *argv[]) shash_destroy(&pending_ct_zones); ovn_extend_table_destroy(&group_table); + ovn_extend_table_destroy(&meter_table); ovsdb_idl_loop_destroy(&ovs_idl_loop); ovsdb_idl_loop_destroy(&ovnsb_idl_loop); diff --git a/ovn/lib/actions.c b/ovn/lib/actions.c index 8a7fe4f59..a3ca48eb3 100644 --- a/ovn/lib/actions.c +++ b/ovn/lib/actions.c @@ -2097,6 +2097,87 @@ ovnact_log_free(struct ovnact_log *log) free(log->name); } +static void +parse_set_meter_action(struct action_context *ctx) +{ + uint64_t rate = 0; + uint64_t burst = 0; + + lexer_force_match(ctx->lexer, LEX_T_LPAREN); /* Skip '('. */ + if (ctx->lexer->token.type == LEX_T_INTEGER + && ctx->lexer->token.format == LEX_F_DECIMAL) { + rate = ntohll(ctx->lexer->token.value.integer); + } + lexer_get(ctx->lexer); + if (lexer_match(ctx->lexer, LEX_T_COMMA)) { /* Skip ','. */ + if (ctx->lexer->token.type == LEX_T_INTEGER + && ctx->lexer->token.format == LEX_F_DECIMAL) { + burst = ntohll(ctx->lexer->token.value.integer); + } + lexer_get(ctx->lexer); + } + lexer_force_match(ctx->lexer, LEX_T_RPAREN); /* Skip ')'. 
*/ + + if (!rate) { + lexer_error(ctx->lexer, + "Rate %"PRId64" for set_meter is not in valid.", + rate); + return; + } + + struct ovnact_set_meter *cl = ovnact_put_SET_METER(ctx->ovnacts); + cl->rate = rate; + cl->burst = burst; +} + +static void +format_SET_METER(const struct ovnact_set_meter *cl, struct ds *s) +{ + if (cl->burst) { + ds_put_format(s, "set_meter(%"PRId64", %"PRId64");", + cl->rate, cl->burst); + } else { + ds_put_format(s, "set_meter(%"PRId64");", cl->rate); + } +} + +static void +encode_SET_METER(const struct ovnact_set_meter *cl, + const struct ovnact_encode_params *ep, + struct ofpbuf *ofpacts) +{ + uint32_t table_id; + struct ofpact_meter *om; + + struct ds ds = DS_EMPTY_INITIALIZER; + if (cl->burst) { + ds_put_format(&ds, + "kbps burst stats bands=type=drop rate=%"PRId64" " + "burst_size=%"PRId64"", + cl->rate, cl->burst); + } else { + ds_put_format(&ds, "kbps stats bands=type=drop rate=%"PRId64"", + cl->rate); + } + + table_id = ovn_extend_table_assign_id(ep->meter_table, &ds); + if (table_id == EXT_TABLE_ID_INVALID) { + ds_destroy(&ds); + return; + } + + ds_destroy(&ds); + + /* Create an action to set the meter. */ + om = ofpact_put_METER(ofpacts); + om->meter_id = table_id; +} + +static void +ovnact_set_meter_free(struct ovnact_set_meter *ct OVS_UNUSED) +{ +} + /* Parses an assignment or exchange or put_dhcp_opts action. */ static void parse_set_action(struct action_context *ctx) @@ -2184,6 +2265,8 @@ parse_action(struct action_context *ctx) parse_SET_QUEUE(ctx); } else if (lexer_match_id(ctx->lexer, "log")) { parse_LOG(ctx); + } else if (lexer_match_id(ctx->lexer, "set_meter")) { + parse_set_meter_action(ctx); } else { lexer_syntax_error(ctx->lexer, "expecting action"); } diff --git a/ovn/northd/ovn-northd.8.xml b/ovn/northd/ovn-northd.8.xml index 27bb7c89d..28f0c61e2 100644 --- a/ovn/northd/ovn-northd.8.xml +++ b/ovn/northd/ovn-northd.8.xml @@ -366,7 +366,28 @@ -

-    Ingress Table 8: LB
+    Ingress Table 8: from-lport QoS meter
+
+      Logical flows in this table closely reproduce those in the QoS table
+      bandwidth column in the OVN_Northbound database for the from-lport
+      direction.
+
+        * For every qos_rules entry of a logical switch, a flow is added at
+          the priority specified in the QoS table.
+
+        * One priority-0 fallback flow matches all packets and advances to
+          the next table.
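As an illustration of what this new stage produces (a sketch, not literal
patch output; the port name "lp1" and the rate/burst values are taken from
the test case later in this patch, and the exact output format may differ):

    # Southbound logical flows for a from-lport rule with rate=100, burst=1000:
    ovn-sbctl lflow-list lsw0 | grep qos_meter
    #   table=8 (ls_in_qos_meter), priority=100,
    #       match=(inport == "lp1"), action=(set_meter(100, 1000); next;)
    #   table=8 (ls_in_qos_meter), priority=0, match=(1), action=(next;)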

+    Ingress Table 9: LB

       It contains a priority-0 flow that simply moves traffic to the next
@@ -379,7 +400,7 @@
       connection.)

-    Ingress Table 9: Stateful
+    Ingress Table 10: Stateful

@@ -424,7 +445,7 @@
-    Ingress Table 10: ARP/ND responder
+    Ingress Table 11: ARP/ND responder

       This table implements ARP/ND responder in a logical switch for known
@@ -574,7 +595,7 @@
       nd_na { ... }

-    Ingress Table 11: DHCP option processing
+    Ingress Table 12: DHCP option processing

       This table adds the DHCPv4 options to a DHCPv4 packet from the
@@ -634,7 +655,7 @@
       next;

-    Ingress Table 12: DHCP responses
+    Ingress Table 13: DHCP responses

       This table implements DHCP responder for the DHCP replies generated by
@@ -716,7 +737,7 @@
       output;

-    Ingress Table 13 DNS Lookup
+    Ingress Table 14 DNS Lookup

       This table looks up and resolves the DNS names to the corresponding
@@ -745,7 +766,7 @@
       reg0[4] = dns_lookup(); next;

-    Ingress Table 14 DNS Responses
+    Ingress Table 15 DNS Responses

       This table implements DNS responder for the DNS replies generated by
@@ -780,7 +801,7 @@
       output;

-    Ingress Table 15 Destination Lookup
+    Ingress Table 16 Destination Lookup

       This table implements switching behavior.  It contains these logical
@@ -882,7 +903,14 @@
       to-lport qos rules.

-    Egress Table 6: Stateful
+    Egress Table 6: to-lport QoS meter
+
+      This is similar to ingress table QoS meter except that it applies
+      to to-lport qos rules.
+
+    Egress Table 7: Stateful

       This is similar to ingress table Stateful except that
@@ -897,18 +925,18 @@
       A priority 34000 logical flow is added for each logical port which has
       DHCPv4 options defined to allow the DHCPv4 reply packet and which has
       DHCPv6 options defined to allow the DHCPv6 reply packet from the
-      Ingress Table 12: DHCP responses.
+      Ingress Table 13: DHCP responses.

       A priority 34000 logical flow is added for each logical switch datapath
       configured with DNS records with the match udp.dst == 53 to allow the
       DNS reply packet from the
-      Ingress Table 14: DNS responses.
+      Ingress Table 15: DNS responses.

-    Egress Table 7: Egress Port Security - IP
+    Egress Table 8: Egress Port Security - IP

       This is similar to the port security logic in table
@@ -918,7 +946,7 @@
       ip4.src and ip6.src

-    Egress Table 8: Egress Port Security - L2
+    Egress Table 9: Egress Port Security - L2

    This is similar to the ingress port security logic in ingress table diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c index 7e6b1d9a1..435f507fc 100644 --- a/ovn/northd/ovn-northd.c +++ b/ovn/northd/ovn-northd.c @@ -108,25 +108,27 @@ enum ovn_stage { PIPELINE_STAGE(SWITCH, IN, PRE_STATEFUL, 5, "ls_in_pre_stateful") \ PIPELINE_STAGE(SWITCH, IN, ACL, 6, "ls_in_acl") \ PIPELINE_STAGE(SWITCH, IN, QOS_MARK, 7, "ls_in_qos_mark") \ - PIPELINE_STAGE(SWITCH, IN, LB, 8, "ls_in_lb") \ - PIPELINE_STAGE(SWITCH, IN, STATEFUL, 9, "ls_in_stateful") \ - PIPELINE_STAGE(SWITCH, IN, ARP_ND_RSP, 10, "ls_in_arp_rsp") \ - PIPELINE_STAGE(SWITCH, IN, DHCP_OPTIONS, 11, "ls_in_dhcp_options") \ - PIPELINE_STAGE(SWITCH, IN, DHCP_RESPONSE, 12, "ls_in_dhcp_response") \ - PIPELINE_STAGE(SWITCH, IN, DNS_LOOKUP, 13, "ls_in_dns_lookup") \ - PIPELINE_STAGE(SWITCH, IN, DNS_RESPONSE, 14, "ls_in_dns_response") \ - PIPELINE_STAGE(SWITCH, IN, L2_LKUP, 15, "ls_in_l2_lkup") \ - \ - /* Logical switch egress stages. */ \ - PIPELINE_STAGE(SWITCH, OUT, PRE_LB, 0, "ls_out_pre_lb") \ - PIPELINE_STAGE(SWITCH, OUT, PRE_ACL, 1, "ls_out_pre_acl") \ - PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful") \ - PIPELINE_STAGE(SWITCH, OUT, LB, 3, "ls_out_lb") \ + PIPELINE_STAGE(SWITCH, IN, QOS_METER, 8, "ls_in_qos_meter") \ + PIPELINE_STAGE(SWITCH, IN, LB, 9, "ls_in_lb") \ + PIPELINE_STAGE(SWITCH, IN, STATEFUL, 10, "ls_in_stateful") \ + PIPELINE_STAGE(SWITCH, IN, ARP_ND_RSP, 11, "ls_in_arp_rsp") \ + PIPELINE_STAGE(SWITCH, IN, DHCP_OPTIONS, 12, "ls_in_dhcp_options") \ + PIPELINE_STAGE(SWITCH, IN, DHCP_RESPONSE, 13, "ls_in_dhcp_response") \ + PIPELINE_STAGE(SWITCH, IN, DNS_LOOKUP, 14, "ls_in_dns_lookup") \ + PIPELINE_STAGE(SWITCH, IN, DNS_RESPONSE, 15, "ls_in_dns_response") \ + PIPELINE_STAGE(SWITCH, IN, L2_LKUP, 16, "ls_in_l2_lkup") \ + \ + /* Logical switch egress stages. */ \ + PIPELINE_STAGE(SWITCH, OUT, PRE_LB, 0, "ls_out_pre_lb") \ + PIPELINE_STAGE(SWITCH, OUT, PRE_ACL, 1, "ls_out_pre_acl") \ + PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful") \ + PIPELINE_STAGE(SWITCH, OUT, LB, 3, "ls_out_lb") \ PIPELINE_STAGE(SWITCH, OUT, ACL, 4, "ls_out_acl") \ PIPELINE_STAGE(SWITCH, OUT, QOS_MARK, 5, "ls_out_qos_mark") \ - PIPELINE_STAGE(SWITCH, OUT, STATEFUL, 6, "ls_out_stateful") \ - PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_IP, 7, "ls_out_port_sec_ip") \ - PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_L2, 8, "ls_out_port_sec_l2") \ + PIPELINE_STAGE(SWITCH, OUT, QOS_METER, 6, "ls_out_qos_meter") \ + PIPELINE_STAGE(SWITCH, OUT, STATEFUL, 7, "ls_out_stateful") \ + PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_IP, 8, "ls_out_port_sec_ip") \ + PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_L2, 9, "ls_out_port_sec_l2") \ \ /* Logical router ingress stages. */ \ PIPELINE_STAGE(ROUTER, IN, ADMISSION, 0, "lr_in_admission") \ @@ -3383,21 +3385,57 @@ static void build_qos(struct ovn_datapath *od, struct hmap *lflows) { ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_MARK, 0, "1", "next;"); ovn_lflow_add(lflows, od, S_SWITCH_OUT_QOS_MARK, 0, "1", "next;"); + ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_METER, 0, "1", "next;"); + ovn_lflow_add(lflows, od, S_SWITCH_OUT_QOS_METER, 0, "1", "next;"); for (size_t i = 0; i < od->nbs->n_qos_rules; i++) { struct nbrec_qos *qos = od->nbs->qos_rules[i]; bool ingress = !strcmp(qos->direction, "from-lport") ? true :false; enum ovn_stage stage = ingress ? 
S_SWITCH_IN_QOS_MARK : S_SWITCH_OUT_QOS_MARK; + int64_t rate = 0; + int64_t burst = 0; + + for (size_t j = 0; j < qos->n_action; j++) { + if (!strcmp(qos->key_action[j], "dscp")) { + struct ds dscp_action = DS_EMPTY_INITIALIZER; + + ds_put_format(&dscp_action, "ip.dscp = %"PRId64"; next;", + qos->value_action[j]); + ovn_lflow_add(lflows, od, stage, + qos->priority, + qos->match, ds_cstr(&dscp_action)); + ds_destroy(&dscp_action); + } + } - if (!strcmp(qos->key_action, "dscp")) { - struct ds dscp_action = DS_EMPTY_INITIALIZER; + for (size_t n = 0; n < qos->n_bandwidth; n++) { + if (!strcmp(qos->key_bandwidth[n], "rate")) { + rate = qos->value_bandwidth[n]; + } else if (!strcmp(qos->key_bandwidth[n], "burst")) { + burst = qos->value_bandwidth[n]; + } + } + if (rate) { + struct ds meter_action = DS_EMPTY_INITIALIZER; + stage = ingress ? S_SWITCH_IN_QOS_METER : S_SWITCH_OUT_QOS_METER; + if (burst) { + ds_put_format(&meter_action, + "set_meter(%"PRId64", %"PRId64"); next;", + rate, burst); + } else { + ds_put_format(&meter_action, + "set_meter(%"PRId64"); next;", + rate); + } - ds_put_format(&dscp_action, "ip.dscp = %d; next;", - (uint8_t)qos->value_action); + /* Ingress and Egress QoS Meter Table. + * + * We limit the bandwidth of this flow by adding a meter table. + */ ovn_lflow_add(lflows, od, stage, qos->priority, - qos->match, ds_cstr(&dscp_action)); - ds_destroy(&dscp_action); + qos->match, ds_cstr(&meter_action)); + ds_destroy(&meter_action); } } } @@ -3513,7 +3551,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, struct ds actions = DS_EMPTY_INITIALIZER; /* Build pre-ACL and ACL tables for both ingress and egress. - * Ingress tables 3 through 9. Egress tables 0 through 6. */ + * Ingress tables 3 through 10. Egress tables 0 through 7. */ struct ovn_datapath *od; HMAP_FOR_EACH (od, key_node, datapaths) { if (!od->nbs) { @@ -3596,7 +3634,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_SEC_IP, 0, "1", "next;"); } - /* Ingress table 10: ARP/ND responder, skip requests coming from localnet + /* Ingress table 11: ARP/ND responder, skip requests coming from localnet * and vtep ports. (priority 100); see ovn-northd.8.xml for the * rationale. */ HMAP_FOR_EACH (op, key_node, ports) { @@ -3613,7 +3651,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, } } - /* Ingress table 10: ARP/ND responder, reply for known IPs. + /* Ingress table 11: ARP/ND responder, reply for known IPs. * (priority 50). */ HMAP_FOR_EACH (op, key_node, ports) { if (!op->nbsp) { @@ -3708,7 +3746,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, } } - /* Ingress table 10: ARP/ND responder, by default goto next. + /* Ingress table 11: ARP/ND responder, by default goto next. * (priority 0)*/ HMAP_FOR_EACH (od, key_node, datapaths) { if (!od->nbs) { @@ -3718,7 +3756,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;"); } - /* Logical switch ingress table 11 and 12: DHCP options and response + /* Logical switch ingress table 12 and 13: DHCP options and response * priority 100 flows. */ HMAP_FOR_EACH (op, key_node, ports) { if (!op->nbsp) { @@ -3820,7 +3858,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, } } - /* Logical switch ingress table 13 and 14: DNS lookup and response + /* Logical switch ingress table 14 and 15: DNS lookup and response * priority 100 flows. 
*/ HMAP_FOR_EACH (od, key_node, datapaths) { @@ -3852,9 +3890,9 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, ds_destroy(&action); } - /* Ingress table 11 and 12: DHCP options and response, by default goto next. - * (priority 0). - * Ingress table 13 and 14: DNS lookup and response, by default goto next. + /* Ingress table 12 and 13: DHCP options and response, by default goto + * next. (priority 0). + * Ingress table 14 and 15: DNS lookup and response, by default goto next. * (priority 0).*/ HMAP_FOR_EACH (od, key_node, datapaths) { @@ -3868,7 +3906,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_RESPONSE, 0, "1", "next;"); } - /* Ingress table 15: Destination lookup, broadcast and multicast handling + /* Ingress table 16: Destination lookup, broadcast and multicast handling * (priority 100). */ HMAP_FOR_EACH (op, key_node, ports) { if (!op->nbsp) { @@ -3888,7 +3926,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, "outport = \""MC_FLOOD"\"; output;"); } - /* Ingress table 13: Destination lookup, unicast handling (priority 50), */ + /* Ingress table 16: Destination lookup, unicast handling (priority 50), */ HMAP_FOR_EACH (op, key_node, ports) { if (!op->nbsp) { continue; @@ -3988,7 +4026,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, } } - /* Ingress table 13: Destination lookup for unknown MACs (priority 0). */ + /* Ingress table 16: Destination lookup for unknown MACs (priority 0). */ HMAP_FOR_EACH (od, key_node, datapaths) { if (!od->nbs) { continue; @@ -4000,8 +4038,8 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, } } - /* Egress tables 6: Egress port security - IP (priority 0) - * Egress table 7: Egress port security L2 - multicast/broadcast + /* Egress tables 8: Egress port security - IP (priority 0) + * Egress table 9: Egress port security L2 - multicast/broadcast * (priority 100). */ HMAP_FOR_EACH (od, key_node, datapaths) { if (!od->nbs) { @@ -4013,10 +4051,10 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports, "output;"); } - /* Egress table 6: Egress port security - IP (priorities 90 and 80) + /* Egress table 8: Egress port security - IP (priorities 90 and 80) * if port security enabled. * - * Egress table 7: Egress port security - L2 (priorities 50 and 150). + * Egress table 9: Egress port security - L2 (priorities 50 and 150). * * Priority 50 rules implement port security for enabled logical port. * diff --git a/ovn/ovn-nb.ovsschema b/ovn/ovn-nb.ovsschema index fcd878cf2..aa9e5e1b0 100644 --- a/ovn/ovn-nb.ovsschema +++ b/ovn/ovn-nb.ovsschema @@ -1,7 +1,7 @@ { "name": "OVN_Northbound", - "version": "5.8.1", - "cksum": "607160660 16929", + "version": "5.8.2", + "cksum": "2463470726 17489", "tables": { "NB_Global": { "columns": { @@ -164,7 +164,15 @@ "enum": ["set", ["dscp"]]}, "value": {"type": "integer", "minInteger": 0, - "maxInteger": 63}}}, + "maxInteger": 63}, + "min": 0, "max": "unlimited"}}, + "bandwidth": {"type": {"key": {"type": "string", + "enum": ["set", ["rate", + "burst"]]}, + "value": {"type": "integer", + "minInteger": 1, + "maxInteger": 4294967295}, + "min": 0, "max": "unlimited"}}, "external_ids": { "type": {"key": "string", "value": "string", "min": 0, "max": "unlimited"}}}, diff --git a/ovn/ovn-nb.xml b/ovn/ovn-nb.xml index 1091c05ce..23b5d876d 100644 --- a/ovn/ovn-nb.xml +++ b/ovn/ovn-nb.xml @@ -1264,6 +1264,22 @@ + +

+      bandwidth: the bandwidth limit to enforce on packets matching this
+      QoS rule.  Currently this is only supported by the userspace (DPDK)
+      datapath.
+
+        * rate: the rate limit, in kbps.
+
+        * burst: the burst size, in kbps.  This is optional and can only be
+          specified together with rate.
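For reference, a hedged CLI sketch of driving this column (mirroring the test
case later in this patch; "$qos_id" stands for the UUID of an existing QoS
row):

    # Rate-limit matching traffic to 100 kbps with a 1000 kbps burst.
    ovn-nbctl set QoS $qos_id bandwidth=rate=100,burst=1000
    # Clearing the column removes the bandwidth limit (and its meter) again.
    ovn-nbctl clear QoS $qos_id bandwidth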
    + + See External IDs at the beginning of this document. diff --git a/ovn/ovn-sb.xml b/ovn/ovn-sb.xml index ca8cbecdd..e24c4403b 100644 --- a/ovn/ovn-sb.xml +++ b/ovn/ovn-sb.xml @@ -1634,6 +1634,21 @@

+      set_meter(rate);
+      set_meter(rate, burst);
+
+        Parameters: rate, the rate limit, in kbps; burst, the burst rate
+        limit, in kbps.
+
+        This action sets the rate limit for a flow.
+
+        Example: set_meter(100, 1000);

    +
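To make the mapping concrete, a sketch of what this compiles down to on the
hypervisor (based on encode_SET_METER() and add_meter() earlier in this
patch; the meter id shown is illustrative, it is assigned dynamically):

    # set_meter(100, 1000) becomes an OpenFlow13 meter such as
    #   meter=1,kbps burst stats bands=type=drop rate=100 burst_size=1000
    # and the corresponding flow gains a "meter:1" action:
    ovs-ofctl -O OpenFlow13 dump-meters br-int
    ovs-ofctl -O OpenFlow13 dump-flows br-int | grep 'meter:'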
    diff --git a/ovn/utilities/ovn-trace.c b/ovn/utilities/ovn-trace.c index 7ff4a2682..06d4ddf8e 100644 --- a/ovn/utilities/ovn-trace.c +++ b/ovn/utilities/ovn-trace.c @@ -1888,6 +1888,10 @@ trace_actions(const struct ovnact *ovnacts, size_t ovnacts_len, case OVNACT_LOG: execute_log(ovnact_get_LOG(a), uflow, super); break; + + case OVNACT_SET_METER: + /* Nothing to do. */ + break; } } diff --git a/tests/ovn.at b/tests/ovn.at index b6a83cc22..f37a91279 100644 --- a/tests/ovn.at +++ b/tests/ovn.at @@ -1082,6 +1082,18 @@ reg1[0] = dns_lookup(); reg1[0] = dns_lookup("foo"); dns_lookup doesn't take any parameters +# set_meter +set_meter(0); + Rate 0 for set_meter is not in valid. +set_meter(1); + encodes as meter:1 +set_meter(100, 1000); + encodes as meter:2 +set_meter(100, 1000, ); + Syntax error at `,' expecting `)'. +set_meter(4294967295, 4294967295); + encodes as meter:3 + # put_nd_ra_opts reg1[0] = put_nd_ra_opts(addr_mode = "slaac", mtu = 1500, prefix = aef0::/64, slla = ae:01:02:03:04:05); encodes as controller(userdata=00.00.00.08.00.00.00.00.00.01.de.10.00.00.00.40.86.00.00.00.ff.00.ff.ff.00.00.00.00.00.00.00.00.05.01.00.00.00.00.05.dc.03.04.40.c0.ff.ff.ff.ff.ff.ff.ff.ff.00.00.00.00.ae.f0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.01.01.ae.01.02.03.04.05,pause) @@ -5953,7 +5965,7 @@ OVN_CLEANUP([hv]) AT_CLEANUP -AT_SETUP([ovn -- DSCP marking check]) +AT_SETUP([ovn -- DSCP marking and meter check]) AT_KEYWORDS([ovn]) ovn_start @@ -6023,10 +6035,32 @@ check_tos 0 qos_id=$(ovn-nbctl --wait=hv -- --id=@lp1-qos create QoS priority=100 action=dscp=48 match="inport\=\=\"lp1\"" direction="from-lport" -- set Logical_Switch lsw0 qos_rules=@lp1-qos) check_tos 48 +# check at hv without qos meter +AT_CHECK([as hv ovs-ofctl dump-flows br-int -O OpenFlow13 | grep meter | wc -l], [0], [0 +]) + +# Update the meter rate +ovn-nbctl --wait=hv set QoS $qos_id bandwidth=rate=100 + +# check at hv with a qos meter table +AT_CHECK([as hv ovs-ofctl dump-meters br-int -O OpenFlow13 | grep rate=100 | wc -l], [0], [1 +]) +AT_CHECK([as hv ovs-ofctl dump-flows br-int -O OpenFlow13 | grep meter | wc -l], [0], [1 +]) + # Update the DSCP marking ovn-nbctl --wait=hv set QoS $qos_id action=dscp=63 check_tos 63 +# Update the meter rate +ovn-nbctl --wait=hv set QoS $qos_id bandwidth=rate=4294967295,burst=4294967295 + +# check at hv with a qos meter table +AT_CHECK([as hv ovs-ofctl dump-meters br-int -O OpenFlow13 | grep burst_size=4294967295 | wc -l], [0], [1 +]) +AT_CHECK([as hv ovs-ofctl dump-flows br-int -O OpenFlow13 | grep meter | wc -l], [0], [1 +]) + ovn-nbctl --wait=hv set QoS $qos_id match="outport\=\=\"lp2\"" direction="to-lport" check_tos 63 @@ -6034,6 +6068,10 @@ check_tos 63 ovn-nbctl --wait=hv clear Logical_Switch lsw0 qos_rules check_tos 0 +# check at hv without qos meter +AT_CHECK([as hv ovs-ofctl dump-flows br-int -O OpenFlow13 | grep meter | wc -l], [0], [0 +]) + OVN_CLEANUP([hv]) AT_CLEANUP @@ -8502,9 +8540,9 @@ AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=32 | grep active_backup | gre sleep 3 # let BFD sessions settle so we get the right flows on the right chassis # make sure that flows for handling the outside router port reside on gw1 -AT_CHECK([as gw1 ovs-ofctl dump-flows br-int table=23 | grep 00:00:02:01:02:04 | wc -l], [0], [[1 +AT_CHECK([as gw1 ovs-ofctl dump-flows br-int table=24 | grep 00:00:02:01:02:04 | wc -l], [0], [[1 ]]) -AT_CHECK([as gw2 ovs-ofctl dump-flows br-int table=23 | grep 00:00:02:01:02:04 | wc -l], [0], [[0 +AT_CHECK([as gw2 ovs-ofctl dump-flows br-int table=24 | grep 
00:00:02:01:02:04 | wc -l], [0], [[0 ]]) # make sure ARP responder flows for outside router port reside on gw1 too @@ -8594,9 +8632,9 @@ AT_CHECK([ovs-vsctl --bare --columns bfd find Interface name=ovn-hv1-0],[0], sleep 3 # let BFD sessions settle so we get the right flows on the right chassis # make sure that flows for handling the outside router port reside on gw2 now -AT_CHECK([as gw2 ovs-ofctl dump-flows br-int table=23 | grep 00:00:02:01:02:04 | wc -l], [0], [[1 +AT_CHECK([as gw2 ovs-ofctl dump-flows br-int table=24 | grep 00:00:02:01:02:04 | wc -l], [0], [[1 ]]) -AT_CHECK([as gw1 ovs-ofctl dump-flows br-int table=23 | grep 00:00:02:01:02:04 | wc -l], [0], [[0 +AT_CHECK([as gw1 ovs-ofctl dump-flows br-int table=24 | grep 00:00:02:01:02:04 | wc -l], [0], [[0 ]]) # disconnect GW2 from the network, GW1 should take over @@ -8608,9 +8646,9 @@ sleep 4 bfd_dump # make sure that flows for handling the outside router port reside on gw2 now -AT_CHECK([as gw1 ovs-ofctl dump-flows br-int table=23 | grep 00:00:02:01:02:04 | wc -l], [0], [[1 +AT_CHECK([as gw1 ovs-ofctl dump-flows br-int table=24 | grep 00:00:02:01:02:04 | wc -l], [0], [[1 ]]) -AT_CHECK([as gw2 ovs-ofctl dump-flows br-int table=23 | grep 00:00:02:01:02:04 | wc -l], [0], [[0 +AT_CHECK([as gw2 ovs-ofctl dump-flows br-int table=24 | grep 00:00:02:01:02:04 | wc -l], [0], [[0 ]]) # check that the chassis redirect port has been reclaimed by the gw1 chassis diff --git a/tests/test-ovn.c b/tests/test-ovn.c index 4f65ee9d1..997e778f6 100644 --- a/tests/test-ovn.c +++ b/tests/test-ovn.c @@ -1211,6 +1211,10 @@ test_parse_actions(struct ovs_cmdl_context *ctx OVS_UNUSED) struct ovn_extend_table group_table; ovn_extend_table_init(&group_table); + /* Initialize meter ids for QoS. */ + struct ovn_extend_table meter_table; + ovn_extend_table_init(&meter_table); + simap_init(&ports); simap_put(&ports, "eth0", 5); simap_put(&ports, "eth1", 6); @@ -1250,6 +1254,7 @@ test_parse_actions(struct ovs_cmdl_context *ctx OVS_UNUSED) .aux = &ports, .is_switch = true, .group_table = &group_table, + .meter_table = &meter_table, .pipeline = OVNACT_P_INGRESS, .ingress_ptable = 8,