From patchwork Thu Jul  7 17:07:16 2016
From: Zong Kai Li <zealokii@gmail.com>
Date: Fri, 8 Jul 2016 01:07:16 +0800
To: ovs dev <dev@openvswitch.org>
Subject: [ovs-dev] [PATCH] [RFC Patch] ovn-controller: ignore lflow matching remote VM port

Currently, ovn-controller installs all lflows for a logical switch once it determines not to skip processing that logical switch. This installs too many OVS flows: we now have 11 tables in the logical switch ingress pipeline and 8 tables in the egress pipeline, with more to come in the future.

There are two kinds of lflows for a logical switch. One kind has no inport/outport match, such as the lflows in tables ls_in_arp_rsp and ls_in_l2_lkup. The other kind does, and for now, lflows in the following tables belong to this type:
- ls_in_port_sec_l2
- ls_in_port_sec_ip
- ls_in_port_sec_nd
- ls_in_acl
- ls_out_pre_acl
- ls_out_acl
- ls_out_port_sec_ip
- ls_out_port_sec_l2

Consider how a packet travels through flows in the network topology (P: port, S: switch, R: router; the two VM (or VIF) ports are on different chassis):

- P-S-P: only flows matching the local VM port as "inport" and the local VM port as "outport" will be matched. There is no chance for flows matching the remote VM port as "inport" or "outport" to be matched.
- P-S-R-S-P and P-S-R...R-S-P: these cases look different from the one above, but they share the same "last hop". No matter how many routers (with or without switches in between) are traversed, before the packet leaves the current chassis the next hop will be destination_switch_gateway -> destination_switch_port, so it becomes a P-S-P case again. And since this patch does not change the ingress pipeline for logical routers, traffic from router port to router port will not be impacted.

So, as we can see, we don't need to install a flow for an lflow that matches on inport or outport in the logical switch ingress pipeline when it tries to match a VM (or VIF) port that does not belong to the current chassis. This helps ovn-controller avoid installing many unnecessary flows.

Signed-off-by: Zong Kai LI <zealokii@gmail.com>
---
 ovn/controller/lflow.c          | 42 +++++++++++++++++++++++++++++++++++------
 ovn/controller/lflow.h          |  3 ++-
 ovn/controller/ovn-controller.c |  2 +-
 3 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
index 05e1eaf..b0602b0 100644
--- a/ovn/controller/lflow.c
+++ b/ovn/controller/lflow.c
@@ -323,7 +323,8 @@ static void consider_logical_flow(const struct lport_index *lports,
                                   const struct simap *ct_zones,
                                   struct hmap *dhcp_opts_p,
                                   uint32_t *conj_id_ofs_p,
-                                  struct hmap *flow_table);
+                                  struct hmap *flow_table,
+                                  const char* chassis_id);
 
 static bool
 lookup_port_cb(const void *aux_, const char *port_name, unsigned int *portp)
@@ -361,7 +362,8 @@ add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports,
                   const struct hmap *local_datapaths,
                   const struct hmap *patched_datapaths,
                   struct group_table *group_table,
-                  const struct simap *ct_zones, struct hmap *flow_table)
+                  const struct simap *ct_zones, struct hmap *flow_table,
+                  const char* chassis_id)
 {
     uint32_t conj_id_ofs = 1;
 
@@ -376,7 +378,8 @@ add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports,
     SBREC_LOGICAL_FLOW_FOR_EACH (lflow, ctx->ovnsb_idl) {
         consider_logical_flow(lports, mcgroups, lflow, local_datapaths,
                               patched_datapaths, group_table, ct_zones,
-                              &dhcp_opts, &conj_id_ofs, flow_table);
+                              &dhcp_opts, &conj_id_ofs, flow_table,
+                              chassis_id);
     }
 
     dhcp_opts_destroy(&dhcp_opts);
@@ -392,7 +395,8 @@ consider_logical_flow(const struct lport_index *lports,
                       const struct simap *ct_zones,
                       struct hmap *dhcp_opts_p,
                       uint32_t *conj_id_ofs_p,
-                      struct hmap *flow_table)
+                      struct hmap *flow_table,
+                      const char* chassis_id)
 {
     /* Determine translation of logical table IDs to physical table IDs. */
     bool ingress = !strcmp(lflow->pipeline, "ingress");
@@ -436,6 +440,30 @@ consider_logical_flow(const struct lport_index *lports,
             return;
         }
     }
+
+    /* Skip logical flow when it has an 'inport' or 'outport' to match,
+     * and the port is a VM or VIF interface, but not a local port to
+     * current chassis. */
+    if (strstr(lflow->match, "inport")
+        || strstr(lflow->match, "outport")) {
+        struct lexer lexer;
+        lexer_init(&lexer, lflow->match);
+        do {
+            lexer_get(&lexer);
+        } while (lexer.token.type != LEX_T_ID
+                 || (strcmp(lexer.token.s, "inport")
+                     && strcmp(lexer.token.s, "outport")));
+        /* Skip "==", then get logical port name. */
+        lexer_get(&lexer);
+        lexer_get(&lexer);
+        const struct sbrec_port_binding *pb
+            = lport_lookup_by_name(lports, lexer.token.s);
+        lexer_destroy(&lexer);
+        if (pb && pb->chassis && !strcmp(pb->type, "")
+            && strcmp(chassis_id, pb->chassis->name)){
+            return;
+        }
+    }
 }
 /* Determine translation of logical table IDs to physical table IDs.
  */
@@ -627,11 +655,13 @@ lflow_run(struct controller_ctx *ctx, const struct lport_index *lports,
           const struct hmap *local_datapaths,
           const struct hmap *patched_datapaths,
           struct group_table *group_table,
-          const struct simap *ct_zones, struct hmap *flow_table)
+          const struct simap *ct_zones, struct hmap *flow_table,
+          const char* chassis_id)
 {
     update_address_sets(ctx);
     add_logical_flows(ctx, lports, mcgroups, local_datapaths,
-                      patched_datapaths, group_table, ct_zones, flow_table);
+                      patched_datapaths, group_table, ct_zones, flow_table,
+                      chassis_id);
     add_neighbor_flows(ctx, lports, flow_table);
 }
 
diff --git a/ovn/controller/lflow.h b/ovn/controller/lflow.h
index e96a24b..859e614 100644
--- a/ovn/controller/lflow.h
+++ b/ovn/controller/lflow.h
@@ -66,7 +66,8 @@ void lflow_run(struct controller_ctx *, const struct lport_index *,
                const struct hmap *patched_datapaths,
                struct group_table *group_table,
                const struct simap *ct_zones,
-               struct hmap *flow_table);
+               struct hmap *flow_table,
+               const char* chassis_id);
 void lflow_destroy(void);
 
 #endif /* ovn/lflow.h */
diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
index 8471f64..c20949f 100644
--- a/ovn/controller/ovn-controller.c
+++ b/ovn/controller/ovn-controller.c
@@ -444,7 +444,7 @@ main(int argc, char *argv[])
         struct hmap flow_table = HMAP_INITIALIZER(&flow_table);
         lflow_run(&ctx, &lports, &mcgroups, &local_datapaths,
                   &patched_datapaths, &group_table, &ct_zones,
-                  &flow_table);
+                  &flow_table, chassis_id);
         if (chassis_id) {
             physical_run(&ctx, mff_ovn_geneve, br_int, chassis_id,
                          &ct_zones, &flow_table,