From patchwork Mon Feb 29 06:33:22 2016
X-Patchwork-Submitter: Gurucharan Shetty
X-Patchwork-Id: 590033
From: Gurucharan Shetty
To: dev@openvswitch.org
Date: Sun, 28 Feb 2016 22:33:22 -0800
Message-Id: <1456727604-15784-7-git-send-email-guru@ovn.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1456727604-15784-1-git-send-email-guru@ovn.org>
References: <1456727604-15784-1-git-send-email-guru@ovn.org>
Subject: [ovs-dev] [RFC 6/8] ovn-northd: Pre-loadbalancing table.

This new table sits before the Pre-STATEFUL table and sets 'reg0' to 1
if the destination IP address of a packet is a VIP in a load balancer
object.  Setting 'reg0' to 1 sends the packet through conntrack to get
its status (or to track it).

Signed-off-by: Gurucharan Shetty
---
 ovn/northd/ovn-northd.8.xml | 47 ++++++++++++++++++++++++++++++---------------
 ovn/northd/ovn-northd.c     | 46 +++++++++++++++++++++++++++++++++++---------
 2 files changed, 69 insertions(+), 24 deletions(-)

diff --git a/ovn/northd/ovn-northd.8.xml b/ovn/northd/ovn-northd.8.xml
index b764848..3117b9a 100644
--- a/ovn/northd/ovn-northd.8.xml
+++ b/ovn/northd/ovn-northd.8.xml
@@ -150,17 +150,28 @@
       advancing to table 3.
     </p>
 
-    <h3>Ingress Table 2: Pre-STATEFUL</h3>
+    <h3>Ingress Table 2: Pre-loadbalancer</h3>
 
     <p>
-      Ingress table 2 prepares flows for all possible stateful processing
+      Ingress table 2 prepares flows for possible loadbalancing
+      in table 4.  It contains a priority-0 flow that simply moves
+      traffic to next table.  If the destination IP of the packet is a
+      VIP configured in the loadbalancer table, a priority-100 flow
+      is added that sets a hint (with reg0 = 1) for table 3 to send
+      IP packets to the connection tracker before advancing to table 4.
+    </p>
+
+    <h3>Ingress Table 3: Pre-STATEFUL</h3>
+
+    <p>
+      Ingress table 3 prepares flows for all possible stateful processing
       in next tables.  It contains a priority-0 flow that simply moves
-      traffic to table 3.  A priority-100 flow sends the packets to connection
+      traffic to table 4.  A priority-100 flow sends the packets to connection
       tracker based on a hint provided by the previous tables (with a match
       for reg0 == 1).
     </p>
 
-    <h3>Ingress table 3: from-lport ACLs</h3>
+    <h3>Ingress table 4: from-lport ACLs</h3>
 
     <p>
       Logical flows in this table closely reproduce those in the
@@ -175,7 +186,7 @@
     </p>
 
     <p>
-      Ingress table 3 also contains a priority 0 flow with action
+      Ingress table 4 also contains a priority 0 flow with action
       <code>next;</code>, so that ACLs allow packets by default.  If the
       logical datapath has a stateful ACL, the following flows will also
       be added:
@@ -207,7 +218,7 @@
       </li>
     </ul>
 
-    <h3>Ingress Table 4: STATEFUL</h3>
+    <h3>Ingress Table 5: STATEFUL</h3>
 
     <p>
       It contains a priority-0 flow that simply moves traffic to table 5.
@@ -215,7 +226,7 @@
       provided by the previous tables (with a match for reg1 == 1).
     </p>
 
-    <h3>Ingress Table 5: Destination Lookup</h3>
+    <h3>Ingress Table 6: Destination Lookup</h3>
 
     <p>
       This table implements switching behavior.  It contains these logical
@@ -264,32 +275,38 @@
       output;
     </p>
 
-    <h3>Egress Table 0: <code>to-lport</code> Pre-ACLs</h3>
+    <h3>Egress Table 0: Pre-loadbalancer</h3>
+
+    <p>
+      This is similar to ingress table 2.
+    </p>
+
+    <h3>Egress Table 1: <code>to-lport</code> Pre-ACLs</h3>
 
     <p>
       This is similar to ingress table 1 except for <code>to-lport</code>
       traffic.
     </p>
 
-    <h3>Egress Table 1: Pre-STATEFUL</h3>
+    <h3>Egress Table 2: Pre-STATEFUL</h3>
 
     <p>
-      This is similar to ingress table 2.
+      This is similar to ingress table 3.
     </p>
 
-    <h3>Egress Table 2: <code>to-lport</code> ACLs</h3>
+    <h3>Egress Table 3: <code>to-lport</code> ACLs</h3>
 
     <p>
-      This is similar to ingress table 3 except for <code>to-lport</code> ACLs.
+      This is similar to ingress table 4 except for <code>to-lport</code> ACLs.
     </p>
 
-    <h3>Egress Table 3: STATEFUL</h3>
+    <h3>Egress Table 4: STATEFUL</h3>
 
     <p>
-      This is similar to ingress table 4.
+      This is similar to ingress table 5.
     </p>
 
-    <h3>Egress Table 4: Egress Port Security</h3>
+    <h3>Egress Table 5: Egress Port Security</h3>
 
     <p>
       This is similar to the ingress port security logic in ingress table 0,
diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
index 9e30bc0..28f5b45 100644
--- a/ovn/northd/ovn-northd.c
+++ b/ovn/northd/ovn-northd.c
@@ -87,17 +87,19 @@ enum ovn_stage {
     /* Logical switch ingress stages. */                                  \
     PIPELINE_STAGE(SWITCH, IN,  PORT_SEC,     0, "ls_in_port_sec")        \
     PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,      1, "ls_in_pre_acl")         \
-    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL, 2, "ls_in_pre_stateful")    \
-    PIPELINE_STAGE(SWITCH, IN,  ACL,          3, "ls_in_acl")             \
-    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,     4, "ls_in_stateful")        \
-    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,      5, "ls_in_l2_lkup")         \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,       2, "ls_in_pre_lb")          \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL, 3, "ls_in_pre_stateful")    \
+    PIPELINE_STAGE(SWITCH, IN,  ACL,          4, "ls_in_acl")             \
+    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,     5, "ls_in_stateful")        \
+    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,      6, "ls_in_l2_lkup")         \
                                                                           \
     /* Logical switch egress stages. */                                   \
-    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
-    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 1, "ls_out_pre_stateful")   \
-    PIPELINE_STAGE(SWITCH, OUT, ACL,          2, "ls_out_acl")            \
-    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     3, "ls_out_stateful")       \
-    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC,     4, "ls_out_port_sec")       \
+    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       0, "ls_out_pre_lb")         \
+    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      1, "ls_out_pre_acl")        \
+    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
+    PIPELINE_STAGE(SWITCH, OUT, ACL,          3, "ls_out_acl")            \
+    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     4, "ls_out_stateful")       \
+    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC,     5, "ls_out_port_sec")       \
                                                                           \
     /* Logical router ingress stages. */                                  \
     PIPELINE_STAGE(ROUTER, IN,  ADMISSION,    0, "lr_in_admission")       \
@@ -1024,6 +1026,31 @@ build_pre_acls(struct ovn_datapath *od, struct hmap *lflows,
 }
 
 static void
+build_pre_lb(struct ovn_datapath *od, struct hmap *lflows)
+{
+    /* Allow all packets to go to next tables by default. */
+    ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_LB, 0, "1", "next;");
+    ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_LB, 0, "1", "next;");
+
+    if (od->nbs->loadbalancer) {
+        struct nbrec_load_balancer *lb = od->nbs->loadbalancer;
+        struct smap *vips = &lb->vips;
+        struct smap_node *node;
+
+        SMAP_FOR_EACH (node, vips) {
+            struct ds match = DS_EMPTY_INITIALIZER;
+
+            ds_put_format(&match, "ip && ip4.dst == %s", node->key);
+            ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_LB,
+                          100, ds_cstr(&match), "reg0 = 1; next;");
+            ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_LB,
+                          100, "ip", "reg0 = 1; next;");
+            ds_destroy(&match);
+        }
+    }
+}
+
+static void
 build_pre_stateful(struct ovn_datapath *od, struct hmap *lflows)
 {
     /* Ingress and Egress Pre-STATEFUL Table (Priority 0): Packets are
@@ -1177,6 +1204,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
         }
 
         build_pre_acls(od, lflows, ports);
+        build_pre_lb(od, lflows);
         build_pre_stateful(od, lflows);
         build_acls(od, lflows);
         build_stateful(od, lflows);
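
To make the effect of the new stages easier to picture, here is a small
standalone sketch (not part of the patch).  It prints the Pre-LB logical
flows that build_pre_lb() above would install for two made-up VIPs
(10.0.0.10 and 10.0.0.11 are hypothetical addresses, standing in for the
keys of the load balancer's 'vips' map).  The stage names, priorities and
match/action strings are taken from the patch; everything else is plain
standard C:

#include <stdio.h>

int
main(void)
{
    /* Hypothetical VIPs, standing in for the smap keys that
     * SMAP_FOR_EACH walks in build_pre_lb(). */
    const char *vips[] = { "10.0.0.10", "10.0.0.11" };
    const int n_vips = sizeof vips / sizeof vips[0];

    /* Priority-0 defaults: everything else just moves to the next table. */
    printf("ls_in_pre_lb   prio=0    match=\"1\"  action=\"next;\"\n");
    printf("ls_out_pre_lb  prio=0    match=\"1\"  action=\"next;\"\n");

    for (int i = 0; i < n_vips; i++) {
        char match[128];

        /* Same match string that ds_put_format() builds in the patch. */
        snprintf(match, sizeof match, "ip && ip4.dst == %s", vips[i]);
        printf("ls_in_pre_lb   prio=100  match=\"%s\"  "
               "action=\"reg0 = 1; next;\"\n", match);

        /* The egress stage matches all IP traffic, as in the patch. */
        printf("ls_out_pre_lb  prio=100  match=\"ip\"  "
               "action=\"reg0 = 1; next;\"\n");
    }
    return 0;
}

Running it shows, per VIP, the ingress flow that sets reg0 = 1 on traffic
destined to the VIP, plus the egress flow on all IP traffic, so that the
Pre-STATEFUL stages that follow send those packets to conntrack.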