From patchwork Wed May 5 15:38:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Michelson X-Patchwork-Id: 1474378 Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from smtp.corp.redhat.com by lists.linuxfoundation.org; Wed, 05 May 2021
11:38:16 -0400 X-MC-Unique: LLalQ3v_NGaWNXBTIMgtaA-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 988E0107ACCA for ; Wed, 5 May 2021 15:38:15 +0000 (UTC) Received: from localhost.localdomain.com (ovpn-114-56.rdu2.redhat.com [10.10.114.56]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3D2DC60C17 for ; Wed, 5 May 2021 15:38:15 +0000 (UTC) From: Mark Michelson To: dev@openvswitch.org Date: Wed, 5 May 2021 11:38:11 -0400 Message-Id: <20210505153811.2138036-6-mmichels@redhat.com> In-Reply-To: <20210505153811.2138036-1-mmichels@redhat.com> References: <20210505153811.2138036-1-mmichels@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mmichels@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Subject: [ovs-dev] [PATCH ovn v7 5/5] northd: Flood ARPs to routers for "unreachable" addresses. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" Previously, ARP TPAs were filtered down only to "reachable" addresses. Reachable addresses are all router interface addresses, as well as NAT external addresses and load balancer VIPs that are within the subnet handled by a router's port. However, it is possible that in some configurations, CMSes purposely configure NAT or load balancer addresses on a router that are outside the router's subnets, and they expect the router to respond to ARPs for those addresses. This commit adds a higher priority flow to logical switches that makes it so ARPs targeted at "unreachable" addresses are flooded to all ports. This way, the ARPs can reach the router appropriately and receive a response. Reported at: https://bugzilla.redhat.com/show_bug.cgi?id=1929901 Signed-off-by: Mark Michelson --- northd/ovn-northd.8.xml | 8 ++ northd/ovn-northd.c | 158 +++++++++++++++++++++++++++------------- northd/ovn_northd.dl | 102 ++++++++++++++++++++------ tests/ovn-northd.at | 99 +++++++++++++++++++++++++ tests/system-ovn.at | 107 +++++++++++++++++++++++++++ 5 files changed, 402 insertions(+), 72 deletions(-) diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml index 54e88d3fa..bdfdd0ede 100644 --- a/northd/ovn-northd.8.xml +++ b/northd/ovn-northd.8.xml @@ -1535,6 +1535,14 @@ output; logical ports. +
  • + Priority-90 flows for each IP address/VIP/NAT address configured + outside its owning router port's subnet. These flows match ARP + requests and ND packets for the specific IP addresses. Matched packets + are forwarded only to the MC_FLOOD multicast group which + contains all connected logical ports. +
  • +
  • Priority-75 flows for each port connected to a logical router matching self originated ARP request/ND packets. These packets diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c index 3a4d5323f..2b84dcd33 100644 --- a/northd/ovn-northd.c +++ b/northd/ovn-northd.c @@ -6477,44 +6477,51 @@ build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op, ds_destroy(&match); } -/* - * Ingress table 19: Flows that forward ARP/ND requests only to the routers - * that own the addresses. Other ARP/ND packets are still flooded in the - * switching domain as regular broadcast. - */ static void -build_lswitch_rport_arp_req_flow_for_ip(struct sset *ips, - int addr_family, - struct ovn_port *patch_op, - struct ovn_datapath *od, - uint32_t priority, - struct hmap *lflows, - const struct ovsdb_idl_row *stage_hint) +arp_nd_ns_match(struct sset *ips, int addr_family, struct ds *match) { - struct ds match = DS_EMPTY_INITIALIZER; - struct ds actions = DS_EMPTY_INITIALIZER; /* Packets received from VXLAN tunnels have already been through the * router pipeline so we should skip them. Normally this is done by the * multicast_group implementation (VXLAN packets skip table 32 which * delivers to patch ports) but we're bypassing multicast_groups. */ - ds_put_cstr(&match, FLAGBIT_NOT_VXLAN " && "); + ds_put_cstr(match, FLAGBIT_NOT_VXLAN " && "); if (addr_family == AF_INET) { - ds_put_cstr(&match, "arp.op == 1 && arp.tpa == { "); + ds_put_cstr(match, "arp.op == 1 && arp.tpa == {"); } else { - ds_put_cstr(&match, "nd_ns && nd.target == { "); + ds_put_cstr(match, "nd_ns && nd.target == {"); } const char *ip_address; SSET_FOR_EACH (ip_address, ips) { - ds_put_format(&match, "%s, ", ip_address); + ds_put_format(match, "%s, ", ip_address); } - ds_chomp(&match, ' '); - ds_chomp(&match, ','); - ds_put_cstr(&match, "}"); + ds_chomp(match, ' '); + ds_chomp(match, ','); + ds_put_cstr(match, "}"); +} + +/* + * Ingress table 19: Flows that forward ARP/ND requests only to the routers + * that own the addresses. Other ARP/ND packets are still flooded in the + * switching domain as regular broadcast. + */ +static void +build_lswitch_rport_arp_req_flow_for_reachable_ip(struct sset *ips, + int addr_family, + struct ovn_port *patch_op, + struct ovn_datapath *od, + uint32_t priority, + struct hmap *lflows, + const struct ovsdb_idl_row *stage_hint) +{ + struct ds match = DS_EMPTY_INITIALIZER; + struct ds actions = DS_EMPTY_INITIALIZER; + + arp_nd_ns_match(ips, addr_family, &match); /* Send a the packet to the router pipeline. If the switch has non-router * ports then flood it there as well. @@ -6537,6 +6544,32 @@ build_lswitch_rport_arp_req_flow_for_ip(struct sset *ips, ds_destroy(&actions); } +/* + * Ingress table 19: Flows that forward ARP/ND requests for "unreachable" IPs + * (NAT or load balancer IPs configured on a router that are outside the router's + * configured subnets). + * These ARP/ND packets are flooded in the switching domain as regular broadcast. 
+ */ +static void +build_lswitch_rport_arp_req_flow_for_unreachable_ip(struct sset *ips, + int addr_family, + struct ovn_datapath *od, + uint32_t priority, + struct hmap *lflows, + const struct ovsdb_idl_row *stage_hint) +{ + struct ds match = DS_EMPTY_INITIALIZER; + + arp_nd_ns_match(ips, addr_family, &match); + + ovn_lflow_add_unique_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP, + priority, ds_cstr(&match), + "outport = \""MC_FLOOD"\"; output;", + stage_hint); + + ds_destroy(&match); +} + /* * Ingress table 19: Flows that forward ARP/ND requests only to the routers * that own the addresses. @@ -6563,39 +6596,48 @@ build_lswitch_rport_arp_req_flows(struct ovn_port *op, * router port. * Priority: 80. */ - struct sset all_ips_v4 = SSET_INITIALIZER(&all_ips_v4); - struct sset all_ips_v6 = SSET_INITIALIZER(&all_ips_v6); + struct sset lb_ips_v4 = SSET_INITIALIZER(&lb_ips_v4); + struct sset lb_ips_v6 = SSET_INITIALIZER(&lb_ips_v6); - get_router_load_balancer_ips(op->od, &all_ips_v4, &all_ips_v6); + get_router_load_balancer_ips(op->od, &lb_ips_v4, &lb_ips_v6); + + struct sset reachable_ips_v4 = SSET_INITIALIZER(&reachable_ips_v4); + struct sset reachable_ips_v6 = SSET_INITIALIZER(&reachable_ips_v6); + struct sset unreachable_ips_v4 = SSET_INITIALIZER(&unreachable_ips_v4); + struct sset unreachable_ips_v6 = SSET_INITIALIZER(&unreachable_ips_v6); const char *ip_addr; const char *ip_addr_next; - SSET_FOR_EACH_SAFE (ip_addr, ip_addr_next, &all_ips_v4) { + SSET_FOR_EACH_SAFE (ip_addr, ip_addr_next, &lb_ips_v4) { ovs_be32 ipv4_addr; /* Check if the ovn port has a network configured on which we could * expect ARP requests for the LB VIP. */ - if (ip_parse(ip_addr, &ipv4_addr) && - lrouter_port_ipv4_reachable(op, ipv4_addr)) { - continue; + if (ip_parse(ip_addr, &ipv4_addr)) { + if (lrouter_port_ipv4_reachable(op, ipv4_addr)) { + sset_add(&reachable_ips_v4, ip_addr); + } else { + sset_add(&unreachable_ips_v4, ip_addr); + } } - - sset_delete(&all_ips_v4, SSET_NODE_FROM_NAME(ip_addr)); } - SSET_FOR_EACH_SAFE (ip_addr, ip_addr_next, &all_ips_v6) { + SSET_FOR_EACH_SAFE (ip_addr, ip_addr_next, &lb_ips_v6) { struct in6_addr ipv6_addr; /* Check if the ovn port has a network configured on which we could * expect NS requests for the LB VIP. 
*/ - if (ipv6_parse(ip_addr, &ipv6_addr) && - lrouter_port_ipv6_reachable(op, &ipv6_addr)) { - continue; + if (ipv6_parse(ip_addr, &ipv6_addr)) { + if (lrouter_port_ipv6_reachable(op, &ipv6_addr)) { + sset_add(&reachable_ips_v6, ip_addr); + } else { + sset_add(&unreachable_ips_v6, ip_addr); + } } - - sset_delete(&all_ips_v6, SSET_NODE_FROM_NAME(ip_addr)); } + sset_destroy(&lb_ips_v4); + sset_destroy(&lb_ips_v6); for (size_t i = 0; i < op->od->nbr->n_nat; i++) { struct ovn_nat *nat_entry = &op->od->nat_entries[i]; @@ -6616,37 +6658,53 @@ build_lswitch_rport_arp_req_flows(struct ovn_port *op, struct in6_addr *addr = &nat_entry->ext_addrs.ipv6_addrs[0].addr; if (lrouter_port_ipv6_reachable(op, addr)) { - sset_add(&all_ips_v6, nat->external_ip); + sset_add(&reachable_ips_v6, nat->external_ip); + } else { + sset_add(&unreachable_ips_v6, nat->external_ip); } } else { ovs_be32 addr = nat_entry->ext_addrs.ipv4_addrs[0].addr; if (lrouter_port_ipv4_reachable(op, addr)) { - sset_add(&all_ips_v4, nat->external_ip); + sset_add(&reachable_ips_v4, nat->external_ip); + } else { + sset_add(&unreachable_ips_v4, nat->external_ip); } } } for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) { - sset_add(&all_ips_v4, op->lrp_networks.ipv4_addrs[i].addr_s); + sset_add(&reachable_ips_v4, op->lrp_networks.ipv4_addrs[i].addr_s); } for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) { - sset_add(&all_ips_v6, op->lrp_networks.ipv6_addrs[i].addr_s); + sset_add(&reachable_ips_v6, op->lrp_networks.ipv6_addrs[i].addr_s); } - if (!sset_is_empty(&all_ips_v4)) { - build_lswitch_rport_arp_req_flow_for_ip(&all_ips_v4, AF_INET, sw_op, - sw_od, 80, lflows, - stage_hint); + if (!sset_is_empty(&reachable_ips_v4)) { + build_lswitch_rport_arp_req_flow_for_reachable_ip(&reachable_ips_v4, AF_INET, sw_op, + sw_od, 80, lflows, + stage_hint); + } + if (!sset_is_empty(&reachable_ips_v6)) { + build_lswitch_rport_arp_req_flow_for_reachable_ip(&reachable_ips_v6, AF_INET6, sw_op, + sw_od, 80, lflows, + stage_hint); } - if (!sset_is_empty(&all_ips_v6)) { - build_lswitch_rport_arp_req_flow_for_ip(&all_ips_v6, AF_INET6, sw_op, - sw_od, 80, lflows, - stage_hint); + if (!sset_is_empty(&unreachable_ips_v4)) { + build_lswitch_rport_arp_req_flow_for_unreachable_ip(&unreachable_ips_v4, AF_INET, + sw_od, 90, lflows, + stage_hint); + } + if (!sset_is_empty(&unreachable_ips_v6)) { + build_lswitch_rport_arp_req_flow_for_unreachable_ip(&unreachable_ips_v6, AF_INET6, + sw_od, 90, lflows, + stage_hint); } - sset_destroy(&all_ips_v4); - sset_destroy(&all_ips_v6); + sset_destroy(&reachable_ips_v4); + sset_destroy(&reachable_ips_v6); + sset_destroy(&unreachable_ips_v4); + sset_destroy(&unreachable_ips_v6); /* Self originated ARP requests/ND need to be flooded as usual. * diff --git a/northd/ovn_northd.dl b/northd/ovn_northd.dl index 96101213d..f3ec68b75 100644 --- a/northd/ovn_northd.dl +++ b/northd/ovn_northd.dl @@ -4102,9 +4102,13 @@ UniqueFlow[Flow{.logical_datapath = sw.ls._uuid, * router port. * Priority: 80. 
*/ -function get_arp_forward_ips(rp: Ref): (Set, Set) = { - var all_ips_v4 = set_empty(); - var all_ips_v6 = set_empty(); +function get_arp_forward_ips(rp: Ref): + (Set, Set, Set, Set) = +{ + var reachable_ips_v4 = set_empty(); + var reachable_ips_v6 = set_empty(); + var unreachable_ips_v4 = set_empty(); + var unreachable_ips_v6 = set_empty(); (var lb_ips_v4, var lb_ips_v6) = get_router_load_balancer_ips(deref(rp.router)); @@ -4114,7 +4118,9 @@ function get_arp_forward_ips(rp: Ref): (Set, Set) = */ match (ip_parse(a)) { Some{ipv4} -> if (lrouter_port_ip_reachable(rp, IPv4{ipv4})) { - all_ips_v4.insert(a) + reachable_ips_v4.insert(a) + } else { + unreachable_ips_v4.insert(a) }, _ -> () } @@ -4125,7 +4131,9 @@ function get_arp_forward_ips(rp: Ref): (Set, Set) = */ match (ipv6_parse(a)) { Some{ipv6} -> if (lrouter_port_ip_reachable(rp, IPv6{ipv6})) { - all_ips_v6.insert(a) + reachable_ips_v6.insert(a) + } else { + unreachable_ips_v6.insert(a) }, _ -> () } @@ -4138,22 +4146,45 @@ function get_arp_forward_ips(rp: Ref): (Set, Set) = */ if (lrouter_port_ip_reachable(rp, nat.external_ip)) { match (nat.external_ip) { - IPv4{_} -> all_ips_v4.insert(nat.nat.external_ip), - IPv6{_} -> all_ips_v6.insert(nat.nat.external_ip) + IPv4{_} -> reachable_ips_v4.insert(nat.nat.external_ip), + IPv6{_} -> reachable_ips_v6.insert(nat.nat.external_ip) + } + } else { + match (nat.external_ip) { + IPv4{_} -> unreachable_ips_v4.insert(nat.nat.external_ip), + IPv6{_} -> unreachable_ips_v6.insert(nat.nat.external_ip), } } } }; for (a in rp.networks.ipv4_addrs) { - all_ips_v4.insert("${a.addr}") + reachable_ips_v4.insert("${a.addr}") }; for (a in rp.networks.ipv6_addrs) { - all_ips_v6.insert("${a.addr}") + reachable_ips_v6.insert("${a.addr}") }; - (all_ips_v4, all_ips_v6) + (reachable_ips_v4, reachable_ips_v6, unreachable_ips_v4, unreachable_ips_v6) } + +relation &SwitchPortARPForwards( + port: Ref, + reachable_ips_v4: Set, + reachable_ips_v6: Set, + unreachable_ips_v4: Set, + unreachable_ips_v6: Set +) + +&SwitchPortARPForwards(.port = port, + .reachable_ips_v4 = reachable_ips_v4, + .reachable_ips_v6 = reachable_ips_v6, + .unreachable_ips_v4 = unreachable_ips_v4, + .unreachable_ips_v6 = unreachable_ips_v6) :- + port in &SwitchPort(.peer = Some{rp}), + rp.is_enabled(), + (var reachable_ips_v4, var reachable_ips_v6, var unreachable_ips_v4, var unreachable_ips_v6) = get_arp_forward_ips(rp). + /* Packets received from VXLAN tunnels have already been through the * router pipeline so we should skip them. Normally this is done by the * multicast_group implementation (VXLAN packets skip table 32 which @@ -4164,8 +4195,8 @@ AnnotatedFlow(.f = Flow{.logical_datapath = sw.ls._uuid, .stage = s_SWITCH_IN_L2_LKUP(), .priority = 80, .__match = fLAGBIT_NOT_VXLAN() ++ - " && arp.op == 1 && arp.tpa == { " ++ - all_ips_v4.to_vec().join(", ") ++ "}", + " && arp.op == 1 && arp.tpa == {" ++ + ipv4.to_vec().join(", ") ++ "}", .actions = if (sw.has_non_router_port) { "clone {outport = ${sp.json_name}; output; }; " "outport = ${mc_flood_l2}; output;" @@ -4174,17 +4205,17 @@ AnnotatedFlow(.f = Flow{.logical_datapath = sw.ls._uuid, }, .external_ids = stage_hint(sp.lsp._uuid)}, .shared = not sw.has_non_router_port) :- - sp in &SwitchPort(.sw = sw, .peer = Some{rp}), - rp.is_enabled(), - (var all_ips_v4, _) = get_arp_forward_ips(rp), - not all_ips_v4.is_empty(), + sp in &SwitchPort(.sw = sw), + &SwitchPortARPForwards(.port = sp, .reachable_ips_v4 = ipv4), + not ipv4.is_empty(), var mc_flood_l2 = json_string_escape(mC_FLOOD_L2().0). 
+ AnnotatedFlow(.f = Flow{.logical_datapath = sw.ls._uuid, .stage = s_SWITCH_IN_L2_LKUP(), .priority = 80, .__match = fLAGBIT_NOT_VXLAN() ++ - " && nd_ns && nd.target == { " ++ - all_ips_v6.to_vec().join(", ") ++ "}", + " && nd_ns && nd.target == {" ++ + ipv6.to_vec().join(", ") ++ "}", .actions = if (sw.has_non_router_port) { "clone {outport = ${sp.json_name}; output; }; " "outport = ${mc_flood_l2}; output;" @@ -4193,12 +4224,39 @@ AnnotatedFlow(.f = Flow{.logical_datapath = sw.ls._uuid, }, .external_ids = stage_hint(sp.lsp._uuid)}, .shared = not sw.has_non_router_port) :- - sp in &SwitchPort(.sw = sw, .peer = Some{rp}), - rp.is_enabled(), - (_, var all_ips_v6) = get_arp_forward_ips(rp), - not all_ips_v6.is_empty(), + sp in &SwitchPort(.sw = sw), + &SwitchPortARPForwards(.port = sp, .reachable_ips_v6 = ipv6), + not ipv6.is_empty(), var mc_flood_l2 = json_string_escape(mC_FLOOD_L2().0). +AnnotatedFlow(.f = Flow{.logical_datapath = sw.ls._uuid, + .stage = s_SWITCH_IN_L2_LKUP(), + .priority = 90, + .__match = fLAGBIT_NOT_VXLAN() ++ + " && arp.op == 1 && arp.tpa == {" ++ + ipv4.to_vec().join(", ") ++ "}", + .actions = "outport = ${flood}; output;", + .external_ids = stage_hint(sp.lsp._uuid)}, + .shared = not sw.has_non_router_port) :- + sp in &SwitchPort(.sw = sw), + &SwitchPortARPForwards(.port = sp, .unreachable_ips_v4 = ipv4), + not ipv4.is_empty(), + var flood = json_string_escape(mC_FLOOD().0). + +AnnotatedFlow(.f = Flow{.logical_datapath = sw.ls._uuid, + .stage = s_SWITCH_IN_L2_LKUP(), + .priority = 90, + .__match = fLAGBIT_NOT_VXLAN() ++ + " && nd_ns && nd.target == {" ++ + ipv6.to_vec().join(", ") ++ "}", + .actions = "outport = ${flood}; output;", + .external_ids = stage_hint(sp.lsp._uuid)}, + .shared = not sw.has_non_router_port) :- + sp in &SwitchPort(.sw = sw), + &SwitchPortARPForwards(.port = sp, .unreachable_ips_v6 = ipv6), + not ipv6.is_empty(), + var flood = json_string_escape(mC_FLOOD().0). 
+ for (SwitchPortNewDynamicAddress(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = &sw}, .address = Some{addrs}) if lsp.__type != "external") { diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at index 5f464f4be..b49d96f81 100644 --- a/tests/ovn-northd.at +++ b/tests/ovn-northd.at @@ -3244,3 +3244,102 @@ check_column "" Port_Binding router_addresses logical_port=ls1-ro1 check_column "" Port_Binding router_addresses logical_port=ls2-ro2 AT_CLEANUP + +OVN_FOR_EACH_NORTHD([ +AT_SETUP([ovn -- ARP flood for unreachable addresses]) +ovn_start + +AS_BOX([Setting up the logical network]) + +# This network is the same as the one from "Router Address Propagation" +check ovn-nbctl ls-add sw + +check ovn-nbctl lr-add ro1 +check ovn-nbctl lrp-add ro1 ro1-sw 00:00:00:00:00:01 10.0.0.1/24 +check ovn-nbctl lsp-add sw sw-ro1 +check ovn-nbctl lsp-set-type sw-ro1 router +check ovn-nbctl lsp-set-addresses sw-ro1 router +check ovn-nbctl lsp-set-options sw-ro1 router-port=ro1-sw + +check ovn-nbctl lr-add ro2 +check ovn-nbctl lrp-add ro2 ro2-sw 00:00:00:00:00:02 20.0.0.1/24 +check ovn-nbctl lsp-add sw sw-ro2 +check ovn-nbctl lsp-set-type sw-ro2 router +check ovn-nbctl lsp-set-addresses sw-ro2 router +check ovn-nbctl --wait=sb lsp-set-options sw-ro2 router-port=ro2-sw + +check ovn-nbctl ls-add ls1 +check ovn-nbctl lsp-add ls1 vm1 +check ovn-nbctl lsp-set-addresses vm1 "00:00:00:00:01:02 192.168.1.2" +check ovn-nbctl lrp-add ro1 ro1-ls1 00:00:00:00:01:01 192.168.1.1/24 +check ovn-nbctl lsp-add ls1 ls1-ro1 +check ovn-nbctl lsp-set-type ls1-ro1 router +check ovn-nbctl lsp-set-addresses ls1-ro1 router +check ovn-nbctl lsp-set-options ls1-ro1 router-port=ro1-ls1 + +check ovn-nbctl ls-add ls2 +check ovn-nbctl lsp-add ls2 vm2 +check ovn-nbctl lsp-set-addresses vm2 "00:00:00:00:02:02 192.168.2.2" +check ovn-nbctl lrp-add ro2 ro2-ls2 00:00:00:00:02:01 192.168.2.1/24 +check ovn-nbctl lsp-add ls2 ls2-ro2 +check ovn-nbctl lsp-set-type ls2-ro2 router +check ovn-nbctl lsp-set-addresses ls2-ro2 router +check ovn-nbctl lsp-set-options ls2-ro2 router-port=ro2-ls2 + +AS_BOX([Ensure that unreachable flood flows are not installed, since no addresses are unreachable]) + +AT_CHECK([ovn-sbctl lflow-list sw | grep "ls_in_l2_lkup" | grep "priority=90" -c], [1], [dnl +0 +]) + +AS_BOX([Adding some reachable NAT addresses]) + +check ovn-nbctl lr-nat-add ro1 dnat 10.0.0.100 192.168.1.100 +check ovn-nbctl lr-nat-add ro1 snat 10.0.0.200 192.168.1.200/30 + +check ovn-nbctl lr-nat-add ro2 dnat 20.0.0.100 192.168.2.100 +check ovn-nbctl --wait=sb lr-nat-add ro2 snat 20.0.0.200 192.168.2.200/30 + +AS_BOX([Ensure that unreachable flood flows are not installed, since all addresses are reachable]) + +AT_CHECK([ovn-sbctl lflow-list sw | grep "ls_in_l2_lkup" | grep "priority=90" -c], [1], [dnl +0 +]) + +AS_BOX([Adding some unreachable NAT addresses]) + +check ovn-nbctl lr-nat-add ro1 dnat 30.0.0.100 192.168.1.130 +check ovn-nbctl lr-nat-add ro1 snat 30.0.0.200 192.168.1.148/30 + +check ovn-nbctl lr-nat-add ro2 dnat 40.0.0.100 192.168.2.130 +check ovn-nbctl --wait=sb lr-nat-add ro2 snat 40.0.0.200 192.168.2.148/30 + +AS_BOX([Ensure that unreachable flood flows are installed, since there are unreachable addresses]) + +ovn-sbctl lflow-list + +# We expect two flows to be installed, one per connected router port on sw +AT_CHECK([ovn-sbctl lflow-list sw | grep ls_in_l2_lkup | grep priority=90 -c], [0], [dnl +2 +]) + +# We expect that the installed flows will match the unreachable DNAT addresses only. 
+AT_CHECK([ovn-sbctl lflow-list sw | grep ls_in_l2_lkup | grep priority=90 | grep "arp.tpa == {30.0.0.100}" -c], [0], [dnl +1 +]) + +AT_CHECK([ovn-sbctl lflow-list sw | grep ls_in_l2_lkup | grep priority=90 | grep "arp.tpa == {40.0.0.100}" -c], [0], [dnl +1 +]) + +# Ensure that we do not create flows for SNAT addresses +AT_CHECK([ovn-sbctl lflow-list sw | grep ls_in_l2_lkup | grep priority=90 | grep "arp.tpa == {30.0.0.200}" -c], [1], [dnl +0 +]) + +AT_CHECK([ovn-sbctl lflow-list sw | grep ls_in_l2_lkup | grep priority=90 | grep "arp.tpa == {40.0.0.200}" -c], [1], [dnl +0 +]) + +AT_CLEANUP +]) diff --git a/tests/system-ovn.at b/tests/system-ovn.at index a0d90e574..aa48a68e8 100644 --- a/tests/system-ovn.at +++ b/tests/system-ovn.at @@ -6001,3 +6001,110 @@ OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d AT_CLEANUP ]) + +OVN_FOR_EACH_NORTHD([ +AT_SETUP([ovn -- Floating IP outside router subnet IPv4]) +AT_KEYWORDS(NAT) + +ovn_start + +OVS_TRAFFIC_VSWITCHD_START() +ADD_BR([br-int]) + +# Set external-ids in br-int needed for ovn-controller +ovs-vsctl \ + -- set Open_vSwitch . external-ids:system-id=hv1 \ + -- set Open_vSwitch . external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \ + -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \ + -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \ + -- set bridge br-int fail-mode=secure other-config:disable-in-band=true + +start_daemon ovn-controller + +# Logical network: +# Two VMs +# * VM1 with IP address 192.168.100.5 +# * VM2 with IP address 192.168.200.5 +# +# VM1 connects to logical switch ls1. ls1 connects to logical router lr1. +# VM2 connects to logical switch ls2. ls2 connects to logical router lr2. +# lr1 and lr2 both connect to logical switch ls-pub. +# * lr1's interface that connects to ls-pub has IP address 172.18.2.110/24 +# * lr2's interface that connects to ls-pub has IP address 172.18.1.173/24 +# +# lr1 has the following attributes: +# * It has a DNAT rule that translates 172.18.2.11 to 192.168.100.5 (VM1) +# +# lr2 has the following attributes: +# * It has a DNAT rule that translates 172.18.2.12 to 192.168.200.5 (VM2) +# +# In this test, we want to ensure that a ping from VM1 to IP address 172.18.2.12 reaches VM2. +# When the NAT rules are set up, there should be MAC_Bindings created that allow for traffic +# to exit lr1, go through ls-pub, and reach the NAT external IP configured on lr2. 
+ +check ovn-nbctl ls-add ls1 +check ovn-nbctl lsp-add ls1 vm1 -- lsp-set-addresses vm1 "00:00:00:00:01:05 192.168.100.5" + +check ovn-nbctl ls-add ls2 +check ovn-nbctl lsp-add ls2 vm2 -- lsp-set-addresses vm2 "00:00:00:00:02:05 192.168.200.5" + +check ovn-nbctl ls-add ls-pub + +check ovn-nbctl lr-add lr1 +check ovn-nbctl lrp-add lr1 lr1-ls1 00:00:00:00:01:01 192.168.100.1/24 +check ovn-nbctl lsp-add ls1 ls1-lr1 \ + -- lsp-set-type ls1-lr1 router \ + -- lsp-set-addresses ls1-lr1 router \ + -- lsp-set-options ls1-lr1 router-port=lr1-ls1 + +check ovn-nbctl lr-add lr2 +check ovn-nbctl lrp-add lr2 lr2-ls2 00:00:00:00:02:01 192.168.200.1/24 +check ovn-nbctl lsp-add ls2 ls2-lr2 \ + -- lsp-set-type ls2-lr2 router \ + -- lsp-set-addresses ls2-lr2 router \ + -- lsp-set-options ls2-lr2 router-port=lr2-ls2 + +check ovn-nbctl lrp-add lr1 lr1-ls-pub 00:00:00:00:03:01 172.18.2.110/24 +check ovn-nbctl lrp-set-gateway-chassis lr1-ls-pub hv1 +check ovn-nbctl lsp-add ls-pub ls-pub-lr1 \ + -- lsp-set-type ls-pub-lr1 router \ + -- lsp-set-addresses ls-pub-lr1 router \ + -- lsp-set-options ls-pub-lr1 router-port=lr1-ls-pub + +check ovn-nbctl lrp-add lr2 lr2-ls-pub 00:00:00:00:03:02 172.18.1.173/24 +check ovn-nbctl lrp-set-gateway-chassis lr2-ls-pub hv1 +check ovn-nbctl lsp-add ls-pub ls-pub-lr2 \ + -- lsp-set-type ls-pub-lr2 router \ + -- lsp-set-addresses ls-pub-lr2 router \ + -- lsp-set-options ls-pub-lr2 router-port=lr2-ls-pub + +check ovn-nbctl lr-nat-add lr1 dnat_and_snat 172.18.2.11 192.168.100.5 vm1 00:00:00:00:03:01 + +check ovn-nbctl lr-nat-add lr2 dnat_and_snat 172.18.2.12 192.168.200.5 vm2 00:00:00:00:03:02 +check ovn-nbctl lr-route-add lr2 172.18.2.11 172.18.2.110 lr2-ls-pub + +ADD_NAMESPACES(vm1) +ADD_VETH(vm1, vm1, br-int, "192.168.100.5/24", "00:00:00:00:01:05", \ + "192.168.100.1") + +ADD_NAMESPACES(vm2) +ADD_VETH(vm2, vm2, br-int, "192.168.200.5/24", "00:00:00:00:02:05", \ + "192.168.200.1") + +OVN_POPULATE_ARP +check ovn-nbctl --wait=hv sync + +AS_BOX([Checking for NAT-related MAC_Bindings]) + +check_column "172.18.1.173 172.18.2.12" MAC_Binding ip chassis_name=hv1 mac="\"00:00:00:00:03:02\"" logical_port="lr1-ls-pub" +check_column "172.18.2.11 172.18.2.110" MAC_Binding ip chassis_name=hv1 mac="\"00:00:00:00:03:01\"" logical_port="lr2-ls-pub" + +AS_BOX([Testing a ping]) + +NS_CHECK_EXEC([vm1], [ping -q -c 3 -i 0.3 -w 2 172.18.2.12 | FORMAT_PING], \ +[0], [dnl +3 packets transmitted, 3 received, 0% packet loss, time 0ms +]) + +AT_CLEANUP +])
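
---

Not part of the patch: for reviewers who want to see the shape of the match that the new shared arp_nd_ns_match() helper assembles for both the reachable (priority-80) and unreachable (priority-90) flows, here is a minimal, self-contained C sketch. It uses plain snprintf() instead of OVN's struct ds/struct sset; the FLAGBIT_NOT_VXLAN expansion and the example addresses (borrowed from the ovn-northd.at test above) are assumptions for illustration only.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Assumed expansion of FLAGBIT_NOT_VXLAN; see northd/ovn-northd.c. */
#define FLAGBIT_NOT_VXLAN "flags[1] == 0"

/* Builds the same kind of match string as arp_nd_ns_match(), but with
 * snprintf() instead of OVN's struct ds/struct sset helpers. */
static void
arp_nd_ns_match_sketch(const char **ips, size_t n_ips, bool is_ipv4,
                       char *out, size_t out_size)
{
    /* Skip packets received from VXLAN tunnels; they have already been
     * through the router pipeline (same reasoning as the patch comment). */
    snprintf(out, out_size, "%s && %s {", FLAGBIT_NOT_VXLAN,
             is_ipv4 ? "arp.op == 1 && arp.tpa =="
                     : "nd_ns && nd.target ==");

    for (size_t i = 0; i < n_ips; i++) {
        size_t len = strlen(out);
        snprintf(out + len, out_size - len, "%s%s", i ? ", " : "", ips[i]);
    }
    size_t len = strlen(out);
    snprintf(out + len, out_size - len, "}");
}

int
main(void)
{
    /* Illustrative addresses; 30.0.0.100 is one of the unreachable DNAT
     * addresses used in the ovn-northd.at test above. */
    const char *ips[] = { "30.0.0.100", "40.0.0.100" };
    char match[256];

    arp_nd_ns_match_sketch(ips, 2, true, match, sizeof match);

    /* Prints:
     *   flags[1] == 0 && arp.op == 1 && arp.tpa == {30.0.0.100, 40.0.0.100}
     * For unreachable addresses, northd pairs a match of this shape with an
     * action that floods the packet to the MC_FLOOD multicast group, at
     * priority 90 in ls_in_l2_lkup. */
    printf("%s\n", match);
    return 0;
}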