From patchwork Fri Feb 10 11:18:26 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 1740430
From: Lorenzo Bianconi
To: ovs-dev@openvswitch.org
Date: Fri, 10 Feb 2023 12:18:26 +0100
Message-Id: <7885b0f383870557c415ced70efdbe6dc168beb3.1676027700.git.lorenzo.bianconi@redhat.com>
X-Mailer: git-send-email 2.39.1
Subject: [ovs-dev] [PATCH v3 ovn] Add IPv6 support for lb health-check

Similar to the IPv4 counterpart, introduce IPv6 load-balancer health check
support.
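For example (adapted from the tests added by this patch; the load-balancer,
port and address names are just the ones used there), an IPv6 load balancer
with a TCP health check can be configured as:

    ovn-nbctl lb-add lb1 [2001::a]:80 [2001::3]:80,[2002::3]:80
    ovn-nbctl set load_balancer lb1 \
        ip_port_mappings:\"[2001::3]\"=\"sw0-p1:[2001::2]\" \
        ip_port_mappings:\"[2002::3]\"=\"sw1-p1:[2002::2]\"
    ovn-nbctl -- --id=@hc create Load_Balancer_Health_Check \
        vip="\[2001\:\:a\]\:80" -- add Load_Balancer lb1 health_check @hc

Note that, unlike IPv4, the backend IPs used as ip_port_mappings keys and the
health-check source IPs are enclosed in square brackets.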
Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2136094 Acked-by: Mark Michelson Signed-off-by: Lorenzo Bianconi Reviewed-by: Ales Musil --- Changes since v2: - cosmetics Changes since v1: - fix potential crash in ovn-northd - improve documentation --- controller/pinctrl.c | 216 ++++++++++++++++++++++++------------- northd/northd.c | 74 +++++++++---- northd/ovn-northd.8.xml | 17 +++ ovn-nb.xml | 21 ++-- tests/ovn.at | 201 ++++++++++++++++++++++++++++++++++- tests/system-ovn.at | 230 +++++++++++++++++++++++++++++++++++++++- 6 files changed, 656 insertions(+), 103 deletions(-) diff --git a/controller/pinctrl.c b/controller/pinctrl.c index ffceb7e5f..f8ebc3a9e 100644 --- a/controller/pinctrl.c +++ b/controller/pinctrl.c @@ -6736,9 +6736,10 @@ sync_svc_monitors(struct ovsdb_idl_txn *ovnsb_idl_txn, struct in6_addr ip_addr; ovs_be32 ip4; - if (ip_parse(sb_svc_mon->ip, &ip4)) { + bool is_ipv4 = ip_parse(sb_svc_mon->ip, &ip4); + if (is_ipv4) { ip_addr = in6_addr_mapped_ipv4(ip4); - } else { + } else if (!ipv6_parse(sb_svc_mon->ip, &ip_addr)) { continue; } @@ -6751,16 +6752,27 @@ sync_svc_monitors(struct ovsdb_idl_txn *ovnsb_idl_txn, continue; } - for (size_t j = 0; j < laddrs.n_ipv4_addrs; j++) { - if (ip4 == laddrs.ipv4_addrs[j].addr) { - ea = laddrs.ea; - mac_found = true; - break; + if (is_ipv4) { + for (size_t j = 0; j < laddrs.n_ipv4_addrs; j++) { + if (ip4 == laddrs.ipv4_addrs[j].addr) { + ea = laddrs.ea; + mac_found = true; + break; + } + } + } else { + for (size_t j = 0; j < laddrs.n_ipv6_addrs; j++) { + if (IN6_ARE_ADDR_EQUAL(&ip_addr, + &laddrs.ipv6_addrs[j].addr)) { + ea = laddrs.ea; + mac_found = true; + break; + } } } - if (!mac_found && !laddrs.n_ipv4_addrs) { - /* IPv4 address(es) are not configured. Use the first mac. */ + if (!mac_found && !laddrs.n_ipv4_addrs && !laddrs.n_ipv6_addrs) { + /* IP address(es) are not configured. Use the first mac. */ ea = laddrs.ea; mac_found = true; } @@ -6794,7 +6806,7 @@ sync_svc_monitors(struct ovsdb_idl_txn *ovnsb_idl_txn, svc_mon->port_key = port_key; svc_mon->proto_port = sb_svc_mon->port; svc_mon->ip = ip_addr; - svc_mon->is_ip6 = false; + svc_mon->is_ip6 = !is_ipv4; svc_mon->state = SVC_MON_S_INIT; svc_mon->status = SVC_MON_ST_UNKNOWN; svc_mon->protocol = protocol; @@ -7562,26 +7574,30 @@ svc_monitor_send_tcp_health_check__(struct rconn *swconn, ovs_be32 tcp_ack, ovs_be16 tcp_src) { - if (svc_mon->is_ip6) { - return; - } - /* Compose a TCP-SYN packet. 
*/ uint64_t packet_stub[128 / 8]; struct dp_packet packet; + dp_packet_use_stub(&packet, packet_stub, sizeof packet_stub); struct eth_addr eth_src; eth_addr_from_string(svc_mon->sb_svc_mon->src_mac, ð_src); - ovs_be32 ip4_src; - ip_parse(svc_mon->sb_svc_mon->src_ip, &ip4_src); - - dp_packet_use_stub(&packet, packet_stub, sizeof packet_stub); - pinctrl_compose_ipv4(&packet, eth_src, svc_mon->ea, - ip4_src, in6_addr_get_mapped_ipv4(&svc_mon->ip), - IPPROTO_TCP, 63, TCP_HEADER_LEN); + if (svc_mon->is_ip6) { + struct in6_addr ip6_src; + ipv6_parse(svc_mon->sb_svc_mon->src_ip, &ip6_src); + pinctrl_compose_ipv6(&packet, eth_src, svc_mon->ea, + &ip6_src, &svc_mon->ip, IPPROTO_TCP, + 63, TCP_HEADER_LEN); + } else { + ovs_be32 ip4_src; + ip_parse(svc_mon->sb_svc_mon->src_ip, &ip4_src); + pinctrl_compose_ipv4(&packet, eth_src, svc_mon->ea, + ip4_src, in6_addr_get_mapped_ipv4(&svc_mon->ip), + IPPROTO_TCP, 63, TCP_HEADER_LEN); + } struct tcp_header *th = dp_packet_l4(&packet); dp_packet_set_l4(&packet, th); + th->tcp_csum = 0; th->tcp_dst = htons(svc_mon->proto_port); th->tcp_src = tcp_src; @@ -7592,7 +7608,11 @@ svc_monitor_send_tcp_health_check__(struct rconn *swconn, th->tcp_winsz = htons(65160); uint32_t csum; - csum = packet_csum_pseudoheader(dp_packet_l3(&packet)); + if (svc_mon->is_ip6) { + csum = packet_csum_pseudoheader6(dp_packet_l3(&packet)); + } else { + csum = packet_csum_pseudoheader(dp_packet_l3(&packet)); + } csum = csum_continue(csum, th, dp_packet_size(&packet) - ((const unsigned char *)th - (const unsigned char *)dp_packet_eth(&packet))); @@ -7627,21 +7647,26 @@ svc_monitor_send_udp_health_check(struct rconn *swconn, struct svc_monitor *svc_mon, ovs_be16 udp_src) { - if (svc_mon->is_ip6) { - return; - } - struct eth_addr eth_src; eth_addr_from_string(svc_mon->sb_svc_mon->src_mac, ð_src); - ovs_be32 ip4_src; - ip_parse(svc_mon->sb_svc_mon->src_ip, &ip4_src); uint64_t packet_stub[128 / 8]; struct dp_packet packet; dp_packet_use_stub(&packet, packet_stub, sizeof packet_stub); - pinctrl_compose_ipv4(&packet, eth_src, svc_mon->ea, - ip4_src, in6_addr_get_mapped_ipv4(&svc_mon->ip), - IPPROTO_UDP, 63, UDP_HEADER_LEN + 8); + + if (svc_mon->is_ip6) { + struct in6_addr ip6_src; + ipv6_parse(svc_mon->sb_svc_mon->src_ip, &ip6_src); + pinctrl_compose_ipv6(&packet, eth_src, svc_mon->ea, + &ip6_src, &svc_mon->ip, IPPROTO_UDP, + 63, UDP_HEADER_LEN + 8); + } else { + ovs_be32 ip4_src; + ip_parse(svc_mon->sb_svc_mon->src_ip, &ip4_src); + pinctrl_compose_ipv4(&packet, eth_src, svc_mon->ea, + ip4_src, in6_addr_get_mapped_ipv4(&svc_mon->ip), + IPPROTO_UDP, 63, UDP_HEADER_LEN + 8); + } struct udp_header *uh = dp_packet_l4(&packet); dp_packet_set_l4(&packet, uh); @@ -7649,6 +7674,16 @@ svc_monitor_send_udp_health_check(struct rconn *swconn, uh->udp_src = udp_src; uh->udp_len = htons(UDP_HEADER_LEN + 8); uh->udp_csum = 0; + if (svc_mon->is_ip6) { + uint32_t csum = packet_csum_pseudoheader6(dp_packet_l3(&packet)); + csum = csum_continue(csum, uh, dp_packet_size(&packet) - + ((const unsigned char *) uh - + (const unsigned char *) dp_packet_eth(&packet))); + uh->udp_csum = csum_finish(csum); + if (!uh->udp_csum) { + uh->udp_csum = htons(0xffff); + } + } uint64_t ofpacts_stub[4096 / 8]; struct ofpbuf ofpacts = OFPBUF_STUB_INITIALIZER(ofpacts_stub); @@ -7711,6 +7746,7 @@ svc_monitors_run(struct rconn *swconn, long long int current_time = time_msec(); long long int next_run_time = LLONG_MAX; enum svc_monitor_status old_status = svc_mon->status; + switch (svc_mon->state) { case SVC_MON_S_INIT: 
svc_monitor_send_health_check(swconn, svc_mon); @@ -7841,32 +7877,38 @@ pinctrl_handle_svc_check(struct rconn *swconn, const struct flow *ip_flow, uint32_t port_key = md->flow.regs[MFF_LOG_INPORT - MFF_REG0]; struct in6_addr ip_addr; struct eth_header *in_eth = dp_packet_data(pkt_in); - struct ip_header *in_ip = dp_packet_l3(pkt_in); + uint8_t ip_proto; - if (in_ip->ip_proto != IPPROTO_TCP && in_ip->ip_proto != IPPROTO_ICMP) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, - "handle service check: Unsupported protocol - [%x]", - in_ip->ip_proto); - return; + if (in_eth->eth_type == htons(ETH_TYPE_IP)) { + struct ip_header *in_ip = dp_packet_l3(pkt_in); + uint16_t in_ip_len = ntohs(in_ip->ip_tot_len); + if (in_ip_len < IP_HEADER_LEN) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + VLOG_WARN_RL(&rl, + "IP packet with invalid length (%u)", + in_ip_len); + return; + } + + ip_addr = in6_addr_mapped_ipv4(ip_flow->nw_src); + ip_proto = in_ip->ip_proto; + } else { + struct ovs_16aligned_ip6_hdr *in_ip = dp_packet_l3(pkt_in); + ip_addr = ip_flow->ipv6_src; + ip_proto = in_ip->ip6_nxt; } - uint16_t in_ip_len = ntohs(in_ip->ip_tot_len); - if (in_ip_len < IP_HEADER_LEN) { + if (ip_proto != IPPROTO_TCP && ip_proto != IPPROTO_ICMP && + ip_proto != IPPROTO_ICMPV6) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); VLOG_WARN_RL(&rl, - "IP packet with invalid length (%u)", - in_ip_len); + "handle service check: Unsupported protocol - [%x]", + ip_proto); return; } - if (in_eth->eth_type == htons(ETH_TYPE_IP)) { - ip_addr = in6_addr_mapped_ipv4(ip_flow->nw_src); - } else { - ip_addr = ip_flow->ipv6_dst; - } - if (in_ip->ip_proto == IPPROTO_TCP) { + if (ip_proto == IPPROTO_TCP) { uint32_t hash = hash_bytes(&ip_addr, sizeof ip_addr, hash_3words(dp_key, port_key, ntohs(ip_flow->tp_src))); @@ -7883,44 +7925,68 @@ pinctrl_handle_svc_check(struct rconn *swconn, const struct flow *ip_flow, } pinctrl_handle_tcp_svc_check(swconn, pkt_in, svc_mon); } else { - /* It's ICMP packet. */ - struct icmp_header *ih = dp_packet_l4(pkt_in); - if (!ih) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, "ICMPv4 packet with invalid header"); - return; - } - - if (ih->icmp_type != ICMP4_DST_UNREACH || ih->icmp_code != 3) { - return; - } - + struct udp_header *orig_uh; const char *end = (char *)dp_packet_l4(pkt_in) + dp_packet_l4_size(pkt_in); - const struct ip_header *orig_ip_hr = - dp_packet_get_icmp_payload(pkt_in); - if (!orig_ip_hr) { + void *l4h = dp_packet_l4(pkt_in); + if (!l4h) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, "Original IP datagram not present in " - "ICMP packet"); + VLOG_WARN_RL(&rl, "ICMP packet with invalid header"); return; } - if (ntohs(orig_ip_hr->ip_tot_len) != - (IP_HEADER_LEN + UDP_HEADER_LEN + 8)) { + const void *in_ip = dp_packet_get_icmp_payload(pkt_in); + if (!in_ip) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, "Invalid original IP datagram length present " - "in ICMP packet"); + VLOG_WARN_RL(&rl, "Original IP datagram not present in " + "ICMP packet"); return; } - struct udp_header *orig_uh = (struct udp_header *) (orig_ip_hr + 1); - if ((char *)orig_uh >= end) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, "Invalid UDP header in the original " - "IP datagram"); - return; + if (in_eth->eth_type == htons(ETH_TYPE_IP)) { + struct icmp_header *ih = l4h; + /* It's ICMP packet. 
*/ + if (ih->icmp_type != ICMP4_DST_UNREACH || ih->icmp_code != 3) { + return; + } + + const struct ip_header *orig_ip_hr = in_ip; + if (ntohs(orig_ip_hr->ip_tot_len) != + (IP_HEADER_LEN + UDP_HEADER_LEN + 8)) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + VLOG_WARN_RL(&rl, "Invalid original IP datagram length " + "present in ICMP packet"); + return; + } + + orig_uh = (struct udp_header *) (orig_ip_hr + 1); + if ((char *) orig_uh >= end) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + VLOG_WARN_RL(&rl, "Invalid UDP header in the original " + "IP datagram"); + return; + } + } else { + struct icmp6_header *ih6 = l4h; + if (ih6->icmp6_type != 1 || ih6->icmp6_code != 4) { + return; + } + + const struct ovs_16aligned_ip6_hdr *ip6_hdr = in_ip; + if (ntohs(ip6_hdr->ip6_plen) != UDP_HEADER_LEN + 8) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + VLOG_WARN_RL(&rl, "Invalid original IP datagram length " + "present in ICMP packet"); + } + + orig_uh = (struct udp_header *) (ip6_hdr + 1); + if ((char *) orig_uh >= end) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + VLOG_WARN_RL(&rl, "Invalid UDP header in the original " + "IP datagram"); + return; + } } uint32_t hash = diff --git a/northd/northd.c b/northd/northd.c index 77e105b86..51a9d92cb 100644 --- a/northd/northd.c +++ b/northd/northd.c @@ -3809,8 +3809,13 @@ ovn_lb_svc_create(struct ovsdb_idl_txn *ovnsb_txn, struct ovn_northd_lb *lb, struct ovn_port *op = NULL; char *svc_mon_src_ip = NULL; + + bool ipv6 = !IN6_IS_ADDR_V4MAPPED(&lb_vip->vip); + struct ds key = DS_EMPTY_INITIALIZER; + ds_put_format(&key, ipv6 ? "[%s]" : "%s", backend->ip_str); + const char *s = smap_get(&lb->nlb->ip_port_mappings, - backend->ip_str); + ds_cstr(&key)); if (s) { char *port_name = xstrdup(s); char *p = strstr(port_name, ":"); @@ -3818,10 +3823,21 @@ ovn_lb_svc_create(struct ovsdb_idl_txn *ovnsb_txn, struct ovn_northd_lb *lb, *p = 0; p++; op = ovn_port_find(ports, port_name); - svc_mon_src_ip = xstrdup(p); + if (ipv6) { + char *t, *q = strstr(p, "["); + p = NULL; + if (q && (t = strstr(q + 1, "]"))) { + p = q + 1; + *t = 0; + } + } + if (p) { + svc_mon_src_ip = xstrdup(p); + } } free(port_name); } + ds_destroy(&key); backend_nb->op = op; backend_nb->svc_mon_src_ip = svc_mon_src_ip; @@ -3902,7 +3918,8 @@ build_lb_vip_actions(struct ovn_lb_vip *lb_vip, } n_active_backends++; - ds_put_format(action, "%s:%"PRIu16",", + bool ipv6 = !IN6_IS_ADDR_V4MAPPED(&backend->ip); + ds_put_format(action, ipv6 ? 
"[%s]:%"PRIu16"," : "%s:%"PRIu16",", backend->ip_str, backend->port); } ds_chomp(action, ','); @@ -8752,6 +8769,7 @@ build_lswitch_arp_nd_service_monitor(struct ovn_northd_lb *lb, continue; } + struct ovn_lb_vip *lb_vip = &lb->vips[i]; for (size_t j = 0; j < lb_vip_nb->n_backends; j++) { struct ovn_northd_lb_backend *backend_nb = &lb_vip_nb->backends_nb[j]; @@ -8760,22 +8778,42 @@ build_lswitch_arp_nd_service_monitor(struct ovn_northd_lb *lb, } ds_clear(match); - ds_put_format(match, "arp.tpa == %s && arp.op == 1", - backend_nb->svc_mon_src_ip); ds_clear(actions); - ds_put_format(actions, - "eth.dst = eth.src; " - "eth.src = %s; " - "arp.op = 2; /* ARP reply */ " - "arp.tha = arp.sha; " - "arp.sha = %s; " - "arp.tpa = arp.spa; " - "arp.spa = %s; " - "outport = inport; " - "flags.loopback = 1; " - "output;", - svc_monitor_mac, svc_monitor_mac, - backend_nb->svc_mon_src_ip); + if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) { + ds_put_format(match, "arp.tpa == %s && arp.op == 1", + backend_nb->svc_mon_src_ip); + ds_put_format(actions, + "eth.dst = eth.src; " + "eth.src = %s; " + "arp.op = 2; /* ARP reply */ " + "arp.tha = arp.sha; " + "arp.sha = %s; " + "arp.tpa = arp.spa; " + "arp.spa = %s; " + "outport = inport; " + "flags.loopback = 1; " + "output;", + svc_monitor_mac, svc_monitor_mac, + backend_nb->svc_mon_src_ip); + } else { + ds_put_format(match, "nd_ns && nd.target == %s", + backend_nb->svc_mon_src_ip); + ds_put_format(actions, + "nd_na { " + "eth.dst = eth.src; " + "eth.src = %s; " + "ip6.src = %s; " + "nd.target = %s; " + "nd.tll = %s; " + "outport = inport; " + "flags.loopback = 1; " + "output; " + "};", + svc_monitor_mac, + backend_nb->svc_mon_src_ip, + backend_nb->svc_mon_src_ip, + svc_monitor_mac); + } ovn_lflow_add_with_hint(lflows, backend_nb->op->od, S_SWITCH_IN_ARP_ND_RSP, 110, diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml index 3d7a92ea8..2eab2c4ae 100644 --- a/northd/ovn-northd.8.xml +++ b/northd/ovn-northd.8.xml @@ -1469,6 +1469,23 @@ output; These flows are required if an ARP request is sent for the IP SVC_MON_SRC_IP.

+ +

+ For IPv6 the similar flow is added with the following action +

+ +
+nd_na {
+    eth.dst = eth.src;
+    eth.src = E;
+    ip6.src = A;
+    nd.target = A;
+    nd.tll = E;
+    outport = inport;
+    flags.loopback = 1;
+    output;
+};
+        
diff --git a/ovn-nb.xml b/ovn-nb.xml
index 4b52b9953..8d56d0c6e 100644
--- a/ovn-nb.xml
+++ b/ovn-nb.xml
@@ -1847,9 +1847,8 @@

    - OVN supports health checks for load balancer endpoints, for IPv4 load - balancers only. When health checks are enabled, the load balancer uses - only healthy endpoints. + OVN supports health checks for load balancer endpoints. When health + checks are enabled, the load balancer uses only healthy endpoints.

    @@ -1861,7 +1860,7 @@ column="health_check"/> a reference to a row whose is set to - 10.0.0.10. + 10.0.0.10. The same approach can be used for IPv6 as well.

    @@ -1872,8 +1871,10 @@

    Maps from endpoint IP to a colon-separated pair of logical port name and source IP, - e.g. port_name:sourc_ip. Health - checks are sent to this port with the specified source IP. + e.g. port_name:sourc_ip for IPv4. + Health checks are sent to this port with the specified source IP. + For IPv6 square brackets must be used around IP address, e.g: + port_name:[sourc_ip]

    @@ -1882,6 +1883,11 @@ 20.0.0.4=sw1-p1:20.0.0.2, if the values given were suitable ports and IP addresses.

    + +

    + For IPv6 IP to port mappings might be defined as + [2001::1]=sw0-p1:[2002::1]. +

    @@ -2055,8 +2061,7 @@ or

    - Each row represents one load balancer health check. Health checks - are supported for IPv4 load balancers only. + Each row represents one load balancer health check.

    diff --git a/tests/ovn.at b/tests/ovn.at index e9b8bc677..8854030a4 100644 --- a/tests/ovn.at +++ b/tests/ovn.at @@ -24243,7 +24243,7 @@ AT_CLEANUP ]) OVN_FOR_EACH_NORTHD([ -AT_SETUP([Load balancer health checks]) +AT_SETUP([Load balancer health checks - IPv4]) AT_KEYWORDS([lb]) ovn_start @@ -24441,6 +24441,205 @@ OVN_CLEANUP([hv1], [hv2]) AT_CLEANUP ]) +OVN_FOR_EACH_NORTHD([ +AT_SETUP([Load balancer health checks - IPv6]) +AT_KEYWORDS([lb]) +ovn_start + +net_add n1 + +sim_add hv1 +as hv1 +ovs-vsctl add-br br-phys +ovn_attach n1 br-phys 192.168.0.1 +check ovs-vsctl -- add-port br-int hv1-vif1 -- \ + set interface hv1-vif1 external-ids:iface-id=sw0-p1 \ + options:tx_pcap=hv1/vif1-tx.pcap \ + options:rxq_pcap=hv1/vif1-rx.pcap \ + ofport-request=1 +check ovs-vsctl -- add-port br-int hv1-vif2 -- \ + set interface hv1-vif2 external-ids:iface-id=sw0-p2 \ + options:tx_pcap=hv1/vif2-tx.pcap \ + options:rxq_pcap=hv1/vif2-rx.pcap \ + ofport-request=2 + +sim_add hv2 +as hv2 +check ovs-vsctl add-br br-phys +ovn_attach n1 br-phys 192.168.0.2 +check ovs-vsctl -- add-port br-int hv2-vif1 -- \ + set interface hv2-vif1 external-ids:iface-id=sw1-p1 \ + options:tx_pcap=hv2/vif1-tx.pcap \ + options:rxq_pcap=hv2/vif1-rx.pcap \ + ofport-request=1 + +check ovn-nbctl ls-add sw0 + +check ovn-nbctl lsp-add sw0 sw0-p1 +check ovn-nbctl lsp-set-addresses sw0-p1 "50:54:00:00:00:03 2001::3" +check ovn-nbctl lsp-set-port-security sw0-p1 "50:54:00:00:00:03 2001::3" + +# Create port group and ACLs for sw0 ports. +check ovn-nbctl pg-add pg0_drop sw0-p1 +check ovn-nbctl acl-add pg0_drop from-lport 1001 "inport == @pg0_drop && ip" drop +check ovn-nbctl acl-add pg0_drop to-lport 1001 "outport == @pg0_drop && ip" drop + +# Create the second logical switch with one port +check ovn-nbctl ls-add sw1 +check ovn-nbctl lsp-add sw1 sw1-p1 +check ovn-nbctl lsp-set-addresses sw1-p1 "40:54:00:00:00:03 2002::3" +check ovn-nbctl lsp-set-port-security sw1-p1 "40:54:00:00:00:03 2002::3" + +# Create port group and ACLs for sw1 ports. 
+check ovn-nbctl pg-add pg1_drop sw1-p1 +check ovn-nbctl acl-add pg1_drop from-lport 1001 "inport == @pg1_drop && ip" drop +check ovn-nbctl acl-add pg1_drop to-lport 1001 "outport == @pg1_drop && ip" drop + +check ovn-nbctl pg-add pg1 sw1-p1 +check ovn-nbctl acl-add pg1 from-lport 1002 "inport == @pg1 && ip6" allow-related +check ovn-nbctl acl-add pg1 to-lport 1002 "outport == @pg1 && ip6 && ip6.src == ::/0 && icmp6" allow-related +check ovn-nbctl acl-add pg1 to-lport 1002 "outport == @pg1 && ip6 && ip6.src == ::/0 && tcp && tcp.dst == 80" allow-related +check ovn-nbctl acl-add pg1 to-lport 1002 "outport == @pg1 && ip6 && ip6.src == ::/0 && udp && udp.dst == 80" allow-related + +# Create a logical router and attach both logical switches +check ovn-nbctl lr-add lr0 +check ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 2001::1/64 +check ovn-nbctl lsp-add sw0 sw0-lr0 +check ovn-nbctl lsp-set-type sw0-lr0 router +check ovn-nbctl lsp-set-addresses sw0-lr0 router +check ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0 + +check ovn-nbctl lrp-add lr0 lr0-sw1 00:00:00:00:ff:02 2001::a/64 +check ovn-nbctl lsp-add sw1 sw1-lr0 +check ovn-nbctl lsp-set-type sw1-lr0 router +check ovn-nbctl lsp-set-addresses sw1-lr0 router +check ovn-nbctl lsp-set-options sw1-lr0 router-port=lr0-sw1 + +check ovn-nbctl lb-add lb1 [[2001::a]]:80 [[2001::3]]:80,[[2002::3]]:80 +OVN_LB_ID=$(ovn-nbctl --bare --column _uuid find load_balancer name=lb1) +check ovn-nbctl set load_balancer ${OVN_LB_ID} selection_fields="ip_dst,ip_src,tp_dst,tp_src" +# +check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:\"[[2001::3]]\"=\"sw0-p1:[[2001::2]]\" +check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:\"[[2002::3]]\"=\"sw1-p1:[[2002::2]]\" + +AT_CHECK([ovn-nbctl --wait=sb \ + -- --id=@hc create Load_Balancer_Health_Check vip="\[\[2001\:\:a\]\]\:80" \ + options:failure_count=100 \ + -- add Load_Balancer . health_check @hc | uuidfilt], [0], [<0> +]) + +check ovn-nbctl --wait=sb ls-lb-add sw0 lb1 +check ovn-nbctl --wait=sb ls-lb-add sw1 lb1 +check ovn-nbctl --wait=sb lr-lb-add lr0 lb1 + +check ovn-nbctl ls-add public +check ovn-nbctl lrp-add lr0 lr0-public 00:00:20:20:12:13 2003::1/64 +check ovn-nbctl lsp-add public public-lr0 +check ovn-nbctl lsp-set-type public-lr0 router +check ovn-nbctl lsp-set-addresses public-lr0 router +check ovn-nbctl lsp-set-options public-lr0 router-port=lr0-public + +# localnet port +check ovn-nbctl lsp-add public ln-public +check ovn-nbctl lsp-set-type ln-public localnet +check ovn-nbctl lsp-set-addresses ln-public unknown +check ovn-nbctl lsp-set-options ln-public network_name=public + +# schedule the gw router port to a chassis. 
Change the name of the chassis +check ovn-nbctl --wait=hv lrp-set-gateway-chassis lr0-public hv1 20 + +OVN_POPULATE_ARP +wait_for_ports_up +check ovn-nbctl --wait=hv sync + +wait_row_count Service_Monitor 2 + +AT_CAPTURE_FILE([sbflows]) +OVS_WAIT_FOR_OUTPUT( + [ovn-sbctl dump-flows > sbflows + ovn-sbctl dump-flows sw0 | grep ct_lb_mark | grep priority=120 | sed 's/table=..//'], 0, + [dnl + (ls_in_pre_stateful ), priority=120 , match=(reg0[[2]] == 1 && ip6.dst == 2001::a && tcp.dst == 80), action=(xxreg1 = 2001::a; reg2[[0..15]] = 80; ct_lb_mark;) + (ls_in_lb ), priority=120 , match=(ct.new && ip6.dst == 2001::a && tcp.dst == 80), action=(reg0[[1]] = 0; ct_lb_mark(backends=[[2001::3]]:80,[[2002::3]]:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");) +]) + +AT_CAPTURE_FILE([sbflows2]) +OVS_WAIT_FOR_OUTPUT( + [ovn-sbctl dump-flows > sbflows2 + ovn-sbctl dump-flows lr0 | grep ct_lb_mark | grep priority=120 | sed 's/table=..//'], 0, + [ (lr_in_dnat ), priority=120 , match=(ct.new && !ct.rel && ip6 && xxreg0 == 2001::a && tcp && reg9[[16..31]] == 80 && is_chassis_resident("cr-lr0-public")), action=(ct_lb_mark(backends=[[2001::3]]:80,[[2002::3]]:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");) +]) + +# get the svc monitor mac. +svc_mon_src_mac=`ovn-nbctl get NB_Global . options:svc_monitor_mac | \ +sed s/":"//g | sed s/\"//g` + +OVS_WAIT_UNTIL( + [test 1 = `$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif1-tx.pcap | \ +grep "505400000003${svc_mon_src_mac}" | wc -l`] +) + +OVS_WAIT_UNTIL( + [test 1 = `$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv2/vif1-tx.pcap | \ +grep "405400000003${svc_mon_src_mac}" | wc -l`] +) + +check ovn-nbctl set load_balancer_health_check [[2001::a]]:80 options:failure_count=1 +wait_row_count Service_Monitor 2 status=offline + +OVS_WAIT_UNTIL( + [test 2 = `$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif1-tx.pcap | \ +grep "505400000003${svc_mon_src_mac}" | wc -l`] +) + +OVS_WAIT_UNTIL( + [test 2 = `$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv2/vif1-tx.pcap | \ +grep "405400000003${svc_mon_src_mac}" | wc -l`] +) + +AT_CAPTURE_FILE([sbflows3]) +ovn-sbctl dump-flows sw0 > sbflows3 +AT_CHECK( + [grep "ip6.dst == 2001::a && tcp.dst == 80" sbflows3 | grep priority=120 |\ + sed 's/table=../table=??/'], [0], [dnl + table=??(ls_in_pre_stateful ), priority=120 , match=(reg0[[2]] == 1 && ip6.dst == 2001::a && tcp.dst == 80), action=(xxreg1 = 2001::a; reg2[[0..15]] = 80; ct_lb_mark;) + table=??(ls_in_lb ), priority=120 , match=(ct.new && ip6.dst == 2001::a && tcp.dst == 80), action=(drop;) +]) + +AT_CAPTURE_FILE([sbflows4]) +ovn-sbctl dump-flows lr0 > sbflows4 +AT_CHECK([grep lr_in_dnat sbflows4 | grep priority=120 | sed 's/table=..//' | sort], [0], [dnl + (lr_in_dnat ), priority=120 , match=(ct.est && !ct.rel && ip6 && xxreg0 == 2001::a && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1 && is_chassis_resident("cr-lr0-public")), action=(next;) + (lr_in_dnat ), priority=120 , match=(ct.new && !ct.rel && ip6 && xxreg0 == 2001::a && tcp && reg9[[16..31]] == 80 && is_chassis_resident("cr-lr0-public")), action=(drop;) +]) + +# Delete sw0-p1 +check ovn-nbctl lsp-del sw0-p1 + +wait_row_count Service_Monitor 1 + +# Add back sw0-p1 but without any IP address. 
+check ovn-nbctl lsp-add sw0 sw0-p1 +check ovn-nbctl lsp-set-addresses sw0-p1 "50:54:00:00:00:03" -- \ + lsp-set-port-security sw0-p1 "50:54:00:00:00:03" + +wait_row_count Service_Monitor 2 status=offline + +check ovn-nbctl lsp-del sw0-p1 +check ovn-nbctl lsp-del sw1-p1 +wait_row_count Service_Monitor 0 + +# Add back sw0-p1 but without any address set. +check ovn-nbctl lsp-add sw0 sw0-p1 + +wait_row_count Service_Monitor 1 +wait_row_count Service_Monitor 0 status=offline +wait_row_count Service_Monitor 0 status=online + +OVN_CLEANUP([hv1], [hv2]) +AT_CLEANUP +]) + OVN_FOR_EACH_NORTHD([ AT_SETUP([SCTP Load balancer health checks]) AT_KEYWORDS([lb sctp]) diff --git a/tests/system-ovn.at b/tests/system-ovn.at index 2ece0f571..7fa899edd 100644 --- a/tests/system-ovn.at +++ b/tests/system-ovn.at @@ -4378,7 +4378,7 @@ AT_CLEANUP ]) OVN_FOR_EACH_NORTHD([ -AT_SETUP([Load balancer health checks]) +AT_SETUP([Load balancer health checks - IPv4]) AT_KEYWORDS([lb]) ovn_start @@ -4603,6 +4603,234 @@ OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d AT_CLEANUP ]) +OVN_FOR_EACH_NORTHD([ +AT_SETUP([Load balancer health checks - IPv6]) +AT_KEYWORDS([lb]) +ovn_start + +OVS_TRAFFIC_VSWITCHD_START() +ADD_BR([br-int]) + +# Set external-ids in br-int needed for ovn-controller +ovs-vsctl \ + -- set Open_vSwitch . external-ids:system-id=hv1 \ + -- set Open_vSwitch . external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \ + -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \ + -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \ + -- set bridge br-int fail-mode=secure other-config:disable-in-band=true + +# Start ovn-controller +start_daemon ovn-controller + +ovn-nbctl ls-add sw0 + +ovn-nbctl lsp-add sw0 sw0-p1 +ovn-nbctl lsp-set-addresses sw0-p1 "50:54:00:00:00:03 2001::3" +ovn-nbctl lsp-set-port-security sw0-p1 "50:54:00:00:00:03 2001::3" + +ovn-nbctl lsp-add sw0 sw0-p2 +ovn-nbctl lsp-set-addresses sw0-p2 "50:54:00:00:00:04 2001::4" +ovn-nbctl lsp-set-port-security sw0-p2 "50:54:00:00:00:04 2001::4" + +# Create port group and ACLs for sw0 ports. +ovn-nbctl pg-add pg0_drop sw0-p1 sw0-p2 +ovn-nbctl acl-add pg0_drop from-lport 1001 "inport == @pg0_drop && ip" drop +ovn-nbctl acl-add pg0_drop to-lport 1001 "outport == @pg0_drop && ip" drop + +ovn-nbctl pg-add pg0 sw0-p1 sw0-p2 +ovn-nbctl acl-add pg0 from-lport 1002 "inport == @pg0 && ip6" allow-related +ovn-nbctl acl-add pg0 to-lport 1002 "outport == @pg0 && ip6 && ip6.src == ::/0 && icmp6" allow-related +ovn-nbctl acl-add pg0 to-lport 1002 "outport == @pg0 && ip6 && ip6.src == ::/0 && tcp && tcp.dst == 80" allow-related +ovn-nbctl acl-add pg0 to-lport 1002 "outport == @pg0 && ip6 && ip6.src == ::/0 && udp && udp.dst == 80" allow-related + +# Create the second logical switch with one port +ovn-nbctl ls-add sw1 +ovn-nbctl lsp-add sw1 sw1-p1 +ovn-nbctl lsp-set-addresses sw1-p1 "40:54:00:00:00:03 2002::3" +ovn-nbctl lsp-set-port-security sw1-p1 "40:54:00:00:00:03 2002::3" + +# Create port group and ACLs for sw1 ports. 
+ovn-nbctl pg-add pg1_drop sw1-p1 +ovn-nbctl acl-add pg1_drop from-lport 1001 "inport == @pg1_drop && ip" drop +ovn-nbctl acl-add pg1_drop to-lport 1001 "outport == @pg1_drop && ip" drop + +ovn-nbctl pg-add pg1 sw1-p1 +ovn-nbctl acl-add pg1 from-lport 1002 "inport == @pg1 && ip6" allow-related +ovn-nbctl acl-add pg1 to-lport 1002 "outport == @pg1 && ip6 && ip6.src == ::/0 && icmp6" allow-related +ovn-nbctl acl-add pg1 to-lport 1002 "outport == @pg1 && ip6 && ip6.src == ::/0 && tcp && tcp.dst == 80" allow-related +ovn-nbctl acl-add pg1 to-lport 1002 "outport == @pg1 && ip6 && ip6.src == ::/0 && udp && udp.dst == 80" allow-related + +# Create a logical router and attach both logical switches +ovn-nbctl lr-add lr0 +ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 2001::1/64 +ovn-nbctl lsp-add sw0 sw0-lr0 +ovn-nbctl lsp-set-type sw0-lr0 router +ovn-nbctl lsp-set-addresses sw0-lr0 router +ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0 + +ovn-nbctl lrp-add lr0 lr0-sw1 00:00:00:00:ff:02 2002::1/64 +ovn-nbctl lsp-add sw1 sw1-lr0 +ovn-nbctl lsp-set-type sw1-lr0 router +ovn-nbctl lsp-set-addresses sw1-lr0 router +ovn-nbctl lsp-set-options sw1-lr0 router-port=lr0-sw1 + +ovn-nbctl --reject lb-add lb1 [[2001::a]]:80 [[2001::3]]:80,[[2002::3]]:80 + +check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:\"[[2001::3]]\"=\"sw0-p1:[[2001::2]]\" +check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:\"[[2002::3]]\"=\"sw1-p1:[[2002::2]]\" + +ovn-nbctl --wait=sb -- --id=@hc create \ +Load_Balancer_Health_Check vip="\[\[2001\:\:a\]\]\:80" -- add Load_Balancer . \ +health_check @hc + +ovn-nbctl --wait=sb ls-lb-add sw0 lb1 +ovn-nbctl --wait=sb ls-lb-add sw1 lb1 +ovn-nbctl --wait=sb lr-lb-add lr0 lb1 + +OVN_POPULATE_ARP +ovn-nbctl --wait=hv sync + +ADD_NAMESPACES(sw0-p1) +ADD_VETH(sw0-p1, sw0-p1, br-int, "2001::3/64", "50:54:00:00:00:03", \ + "2001::1") + +ADD_NAMESPACES(sw1-p1) +ADD_VETH(sw1-p1, sw1-p1, br-int, "2002::3/64", "40:54:00:00:00:03", \ + "2002::1") + +ADD_NAMESPACES(sw0-p2) +ADD_VETH(sw0-p2, sw0-p2, br-int, "2001::4/64", "50:54:00:00:00:04", \ + "2001::1") + +# Wait until all the services are set to offline. +OVS_WAIT_UNTIL([test 2 = `ovn-sbctl --bare --columns status find \ +service_monitor | sed '/^$/d' | grep offline | wc -l`]) + +# Start webservers in 'sw0-p1' and 'sw1-p1'. +OVS_START_L7([sw0-p1], [http6]) +sw0_p1_pid_file=$(cat l7_pid_file) +OVS_START_L7([sw1-p1], [http6]) + +# Wait until the services are set to online. +OVS_WAIT_UNTIL([test 2 = `ovn-sbctl --bare --columns status find \ +service_monitor | sed '/^$/d' | grep online | wc -l`]) + +OVS_WAIT_UNTIL( + [ovn-sbctl dump-flows sw0 | grep ct_lb_mark | grep priority=120 | grep "ip6.dst == 2001::a" > lflows.txt + test 1 = `cat lflows.txt | grep "ct_lb_mark(backends=[\[2001::3\]]:80,[\[2002::3\]]:80)" | wc -l`] +) + +# From sw0-p2 send traffic to vip - 2001::a +for i in `seq 1 20`; do + echo Request $i + ovn-sbctl list service_monitor + NS_CHECK_EXEC([sw0-p2], [wget http://[[2001::a]] -t 5 -T 1 --retry-connrefused -v -o wget$i.log]) +done + +dnl Each server should have at least one connection. 
+AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(2001::a) | grep -v fe80 | \ +sed -e 's/zone=[[0-9]]*/zone=/'], [0], [dnl +tcp,orig=(src=2001::4,dst=2001::a,sport=,dport=),reply=(src=2001::3,dst=2001::4,sport=,dport=),zone=,mark=2,protoinfo=(state=) +tcp,orig=(src=2001::4,dst=2001::a,sport=,dport=),reply=(src=2002::3,dst=2001::4,sport=,dport=),zone=,mark=2,protoinfo=(state=) +]) + +# Stop webserver in sw0-p1 +kill `cat $sw0_p1_pid_file` + +# Wait until service_monitor for sw0-p1 is set to offline +OVS_WAIT_UNTIL([test 1 = `ovn-sbctl --bare --columns status find \ +service_monitor logical_port=sw0-p1 | sed '/^$/d' | grep offline | wc -l`]) + +OVS_WAIT_UNTIL( + [ovn-sbctl dump-flows sw0 | grep ct_lb_mark | grep priority=120 | grep "ip6.dst == 2001::a" > lflows.txt + test 1 = `cat lflows.txt | grep "ct_lb_mark(backends=[\[2002::3\]]:80)" | wc -l`] +) + +ovs-appctl dpctl/flush-conntrack +# From sw0-p2 send traffic to vip - 2001::a +for i in `seq 1 20`; do + echo Request $i + NS_CHECK_EXEC([sw0-p2], [wget http://[[2001::a]] -t 5 -T 1 --retry-connrefused -v -o wget$i.log]) +done + +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(2001::a) | grep -v fe80 | \ +sed -e 's/zone=[[0-9]]*/zone=/'], [0], [dnl +tcp,orig=(src=2001::4,dst=2001::a,sport=,dport=),reply=(src=2002::3,dst=2001::4,sport=,dport=),zone=,mark=2,protoinfo=(state=) +]) + +# trigger port binding release and check if status changed to offline +ovs-vsctl remove interface ovs-sw1-p1 external_ids iface-id +wait_row_count Service_Monitor 2 +wait_row_count Service_Monitor 2 status=offline + +ovs-vsctl set interface ovs-sw1-p1 external_ids:iface-id=sw1-p1 +wait_row_count Service_Monitor 2 +wait_row_count Service_Monitor 1 status=online + +# Create udp load balancer. +#ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80,20.0.0.3:80 udp +#lb_udp=`ovn-nbctl lb-list | grep udp | awk '{print $1}'` +# +#echo "lb udp uuid = $lb_udp" +# +#ovn-nbctl list load_balancer +# +#ovn-nbctl --wait=sb set load_balancer $lb_udp ip_port_mappings:10.0.0.3=sw0-p1:10.0.0.2 +#ovn-nbctl --wait=sb set load_balancer $lb_udp ip_port_mappings:20.0.0.3=sw1-p1:20.0.0.2 +# +#ovn-nbctl --wait=sb -- --id=@hc create \ +#Load_Balancer_Health_Check vip="10.0.0.10\:80" -- add Load_Balancer $lb_udp \ +#health_check @hc +# +#ovn-nbctl --wait=sb ls-lb-add sw0 lb2 +#ovn-nbctl --wait=sb ls-lb-add sw1 lb2 +#ovn-nbctl --wait=sb lr-lb-add lr0 lb2 +# +#sleep 10 +# +#ovn-nbctl list load_balancer +#echo "*******Next is health check*******" +#ovn-nbctl list Load_Balancer_Health_Check +#echo "********************" +#ovn-sbctl list service_monitor +# +## Wait until udp service_monitor are set to offline +#OVS_WAIT_UNTIL([test 2 = `ovn-sbctl --bare --columns status find \ +#service_monitor protocol=udp | sed '/^$/d' | grep offline | wc -l`]) +# +## Stop webserver in sw1-p1 +#pid_file=$(cat l7_pid_file) +#NS_CHECK_EXEC([sw1-p1], [kill $(cat $pid_file)]) +# +#NS_CHECK_EXEC([sw0-p2], [tcpdump -c 1 -neei sw0-p2 ip[[33:1]]=0x14 > rst.pcap &]) +#OVS_WAIT_UNTIL([test 2 = `ovn-sbctl --bare --columns status find \ +#service_monitor protocol=tcp | sed '/^$/d' | grep offline | wc -l`]) +#NS_CHECK_EXEC([sw0-p2], [wget 10.0.0.10 -v -o wget$i.log],[4]) +# +#OVS_WAIT_UNTIL([ +# n_reset=$(cat rst.pcap | wc -l) +# test "${n_reset}" = "1" +#]) + +OVS_APP_EXIT_AND_WAIT([ovn-controller]) + +as ovn-sb +OVS_APP_EXIT_AND_WAIT([ovsdb-server]) + +as ovn-nb +OVS_APP_EXIT_AND_WAIT([ovsdb-server]) + +as northd +OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) + +as +OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d 
+/connection dropped.*/d +/Service monitor not found.*/d"]) + +AT_CLEANUP +]) + OVN_FOR_EACH_NORTHD([ AT_SETUP([Load Balancer LS hairpin IPv4]) AT_SKIP_IF([test $HAVE_NC = no])